WorldWideScience

Sample records for rapid estimation method

  1. A simple and rapid method to estimate radiocesium in man

    International Nuclear Information System (INIS)

    Kindl, P.; Steger, F.

    1990-09-01

    A simple and rapid method for monitoring internal contamination of radiocesium in man was developed. This method is based on measurements of the γ-rays emitted from the muscular parts between the thighs by a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity and results are explained and discussed. (Authors)

  2. Benchmarking electrical methods for rapid estimation of root biomass.

    Science.gov (United States)

    Postic, François; Doussan, Claude

    2016-01-01

    To face climate change and subsequent rainfall instabilities, crop breeding strategies now include root trait phenotyping. Rapid estimation of root traits in controlled conditions can be achieved by using parallel electrical capacitance and its linear correlation with root dry mass. The aim of the present study was to improve the robustness and efficiency of methods based on capacitance and other electrical variables, such as serial/parallel resistance, conductance, impedance or reactance. Using different electrode configurations and stem contact electrodes, we have measured the electrical impedance spectra of wheat plants grown in pots filled with three types of soil. For each configuration, parallel capacitance and other linearly independent electrical variables were computed and their quality as root dry mass estimators was evaluated by a 'sensitivity score' that we derived from Pearson's correlation coefficient r and linear regression parameters. The highest sensitivity score was obtained by parallel capacitance at an alternating current frequency of 116 Hz in a three-terminal configuration. Using a clamp, instead of a needle, as a stem electrode did not significantly affect the capacitance measurements. Finally, in handheld LCR meter equivalent conditions, capacitance had the highest sensitivity score and determination coefficient (r² = 0.52) at 10 kHz frequency. Our benchmarking of linear correlations between different electrical variables and root dry mass makes it possible to determine more coherent practices for ensuring a sensitive and robust root dry mass estimation, including in handheld LCR meter conditions. This would enhance the value of electrical capacitance as a tool for screening crops in relation to root systems in breeding programs.
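The 'sensitivity score' in this record is derived from Pearson's r and linear-regression parameters. The underlying correlation step can be sketched as follows; all numbers are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical measurements: parallel capacitance (nF) vs. root dry mass (g)
capacitance = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.8])
root_mass = np.array([0.10, 0.14, 0.21, 0.25, 0.33, 0.37])

# Pearson's correlation coefficient r between the electrical variable
# and the root dry mass it is meant to estimate
r = np.corrcoef(capacitance, root_mass)[0, 1]

# Linear regression: root_mass ~ slope * capacitance + intercept
slope, intercept = np.polyfit(capacitance, root_mass, 1)

print(f"r^2 = {r**2:.3f}, slope = {slope:.4f} g/nF")
```

The study's actual score also folds in the regression parameters; the exact weighting is not reproduced here.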

  3. A new rapid method for rockfall energies and distances estimation

    Science.gov (United States)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

    and distances at the base to block and slope features. The validation of the proposed approach was conducted by comparing predictions to experimental data collected in the field and gathered from the scientific literature. The method can be used for both natural and constructed slopes and easily extended to more complicated and articulated slope geometries. The study shows its great potential for a quick qualitative hazard assessment, providing an indication of the impact energy and horizontal distance of the first impact at the base of a rock cliff. Nevertheless, its application cannot substitute for a more detailed quantitative analysis required for site-specific design of mitigation measures. Acknowledgements The authors gratefully acknowledge the financial support of the Australian Coal Association Research Program (ACARP). References Dorren, L.K.A. (2003) A review of rockfall mechanics and modelling approaches, Progress in Physical Geography 27(1), 69-87. Agliardi, F., Crosta, G.B., Frattini, P. (2009) Integrating rockfall risk assessment and countermeasure design by 3D modelling techniques. Natural Hazards and Earth System Sciences 9(4), 1059-1073. Ferrari, F., Thoeni, K., Giacomini, A., Lambert, C. (2016) A rapid approach to estimate the rockfall energies and distances at the base of rock cliffs. Georisk, DOI: 10.1080/17499518.2016.1139729.

  4. A rapid method to estimate Westergren sedimentation rates.

    Science.gov (United States)

    Alexy, Tamas; Pais, Eszter; Meiselman, Herbert J

    2009-09-01

    The erythrocyte sedimentation rate (ESR) is a nonspecific but simple and inexpensive test that was introduced into medical practice in 1897. Although it is commonly utilized in the diagnosis and follow-up of various clinical conditions, ESR has several limitations, including the required 60 min settling time for the test. Herein we introduce a novel use for a commercially available computerized tube viscometer that allows the accurate prediction of human Westergren ESR values in as little as 4 min. Owing to an initial pressure gradient, blood moves between two vertical tubes through a horizontal small-bore tube and the top of the red blood cell (RBC) column in each vertical tube is monitored continuously with an accuracy of 0.083 mm. Using data from the final minute of a blood viscosity measurement, a sedimentation index (SI) was calculated and correlated with results from the conventional Westergren ESR test. To date, samples from 119 human subjects have been studied and our results indicate a strong correlation between SI and ESR values (R² = 0.92). In addition, we found a close association between SI and RBC aggregation indices as determined by an automated RBC aggregometer (R² = 0.71). Determining SI on human blood is rapid, requires no special training and has minimal biohazard risk, thus allowing physicians to rapidly screen for individuals with elevated ESR and to monitor therapeutic responses.

  5. Are rapid population estimates accurate? A field trial of two different assessment methods.

    Science.gov (United States)

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although the results are applicable only to similar sites, several general conclusions can be drawn for emergency planning.
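The Quadrat method described above extrapolates the mean population of surveyed blocks to the whole site surface. A minimal sketch of that arithmetic, with all numbers invented for illustration:

```python
# Quadrat method sketch (hypothetical survey): average the population
# counted in square blocks of known area, then scale to the site surface.
block_area_m2 = 25.0 * 25.0            # each surveyed quadrat is 25 m x 25 m (assumed)
block_counts = [38, 45, 29, 52, 41]    # people counted in each quadrat (assumed)
site_area_m2 = 150_000.0               # total site surface (assumed)

# mean density (people per square metre) across the surveyed quadrats
mean_density = (sum(block_counts) / len(block_counts)) / block_area_m2

# extrapolate density to the whole site
population_estimate = mean_density * site_area_m2
print(round(population_estimate))  # prints 9840
```

The T-Square method instead works from distances between randomly chosen points and their nearest housing units, so it needs no block delineation, which is part of why it proved harder for field teams to implement.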

  6. A rapid radiobioassay method for strontium estimation in nuclear/radiological emergencies

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Sawant, Pramilla D.; Rao, D.D.; Pradeepkumar, K.S.

    2014-01-01

    During a nuclear/radiological emergency, workers as well as members of the public (MOP) may get internally contaminated with radionuclides such as Sr and Cs. In such situations, a truly rapid radiobioassay method is required to screen a large number of people in order to assess internal contamination and also to decide on subsequent medical intervention. The precipitation method currently used at the Bioassay Lab., Trombay is quite lengthy and laborious. Efforts are being made to optimize bioassay methods at Bhabha Atomic Research Centre using the Solid Extraction Chromatography (SEC) technique for emergency response. The present work reports the standardization of the SEC technique for rapid estimation of Sr in urine samples. The method standardized using Sr Spec is simpler and shorter, results in higher recoveries and gives reproducible results. It is most suitable for quick dose assessment of 90Sr in bioassay samples in case of an emergency.

  7. A rapid reliability estimation method for directed acyclic lifeline networks with statistically dependent components

    International Nuclear Information System (INIS)

    Kang, Won-Hee; Kliese, Alyce

    2014-01-01

    Lifeline networks, such as transportation, water supply, sewers, telecommunications, and electrical and gas networks, are essential elements for the economic and societal functions of urban areas, but their components are highly susceptible to natural or man-made hazards. In this context, it is essential to provide effective pre-disaster hazard mitigation strategies and prompt post-disaster risk management efforts based on rapid system reliability assessment. This paper proposes a rapid reliability estimation method for node-pair connectivity analysis of lifeline networks, especially when the network components are statistically correlated. Recursive procedures are proposed to compound all network nodes until they become a single super node representing the connectivity between the origin and destination nodes. The proposed method is applied to numerical network examples and benchmark interconnected power and water networks in Memphis, Shelby County. The connectivity analysis results show the proposed method's reasonable accuracy and remarkable efficiency as compared to Monte Carlo simulations.
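The paper's recursive node-compounding procedure is not reproduced here, but the Monte Carlo baseline it is benchmarked against can be sketched for a toy origin-destination network. This sketch assumes independent component failures for simplicity; the paper's contribution is precisely the statistically correlated case, which this baseline does not cover:

```python
import random

# Toy network: each edge survives a hazard with the given probability (assumed).
# Two parallel paths from origin "O" to destination "D": O-A-D and O-B-D.
edges = {("O", "A"): 0.9, ("A", "D"): 0.9, ("O", "B"): 0.8, ("B", "D"): 0.85}

def connected(up_edges, src="O", dst="D"):
    # graph search over the surviving edges only
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for (u, v) in up_edges:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

def mc_reliability(trials=20000, seed=1):
    # sample component states and count origin-destination connectivity
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        up = [e for e, p in edges.items() if random.random() < p]
        hits += connected(up)
    return hits / trials

# Exact value for this toy case: 1 - (1 - 0.9*0.9) * (1 - 0.8*0.85) = 0.9392
print(f"estimated O-D reliability ~ {mc_reliability():.3f}")
```

The cost of such sampling grows with the required precision, which is why a recursive compounding scheme that collapses the network into a single super node can be dramatically faster for repeated post-disaster assessments.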

  8. A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.

    Science.gov (United States)

    Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R

    2017-07-01

    The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Excreta analysis results are often required urgently, particularly under repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper gives the details of standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin, followed by alpha spectrometry. The rapid method involves oxidation of urine samples and co-precipitation of plutonium along with calcium phosphate, followed by sample preparation using a high volume centrifuge and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and the activity estimated by alpha spectrometry using 236Pu tracer recovery. Ten routine urine samples of radiation workers were analyzed and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
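The method's core arithmetic, a structure count multiplied by a mean structure occupancy, can be sketched as follows (all figures invented for illustration, not from the study's sites):

```python
# Imagery-based population estimate sketch: manual structure count from a
# satellite image times mean people-per-structure from occupancy reports,
# with a crude range from the spread of the reports. Numbers are assumed.
structure_count = 1870                  # residential structures counted on the image
occupancy_reports = [4.6, 5.1, 4.8]     # people per structure, from public reports

mean_occ = sum(occupancy_reports) / len(occupancy_reports)
estimate = structure_count * mean_occ

# a simple range from the lowest and highest occupancy reports
low = structure_count * min(occupancy_reports)
high = structure_count * max(occupancy_reports)
print(f"population ~ {estimate:.0f} (range {low:.0f}-{high:.0f})")
```

As the abstract notes, most available reports give people per household rather than per structure, so the occupancy term is often the weaker of the two factors.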

  10. Rapid bioassay method for estimation of 90Sr in urine samples by liquid scintillation counting

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Chaudhary, Seema; Sawant, Pramilla D.

    2018-01-01

    Radiostrontium (Sr) is a by-product of the nuclear fission of uranium and plutonium in nuclear reactors and is an important radionuclide in spent nuclear fuel and radioactive waste. Rapid bioassay methods are required for estimating Sr in urine following internal contamination. Decisions regarding medical intervention, if any, can be based upon the results of urinalysis. The method currently used at the Bioassay Laboratory, Trombay is based on the Solid Extraction Chromatography (SEC) technique. The Sr separated from the urine sample is precipitated as SrCO3 and analyzed gravimetrically. However, the gravimetric procedure is time consuming and therefore, in the present study, the feasibility of liquid scintillation counting for direct detection of radiostrontium in the effluent was explored. The results obtained in the present study were compared with those obtained using the gravimetric method.

  11. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gilani, Syed Irtiza Ali

    2008-09-15

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from the data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite improvement in acquisition speed, through line-sharing, for example, motion remains a problem and artefact-free data cannot always be obtained. Therefore, in this thesis, a rapid technique is introduced to estimate in-plane motion. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly equispaced points. The essence of the method is that unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method. However, instead of two-dimensional analysis of the phase correlation matrix, a low complexity subspace identification of the phase
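The phase-correlation step for translation estimation can be sketched in its standard form; note the thesis replaces the full two-dimensional analysis with a low-complexity subspace identification, which is not reproduced here:

```python
import numpy as np

# Standard phase correlation: the normalized cross-power spectrum of two
# images differing by a shift is a pure phase ramp, whose inverse FFT is a
# delta at the shift.
def phase_correlation(a, b):
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx

# Synthetic check: shift a random image and recover the applied shift
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 9), axis=(0, 1))
print(phase_correlation(shifted, img))  # recovers the (5, 9) shift
```

For real MR data the peak is broadened by noise and sub-pixel motion, which motivates the more robust subspace identification used in the thesis.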

  12. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Gilani, Syed Irtiza Ali

    2008-09-01

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from the data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite improvement in acquisition speed, through line-sharing, for example, motion remains a problem and artefact-free data cannot always be obtained. Therefore, in this thesis, a rapid technique is introduced to estimate in-plane motion. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly equispaced points. The essence of the method is that unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method. However, instead of two-dimensional analysis of the phase correlation matrix, a low complexity subspace identification of the phase

  13. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    Science.gov (United States)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.

  14. A rapid method to estimate uranium using ionic liquid as extracting agent from basic aqueous media

    International Nuclear Information System (INIS)

    Prabhath Ravi, K.; Sathyapriya, R.S.; Rao, D.D.; Ghosh, S.K.

    2016-01-01

    Room temperature ionic liquids, as their name suggests, are salts with a low melting point, typically less than 100 °C, that exist as liquids at room temperature. The common cationic parts of ionic liquids are imidazolium, pyridinium, pyrrolidinium, quaternary ammonium, or phosphonium ions, and common anionic parts are chloride, bromide, tetrafluoroborate, hexafluorophosphate, triflimide etc. The physical properties of ionic liquids can be tuned by choosing appropriate cations with differing alkyl chain lengths and anions. Applications of ionic liquids in organic synthesis, liquid-liquid extractions, electrochemistry, catalysis, speciation studies and nuclear reprocessing are being studied extensively in recent times. In this paper a rapid method to estimate the uranium content in aqueous media by extraction with the room temperature ionic liquid tricaprylammoniumthiosalicylate ((A-336)(TS)) followed by liquid scintillation analysis is described. Re-extraction of uranium from the ionic liquid phase to the aqueous phase was also studied.

  15. A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.

    Science.gov (United States)

    Kamat, D R; Savant, V V; Sathyanarayana, D N

    1995-03-01

    A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate, followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.

  16. Rapid, Simple, and Sensitive Spectrofluorimetric Method for the Estimation of Ganciclovir in Bulk and Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    Garima Balwani

    2013-01-01

    A new, simple, rapid, sensitive, accurate, and affordable spectrofluorimetric method was developed and validated for the estimation of ganciclovir in bulk as well as in marketed formulations. The method was based on measuring the native fluorescence of ganciclovir in 0.2 M hydrochloric acid buffer of pH 1.2 at 374 nm after excitation at 257 nm. The calibration graph was found to be rectilinear in the concentration range of 0.25–2.00 μg mL−1. The limit of quantification and limit of detection were found to be 0.029 μg mL−1 and 0.010 μg mL−1, respectively. The method was fully validated for various parameters according to ICH guidelines. The results demonstrated that the procedure is accurate, precise, and reproducible (relative standard deviation <2%) and can be successfully applied for the determination of ganciclovir in its commercial capsules with an average percentage recovery of 101.31 ± 0.90.
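The detection and quantification limits quoted above are conventionally computed per ICH Q2 as LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope. A sketch with synthetic calibration data (not the study's measurements):

```python
import numpy as np

# Synthetic calibration data: concentration (ug/mL) vs. fluorescence signal
conc = np.array([0.25, 0.5, 1.0, 1.5, 2.0])              # assumed points
signal = np.array([102.0, 205.0, 398.0, 601.0, 799.0])   # arbitrary units

# Fit the calibration line and get the residual standard deviation
slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)   # ddof=2: two fitted parameters

# ICH-style limits from sigma and slope
lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```

By construction LOQ is about three times LOD, consistent with the ratio of the reported 0.029 and 0.010 μg mL−1 values.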

  17. A rapid method for the separation and estimation of uranium in geological materials using ion chromatography

    International Nuclear Information System (INIS)

    Prakash, Satya; Bangroo, P.N.

    2013-01-01

    Ion chromatography is an elegant analytical technique that was primarily developed for the analysis of anionic species; over the years it has been used successfully to analyse various elements in different matrices. In this work the potential of ion chromatography has been used for the rapid separation and estimation of uranium in hydrogeochemical and other geological materials.

  18. Rapid estimation of compost enzymatic activity by spectral analysis method combined with machine learning.

    Science.gov (United States)

    Chakraborty, Somsubhra; Das, Bhabani S; Ali, Md Nasim; Li, Bin; Sarathjith, M C; Majumdar, K; Ray, D P

    2014-03-01

    The aim of this study was to investigate the feasibility of using visible near-infrared (VisNIR) diffuse reflectance spectroscopy (DRS) as an easy, inexpensive, and rapid method to predict compost enzymatic activity, which is traditionally measured by the fluorescein diacetate hydrolysis (FDA-HR) assay. Compost samples representative of five different compost facilities were scanned by DRS, and the raw reflectance spectra were preprocessed using seven spectral transformations for predicting compost FDA-HR with six multivariate algorithms. Although principal component analysis for all spectral pretreatments satisfactorily identified the clusters by compost types, it could not separate different FDA contents. Furthermore, the artificial neural network multilayer perceptron (residual prediction deviation = 3.2, validation r² = 0.91 and RMSE = 13.38 μg g−1 h−1) outperformed the other multivariate models in capturing the highly non-linear relationships between compost enzymatic activity and VisNIR reflectance spectra after Savitzky-Golay first derivative pretreatment. This work demonstrates the efficiency of VisNIR DRS for predicting compost enzymatic as well as microbial activity. Copyright © 2013 Elsevier Ltd. All rights reserved.
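The Savitzky-Golay first-derivative pretreatment mentioned above can be sketched as follows; the spectrum is synthetic, and the window length and polynomial order are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic VisNIR reflectance spectrum over 400-2500 nm
wavelengths = np.linspace(400, 2500, 500)
spectrum = 0.4 + 0.2 * np.sin(wavelengths / 300.0)

# Savitzky-Golay first derivative: fit a local polynomial in a sliding
# window and take its derivative, smoothing noise while sharpening
# absorption features before multivariate modelling.
deriv = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=1)
print(deriv.shape)  # same length as the input spectrum
```

The derivative spectrum, rather than the raw reflectance, is then fed to the multivariate model (here, the multilayer perceptron that performed best).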

  19. Rapid Methods to Estimate Potential Exposure to Semivolatile Organic Compounds in the Indoor Environment

    DEFF Research Database (Denmark)

    Little, John C.; Weschler, Charles J.; Nazaroff, William W

    2012-01-01

    A systematic and efficient strategy is needed to assess and manage potential risks to human health that arise from the manufacture and use of thousands of chemicals. Among available tools for rapid assessment of large numbers of chemicals, significant gaps are associated with the capability...

  20. A rapid alpha spectrometric method for estimation of 233U in bulk of thorium

    International Nuclear Information System (INIS)

    Rao, K.S.; Sankar, R.; Dhami, P.S.; Tripathi, S.C.; Gandhi, P.M.

    2015-01-01

    Analytical methods play an important role in the entire nuclear fuel cycle. Almost all the methods find application in some way or the other in the nuclear industry. Methods which cannot be used directly owing to selectivity limitations find application after chemical separation of the analyte from interfering components. The analytical techniques used in the PUREX process are well matured, whereas in the THOREX process the analytical techniques are constantly evolving with regard to simplicity, accuracy and time of analysis.

  1. Development of a new, rapid and sensitive HPTLC method for estimation of Milnacipran in bulk, formulation and compatibility study

    Directory of Open Access Journals (Sweden)

    Gautam Singhvi

    2017-05-01

    A simple, sensitive and rapid high performance thin layer chromatographic (HPTLC) method has been developed and validated for the quantitative determination of Milnacipran Hydrochloride (MIL) in bulk and formulations. The chromatographic development was carried out on HPTLC plates precoated with silica gel 60 F254, using a mixture of acetonitrile, water and ammonia (6:0.6:1.6, v/v/v) as mobile phase. Detection was carried out densitometrically at 220 nm. The Rf value of the drug was found to be 0.63 ± 0.02. The method was validated as per ICH guidelines with respect to linearity, accuracy, precision, robustness etc. The calibration curve was found to be linear over a range of 100–1000 ng μL−1 with a regression coefficient of 0.999. The accuracy was found to be very high (99.12–100.87%). The %RSD values for intra-day and inter-day variation were not more than 1.43. The method demonstrated high sensitivity and specificity and was also applied to compatibility studies. The method is new, simple and economical for routine estimation of MIL in bulk, preformulation studies and pharmaceutical formulations, enabling industry as well as researchers to determine MIL sensitively, rapidly and at low cost in routine analysis.

  2. Bed Evolution under Rapidly Varying Flows by a New Method for Wave Speed Estimation

    Directory of Open Access Journals (Sweden)

    Khawar Rehman

    2016-05-01

    This paper proposes a sediment-transport model based on coupled Saint-Venant and Exner equations. A finite volume method of Godunov type with predictor-corrector steps is used to solve a set of coupled equations. An efficient combination of approximate Riemann solvers is proposed to compute fluxes associated with sediment-laden flow. In addition, a new method is proposed for computing the water depth and velocity values along the shear wave. This method ensures smooth solutions, even for flows with high discontinuities, and on domains with highly distorted grids. The numerical model is tested for channel aggradation on a sloping bottom, dam-break cases at flume-scale and reach-scale with flat bottom configurations and varying downstream water depths. The proposed model is tested for predicting the position of hydraulic jump, wave front propagation, and for predicting magnitude of bed erosion. The comparison between results based on the proposed scheme and analytical, experimental, and published numerical results shows good agreement. Sensitivity analysis shows that the model is computationally efficient and virtually independent of mesh refinement.
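The coupled system the abstract refers to can be written, in one dimension and in a commonly used form (the paper's exact formulation, including its friction closure and sediment flux law, may differ), as the Saint-Venant equations for mass and momentum plus the Exner equation for bed evolution:

```latex
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0, \qquad
\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2}gh^2\right)
  = -gh\,\frac{\partial z_b}{\partial x} - ghS_f, \qquad
(1-p)\,\frac{\partial z_b}{\partial t} + \frac{\partial q_s}{\partial x} = 0
```

where h is the flow depth, u the depth-averaged velocity, z_b the bed elevation, S_f the friction slope, p the bed porosity, and q_s the volumetric sediment transport rate. The shear-wave treatment described in the abstract concerns the intermediate wave of the Riemann problem for this coupled system.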

  3. Rapid validated HPTLC method for estimation of piperine and piperlongumine in root of Piper longum extract and its commercial formulation

    Directory of Open Access Journals (Sweden)

    Anagha A. Rajopadhye

    2012-12-01

    Piperine and piperlongumine, alkaloids having diverse biological activities, commonly occur in roots of Piper longum L., Piperaceae, which have high commercial, economic and medicinal value. In the present study, a rapid, validated HPTLC method has been established for the determination of piperine and piperlongumine in a methanolic root extract and its commercial formulation 'Mahasudarshan churna®' following ICH guidelines. The use of Accelerated Solvent Extraction (ASE) as an alternative to conventional techniques has been explored. The methanol extracts of the root, its formulation and both standard solutions were applied on silica gel F254 HPTLC plates. The plates were developed in a twin chamber using mobile phase toluene:ethyl acetate (6:4, v/v) and scanned at 342 and 325 nm (λmax of piperine and piperlongumine, respectively) using a Camag TLC scanner 3 with CATS 4 software. A linear relationship was obtained between response (peak area) and amount of piperine and piperlongumine in the ranges of 20-100 and 30-150 ng/spot, respectively; the correlation coefficients were 0.9957 and 0.9941, respectively. Sharp, symmetrical and well resolved peaks of the piperine and piperlongumine spots appeared at Rf 0.51 and 0.74, respectively, well separated from other components of the sample extracts. The HPTLC method showed good linearity, recovery and high precision for both markers. Extraction of the plant using ASE combined with the rapid HPTLC method provides a new and powerful approach to estimating piperine and piperlongumine as phytomarkers in the extract as well as in its commercial formulations for routine quality control.

  4. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    International Nuclear Information System (INIS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-01-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grid, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by 8 times, additionally. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal
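The interpolation idea can be sketched with inverse-distance weighting of Gaussian coefficients. The helper below is an assumption-laden illustration: the coefficient layout, the distance exponent, and all numeric values are hypothetical, not the paper's exact DW-GIMGPE formulation.

```python
import numpy as np

# PRFs measured at a sparse set of voxels are parameterized as 2D Gaussians;
# the coefficients of a missing voxel are interpolated from measured voxels
# with weights that decay with the distance between projected centroids on
# the detector plane.

def idw_interpolate(coeffs, centroids, query, power=2.0, eps=1e-12):
    """coeffs: (n, k) Gaussian parameters at measured voxels.
    centroids: (n, 2) projected centroids on the detector plane.
    query: (2,) projected centroid of the missing voxel."""
    d = np.linalg.norm(centroids - query, axis=1)
    w = 1.0 / (d ** power + eps)       # distance-weighting factors
    w /= w.sum()
    return w @ coeffs

# Measured voxels: [amplitude, sigma_x, sigma_y] at four detector centroids
coeffs = np.array([[1.0, 2.0, 2.0],
                   [0.8, 2.4, 2.2],
                   [0.9, 2.2, 2.1],
                   [0.7, 2.6, 2.3]])
centroids = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
interp = idw_interpolate(coeffs, centroids, query=np.array([5.0, 5.0]))
```

For a query point equidistant from all measured voxels, the weights are equal and the interpolated coefficients reduce to the plain average, a convenient sanity check.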

  5. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    Science.gov (United States)

    2017-12-01

    Fig. 2 Simulation method; the process for one iteration of the simulation. It was repeated 250 times per combination of HR and FAR. Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.

  6. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

    Full Text Available In recent years, the fractional order model has been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of charge distribution; therefore the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC after the battery's internal electrochemical reactions reach equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.

  7. Rapid flow imaging method

    International Nuclear Information System (INIS)

    Pelc, N.J.; Spritzer, C.E.; Lee, J.N.

    1988-01-01

    A rapid, phase-contrast, MR imaging method of imaging flow has been implemented. The method, called VIGRE (velocity imaging with gradient recalled echoes), consists of two interleaved, narrow flip angle, gradient-recalled acquisitions. One is flow compensated while the second has a specified flow encoding (both peak velocity and direction) that causes signals to contain additional phase in proportion to velocity in the specified direction. Complex image data from the first acquisition are used as a phase reference for the second, yielding immunity from phase accumulation due to causes other than motion. Images with pixel values equal to MΔΘ where M is the magnitude of the flow compensated image and ΔΘ is the phase difference at the pixel, are produced. The magnitude weighting provides additional vessel contrast, suppresses background noise, maintains the flow direction information, and still allows quantitative data to be retrieved. The method has been validated with phantoms and is undergoing initial clinical evaluation. Early results are extremely encouraging
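The phase-difference computation described above, in which complex data from the flow-compensated acquisition serve as the phase reference for the flow-encoded one, can be sketched with synthetic data; the array names and phase values are illustrative assumptions.

```python
import numpy as np

# Two interleaved complex acquisitions: s1 is flow compensated, s2 carries
# additional phase proportional to velocity. Referencing s2 against s1
# cancels background phase, leaving only the velocity-induced phase, and
# the magnitude weighting gives the described M*ΔΘ pixel values.

rng = np.random.default_rng(0)
shape = (8, 8)
background_phase = rng.uniform(-np.pi, np.pi, shape)   # field inhomogeneity etc.
flow_phase = np.zeros(shape)
flow_phase[3:5, :] = 0.5                               # a synthetic "vessel" with flow

magnitude = np.ones(shape)
s1 = magnitude * np.exp(1j * background_phase)                 # flow compensated
s2 = magnitude * np.exp(1j * (background_phase + flow_phase))  # flow encoded

delta_theta = np.angle(s2 * np.conj(s1))   # background phase cancels
velocity_image = np.abs(s1) * delta_theta  # M * ΔΘ, as in the abstract
```

Static pixels come out zero while the "vessel" pixels retain their signed velocity-proportional phase, which is why the method keeps flow direction information while suppressing background.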

  8. Measurement of 90Sr radioactivity in a rapid method of strontium estimation by solvent extraction with dicarbollides

    International Nuclear Information System (INIS)

    Svoboda, K.; Kyrs, M.

    1994-01-01

    The application of liquid scintillation counting to the measurement of 90Sr radioactivity was studied, using a previously published rapid method of strontium separation based on solvent extraction with a solution of cobalt dicarbollide and Slovafol 909 in a nitrobenzene-carbon tetrachloride mixture and subsequent stripping of strontium with a 0.15 M Chelaton IV (CDTA) solution at pH 10.2. With liquid scintillation counting, a more efficient elimination of the effect of 90Y β-activity on 90Sr counting is possible than when measuring the evaporated aliquot with the use of a solid scintillator. The adverse effect of traces of dicarbollide, nitrobenzene, and CCl4 passed over into the aqueous 90Sr solution prepared for counting is caused by the (poorly reproducible) shift of the 90Sr + 90Y β-radiation spectral curve towards lower energies, the so-called quenching. The shift is independent of the aqueous phase concentration of the organic compounds mentioned. They can be removed by shaking the aqueous reextract with an equal volume of octanol or amyl acetate so that the undesirable spectral shift does not occur. No loss of strontium was found in this washing procedure. (author) 2 tabs., 6 figs., 5 refs

  9. A rapid assessment method to estimate the distribution of juvenile Chinook Salmon in tributary habitats using eDNA and occupancy estimation

    Science.gov (United States)

    Matter, A.; Falke, Jeffrey A.; López, J. Andres; Savereide, James W.

    2018-01-01

    Identification and protection of water bodies used by anadromous species are critical in light of increasing threats to fish populations, yet often challenging given budgetary and logistical limitations. Noninvasive, rapid‐assessment, sampling techniques may reduce costs and effort while increasing species detection efficiencies. We used an intrinsic potential (IP) habitat model to identify high‐quality rearing habitats for Chinook Salmon Oncorhynchus tshawytscha and select sites to sample throughout the Chena River basin, Alaska, for juvenile occupancy using an environmental DNA (eDNA) approach. Water samples were collected from 75 tributary sites in 2014 and 2015. The presence of Chinook Salmon DNA in water samples was assessed using a species‐specific quantitative PCR (qPCR) assay. The IP model predicted over 900 stream kilometers in the basin to support high‐quality (IP ≥ 0.75) rearing habitat. Occupancy estimation based on eDNA samples indicated that 80% and 56% of previously unsampled sites classified as high or low IP (IP Salmon DNA from three replicate water samples was high (p = 0.76) but varied with drainage area (km2). A power analysis indicated high power to detect proportional changes in occupancy based on parameter values estimated from eDNA occupancy models, although power curves were not symmetrical around zero, indicating greater power to detect positive than negative proportional changes in occupancy. Overall, the combination of IP habitat modeling and occupancy estimation provided a useful, rapid‐assessment method to predict and subsequently quantify the distribution of juvenile salmon in previously unsampled tributary habitats. Additionally, these methods are flexible and can be modified for application to other species and in other locations, which may contribute towards improved population monitoring and management.
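A small worked example relates per-replicate and cumulative detection probabilities in replicate eDNA sampling. Reading the reported p = 0.76 as the probability of detection from three replicate samples (an assumption about the abstract's wording), the implied per-replicate probability follows from the complement rule.

```python
# If each replicate independently detects DNA with probability q, then the
# chance of at least one detection in k replicates is 1 - (1 - q)**k.
# Inverting this for the reported three-replicate probability of 0.76:

k = 3                                   # replicate water samples per site
p_three_replicates = 0.76               # reported cumulative detection probability
p_per_replicate = 1 - (1 - p_three_replicates) ** (1 / k)

# Sanity check: recombining the per-replicate value recovers the cumulative one
p_recovered = 1 - (1 - p_per_replicate) ** k
```

Under this reading, each individual water sample detects present DNA only about 38% of the time, which is exactly why the protocol collects replicates.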

  10. Rapid prototyping: a very promising method

    NARCIS (Netherlands)

    Haverman, T.M.; Karagozoglu, K.H.; Prins, H.; Schulten, E.A.J.M.; Forouzanfar, T.

    2013-01-01

    Rapid prototyping is a method which makes it possible to produce a three-dimensional model based on two-dimensional imaging. Various rapid prototyping methods are available for modelling, such as stereolithography, selective laser sintering, direct laser metal sintering, two-photon polymerization, laminated object manufacturing, three-dimensional printing, three-dimensional plotting, polyjet inkjet technology, fused deposition modelling, vacuum casting and milling.

  11. Automated Method for the Rapid and Precise Estimation of Adherent Cell Culture Characteristics from Phase Contrast Microscopy Images

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-01-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times per image. Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image-processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy-to-use graphical user interface. Source code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
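The local contrast thresholding step can be illustrated in a few lines: a pixel is labelled cellular when the intensity spread in its neighbourhood exceeds a threshold, exploiting the strong local contrast of cells in PCM against a flat background. The kernel size, threshold, and toy image below are assumptions, not the published algorithm's parameters.

```python
import numpy as np

# Local standard deviation over a size x size neighbourhood (periodic
# boundaries via np.roll, for brevity); pixels with high local contrast
# are marked as cellular objects.

def local_std(img, size=5):
    r = size // 2
    shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    return np.stack(shifts).std(axis=0)

def local_contrast_mask(img, size=5, thresh=0.05):
    return local_std(img, size) > thresh

img = np.zeros((32, 32))                                   # flat background
img[10:20, 10:20] = np.random.default_rng(1).uniform(0, 1, (10, 10))  # textured "cell"
mask = local_contrast_mask(img)
confluency = mask.mean()   # fraction of the field covered by cellular objects
```

Confluency then falls out of the segmentation for free as the fraction of masked pixels, which is one of the culture characteristics the toolbox reports.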

  12. Development and Validation of a Rapid RP-UPLC Method for the Simultaneous Estimation of Bambuterol Hydrochloride and Montelukast Sodium from Tablets.

    Science.gov (United States)

    Yanamandra, R; Vadla, C S; Puppala, U M; Patro, B; Murthy, Y L N; Parimi, A R

    2012-03-01

    A rapid, simple, sensitive and selective analytical method was developed using a reverse-phase ultra performance liquid chromatographic technique for the simultaneous estimation of bambuterol hydrochloride and montelukast sodium in combined tablet dosage form. The developed method is superior to conventional high performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Elution time for the separation was 6 min and ultraviolet detection was carried out at 210 nm. Efficient separation was achieved on a BEH C18 sub-2-μm Acquity UPLC column using 0.025% (v/v) trifluoroacetic acid in water and acetonitrile as organic solvent in a linear gradient program. Resolution between bambuterol hydrochloride and montelukast sodium was found to be more than 31. The active pharmaceutical ingredients were extracted from the tablet dosage form using a mixture of methanol, acetonitrile and water as diluent. The calibration graphs were linear for bambuterol hydrochloride and montelukast sodium in the range of 6.25-37.5 μg/ml. The percentage recoveries for bambuterol hydrochloride and montelukast sodium were found to be in the range of 99.1-100.0% and 98.0-101.6%, respectively. The test solution was found to be stable for 7 days when stored in the refrigerator between 2-8°. The developed UPLC method was validated as per International Conference on Harmonization specifications for method validation. This method can be successfully employed for simultaneous estimation of bambuterol hydrochloride and montelukast sodium in bulk drugs and formulations.

  13. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  14. Verification of rapid method for estimation of added food colorant type in boiled sausages based on measurement of cross section color

    Science.gov (United States)

    Jovanović, J.; Petronijević, R. B.; Lukić, M.; Karan, D.; Parunović, N.; Branković-Lazić, I.

    2017-09-01

    During the previous development of a chemometric method for estimating the amount of added colorant in meat products, it was noticed that the natural colorant most commonly added to boiled sausages, E 120, has different CIE-LAB behavior compared to artificial colors that are used for the same purpose. This has opened the possibility of transforming the developed method into a method for identifying the addition of natural or synthetic colorants in boiled sausages based on the measurement of the color of the cross-section. After recalibration of the CIE-LAB method using linear discriminant analysis, verification was performed on 76 boiled sausages, of either frankfurters or Parisian sausage types. The accuracy and reliability of the classification was confirmed by comparison with the standard HPLC method. Results showed that the LDA + CIE-LAB method can be applied with high accuracy, 93.42 %, to estimate food color type in boiled sausages. Natural orange colors can give false positive results. Pigments from spice mixtures had no significant effect on CIE-LAB results.
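Linear discriminant analysis on CIE-LAB coordinates, as used after the recalibration described above, can be sketched with a minimal two-class Fisher discriminant. The L*a*b* clusters below are synthetic stand-ins for sausages colored with the natural colorant E 120 versus synthetic colors, not the study's measurements.

```python
import numpy as np

# Two-class Fisher LDA: project onto w = Sigma^{-1} (mu1 - mu0) and split
# at the midpoint of the projected class means.

rng = np.random.default_rng(2)
natural = rng.normal([55.0, 18.0, 12.0], 1.5, (30, 3))    # class 0: "natural" L*a*b*
synthetic = rng.normal([52.0, 25.0, 8.0], 1.5, (30, 3))   # class 1: "synthetic" L*a*b*

X = np.vstack([natural, synthetic])
y = np.array([0] * 30 + [1] * 30)

mu0, mu1 = natural.mean(axis=0), synthetic.mean(axis=0)
cov = (np.cov(natural.T) + np.cov(synthetic.T)) / 2        # pooled within-class covariance
w = np.linalg.solve(cov, mu1 - mu0)                        # discriminant direction
threshold = w @ (mu0 + mu1) / 2

pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
```

With well-separated color clusters the discriminant classifies nearly all samples correctly, consistent in spirit with the 93.42% accuracy reported for the real measurements.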

  15. Development and validation of RP-HPLC and UV-spectrophotometric methods for rapid simultaneous estimation of amlodipine and benazepril in pure and fixed dose combination

    Directory of Open Access Journals (Sweden)

    Abhi Kavathia

    2017-05-01

    Full Text Available High-performance liquid chromatographic (HPLC and UV spectrophotometric methods were developed and validated for the quantitative determination of amlodipine besylate (AM and benazepril hydrochloride (BZ. Different analytical performance parameters such as linearity, precision, accuracy, specificity, limit of detection (LOD and limit of quantification (LOQ were determined according to International Conference on Harmonization ICH Q2B guidelines. The RP-HPLC method was developed by the isocratic technique on a reversed-phase Shodex C-18 5e column. The retention time for AM and BZ was 4.43 min and 5.70 min respectively. The UV spectrophotometric determinations were performed at 237 nm and 366 nm for AM and at 237 nm for BZ. Correlation between absorbance of AM at 237 nm and 366 nm was established and based on developed correlation equation estimation of BZ at 237 nm was carried out. The linearity of the calibration curves for each analyte in the desired concentration range was good (r2 > 0.999 by both the HPLC and UV methods. The method showed good reproducibility and recovery with percent relative standard deviation less than 5%. Moreover, the accuracy and precision obtained with HPLC co-related well with the UV method which implied that UV spectroscopy can be a cheap, reliable and less time consuming alternative for chromatographic analysis. The proposed methods are highly sensitive, precise and accurate and hence successfully applied for determining the assay and in vitro dissolution of a marketed formulation.

  16. Rapid estimation of the economic consequences of global earthquakes

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    to reduce this time gap to more rapidly and effectively mobilize response. We present here a procedure to rapidly and approximately ascertain the economic impact immediately following a large earthquake anywhere in the world. In principle, the approach presented is similar to the empirical fatality estimation methodology proposed and implemented by Jaiswal and others (2009). In order to estimate economic losses, we need an assessment of the economic exposure at various levels of shaking intensity. The economic value of all the physical assets exposed at different locations in a given area is generally not known and extremely difficult to compile at a global scale. In the absence of such a dataset, we first estimate the total Gross Domestic Product (GDP) exposed at each shaking intensity by multiplying the per-capita GDP of the country by the total population exposed at that shaking intensity level. We then scale the total GDP estimated at each intensity by an exposure correction factor, which is a multiplying factor to account for the disparity between wealth and/or economic assets to the annual GDP. The economic exposure obtained using this procedure is thus a proxy estimate for the economic value of the actual inventory that is exposed to the earthquake. The economic loss ratio, defined in terms of a country-specific lognormal cumulative distribution function of shaking intensity, is derived and calibrated against the losses from past earthquakes. This report describes the development of a country or region-specific economic loss ratio model using economic loss data available for global earthquakes from 1980 to 2007. The proposed model is a potential candidate for directly estimating economic losses within the currently-operating PAGER system. 
PAGER's other loss models use indirect methods that require substantially more data (such as building/asset inventories, vulnerabilities, and the asset values exposed at the time of earthquake) to implement on a global basis.
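The loss computation sketched in the abstract (exposure approximated from per-capita GDP, exposed population, and an exposure correction factor, scaled by a lognormal loss ratio in shaking intensity) can be written compactly. Every numeric value below is hypothetical; PAGER's calibrated, country-specific coefficients are not reproduced here.

```python
import math

# Loss ratio modeled as a lognormal CDF of shaking intensity s:
# ratio(s) = Phi((ln s - mu) / sigma), where Phi is the standard normal CDF.

def loss_ratio(intensity, mu, sigma):
    return 0.5 * (1 + math.erf((math.log(intensity) - mu) / (sigma * math.sqrt(2))))

per_capita_gdp = 5000.0            # USD, hypothetical country
correction = 3.0                   # hypothetical wealth-to-annual-GDP correction factor
exposed = {6: 1_000_000, 7: 300_000, 8: 50_000}   # population per intensity level (hypothetical)

# Total loss: sum over intensity levels of (exposure proxy) x (loss ratio)
total_loss = sum(
    per_capita_gdp * pop * correction * loss_ratio(s, mu=math.log(8.0), sigma=0.5)
    for s, pop in exposed.items()
)
```

The loss ratio is monotonically increasing in intensity and reaches 0.5 at the median intensity exp(mu), so the two parameters have a direct interpretation when fitting against losses from past earthquakes.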

  17. Rapid methods for detection of bacteria

    DEFF Research Database (Denmark)

    Corfitzen, Charlotte B.; Andersen, B.Ø.; Miller, M.

    2006-01-01

    Traditional methods for detection of bacteria in drinking water, e.g. Heterotrophic Plate Counts (HPC) or Most Probable Number (MPN), take 48-72 hours to give a result. New rapid methods for detection of bacteria are needed to protect consumers against contamination. Two rapid methods...

  18. Rapid Estimation of Gustatory Sensitivity Thresholds with SIAM and QUEST

    Directory of Open Access Journals (Sweden)

    Richard Höchenberger

    2017-06-01

    Full Text Available Adaptive methods provide quick and reliable estimates of sensory sensitivity. Yet, these procedures are typically developed for and applied to the non-chemical senses only, i.e., to vision, audition, and somatosensation. The relatively long inter-stimulus intervals in gustatory studies, which are required to minimize adaptation and habituation, call for time-efficient threshold estimation. We therefore tested the suitability of two adaptive yes-no methods based on SIAM and QUEST for rapid estimation of taste sensitivity by comparing test-retest reliability for sucrose, citric acid, sodium chloride, and quinine hydrochloride thresholds. We show that taste thresholds can be obtained in a time-efficient manner with both methods (within only 6.5 min on average using QUEST and ~9.5 min using SIAM). QUEST yielded higher test-retest correlations than SIAM for three of the four tastants. Either method allows taste threshold estimation with low strain on participants, rendering them particularly advantageous for use in subjects with limited attentional or mnemonic capacities, and for time-constrained applications during cohort studies or in the testing of patients and children.
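A yes-no staircase of the kind underlying SIAM can be sketched as follows. This is a heavily simplified toy with a deterministic observer, no catch trials, and no adjustment matrix, with illustrative step sizes throughout; it shows only the track-toward-threshold behaviour, not the actual SIAM or QUEST procedures.

```python
# Adaptive yes-no staircase: after each "yes" the tastant concentration is
# lowered (task made harder), after each "no" it is raised and the step is
# shrunk, so the track converges toward the detection threshold.

def simulate_observer(conc, threshold=1.0):
    """Toy observer: reports detection iff concentration >= threshold."""
    return conc >= threshold

conc, step = 8.0, 2.0        # starting concentration and step (arbitrary units)
track = []
for _ in range(30):
    track.append(conc)
    if simulate_observer(conc):
        conc = max(conc - step, 0.01)   # "yes": lower the concentration
    else:
        conc += step                    # "no": raise it
        step = max(step / 2, 0.25)      # shrink the step after a miss
estimate = sum(track[-8:]) / len(track[-8:])   # average of the final trials
```

After the initial descent the track oscillates around the observer's threshold, and averaging the final trials yields the threshold estimate, which is why such procedures converge in relatively few trials.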

  19. Development and validation of a rapid high performance liquid chromatography - photodiode array detection method for estimation of a bioactive compound wedelolactone in extracts of Eclipta alba

    Directory of Open Access Journals (Sweden)

    Satyanshu Kumar

    2013-03-01

    Full Text Available Following optimization of extraction, separation and analytical conditions, a rapid, sensitive and simple reverse-phase high performance liquid chromatography-photodiode array (HPLC-PDA) method has been developed for the identification and quantification of wedelolactone in different extracts of Eclipta alba. The separation of wedelolactone was achieved on a C18 column using a solvent system consisting of a mixture of methanol:water:acetic acid (95:5:0.04) as the mobile phase in isocratic elution mode, followed by photodiode array detection at 352 nm. The developed method was validated as per the guidelines of the International Conference on Harmonization (ICH). The calibration curve presented good linear regression (r² > 0.998) within the test range, and the maximum relative standard deviation (RSD, %) values for the intra-day assay were found to be 0.15, 1.30 and 1.1 for low (5 µg/mL), medium (20 µg/mL) and high (80 µg/mL) concentrations of wedelolactone. For the inter-day assay the maximum RSD (%) values were found to be 2.83, 1.51 and 2.06 for low, medium and high concentrations, respectively. The limit of detection (LOD) and limit of quantification (LOQ) were calculated to be 2 and 5 µg/mL, respectively. Analytical recovery of wedelolactone was greater than 95%. Wedelolactone in different extracts of Eclipta alba was identified and quantified using the developed HPLC method. The validated HPLC method allowed precise quantitative analysis of wedelolactone in Eclipta alba extracts.

  20. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.
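The bootstrap idea behind CTER's error assessment (resampling the data to obtain a standard deviation for a fitted parameter) is generic and easy to sketch. In the toy below the fitted "parameter" is just the mean of synthetic measurements, not an actual CTF defocus fit; the data and counts are illustrative.

```python
import numpy as np

# Bootstrap: resample the measurements with replacement many times,
# re-estimate the parameter on each resample, and take the spread of those
# re-estimates as the parameter's standard deviation.

rng = np.random.default_rng(4)
measurements = rng.normal(loc=2.5, scale=0.3, size=50)   # synthetic data

n_boot = 1000
boot_estimates = np.array([
    rng.choice(measurements, size=measurements.size, replace=True).mean()
    for _ in range(n_boot)
])
param_std = boot_estimates.std(ddof=1)   # bootstrap standard deviation
```

A thresold on such bootstrap standard deviations is what enables automated screening: micrographs whose fitted defocus or astigmatism comes with an unusually large uncertainty can be flagged or discarded without manual inspection.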

  1. [Rapid prototyping: a very promising method].

    Science.gov (United States)

    Haverman, T M; Karagozoglu, K H; Prins, H-J; Schulten, E A J M; Forouzanfar, T

    2013-03-01

    Rapid prototyping is a method which makes it possible to produce a three-dimensional model based on two-dimensional imaging. Various rapid prototyping methods are available for modelling, such as stereolithography, selective laser sintering, direct laser metal sintering, two-photon polymerization, laminated object manufacturing, three-dimensional printing, three-dimensional plotting, polyjet inkjet technology, fused deposition modelling, vacuum casting and milling. The various methods currently being used in the biomedical sector differ in production, materials and properties of the three-dimensional model which is produced. Rapid prototyping is mainly used for preoperative planning, simulation, education, and research into and development of bioengineering possibilities.

  2. A new method for rapid Canine retraction

    Directory of Open Access Journals (Sweden)

    Khavari A

    2001-06-01

    Full Text Available The distraction osteogenesis (DO) method in bone lengthening and rapid midpalatal expansion has shown the great ability of osteogenic tissues for rapid bone formation under distraction force and a special protocol with an optimum rate of one millimeter per day. The periodontal membrane of teeth (PDM) is the extension of the periosteum in the alveolar socket. Orthodontic force distracts PDM fibers on the tension side and then bone formation begins. Objectives: Rapid retraction of the canine tooth into the extraction space of the first premolar by the DO protocol, in order to show the ability of the PDM for rapid bone formation. The other objective was to reduce the total orthodontic treatment time of extraction cases. Patients and Methods: Twelve maxillary canines in six patients were retracted rapidly in three weeks by a custom-made tooth-borne appliance. Radiographic records were taken to evaluate the effects of the heavy applied force on the canine and anchorage teeth. Results: Average retraction was 7.05 mm in three weeks (2.35 mm/week). Canines rotated distal-in by a mean of 3.5 degrees. Anchorage loss was from 0 to 0.8 mm with an average of 0.3 mm. Root resorption of the canines was negligible and not clinically significant. The periodontium was normal after rapid retraction. No hazard to pulp vitality was observed. Discussion: The PDM responded well to heavy distraction force applied by the DO protocol. Rapid canine retraction seems to be a safe method and can considerably reduce orthodontic treatment time.

  3. A scoping review of rapid review methods.

    Science.gov (United States)

    Tricco, Andrea C; Antony, Jesmin; Zarin, Wasifa; Strifler, Lisa; Ghassemi, Marco; Ivory, John; Perrier, Laure; Hutton, Brian; Moher, David; Straus, Sharon E

    2015-09-16

    Rapid reviews are a form of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a timely manner. Although numerous centers are conducting rapid reviews internationally, few studies have examined the methodological characteristics of rapid reviews. We aimed to examine articles, books, and reports that evaluated, compared, used or described rapid reviews or methods through a scoping review. MEDLINE, EMBASE, the Cochrane Library, internet websites of rapid review producers, and reference lists were searched to identify articles for inclusion. Two reviewers independently screened literature search results and abstracted data from included studies. Descriptive analysis was conducted. We included 100 articles plus one companion report that were published between 1997 and 2013. The studies were categorized as 84 application papers, seven development papers, six impact papers, and four comparison papers (one was included in two categories). The rapid reviews were conducted between 1 and 12 months, predominantly in Europe (58 %) and North America (20 %). The included studies failed to report 6 % to 73 % of the specific systematic review steps examined. Fifty unique rapid review methods were identified; 16 methods occurred more than once. Streamlined methods that were used in the 82 rapid reviews included limiting the literature search to published literature (24 %) or one database (2 %), limiting inclusion criteria by date (68 %) or language (49 %), having one person screen and another verify or screen excluded studies (6 %), having one person abstract data and another verify (23 %), not conducting risk of bias/quality appraisal (7 %) or having only one reviewer conduct the quality appraisal (7 %), and presenting results as a narrative summary (78 %). Four case studies were identified that compared the results of rapid reviews to systematic reviews. Three studies found that the conclusions between

  4. CTER-rapid estimation of CTF parameters with error assessment.

    Science.gov (United States)

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. A Semi-Analytical Method for Rapid Estimation of Near-Well Saturation, Temperature, Pressure and Stress in Non-Isothermal CO2 Injection

    Science.gov (United States)

    LaForce, T.; Ennis-King, J.; Paterson, L.

    2015-12-01

    Reservoir cooling near the wellbore is expected when fluids are injected into a reservoir or aquifer in CO2 storage, enhanced oil or gas recovery, enhanced geothermal systems, and water injection for disposal. Ignoring thermal effects near the well can lead to under-prediction of changes in reservoir pressure and stress due to competition between increased pressure and contraction of the rock in the cooled near-well region. In this work a previously developed semi-analytical model for immiscible, nonisothermal fluid injection is generalised to include partitioning of components between two phases. Advection-dominated radial flow is assumed so that the coupled two-phase flow and thermal conservation laws can be solved analytically. The temperature and saturation profiles are used to find the increase in reservoir pressure, tangential, and radial stress near the wellbore in a semi-analytical, forward-coupled model. Saturation, temperature, pressure, and stress profiles are found for parameters representative of several CO2 storage demonstration projects around the world. General results on maximum injection rates vs depth for common reservoir parameters are also presented. Prior to drilling an injection well there is often little information about the properties that will determine the injection rate that can be achieved without exceeding fracture pressure, yet injection rate and pressure are key parameters in well design and placement decisions. Analytical solutions to simplified models such as these can quickly provide order of magnitude estimates for flow and stress near the well based on a range of likely parameters.

  6. Research on parafoil stability using a rapid estimate model

    Directory of Open Access Journals (Sweden)

    Hua YANG

    2017-10-01

    With consideration of the rotation between canopy and payload of a parafoil system, a four-degree-of-freedom (4-DOF) longitudinal static model was used to solve the parafoil state variables in straight steady flight. The aerodynamic solution of the parafoil system was a combination of the vortex lattice method (VLM) and an engineering estimation method. Based on a small-disturbance assumption, a 6-DOF linear model that considers canopy additional mass was established, with the benchmark state calculated by the 4-DOF static model. Modal analysis of the dynamic model was used to calculate the stability parameters. This method, based on a small-disturbance linear model and modal analysis, is highly efficient for studying parafoil stability, and is well suited for rapid stability analysis in the preliminary stage of parafoil design. Using this method, this paper shows that longitudinal and lateral stability both decrease as the steady climb angle increases. This explains the wavy track of the parafoil observed during climbing.

  7. Experimental study on rapid embankment construction methods

    International Nuclear Information System (INIS)

    Hirano, Hideaki; Egawa, Kikuji; Hyodo, Kazuya; Kannoto, Yasuo; Sekimoto, Tsuyoshi; Kobayashi, Kokichi.

    1982-01-01

    In the construction of a thermal or nuclear power plant in a coastal area, shorter embankment construction periods have recently come to be called for. This tendency is notable where the construction period is limited by meteorological or sea conditions. To meet this requirement, the authors have been conducting basic experimental studies on two methods for the rapid execution of embankment construction, namely the Steel Plate Cellular Bulkhead Embedding Method and the Ship Hull Caisson Method. This paper presents an outline of the results of the experimental study on these two methods. (author)

  8. Rapid Radiochemical Methods for Asphalt Paving Material ...

    Science.gov (United States)

    Technical Brief. Validated rapid radiochemical methods for alpha and beta emitters in solid matrices that are commonly encountered in urban environments were previously unavailable for public use by responding laboratories. A lack of tested rapid methods would delay the quick determination of contamination levels and the assessment of acceptable site-specific exposure levels. Of special concern are matrices with rough and porous surfaces, which allow the movement of radioactive material deep into the building material, making it difficult to detect. This research focuses on methods that address preparation, radiochemical separation, and analysis of asphalt paving materials and asphalt roofing shingles. These matrices, common to outdoor environments, challenge the capability and capacity of very experienced radiochemistry laboratories. Generally, routine sample preparation and dissolution techniques produce liquid samples (representative of the original sample material) that can be processed using available radiochemical methods. The asphalt materials are especially difficult because they do not readily lend themselves to these routine sample preparation and dissolution techniques. The HSRP and ORIA coordinate radiological reference laboratory priorities and activities in conjunction with HSRP's Partner Process. As part of the collaboration, the HSRP worked with ORIA to publish rapid radioanalytical methods for selected radionuclides in building material matrices.

  9. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
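Since the MOG and k-means searches both stop on the Akaike Information Criterion, the model-order selection logic can be sketched in a few lines. The candidate log-likelihoods and parameter counts below are invented for illustration, not taken from the paper:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def select_order(candidates):
    """candidates: list of (n_modes, max_log_likelihood, n_params) tuples
    for fitted mixture models. Returns the mode count with the lowest AIC."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]

# Hypothetical fits of 1..4-component mixtures to one data set: likelihood
# improves with each added component, but AIC penalizes the extra parameters.
fits = [(1, -520.0, 2), (2, -470.0, 5), (3, -468.0, 8), (4, -467.5, 11)]
best = select_order(fits)
```

Here the jump from one to two components buys a large likelihood gain, while further components do not pay for their added parameters, so the search stops at two modes.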

  10. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  11. A novel ultra-performance liquid chromatography hyphenated with quadrupole time of flight mass spectrometry method for rapid estimation of total toxic retronecine-type of pyrrolizidine alkaloids in herbs without requiring corresponding standards.

    Science.gov (United States)

    Zhu, Lin; Ruan, Jian-Qing; Li, Na; Fu, Peter P; Ye, Yang; Lin, Ge

    2016-03-01

    Nearly 50% of naturally-occurring pyrrolizidine alkaloids (PAs) are hepatotoxic, and the majority of hepatotoxic PAs are retronecine-type PAs (RET-PAs). However, quantitative measurement of PAs in herbs/foodstuffs is often difficult because most reference PAs are unavailable. In this study, a rapid, selective, and sensitive UHPLC-QTOF-MS method was developed for the estimation of RET-PAs in herbs without requiring corresponding standards. This method is based on our previously established characteristic and diagnostic mass fragmentation patterns and the use of retrorsine for calibration. The use of a single RET-PA (retrorsine) for calibration was justified by the high similarity, with no significant differences, of the calibration curves constructed from the peak areas of extracted-ion chromatograms of the fragment ions at m/z 120.0813 or 138.0919 versus the concentrations of five representative RET-PAs. The developed method was successfully applied to measure the total content of toxic RET-PAs of diversified structures in fifteen potential PA-containing herbs. Copyright © 2014 Elsevier Ltd. All rights reserved.
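The single-standard idea — one retrorsine calibration curve shared across structurally similar RET-PAs — amounts to an ordinary linear fit of fragment-ion peak area against concentration, then inverting that fit for unknowns. The numbers below are hypothetical, not from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical retrorsine calibration: concentration (ng/mL) vs. peak area
# of the extracted-ion chromatogram for the m/z 120.0813 fragment.
conc = [10, 25, 50, 100, 200]
area = [1500, 3800, 7400, 15100, 29900]
slope, intercept = fit_line(conc, area)

def estimate_conc(peak_area):
    """Estimate a RET-PA concentration from its fragment-ion peak area,
    reusing the retrorsine curve as a shared calibration."""
    return (peak_area - intercept) / slope
```

The method's claim is precisely that this inversion stays accurate when `estimate_conc` is fed peak areas from other RET-PAs, because the diagnostic fragment response is nearly compound-independent.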

  12. Survey of methods for rapid spin reversal

    International Nuclear Information System (INIS)

    McKibben, J.L.

    1980-01-01

    The need for rapid spin reversal techniques in polarization experiments is discussed. The ground-state atomic-beam source equipped with two rf transitions for hydrogen can be reversed rapidly, and is now in use on several accelerators. It is the optimum choice provided the accelerator can accept H+ ions. At present all rapid reversal experiments using H- ions are done with Lamb-shift sources; however, this is not a unique choice. Three methods for the reversal of the spin of the atomic beam within the Lamb-shift source are discussed in order of development. Coherent intensity and perhaps focus modulation seem to be the biggest problems in both types of sources. Methods for reducing these modulations in the Lamb-shift source are discussed. The same Lamb-shift apparatus is easily modified to provide information on the atomic physics of quenching of the 2S(1/2) states versus spin orientation, and this is also discussed. 2 figures

  13. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  14. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    ... In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation ... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions ...

  15. A new rapid method for isolating nucleoli.

    Science.gov (United States)

    Li, Zhou Fang; Lam, Yun Wah

    2015-01-01

    The nucleolus was one of the first subcellular organelles to be isolated from the cell. The advent of modern proteomic techniques has resulted in the identification of thousands of proteins in this organelle, and live cell imaging technology has allowed the study of the dynamics of these proteins. However, the limitations of current nucleolar isolation methods hinder the further exploration of this structure. In particular, these methods require the use of a large number of cells and tedious procedures. In this chapter we describe a new and improved nucleolar isolation method for cultured adherent cells. In this method cells are snap-frozen before direct sonication and centrifugation onto a sucrose cushion. The nucleoli can be obtained within a time as short as 20 min, and the high yield allows the use of less starting material. As a result, this method can capture rapid biochemical changes in nucleoli by freezing the cells at a precise time, hence faithfully reflecting the protein composition of nucleoli at the specified time point. This protocol will be useful for proteomic studies of dynamic events in the nucleolus and for better understanding of the biology of mammalian cells.

  16. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  17. Computerized method for rapid optimization of immunoassays

    International Nuclear Information System (INIS)

    Rousseau, F.; Forest, J.C.

    1990-01-01

    The authors have developed a one-step quantitative method for radioimmunoassay optimization. The method is rapid and requires only that a series of saturation curves be performed with different titres of the antiserum. After calculating the saturation point at several antiserum titres using the Scatchard plot, the authors produced a table that predicts the main characteristics of the standard curve (Bo/T, Bo and T) that will prevail for any combination of antiserum titre and percentage of site saturation. The authors have developed a microcomputer program able to interpolate all the data needed to produce such a table from the results of the saturation curves. This program can also predict the sensitivity of the assay under any experimental conditions, provided the antibody does not discriminate between the labeled and the unlabeled antigen. The authors tested the accuracy of this optimization table with two in-house RIA systems: 17-β-estradiol and hLH. The results obtained experimentally, including sensitivity determinations, were concordant with those predicted from the optimization table. This method greatly accelerates and improves the process of optimizing radioimmunoassays [fr]

  18. Rapid Moment Magnitude Estimation Using Strong Motion Derived Static Displacements

    OpenAIRE

    Muzli, Muzli; Asch, Guenter; Saul, Joachim; Murjaya, Jaya

    2015-01-01

    The static surface deformation can be recovered from strong motion records. Compared to satellite-based measurements such as GPS or InSAR, the advantage of strong motion records is that they have the potential to provide real-time coseismic static displacements. The use of these valuable data was optimized for moment magnitude estimation. A centroid grid search method was introduced to calculate the moment magnitude. The method was applied to data sets of the 2011...

  19. Rapid cable tension estimation using dynamic and mechanical properties

    Science.gov (United States)

    Martínez-Castro, Rosana E.; Jang, Shinae; Christenson, Richard E.

    2016-04-01

    Main tension elements are critical to the overall stability of cable-supported bridges. A dependable and rapid determination of cable tension is desired to assess the state of a cable-supported bridge and evaluate its operability. A portable smart sensor setup is presented that reduces post-processing time and deployment complexity while reliably determining cable tension using dynamic characteristics extracted from spectral analysis. A self-recording accelerometer is coupled with a single-board microcomputer that communicates wirelessly with a remote host computer. The portable smart sensing device is designed such that additional algorithms, sensors, and controlling devices for various monitoring applications can be installed and operated for additional structural assessment. The tension-estimating algorithms are based on taut string theory and extend to consider bending stiffness. The successful combination of cable properties allows the use of a cable's dynamic behavior to determine tension force. The tension-estimating algorithms are experimentally validated on a through-arch steel bridge subject to ambient vibration induced by passing traffic. The estimated tensions agree well with previously determined tension values for the structure.
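The taut string theory at the core of such algorithms relates the n-th natural frequency of a cable to its tension, f_n = (n/2L)·sqrt(T/m), which inverts to T = 4·m·L²·(f_n/n)². A minimal sketch of that inversion, ignoring the bending-stiffness extension the paper adds; the cable properties below are invented:

```python
def cable_tension(mass_per_length, length, freq_hz, mode=1):
    """Taut-string tension estimate from a measured natural frequency:
        f_n = (n / 2L) * sqrt(T / m)   =>   T = 4 m L^2 (f_n / n)^2
    mass_per_length in kg/m, length in m, freq_hz in Hz; returns T in N.
    Sag and bending stiffness are neglected in this simplified form."""
    return 4.0 * mass_per_length * length**2 * (freq_hz / mode)**2

# Hypothetical hanger cable: 15 m long, 50 kg/m, first-mode frequency 4.0 Hz
T = cable_tension(50.0, 15.0, 4.0)  # tension in newtons
```

In practice the first few identified modes are each inverted this way and the estimates reconciled, which is also where a bending-stiffness correction enters for short, stiff cables.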

  20. Bayesian estimation methods in metrology

    International Nuclear Information System (INIS)

    Cox, M.G.; Forbes, A.B.; Harris, P.M.

    2004-01-01

    In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods.

  1. Joko Tingkir program for estimating tsunami potential rapidly

    Energy Technology Data Exchange (ETDEWEB)

    Madlazim, E-mail: m-lazim@physics.its.ac.id; Hariyono, E., E-mail: m-lazim@physics.its.ac.id [Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Negeri Surabaya (UNESA), Jl. Ketintang, Surabaya 60231 (Indonesia)]

    2014-09-25

    The purpose of the study was to estimate P-wave rupture durations (T{sub dur}), dominant periods (T{sub d}) and exceedance durations (T{sub 50Ex}) simultaneously for local events: shallow earthquakes which occurred off the coast of Indonesia. Although all of the earthquakes had magnitudes greater than 6.3 and depths less than 70 km, some of the earthquakes generated a tsunami while other events (Mw=7.8) did not. Analysis of the above-stated parameters using Joko Tingkir helped in understanding the tsunami generation of these earthquakes. Measurements from vertical-component broadband P-wave velocity records and determination of the above-stated parameters can provide a direct procedure for rapidly assessing the potential for tsunami generation. The results of the present study and the analysis of the seismic parameters helped explain why some events generated a tsunami, while others did not.

  2. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    Science.gov (United States)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them, and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data is available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently been largely focused on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution, in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best-fitting "strike" of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is rectilinear, the calculated strike correlates well with the strike of the fault, and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure
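The projection procedure described above can be sketched as a toy algorithm in local planar coordinates. This hedged version omits the spatial-binning outlier removal and simply takes the azimuth along which the projected aftershocks have the largest spread as the strike, with that spread as the rupture length:

```python
import math

def rupture_strike_and_length(epicenters, mainshock, n_azimuths=36):
    """Sketch of the projection idea: project aftershock epicenters onto
    lines through the mainshock epicenter at incremental azimuths; the
    azimuth with the largest projected spread approximates the rupture
    strike, and that spread approximates the rupture length.
    Coordinates are (x, y) in km in a local plane (no outlier removal)."""
    best = (None, -1.0)
    for k in range(n_azimuths):
        az = math.pi * k / n_azimuths          # sample azimuths over 180 deg
        ux, uy = math.cos(az), math.sin(az)    # unit vector along the line
        proj = [(e[0] - mainshock[0]) * ux + (e[1] - mainshock[1]) * uy
                for e in epicenters]
        length = max(proj) - min(proj)         # extent along this azimuth
        if length > best[1]:
            best = (math.degrees(az), length)
    return best  # (strike_deg, rupture_length_km)
```

For an elongated (rectilinear) aftershock cloud this picks the long axis, which is the regime the abstract says works well; for aspect ratios near 1 the spread is similar in all directions and the strike becomes poorly constrained.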

  3. Incorporating indel information into phylogeny estimation for rapidly emerging pathogens

    Directory of Open Access Journals (Sweden)

    Suchard Marc A

    2007-03-01

    Abstract. Background: Phylogenies of rapidly evolving pathogens can be difficult to resolve because of the small number of substitutions that accumulate in the short times since divergence. To improve resolution of such phylogenies we propose using insertion and deletion (indel) information in addition to substitution information. We accomplish this through joint estimation of alignment and phylogeny in a Bayesian framework, drawing inference using Markov chain Monte Carlo. Joint estimation of alignment and phylogeny sidesteps biases that stem from conditioning on a single alignment by taking into account the ensemble of near-optimal alignments. Results: We introduce a novel Markov chain transition kernel that improves computational efficiency by proposing non-local topology rearrangements and by block sampling alignment and topology parameters. In addition, we extend our previous indel model to increase biological realism by placing indels preferentially on longer branches. We demonstrate the ability of indel information to increase phylogenetic resolution in examples drawn from within-host viral sequence samples. We also demonstrate the importance of taking alignment uncertainty into account when using such information. Finally, we show that codon-based substitution models can significantly affect alignment quality and phylogenetic inference by unrealistically forcing indels to begin and end between codons. Conclusion: These results indicate that indel information can improve phylogenetic resolution of recently diverged pathogens and that alignment uncertainty should be considered in such analyses.

  4. Rapid surface-water volume estimations in beaver ponds

    Science.gov (United States)

    Karran, Daniel J.; Westbrook, Cherie J.; Wheaton, Joseph M.; Johnston, Carol A.; Bedard-Haughn, Angela

    2017-02-01

    Beaver ponds are surface-water features that are transient through space and time. Such qualities complicate the inclusion of beaver ponds in local and regional water balances, and in hydrological models, as reliable estimates of surface-water storage are difficult to acquire without time- and labour-intensive topographic surveys. A simpler approach to overcome this challenge is needed, given the abundance of beaver ponds in North America, Eurasia, and southern South America. We investigated whether simple morphometric characteristics derived from readily available aerial imagery, or quickly measured field attributes of beaver ponds, can be used to approximate surface-water storage across the range of environmental settings in which beaver ponds are found. A total of 40 beaver ponds from four different sites in North and South America were studied. The simplified volume-area-depth (V-A-h) approach, originally developed for prairie potholes, was tested. With only two measurements of pond depth and corresponding surface area, this method estimated surface-water storage in beaver ponds to within 5 % on average. Beaver pond morphometry was characterized by a median basin coefficient of 0.91, and dam length and pond surface area were strongly correlated with beaver pond storage capacity, regardless of geographic setting. These attributes provide a means for coarsely estimating surface-water storage capacity in beaver ponds. Overall, this research demonstrates that reliable estimates of surface-water storage in beaver ponds require only simple measurements derived from aerial imagery and/or brief visits to the field. Future research efforts should be directed at incorporating these simple methods into both broader beaver-related tools and catchment-scale hydrological models.
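The V-A-h approach (after the prairie-pothole method of Hayashi and van der Kamp) assumes pond area scales as a power of water depth, so two (depth, area) measurements fix the basin profile and volume follows by integrating area over depth. A sketch under that assumption; the pond measurements below are hypothetical:

```python
import math

def vah_volume(h1, a1, h2, a2, h):
    """Simplified V-A-h volume estimate. Assumes the basin profile gives
    area A = s * h**(2/p) with depth h (reference depth h0 = 1 m folded
    into s). Two (depth, area) measurements fix the exponent and scale;
    volume is then the integral of A over depth:
        V(h) = s * h**(1 + 2/p) / (1 + 2/p)
    Depths in m, areas in m^2; returns volume in m^3."""
    two_over_p = math.log(a1 / a2) / math.log(h1 / h2)  # fit the exponent
    s = a1 / h1**two_over_p                             # fit the scale
    return s * h**(1 + two_over_p) / (1 + two_over_p)

# Hypothetical pond: 400 m^2 at 0.5 m depth, 900 m^2 at 1.0 m depth
V = vah_volume(0.5, 400.0, 1.0, 900.0, 1.0)  # storage at full depth, m^3
```

This is why the abstract stresses that only two depth/area pairs are needed: the power-law profile assumption does the rest.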

  5. Lithium-Ion Battery Online Rapid State-of-Power Estimation under Multiple Constraints

    Directory of Open Access Journals (Sweden)

    Shun Xiang

    2018-01-01

    The paper aims to realize rapid online estimation of the state of power (SOP) of a lithium-ion battery under multiple constraints. Firstly, based on an improved first-order resistance-capacitance (RC) model with one-state hysteresis, a linear state-space battery model is built; then, using the dual extended Kalman filtering (DEKF) method, the battery parameters and states, including open-circuit voltage (OCV), are estimated. Secondly, by employing the estimated OCV as the observed value in a second pair of dual Kalman filters, the battery SOC is estimated. Thirdly, a novel rapid-calculation peak power/SOP method with multiple constraints is proposed, in which the battery's peak state is determined according to a bisection judgment method; one or two instantaneous peak powers are then used to determine the peak power over T seconds. In addition, the actual constraint that the battery operates under during use is analyzed specifically. Finally, three simplified Federal Urban Driving Schedule (SFUDS) experiments with inserted pulses are conducted to verify the effectiveness and accuracy of the proposed online SOP estimation method.
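The idea of an instantaneous peak power limited by multiple constraints can be illustrated with a deliberately simplified OCV-plus-internal-resistance model (the paper uses a first-order RC model with hysteresis; the cell parameters below are invented):

```python
def peak_discharge_power(ocv, r_int, v_min, i_max):
    """Hedged sketch of an instantaneous peak-power check under two
    constraints: a terminal-voltage floor and a current ceiling.
    The voltage constraint caps current at (OCV - Vmin)/R; whichever
    constraint binds first sets the peak current, and power follows."""
    i_volt = (ocv - v_min) / r_int   # current at which V would hit Vmin
    i_peak = min(i_volt, i_max)      # the tighter constraint wins
    v_term = ocv - i_peak * r_int    # terminal voltage at that current
    return v_term * i_peak           # instantaneous peak power, W

# Hypothetical cell: OCV 3.7 V, 10 mOhm internal resistance,
# 2.8 V cutoff voltage, 50 A current limit
p = peak_discharge_power(3.7, 0.010, 2.8, 50.0)
```

Identifying which constraint is active, here a one-line `min`, is what the paper's bisection judgment does for the richer RC model, where the answer also depends on the pulse duration T.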

  6. Rapid assessment methods in eye care: An overview

    Directory of Open Access Journals (Sweden)

    Srinivas Marmamula

    2012-01-01

    Reliable information is required for the planning and management of eye care services. While classical research methods provide reliable estimates, they are prohibitively expensive and resource intensive. Rapid assessment (RA) methods are indispensable tools in situations where data are needed quickly and where time- or cost-related factors prohibit the use of classical epidemiological surveys. These methods have been developed and field tested, and can be applied across almost the entire gamut of health care. The 1990s witnessed the emergence of RA methods in eye care for cataract, onchocerciasis, and trachoma and, more recently, the main causes of avoidable blindness and visual impairment. The important features of RA methods include the use of local resources, simplified sampling methodology, and a simple examination protocol/data collection method that can be performed by locally available personnel. The analysis is quick and easy to interpret. The entire process is inexpensive, so the survey may be repeated once every 5-10 years to assess the changing trends in disease burden. RA survey methods are typically linked with an intervention. This article provides an overview of the RA methods commonly used in eye care, and emphasizes the selection of appropriate methods based on the local need and context.

  7. Rapid estimation of organic nitrogen in oil shale waste waters

    Energy Technology Data Exchange (ETDEWEB)

    Jones, B.M.; Daughton, C.G.; Harris, G.J.

    1984-04-01

    Many of the characteristics of oil shale process waste waters (e.g., malodors, color, and resistance to biotreatment) are imparted by numerous nitrogenous heterocycles and aromatic amines. For the frequent performance assessment of waste treatment processes designed to remove these nitrogenous organic compounds, a rapid and colligative measurement of organic nitrogen is essential. Quantification of organic nitrogen in biological and agricultural samples is usually accomplished using the time-consuming, wet-chemical Kjeldahl method. For oil shale waste waters, whose primary inorganic nitrogen constituent is ammonia, organic Kjeldahl nitrogen (OKN) is determined by first eliminating the endogenous ammonia by distillation and then digesting the sample in boiling H2SO4. The organic material is oxidized, and most forms of organically bound nitrogen are released as ammonium ion. After the addition of base, the ammonia is separated from the digestate by distillation and quantified by acidimetric titrimetry or colorimetry. The major failings of this method are the loss of volatile species such as aliphatic amines (during predistillation) and the inability to completely recover nitrogen from many nitrogenous heterocycles (during digestion). Within the last decade, a new approach has been developed for the quantification of total nitrogen (TN). The sample is first combusted, a

  8. Validation of Persian rapid estimate of adult literacy in dentistry.

    Science.gov (United States)

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Quazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administrated. The validity of the IREALD-99 was investigated by comparing the IREALD-99 across the categories of education and income levels. To further investigate, the correlation of IREALD-99 with TOFHLiD was computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's α and test-retest correlation. Cronbach's alpha for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed by education levels. IREALD-99 scores were positively related to TOFHLiD scores (rh = 0.72, P < 0.01). In addition, IREALD-99 showed positive correlation with self-rated oral health status (rh = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal
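The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from item-level scores. A generic sketch of the formula; the toy scores below are invented, not IREALD-99 data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists (one list per item, aligned
    across respondents):
        alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))
    using sample variances; alpha near 1 indicates consistent items."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

With item scores that rise and fall together across respondents, the total-score variance dominates the summed item variances and alpha approaches 1, which is the pattern behind the reported 0.98.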

  9. A rapid estimation of near field tsunami run-up

    Science.gov (United States)

    Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with peaks of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Such calculations will thus provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  10. A method for rapid estimation of internal dose to members of the public from inhalation of mixed fission products (based on the ICRP 1994 human respiratory tract model for radiological protection)

    International Nuclear Information System (INIS)

    Hou Jieli

    1999-01-01

    Based on the computational principle given in ICRP-30, the author had previously developed a method for rapidly estimating internal dose from an intake of mixed fission products after a nuclear accident. Following publication of the ICRP-66 human respiratory tract model in 1994, the method was reconstructed. The doses from a 1 Bq intake of mixed fission products (AMAD = 1 μm, decay rate coefficient n = 0.2∼2.0) during the period 1∼15 d after an accident were calculated. The doses based on the ICRP 1994 respiratory tract model are slightly lower than those based on the ICRP-30 model

  11. Rapid estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2014-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred on March 11, 2011. The preliminary magnitude of 7.9 that the Japan Meteorological Agency announced just after the earthquake, based on seismic waves, was considerably smaller than the actual value. On the other hand, using nine borehole strainmeters of the Geological Survey of Japan, AIST, we estimated a fault model with Mw 8.7 for the earthquake on the boundary between the Pacific and North American plates. This model can be estimated about seven minutes after the origin time, and five minutes after wave arrival. Several methods are now being proposed to determine the magnitude of a great earthquake more quickly and thereby reduce earthquake disasters, including tsunami (e.g., Ohta et al., 2012). Our simple method using static strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
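    The final step of any such fault-model inversion, converting seismic moment to moment magnitude, uses the standard Hanks-Kanamori relation. A sketch (the rigidity, rupture dimensions, and slip below are illustrative round numbers, not the paper's values):

    ```python
    import math

    def seismic_moment(rigidity_pa, area_m2, slip_m):
        # Seismic moment M0 = rigidity * rupture area * average slip (N m)
        return rigidity_pa * area_m2 * slip_m

    def moment_magnitude(m0_newton_meters):
        # Standard moment magnitude relation: Mw = (2/3) * (log10(M0) - 9.1)
        return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)
    ```

    For example, rigidity 3×10¹⁰ Pa, a 500 km × 200 km rupture, and 13 m of average slip give M0 ≈ 3.9×10²² N·m, i.e. Mw ≈ 9.0, roughly the Tohoku scale.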

  12. Methods and compositions for rapid thermal cycling

    Energy Technology Data Exchange (ETDEWEB)

    Beer, Neil Reginald; Benett, William J.; Frank, James M.; Deotte, Joshua R.; Spadaccini, Christopher

    2018-04-10

    The rapid thermal cycling of a material is targeted. A microfluidic heat exchanger with an internal porous medium is coupled to tanks containing cold fluid and hot fluid. Fluid flows alternately from the cold tank and the hot tank into the porous medium, cooling and heating samples contained in the microfluidic heat exchanger's sample wells. A valve may be coupled to the tanks and a pump, and switching the position of the valve may switch the source and direction of fluid flowing through the porous medium. A controller may control the switching of valve positions based on the temperature of the samples and determined temperature thresholds. A sample tray for containing samples to be thermally cycled may be used in conjunction with the thermal cycling system. A surface or internal electrical heater may aid in heating the samples, or may replace the necessity for the hot tank.
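    The controller described, switching the valve between the hot- and cold-tank positions when sample temperature crosses determined thresholds, amounts to bang-bang control. A minimal sketch; the threshold values, phase flag, and return labels are assumptions for illustration, not the patent's specification:

    ```python
    def next_valve_position(current_temp, heating, low_threshold, high_threshold):
        """Bang-bang valve switching for a two-tank thermal cycler.

        Returns 'hot' to route fluid from the hot tank through the porous
        medium, or 'cold' for the cold tank. `heating` is True while the
        cycle is in its heating phase.
        """
        if heating and current_temp >= high_threshold:
            return 'cold'   # high setpoint reached: switch to cold-tank flow
        if not heating and current_temp <= low_threshold:
            return 'hot'    # low setpoint reached: switch back to hot-tank flow
        return 'hot' if heating else 'cold'  # otherwise hold the current source
    ```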

  13. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.

    1997-01-01

    Humans are exposed to artificial radiation sources mainly in two ways: the first concerns occupationally exposed personnel (POE) and the second, persons requiring radiological treatment. A third, less common route is accidents. In all these situations it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research toward validating the fluorescence in situ hybridization (FISH) technique, which allows aberrations on the chromosomes to be analysed. (Author)
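    Classical dicentric dosimetry inverts a linear-quadratic calibration curve, Y = c + αD + βD², where Y is the dicentric yield per cell and D the absorbed dose. A sketch; the coefficient values are illustrative placeholders, not a real calibration curve:

    ```python
    import math

    def dose_from_dicentrics(yield_per_cell, c=0.001, alpha=0.03, beta=0.06):
        # Invert the linear-quadratic calibration Y = c + alpha*D + beta*D**2
        # by solving the quadratic for the positive root D.
        disc = alpha ** 2 + 4.0 * beta * (yield_per_cell - c)
        return (-alpha + math.sqrt(disc)) / (2.0 * beta)
    ```

    Real laboratories fit c, α, and β to in vitro irradiation data before using the curve on patient samples.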

  14. Rapid Vegetative Propagation Method for Carob

    OpenAIRE

    Hamide GUBBUK; Esma GUNES; Tomas AYALA-SILVA; Sezai ERCISLI

    2011-01-01

    Most fruit species are propagated by vegetative methods such as budding, grafting, cutting, suckering, layering etc. to avoid heterozygosity. Carob trees (Ceratonia siliqua L.) are of high economic value and are among the most difficult fruit species to propagate. In this study, the air-layering propagation method was investigated for the first time to compare wild and cultivated ('Sisam') carob types. In the experiment, one-year-old carob limbs were air-layered on coco peat medium by wrapping with...

  15. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    Science.gov (United States)

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.
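    As a minimal illustration of the consensus step above, the modal topology across subsampling replicates can be found by counting canonical Newick strings. Note this picks the single most frequent whole topology rather than building a clade-by-clade majority-rule consensus, and it is not the STEM estimator itself:

    ```python
    from collections import Counter

    def majority_topology(replicate_topologies):
        """Return the most frequent species-tree topology across replicates.

        Topologies are assumed to be canonicalized Newick strings, so that
        identical topologies compare equal; ties break by count order.
        """
        counts = Counter(replicate_topologies)
        topology, _ = counts.most_common(1)[0]
        return topology
    ```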

  16. Rapid estimation of fatigue entropy and toughness in metals

    Energy Technology Data Exchange (ETDEWEB)

    Liakat, M.; Khonsari, M.M., E-mail: khonsari@me.lsu.edu

    2014-10-15

    Highlights: • A correlation is developed to predict fatigue entropy and toughness of metals. • Predictions are made based on the thermal response of the materials. • The trend of hysteresis energy and temperature evolutions is discussed. • Predicted results are found to be in good agreement with those measured. - Abstract: An analytical model and an experimental procedure are presented for estimating the rate and accumulation of thermodynamic entropy and fatigue toughness in metals subjected to cyclic uniaxial tension–compression tests. Entropy and plastic strain energy generations are predicted based on the thermal response of a specimen at different levels of material damage. Fatigue tests are performed with cylindrical dogbone specimens made of tubular low-carbon steel 1018 and solid medium-carbon steel 1045, API 5L X52, and Al 6061. The evolution of the plastic strain energy generation, temperature, and thermal response throughout a fatigue process are presented and discussed. Predicted entropy accumulation and fatigue toughness obtained from the proposed method are found to be in good agreement with those obtained using a load cell and an extensometer over the range of experimental and environmental conditions considered.

  17. Rapid estimation of fatigue entropy and toughness in metals

    International Nuclear Information System (INIS)

    Liakat, M.; Khonsari, M.M.

    2014-01-01

    Highlights: • A correlation is developed to predict fatigue entropy and toughness of metals. • Predictions are made based on the thermal response of the materials. • The trend of hysteresis energy and temperature evolutions is discussed. • Predicted results are found to be in good agreement with those measured. - Abstract: An analytical model and an experimental procedure are presented for estimating the rate and accumulation of thermodynamic entropy and fatigue toughness in metals subjected to cyclic uniaxial tension–compression tests. Entropy and plastic strain energy generations are predicted based on the thermal response of a specimen at different levels of material damage. Fatigue tests are performed with cylindrical dogbone specimens made of tubular low-carbon steel 1018 and solid medium-carbon steel 1045, API 5L X52, and Al 6061. The evolution of the plastic strain energy generation, temperature, and thermal response throughout a fatigue process are presented and discussed. Predicted entropy accumulation and fatigue toughness obtained from the proposed method are found to be in good agreement with those obtained using a load cell and an extensometer over the range of experimental and environmental conditions considered

  18. Rapid estimation of organic nitrogen in oil shale wastewaters

    Energy Technology Data Exchange (ETDEWEB)

    Jones, B.M.; Harris, G.J.; Daughton, C.G.

    1984-03-01

    Many of the characteristics of oil shale process wastewaters (e.g., malodors, color, and resistance to biotreatment) are imparted by numerous nitrogen heterocycles and aromatic amines. For the frequent performance assessment of waste treatment processes designed to remove these nitrogenous organic compounds, a rapid and colligative measurement of organic nitrogen is essential.

  19. Rapid estimation of high-parameter auditory-filter shapes

    Science.gov (United States)

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086

  20. [A new method of fabricating photoelastic model by rapid prototyping].

    Science.gov (United States)

    Fan, Li; Huang, Qing-feng; Zhang, Fu-qiang; Xia, Yin-pei

    2011-10-01

    To explore a novel method of fabricating photoelastic models using the rapid prototyping technique. A mandible model was made by rapid prototyping with computerized three-dimensional reconstruction; the photoelastic model with teeth was then fabricated by traditional impression duplicating and mould casting. The photoelastic model of the mandible with teeth, fabricated indirectly by rapid prototyping, was very similar to the prototype in geometry and physical parameters. The model was of high optical sensitivity and met the experimental requirements. A photoelastic model of the mandible with teeth indirectly fabricated by rapid prototyping meets the photoelastic experimental requirements well.

  1. Study on tube rupture strength evaluation method for rapid overheating

    International Nuclear Information System (INIS)

    Komine, Ryuji; Wada, Yusaku

    1998-08-01

    A sodium-water reaction resulting from a single tube break in a steam generator might rapidly overheat neighboring tubes under internal pressure loading. If the tube wall temperature becomes too high, it must be shown that the tube stress does not exceed the material strength limit, to prevent propagation of the tube rupture. In the present study this phenomenon was treated as fracture of a cylindrical tube undergoing large deformation due to overheating, and the evaluation method was investigated through both experimental and analytical approaches. The results obtained are as follows. (1) For the nominal stress estimation, it was clarified through the experimental data and detailed FEM elasto-plastic large-deformation analysis that the formula used in conventional designs can be applied. (2) Within the overheating temperature limits of the tubes, the creep effect is dominant even when the loading time is very short. The strain rate specified in the JIS elevated-temperature tensile test method for steels and heat-resisting alloys is therefore too slow, and almost all of the total strain is creep strain; consequently, the time-dependent effect cannot be evaluated under the JIS strain rate condition. (3) Creep tests at durations shorter than a few minutes and tensile tests at strain rates higher than the JIS 10%/min were carried out for 2 1/4Cr-1Mo(NT) steel, and standard values for tube rupture strength evaluation were formulated. (4) The above evaluation method, based on both the stress estimation and the application of the strength standard values, was justified using tube burst test data under internal pressure. (5) The strength standard values for Type 321 stainless steel were formulated in accordance with the procedure applied to 2 1/4Cr-1Mo(NT) steel. (author)
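    For the nominal stress in point (1), a conventional design formula for an internally pressurized thin-walled tube is the mean-diameter hoop-stress expression. The report's exact formula is not reproduced in this record, so the common convention below is an assumption:

    ```python
    def hoop_stress(pressure_mpa, outer_diameter_mm, wall_thickness_mm):
        # Thin-wall (mean-diameter) hoop stress for an internally
        # pressurized tube: sigma = P * (D - t) / (2 * t), in MPa.
        mean_diameter = outer_diameter_mm - wall_thickness_mm
        return pressure_mpa * mean_diameter / (2.0 * wall_thickness_mm)
    ```

    The rupture check then compares this nominal stress, at the overheated wall temperature, against the short-time creep strength values formulated in points (3)-(5).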

  2. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects from a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
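    The NHPP assumption above is commonly instantiated with the Goel-Okumoto mean value function. The sketch below shows only the model equations, not the paper's Bayesian parameter estimation; the parameters a (expected total defects) and b (detection rate) are illustrative:

    ```python
    import math

    def expected_defects(t, a, b):
        # Goel-Okumoto NHPP mean value function: m(t) = a * (1 - exp(-b*t))
        return a * (1.0 - math.exp(-b * t))

    def reliability(x, t, a, b):
        # Probability of no failure in (t, t+x] after testing up to time t:
        # R(x|t) = exp(-(m(t+x) - m(t)))
        return math.exp(-(expected_defects(t + x, a, b) - expected_defects(t, a, b)))
    ```

    As testing time t grows, m(t) saturates at a, the remaining-defect estimate a - m(t) shrinks, and R(x|t) approaches 1.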

  3. Method-related estimates of sperm vitality.

    Science.gov (United States)

    Cooper, Trevor G; Hellenkemper, Barbara

    2009-01-01

    Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.

  4. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
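    The arithmetic described, a weighed-truckload density index applied to individual log volumes, can be sketched in a few lines. The numbers in the usage are invented; the paper's species-specific indices are not reproduced here:

    ```python
    def density_index(truckload_weight_lb, truckload_volume_cuft):
        # Local density index (pounds per cubic foot) from a weighed
        # and scaled truckload of a given species
        return truckload_weight_lb / truckload_volume_cuft

    def estimated_log_weight(log_volume_cuft, index_lb_per_cuft):
        # Estimated log weight = scaled log volume * local density index
        return log_volume_cuft * index_lb_per_cuft
    ```

    For instance, an 80,000 lb truckload scaling 2,000 cubic feet gives an index of 40 lb/ft³, so a 50 ft³ log is estimated at 2,000 lb.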

  5. Residual stresses estimation in tubes after rapid heating of surface

    International Nuclear Information System (INIS)

    Serikov, S.V.

    1992-01-01

    Results are presented on the estimation of residual stresses in tubes of steel types ShKh15, EhP836 and 12KIMF after heating by burning a pyrotechnic substance inside the tubes. The external tube surface was heated up to 400-450 deg C under such treatment. The axial stress distribution over the tube wall thickness was determined for the initial state, after routine heat treatment, and after the pyrotechnic heating. Inner surface heating was shown to substantially decrease the axial stresses in the tubes

  6. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  7. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method was given, and the linear property of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. The Hilbert spectrum estimation algorithm was then discussed in detail, and simulation results were given at last. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)

  8. Methods for risk estimation in nuclear energy

    Energy Technology Data Exchange (ETDEWEB)

    Gauvenet, A [CEA, 75 - Paris (France)

    1979-01-01

    The author presents methods for estimating the different risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents, and long-term risks. These methods have reached a high level of refinement, and their application to other industrial or human problems is currently under way, especially in English-speaking countries.

  9. An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat

    Science.gov (United States)

    Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.

    2016-01-01

    Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10% of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20% of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial covers and vegetation height. The median time required to complete perimeter-based methods was less than 7% of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.
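    Agreement statistics like those used in studies of rapid versus intensive methods (mean percentage difference and 95% limits of agreement, as in a Bland-Altman analysis) are simple to compute. A sketch, assuming percentage differences are taken relative to the plot-based reference; the data below are invented:

    ```python
    import statistics

    def bland_altman_percent(reference, rapid):
        """Mean percentage difference (MPD) and 95% limits of agreement (LOA).

        Each percentage difference is taken relative to the reference
        (plot-based) estimate; the exact convention in any given paper
        may differ.
        """
        pct = [100.0 * (ref - quick) / ref for ref, quick in zip(reference, rapid)]
        mpd = statistics.mean(pct)
        sd = statistics.stdev(pct)          # sample standard deviation
        return mpd, (mpd - 1.96 * sd, mpd + 1.96 * sd)
    ```

    An MPD near zero with narrow limits of agreement indicates the rapid method is both unbiased and precise relative to the intensive one.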

  10. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development ... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation ... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed

  11. Comparison of methods for estimating premorbid intelligence

    OpenAIRE

    Bright, Peter; van der Linde, Ian

    2018-01-01

    To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...

  12. Methods for Rapid Screening in Woody Plant Herbicide Development

    Directory of Open Access Journals (Sweden)

    William Stanley

    2014-07-01

    Methods for woody plant herbicide screening were assayed with the goal of reducing the resources and time required to conduct preliminary screenings for new products. Rapid screening methods tested included greenhouse seedling screening, germinal screening, and seed screening. Triclopyr and eight experimental herbicides from Dow AgroSciences (DAS 313, 402, 534, 548, 602, 729, 779, and 896) were tested on black locust, loblolly pine, red maple, sweetgum, and water oak. Screening results detected differences in herbicide and species in all experiments in much less time (days to weeks) than traditional field screenings and consumed significantly fewer resources (<500 mg acid equivalent per herbicide per screening). Using regression analysis, various rapid screening methods were linked into a system capable of rapidly and inexpensively assessing herbicide efficacy and spectrum of activity. Implementation of such a system could streamline early-stage herbicide development leading to field trials, potentially freeing resources for use in the development of beneficial new herbicide products.

  13. Evaluation of three paediatric weight estimation methods in Singapore.

    Science.gov (United States)

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: the Broselow-Luten (BL) tape, and the Advanced Paediatric Life Support (APLS; estimated weight (kg) = 2 × (age + 4)) and Luscombe (estimated weight (kg) = 3 × age + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while the patient's rounded-down age (in years) was used to derive estimated weights using the APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using the Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
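    The two age-based formulae quoted above are straightforward to implement; the percentage-difference convention here (positive when the formula underestimates) follows the study's reported signs:

    ```python
    def apls_weight_kg(age_years):
        # APLS formula as quoted: weight (kg) = 2 * (age + 4)
        return 2 * (age_years + 4)

    def luscombe_weight_kg(age_years):
        # Luscombe formula as quoted: weight (kg) = 3 * age + 7
        return 3 * age_years + 7

    def percentage_difference(true_kg, estimated_kg):
        # Positive value means the method underestimates the true weight
        return 100.0 * (true_kg - estimated_kg) / true_kg
    ```

    For a 5-year-old, APLS gives 18 kg and Luscombe 22 kg, which illustrates why the two formulae err in opposite directions against typical weights.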

  14. Rapid spectrographic method for determining microcomponents in solutions

    International Nuclear Information System (INIS)

    Karpenko, L.I.; Fadeeva, L.A.; Gordeeva, A.N.; Ermakova, N.V.

    1984-01-01

    A rapid spectrographic method for determining microcomponents (Cd, V, Mo, Ni, rare earths and other elements) in industrial and natural solutions has been developed. The analyses were conducted in an argon medium and in air. Calibration charts for determining individual rare earths in solutions are presented. The accuracy of analysis (Sr) and the detection limits were evaluated: the detection limit was 10⁻³-10⁻⁴ mg/ml, and that for rare earths 1×10⁻² mg/ml. The developed method enables rapid analysis of solutions (sewage and industrial waters, wine products) for 20 elements, including 6 rare earths, using standard equipment

  15. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
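    Two of the three semivariogram models considered above (spherical and exponential; the truncated fractal model is omitted) have simple closed forms relating lag distance, variance (sill), and autocorrelation range. A sketch, with no nugget effect assumed and arbitrary parameter values in the usage:

    ```python
    import math

    def spherical_variogram(h, variance, range_):
        # Spherical model: rises as 1.5*(h/a) - 0.5*(h/a)**3, then
        # flattens at the sill once the lag h reaches the range a
        if h >= range_:
            return variance
        r = h / range_
        return variance * (1.5 * r - 0.5 * r ** 3)

    def exponential_variogram(h, variance, range_):
        # Exponential model: approaches the sill asymptotically
        # (practical range is about 3 times the range parameter)
        return variance * (1.0 - math.exp(-h / range_))
    ```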

  16. Method to Locate Contaminant Source and Estimate Emission Strength

    Directory of Open Access Journals (Sweden)

    Qu Hongquan

    2013-01-01

    People are greatly concerned about the air quality in confined spaces such as spacecraft, aircraft, and submarines. As residence time in such confined spaces increases, contaminant pollution becomes a major factor endangering life. It is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able both to locate the position and to estimate the emission strength of the contaminant source. In this paper, an identification method was developed to achieve these two aims. The method was developed based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm was derived to locate the source position, and a Kalman filter was used to further estimate the contaminant emission strength. The method can track and predict the source strength dynamically, and it can also predict the distribution of contaminant concentration. Simulation results demonstrate the effectiveness of the method.
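    The Kalman-filter strength estimation can be illustrated in its simplest scalar form. The sketch below tracks a nearly constant emission strength from repeated noisy observations; it stands in for the paper's discrete concentration stochastic model, whose observation equations are not reproduced in this record:

    ```python
    def estimate_emission_strength(measurements, process_var=1e-4, meas_var=0.04):
        """Scalar Kalman filter tracking a (nearly constant) source strength.

        The state is the unknown emission strength; each measurement is
        treated as a direct noisy observation of it.
        """
        x, p = 0.0, 1.0                      # initial estimate and variance
        for z in measurements:
            p += process_var                 # predict: random-walk state model
            k = p / (p + meas_var)           # Kalman gain
            x += k * (z - x)                 # update with the innovation
            p *= (1.0 - k)                   # updated estimate variance
        return x
    ```

    Fed a stream of consistent readings, the estimate converges to the true strength while the gain settles to a small steady-state value.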

  17. Efficient Methods of Estimating Switchgrass Biomass Supplies

    Science.gov (United States)

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  18. Coalescent methods for estimating phylogenetic trees.

    Science.gov (United States)

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

  19. A Rapid Aeroelasticity Optimization Method Based on the Stiffness characteristics

    OpenAIRE

    Yuan, Zhe; Huo, Shihui; Ren, Jianting

    2018-01-01

    A rapid aeroelasticity optimization method based on stiffness characteristics was proposed in the present study. It addresses the large time expense of static aeroelasticity analysis based on the traditional time-domain aeroelasticity method. Elastic axis location and torsional stiffness are discussed first. Both torsional stiffness and the distance between the stiffness center and the aerodynamic center have a direct impact on divergent velocity. The divergent velocity can be adjusted by changing the cor...

  20. Fusion rule estimation using vector space methods

    International Nuclear Information System (INIS)

    Rao, N.S.V.

    1997-01-01

    In a system of N sensors, sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ R according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) and Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : R^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitutes a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f to f* is feasible. We estimate the sample size sufficient to ensure that f provides a close approximation to f* with high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f can be easily computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
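
    The least-squares step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the sensor model (additive Gaussian noise), the choice of F as the span of the N coordinate projections, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor model: N = 3 sensors observe input X in [0, 1] through
# independent additive noise, so Y[i, j] is the output of sensor j for X_i.
N, n = 3, 500
X = rng.uniform(0.0, 1.0, n)
Y = X[:, None] + rng.normal(0.0, 0.1, (n, N))

# Take F to be the vector space spanned by the N coordinate projections,
# so f(y) = w . y and the empirical-error minimizer is a least-squares solve.
w, *_ = np.linalg.lstsq(Y, X, rcond=None)
f_hat = Y @ w                    # fitted fusion rule evaluated on the sample
mse = float(np.mean((f_hat - X) ** 2))
```

    With three equally noisy sensors, the fitted weights come out near 1/3 each, and the fused estimate has roughly one third of a single sensor's error variance, illustrating why the least-squares fit over a vector space is both cheap and effective.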

  1. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, and the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  2. Automatic performance estimation of conceptual temperature control system design for rapid development of real system

    International Nuclear Information System (INIS)

    Jang, Yu Jin

    2013-01-01

    This paper presents an automatic performance estimation scheme for a conceptual temperature control system with a multi-heater configuration, applied prior to constructing the physical system to achieve rapid validation of the conceptual design. An appropriate low-order discrete-time model, to be used in the controller design, is constructed after determining several basic factors, including the geometric shape of the controlled object and heaters, material properties, and heater arrangement. The proposed temperature controller, which adopts the multivariable GPC (generalized predictive control) scheme with scale factors, is then constructed automatically based on the above model. The performance of the conceptual temperature control system is evaluated using an FEM (finite element method) simulation combined with the controller.

  3. Automatic performance estimation of conceptual temperature control system design for rapid development of real system

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Yu Jin [Dongguk University, GyeongJu (Korea, Republic of)

    2013-07-15

    This paper presents an automatic performance estimation scheme for a conceptual temperature control system with a multi-heater configuration, applied prior to constructing the physical system to achieve rapid validation of the conceptual design. An appropriate low-order discrete-time model, to be used in the controller design, is constructed after determining several basic factors, including the geometric shape of the controlled object and heaters, material properties, and heater arrangement. The proposed temperature controller, which adopts the multivariable GPC (generalized predictive control) scheme with scale factors, is then constructed automatically based on the above model. The performance of the conceptual temperature control system is evaluated using an FEM (finite element method) simulation combined with the controller.

  4. Internal Dosimetry Intake Estimation using Bayesian Methods

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.; Martz, H.F.

    1999-01-01

    New methods for the inverse problem of internal dosimetry are proposed, based on evaluating expectations of the Bayesian posterior probability distribution of intake amounts given bioassay measurements. These expectation integrals are normally of very high dimension and hence impractical to use. However, the expectations can be algebraically transformed into a sum of terms representing different numbers of intakes, with a Poisson distribution of the number of intakes. This sum often converges rapidly when the average number of intakes for a population is small. A simplified algorithm using data unfolding is described (UF code). (author)
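
    A toy version of the posterior expectation described above can be evaluated directly when only one intake is involved. In this hypothetical sketch (not the UF code), a bioassay result m equals the intake amount q times a known retention fraction r, observed with lognormal measurement error; with a flat prior on q, the posterior mean E[q | m] is a one-dimensional integral evaluated on a grid. The values of r, sigma, and m are illustrative assumptions.

```python
import numpy as np

r = 0.05            # assumed retention fraction at the measurement time
sigma = 0.3         # lognormal measurement uncertainty (GSD = exp(sigma))
m = 2.0             # measured bioassay result (arbitrary activity units)

# Grid over intake amounts; a flat prior means the posterior is proportional
# to the lognormal likelihood of observing m given intake q.
q = np.linspace(1e-3, 200.0, 20000)
log_like = -0.5 * ((np.log(m) - np.log(r * q)) / sigma) ** 2
post = np.exp(log_like)
post /= post.sum()                       # normalize on the grid
q_mean = float((q * post).sum())         # posterior mean intake
```

    The high-dimensional multi-intake case the abstract addresses replaces this single integral with a Poisson-weighted sum of such terms, one per possible number of intakes.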

  5. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    It is not known how accurate any of the numerous previous methods for predicting pile capacity are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's extrapolation method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the quadratic hyperbolic method proposed by Lastiasih et al. (2012). All of the above methods were found to be sufficiently reliable when applied to data from pile loading tests loaded to failure. However, when applied to data from pile loading tests that did not reach failure, the methods that yield lower values of the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to estimate the Qult of a pile foundation from soil data alone.
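
    Of the extrapolation methods listed, Chin's (1970) is compact enough to sketch: it assumes a hyperbolic load-settlement curve Q = s / (C1*s + C2), so settlement divided by load is linear in settlement and the inverse slope of that line estimates the ultimate capacity. The load-test readings below are hypothetical, not taken from the paper's 130 data sets.

```python
import numpy as np

# Hypothetical static load test readings: settlement s (mm) and load Q (kN).
s = np.array([2.0, 4.0, 6.0, 9.0, 13.0, 18.0])
Q = np.array([400.0, 600.0, 720.0, 831.0, 918.0, 982.0])

# Chin's construction: s/Q plotted against s is (ideally) a straight line
# with slope C1, and the ultimate capacity is Qult = 1 / C1.
C1, C2 = np.polyfit(s, s / Q, 1)
Q_ult = 1.0 / C1          # Chin's extrapolated ultimate capacity (kN)
```

    Because the extrapolation is driven entirely by the late, near-linear part of the s/Q line, Chin's estimate typically exceeds capacities interpreted directly from tests stopped before failure, which is consistent with the paper's note about correction factors.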

  6. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
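
    The classical Hill estimator that the proposed bootstrap builds on is brief enough to state in code. The Pareto simulation below is an illustrative check of the plain estimator, not the authors' Monte Carlo method or simulation study; the sample size, tail fraction k, and seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a heavy-tailed sample: Pareto with tail index alpha on [1, inf).
alpha = 2.0
x = rng.pareto(alpha, 100_000) + 1.0

# Hill estimator: over the k largest order statistics, the mean log-excess
# above the (k+1)-th largest value estimates 1/alpha.
k = 2000
xs = np.sort(x)[::-1]                       # descending order statistics
hill = float(np.mean(np.log(xs[:k] / xs[k])))
alpha_hat = 1.0 / hill
```

    For an exact Pareto sample the estimate concentrates tightly around the true index; the practical difficulty the bootstrap version targets is choosing k for data that are only asymptotically Pareto in the tail.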

  7. Methods to estimate the genetic risk

    International Nuclear Information System (INIS)

    Ehling, U.H.

    1989-01-01

    The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events. The probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only can Mendelian mutations be quantified, but also other types of genetic disorders. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, in estimating the genetic component of congenital anomalies, anomalies expressed later, and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected in 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the reactor accident at Chernobyl in the vicinity of Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation, and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens, will be emphasized. (author)

  8. Rapid Enzymatic Method for Pectin Methyl Esters Determination

    Directory of Open Access Journals (Sweden)

    Lucyna Łękawska-Andrinopoulou

    2013-01-01

    Pectin is a natural polysaccharide used in the food and pharma industries. The pectin degree of methylation is an important parameter having significant influence on pectin applications. A rapid, fully automated, kinetic flow method for determination of pectin methyl esters has been developed. The method is based on a lab-made analyzer using the reverse flow-injection/stopped-flow principle. Methanol is released from pectin by pectin methylesterase in the first mixing coil. Enzyme working solution is injected further downstream and is mixed with the pectin/pectin methylesterase stream in the second mixing coil. Methanol is oxidized by alcohol oxidase, releasing formaldehyde and hydrogen peroxide. This reaction is coupled to a horseradish peroxidase catalyzed reaction, which gives the colored product 4-N-(p-benzoquinoneimine)-antipyrine. The reaction rate is proportional to methanol concentration and is followed using an Ocean Optics USB 2000+ spectrophotometer. The analyzer is fully regulated by a lab-written LabVIEW program. The detection limit was 1.47 mM with an analysis rate of 7 samples/h. A paired t-test with results from the manual method showed that the automated method results are equivalent to the manual method at the 95% confidence interval. The developed method is rapid and sustainable, and it is the first application of flow analysis in pectin analysis.

  9. A rapid, simple method for obtaining radiochemically pure hepatic heme

    International Nuclear Information System (INIS)

    Bonkowski, H.L.; Bement, W.J.; Erny, R.

    1978-01-01

    Radioactively-labelled heme has usually been isolated from liver, to which unlabelled carrier has been added, by long, laborious techniques involving organic solvent extraction followed by crystallization. A simpler, rapid method is devised for obtaining radiochemically pure heme synthesized in vivo in rat liver from delta-amino[4-14C]levulinate. This method, in which the heme is extracted into ethyl acetate/glacial acetic acid and in which porphyrins are removed from the heme-containing organic phase with HCl washes, does not require addition of carrier heme. The new method gives better heme recoveries than, and heme specific activities identical to, those obtained using the crystallization method. In this new method heme must be synthesized from delta-amino[4-14C]levulinate; it is not satisfactory to use [2-14C]glycine substrate because non-heme counts are isolated in the heme fraction. (Auth.)

  10. Rapid methods for jugular bleeding of dogs requiring one technician.

    Science.gov (United States)

    Frisk, C S; Richardson, M R

    1979-06-01

    Two methods were used to collect blood from the jugular vein of dogs. In both techniques, only one technician was required. A rope with a slip knot was placed around the base of the neck to assist in restraint and act as a tourniquet for the vein. The technician used one hand to restrain the dog by the muzzle and position the head. The other hand was used for collecting the sample. One of the methods could be accomplished with the dog in its cage. The bleeding techniques were rapid, requiring approximately 1 minute per dog.

  11. Rapid surface enhanced Raman scattering detection method for chloramphenicol residues

    Science.gov (United States)

    Ji, Wei; Yao, Weirong

    2015-06-01

    Chloramphenicol (CAP) is a widely used amide alcohol antibiotic that has been banned from use in food-producing animals in many countries. In this study, surface enhanced Raman scattering (SERS) coupled with gold colloidal nanoparticles was used for the rapid analysis of CAP. Density functional theory (DFT) calculations were conducted with Gaussian 03 at the B3LYP level using the 3-21G(d) and 6-31G(d) basis sets to analyze the assignment of vibrations. The theoretical Raman spectrum of CAP was in close agreement with the experimental spectrum. Both exhibited three strong peaks characteristic of CAP at 1104 cm-1, 1344 cm-1, and 1596 cm-1, which were used for rapid qualitative analysis of CAP residues in food samples. The use of SERS for the measurement of CAP was explored by comparing different solvents, gold colloidal nanoparticle concentrations, and adsorption times. The detection limit of the method was determined to be 0.1 μg/mL under optimum conditions. The Raman peak at 1344 cm-1 was used as the index for quantitative analysis of CAP in food samples, with a linear correlation of R2 = 0.9802. Quantitative analysis of CAP residues in foods revealed that the SERS technique with gold colloidal nanoparticles was sensitive, stable, and linear, and is suited for rapid analysis of CAP residues in a variety of food samples.

  12. A novel method for rapid in vitro radiobioassay

    Science.gov (United States)

    Crawford, Evan Bogert

    Rapid and accurate analysis of internal human exposure to radionuclides is essential to the effective triage and treatment of citizens who have possibly been exposed to radioactive materials in the environment. The two most likely scenarios in which a large number of citizens would be exposed are the detonation of a radiation dispersal device (RDD, "dirty bomb") or the accidental release of an isotope from an industrial source such as a radioisotope thermoelectric generator (RTG). In the event of the release and dispersion of radioactive materials into the environment in a large city, the entire population of the city -- including all commuting workers and tourists -- would have to be rapidly tested, both to satisfy the psychological needs of the citizens who were exposed to the mental trauma of a possible radiation dose, and to satisfy the immediate medical needs of those who received the highest doses and greatest levels of internal contamination -- those who would best benefit from rapid, intensive medical care. In this research a prototype rapid screening method to screen urine samples for the presence of up to five isotopes, both individually and in a mixture, has been developed. The isotopes used to develop this method are Co-60, Sr-90, Cs-137, Pu-238, and Am-241. This method avoids time-intensive chemical separations via the preparation and counting of a single sample on multiple detectors, and analyzing the spectra for isotope-specific markers. A rapid liquid-liquid separation using an organic extractive scintillator can be used to help quantify the activity of the alpha-emitting isotopes.
The method provides quantifiable results in less than five minutes for the activity of beta/gamma-emitting isotopes when present in the sample at the intervention level as defined by the Centers for Disease Control and Prevention (CDC), and quantifiable results for the activity levels of alpha-emitting isotopes present at their respective intervention levels in approximately 30

  13. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

    The Yang Chizhong filtering and inferential measurement method is a new method used for variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formula, convenient in application, rapid in calculation, accurate in results, less expensive, and of high economic benefit. The procedure and experience in the application of this method, and a preliminary evaluation of its results, are mainly described.

  14. Rapid Estimation of Tocopherol Content in Linseed and Sunflower Oils-Reactivity and Assay.

    Science.gov (United States)

    Prevc, Tjaša; Levart, Alenka; Cigić, Irena Kralj; Salobir, Janez; Ulrih, Nataša Poklar; Cigić, Blaž

    2015-08-13

    The reactivity of tocopherols with 2,2-diphenyl-1-picrylhydrazyl (DPPH) was studied in model systems in order to establish a method for quantifying vitamin E in plant oils. The method was optimized with respect to solvent composition of the assay medium, which has a large influence on the course of reaction of tocopherols with DPPH. The rate of reaction of α-tocopherol with DPPH is higher than that of γ-tocopherol in both protic and aprotic solvents. In ethyl acetate, routinely applied for the analysis of antioxidant potential (AOP) of plant oils, reactions of tocopherols with DPPH are slower and concentration of tocopherols in the assay has a large influence on their molar reactivity. In 2-propanol, however, two electrons are exchanged for both α- and γ-tocopherols, independent of their concentration. 2-propanol is not toxic and is fully compatible with polypropylene labware. The chromatographically determined content of tocopherols and their molar reactivity in the DPPH assay reveal that only tocopherols contribute to the AOP of sunflower oil, whereas the contribution of tocopherols to the AOP of linseed oil is 75%. The DPPH assay in 2-propanol can be applied for rapid and cheap estimation of vitamin E content in plant oils where tocopherols are major antioxidants.
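
    The two-electron stoichiometry reported for 2-propanol implies a simple back-calculation from the assay: moles of DPPH radical quenched, divided by two, approximate the moles of tocopherol. The readings and dilution factor below are hypothetical illustrations, not measurements from the paper.

```python
# Hypothetical assay readings: DPPH concentration before the oil extract is
# added and at the reaction endpoint, plus the dilution of the oil in the assay.
dpph_initial_mM = 0.100
dpph_final_mM = 0.036
dilution = 50

# Two electrons exchanged per tocopherol in 2-propanol, so moles of DPPH
# reduced / 2 estimates moles of tocopherol; scale back up by the dilution.
dpph_reduced_mM = dpph_initial_mM - dpph_final_mM
tocopherol_in_oil_mM = (dpph_reduced_mM / 2.0) * dilution
```

    This back-calculation is only valid for oils like sunflower, where tocopherols account for essentially all of the AOP; for linseed oil the paper reports a 75% tocopherol contribution, so the same arithmetic would overestimate vitamin E.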

  15. Rapid Estimation of Tocopherol Content in Linseed and Sunflower Oils-Reactivity and Assay

    Directory of Open Access Journals (Sweden)

    Tjaša Prevc

    2015-08-01

    The reactivity of tocopherols with 2,2-diphenyl-1-picrylhydrazyl (DPPH) was studied in model systems in order to establish a method for quantifying vitamin E in plant oils. The method was optimized with respect to solvent composition of the assay medium, which has a large influence on the course of reaction of tocopherols with DPPH. The rate of reaction of α-tocopherol with DPPH is higher than that of γ-tocopherol in both protic and aprotic solvents. In ethyl acetate, routinely applied for the analysis of antioxidant potential (AOP) of plant oils, reactions of tocopherols with DPPH are slower and concentration of tocopherols in the assay has a large influence on their molar reactivity. In 2-propanol, however, two electrons are exchanged for both α- and γ-tocopherols, independent of their concentration. 2-propanol is not toxic and is fully compatible with polypropylene labware. The chromatographically determined content of tocopherols and their molar reactivity in the DPPH assay reveal that only tocopherols contribute to the AOP of sunflower oil, whereas the contribution of tocopherols to the AOP of linseed oil is 75%. The DPPH assay in 2-propanol can be applied for rapid and cheap estimation of vitamin E content in plant oils where tocopherols are major antioxidants.

  16. Method for producing rapid pH changes

    Science.gov (United States)

    Clark, J.H.; Campillo, A.J.; Shapiro, S.L.; Winn, K.R.

    A method of initiating a rapid pH change in a solution comprises irradiating the solution with an intense flux of electromagnetic radiation at a frequency that produces a substantial pK change in a compound in solution. To optimize the resulting pH change, the compound being irradiated in solution should have an excited state lifetime substantially longer than the time required to establish an excited state acid-base equilibrium in the solution. Desired pH changes can be accomplished in nanoseconds or less by means of picosecond pulses of laser radiation.

  17. Method for rapidly determining a pulp kappa number using spectrophotometry

    Science.gov (United States)

    Chai, Xin-Sheng; Zhu, Jun Yong

    2002-01-01

    A system and method for rapidly determining the pulp kappa number through direct measurement of the potassium permanganate concentration in a pulp-permanganate solution using spectrophotometry. Specifically, the present invention uses strong acidification to carry out the pulp-permanganate oxidation reaction in the pulp-permanganate solution to prevent the precipitation of manganese dioxide (MnO.sub.2). Consequently, spectral interference from the precipitated MnO.sub.2 is eliminated and the oxidation reaction becomes dominant. The spectral intensity of the oxidation reaction is then analyzed to determine the pulp kappa number.

  18. Radiometric method for the rapid detection of Leptospira organisms

    International Nuclear Information System (INIS)

    Manca, N.; Verardi, R.; Colombrita, D.; Ravizzola, G.; Savoldi, E.; Turano, A.

    1986-01-01

    A rapid and sensitive radiometric method for detection of Leptospira interrogans serovar pomona and Leptospira interrogans serovar copenhageni is described. Stuart's medium and Middlebrook TB (12A) medium supplemented with bovine serum albumin, catalase, and casein hydrolysate and labeled with 14C-fatty acids were used. The radioactivity was measured in a BACTEC 460. With this system, Leptospira organisms were detected in human blood in 2 to 5 days, a notably shorter time period than that required for the majority of detection techniques.

  19. Radiometric method for the rapid detection of Leptospira organisms

    Energy Technology Data Exchange (ETDEWEB)

    Manca, N.; Verardi, R.; Colombrita, D.; Ravizzola, G.; Savoldi, E.; Turano, A.

    1986-02-01

    A rapid and sensitive radiometric method for detection of Leptospira interrogans serovar pomona and Leptospira interrogans serovar copenhageni is described. Stuart's medium and Middlebrook TB (12A) medium supplemented with bovine serum albumin, catalase, and casein hydrolysate and labeled with 14C-fatty acids were used. The radioactivity was measured in a BACTEC 460. With this system, Leptospira organisms were detected in human blood in 2 to 5 days, a notably shorter time period than that required for the majority of detection techniques.

  20. Bayesian methods to estimate urban growth potential

    Science.gov (United States)

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.

    2017-01-01

    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions: the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  1. Examination of an indicative tool for rapidly estimating viable organism abundance in ballast water

    Science.gov (United States)

    Vanden Byllaardt, Julie; Adams, Jennifer K.; Casas-Monroy, Oscar; Bailey, Sarah A.

    2018-03-01

    Regulatory discharge standards stipulating a maximum allowable number of viable organisms in ballast water have led to a need for rapid, easy and accurate compliance assessment tools and protocols. Some potential tools presume that organisms present in ballast water samples display the same characteristics of life as the native community (e.g. rates of fluorescence). This presumption may not prove true, particularly when ships' ballast tanks present a harsh environment and long transit times, negatively impacting organism health. Here, we test the accuracy of a handheld pulse amplitude modulated (PAM) fluorometer, the Hach BW680, for detecting photosynthetic protists at concentrations above or below the discharge standard (< 10 cells·ml-1) in comparison to microscopic counts using fluorescein diacetate as a viability probe. Testing was conducted on serial dilutions of freshwater harbour samples in the lab and in situ untreated ballast water samples originating from marine, freshwater and brackish sources, utilizing three preprocessing techniques to target organisms in the size range of ≥ 10 and < 50 μm. The BW680 numeric estimates were in agreement with microscopic counts when analyzing freshly collected harbour water at all but the lowest concentrations (< 38 cells·ml-1). Chi-square tests determined that error is not independent of preprocessing methods: using the filtrate method or unfiltered water, in addition to refining the conversion factor of raw fluorescence to cell size, can decrease the grey area where exceedance of the discharge standard cannot be measured with certainty (at least for the studied populations). When examining in situ ballast water, the BW680 detected significantly fewer viable organisms than microscopy, possibly due to factors such as organism size or ballast water age. Assuming both the BW680 and microscopy with FDA stain were measuring fluorescence and enzymatic activity/membrane integrity correctly, the observed discrepancy

  2. Method of estimation of scanning system quality

    Science.gov (United States)

    Larkin, Eugene; Kotov, Vladislav; Kotova, Natalya; Privalov, Alexander

    2018-04-01

    Estimation of scanner parameters is an important part of developing an electronic document management system. This paper suggests considering the scanner as a system that contains two main channels: a photoelectric conversion channel and a channel for measuring the spatial coordinates of objects. Although both channels consist of the same elements, the testing of their parameters should be executed separately. A special structure of the two-dimensional reference signal is offered for this purpose. In this structure, the fields for testing the various parameters of the scanner are spatially separated. Characteristics of the scanner are associated with the loss of information when a document is digitized. Methods to test grayscale transmitting ability, resolution, and aberration level are offered.

  3. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

    Background: Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results: Three sets of real data have been used as input for the example applications. Set I includes the structures from 5S rRNAs. Set II includes the secondary structures from RNase P and RNase MRP. Set III includes the structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data by using our method. Moreover, our program runs faster as compared to some existing ones. Conclusion: The famous Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures and makes our method an alternative to analyze the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
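
    The flavor of the Lempel-Ziv approach can be shown on dot-bracket structure strings. This is a simplified sketch, not the paper's algorithm: it uses an LZ78-style incremental parse (rather than the LZ76 complexity typically used in such work), and the example structures and the distance normalization are illustrative assumptions.

```python
def lz_complexity(s: str) -> int:
    """Count distinct phrases in an LZ78-style incremental parse of s."""
    phrases, cur = set(), ""
    for ch in s:
        cur += ch
        if cur not in phrases:   # extend until an unseen phrase appears
            phrases.add(cur)
            cur = ""
    return len(phrases) + (1 if cur else 0)   # count any unfinished phrase

def lz_distance(a: str, b: str) -> float:
    """Symmetric dissimilarity from cross-concatenation phrase counts."""
    ca, cb = lz_complexity(a), lz_complexity(b)
    cab, cba = lz_complexity(a + b), lz_complexity(b + a)
    # Few new phrases in the concatenations means much shared pattern structure.
    return (cab - ca + cba - cb) / (cab + cba)

# Dot-bracket encodings of two hypothetical RNA secondary structures.
s1 = "((((....))))..((...))"
s2 = "..(((((......)))))..."
d = lz_distance(s1, s2)
```

    A pairwise matrix of such distances is the kind of input from which the paper derives its phylogenetic trees; on realistic structure lengths the phrase counts become far more informative than on toy strings like these.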

  4. Developing rapid methods for analyzing upland riparian functions and values.

    Science.gov (United States)

    Hruby, Thomas

    2009-06-01

    Regulators protecting riparian areas need to understand the integrity, health, beneficial uses, functions, and values of this resource. Up to now most methods providing information about riparian areas are based on analyzing condition or integrity. These methods, however, provide little information about functions and values. Different methods are needed that specifically address this aspect of riparian areas. In addition to information on functions and values, regulators have very specific needs that include: an analysis at the site scale, low cost, usability, and inclusion of policy interpretations. To meet these needs a rapid method has been developed that uses a multi-criteria decision matrix to categorize riparian areas in Washington State, USA. Indicators are used to identify the potential of the site to provide a function, the potential of the landscape to support the function, and the value the function provides to society. To meet legal needs fixed boundaries for assessment units are established based on geomorphology, the distance from "Ordinary High Water Mark" and different categories of land uses. Assessment units are first classified based on ecoregions, geomorphic characteristics, and land uses. This simplifies the data that need to be collected at a site, but it requires developing and calibrating a separate model for each "class." The approach to developing methods is adaptable to other locations as its basic structure is not dependent on local conditions.

  5. Rapid estimation of aquifer salinity structure from oil and gas geophysical logs

    Science.gov (United States)

    Shimabukuro, D.; Stephens, M.; Ducart, A.; Skinner, S. M.

    2016-12-01

    We describe a workflow for creating aquifer salinity maps using Archie's equation for areas that have geophysical data from oil and gas wells. We apply this method in California, where geophysical logs are available in raster format from the Division of Oil, Gas, and Geothermal Resources (DOGGR) online archive. This method should be applicable to any region where geophysical logs are readily available. Much of the work is controlled by computer code, allowing salinity estimates for new areas to be generated rapidly. For a region of interest, the DOGGR online database is scraped for wells that were logged with multi-tool suites, such as the Platform Express or Triple Combination logging tools. Then, well construction metadata, such as measured depth, spud date, and well orientation, are attached. The resulting local database allows a weighted-criteria selection of the wells most likely to have the shallow resistivity, deep resistivity, and density porosity measurements necessary to calculate salinity over the longest depth interval. The algorithm can be adjusted for geophysical log availability in older well fields and for density of sampling. Once priority wells are identified, a student researcher team uses Neuralog software to digitize the raster geophysical logs. Total dissolved solids (TDS) concentration is then calculated in clean, wet sand intervals using the resistivity-porosity method, a modified form of Archie's equation. These sand intervals are selected automatically using a combination of spontaneous potential and the difference between the shallow and deep resistivity measurements. Gamma ray logs are not used because the arkosic sands common in California make it difficult to distinguish sand from shale. Computer calculation allows easy adjustment of Archie's parameters. The result is a semi-continuous TDS profile for each well of interest. These profiles are combined and contoured using standard 3D visualization software to yield preliminary salinity
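    The resistivity-porosity step can be sketched as follows. In a clean, wet sand (water saturation of 1), Archie's equation gives the formation-water resistivity Rw = Rt * phi^m / a, and an empirical relation then converts Rw to NaCl-equivalent TDS. The Bateman-Konen approximation at 75 degrees F is used here as one published choice, and the Archie constants are generic defaults, not values calibrated in this study.

```python
def rw_from_archie(rt_ohm_m, porosity, a=1.0, m=2.0):
    """Formation-water resistivity in a clean, wet sand (Sw = 1):
    Rt = a * Rw / phi**m  =>  Rw = Rt * phi**m / a."""
    return rt_ohm_m * porosity ** m / a

def tds_ppm_nacl(rw75_ohm_m):
    """Bateman-Konen approximation at 75 degF (illustrative choice):
    Rw75 = 0.0123 + 3647.5 / ppm**0.955, inverted for ppm NaCl."""
    return (3647.5 / (rw75_ohm_m - 0.0123)) ** (1.0 / 0.955)

# Deep resistivity of 10 ohm-m in a sand with 30% density porosity
rw = rw_from_archie(10.0, 0.30)      # -> 0.9 ohm-m
tds_fresh = tds_ppm_nacl(2.0)        # higher Rw -> fresher water
tds_saline = tds_ppm_nacl(0.5)       # lower Rw -> more saline water
```

    Because the Archie parameters are plain function arguments, re-running the profile with adjusted a and m is trivial, which is the "easy adjustment" the abstract refers to.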

  6. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    Science.gov (United States)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, the geometry and the process parameters must be matched to each other, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, much research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via finite-element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is
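    A minimal sketch of a meta-model of this kind: a one-dimensional Gaussian-process (RBF kernel) regressor fitted to pre-sampled data, with a plain Gaussian-elimination solver so the example stays self-contained. The kernel length scale, noise jitter, and the sin-shaped "formability" response are illustrative assumptions, not the paper's model.

```python
import math

def rbf(x1, x2, length=0.5):
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (keeps the sketch dependency-free)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(X, y, x_star, length=0.5, jitter=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = [[rbf(xi, xj, length) + (jitter if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    alpha = solve(K, y)
    return sum(rbf(x_star, xi, length) * ai for xi, ai in zip(X, alpha))

# Pre-sampled "formability" response over one geometry parameter (illustrative)
X = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [math.sin(x) for x in X]
pred_mid = gp_predict(X, y, 0.75)    # cheap surrogate evaluation between samples
pred_train = gp_predict(X, y, 1.0)
```

    Once fitted, each evaluation is a short sum over the samples, which is why design exploration on top of such a meta-model is numerically cheap.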

  7. A Rapid Method for the Determination of Fucoxanthin in Diatom

    Directory of Open Access Journals (Sweden)

    Li-Juan Wang

    2018-01-01

    Full Text Available Fucoxanthin is a natural pigment found in microalgae, especially diatoms and Chrysophyta. Recently, it has been shown to have anti-inflammatory, anti-tumor, and anti-obesity activity in humans. Phaeodactylum tricornutum is a diatom with high economic potential due to its high content of fucoxanthin and eicosapentaenoic acid. In order to improve fucoxanthin production, physical and chemical mutagenesis could be applied to generate mutants. An accurate and rapid method to assess the fucoxanthin content is a prerequisite for a high-throughput screen of mutants. In this work, the content of fucoxanthin in P. tricornutum was determined using spectrophotometry instead of high performance liquid chromatography (HPLC). This spectrophotometric method is easier and faster than liquid chromatography, and the standard error was less than 5% when compared to the HPLC results. The method can also be applied to other diatoms, with standard errors of 3-14.6%. It provides a high-throughput screening method for microalgae strains producing fucoxanthin.
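    Spectrophotometric pigment estimates of this kind rest on the Beer-Lambert law, A = eps * l * c. The sketch below solves it for concentration; the extinction coefficient is a placeholder, not the calibration reported for fucoxanthin.

```python
def pigment_conc(absorbance, ext_coeff, path_cm=1.0):
    """Beer-Lambert law A = eps * l * c, solved for concentration.
    ext_coeff is a placeholder in (absorbance units) per (conc unit * cm)."""
    return absorbance / (ext_coeff * path_cm)

conc = pigment_conc(0.5, 0.1)   # hypothetical extinction coefficient
```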

  8. A rapid method for titration of ascovirus infectivity.

    Science.gov (United States)

    Han, Ningning; Chen, Zishu; Wan, Hu; Huang, Guohua; Li, Jianhong; Jin, Byung Rae

    2018-05-01

    Ascoviruses are a recently described virus family, and the traditional plaque assay and end-point PCR assay have been used for their titration. However, these two methods are time-consuming and inaccurate for titrating ascoviruses. In the present study, a quick method for determining the titer of ascovirus stocks was developed based on ascovirus-induced apoptosis in infected insect cells. Briefly, cells infected with serial dilutions of virus (10^-2 to 10^-10) for 24 h were stained with trypan blue. The stained cells were counted, and the percentage of nonviable cells was calculated. The stained-cell rate was compared between virus-infected and control cells. The minimum-dilution group with a significant difference from the control and the maximum-dilution group with no significant difference were selected, and each well of these two groups was then compared with the average stained-cell rate of the control. A well was marked as positive if its stained-cell rate was higher than the average stained-cell rate of the control wells; otherwise it was marked as negative. The percentage of positive wells was calculated from the number of positives, and the virus titer was then calculated by the method of Reed and Muench. This novel method is rapid, simple, reproducible, accurate, and less material-consuming, and it eliminates the subjectivity of the other procedures for titrating ascoviruses. Copyright © 2018 Elsevier B.V. All rights reserved.
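    The final step, the Reed and Muench 50% endpoint, can be sketched as follows; the well counts are invented example data, not results from the study.

```python
def reed_muench_tcid50(log10_dilutions, positive, total):
    """Log10 of the 50% endpoint dilution by the Reed-Muench method.
    log10_dilutions runs from least to most dilute, e.g. [-4, -5, -6, -7]."""
    n = len(log10_dilutions)
    cum_pos, cum_neg = [0] * n, [0] * n
    run = 0
    for i in range(n - 1, -1, -1):      # positives accumulate toward less dilute
        run += positive[i]
        cum_pos[i] = run
    run = 0
    for i in range(n):                  # negatives accumulate toward more dilute
        run += total[i] - positive[i]
        cum_neg[i] = run
    pct = [100.0 * cum_pos[i] / (cum_pos[i] + cum_neg[i]) for i in range(n)]
    for i in range(n - 1):
        if pct[i] >= 50.0 >= pct[i + 1]:
            prop_dist = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            step = log10_dilutions[i + 1] - log10_dilutions[i]
            return log10_dilutions[i] + prop_dist * step
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# Invented plate: 8 wells per dilution, scored positive/negative as described
log10_titer = reed_muench_tcid50([-4, -5, -6, -7], [8, 6, 2, 0], [8, 8, 8, 8])
```

    For these counts the cumulative positive rates are 80% at 10^-5 and 20% at 10^-6, so the 50% endpoint interpolates to a titer of 10^-5.5 per inoculated volume.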

  9. A simple method for rapidly processing HEU from weapons returns

    Energy Technology Data Exchange (ETDEWEB)

    McLean, W. II; Miller, P.E.

    1994-01-01

    A method based on the use of a high temperature fluidized bed for rapidly oxidizing, homogenizing and down-blending Highly Enriched Uranium (HEU) from dismantled nuclear weapons is presented. This technology directly addresses many of the most important issues that inhibit progress in international commerce in HEU; viz., transaction verification, materials accountability, transportation and environmental safety. The equipment used to carry out the oxidation and blending is simple, inexpensive and highly portable. Mobile facilities to be used for point-of-sale blending and analysis of the product material are presented along with a phased implementation plan that addresses the conversion of HEU derived from domestic weapons and related waste streams as well as material from possible foreign sources such as South Africa or the former Soviet Union.

  10. Solvent extraction method for rapid separation of strontium-90 in milk and food samples

    International Nuclear Information System (INIS)

    Hingorani, S.B.; Sathe, A.P.

    1991-01-01

    A solvent extraction method, using tributyl phosphate, for the rapid separation of strontium-90 in milk and other food samples is presented in this report, in view of the large number of samples received after the Chernobyl accident for checking radioactive contamination. The earlier nitration method in use for the determination of 90Sr through its daughter 90Y takes over two weeks for analysis of a sample, while this extraction method takes only 4 to 5 hours; complete estimation, including initial counting, can be done in a single day. The chemical recovery varies between 80-90%, compared with 65-80% for the nitration method. The purity of the method has been established by following the decay of the separated yttrium-90. Some of the results obtained by adopting this chemical method for food analysis are included. The method is thus found to be rapid and convenient for accurate estimation of strontium-90 in milk and food samples. (author). 2 tabs., 1 fig

  11. A rapid protection switching method in carrier ethernet ring networks

    Science.gov (United States)

    Yuan, Liang; Ji, Meng

    2008-11-01

    Abstract: Ethernet is the most important Local Area Network (LAN) technology, since more than 90% of data traffic in the access layer is carried on Ethernet. From 10M to 10G, improving Ethernet technology can be used not only in LANs but is also a good choice for MANs and even WANs. MANs are usually constructed in a ring topology, because a ring network can provide resilient path protection while using fewer resources (fibre or cable) than other network topologies. In layer 2 data networks, the Spanning Tree Protocol (STP) is commonly used to protect transmission links and prevent the formation of logical loops. However, STP cannot guarantee efficient service convergence when a link fault occurs; in fact, the convergence time of networks with STP is on the order of several minutes. Though the Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MSTP) improve on STP, they still need a couple of seconds to achieve convergence and cannot provide sub-50 ms protection switching. This paper presents a novel rapid ring protection method (RRPM) for carrier Ethernet. Unlike other link-fault detection methods, it adopts a distributed algorithm to detect link faults rapidly (sub-50 ms). When the network recovers from a link fault, it can revert to the original working state. RRPM can provide single-ring protection and interconnected-ring protection without the formation of a super loop. In normal operation, the master node blocks its secondary port for all non-RRPM Ethernet frames belonging to the given RRPM ring, thereby avoiding a loop in the ring. When a link fault occurs, the node on which the failure happens moves from the "ring normal" state to the "ring fault" state. It also immediately sends a "link down" frame to the other nodes, blocks the broken port, and flushes its forwarding database. Nodes that receive the "link down" frame flush their forwarding databases, and the master node unblocks its secondary port. When the failure is repaired, the whole ring reverts to the normal state.
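    A schematic sketch of the failover logic described above, with hypothetical node names and a toy forwarding database; it models only the state transitions, not real frame transport or timing.

```python
class RRPMNode:
    """Toy model of the RRPM node states described in the abstract."""

    def __init__(self, name, is_master=False):
        self.name = name
        self.is_master = is_master
        self.state = "ring_normal"
        self.secondary_port_blocked = is_master   # master blocks its secondary port
        self.fdb = {"host-a": "port1"}            # toy forwarding database

    def on_local_link_down(self, ring):
        """A directly attached link failed: change state, flush, notify the ring."""
        self.state = "ring_fault"
        self.fdb.clear()
        for node in ring:
            if node is not self:
                node.on_link_down_frame()

    def on_link_down_frame(self):
        self.fdb.clear()                          # relearn paths after the change
        if self.is_master:
            self.secondary_port_blocked = False   # open the backup path

    def on_link_restored(self, ring):
        for node in ring:
            node.state = "ring_normal"
            if node.is_master:
                node.secondary_port_blocked = True

ring = [RRPMNode("m1", is_master=True), RRPMNode("n2"), RRPMNode("n3")]
ring[1].on_local_link_down(ring)                  # simulate a fibre cut at n2
```

    After the simulated cut the master's secondary port is open and every forwarding database is flushed, which is the condition for traffic to re-converge over the backup path.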

  12. Use of ethyl-α-isonitrosoacetoacetate in the rapid estimation and radiochemical separation of gold

    International Nuclear Information System (INIS)

    Sawant, A.D.; Haldar, B.C.

    1978-01-01

    The use of ethyl-α-isonitrosoacetoacetate in the rapid estimation and radiochemical separation of gold is reported. As little as 5.00 mg of Au can be estimated with an accuracy better than 1%. Decontamination values against platinum metals and other metals usually associated with Au are greater than 10^5. Isotopes and results are tabulated. The time required for radiochemical separation is around 20 min, and the recovery of Au is better than 80%. γ-activities were measured with a single-channel analyser and NaI(Tl) detector; β-activities were counted on a thin end-window GM counter. (T.I.)

  13. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    Science.gov (United States)

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomic features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the resulting loss of information. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology, and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
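    The kernel-correlation idea can be sketched as follows: each coverage track is smoothed with a Gaussian kernel, and the smoothed profiles are then correlated. The direct convolution below stands in for StereoGene's FFT-based implementation, and the kernel width and toy tracks are illustrative.

```python
import math

def smooth(track, sigma=2.0):
    """Gaussian-kernel smoothing by direct convolution (StereoGene does the
    equivalent with FFTs for speed on genome-scale tracks)."""
    half = int(3 * sigma)
    kern = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-half, half + 1)]
    total = sum(kern)
    kern = [w / total for w in kern]
    n = len(track)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = i + j - half
            if 0 <= idx < n:
                acc += w * track[idx]
        out.append(acc)
    return out

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def kernel_correlation(t1, t2, sigma=2.0):
    return pearson(smooth(t1, sigma), smooth(t2, sigma))

# Two toy coverage tracks whose single peaks are offset by one position
t1 = [0.0] * 30; t1[10] = 1.0
t2 = [0.0] * 30; t2[11] = 1.0
kc, raw = kernel_correlation(t1, t2), pearson(t1, t2)
```

    The raw position-by-position correlation of the two offset peaks is near zero, while the kernel correlation is high, which is exactly the "nearby features count as colocalized" behavior the method aims for.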

  14. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D geographic information system (GIS requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
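    A reduced sketch of the estimation step: a one-parameter Levenberg-Marquardt fit of the camera yaw from matched image x-coordinates, assuming a known focal length and a pinhole projection u = f * tan(azimuth - yaw). The full method optimizes position, orientation, and field-of-view jointly; the single-parameter setup and all numbers here are illustrative.

```python
import math

F = 1000.0   # assumed known focal length in pixels (illustrative)

def project(yaw, azimuth):
    """Pinhole image x-coordinate of a landmark at a known azimuth."""
    return F * math.tan(azimuth - yaw)

def estimate_yaw(azimuths, u_obs, yaw0=0.0, iters=50):
    """One-parameter Levenberg-Marquardt on the image-x residuals."""
    yaw, lam, eps = yaw0, 1e-3, 1e-6

    def cost(y):
        return sum((project(y, a) - u) ** 2 for a, u in zip(azimuths, u_obs))

    for _ in range(iters):
        r = [project(yaw, a) - u for a, u in zip(azimuths, u_obs)]
        J = [(project(yaw + eps, a) - project(yaw, a)) / eps for a in azimuths]
        g = sum(Ji * ri for Ji, ri in zip(J, r))
        H = sum(Ji * Ji for Ji in J)
        step = -g / (H * (1.0 + lam))             # damped Gauss-Newton step
        if cost(yaw + step) < cost(yaw):
            yaw, lam = yaw + step, lam * 0.5      # accept, relax damping
        else:
            lam *= 10.0                           # reject, increase damping
    return yaw

azimuths = [-0.3, -0.1, 0.2, 0.4]                 # toy landmark bearings from a DEM
u_obs = [project(0.12, a) for a in azimuths]      # synthetic frame, true yaw 0.12 rad
yaw_hat = estimate_yaw(azimuths, u_obs)
```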

  15. Adjustment of a rapid method for quantification of Fusarium spp. spore suspensions in plant pathology.

    Science.gov (United States)

    Caligiore-Gei, Pablo F; Valdez, Jorge G

    2015-01-01

    The use of a Neubauer chamber is a broadly employed method when cell suspensions need to be quantified. However, this technique may take a long time and requires trained personnel. Spectrophotometry has proved to be a rapid, simple and accurate method to estimate the concentration of spore suspensions of isolates of the genus Fusarium. In this work we present a linear formula relating absorbance measurements at 530 nm to the number of microconidia/ml in a suspension. Copyright © 2014 Asociación Argentina de Microbiología. Published by Elsevier España, S.L.U. All rights reserved.
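    A linear absorbance-to-concentration formula of this kind comes from an ordinary least-squares fit of calibration data; the calibration pairs below are hypothetical, not the paper's measurements.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration pairs: absorbance at 530 nm vs. microconidia/ml
abs530 = [0.1, 0.2, 0.4, 0.8]
spores = [5e5, 1e6, 2e6, 4e6]
slope, intercept = fit_line(abs530, spores)
conc_unknown = slope * 0.3 + intercept    # estimate for a new suspension
```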

  16. The Brazilian version of the 20-item rapid estimate of adult literacy in medicine and dentistry

    Directory of Open Access Journals (Sweden)

    Agnes Fátima P. Cruvinel

    2017-08-01

    Full Text Available Background The misunderstanding of specific vocabulary may hamper patient-health provider communication. The 20-item Rapid Estimate of Adult Literacy in Medicine and Dentistry (REALMD-20) was constructed to screen patients by their ability to read medical/dental terminology in a simple and rapid way. This study aimed to perform the cross-cultural adaptation and validation of this instrument for application in Brazilian dental patients. Methods The cross-cultural adaptation was performed through conceptual equivalence, verbatim translation, semantic, item and operational equivalence, and back-translation. After that, 200 participants responded to the adapted version of the REALMD-20, the Brazilian version of the Rapid Estimate of Adult Literacy in Dentistry (BREALD-30), ten questions of the Brazilian National Functional Literacy Index (BNFLI), and a questionnaire with socio-demographic and oral health-related questions. Statistical analysis was conducted to assess the reliability and validity of the REALMD-20 (P < 0.05). Results The sample was composed predominantly of women (55.5%) and white/brown (76%) individuals, with an average age of 39.02 (±15.28) years. The average REALMD-20 score was 17.48 (±2.59, range 8-20). It displayed good internal consistency (Cronbach's alpha = 0.789) and test-retest reliability (ICC = 0.73; 95% CI [0.66-0.79]). In the exploratory factor analysis, six factors were extracted according to Kaiser's criterion. Factor I (eigenvalue = 4.53), comprising the four terms "Jaundice", "Amalgam", "Periodontitis" and "Abscess", accounted for 25.18% of total variance, while factor II (eigenvalue = 1.88), comprising four other terms, "Gingivitis", "Instruction", "Osteoporosis" and "Constipation", accounted for 10.46% of total variance. The first four factors accounted for 52.1% of total variance. The REALMD-20 was positively correlated with the BREALD-30 (Rs = 0
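    The internal-consistency statistic reported above, Cronbach's alpha, is computed from the item variances and the variance of the total scores; the item scores below are invented for illustration.

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents in the
    same order in every list. alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_scores = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(total_scores))

# Three perfectly parallel items -> alpha of exactly 1
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```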

  17. Rapid method for Detection of Irradiation Mango Fruits

    International Nuclear Information System (INIS)

    El Salhy, F.T.

    2011-01-01

    To detect mango fruits that have been exposed to low doses of gamma rays (0.5-3.0 kGy), three methods recommended by the European Committee for Standardization (EN 1784:1996, EN 1785:1996 and EN 1787:2000) were used to study the possibility of identifying irradiated mango fruits (Ewais variety). Fresh mangoes were irradiated to different doses (0.5, 0.75, 1.0 and 3.0 kGy). The first method, for determining the volatile hydrocarbons (VHC), was carried out using a florisil column, with identification by gas chromatography-mass spectrometry (GC-MS). The major VHCs were C14:1, C15:0 and C17:1, which increased linearly with increasing dose at both low and high doses. The second method, for determining 2-alkylcyclobutanones (2-DCB), used florisil chromatography activated with 20% for separation, with identification by GC-MS. 2-DCB, a biomarker specific for irradiated food, was present at the applied doses from 0.75-3.0 kGy but not at 0.5 kGy. None of the mentioned compounds could be detected in non-irradiated samples, which means that these radiolytic products (VHC and 2-DCB) can be used as detection markers for irradiated mangoes even at low doses. The third method (EN 1787:2000) was conducted by electron spin resonance (ESR) on dried petioles of mangoes. The results proved that ESR was sensitive at all applied doses. It could be concluded that all three methods succeed in detecting irradiated mangoes, but the most rapid one, with high accuracy even at low doses, was ESR.

  18. Electrochemical method for rapid synthesis of Zinc Pentacyanonitrosylferrate Nanotubes

    Directory of Open Access Journals (Sweden)

    Rogaieh Bargeshadi

    2014-10-01

    Full Text Available In this paper, a rapid and simple approach was developed for the preparation of zinc pentacyanonitrosylferrate nanotubes (ZnPCNF NTs) within the cylindrical pores of an anodic aluminum oxide (AAO) template by an electrochemical method. The AAO was fabricated by two-step anodization of aluminum foil. The first anodization was performed in 0.2 mol L-1 H2C2O4, followed by removal of the formed porous oxide film with a solution of 6 wt% phosphoric acid. The second anodization step was then performed under the same conditions as the first. Scanning electron microscopy (SEM) and X-ray diffraction (XRD) were employed to characterize the resulting highly oriented, uniform hollow tube array, whose diameter was in the range of 25-75 nm depending on the applied voltage; the length of the nanotubes was equal to the thickness of the AAO, which was about 2 μm. The growth properties of the ZnPCNF NT array film can be controlled through the structure of the template and the potential applied across the cell.

  19. Rapid and robust detection methods for poison and microbial contamination.

    Science.gov (United States)

    Hoehl, Melanie M; Lu, Peter J; Sims, Peter A; Slocum, Alexander H

    2012-06-27

    Real-time on-site monitoring of analytes is currently in high demand for food contamination, water, medicines, and ingestible household products that were never tested appropriately. Here we introduce chemical methods for the rapid quantification of a wide range of chemical and microbial contaminants using a simple instrument. Within the testing procedure, we used a multichannel, multisample UV-vis spectrophotometer/fluorometer that employs two frequencies of light simultaneously to interrogate the sample. We present new enzyme- and dye-based methods to detect (di)ethylene glycol in consumables above 0.1 wt% without interference, and alcohols above 1 ppb. Using DNA-intercalating dyes, we can detect a range of pathogens (E. coli, Salmonella, V. cholerae, and a model for malaria) in water, foods, and blood without background signal. We achieved universal scaling independent of pathogen size above 10^4 CFU/mL by taking advantage of simultaneous measurement at multiple wavelengths. We can detect contaminants directly, without separation, purification, concentration, or incubation. Our chemistry is stable to ±1% for >3 weeks without refrigeration, and measurements require <5 min.

  20. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    Science.gov (United States)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula and Shikoku by borehole strainmeters that had been carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic wave data. The moment magnitude could be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. By contrast, the prompt magnitude report that the Japan Meteorological Agency announced just after the earthquake, based on seismic waves, was 7.9. Coseismic strain steps are generally considered less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine the magnitude of an earthquake precisely and rapidly. Several methods are now being proposed to estimate the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
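    The magnitude estimate rests on two standard relations: the seismic moment M0 = mu * A * D of the fault model, and the Hanks-Kanamori moment magnitude Mw = (2/3) * (log10(M0) - 9.1) with M0 in newton-metres. The rigidity, rupture area, and slip below are round illustrative numbers, not the study's fault-model values.

```python
import math

def seismic_moment(rigidity_pa, fault_area_m2, mean_slip_m):
    """M0 = mu * A * D, in newton-metres."""
    return rigidity_pa * fault_area_m2 * mean_slip_m

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude; M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Round, Tohoku-like illustrative numbers: 30 GPa rigidity,
# a 500 km x 200 km rupture, and 20 m of mean slip
mw = moment_magnitude(seismic_moment(30e9, 500e3 * 200e3, 20.0))
```

    Because Mw depends only logarithmically on M0, even a rough fault model pins the magnitude to within a few tenths of a unit, which is why a strain-step fault model can give a useful magnitude within minutes.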

  1. Comparing methods of ploidy estimation in potato.

    Science.gov (United States)

    Ploidy manipulation and the resulting need for rapid ploidy screening is an important part of a potato research and breeding program. Determining ploidy by counting chromosomes or measuring DNA in individual cells is definitive, but takes time, technical skills and equipment. We tested three predi...

  2. The use of maturity method in estimating concrete strength

    International Nuclear Information System (INIS)

    Salama, A.E.; Abd El-Baky, S.M.; Ali, E.E.; Ghanem, G.M.

    2005-01-01

    Prediction of the early-age strength of concrete is essential for modern concrete construction as well as for the manufacture of structural parts. Safe and economic scheduling of such critical operations as form removal and re-shoring, application of post-tensioning or other mechanical treatment, and in-process transportation and rapid delivery of products should all be based upon a good grasp of the strength development of the concrete in use. For many years, it has been proposed that the strength of concrete can be related to a simple mathematical function of time and temperature, so that strength could be assessed by calculation without mechanical testing. Such functions are used to compute what is called the maturity of concrete, and the computed value is believed to correlate with the strength of concrete. With its simplicity and low cost, the application of the maturity concept as an in situ testing method has received wide attention and found use in engineering practice. This research work investigates the use of the maturity method in estimating concrete strength. An experimental program is designed to estimate the concrete strength by using the maturity method, using different concrete mixes with available local materials. Ordinary Portland cement, crushed stone, silica fume, fly ash and admixtures with different contents are used. All the specimens were exposed to different curing temperatures (10, 25 and 40°C) in order to get a simplified expression of maturity that fits the influence of temperature. Mix designs and charts obtained from this research can be used as guide information for estimating concrete strength by using the maturity method
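    The classical Nurse-Saul form of such a time-temperature function sums temperature above a datum over time, M = sum((T - T0) * dt); the datum temperature and the readings below are illustrative conventions, not values from this experimental program.

```python
def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
    """Nurse-Saul maturity index M = sum((T - T0) * dt), in degC-hours.
    The datum T0 (0 degC here; -10 degC is another common choice) is the
    temperature below which hydration is assumed to stop."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# 24 hourly readings from a specimen cured at a constant 20 degC
m_index = nurse_saul_maturity([20.0] * 24, 1.0)   # 480 degC-hours
```

    In practice the index is read against a strength-maturity calibration chart for the specific mix, which is exactly what the charts produced by this research provide.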

  3. Rapid simulation of spatial epidemics: a spectral method.

    Science.gov (United States)

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
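    The core trick can be sketched on a one-dimensional ring of habitats: the force of infection is a circular convolution of the transmission kernel with the infection state, evaluated via FFT. The radix-2 FFT and the kernel values below are a self-contained simplification of the paper's two-dimensional setting.

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    w = cmath.exp((2j if invert else -2j) * cmath.pi / n)
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    out, x = [0j] * n, 1
    for k in range(n // 2):
        out[k] = even[k] + x * odd[k]
        out[k + n // 2] = even[k] - x * odd[k]
        x *= w
    return out

def force_of_infection(kernel, infected):
    """Circular convolution of the transmission kernel with the infection
    state on a ring of habitats, via FFT: O(n log n) instead of O(n^2)."""
    n = len(infected)
    K = fft([complex(v) for v in kernel])
    I = fft([complex(v) for v in infected])
    return [v.real / n for v in fft([a * b for a, b in zip(K, I)], invert=True)]

# Eight habitats on a ring; a symmetric kernel and two infected habitats
kernel = [0.5, 0.2, 0.05, 0.0, 0.0, 0.0, 0.05, 0.2]
infected = [0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0]
foi = force_of_infection(kernel, infected)
```

    The stochastic simulation then draws infection events at rates proportional to `foi`; recomputing the convolution only when the infection state changes is what makes the recalculation "fast" in the FSR sense.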

  4. Rapid preparation method for technetium-99m bicisate

    Energy Technology Data Exchange (ETDEWEB)

    Hung, J.C. [Nuclear Medicine, Department of Diagnostic Radiology, Mayo Clinic, Rochester, Minnesota (United States); Chowdhury, S. [Nuclear Medicine, Department of Diagnostic Radiology, Mayo Clinic, Rochester, Minnesota (United States); Redfern, M.G. [Nuclear Medicine, Department of Diagnostic Radiology, Mayo Clinic, Rochester, Minnesota (United States); Mahoney, D.W. [Section of Biostatistics, Department of Health Sciences Research, Mayo Clinic, Rochester, Minnesota (United States)

    1997-06-10

    The method currently recommended for the preparation of technetium-99m bicisate (99mTc-bicisate) requires a lengthy 30-min incubation at room temperature. The purpose of this study was to evaluate an alternative method to shorten the preparation time. 99mTc-bicisate was prepared with 3.7 GBq (100 mCi) 99mTc according to the manufacturer's instructions, except for the final incubation step, which was replaced with a microwave heating procedure. A standard thin-layer chromatography (TLC) method (i.e., Baker-Flex silica gel IB-F TLC plate with ethyl acetate as mobile phase) was used for the determination of the radiochemical purity (RCP) of 99mTc-bicisate. Our evaluation with different microwave heating processes (300 W with different heating times) demonstrated that as the microwave heating temperature was increased (i.e., 44-71°C), an increased percentage of samples reached 95% within 5 min post preparation (n=58). The highest RCP value (i.e., 97.4%±0.5%, n=10) could be obtained immediately after an 8-s microwave heating time at 300 W (microwave temperature at 69°C), and an average RCP value of 96.4%±1.3% (n=90) was maintained throughout the 24-h evaluation period. However, the trend seemed to reverse at higher microwave temperatures (i.e., 76-90°C), which reconfirmed our initial findings that overheating had no benefit for the preparation of 99mTc-bicisate. To ensure that temperature was the only determining factor, a hot water incubator set at 69°C was used (n=6). Similar RCP results were achieved. In conclusion, the use of a microwave oven at a low heat cycle provides a rapid and efficient way to prepare 99mTc-bicisate. (orig.). With 3 figs., 1 tab.

  5. Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.

    Directory of Open Access Journals (Sweden)

    Julia Birtel

    Full Text Available Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence-similarity thresholds during sequence clustering, and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and by ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques.
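    Diversity metrics of the kind compared above are computed from taxon abundance tables. As one simple example, the Shannon index (the abundances below are invented):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundances;
    zero-count taxa are skipped."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

h_even = shannon_index([5, 5, 5, 5])      # maximal for 4 equally abundant OTUs
h_single = shannon_index([20, 0, 0, 0])   # zero for a single dominant OTU
```

    Because such metrics are computed per sample from whatever OTU table the chosen 16S region produces, any region-dependent bias in the table propagates directly into the diversity estimate.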

  6. The MIRD method of estimating absorbed dose

    International Nuclear Information System (INIS)

    Weber, D.A.

    1991-01-01

    The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution and clearance data of radiopharmaceuticals with the physical properties of radionuclides to obtain dose estimates. This tutorial reviews the MIRD schema, derives the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
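
    At its core, the MIRD schema computes the mean absorbed dose to a target region as a sum over source regions of the cumulated (time-integrated) activity in each source multiplied by the corresponding S value. A hedged sketch; the region names, Ã values and S values below are invented placeholders, not published dose factors:

```python
def absorbed_dose(cumulated_activity, s_values):
    """MIRD schema: D(r_T) = sum over sources r_S of Ã(r_S) * S(r_T <- r_S).

    cumulated_activity: Ã per source region (e.g., MBq·s)
    s_values: S(target <- source) per source region (e.g., mGy per MBq·s)
    """
    return sum(cumulated_activity[src] * s for src, s in s_values.items())

# Hypothetical two-source example: dose to liver from liver and kidneys
a_tilde = {"liver": 2.0e5, "kidneys": 5.0e4}      # MBq·s (invented)
s_liver = {"liver": 3.0e-5, "kidneys": 4.0e-6}    # mGy / (MBq·s) (invented)
print(round(absorbed_dose(a_tilde, s_liver), 3))  # 6.2 (mGy)
```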

  7. Psychological methods of subjective risk estimates

    International Nuclear Information System (INIS)

    Zimolong, B.

    1980-01-01

    Reactions to situations involving risks can be divided into the following parts: perception of danger, subjective estimation of the risk, and risk taking with respect to action. Several investigations have compared subjective estimates of the risk with an objective measure of that risk. In general, there was a mismatch between subjective and objective measures of risk; in particular, the objective risk involved in routine activities is most commonly underestimated. This implies, for accident prevention, that attempts must be made to induce accurate subjective risk estimates by technical and behavioural measures. (orig.) [de

  8. Use of Genomic Estimated Breeding Values Results in Rapid Genetic Gains for Drought Tolerance in Maize

    Directory of Open Access Journals (Sweden)

    B.S. Vivek

    2017-03-01

    Full Text Available More than 80% of the 19 million ha of maize (Zea mays L.) in tropical Asia is rainfed and prone to drought. The breeding methods for improving drought tolerance (DT), including genomic selection (GS), are geared to increase the frequency of favorable alleles. Two biparental populations (CIMMYT-Asia Population 1 [CAP1] and CAP2) were generated by crossing elite Asian-adapted yellow inbreds (CML470 and VL1012767) with an African white drought-tolerant line, CML444. Marker effects of polymorphic single-nucleotide polymorphisms (SNPs) were determined from testcross (TC) performance of F families under drought and optimal conditions. Cycle 1 (C1) was formed by recombining the top 10% of the F families based on TC data. Subsequently, (i) C2[PerSe_PS] was derived by recombining those C1 plants that exhibited superior per se phenotypes (phenotype-only selection), and (ii) C2[TC-GS] was derived by recombining a second set of C1 plants with high genomic estimated breeding values (GEBVs) derived from TC phenotypes of F families (marker-only selection). All the generations and their top crosses to testers were evaluated under drought and optimal conditions. Per se grain yields (GYs) of C2[PerSe_PS] and of C2[TC-GS] were 23 to 39% and 31 to 53% better, respectively, than that of the corresponding F population. The C2[TC-GS] populations showed superiority of 10 to 20% over C2[PerSe_PS] of the respective populations. Top crosses of C2[TC-GS] showed 4 to 43% superiority in GY over that of C2[PerSe_PS] of the respective populations. Thus, GEBV-enabled selection of superior phenotypes (without the target stress) resulted in rapid genetic gains for DT.
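
    GEBV-based selection as described above can be sketched as a weighted sum of SNP genotype codes and estimated marker effects, followed by truncation selection. The marker effects and genotypes below are invented for illustration, not values from the study:

```python
def gebv(genotype_codes, marker_effects):
    """GEBV_i = sum_j x_ij * beta_j, with x_ij the allele count (0, 1, 2) at SNP j."""
    return sum(x * b for x, b in zip(genotype_codes, marker_effects))

def select_top(candidates, marker_effects, fraction=0.1):
    """Rank candidates by GEBV and keep the top fraction for recombination."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: gebv(kv[1], marker_effects), reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return [name for name, _ in ranked[:k]]

effects = [0.8, -0.3, 0.5]   # invented SNP effects on grain yield
plants = {"P1": [2, 0, 1], "P2": [0, 2, 2], "P3": [1, 1, 0]}
print(select_top(plants, effects, fraction=0.34))  # ['P1']
```

    The point of the record is that this ranking needs only genotypes once marker effects are trained, so superior plants can be picked without phenotyping under the target stress.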

  9. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  10. PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...

    African Journals Online (AJOL)

    2014-12-31

    Dec 31, 2014 ... speed is the most significant parameter of the wind energy. ... wind-powered generators and applied to estimate potential power output at various ...... Wind and Solar Power Systems, U.S. Merchant Marine Academy Kings.

  11. Estimation methods for special nuclear materials holdup

    International Nuclear Information System (INIS)

    Pillay, K.K.S.; Picard, R.R.

    1984-01-01

    The potential value of statistical models for the estimation of residual inventories of special nuclear materials was examined using holdup data from processing facilities and through controlled experiments. Although the measurement of hidden inventories of special nuclear materials in large facilities is a challenging task, reliable estimates of these inventories can be developed through a combination of good measurements and the use of statistical models. 7 references, 5 figures

  12. Statistical methods of estimating mining costs

    Science.gov (United States)

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
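
    "Taylor's Rule", mentioned above as the first modeling step, is commonly quoted as a power law linking optimum mine life (and hence operating rate) to ore tonnage; one textbook form puts life in years at roughly 0.2 × T^0.25 with T in tonnes. The sketch below uses that textbook form, not the USGS re-estimates the record describes:

```python
def taylor_mine_life_years(ore_tonnes):
    """One common form of Taylor's Rule: optimum life ≈ 0.2 * T**0.25 (T in tonnes)."""
    return 0.2 * ore_tonnes ** 0.25

def implied_operating_rate_tpd(ore_tonnes, operating_days_per_year=350):
    """Operating rate implied by spreading the tonnage over the Taylor life."""
    life = taylor_mine_life_years(ore_tonnes)
    return ore_tonnes / (life * operating_days_per_year)

# A hypothetical 100 Mt deposit:
print(round(taylor_mine_life_years(100e6), 1))  # 20.0 (years)
```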

  13. Rapid Methods for the Laboratory Identification of Pathogenic Microorganisms.

    Science.gov (United States)

    1982-09-01

    ...coli, Hemophilus influenzae, Bacillus anthracis, Bacillus circulans, Bacillus coagulans, Bacillus cereus, Candida albicans, Cryptococcus neoformans, Legionel... Keywords: lectins; rapid identification; Bacillus anthracis; Cryptococcus neoformans; Neisseria. Abstract fragment: ...field-type kit for the rapid identification of Bacillus anthracis. We have shown that certain lectins will selectively interact with B. anthracis.

  14. Monitoring of radioiodine and methods for rapid measurement, 2

    International Nuclear Information System (INIS)

    Kamada, Hiroshi

    1979-01-01

    Milk is selected as an indicator or critical food among the environmental monitoring samples, and radioactive iodine as a specific critical radionuclide. Rapid determination of iodine-131 in milk has been developed as a standard procedure for the network of environmental radioactivity monitoring in a state of emergency. The outline of the procedure is gamma-ray spectrometry using a heavily shielded 3'' diameter x 3'' sodium iodide (thallium-activated) crystal as a detector, a 2-liter Marinelli beaker for raw milk, and a multichannel pulse height analyzer for quantitative analysis of gamma spectra through the use of simultaneous equations. The analysis is what we call the ''Milk Matrix Method'', introducing calibration data from standard samples of iodine-131, cesium-137 and potassium-40. These were determined experimentally, and counting data from the sample were taken as the elements of the matrix of a set of three simultaneous equations. The most recently detected concentration of iodine-131 in milk was 81 pCi per liter on 20 May 1978, originating from the nuclear explosion test carried out by the People's Republic of China on 15 May 1978. (author)
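
    The matrix analysis described above amounts to solving a small linear system: counts in three spectral windows equal a calibration matrix (each window's response to I-131, Cs-137 and K-40) times the three activities. A sketch via Cramer's rule; the calibration matrix and count values are invented placeholders, not the paper's calibration data:

```python
def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(calib, window_counts):
    """Solve calib @ activities = window_counts (3x3 system) by Cramer's rule."""
    d = det3(calib)
    activities = []
    for j in range(3):
        mj = [row[:] for row in calib]
        for i in range(3):
            mj[i][j] = window_counts[i]
        activities.append(det3(mj) / d)
    return activities  # [I-131, Cs-137, K-40] activities

# Invented calibration: counts per unit activity of each nuclide in each window
calib = [[0.9, 0.1, 0.05],
         [0.1, 0.8, 0.10],
         [0.0, 0.1, 0.85]]
print([round(x, 1) for x in solve3(calib, [95.0, 90.0, 95.0])])
```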

  15. A rapid method of evaluating fluoroscopic system performance

    International Nuclear Information System (INIS)

    Sprawls, P.

    1989-01-01

    This paper presents a study to develop a method for the rapid evaluation and documentation of fluoroscopic image quality. All objects contained within a conventional contrast-detail test phantom (Leeds TO-10) are displayed in an array format according to their contrast and size. A copy of the display is used as the data collection form and a permanent record of system performance. A fluoroscope is evaluated by viewing the test phantom and marking the visible objects on the display. A line drawn through the objects with minimum visibility in each size group forms a contrast-detail curve for the system. This is compared with a standard or reference line, which is included in the display. Deviations in curve position are useful indicators of specific image quality problems, such as excessive noise or blurring. The use of a special object-visibility array format display makes it possible to collect data, analyze the results, and create a record of fluoroscopic performance in less than 2 minutes for each viewing mode

  16. Rapid screening method for plutonium in mixed waste samples

    International Nuclear Information System (INIS)

    Somers, W.; Culp, T.; Miller, R.

    1987-01-01

    A waste stream sampling program was undertaken to determine which waste streams contained hazardous constituents and would therefore be regulated as hazardous waste under the Resource Conservation and Recovery Act. The waste streams also had the potential of containing radioactive material: plutonium, americium, or depleted uranium. Because of the potential for contamination with radioactive material, a method of rapidly screening the liquid samples for radioactive material was required. A counting technique was devised to count a small aliquot of a sample, determine the plutonium concentration, and allow the samples to be shipped the same day they were collected. This technique utilizes the low energy photons (x-rays) that accompany α decay. This direct, non-destructive x-ray analysis was applied to quantitatively determine Pu-239 concentrations in industrial samples. Samples contained a Pu-239/Am-241 mixture; the ratio and/or concentrations of these two radionuclides were not constant. A computer program was designed and implemented to calculate Pu-239 activity and concentration (g/ml) using the 59.5 keV Am-241 peak to determine Am-241's contribution to the 17 keV region. Am-241's contribution was subtracted, yielding net counts in the 17 keV region due to Pu. 2 figs., 1 tab
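
    The americium subtraction described above is a two-window correction: the 59.5 keV Am-241 peak fixes americium's contribution to the 17 keV region, which is then removed. A minimal sketch; the 17 keV/59.5 keV ratio is a hypothetical calibration constant, not the program's value:

```python
def net_pu_counts(counts_17kev, counts_59kev, am_17_per_59=0.5):
    """Net 17 keV counts attributable to Pu-239 after subtracting Am-241.

    am_17_per_59: Am-241 counts appearing in the 17 keV region per count in
    its 59.5 keV peak (instrument-specific calibration; value is hypothetical).
    """
    return counts_17kev - am_17_per_59 * counts_59kev

print(net_pu_counts(1000, 400))  # 800.0
```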

  17. Recent applications for rapid estimation of earthquake shaking and losses with ELER Software

    International Nuclear Information System (INIS)

    Demircioglu, M.B.; Erdik, M.; Kamer, Y.; Sesetyan, K.; Tuzun, C.

    2012-01-01

    A methodology and software package entitled Earthquake Loss Estimation Routine (ELER) was developed for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region. The work was carried out under the Joint Research Activity-3 (JRA3) of the EC FP6 project entitled Network of Research Infrastructures for European Seismology (NERIES). The ELER methodology anticipates: 1) finding of the most likely location of the source of the earthquake using regional seismo-tectonic data base; 2) estimation of the spatial distribution of selected ground motion parameters at engineering bedrock through region specific ground motion prediction models, bias-correcting the ground motion estimations with strong ground motion data, if available; 3) estimation of the spatial distribution of site-corrected ground motion parameters using regional geology database using appropriate amplification models; and 4) estimation of the losses and uncertainties at various orders of sophistication (buildings, casualties). The multi-level methodology developed for real time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships which are coded into ELER. The present paper provides brief information on the methodology of ELER and provides an example application with the recent major earthquake that hit the Van province in the east of Turkey on 23 October 2011 with moment magnitude (Mw) of 7.2. For this earthquake, Kandilli Observatory and Earthquake Research Institute (KOERI) provided almost real time estimations in terms of building damage and casualty distribution using ELER. (author)

  18. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  19. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
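
    One way to sketch the cycle-estimation idea in these records: score each candidate cycle length by how tightly the green-start transition times of probe vehicles cluster modulo that cycle, then keep the best-scoring candidate. The circular-statistics score below is an illustration of the idea, not the patented method's actual scoring function:

```python
import math

def cycle_score(transition_times, cycle):
    """Mean resultant length of transition times wrapped onto the candidate
    cycle: 1.0 for perfect clustering, near 0 for uniformly spread times."""
    angles = [2 * math.pi * (t % cycle) / cycle for t in transition_times]
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.hypot(s, c) / len(transition_times)

def estimate_cycle(transition_times, candidate_cycles):
    """Pick the candidate cycle length maximizing the clustering score."""
    return max(candidate_cycles, key=lambda c: cycle_score(transition_times, c))

# Vehicles start moving 3 s after each green of a (hypothetical) 60 s cycle:
print(estimate_cycle([3, 63, 123, 183], [45, 60, 75, 90]))  # 60
```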

  20. Rapid estimation of the vertebral body volume: a combination of the Cavalieri principle and computed tomography images

    International Nuclear Information System (INIS)

    Odaci, Ersan; Sahin, Buenyamin; Sonmez, Osman Fikret; Kaplan, Sueleyman; Bas, Orhan; Bilgic, Sait; Bek, Yueksel; Erguer, Hayati

    2003-01-01

    Objective: The exact volume of the vertebral body is necessary for the evaluation, treatment and surgical management of the related vertebral body, and for monitoring volume changes of the vertebral body in conditions such as infectious diseases of the vertebra and traumatic or non-traumatic fractures and deformities of the spine. Several studies have assessed vertebral body size based on different criteria of the spine using different techniques. However, we have not found any detailed study in the literature describing the combination of the Cavalieri principle and vertebral body volume estimation. Materials and methods: In the present study we describe a rapid, simple, accurate and practical technique for estimating the volume of the vertebral body. Two specimens including ten lumbar vertebrae were taken from cadavers and scanned in axial, sagittal and coronal section planes by a computed tomography (CT) machine. Consecutive sections at 5 and 3 mm thicknesses were used to estimate the total volume of the vertebral bodies by means of the Cavalieri principle. Furthermore, to evaluate inter-observer differences, the volume estimations were carried out by three performers. Results: There were no significant differences between the performers' estimates and the real volumes of the vertebral bodies (P>0.05), or between the performers' volume estimates (P>0.05). The section thickness and the section planes did not affect the accuracy of the estimates (P>0.05). A high correlation was seen between the performers' estimates and the real volumes of the vertebral bodies (r=0.881). Conclusion: We concluded that the combination of CT scanning with the Cavalieri principle is a direct and accurate technique that can be safely applied to estimate the volume of the vertebral body, with a mean workload of 5 min 11 s per vertebra
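
    The Cavalieri estimator used in this study is simply the section interval multiplied by the summed cross-sectional areas of the structure on consecutive sections. A minimal sketch; the slice areas below are invented, not measurements from the study:

```python
def cavalieri_volume(section_areas, section_thickness):
    """Cavalieri principle: V = t * (A_1 + A_2 + ... + A_n)."""
    return section_thickness * sum(section_areas)

# Hypothetical lumbar vertebral body: areas (mm^2) on consecutive 5 mm CT slices
areas_mm2 = [980, 1105, 1190, 1150, 1020]
print(cavalieri_volume(areas_mm2, 5.0), "mm^3")  # 27225.0 mm^3
```

    In practice the areas come from point counting or planimetry on each CT section; the estimator itself is this one multiplication.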

  1. Development of rapid urine analysis method for uranium

    Energy Technology Data Exchange (ETDEWEB)

    Kuwabara, J.; Noguchi, H. [Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan)

    2000-05-01

    ICP-MS has begun to spread in the field of individual monitoring for internal exposure as a very effective instrument for uranium analysis. Although ICP-MS has very high sensitivity, it requires more time than conventional analyses, such as fluorescence analysis, because the matrix must be sufficiently removed from a urine sample. To shorten the time required for the urine bioassay by ICP-MS, a rapid uranium analysis method using the ICP-MS connected with a flow injection system was developed. Since this method does not involve chemical separation steps, the time required is equivalent to that of the conventional analysis. A measurement test was carried out using 10 urine solutions prepared from a urine sample. The required volume of urine solution is 5 ml. The main chemical treatment is only digestion with 5 ml of nitric acid in a microwave oven to decompose organic matter and to dissolve suspended or precipitated matter. The microwave oven can digest 10 samples at once within an hour. The volume of the digested sample solution was adjusted to 10 ml. The prepared sample solutions were directly introduced into the ICP-MS without any chemical separation procedure. The ICP-MS was connected with a flow injection system and an autosampler. The flow injection system can minimize the matrix effects caused by salts dissolved in high-matrix solutions, such as urine samples without chemical separation, because it introduces a micro volume of sample solution into the ICP-MS. The ICP-MS detected uranium within 2 min/sample using the autosampler. The 10 solutions prepared from a urine sample showed an average uranium concentration in urine of 7.5 ng/l with 10% standard deviation. The detection limit is about 1 ng/l. The total time required was less than 4 hours for the analysis of 10 samples. In the series of measurements, no memory effect was observed. 
The present analysis method using the ICP-MS equipped with the flow injection system demonstrated that the shortening of time required on high

  2. Development of rapid urine analysis method for uranium

    International Nuclear Information System (INIS)

    Kuwabara, J.; Noguchi, H.

    2000-01-01

    ICP-MS has begun to spread in the field of individual monitoring for internal exposure as a very effective instrument for uranium analysis. Although ICP-MS has very high sensitivity, it requires more time than conventional analyses, such as fluorescence analysis, because the matrix must be sufficiently removed from a urine sample. To shorten the time required for the urine bioassay by ICP-MS, a rapid uranium analysis method using the ICP-MS connected with a flow injection system was developed. Since this method does not involve chemical separation steps, the time required is equivalent to that of the conventional analysis. A measurement test was carried out using 10 urine solutions prepared from a urine sample. The required volume of urine solution is 5 ml. The main chemical treatment is only digestion with 5 ml of nitric acid in a microwave oven to decompose organic matter and to dissolve suspended or precipitated matter. The microwave oven can digest 10 samples at once within an hour. The volume of the digested sample solution was adjusted to 10 ml. The prepared sample solutions were directly introduced into the ICP-MS without any chemical separation procedure. The ICP-MS was connected with a flow injection system and an autosampler. The flow injection system can minimize the matrix effects caused by salts dissolved in high-matrix solutions, such as urine samples without chemical separation, because it introduces a micro volume of sample solution into the ICP-MS. The ICP-MS detected uranium within 2 min/sample using the autosampler. The 10 solutions prepared from a urine sample showed an average uranium concentration in urine of 7.5 ng/l with 10% standard deviation. The detection limit is about 1 ng/l. The total time required was less than 4 hours for the analysis of 10 samples. In the series of measurements, no memory effect was observed. 
The present analysis method using the ICP-MS equipped with the flow injection system demonstrated that the shortening of time required on high

  3. A rapid method for counting nucleated erythrocytes on stained blood smears by digital image analysis

    Science.gov (United States)

    Gering, E.; Atkinson, C.T.

    2004-01-01

    Measures of parasitemia by intraerythrocytic hematozoan parasites are normally expressed as the number of infected erythrocytes per n erythrocytes and are notoriously tedious and time consuming to measure. We describe a protocol for generating rapid counts of nucleated erythrocytes from digital micrographs of thin blood smears that can be used to estimate intensity of hematozoan infections in nonmammalian vertebrate hosts. This method takes advantage of the bold contrast and relatively uniform size and morphology of erythrocyte nuclei on Giemsa-stained blood smears and uses ImageJ, a java-based image analysis program developed at the U.S. National Institutes of Health and available on the internet, to recognize and count these nuclei. This technique makes feasible rapid and accurate counts of total erythrocytes in large numbers of microscope fields, which can be used in the calculation of peripheral parasitemias in low-intensity infections.
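
    With per-field nucleus counts from ImageJ in hand, the parasitemia calculation itself is a simple ratio; a minimal sketch with invented counts (the function name and units are illustrative):

```python
def parasitemia_per_10k(infected_cells, erythrocytes_per_field):
    """Infected erythrocytes per 10,000 erythrocytes counted across all fields."""
    total = sum(erythrocytes_per_field)
    return 10000 * infected_cells / total

# 12 infected cells found while ImageJ counted these nuclei in 4 fields:
print(parasitemia_per_10k(12, [5200, 4800, 5100, 4900]))  # 6.0
```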

  4. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    ...based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD. SMPSP and SMTSP are service marks of Carnegie Mellon University. References: 1. Rickets, Chris A., "A TSP Software Maintenance Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S., "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad; Rickets...

  5. Rapid prototyping of soil moisture estimates using the NASA Land Information System

    Science.gov (United States)

    Anantharaj, V.; Mostovoy, G.; Li, B.; Peters-Lidard, C.; Houser, P.; Moorhead, R.; Kumar, S.

    2007-12-01

    The Land Information System (LIS), developed at the NASA Goddard Space Flight Center, is a functional Land Data Assimilation System (LDAS) that incorporates a suite of land models in an interoperable computational framework. LIS has been integrated into a computational Rapid Prototyping Capabilities (RPC) infrastructure. LIS consists of a core, a number of community land models, data servers, and visualization systems, integrated in a high-performance computing environment. The land surface models (LSMs) in LIS incorporate surface and atmospheric parameters of temperature, snow/water, vegetation, albedo, soil conditions, topography, and radiation. Many of these parameters are available from in-situ observations, numerical model analysis, and NASA, NOAA, and other remote sensing satellite platforms at various spatial and temporal resolutions. The computational resources available to LIS via the RPC infrastructure support e-Science experiments involving the global modeling of land-atmosphere studies at 1 km spatial resolution as well as regional studies at finer resolutions. The Noah Land Surface Model, available within LIS, is being used to rapidly prototype soil moisture estimates in order to evaluate the viability of other science applications for decision making purposes. For example, LIS has been used to further extend the utility of the USDA Soil Climate Analysis Network of in-situ soil moisture observations. In addition, LIS supports data assimilation capabilities that are used to assimilate remotely sensed soil moisture retrievals from the AMSR-E instrument onboard the Aqua satellite. The rapid prototyping of soil moisture estimates using LIS and their applications will be illustrated during the presentation.

  6. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods

  7. Rapid method for identification of transgenic fish zygosity

    Directory of Open Access Journals (Sweden)

    . Alimuddin

    2007-07-01

    Full Text Available Identification of zygosity in transgenic fish is normally achieved by PCR analysis with genomic DNA template extracted from the tissue of progenies derived by mating the transgenic fish with its wild-type counterpart.  This method needs relatively large amounts of fish material and is time- and labor-intensive.  New approaches addressing this problem could be of great help for fish biotechnologists.  In this experiment, we applied a quantitative real-time PCR (qr-PCR) method to analyze zygosity in a stable line of transgenic zebrafish (Danio rerio) carrying the masu salmon (Oncorhynchus masou) D6-desaturase-like gene.  The qr-PCR was performed using iQ SYBR Green Supermix in the iCycler iQ Real-time PCR Detection System (Bio-Rad Laboratories, USA).  Data were analyzed using the comparative cycle threshold method.  The results demonstrated a clear-cut identification of all transgenic fish (n=20) classified as homozygous or heterozygous.  Mating of those fish with wild-type fish revealed transgene transmission to the offspring following expected Mendelian laws.  Thus, we found the qr-PCR to be effective for rapid and precise determination of zygosity in transgenic fish.  This technique could be useful in the establishment of breeding programs for mass transgenic fish production and in experiments in which a zygosity effect could have a functional impact.  Keywords: quantitative real-time PCR; zygosity; transgenic fish; mass production.  ABSTRACT: Identification of zygosity in transgenic fish is usually carried out by PCR analysis with genomic DNA template extracted from tissue of fish obtained by crossing transgenic and normal fish.  This method requires a large number of fish, as well as time and labor.  New approaches to overcome this problem would be of great benefit to fisheries biotechnology researchers.  In this study, we used a quantitative real-time PCR (qr-PCR) method to
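
    Zygosity calling with the comparative cycle threshold method reduces to a ΔΔCt computation against a calibrator of known zygosity: a homozygote carries twice the transgene copies of a heterozygote, so its relative quantity comes out near 2. The Ct values and decision thresholds below are illustrative, not taken from the paper:

```python
def relative_copy_number(ct_transgene, ct_reference, calibrator_delta_ct):
    """Comparative Ct: 2 ** -ΔΔCt, with ΔΔCt = (Ct_tg - Ct_ref) - calibrator ΔCt."""
    return 2.0 ** -((ct_transgene - ct_reference) - calibrator_delta_ct)

def call_zygosity(rel, tol=0.3):
    """Classify against a heterozygous calibrator (rel ≈ 1 het, rel ≈ 2 homo)."""
    if abs(rel - 2.0) <= 2 * tol:
        return "homozygous"
    if abs(rel - 1.0) <= tol:
        return "heterozygous"
    return "indeterminate"

# A heterozygous calibrator had ΔCt = 4.0; a sample with ΔCt = 3.0
# carries double the relative transgene copy number:
rel = relative_copy_number(23.0, 20.0, 4.0)
print(rel, call_zygosity(rel))  # 2.0 homozygous
```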

  8. Empirical methods for estimating future climatic conditions

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from the model simulations of the forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreements. The premise underlying this approach for predicting anthropogenic climate change is based on associating the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the value of the increased mean temperature for a certain epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that relationships leading to the regional variations in air temperature and other meteorological elements could be deduced and interpreted based on use of empirical data describing climatic conditions for past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from different factors contributing to past climate changes than future changes and, in other cases, might result from the possible influences of changes in orography and geography on regional climatic conditions over time

  9. Diagnostic Performance of a Rapid Magnetic Resonance Imaging Method of Measuring Hepatic Steatosis

    Science.gov (United States)

    House, Michael J.; Gan, Eng K.; Adams, Leon A.; Ayonrinde, Oyekoya T.; Bangma, Sander J.; Bhathal, Prithi S.; Olynyk, John K.; St. Pierre, Tim G.

    2013-01-01

    Objectives Hepatic steatosis is associated with an increased risk of developing serious liver disease and other clinical sequelae of the metabolic syndrome. However, visual estimates of steatosis from histological sections of biopsy samples are subjective and reliant on an invasive procedure with associated risks. The aim of this study was to test the ability of a rapid, routinely available, magnetic resonance imaging (MRI) method to diagnose clinically relevant grades of hepatic steatosis in a cohort of patients with diverse liver diseases. Materials and Methods Fifty-nine patients with a range of liver diseases underwent liver biopsy and MRI. Hepatic steatosis was quantified firstly using an opposed-phase, in-phase gradient echo, single breath-hold MRI methodology and secondly, using liver biopsy with visual estimation by a histopathologist and by computer-assisted morphometric image analysis. The area under the receiver operating characteristic (ROC) curve was used to assess the diagnostic performance of the MRI method against the biopsy observations. Results The MRI approach had high sensitivity and specificity at all hepatic steatosis thresholds. Areas under ROC curves were 0.962, 0.993, and 0.972 at thresholds of 5%, 33%, and 66% liver fat, respectively. MRI measurements were strongly associated with visual (r2 = 0.83) and computer-assisted morphometric (r2 = 0.84) estimates of hepatic steatosis from histological specimens. Conclusions This MRI approach, using a conventional, rapid, gradient echo method, has high sensitivity and specificity for diagnosing liver fat at all grades of steatosis in a cohort with a range of liver diseases. PMID:23555650

  10. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.

  11. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    One of the major issues investors face in capital markets is deciding on an appropriate stock for investment and selecting an optimal portfolio. This process is carried out by assessing risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation are used as the risk measure. However, expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange, collected during the winter of 1392 and covering April 1388 to June 1393 (Iranian calendar). The results show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
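Scenario-based CVaR minimization of the kind described above is commonly cast as a linear program (the Rockafellar-Uryasev formulation). Below is a minimal sketch on synthetic return scenarios; all data and parameter values are hypothetical, not the study's:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
# Hypothetical monthly return scenarios for 4 assets (rows = scenarios).
R = rng.normal(0.01, 0.05, size=(200, 4)) + np.array([0.000, 0.002, 0.004, 0.006])
N, n = R.shape
beta = 0.95                     # CVaR confidence level
mu = R.mean(axis=0)
target = float(mu.mean())       # required expected portfolio return (feasible by construction)

# Decision vector x = [w (n weights), alpha (1 VaR level), u (N excess losses)]
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])
# Loss constraints: u_i >= -R_i.w - alpha  <=>  -R_i.w - alpha - u_i <= 0
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)
# Expected return at least `target`: -mu.w <= -target
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(N + 1)])])
b_ub = np.append(b_ub, -target)
# Budget constraint: weights sum to one, no short sales.
A_eq = np.concatenate([np.ones(n), np.zeros(N + 1)])[None, :]
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w = res.x[:n]                   # optimal portfolio weights; res.fun is the minimized CVaR
```

The LP has N + n + 1 variables, which is why scenario-based CVaR optimization scales well with standard solvers.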

  12. Petrifilm rapid S. aureus Count Plate method for rapid enumeration of Staphylococcus aureus in selected foods: collaborative study.

    Science.gov (United States)

    Silbernagel, K M; Lindberg, K G

    2001-01-01

    A rehydratable dry-film plating method for Staphylococcus aureus in foods, the 3M Petrifilm Rapid S. aureus Count Plate method, was compared with AOAC Official Method 975.55 (Staphylococcus aureus in Foods). Nine foods (instant nonfat dried milk, dry seasoned vegetable coating, frozen hash browns, frozen cooked chicken patty, frozen ground raw pork, shredded cheddar cheese, fresh green beans, pasta filled with beef and cheese, and egg custard) were analyzed for S. aureus by 13 collaborating laboratories. For each food tested, the collaborators received 8 blind test samples consisting of a control sample and 3 levels of inoculated test sample, each in duplicate. The mean log counts for the methods were comparable for pasta filled with beef and cheese; frozen hash browns; cooked chicken patty; egg custard; frozen ground raw pork; and instant nonfat dried milk. The repeatability and reproducibility variances of the Petrifilm Rapid S. aureus Count Plate method were similar to those of the standard method.

  13. Rapid methods for measuring radionuclides in food and environmental samples

    International Nuclear Information System (INIS)

    Perkins, Richard W.

    1995-01-01

    The application of ICP/mass spectrometry to the isotopic analysis of environmental samples, the use of drum assayers for measuring radionuclides in food, and a rapid procedure for the measurement of the transuranic elements and thorium, all performed at the Pacific Northwest Laboratory, are discussed.

  14. Rapid filling of pipelines with the SPH particle method

    NARCIS (Netherlands)

    Hou, Q.; Zhang, L.X.; Tijsseling, A.S.; Kruisbrink, A.C.H.

    2011-01-01

    The paper reports the development and application of a SPH (smoothed particle hydrodynamics) based simulation of rapid filling of pipelines, for which the rigid-column model is commonly used. In this paper the water-hammer equations with a moving boundary are used to model the pipe filling process,

  15. Rapid filling of pipelines with the SPH particle method

    NARCIS (Netherlands)

    Hou, Q.; Zhang, L.X.; Tijsseling, A.S.; Kruisbrink, A.C.H.

    2012-01-01

    The paper reports the development and application of a SPH (smoothed particle hydrodynamics) based simulation of rapid filling of pipelines, for which the rigid-column model is commonly used. In this paper the water-hammer equations with a moving boundary are used to model the pipe filling process,

  16. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculation error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones gives the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  17. Validity of rapid estimation of erythrocyte volume in the diagnosis of polycytemia vera

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, S.; Roedbro, P.

    1989-01-01

    In the diagnosis of polycytemia vera, estimation of erythrocyte volume (EV) from plasma volume (PV) and venous hematocrit (Hct_v) is usually thought unadvisable, because the ratio of whole-body hematocrit to venous hematocrit (f ratio) is higher in patients with splenomegaly than in normal subjects, and varies considerably between individuals. We determined the mean f ratio in 232 consecutive patients suspected of polycytemia vera (f̄ = 0.967; SD 0.048) and used it with each patient's PV and Hct_v to calculate an estimated normalised EV_n. With measured EV as a reference value, EV_n was investigated as a diagnostic test. By means of two cut-off levels, the EV_n values could be divided into EV_n elevated, EV_n not elevated (both with high predictive values), and an EV_n borderline group. The size of the borderline EV_n group ranged from 5% to 46%, depending on the position of the cut-off levels, i.e. on the efficiency demanded from the diagnostic test. EV can safely and rapidly be estimated from PV and Hct_v, if f̄ is determined from the relevant population, and if the results in an easily definable borderline range of EV_n values are supplemented by direct EV determination.
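The normalised estimate EV_n can be sketched from the quantities in the abstract, assuming the standard dilution relations (whole-body hematocrit Hct_b = f̄ · Hct_v, blood volume BV = PV / (1 - Hct_b), EV = BV - PV). These relations, and the patient values below, are illustrative assumptions rather than the paper's exact protocol:

```python
F_MEAN = 0.967  # population mean f ratio reported in the abstract

def estimate_ev(pv_ml, hct_v, f=F_MEAN):
    """Normalised erythrocyte volume from plasma volume (mL) and venous
    hematocrit (fraction), assuming Hct_b = f * Hct_v and EV = PV * Hct_b / (1 - Hct_b)."""
    hct_b = f * hct_v
    return pv_ml * hct_b / (1.0 - hct_b)

# Hypothetical patient: PV = 3000 mL, venous hematocrit 45%.
ev = estimate_ev(3000.0, 0.45)
```

In practice the resulting EV_n would then be compared against the two cut-off levels described in the abstract.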

  18. Rapid estimation of left ventricular ejection fraction in acute myocardial infarction by echocardiographic wall motion analysis

    DEFF Research Database (Denmark)

    Berning, J; Rokkedal Nielsen, J; Launbjerg, J

    1992-01-01

    Echocardiographic estimates of left ventricular ejection fraction (ECHO-LVEF) in acute myocardial infarction (AMI) were obtained by a new approach, using visual analysis of left ventricular wall motion in a nine-segment model. The method was validated in 41 patients using radionuclide...

  19. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J

    1967-01-01

    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  20. A Method for Rapid Measurement of Contrast Sensitivity on Mobile Touch-Screens

    Science.gov (United States)

    Mulligan, Jeffrey B.

    2016-01-01

    Touch-screen displays in cell phones and tablet computers are now pervasive, making them an attractive option for vision testing outside of the laboratory or clinic. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. A prototype has been developed for Apple Computer's iPad/iPod/iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing, http://hsi.arc.nasa.gov/groups/scanpath/research.php). Preliminary data show good agreement with estimates obtained from traditional psychophysical methods as well as newer rapid estimation techniques. Issues relating to device calibration are also discussed.
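A sweep stimulus of the kind described (spatial frequency varying along one axis, contrast along the other, in the style of a Campbell-Robson chart) can be generated as follows. The image parameters are hypothetical, not those of the cited prototype:

```python
import numpy as np

def sweep_grating(width=512, height=512, f0=0.5, f1=64.0, c0=0.005, c1=1.0):
    """Campbell-Robson-style chart: spatial frequency sweeps log-linearly
    left to right (f0..f1 cycles per image), contrast sweeps log-linearly
    bottom row (c0) to top row (c1). Returns values in [0, 1]."""
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)[:, None]
    # Instantaneous frequency grows exponentially; integrate it for the phase.
    k = np.log(f1 / f0)
    phase = 2 * np.pi * f0 * (np.exp(k * x) - 1) / k
    contrast = c0 * (c1 / c0) ** y          # low contrast at y = 0
    return 0.5 + 0.5 * contrast * np.sin(phase)

img = sweep_grating()
```

On such an image, the subject's swipe traces the contrast threshold as a function of spatial frequency.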

  1. System and method for correcting attitude estimation

    Science.gov (United States)

    Josselson, Robert H. (Inventor)

    2010-01-01

    A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
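The feed-forward idea (summing the integrated rates with a gain applied directly to the rates, with no error feedback) can be illustrated with a toy simulation. Assuming the rate measurement behaves as a first-order lag with time constant tau, adding tau times the measured rate to the integrated attitude cancels the lag, since r = r_m + tau * dr_m/dt implies integral(r) = integral(r_m) + tau * (r_m(t) - r_m(0)). This is a hypothetical sketch, not the patented implementation:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
true_rate = np.cos(t)                 # true angular rate; true attitude is sin(t)
true_att = np.sin(t)

# Hypothetical first-order sensor lag with time constant tau (Euler integration).
tau = 0.1
meas = np.zeros_like(t)
for i in range(1, len(t)):
    meas[i] = meas[i - 1] + dt * (true_rate[i - 1] - meas[i - 1]) / tau

naive = np.cumsum(meas) * dt                      # integrator only (non-compensated attitude)
compensated = naive + tau * (meas - meas[0])      # summer: add fixed-gain feed-forward term

err_naive = np.abs(naive - true_att).max()
err_comp = np.abs(compensated - true_att).max()
```

The compensated estimate removes the sensor-lag error without any feedback of an error signal, matching the feed-forward structure described in the abstract.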

  2. Control and estimation methods over communication networks

    CERN Document Server

    Mahmoud, Magdi S

    2014-01-01

    This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: an overall assessment of recent and current fault-tolerant control algorithms; treatment of several issues arising at the junction of control and communications; key concepts followed by their proofs and efficient computational methods for their implementation; and simulation examples (including TrueTime simulations) to...

  3. Comparison of methods for estimating carbon in harvested wood products

    International Nuclear Information System (INIS)

    Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel

    2009-01-01

    There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method) and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from the estimates obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and production approaches and tends to underestimate it with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in that order. A sensitivity analysis showed that using the 'best' available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)
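Tier-style HWP accounting is commonly built on a first-order-decay (FOD) pool model, in which each year's stock decays exponentially while new inflow is added. The recursion below follows that general form as a sketch; the half-life and inflow numbers are hypothetical, not values from the study:

```python
import math

def hwp_stock_fod(inflows, half_life_years, c0=0.0):
    """First-order-decay carbon stock in an HWP pool (IPCC-style sketch).
    `inflows` are annual carbon additions; decay rate k = ln(2) / half-life."""
    k = math.log(2) / half_life_years
    stock = c0
    stocks = []
    for inflow in inflows:
        # One-year update: decayed old stock plus the decayed-within-year inflow.
        stock = math.exp(-k) * stock + (1 - math.exp(-k)) / k * inflow
        stocks.append(stock)
    return stocks

# Hypothetical: constant inflow of 100 units/yr into a pool with a 35-year half-life.
s = hwp_stock_fod([100.0] * 300, half_life_years=35)
```

With constant inflow the stock approaches the steady-state value inflow / k, which is why long-lived product pools keep accumulating carbon for many decades.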

  4. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed for testing a nonlinear hypothesis with the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, has also been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator given by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus the nonlinear regression model with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models, with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
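A hedged sketch of a Wald test on an NLLS fit, using SciPy's Levenberg-Marquardt `curve_fit` as the NLLS estimator rather than the paper's own iterative procedure: for a scalar hypothesis H0: b = b0, the statistic W = (b_hat - b0)^2 / Var(b_hat) is compared with a chi-squared critical value. The model and data are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 80)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0, 0.05, x.size)   # true a = 2, b = 0.5

def model(x, a, b):
    return a * np.exp(b * x)

# NLLS fit; pcov approximates the estimator's covariance matrix.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.1])

# Wald statistic for the scalar hypothesis H0: b = 0.5.
b_hat, var_b = popt[1], pcov[1, 1]
W = (b_hat - 0.5) ** 2 / var_b
reject = W > chi2.ppf(0.95, df=1)       # reject H0 at the 5% level if True
```

For a vector-valued restriction g(theta) = 0, the same idea generalizes to W = g' (G pcov G')^{-1} g with G the Jacobian of g.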

  5. Novel method for quantitative estimation of biofilms

    DEFF Research Database (Denmark)

    Syal, Kirtimaan

    2017-01-01

    Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interphase, known as the pellicle, is extremely sensitive to the washing and staining steps of that assay, and early-phase biofilms are also prone to damage by these steps. In bacteria like mycobacteria, biofilm formation occurs largely at the liquid-air interphase and is thus susceptible to loss. In the proposed protocol, loss of this biofilm layer was prevented: in place of inverting and discarding the media, which can lead to the loss of the aerobic biofilm layer in the CV assay, the media was removed from the formed biofilm with a syringe and the biofilm layer was allowed to dry. The staining and washing steps were avoided, and an organic solvent, tetrahydrofuran (THF), was deployed...

  6. Development of a novel and simple method to evaluate disintegration of rapidly disintegrating tablets.

    Science.gov (United States)

    Hoashi, Yohei; Tozuka, Yuichi; Takeuchi, Hirofumi

    2013-01-01

    The purpose of this study was to develop and test a novel and simple method for evaluating the disintegration time of rapidly disintegrating tablets (RDTs) in vitro, since the conventional disintegration test described in the pharmacopoeia produces poor results because its environmental conditions differ from those of an actual oral cavity. Six RDTs prepared in our laboratory and 5 types of commercial RDTs were used as model formulations. Using our original apparatus, a good correlation was observed between in vivo and in vitro disintegration times by adjusting the height from which the solution was dropped to 8 cm and the weight of the load to 10 or 20 g. Properties of RDTs, such as the pattern of their disintegrating process, can be assessed by varying the load. These findings confirm that our proposed in vitro disintegration test apparatus is an excellent one for estimating the disintegration time and disintegration profile of RDTs.

  7. Novel Method for 5G Systems NLOS Channels Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Vladeta Milenkovic

    2017-01-01

    For the development of new 5G systems to operate in mm-wave bands, there is a need for accurate radio propagation modelling in these bands. In this paper a novel approach to NLOS channel parameter estimation is presented. Estimation is performed on the basis of the LCR (level crossing rate) performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of the ML and moment-method estimation approaches.

  8. Rapid analysis of key radionuclides in urine and estimation of internal dose for nuclear accident emergency

    International Nuclear Information System (INIS)

    Zhao Shuquan; Hu Heping; Wu Mingyu; Zhu Guoying; Huang Shibin; Liu Shiming

    2005-01-01

    Objective: To estimate the internal doses received by a Chinese visiting scholar in the Chernobyl accident. Methods: The contents of 134Cs and 137Cs in urine were measured using a Ge(Li) γ-spectrometer. The internal doses were estimated according to ICRP reports. A dose review for 131I was performed with reference to the UNSCEAR 2000 report. Results: The effective dose equivalents from 134Cs, 137Cs and 131I were 66 μSv, 88 μSv and 1728 μSv, respectively; their sum was 1.9 mSv. Conclusion: The internal dose from 131I was 10 times higher than that from 134Cs and 137Cs, so early estimation of the internal dose from 131I is significant in evaluating radiation injuries after a nuclear reactor accident. (authors)

  9. VHTRC experiment for verification test of H∞ reactivity estimation method

    International Nuclear Information System (INIS)

    Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko

    1996-02-01

    This experiment was performed at the VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits and the data processing software are described in detail. (author)

  10. Testing survey-based methods for rapid monitoring of child mortality, with implications for summary birth history data.

    Science.gov (United States)

    Brady, Eoghan; Hill, Kenneth

    2017-01-01

    Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-five mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data then available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). The mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD Surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. The mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates from the data available. We conclude this is due to the poor quality of most SBH data included in the study, which has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.

  11. Carbon footprint: current methods of estimation.

    Science.gov (United States)

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and its associated consequences. Following the rule that only the measurable is manageable, mensuration of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculation are still evolving, and it is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in the definitions and calculations of carbon footprints among studies. There are disagreements over the selection of gases and the order of emissions to be covered in footprint calculations. Greenhouse gas accounting standards are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Since carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications, its standardization at the international level is necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.

  12. Rapid radiometric method for detection of Salmonella in foods

    International Nuclear Information System (INIS)

    Stewart, B.J.; Eyles, M.J.; Murrell, W.G.

    1980-01-01

    A radiometric method for the detection of Salmonella in foods has been developed, based on Salmonella poly-H agglutinating serum preventing Salmonella from producing 14CO2 from [14C]dulcitol. The method will detect the presence or absence of Salmonella in a product within 30 h, compared with 4 to 5 days by routine culture methods. The method has been evaluated against a routine culture method using 58 samples of food, and the overall agreement was 91%. Five samples negative for Salmonella by the routine method were positive by the radiometric method; these may have been false positives, but the routine method may also have failed to detect Salmonella due to the presence of large numbers of lactose-fermenting bacteria, which hindered isolation of Salmonella colonies on the selective agar plates.

  13. THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL

    Directory of Open Access Journals (Sweden)

    Y.À. Korobeynikov

    2008-12-01

    Full Text Available The paper represents the author’s methods of estimating regional professional mobile radio market potential, that belongs to high-tech b2b markets. These methods take into consideration such market peculiarities as great range and complexity of products, technological constraints and infrastructure development for the technological systems operation. The paper gives an estimation of professional mobile radio potential in Perm region. This estimation is already used by one of the systems integrator for its strategy development.

  14. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...

  15. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
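State-vector augmentation of the kind compared above treats an unknown network parameter as an extra state with random-walk dynamics and lets a Kalman filter estimate it from measurement snapshots. A minimal scalar sketch (hypothetical system, not the paper's feeder model) where measurements z = p * u relate a known, varying operating point u to an unknown parameter p:

```python
import numpy as np

rng = np.random.default_rng(2)
true_p = 0.8                        # hypothetical unknown line parameter
u = rng.uniform(0.5, 1.5, 200)      # known, varying operating points (the "snapshots")
z = true_p * u + rng.normal(0, 0.02, u.size)    # noisy measurements

# Kalman filter on the augmented (here: parameter-only) state p,
# modeled as a random walk with small process noise Q.
p_hat, P = 0.0, 1.0                 # initial estimate and variance
Q, R = 1e-8, 0.02 ** 2
for uk, zk in zip(u, z):
    P = P + Q                       # predict: random-walk parameter dynamics
    H = uk                          # measurement matrix for this snapshot
    K = P * H / (H * P * H + R)     # Kalman gain
    p_hat = p_hat + K * (zk - H * p_hat)
    P = (1 - K * H) * P
```

Combining many snapshots taken at different operating points is what makes the parameter observable despite low measurement redundancy, matching the motivation given in the abstract.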

  16. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

    A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require statistical knowledge of the channel in advance and avoids the inversion of a large-dimension matrix by using the fast Fourier transform (FFT), so the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and of conventional LMMSE estimation are derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
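A minimal sketch of the conventional LMMSE estimator that the paper accelerates (not the proposed FFT-based variant): with unit-power pilots, the least-squares estimate is smoothed by the channel covariance, H_lmmse = R (R + sigma^2 I)^{-1} H_ls. All system parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 64, 8                         # subcarriers, channel taps
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N)

# Random channel with an exponential tap power-delay profile.
p = np.exp(-np.arange(L) / 3.0); p /= p.sum()
taps = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.sqrt(p / 2)
H = F @ taps                         # true frequency response

# Channel covariance across subcarriers implied by the tap power profile.
R = F @ np.diag(p) @ F.conj().T

sigma2 = 0.1
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(sigma2 / 2)
H_ls = H + noise                     # least-squares estimate with unit pilots

# Conventional LMMSE smoothing (the N x N solve is the cost the paper avoids).
H_lmmse = R @ np.linalg.solve(R + sigma2 * np.eye(N), H_ls)

mse_ls = np.mean(np.abs(H_ls - H) ** 2)
mse_lmmse = np.mean(np.abs(H_lmmse - H) ** 2)
```

The N x N matrix solve here is exactly the large-dimension inversion whose cost motivates the FFT-based reformulation in the abstract.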

  17. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed, the major point of which is how to use censored data efficiently. Generally there are three kinds of approach to estimating a reliability function in a nonparametric way: the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limitations, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have uniquely existing maximum likelihood estimators. The MLE of the new method is therefore derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.
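The Product-Limit (Kaplan-Meier) estimator referred to above multiplies, at each failure time, the fraction of units at risk that survive; censored units leave the risk set without contributing a factor. A minimal sketch on a small hypothetical data set:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the reliability function
    from possibly right-censored data (events: 1 = failure, 0 = censored).
    Returns a list of (time, S(t)) pairs at the distinct failure times."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    at_risk = len(times)
    s = 1.0
    steps = []
    for t in np.unique(times):
        d = np.sum((times == t) & (events == 1))   # failures at time t
        c = np.sum(times == t)                      # all units leaving the risk set at t
        if d > 0:
            s *= 1.0 - d / at_risk                 # survival factor (at_risk - d) / at_risk
            steps.append((t, s))
        at_risk -= c
    return steps

# Hypothetical example: failures at t = 3, 5, 8; censoring at t = 4 and 9.
steps = kaplan_meier([3, 4, 5, 8, 9], [1, 0, 1, 1, 0])
```

Here S(3) = 4/5, S(5) = 4/5 * 2/3, S(8) = 4/5 * 2/3 * 1/2: the censored unit at t = 4 shrinks the risk set without adding a factor, which is how the PL method uses censored information.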

  18. Simplified Method for Rapid Purification of Soluble Histones

    Directory of Open Access Journals (Sweden)

    Nives Ivić

    2016-06-01

    Functional and structural studies of histone-chaperone complexes, nucleosome modifications, and their interactions with remodelers and regulatory proteins rely on obtaining recombinant histones from bacteria. In the present study, we show that co-expression of Xenopus laevis histone pairs leads to production of a soluble H2A-H2B heterodimer and a soluble (H3-H4)2 heterotetramer. The soluble histone complexes are purified by simple chromatographic techniques. The obtained H2A-H2B dimer and (H3-H4)2 tetramer are proficient in histone chaperone binding and in histone octamer and nucleosome formation. Our optimized protocol enables rapid purification of multiple soluble histone variants with a remarkably high yield and simplifies histone octamer preparation. We expect that this simple approach will contribute to histone chaperone and chromatin research.

  19. Evaluating an alternative method for rapid urinary creatinine determination

    Science.gov (United States)

    Creatinine (CR) is an endogenously produced chemical routinely assayed in urine specimens to assess kidney function and sample dilution. The industry-standard method for CR determination, known as the kinetic Jaffe (KJ) method, relies on an exponential rate of a colorimetric change,...

  20. Rapid and Reliable HPLC Method for the Determination of Vitamin ...

    African Journals Online (AJOL)

    Purpose: To develop and validate an accurate, sensitive and reproducible high performance liquid chromatographic (HPLC) method for the quantitation of vitamin C in pharmaceutical samples. Method: The drug and the standard were eluted from a Superspher RP-18 column (250 mm x 4.6 mm, 10 μm particle size) at 20 °C.

  1. A rapid method for determining chlorobenzenes in dam water systems

    African Journals Online (AJOL)

    A method using direct immersion solid phase microextraction (DI-SPME) coupled to gas chromatography equipped with a flame ionisation detector (GC-FID) was developed for the analysis of 7 chlorinated benzenes in dam water. The main parameters affecting the DI-SPME process were optimised. The optimised method ...

  2. RESEARCH NOTE A Universal, rapid, and inexpensive method for ...

    Indian Academy of Sciences (India)

    Navya

    success of the extracted gDNA when submitted to post-PCR analysis. ... The application of the universal method for DNA extraction is not restricted to routine ... On the other hand, the universal method has proven its feasibility to be utilized.

  3. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques that exploit the invariance property in the time domain, is first used to estimate the multiple pitch frequencies of the harmonic signals. Using the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained by exploiting the shift invariance structure in the spatial domain. Compared to the existing state-of-the-art algorithms, the proposed method, which requires no 2-D search, is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed...

  4. Developing the RIAM method (rapid impact assessment matrix) in the context of impact significance assessment

    International Nuclear Information System (INIS)

    Ijaes, Asko; Kuitunen, Markku T.; Jalava, Kimmo

    2010-01-01

    In this paper the applicability of the RIAM method (rapid impact assessment matrix) is evaluated in the context of impact significance assessment. The methodological issues considered in the study are: 1) to test the possibilities of enlarging the scoring system used in the method, and 2) to compare the significance classifications of RIAM and unaided decision-making to estimate the consistency between these methods. The data used consisted of projects for which funding had been applied for via the European Union's Regional Development Trust in the area of Central Finland. Cases were evaluated with respect to their environmental, social and economic impacts using an assessment panel. The results showed that the scoring framework used in RIAM could be modified according to the problem situation at hand, which enhances its application potential. However, the changes made in criteria B did not significantly affect the final ratings of the method, which indicates the high importance of criteria A1 (importance) and A2 (magnitude) to the overall results. The significance classes obtained by the two methods diverged notably. In general, the ratings given by RIAM tended to be smaller compared to intuitive judgement, implying that the RIAM method may be somewhat conservative in character.

  5. A new method for rapid determination of carbohydrate and total carbon concentrations using UV spectrophotometry.

    Science.gov (United States)

    Albalasmeh, Ammar A; Berhe, Asmeret Asefaw; Ghezzehei, Teamrat A

    2013-09-12

    A new UV spectrophotometry based method for determining the concentration and carbon content of carbohydrate solutions was developed. This method depends on the inherent UV absorption potential of hydrolysis byproducts of carbohydrates formed by reaction with concentrated sulfuric acid (furfural derivatives). The proposed method is a major improvement over the widely used Phenol-Sulfuric Acid method developed by DuBois, Gilles, Hamilton, Rebers, and Smith (1956). In the old method, furfural is allowed to develop color by reaction with phenol and its concentration is detected by visible light absorption. Here we present a method that eliminates the coloration step and avoids the health and environmental hazards associated with phenol use. In addition, avoidance of this step was shown to improve measurement accuracy while significantly reducing waiting time prior to light absorption reading. The carbohydrates for which concentrations and carbon content can be reliably estimated with this new rapid Sulfuric Acid-UV technique include: monosaccharides, disaccharides and polysaccharides with very high molecular weight.
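Quantitation with a UV absorbance method like the one above ordinarily rests on a linear (Beer-Lambert) calibration against standards. The sketch below is not from the record; it is a minimal illustration of fitting a calibration line by least squares and inverting it, with made-up glucose standards.

```python
# Hedged sketch: linear calibration (Beer-Lambert) relating UV absorbance to
# carbohydrate concentration, as typically used to quantify samples after a
# sulfuric-acid hydrolysis step. Standards and values are illustrative only.

def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def concentration(absorbance, a, b):
    """Invert the calibration line to get concentration from absorbance."""
    return (absorbance - b) / a

# Illustrative glucose standards (mg/L) and their measured UV absorbances.
conc = [0.0, 20.0, 40.0, 60.0, 80.0]
absb = [0.02, 0.21, 0.40, 0.59, 0.78]
a, b = fit_line(conc, absb)
print(round(concentration(0.50, a, b), 1))
```

An unknown sample's absorbance is then converted to concentration through the fitted line; carbon content would follow from the known carbon fraction of the carbohydrate.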

  6. Application of two electrical methods for the rapid assessment of freezing resistance in Salix eriocephala

    Energy Technology Data Exchange (ETDEWEB)

    Tsarouhas, V.; Kenney, W.A.; Zsuffa, L. [University of Toronto, Ontario (Canada). Faculty of Forestry

    2000-09-01

    The importance of early selection of frost-resistant Salix clones makes it desirable to select a rapid and accurate screening method for assessing freezing resistance among several genotypes. Two electrical methods, stem electrical impedance to 1 and 10 kHz alternating current and electrolyte leakage of leaf tissue, were evaluated for detecting freezing resistance in three North American Salix eriocephala Michx. clones after subjecting them to five different freezing temperatures (-1, -2, -3, -4, and -5 deg C). Differences in the electrical impedance at 1 and 10 kHz, and in the ratio of the impedance at the two frequencies (low/high), before and after the freezing treatment (DZ{sub low}, DZ{sub high}, and DZ{sub ratio}, respectively) were estimated. Electrolyte leakage was expressed as relative conductivity (RC{sub t}) and index of injury (IDX{sub t}). Results from the two methods, obtained two days after the freezing stress, showed that both electrical methods were able to detect freezing injury in S. eriocephala. However, the electrolyte leakage method detected injury at more levels of freezing stress (-3, -4, and -5 deg C) than the impedance method (-4 and -5 deg C), it assessed clonal differences in S. eriocephala freezing resistance, and it correlated best with the visually assessed freezing injury. No significant impedance or leakage changes were found after the -1 and -2 deg C freezing temperatures. (author)
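The electrolyte-leakage variables named in the record (relative conductivity and an index of injury) are commonly computed as below. This is a hedged sketch in the style of Flint's index of injury, not the study's exact formulas; conductivity readings are illustrative.

```python
# Hedged sketch of electrolyte-leakage variables: relative conductivity (RC_t)
# and an index of injury (IDX_t) in the style of Flint et al. (1967).
# Sample values are illustrative.

def relative_conductivity(c_after_stress, c_total):
    """Leakage after the freezing treatment relative to total leakage
    measured after killing the tissue (e.g. by autoclaving)."""
    return c_after_stress / c_total

def index_of_injury(rc_treated, rc_control):
    """Percent injury, correcting for the background leakage of
    unfrozen control tissue."""
    return 100.0 * (rc_treated - rc_control) / (1.0 - rc_control)

rc_ctrl = relative_conductivity(8.0, 80.0)   # unfrozen control
rc_frz = relative_conductivity(44.0, 80.0)   # after a freezing treatment
print(round(index_of_injury(rc_frz, rc_ctrl), 1))
```

Correcting for control leakage is what lets the index discriminate freezing injury from the tissue's ordinary background leakage.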

  7. Reverse survival method of fertility estimation: An evaluation

    Directory of Open Access Journals (Sweden)

    Thomas Spoorenberg

    2014-07-01

    Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth history to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates by using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency to the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the difference implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that is 'reverse survived' determines the consistency of the estimates. The contribution of the method for the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data.
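The core operation of reverse survival can be sketched in a few lines: children enumerated at age x in a census are divided by a childhood survival ratio to recover the births x years earlier. This is a hedged illustration, not the FE_reverse_4.xlsx template; the counts and survival ratios below are invented, not from a real life table.

```python
# Hedged sketch of the core reverse-survival step: children counted at age x
# are "reverse survived" to the births that occurred x years before the
# census. Counts and survival ratios are illustrative.

def reverse_survive(count_age_x, survival_ratio_x):
    """Estimated births x years before the census."""
    return count_age_x / survival_ratio_x

# Children counted at ages 0..4 and illustrative probabilities of
# surviving from birth to each age.
counts = [95000, 93000, 92000, 91000, 90000]
surv = [0.965, 0.955, 0.950, 0.946, 0.943]

# births[x] estimates annual births x years before the census; relating
# them to women of reproductive age (not shown) yields fertility rates.
births = [reverse_survive(c, s) for c, s in zip(counts, surv)]
print(round(births[0]))
```

As the record notes, the result is driven mainly by the quality of the enumerated age-sex counts, since modest errors in the survival ratios change the recovered births only slightly.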

  8. Rapid separation method for {sup 237}Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of {sup 237}Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  9. A simple and rapid molecular method for Leptospira species identification

    NARCIS (Netherlands)

    Ahmed, Ahmed; Anthony, Richard M.; Hartskeerl, Rudy A.

    2010-01-01

    Serological and DNA-based classification systems show only little correlation. Currently, serological and molecular methods for characterizing Leptospira are complex and costly, restricting their world-wide distribution and use. Ligation mediated amplification combined with microarray analysis...

  10. Rapid and inexpensive method for isolating plasmid DNA

    International Nuclear Information System (INIS)

    Aljanabi, S. M.; Al-Awadi, S. J.; Al-Kazaz, A. A.; Baghdad Univ.

    1997-01-01

    A small-scale and economical method for isolating plasmid DNA from bacteria is described. The method provides DNA of suitable quality for most DNA manipulation techniques. This DNA can be used for restriction endonuclease digestion, Southern blot hybridization, nick translation and end labeling of DNA probes, Polymerase Chain Reaction (PCR)-based techniques, transformation, DNA cycle-sequencing, and the chain-termination method for DNA sequencing. The entire procedure is adapted to 1.5 ml microfuge tubes and takes approximately 30 min. The DNA isolated by this method has the same purity as that produced by the CTAB and cesium chloride precipitation and purification procedures, respectively. The two previous methods require many hours to obtain the final product and require the use of very expensive equipment such as an ultracentrifuge. This method is well suited for the isolation of plasmid DNA from a large number of bacterial samples in a very short time and at low cost, in laboratories where chemicals, expensive equipment and finance are limiting factors in conducting molecular research. (authors). 11 refs.

  11. Consumptive use of upland rice as estimated by different methods

    International Nuclear Information System (INIS)

    Chhabda, P.R.; Varade, S.B.

    1985-01-01

    The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif), as estimated by the modified Penman, radiation, pan-evaporation and Hargreaves methods, showed a variation from the computed consumptive use estimated by the gravimetric method. The variability increased with an increase in the irrigation interval, and decreased with an increase in the level of N applied. The average variability was least in the pan-evaporation method, which could reliably be used for estimating the water requirement of upland rice if percolation losses are considered.

  12. MASS SPECTROMETRY PROTEOMICS METHOD AS A RAPID SCREENING TOOL FOR BACTERIAL CONTAMINATION OF FOOD

    Science.gov (United States)

    2017-06-01

    A mass spectrometry proteomics method (MSPM) was evaluated as a rapid screening tool for bacterial contamination of food. This blinded pilot study assessed the ability of the MSPM to correctly classify whether or not food samples were contaminated with Salmonella enterica serotype Newport.

  13. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a preestablished sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model based methods, which tend to...

  14. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    Full Text Available A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM, where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. The Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach of obtaining software reliability value is proposed in this paper.
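The record builds on a software reliability growth model in which cumulative failures follow a non-homogeneous Poisson process. A common concrete instance (not necessarily the record's exact scheme, which is Bayesian with test-case covariates) is the Goel-Okumoto form, sketched below with illustrative parameters.

```python
import math

# Hedged sketch of a basic NHPP software-reliability growth model
# (Goel-Okumoto form): the expected cumulative number of failures by test
# time t is m(t) = a * (1 - exp(-b*t)). The record's Bayesian estimation
# with covariates is not reproduced; parameter values are illustrative.

def mean_failures(t, a, b):
    """Expected cumulative failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def remaining_defects(t, a, b):
    """Expected defects still latent after testing to time t."""
    return a - mean_failures(t, a, b)

a, b = 50.0, 0.02   # total expected defects, per-unit-time detection rate
print(round(remaining_defects(100.0, a, b), 2))
```

The remaining-defect count a - m(t) is the quantity the record ties to reactor trip functions, and a reliability figure can then be derived from the predicted failure intensity.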

  15. Rapid, cost-effective liquid chromatograghic method for the ...

    African Journals Online (AJOL)

    GRACE

    2006-07-03

    Jul 3, 2006 ... The method was validated and used for pharmacokinetic studies. Key words: Metronidazole ... by the intrinsic analytical properties of the drug molecule ... In addition, such factors as sample size ... account, since these affect the reliability of the quantitation. ... phase and ion-pair high–performance liquid.

  16. Rapid multi-residue method for the determination of pesticide ...

    African Journals Online (AJOL)

    Exposure to pesticides can represent a potential risk to humans. Agricultural workers are at risk of chronic toxicity. Hence, the evaluation of pesticide residues in their blood gives an indication about the extent of exposure and help in assessing adverse health effects. The aim of our study was to develop analytical method for ...

  17. A universal, rapid, and inexpensive method for genomic DNA ...

    Indian Academy of Sciences (India)

    MOHAMMED BAQUR SAHIB A. AL-SHUHAIB

    gels, containing 7% glycerol, and 1×TBE buffer. The gels were run under 200 .... Inc. Germany, GeneaidTM DNA Isolation Kit, Geneaid. Biotech., New Taipei City, .... C. L. and Arsenos G. 2015 Comparison of eleven methods for genomic DNA ...

  18. Rapid prototyping methods for the manufacture of fuel cells

    Directory of Open Access Journals (Sweden)

    Dudek Piotr

    2016-01-01

    The potential for the application of this method for the manufacture of metallic bipolar plates (BPP) for use in proton exchange membrane fuel cells (PEMFCs) is presented and discussed. Special attention is paid to the fabrication of light elements for the construction of PEMFC stacks designed for mobile applications such as aviation technology and unmanned aerial vehicles (UAVs).

  19. The Brazilian version of the 20-item rapid estimate of adult literacy in medicine and dentistry.

    Science.gov (United States)

    Cruvinel, Agnes Fátima P; Méndez, Daniela Alejandra C; Oliveira, Juliana G; Gutierres, Eliézer; Lotto, Matheus; Machado, Maria Aparecida A M; Oliveira, Thaís M; Cruvinel, Thiago

    2017-01-01

    The misunderstanding of specific vocabulary may hamper patient-health provider communication. The 20-item Rapid Estimate of Adult Literacy in Medicine and Dentistry (REALMD-20) was constructed to screen patients by their ability to read medical/dental terminologies in a simple and rapid way. This study aimed to perform the cross-cultural adaptation and validation of this instrument for its application in Brazilian dental patients. The cross-cultural adaptation was performed through conceptual equivalence, verbatim translation, semantic, item and operational equivalence, and back-translation. After that, 200 participants responded to the adapted version of the REALMD-20, the Brazilian version of the Rapid Estimate of Adult Literacy in Dentistry (BREALD-30), ten questions of the Brazilian National Functional Literacy Index (BNFLI), and a questionnaire with socio-demographic and oral health-related questions. Statistical analysis was conducted to assess the reliability and validity of the REALMD-20 (P < 0.05). The sample was composed predominantly of women (55.5%) and white/brown (76%) individuals, with an average age of 39.02 years (±15.28). The average REALMD-20 score was 17.48 (±2.59, range 8-20). It displayed good internal consistency (Cronbach's alpha = 0.789) and test-retest reliability (ICC = 0.73; 95% CI [0.66-0.79]). In the exploratory factor analysis, six factors were extracted according to Kaiser's criterion. Factor I (eigenvalue = 4.53), comprising four terms ("Jaundice", "Amalgam", "Periodontitis" and "Abscess"), accounted for 25.18% of total variance, while factor II (eigenvalue = 1.88), comprising four other terms ("Gingivitis", "Instruction", "Osteoporosis" and "Constipation"), accounted for 10.46% of total variance. The first four factors accounted for 52.1% of total variance. The REALMD-20 was positively correlated with the BREALD-30 (Rs = 0.73, P < 0.001) and the BNFLI (Rs = 0.60, P < 0.001). ...

  20. Population Estimation with Mark and Recapture Method Program

    International Nuclear Information System (INIS)

    Limohpasmanee, W.; Kaewchoung, W.

    1998-01-01

    Population estimation provides important information required for insect control planning, especially for control with the sterile insect technique (SIT). Moreover, it can be used to evaluate the efficiency of a control method. Due to the complexity of the calculations, population estimation with mark and recapture methods has not been widely used. Therefore, this program was developed in QBasic with the purpose of making the estimation accurate and easier. The program covers six methods, following Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's methods. The results are compared with those of the original methods and found to be accurate and easier to apply.
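The simplest member of the mark-recapture family can be stated in one formula. The sketch below shows the two-sample Lincoln-Petersen estimator with Chapman's small-sample correction; it is an illustration of the principle only, not one of the six multi-sample schemes the record's program implements, and the counts are invented.

```python
# Hedged sketch of the simplest mark-recapture estimator (Lincoln-Petersen,
# Chapman's nearly unbiased form). The record's program covers six more
# elaborate schemes (Seber, Jolly-Seber, Jackson, Ito, Hamada, Yamamura);
# this two-sample case only illustrates the principle. Counts are invented.

def chapman_estimate(marked, captured, recaptured):
    """Population size estimate from a two-sample experiment.

    marked:     animals marked and released in the first sample (M)
    captured:   animals taken in the second sample (C)
    recaptured: marked animals found in the second sample (R)
    """
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# Mark 100 insects, capture 80 later, of which 20 carry marks.
print(round(chapman_estimate(100, 80, 20)))
```

The intuition: the fraction of marked animals in the second sample estimates the fraction the first sample represented of the whole population.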

  1. Ore reserve estimation: a summary of principles and methods

    International Nuclear Information System (INIS)

    Marques, J.P.M.

    1985-01-01

    The mining industry has experienced substantial improvements with the increasing utilization of computerized and electronic devices throughout the last few years. In the ore reserve estimation field, the main methods have undergone recent advances in order to improve their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide: Conventional, Statistical and Geostatistical, and provides a detailed description and comparative analysis of each. The Conventional Methods are the oldest, least complex and most widely employed ones. The Geostatistical Methods are the most recent, most precise and most complex ones. The Statistical Methods are intermediate to the others in complexity, diffusion and chronological order. (D.J.M.) [pt

  2. The Most Probable Limit of Detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, E.R. van den

    Classical microbiological methods nowadays have unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industries, but their implementation in the pharmaceutical industry is hampered by, for instance, stringent regulations on...

  3. The most probable limit of detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, van den E.R.

    2010-01-01

    Classical microbiological methods nowadays have unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industries, but their implementation in the pharmaceutical industry is hampered by, for instance, stringent regulations on...

  4. Methods for design flood estimation in South Africa | Smithers ...

    African Journals Online (AJOL)

    The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of additional data ...

  5. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  6. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  7. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724
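The internal-consistency statistic reported above, Cronbach's alpha, has a short closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below computes it on invented item scores, not the BREALD-30 data.

```python
# Hedged sketch of Cronbach's alpha, the internal-consistency statistic
# reported for the BREALD-30. Rows are respondents, columns are items;
# the scores below are illustrative, not the study's data.

def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                                   # number of items
    item_vars = [variance([r[j] for r in rows]) for j in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(scores), 3))
```

Alpha rises when items covary (respondents who score high on one item score high on the others), which is why it is read as a measure of internal consistency.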

  8. Optimal control methods for rapidly time-varying Hamiltonians

    International Nuclear Information System (INIS)

    Motzoi, F.; Merkel, S. T.; Wilhelm, F. K.; Gambetta, J. M.

    2011-01-01

    In this article, we develop a numerical method to find optimal control pulses that accounts for the separation of timescales between the variation of the input control fields and the applied Hamiltonian. In traditional numerical optimization methods, these timescales are treated as being the same. While this approximation has had much success, in applications where the input controls are filtered substantially or mixed with a fast carrier, the resulting optimized pulses have little relation to the applied physical fields. Our technique remains numerically efficient in that the dimension of our search space is only dependent on the variation of the input control fields, while our simulation of the quantum evolution is accurate on the timescale of the fast variation in the applied Hamiltonian.

  9. Time evolution of the wave equation using rapid expansion method

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2010-01-01

    Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.

  10. Time evolution of the wave equation using rapid expansion method

    KAUST Repository

    Pestana, Reynam C.

    2010-07-01

    Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.

  11. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
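The model-based estimation the record describes can be sketched concretely: a frequency converter supplies rotational-speed and shaft-power estimates, and a published power-vs-flow curve at nominal speed, scaled by the pump affinity laws (Q ~ n, P ~ n^3), is inverted to recover the flow rate. This is a generic sketch under those assumptions, not the paper's specific model; the curve coefficients and operating point are invented.

```python
# Hedged sketch of sensorless pump flow estimation: invert an illustrative
# nominal-speed power-vs-flow curve, using affinity-law scaling to refer
# the measured power to nominal speed. Not any real pump's curve.

def power_at_nominal(q):
    """Illustrative shaft power [kW] vs flow [l/s] at nominal speed."""
    return 2.0 + 0.30 * q - 0.002 * q ** 2

def estimate_flow(p_meas, n, n_nom, q_max=70.0):
    """Estimate flow from measured power p_meas [kW] at speed n [rpm]."""
    p_scaled = p_meas * (n_nom / n) ** 3      # refer power to nominal speed
    lo, hi = 0.0, q_max                       # rising part of the curve only
    for _ in range(60):                       # bisection on power_at_nominal
        mid = 0.5 * (lo + hi)
        if power_at_nominal(mid) < p_scaled:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * (n / n_nom)      # scale flow back to speed n

# Measured 4.55 kW at 1200 rpm on a pump with 1450 rpm nominal speed.
print(round(estimate_flow(4.55, 1200.0, 1450.0), 1))
```

Bisection is used because the illustrative power curve is monotonic over the assumed operating range; a real implementation would also need the curve's validity limits and the motor-side power estimate's accuracy, which the record identifies as sufficient for auditing purposes.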

  12. A Method for Estimation of Death Tolls in Disastrous Earthquake

    Science.gov (United States)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls caused by disastrous earthquakes are among the most important items of earthquake damage and loss. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, this not only makes emergency programs and disaster management more effective, but also supplies critical information for planning and managing disaster rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps, and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage habits of buildings, distribution of population, and socio-economic situations, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is at present the greatest in the world, and complete seismic data are readily obtained from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake. Therefore, it becomes possible to estimate earthquake death tolls in Taiwan based on this preliminary information. First, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) at each seismic station to give a PGA Index, using the mainshock data of the Chi-Chi earthquake.
    The PGA Index and the geographical coordinates of the individual stations are then used, with the Kriging interpolation method and GIS software, to supply the distribution of iso-seismic intensity contours in all districts, resolving the problem of districts that contain no seismic station. The population density depends on
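The station-level quantity is simple enough to sketch: the PGA Index is the arithmetic mean of the three ground-acceleration components, which is then spatially interpolated to districts without a station. The sketch below uses inverse-distance weighting as a simpler stand-in for the Kriging used in the study; coordinates and accelerations are invented.

```python
import math

# Hedged sketch: per-station PGA Index (arithmetic mean of the three PGA
# components) plus inverse-distance-weighted interpolation to a point with
# no station. IDW stands in for the study's Kriging; values are invented.

def pga_index(east_west, north_south, vertical):
    """Arithmetic mean of the three PGA components at one station."""
    return (east_west + north_south + vertical) / 3.0

def idw(target, stations, power=2.0):
    """Interpolate a PGA index at `target` from (x, y, value) stations."""
    num = den = 0.0
    for x, y, v in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v                  # target coincides with a station
        w = d ** -power
        num += w * v
        den += w
    return num / den

stations = [
    (0.0, 0.0, pga_index(250.0, 240.0, 110.0)),
    (10.0, 0.0, pga_index(130.0, 110.0, 60.0)),
]
print(idw((5.0, 0.0), stations))
```

Kriging additionally models the spatial covariance of the field and provides an error estimate, which is why the study prefers it to a fixed-weight scheme like IDW.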

  13. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    International Nuclear Information System (INIS)

    Thies, C.; Geddis, A.M.; Guzman, A.G.

    1996-09-01

    A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant volume system, while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 degrees C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length x 0.11 m diameter) and can measure solid volume greater than 2.20 cm³ with less than 1% error.
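The ideal-gas calculation behind such a two-chamber pycnometer can be written as a mole balance: gas in the pressurized reference chamber plus gas in the sample chamber's void space before the valve opens equals the gas in the combined volume afterwards. The sketch below is a generic illustration under that assumption, not the instrument's actual data-reduction code; all volumes, pressures, and temperatures are invented.

```python
# Hedged sketch of the ideal-gas mole balance behind a two-chamber gas
# pycnometer: helium expands from a pressurized reference chamber into the
# sample chamber, and the solid volume follows from conservation of gas.
# Absolute pressures (kPa) and temperatures (K) assumed; values invented.

def solid_volume(v_ref, v_cell, p1, t1, p0, p2, t2):
    """Solid volume sealed in the sample cell.

    Before the valve opens: reference chamber at (p1, t1); sample-chamber
    gas space (v_cell - v_s) at (p0, t1). After expansion the system is at
    (p2, t2). Equating moles of ideal gas before and after:
        p1*v_ref/t1 + p0*(v_cell - v_s)/t1 = p2*(v_ref + v_cell - v_s)/t2
    """
    a, b, c = p1 / t1, p0 / t1, p2 / t2
    return v_cell - v_ref * (a - c) / (c - b)

# 500 cm^3 chambers: helium at 200 kPa expands against 100 kPa and
# settles at 158.82 kPa; temperature is monitored before and after.
v_s = solid_volume(500.0, 500.0, 200.0, 295.0, 100.0, 158.82, 295.0)
print(round(v_s, 1))
```

Monitoring temperature and carrying it through the p/T terms is what lets the instrument hold the stated sub-1% error despite small drifts between measurements.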

  14. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    Energy Technology Data Exchange (ETDEWEB)

    Thies, C. (ed.); Geddis, A.M.; Guzman, A.G.; and others

    1996-09-01

    A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant-volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length x 0.11 m diameter) and can measure solid volume greater than 2.20 cm³ with less than 1% error.

  15. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using mutual information theory. The analytical expression of the BER is then simply given in terms of the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
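A minimal sketch of the Gaussian-mixture idea: fit a small mixture to the soft samples by plain EM, then evaluate the BER analytically from the fitted parameters. The setup below is a simplification not taken from the paper: BPSK soft values folded by the transmitted sign (so an error is a value below zero), and a fixed two-component mixture rather than a count chosen by mutual information:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Soft BPSK observations folded by the transmitted sign: every sample is
# then a noisy +1, and a bit error corresponds to a soft value below 0.
sigma_true = 0.5
x = 1.0 + sigma_true * rng.standard_normal(20000)

def fit_gmm_1d(samples, k=2, iters=100):
    """Plain EM for a one-dimensional Gaussian mixture."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(samples, np.linspace(0.25, 0.75, k))
    sd = np.full(k, samples.std())
    for _ in range(iters):
        dens = w * np.exp(-0.5 * ((samples[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2.0 * np.pi))           # component densities
        resp = dens / dens.sum(axis=1, keepdims=True)  # responsibilities
        n = resp.sum(axis=0)
        w = n / len(samples)
        mu = (resp * samples[:, None]).sum(axis=0) / n
        sd = np.maximum(
            np.sqrt((resp * (samples[:, None] - mu) ** 2).sum(axis=0) / n),
            1e-6)                                      # variance floor
    return w, mu, sd

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Analytical BER from the fitted mixture: P(soft < 0)
w, mu, sd = fit_gmm_1d(x)
ber = sum(wk * norm_cdf((0.0 - mk) / sk) for wk, mk, sk in zip(w, mu, sd))
print(ber)   # should be close to the true tail probability, about 0.023
```

The point of the analytical tail expression is that no bit error needs to actually occur in the sample set, which is why the sample count can be far smaller than in MC simulation.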

  16. Methods of multicriterion estimations in system total quality management

    Directory of Open Access Journals (Sweden)

    Nikolay V. Diligenskiy

    2011-05-01

    Full Text Available In this article the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a system of total quality management is considered.

  17. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
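The first approach, partitioning the state space and filtering as an HMM, can be sketched for a simple theta-logistic model. The model constants, noise levels and grid below are illustrative assumptions, not values from the benchmark:

```python
import numpy as np

rng = np.random.default_rng(1)

# Theta-logistic population model, observed with additive noise
r, K, theta = 0.5, 100.0, 1.0
sig_proc, sig_obs = 5.0, 10.0

def step(n):
    return n + r * n * (1.0 - (n / K) ** theta)

# Simulate the true states and noisy observations
T = 60
x = np.empty(T); y = np.empty(T)
x[0] = 20.0
for t in range(T):
    if t > 0:
        x[t] = step(x[t - 1]) + sig_proc * rng.standard_normal()
    y[t] = x[t] + sig_obs * rng.standard_normal()

# HMM approximation: partition the state space into a finite grid
grid = np.linspace(1.0, 200.0, 400)
def gauss(z, s):
    return np.exp(-0.5 * (z / s) ** 2)

# Transition matrix P[i, j] = p(next = grid[j] | current = grid[i])
P = gauss(grid[None, :] - step(grid)[:, None], sig_proc)
P /= P.sum(axis=1, keepdims=True)

# Forward filtering over the grid
f = np.ones_like(grid) / len(grid)           # flat prior
est = np.empty(T)
for t in range(T):
    f = f * gauss(y[t] - grid, sig_obs)      # update with observation
    f /= f.sum()
    est[t] = np.sum(f * grid)                # posterior mean
    f = f @ P                                # predict next state

rmse = np.sqrt(np.mean((est - x) ** 2))
print(rmse)   # noticeably smaller than the raw observation noise (10)
```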

  18. Methods for design flood estimation in South Africa

    African Journals Online (AJOL)

    2012-07-04

    Jul 4, 2012 ... 1970s and are in need of updating with more than 40 years of additional data ... This paper reviews methods used for design flood estimation in South Africa and .... transposition of past experience, or a deterministic approach,.

  19. A simple method for estimating the convection- dispersion equation ...

    African Journals Online (AJOL)

    Jane

    2011-08-31

    Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ..... diffusion-type model for longitudinal mixing of fluids in flow.

  20. A rapid technique for estimating the depth and width of a two-dimensional plate from self-potential data

    International Nuclear Information System (INIS)

    Mehanee, Salah; Smith, Paul D; Essa, Khalid S

    2011-01-01

    Rapid techniques for self-potential (SP) data interpretation are of prime importance in engineering and exploration geophysics. Estimation of the parameters (e.g. depth, width) of ore bodies has also been of paramount concern in mineral prospecting. In many cases, it is useful to assume that the SP anomaly is due to an ore body of simple geometric shape and to use the data to determine its parameters. In light of this, we describe a rapid approach to determine the depth and horizontal width of a two-dimensional plate from the SP anomaly. The rationale behind the proposed scheme is that, unlike two-dimensional (2D) and three-dimensional (3D) rigorous SP source current inversions, it demands neither a priori information about the subsurface resistivity distribution nor high computational resources. We apply the second-order moving average operator to the SP anomaly to remove the unwanted (regional) effect, represented by up to a third-order polynomial, using filters of successive window lengths. By defining a function F at a fixed window length (s) in terms of the filtered anomaly computed at two points symmetrically distributed about the origin point of the causative body, the depth (z) corresponding to each half-width (w) is estimated by solving a nonlinear equation of the form ξ(s, w, z) = 0. The estimated depths are then plotted against their corresponding half-widths on a graph representing a continuous curve for this window length. This procedure is repeated for each available window length. The depth and half-width solution of the buried structure is read at the common intersection of these curves. The improvement of this method over the published first-order moving average technique for SP data is demonstrated on a synthetic data set.
It is then verified on noisy synthetic data and complicated structures, and successfully applied to three field examples for mineral exploration; we have found that the estimated depth is in good agreement with
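The regional-removal step can be sketched numerically. One plausible form of the operator, consistent with the stated removal of regional trends up to a third-order polynomial, is the residual r(x) = V(x) - [V(x - s) + V(x + s)]/2 applied twice; this is an assumption here, and the anomaly shape and window below are hypothetical:

```python
import numpy as np

def ma_residual(v, s):
    """First-order moving-average residual with shift s (in samples):
    r(x) = v(x) - [v(x - s) + v(x + s)] / 2. Annihilates linear trends."""
    r = np.full_like(v, np.nan)          # edges undefined
    r[s:-s] = v[s:-s] - 0.5 * (v[:-2 * s] + v[2 * s:])
    return r

def second_order_ma(v, s):
    """Applying the operator twice annihilates polynomial trends
    up to third order."""
    return ma_residual(ma_residual(v, s), s)

x = np.linspace(-100.0, 100.0, 401)
regional = 1e-4 * x**3 - 0.02 * x**2 + 0.5 * x + 30.0   # cubic regional
anomaly = -80.0 / (1.0 + ((x - 10.0) / 15.0) ** 2)      # hypothetical SP anomaly
s = 20                                                  # window shift (samples)

res_total = second_order_ma(regional + anomaly, s)      # anomaly survives
res_regional = second_order_ma(regional, s)             # removed to round-off
print(np.nanmax(np.abs(res_regional)))
```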

  1. Methods for the estimation of uranium ore reserves

    International Nuclear Information System (INIS)

    1985-01-01

    The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry

  2. Research Note A novel method for estimating tree dimensions and ...

    African Journals Online (AJOL)

    The two objects must be adjacent to one another in the photograph. For rapid analysis, multiple photographs of different objects can be taken over a short period of time using the measuring staff. The method is not limited to plants and can be used to determine, for example, browser height, height at which browsers feed, ...

  3. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. The following advantages have been shown by ashing experiments with the above apparatus: (1) high ashing speed and savings in electric energy; (2) the apparatus can ash a large amount of samples at a time; (3) the ashed sample is pure white (or spotless), loose and easily soluble, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large amounts of environmental samples containing low-level radioactive trace elements, as well as medical, food and agricultural research samples.

  4. Evaluation of rapid radiometric method for drug susceptibility testing of Mycobacterium tuberculosis

    International Nuclear Information System (INIS)

    Siddiqi, S.H.; Libonati, J.P.; Middlebrook, G.

    1981-01-01

    A total of 106 isolates of Mycobacterium tuberculosis were tested for drug susceptibility by the conventional 7H11 plate method and by a new rapid radiometric method using special 7H12 liquid medium with a ¹⁴C-labeled substrate. Results obtained by the two methods were compared for rapidity, sensitivity, and specificity of the new test method. There was 98% overall agreement between the results obtained by the two methods. Of a total of 424 drug tests, only 8 results did not agree, mostly in the case of streptomycin. The new procedure was found to be rapid, with 87% of the test results reportable within 4 days and 98% reportable within 5 days, as compared to the usual 3 weeks required with the conventional indirect susceptibility test method. The results of this preliminary study indicate that the rapid radiometric method has the potential for routine laboratory use and merits further investigation.

  5. A multiplex PCR method for rapid identification of Brachionus rotifers.

    Science.gov (United States)

    Vasileiadou, Kalliopi; Papakostas, Spiros; Triantafyllidis, Alexander; Kappas, Ilias; Abatzopoulos, Theodore J

    2009-01-01

    Cryptic species are increasingly being recognized in many organisms. In Brachionus rotifers, many morphologically similar yet genetically distinct species/biotypes have been described. A number of Brachionus cryptic species have been recognized among hatchery strains. In this study, we present a simple, one-step genetic method to detect the presence of those Brachionus sp. rotifers that have been found in hatcheries. With the proposed technique, each of the B. plicatilis sensu stricto, B. ibericus, Brachionus sp. Nevada, Brachionus sp. Austria, Brachionus sp. Manjavacas, and Brachionus sp. Cayman species and/or biotypes can be identified with polymerase chain reaction (PCR) analysis. Based on 233 cytochrome c oxidase subunit I sequences, we reviewed all the available cryptic Brachionus sp. genetic polymorphisms, and we designed six nested primers. With these primers, a specific amplicon of distinct size is produced for every one of the involved species/biotypes. Two highly sensitive protocols were developed for using the primers. Many of the primers can be combined in the same PCR. The proposed method has been found to be an effective and practical tool to investigate the presence of the above six cryptic species/biotypes in both individual and communal (bulk) rotifer deoxyribonucleic acid extractions from hatcheries. With this technique, hatchery managers could easily determine their rotifer composition at the level of cryptic species and monitor their cultures more efficiently.

  6. Toward tsunami early warning system in Indonesia by using rapid rupture durations estimation

    International Nuclear Information System (INIS)

    Madlazim

    2012-01-01

    Indonesia has operated the Indonesian Tsunami Early Warning System (Ina-TEWS) since 2008. Ina-TEWS uses automatic processing of the hypocenter and of Mwp, Mw (mB) and Mj. If an earthquake occurred in the ocean, depth 7, then Ina-TEWS announces an early warning that the earthquake can generate a tsunami. However, the announcements of Ina-TEWS are still not accurate. The purpose of this research is to estimate the rupture durations of large Indonesian earthquakes that occurred in the Indian Ocean, Java, the Timor Sea, the Banda Sea, the Arafura Sea and the Pacific Ocean. We analyzed at least 330 vertical seismograms recorded by the IRIS-DMC network using a direct procedure for rapid assessment of earthquake tsunami potential, based on simple measures on P-wave vertical velocity seismograms, in particular the high-frequency apparent rupture duration, T_dur. T_dur can be related to the critical parameters rupture length (L), depth (z), and shear modulus (μ), while T_dur may also be related to width (W), slip (D), z or μ. Our analysis shows that the rupture duration has a stronger influence on tsunami generation than Mw and depth. The rupture duration gives more information on tsunami impact, Mo/μ, depth and size than Mw and other currently used discriminants. The longer the rupture duration, the shallower the source of the earthquake. For rupture durations greater than 50 s, the depth is less than 50 km, Mw is greater than 7, and the rupture length is longer, because T_dur is proportional to L and Mo/μ is greater, since Mo/μ is proportional to L. Thus, information on all four parameters can be obtained from the rupture duration. We also suggest that tsunami potential is not directly related to the faulting type of the source, and that events with rupture durations greater than 50 s generated tsunamis. With available real-time seismogram data and rapid calculation, the rupture duration discriminant can be completed within 4-5 min after an earthquake.

  7. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from the sub-channels then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method reasonably isolates signals in different bands and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
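The Channelization-ISM idea, narrowband subspace estimation per sub-channel followed by averaging, can be sketched for a single source on a uniform linear array; the array geometry, sub-channel frequencies and SNR below are illustrative assumptions, and MUSIC stands in for the generic narrowband subspace method:

```python
import numpy as np

rng = np.random.default_rng(2)
c = 3e8
M = 8                           # ULA elements
d = 0.5 * c / 2e9               # half-wavelength spacing at 2 GHz
true_doa = 20.0                 # degrees
freqs = [1.2e9, 1.5e9, 1.8e9]   # centre frequencies of the sub-channels
snapshots = 200

def steering(theta_deg, f):
    tau = np.arange(M) * d * np.sin(np.radians(theta_deg)) / c
    return np.exp(-2j * np.pi * f * tau)

def music_doa(X, f, scan=np.arange(-90.0, 90.0, 0.1)):
    """Narrowband MUSIC for one source: peak of the pseudospectrum."""
    R = X @ X.conj().T / X.shape[1]
    _, vecs = np.linalg.eigh(R)          # eigenvalues ascending
    En = vecs[:, :-1]                    # noise subspace (one source)
    p = [1.0 / np.linalg.norm(En.conj().T @ steering(th, f)) ** 2
         for th in scan]
    return scan[int(np.argmax(p))]

# One MUSIC estimate per sub-channel, then the arithmetic mean
est = []
for f in freqs:
    a = steering(true_doa, f)
    s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
    noise = 0.1 * (rng.standard_normal((M, snapshots))
                   + 1j * rng.standard_normal((M, snapshots)))
    X = np.outer(a, s) + noise
    est.append(music_doa(X, f))

print(np.mean(est))   # close to the true 20 degrees
```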

  8. Methods for Estimation of Market Power in Electric Power Industry

    Science.gov (United States)

    Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.

    2012-01-01

    The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, a proposal is given for determining the relevant market, taking into account the specific features of the power system, together with a theoretical example of estimating the residual supply index (RSI) in the electricity market.
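The residual supply index mentioned above has a simple closed form: the capacity of all suppliers other than the one under study, divided by system demand. A sketch with hypothetical capacities:

```python
def residual_supply_index(capacities, supplier, demand):
    """RSI for one supplier: the share of demand that the rest of the
    market could still cover without that supplier. Values below 1
    indicate the supplier is pivotal."""
    rest = sum(c for name, c in capacities.items() if name != supplier)
    return rest / demand

# Hypothetical market: capacities and system demand in MW
caps = {"A": 5000, "B": 3000, "C": 2000}
demand = 8000
for s in caps:
    print(s, round(residual_supply_index(caps, s, demand), 3))
# supplier A: (3000 + 2000) / 8000 = 0.625, so A is pivotal
```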

  9. Stock price estimation using ensemble Kalman Filter square root method

    Science.gov (United States)

    Karya, D. F.; Katias, P.; Herlambang, T.

    2018-04-01

    Shares are securities evidencing the ownership or equity of an individual or corporation in an enterprise, especially in public companies whose stock is traded. Investment in stock trading is an attractive option for investors as it offers attractive profits. In choosing a safe investment in stocks, investors require a way of assessing the prices of the stocks to buy so as to help optimize their profits. An effective method of analysis which reduces the risk the investors may bear is predicting or estimating the stock price. Estimation is carried out because such a problem can sometimes be solved by using previous information or data related or relevant to the problem. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be utilized in investors' decision making on investment. In this paper, stock price estimation was made by using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimation obtained by applying the EnKF method was more accurate than that by EnKF-SR, with an estimation error of about 0.2% for EnKF and 2.6% for EnKF-SR.
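A minimal EnKF sketch for a scalar price series modeled as a random walk; the model, noise levels and perturbed-observation update below are generic textbook choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random-walk model for a price, observed with noise
T, N = 250, 100                    # time steps, ensemble size
q, r = 0.02, 0.5                   # process / observation noise std devs
truth = 100.0 + np.cumsum(q * rng.standard_normal(T))
obs = truth + r * rng.standard_normal(T)

ens = 100.0 + rng.standard_normal(N)    # initial ensemble
est = np.empty(T)
for t in range(T):
    ens = ens + q * rng.standard_normal(N)           # forecast step
    pf = ens.var(ddof=1)                             # ensemble variance
    k = pf / (pf + r ** 2)                           # Kalman gain (H = 1)
    perturbed = obs[t] + r * rng.standard_normal(N)  # perturbed observations
    ens = ens + k * (perturbed - ens)                # analysis step
    est[t] = ens.mean()

rmse = np.sqrt(np.mean((est - truth) ** 2))
print(rmse)   # well below the raw observation noise of 0.5
```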

  10. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of musical notes played on commonly used instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.

  11. A rapid estimation of tsunami run-up based on finite fault models

    Science.gov (United States)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, i.e., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  12. The Use of Rapid Review Methods for the U.S. Preventive Services Task Force.

    Science.gov (United States)

    Patnode, Carrie D; Eder, Michelle L; Walsh, Emily S; Viswanathan, Meera; Lin, Jennifer S

    2018-01-01

    Rapid review products are intended to synthesize available evidence in a timely fashion while still meeting the needs of healthcare decision makers. Various methods and products have been applied for rapid evidence syntheses, but no single approach has been uniformly adopted. Methods to gain efficiency and compress the review time period include focusing on a narrow clinical topic and key questions; limiting the literature search; performing single (versus dual) screening of abstracts and full-text articles for relevance; and limiting the analysis and synthesis. In order to maintain the scientific integrity, including transparency, of rapid evidence syntheses, it is imperative that procedures used to streamline standard systematic review methods are prespecified, based on sound review principles and empiric evidence when possible, and provide the end user with an accurate and comprehensive synthesis. The collection of clinical preventive service recommendations maintained by the U.S. Preventive Services Task Force, along with its commitment to rigorous methods development, provide a unique opportunity to refine, implement, and evaluate rapid evidence synthesis methods and add to an emerging evidence base on rapid review methods. This paper summarizes the U.S. Preventive Services Task Force's use of rapid review methodology, its criteria for selecting topics for rapid evidence syntheses, and proposed methods to streamline the review process. Copyright © 2018 American Journal of Preventive Medicine. All rights reserved.

  13. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    Science.gov (United States)

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    Science.gov (United States)

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.
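The regression step is straightforward: with reflectance R and transmittance T as inputs, chlorophyll is estimated by ordinary least squares. The coefficients and data below are synthetic stand-ins, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training data: chlorophyll (arbitrary units) with a
# hypothetical linear dependence on reflectance R and transmittance T
n = 200
R = rng.uniform(0.05, 0.30, n)
T = rng.uniform(0.02, 0.20, n)
chl = 60.0 - 80.0 * R - 50.0 * T + rng.normal(0.0, 1.0, n)

# Least-squares fit of chl ~ b0 + b1*R + b2*T
X = np.column_stack([np.ones(n), R, T])
beta, *_ = np.linalg.lstsq(X, chl, rcond=None)

def estimate_chlorophyll(reflectance, transmittance):
    return beta @ np.array([1.0, reflectance, transmittance])

print(beta)   # recovers roughly [60, -80, -50]
print(estimate_chlorophyll(0.10, 0.05))
```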

  15. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves

    Directory of Open Access Journals (Sweden)

    Madaín Pérez-Patricio

    2018-02-01

    Full Text Available This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.

  16. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
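The MAD baseline that the paper compares against can be sketched directly: take the diagonal (HH) detail coefficients of a single-level orthonormal Haar transform and scale the median of their absolute values by 1/0.6745 (the normal-consistency constant). A noise-only image keeps the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_true = 10.0
noisy = 128.0 + sigma_true * rng.standard_normal((256, 256))  # noise-only "image"

# Diagonal (HH) detail coefficients of a single-level orthonormal Haar
# transform, computed over non-overlapping 2x2 blocks
a = noisy[0::2, 0::2]; b = noisy[0::2, 1::2]
c = noisy[1::2, 0::2]; d = noisy[1::2, 1::2]
hh = (a - b - c + d) / 2.0

# MAD estimator: for Gaussian noise, median(|HH|) = 0.6745 * sigma
sigma_est = np.median(np.abs(hh)) / 0.6745
print(sigma_est)   # close to the true value of 10
```

On a natural image the HH subband also contains some edge energy, which is exactly the model assumption the paper's training-based alternatives try to relax.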

  17. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
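A classic first-order group contribution has the form property = base + Σ nᵢgᵢ over the functional groups of the molecule; the report's artificial neural network replaces this linear form, but the linear version illustrates the group-count input representation. All contribution values below are hypothetical, not the report's fitted values:

```python
# Hypothetical group contributions (illustrative numbers only)
CONTRIBUTIONS = {"CH3": 5.0, "CH2": 8.0, "CH": -3.0, "OH": -10.0}
BASE = 10.0

def estimate_property(groups):
    """First-order group contribution: base + sum of n_i * g_i,
    where groups maps group name -> occurrence count."""
    return BASE + sum(n * CONTRIBUTIONS[g] for g, n in groups.items())

# n-heptane decomposes into 2 CH3 + 5 CH2 groups
print(estimate_property({"CH3": 2, "CH2": 5}))   # 10 + 2*5 + 5*8 = 60.0
```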

  18. Novel rapid method for the characterisation of polymeric sugars from macroalgae.

    Science.gov (United States)

    Spicer, S E; Adams, J M M; Thomas, D S; Gallagher, J A; Winters, Ana L

    2017-01-01

    Laminarins are storage polysaccharides found only in brown seaweeds, specifically Laminariales and Fucales. Laminarin has been shown to have anti-apoptotic and anti-tumoural activities and is considered as a nutraceutical component that can positively influence human health. The structure is species dependent, generally composed of linear β(1-3) glucans with intrachain β(1-6) branching, and varies according to harvest season and environmental factors. Current methods for analysis of molar mass and chain length (degree of polymerisation, DP) are technically demanding and are not widely available. Here, we present a simple, inexpensive method which enables rapid analysis of laminarins from macroalgal biomass using high-performance anion exchange chromatography with pulsed amperometric detection (HPAEC-PAD) without the need for hydrolysis or further processing. This is based on the linear relationship observed between log10 DP and retention time following separation of laminarins on a CarboPac PA-100 column (Dionex) using standard 1,3-β-d-gluco-oligosaccharides ranging in DP from 2 to 8. This method was applied to analyse laminarin oligomers in extracts from different species harvested within the intertidal zone on Welsh rocky shores, containing laminarin polymers with different ranges of DP. The degrees of polymerisation and extrapolated molar masses agreed well with values estimated by LC-ESI/MSn analysis and those reported in the literature.
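The calibration described, a linear relationship between log10(DP) and retention time, reduces to a one-variable regression against the DP 2-8 standards. The retention times below are invented placeholders, not the measured Dionex values:

```python
import numpy as np

# Hypothetical retention times (min) for 1,3-beta-D-gluco-oligosaccharide
# standards of DP 2..8 (illustrative numbers, not measured values)
dp = np.arange(2, 9)
rt = np.array([4.1, 6.0, 7.4, 8.4, 9.2, 9.9, 10.4])

# Linear relationship between log10(DP) and retention time
slope, intercept = np.polyfit(rt, np.log10(dp), 1)

def estimate_dp(retention_time):
    """Back-calculate the degree of polymerisation of an unknown peak."""
    return 10 ** (intercept + slope * retention_time)

print(round(estimate_dp(8.4)))   # back-predicts the DP-5 standard
```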

  19. Estimation of arsenic in nail using silver diethyldithiocarbamate method

    Directory of Open Access Journals (Sweden)

    Habiba Akhter Bhuiyan

    2015-08-01

    Full Text Available Spectrophotometric estimation of arsenic in nails has four steps: (a) washing of the nails, (b) digestion of the nails, (c) arsine generation, and finally (d) reading the absorbance using a spectrophotometer. Although the method is the cheapest one, widely used and effective, it is time consuming, laborious, and requires caution because four acids are used.

  20. Comparison of estimation methods for fitting Weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.

  1. On the Methods for Estimating the Corneoscleral Limbus.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized with high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
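The first proposed method, approximating the anterior eye height data by a series of Zernike polynomials, amounts to a linear least-squares fit in a Zernike basis. A minimal sketch on synthetic height samples follows; the basis truncation, sampling scheme and coefficients are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# A few low-order Zernike polynomials on the unit disk (unnormalised).
def zernike_basis(rho, theta):
    return np.column_stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # tilt x
        rho * np.sin(theta),          # tilt y
        2.0 * rho ** 2 - 1.0,         # defocus
        rho ** 2 * np.cos(2 * theta), # astigmatism
    ])

rng = np.random.default_rng(0)
n = 2000
rho = np.sqrt(rng.uniform(0, 1, n))       # uniform samples over the disk
theta = rng.uniform(0, 2 * np.pi, n)

# Synthetic "height" data built from known coefficients plus noise,
# so the recovered coefficients can be checked.
coef_true = np.array([0.5, 0.1, -0.2, 1.5, 0.3])
B = zernike_basis(rho, theta)
height = B @ coef_true + rng.normal(0, 0.01, n)

# Least-squares fit of the Zernike series to the height samples.
coef_fit, *_ = np.linalg.lstsq(B, height, rcond=None)
```

A real corneoscleral fit would use many more Zernike terms and the instrument's measured height map rather than simulated samples.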

  2. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
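The Kalman filtering step can be illustrated independently of the PCT. Below is a minimal constant-velocity Kalman filter smoothing a noisy pendulum-like angle signal; the motion model, noise levels and signal are invented for illustration and are not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0, 5, dt)
true_angle = 0.3 * np.sin(2 * np.pi * 1.0 * t)        # pendulum-like motion
meas = true_angle + rng.normal(0, 0.1, t.size)        # noisy marker-derived angle

# Constant-velocity Kalman filter: state = [angle, angular velocity].
F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
H = np.array([[1.0, 0.0]])                            # we observe the angle only
q = 10.0                                              # process-noise intensity (tuned by hand)
Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
R = np.array([[0.1 ** 2]])                            # measurement-noise variance

x = np.zeros(2)
P = np.eye(2)
est = np.empty_like(meas)
for k, z in enumerate(meas):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est[k] = x[0]
```

As in the study's findings, the filtered angle tracks the underlying motion with markedly less noise than the raw measurements.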

  3. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pätzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
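The baseline frequency averaging technique that the paper improves upon can be sketched directly: average H(f + Δf)H*(f) over the measured band for each lag Δf. The multipath gains, delays and band below are invented; note that this plain estimator still contains the cross-terms the paper's kernel method is designed to suppress:

```python
import numpy as np

# Synthetic time-invariant frequency-selective channel: a superposition of
# paths with gains and delays (values invented for illustration).
gains = np.array([1.0, 0.7, 0.4])
delays = np.array([0.0e-6, 1.0e-6, 2.5e-6])       # seconds

f = np.arange(0, 10e6, 10e3)                       # 10 MHz band on a 10 kHz grid
H = (gains[None, :] * np.exp(-2j * np.pi * f[:, None] * delays[None, :])).sum(axis=1)

def fcf_frequency_averaging(H, max_lag):
    """Estimate the FCF by averaging H(f + df) H*(f) over the band."""
    r = np.empty(max_lag + 1, dtype=complex)
    for lag in range(max_lag + 1):
        r[lag] = np.mean(H[lag:] * np.conj(H[:len(H) - lag]))
    return r

r = fcf_frequency_averaging(H, max_lag=200)
r_norm = np.abs(r) / np.abs(r[0])

# Coherence bandwidth: first lag where the normalised |FCF| drops below 0.7
# (the 0.7 threshold is one common convention, not the paper's).
idx = int(np.argmax(r_norm < 0.7))
coherence_bw_hz = idx * 10e3
```

The value at zero lag approximates the total path power, and the decay of the normalised FCF with lag yields the coherence bandwidth mentioned in the abstract.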

  5. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that can be applied for the management, control and improvement of processes, and for the analysis of technical measurement results. An analysis of these international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. For the analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results were constructed.

  7. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah

    2011-01-01

    Full Text Available Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the waste disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of the data, and cost less than those of the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are also easily incorporated in hemoglobinometers.

  8. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  9. Rapid Methods for the Detection of Foodborne Bacterial Pathogens: Principles, Applications, Advantages and Limitations

    Directory of Open Access Journals (Sweden)

    Law, Jodi Woan-Fei

    2015-01-01

    Full Text Available The incidence of foodborne diseases has increased over the years and has become a major public health problem globally. Foodborne pathogens can be found in various foods, and it is important to detect them in order to provide a safe food supply and to prevent foodborne diseases. The conventional methods used to detect foodborne pathogens are time consuming and laborious. Hence, a variety of methods have been developed for rapid detection of foodborne pathogens, as required in many food analyses. Rapid detection methods can be categorized into nucleic acid-based, biosensor-based and immunological-based methods. This review emphasizes the principles and application of recent rapid methods for the detection of foodborne bacterial pathogens. Detection methods covered are simple polymerase chain reaction (PCR), multiplex PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), loop-mediated isothermal amplification (LAMP) and oligonucleotide DNA microarray, classified as nucleic acid-based methods; optical, electrochemical and mass-based biosensors, classified as biosensor-based methods; and enzyme-linked immunosorbent assay (ELISA) and lateral flow immunoassay, classified as immunological-based methods. In general, rapid detection methods are time-efficient, sensitive, specific and labor-saving. The development of rapid detection methods is vital to the prevention and treatment of foodborne diseases.

  10. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from partial data of an unsteady pumping test, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
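The paper's specific fitting function is not given here; as an illustration of recovering aquifer parameters from a single linear regression on drawdown data, the classical Cooper-Jacob approximation to the Theis solution serves the same purpose: s = (ln 10 · Q / 4πT) · log10(2.25 T t / r²S), so regressing drawdown on log10(t) yields T from the slope and S from the intercept. All parameter values below are invented:

```python
import math

# Synthetic pumping-test drawdowns generated from the Cooper-Jacob
# approximation with known aquifer parameters (all values invented).
LN10 = math.log(10.0)
Q = 0.01            # pumping rate, m^3/s
r = 30.0            # distance to observation well, m
T_true = 1.0e-3     # transmissivity, m^2/s
S_true = 2.0e-4     # storativity, dimensionless

times = [600.0 * 2 ** k for k in range(8)]     # 10 min to about 21 h

def cooper_jacob(t, T, S):
    return (LN10 * Q / (4 * math.pi * T)) * math.log10(2.25 * T * t / (r * r * S))

s = [cooper_jacob(t, T_true, S_true) for t in times]

# Linear regression of drawdown on log10(t): s = a * log10(t) + b.
x = [math.log10(t) for t in times]
n = len(x)
xbar = sum(x) / n
sbar = sum(s) / n
a = sum((xi - xbar) * (si - sbar) for xi, si in zip(x, s)) / \
    sum((xi - xbar) ** 2 for xi in x)
b = sbar - a * xbar

# Invert slope and intercept for the aquifer parameters.
T_est = LN10 * Q / (4 * math.pi * a)
S_est = 2.25 * T_est * 10 ** (-b / a) / r ** 2
```

With field data the regression would be fit to observed drawdowns, and the Cooper-Jacob form is only valid once u = r²S/4Tt is small.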

  11. Application of Density Estimation Methods to Datasets from a Glider

    Science.gov (United States)

    2014-09-30

    humpback and sperm whales as well as different dolphin species. OBJECTIVES: The objective of this research is to extend existing methods for cetacean density estimation from single sensor datasets. Required steps for a cue counting approach, where a cue has been defined as a clicking event (Küsel et al., 2011), to

  12. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  13. Assessment of Methods for Estimating Risk to Birds from ...

    Science.gov (United States)

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16

  14. Plant-available soil water capacity: estimation methods and implications

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva

    2014-04-01

    Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in planning land use. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than that determined by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content found with the WP4-T psychrometer for the estimate of the permanent wilting point was significantly lower. We conclude that the estimation of the available water holding capacity is markedly influenced by the estimation methods, which has to be taken into consideration because of the practical importance of this parameter.

  15. Methods and Magnitudes of Rapid Weight Loss in Judo Athletes Over Pre-Competition Periods

    Directory of Open Access Journals (Sweden)

    Kons Rafael Lima

    2017-06-01

    Full Text Available Purpose. The study aimed to analyse the methods and magnitudes of rapid weight loss (RWL in judo team members in distinct periods before the biggest state competition in Southern Brazil.

  16. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  17. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng

    2015-01-01

    Full Text Available This paper proposes a method that utilizes AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal forms of radar are complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical, such as the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal and the frequency shift keying with bi-phase shift keying (FSK-BPSK) hybrid modulation signal. In view of this kind of single pulse with multiple carrier frequencies, this paper adopts a method which transforms the complex signal into an AR model and then computes the power spectrum based on the Burg algorithm. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
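A Burg-based AR estimate of this kind can be sketched in a few lines: fit AR coefficients with Burg's recursion, then read the carrier frequencies off the AR pole angles, since each real carrier contributes one conjugate pole pair. The signal and AR order below are invented for illustration and are not the paper's test cases:

```python
import numpy as np

def burg_ar(x, order):
    """Fit an AR(order) model with Burg's method; return coefficients a
    (with a[0] = 1) and the final prediction-error power."""
    x = np.asarray(x, dtype=float)
    f = x.copy()              # forward prediction errors
    b = x.copy()              # backward prediction errors
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)
    for _ in range(order):
        fp, bp = f[1:], b[:-1]
        k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        f, b = fp + k * bp, bp + k * fp
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k * k
    return a, e

# Synthetic single pulse carrying two carrier frequencies (FSK-like);
# the normalised frequencies 0.10 and 0.30 are invented.
n = np.arange(400)
x = np.cos(2 * np.pi * 0.10 * n) + 0.8 * np.cos(2 * np.pi * 0.30 * n)

a, e = burg_ar(x, order=4)

# Each real carrier gives a conjugate AR pole pair, so counting poles in
# the upper half-plane estimates the number of carriers.
poles = np.roots(a)
upper = poles[poles.imag > 1e-6]
freqs = sorted(np.angle(upper) / (2 * np.pi))
num_carriers = len(freqs)
```

In practice the AR order must exceed twice the number of expected carriers, and at low SNR the spectrum (e / |A(e^jω)|²) would be inspected for dominant peaks rather than counting raw poles.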

  18. Mobile Image Ratiometry: A New Method for Instantaneous Analysis of Rapid Test Strips

    OpenAIRE

    Donald C. Cooper; Bryan Callahan; Phil Callahan; Lee Burnett

    2012-01-01

    Here we describe Mobile Image Ratiometry (MIR), a new method for the automated quantification of standardized rapid immunoassay strips using consumer-based mobile smartphone and tablet cameras. To demonstrate MIR we developed a standardized method using rapid immunotest strips directed against cocaine (COC) and its major metabolite, benzoylecgonine (BE). We performed image analysis of three brands of commercially available dye-conjugated anti-COC/BE antibody test strips in response to three d...

  19. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10² CFU/10 cm²). Two kits (VIP™ and Petrifilm™) failed to detect 10⁴ CFU/10 cm². The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
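The MPN estimate referred to (obtained here via the MPN-BAM spreadsheet) is a maximum-likelihood calculation that can be reproduced directly: solve the score equation of the binomial dilution-series likelihood for the concentration. A sketch, using a classic 3-tube series as a check against standard MPN tables:

```python
import math

def mpn(volumes, tubes, positives):
    """Maximum-likelihood Most Probable Number estimate.
    volumes:   sample amount per tube at each dilution (e.g. grams)
    tubes:     number of tubes inoculated at each dilution
    positives: number of positive tubes at each dilution
    """
    def score(lam):
        # Derivative of the log-likelihood; decreasing in lam.
        s = 0.0
        for v, n, g in zip(volumes, tubes, positives):
            p = math.exp(-lam * v)
            if g > 0:
                s += g * v * p / (1.0 - p)
            s -= (n - g) * v
        return s
    lo, hi = 1e-6, 1e6
    for _ in range(200):            # geometric bisection over a wide range
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# 3-tube series at 0.1, 0.01 and 0.001 g with positive pattern 3-1-0,
# which standard MPN tables list at about 43 MPN/g.
est = mpn([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])
```

The same function applies to the study's inoculation levels once the tube volumes and positive counts are substituted.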

  20. [Rapid methods for the genus Salmonella bacteria detection in food and raw materials].

    Science.gov (United States)

    Sokolov, D M; Sokolov, M S

    2013-01-01

    The article considers sanitary and epidemiological aspects and the impact of Salmonella food poisoning in Russia and abroad. The main characteristics of the agent (Salmonella enterica subsp. Enteritidis) are summarized. The main sources of human Salmonella infection are products of poultry and livestock (poultry, eggs, dairy products, meat products, etc.). Standard methods for identifying the causative agent are stated, along with rapid (alternative) methods of Salmonella analysis using differential diagnostic media (MSRV, Salmosyst, XLT4 agar, Rambach agar, et al.), the rapid tests Singlepath-Salmonella, and real-time PCR (foodproof Salmonella). Rapid tests provide a substantial reduction (to 24-48 h) of the time needed to identify Salmonella.

  1. Estimating evolutionary rates using time-structured data: a general comparison of phylogenetic methods.

    Science.gov (United States)

    Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W

    2016-11-15

    In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and with phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
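Of the three methods compared, root-to-tip regression is the simplest to sketch: regress root-to-tip genetic distance against sampling date; the slope estimates the substitution rate and the x-intercept the root age. The data below are synthetic with a known rate, not from the paper's 81 virus data sets:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic time-structured data: sampling dates (decimal years) and
# root-to-tip distances generated with a known rate (values invented).
dates = rng.uniform(2000.0, 2015.0, 60)
rate_true = 1.5e-3        # substitutions/site/year
root_dist = 0.02 + rate_true * (dates - 2000.0) + rng.normal(0, 1e-3, dates.size)

# Root-to-tip regression: slope = substitution rate.
slope, intercept = np.polyfit(dates, root_dist, 1)
t_mrca = -intercept / slope       # x-intercept: estimated date of the root
```

Unlike least-squares dating or Bayesian inference, this regression ignores the shared ancestry of the tips, which is one reason the paper treats it as a fast exploratory check rather than a replacement for model-based estimates.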

  2. A New Method for Estimation of Velocity Vectors

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1998-01-01

    The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received...

  3. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
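The paper's four analysis methods are only named here, so the sketch below illustrates generic interval-based contingency indices instead: a 2x2 table over observation intervals, summarised by the phi coefficient and the delta-p statistic. The interval records are made up, and the paper's exact operationalisations (exhaustive vs. nonexhaustive, concurrent vs. concurrent+lag) differ:

```python
import math

# Made-up binary interval records: whether a response occurred in each
# observation interval, and whether the outcome (reinforcer) occurred.
resp    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
outcome = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

# 2x2 contingency table over intervals.
a = sum(1 for r, o in zip(resp, outcome) if r == 1 and o == 1)
b = sum(1 for r, o in zip(resp, outcome) if r == 1 and o == 0)
c = sum(1 for r, o in zip(resp, outcome) if r == 0 and o == 1)
d = sum(1 for r, o in zip(resp, outcome) if r == 0 and o == 0)

# Two simple contingency-strength indices: the phi coefficient, and the
# delta-p statistic P(outcome | response) - P(outcome | no response).
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
delta_p = a / (a + b) - c / (c + d)
```

Under a response-independent schedule both indices drift toward zero, which is the sensitivity property the study probes with its parametric manipulations.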

  4. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
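The thermodynamic (path sampling) identity underlying the method is log p(y) = ∫₀¹ E_β[log p(y|θ)] dβ, with the expectation taken under the power posterior ∝ p(y|θ)^β p(θ), where β is the heating coefficient. The toy model below is conjugate, so each power posterior can be sampled directly and the result checked analytically; in a real application each E_β would come from a heated MCMC run:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 1).  Conjugacy lets us
# sample each power posterior directly; the data values are invented.
y = np.array([0.3, -0.1, 0.8, 0.4, 0.2])
n = y.size

def log_lik(theta):
    # log p(y | theta) for a vector of theta samples
    theta = np.atleast_1d(theta)[:, None]
    return (-0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - theta) ** 2).sum(axis=1)

# Heating schedule: dense near beta = 0, where the integrand changes fastest.
betas = np.linspace(0.0, 1.0, 25) ** 3

# E_beta[log p(y|theta)] under each power posterior (Gaussian here).
means = np.empty_like(betas)
for i, bta in enumerate(betas):
    var = 1.0 / (1.0 + bta * n)               # power-posterior variance
    mu = bta * y.sum() * var                  # power-posterior mean
    theta = rng.normal(mu, np.sqrt(var), 50_000)
    means[i] = log_lik(theta).mean()

# Thermodynamic (path-sampling) estimate: trapezoid rule over beta.
log_ml_ti = sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
                for i in range(len(betas) - 1))

# Analytic log marginal likelihood for comparison: y ~ N(0, I + 11^T).
log_ml_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                - 0.5 * (np.dot(y, y) - y.sum() ** 2 / (n + 1)))
```

The geometric-mean estimator criticized in the abstract corresponds to using only the β = 1 samples, which is why it converges so slowly by comparison.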

  5. Rapid, convenient method for screening imidazole-containing compounds for heme oxygenase inhibition.

    Science.gov (United States)

    Vlahakis, Jason Z; Rahman, Mona N; Roman, Gheorghe; Jia, Zongchao; Nakatsu, Kanji; Szarek, Walter A

    2011-01-01

    Sensitive assays for measuring heme oxygenase activity have been based on the gas-chromatographic detection of carbon monoxide using elaborate, expensive equipment. The present study describes a rapid and convenient method for screening imidazole-containing candidates for inhibitory activity against heme oxygenase using a plate reader, based on the spectroscopic evaluation of heme degradation. A PowerWave XS plate reader was used to monitor the absorbance (as a function of time) of heme bound to purified truncated human heme oxygenase-1 (hHO-1) in the individual wells of a standard 96-well plate (with or without the addition of a test compound). The degradation of heme by heme oxygenase-1 was initiated using L-ascorbic acid, and the collected absorbance data were analyzed by three different methods to calculate the percent control activity in wells containing test compounds relative to control wells with no test compound present. In wells containing inhibitory compounds, significant shifts in λ(max) from 404 to near 412 nm were observed, as well as a decrease in the rate of heme degradation relative to that of the control. Each of the three methods of data processing (overall percent drop in absorbance over 1.5 h, initial rate of reaction determined over the first 5 min, and estimated pseudo-first-order reaction rate constant determined over 1.5 h) gave similar and reproducible results for percent control activity. The fastest and easiest method of data analysis was determined to be that using initial rates, involving data acquisition for only 5 min once reactions have been initiated using L-ascorbic acid. The results of the study demonstrate that this simple assay based on the spectroscopic detection of heme represents a rapid, convenient method to determine the relative inhibitory activity of candidate compounds, and is useful in quickly screening a series or library of compounds for heme oxygenase inhibition.
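
    The initial-rate analysis described above (a straight-line fit to absorbance over the first 5 min, normalized to the uninhibited control) can be sketched as follows. The decay constants, read interval and baseline offsets in the synthetic absorbance traces are hypothetical, chosen only to mimic a control well and an inhibited well.

```python
import numpy as np

def initial_rate(t_min, absorbance, window_min=5.0):
    """Slope (AU/min) of a straight-line fit over the first `window_min` minutes."""
    m = t_min <= window_min
    slope, _intercept = np.polyfit(t_min[m], absorbance[m], 1)
    return slope

def percent_control_activity(t_min, a_test, a_control):
    """Initial heme-degradation rate in a test well, as % of the control rate."""
    return 100.0 * initial_rate(t_min, a_test) / initial_rate(t_min, a_control)

t = np.arange(0.0, 90.5, 0.5)               # reads every 30 s for 1.5 h
control = 0.05 + 0.35 * np.exp(-0.030 * t)  # uninhibited: heme Soret band decays
inhibited = 0.05 + 0.35 * np.exp(-0.006 * t)  # an inhibitor slows the decay
pct = percent_control_activity(t, inhibited, control)
```

    On these synthetic traces the inhibited well retains roughly a fifth of the control's initial degradation rate; only the first ten reads enter the fit, which is why this variant needs just 5 min of acquisition.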

  6. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for the estimation of parameters) to deterministically chaotic low-dimensional dynamic systems (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
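
    To illustrate why short segments help, a minimal least-squares fit of the logistic map parameter r and initial value x1 over one brief, noisy segment can be written as a grid search. This is a simplified stand-in for the paper's segmentation (“piece-wise” ML) method: fitting a short segment sidesteps the exponential loss of memory of the initial condition that makes long-trajectory fits ill-conditioned. The grid ranges, noise level and segment length are assumptions.

```python
import numpy as np

def logistic_traj(r, x1, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) for n steps starting from x1."""
    x = np.empty(n)
    x[0] = x1
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return x

def fit_segment(y, r_grid, x1_grid):
    """Least-squares grid search for (r, x1) over one short segment."""
    best_sse, best_r, best_x1 = np.inf, None, None
    for r in r_grid:
        for x1 in x1_grid:
            sse = np.sum((y - logistic_traj(r, x1, y.size)) ** 2)
            if sse < best_sse:
                best_sse, best_r, best_x1 = sse, r, x1
    return best_r, best_x1

rng = np.random.default_rng(5)
y = logistic_traj(3.8, 0.3, 12) + rng.normal(0.0, 0.01, 12)  # short noisy segment
r_hat, x1_hat = fit_segment(y,
                            np.linspace(3.5, 4.0, 201),
                            np.linspace(0.05, 0.95, 91))
```

    Over only 12 points the chaotic sensitivity works in the estimator's favor: wrong (r, x1) pairs diverge from the data within a few iterations, so the minimum of the sum of squares sits close to the true (3.8, 0.3).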

  7. The determination of Sr-90 in environmental material using an improved rapid method

    International Nuclear Information System (INIS)

    Ghods, A.; Veselsky, J.C.; Zhu, S.; Mirna, A.; Schelenz, R.

    1989-01-01

    A short report is given on strontium-90, its occurrence in the biosphere, and methods for its rapid determination. A classification of determination methods suitable for various environmental and biological materials is established. Interference due to Y-91 and a method to eliminate the activity of Y-90 and Y-91 are discussed.

  8. Rapid identification of Salmonella serotypes with stereo and hyperspectral microscope imaging methods

    Science.gov (United States)

    The hyperspectral microscope imaging (HMI) method can reduce detection time to within 8 hours, including the incubation process. The early and rapid detection offered by this method, in conjunction with its high-throughput capabilities, makes HMI a prime candidate for implementation in the food industry. Th...

  9. Power system frequency estimation based on an orthogonal decomposition method

    Science.gov (United States)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
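
    A minimal single-bin phase-difference estimator illustrates the sliding-DFT idea: correlating two windows offset by one sample against the same nominal-frequency DFT bin gives the phase advance per sample, and hence the frequency. This is a simplified sketch, not the authors' two-stage orthogonal-filter scheme; the Hann weighting (to suppress spectral leakage), the window length and the nominal frequency are assumptions.

```python
import numpy as np

def estimate_frequency(x, fs, f_nom, n_win):
    """Frequency from the phase advance, over one sample, of a single DFT bin
    evaluated at the nominal frequency (Hann-weighted to suppress leakage)."""
    k = np.arange(n_win)
    probe = np.hanning(n_win) * np.exp(-2j * np.pi * f_nom * k / fs)
    X0 = np.sum(x[:n_win] * probe)        # window starting at sample 0
    X1 = np.sum(x[1:n_win + 1] * probe)   # same window shifted by one sample
    dphi = np.angle(X1 * np.conj(X0))     # phase advance per sample (rad)
    return dphi * fs / (2.0 * np.pi)

fs, f_true = 1000.0, 50.3                 # Hz; an off-nominal power frequency
t = np.arange(400) / fs
x = np.sin(2.0 * np.pi * f_true * t + 0.7)
f_hat = estimate_frequency(x, fs, f_nom=50.0, n_win=200)
```

    Choosing the window length as a whole number of nominal cycles and tapering with a Hann window keeps negative-frequency leakage from biasing the phase, so the 0.3 Hz deviation from the 50 Hz nominal is resolved cleanly.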

  10. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (one-way coupled to PIHM) and a fixed-seasonal LAI method. From these two approaches, simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology-based method. The implication of this research is that the overall hydrologic variability induced by uncertain management practices can be reduced by implementing vegetation models in catchment models.

  11. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future and has led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so, their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. This method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimates and predictions obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • An extreme learning machine is used to estimate the energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
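
    The ELM itself is simple to sketch: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, by least squares. The training data below are a hypothetical stand-in for the study's 180 EnergyPlus runs, with energy use falling with insulation thickness and rising with K-value; the functional form and parameter ranges are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y    # Moore-Penrose solution; no iterative training
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# hypothetical stand-in data: energy use (kWh/m^2) falls with insulation
# thickness (m) and rises with conductivity K (W/m.K); 180 cases mirror
# the study's 180 simulation runs
X = rng.uniform([0.05, 0.5], [0.40, 2.5], size=(180, 2))
y = 120.0 - 90.0 * X[:, 0] + 35.0 * X[:, 1] + 5.0 * np.sin(8.0 * X[:, 0])
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

    Because only the linear output layer is fitted, training reduces to one pseudo-inverse, which is what makes ELM attractive for quick design-phase estimates compared with iteratively trained ANNs.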

  12. Conventional estimating method of earthquake response of mechanical appendage system

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Suzuki, Kohei

    1981-01-01

    Generally, to estimate the earthquake response of an appendage system installed in a main structure system, floor response analysis using the response spectra at the point where the appendage system is installed has been used. In addition, research on estimating the earthquake response of appendage systems by statistical procedures based on probability process theory has been reported. The development of a practical method for simply estimating the response is an important subject in aseismatic engineering. In this study, a method of estimating the earthquake response of an appendage system in the general case where the natural frequencies of the two structure systems differ was investigated. First, it was shown that the floor response amplification factor (FRAF) can be estimated simply from the ratio of the natural frequencies of the two structure systems, and its statistical properties were clarified. Next, it was shown that the procedure of expressing acceleration, velocity and displacement responses simultaneously with tri-axial response spectra can be applied to the expression of the FRAF. The applicability of this procedure to nonlinear systems was also examined. (Kako, I.)

  13. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and yield a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  14. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for the correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies, and the degree of improvement in classification accuracy obtained by the proposed method is assessed statistically using Kappa analysis.
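
    The sliding-window proximity operation can be sketched as a generalized median filter: each label is replaced by the class that minimizes the summed proximity-based distance to all labels in its window. The 3-class proximity matrix and the noisy label sequence below are invented for illustration; the paper's matrices come from perception studies or genetic-algorithm training.

```python
import numpy as np

def correct_labels(labels, proximity, half_window=2):
    """Replace each label by the class minimizing summed proximity-based
    distance to all labels in a sliding window (a generalized median).
    `proximity[i, j]` is in [0, 1], with 1 meaning identical classes."""
    dist = 1.0 - np.asarray(proximity)
    labels = np.asarray(labels)
    out = labels.copy()
    n = labels.size
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = labels[lo:hi]
        costs = dist[:, window].sum(axis=1)   # cost of assigning each class here
        out[i] = np.argmin(costs)
    return out

# classes 0 and 1 are perceptually close; class 2 is distant from both
P = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
noisy = [0, 0, 2, 0, 0, 0, 1, 1, 1]
clean = correct_labels(noisy, P)
```

    The isolated class-2 spike is removed because class 2 is far from its neighbors in the proximity matrix, while the genuine 0-to-1 transition survives: nominal classes need no ordering, only pairwise proximities.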

  15. Comparison of methods used for estimating pharmacist counseling behaviors.

    Science.gov (United States)

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  16. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Source number estimation methods for single-channel signals are investigated and improvements to each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), performs better than GDE at low SNR; however, it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is applied to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results show that the performance of both original methods is improved considerably.
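
    A baseline (unimproved) MDL source-number estimate from covariance eigenvalues can be sketched as follows; in the single-channel setting described above, the multi-channel matrix X would first be built by stacking delayed copies of the signal. The simulated 6-channel mixture of 2 sources and the noise level are assumptions, and no diagonal loading or jackknife refinement is included.

```python
import numpy as np

def mdl_source_count(X):
    """Pick the source count k minimising the MDL criterion computed from the
    eigenvalues of the sample covariance of multi-channel data X (p x N)."""
    p, N = X.shape
    R = (X @ X.conj().T) / N
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    scores = []
    for k in range(p):
        tail = lam[k:]                           # presumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))      # geometric mean
        ari = np.mean(tail)                      # arithmetic mean
        # log-likelihood term (flatness of the noise eigenvalues) + complexity penalty
        scores.append(-N * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))

rng = np.random.default_rng(7)
p, N, k_true = 6, 500, 2
A = rng.normal(size=(p, k_true))                 # mixing matrix
X = A @ rng.normal(size=(k_true, N)) + 0.1 * rng.normal(size=(p, N))
```

    The criterion trades off how equal the trailing eigenvalues are (they should be flat for pure noise) against a model-complexity penalty; with white noise it lands on the true count, while colored noise breaks the flatness assumption, which is the weakness the paper's diagonal loading targets.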

  17. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    International Nuclear Information System (INIS)

    Kato, Fumi; Kamishima, Tamotsu; Morita, Ken; Muto, Natalia S.; Okamoto, Syozou; Omatsu, Tokuhiko; Oyama, Noriko; Terae, Satoshi; Kanegae, Kakuko; Nonomura, Katsuya; Shirato, Hiroki

    2011-01-01

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV) and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that with the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV agreed in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and appears useful for the estimation of SRF, and thus the dominant side, in kidney donors.

  18. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Fumi, E-mail: fumikato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Morita, Ken, E-mail: kenordic@carrot.ocn.ne.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Muto, Natalia S., E-mail: nataliamuto@gmail.com [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Okamoto, Syozou, E-mail: shozo@med.hokudai.ac.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@nirs.go.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Oyama, Noriko, E-mail: ZAT04404@nifty.ne.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kanegae, Kakuko, E-mail: IZW00143@nifty.ne.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Nonomura, Katsuya, E-mail: k-nonno@med.hokudai.ac.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan)

    2011-07-15

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV) and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that with the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV agreed in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and appears useful for the estimation of SRF, and thus the dominant side, in kidney donors.

  19. Rapid, nondestructive estimation of surface polymer layer thickness using attenuated total reflection fourier transform infrared (ATR FT-IR) spectroscopy and synthetic spectra derived from optical principles.

    Science.gov (United States)

    Weinstock, B André; Guiney, Linda M; Loose, Christopher

    2012-11-01

    We have developed a rapid, nondestructive analytical method that estimates the thickness of a surface polymer layer with high precision but unknown accuracy using a single attenuated total reflection Fourier transform infrared (ATR FT-IR) measurement. Because the method is rapid, nondestructive, and requires no sample preparation, it is ideal as a process analytical technique. Prior to implementation, the ATR FT-IR spectrum of the substrate layer pure component and the ATR FT-IR and real refractive index spectra of the surface layer pure component must be known. From these three input spectra a synthetic mid-infrared spectral matrix of surface layers 0 nm to 10,000 nm thick on substrate is created de novo. A minimum statistical distance match between a process sample's ATR FT-IR spectrum and the synthetic spectral matrix provides the thickness of that sample. We show that this method can be used to successfully estimate the thickness of polysulfobetaine surface modification, a hydrated polymeric surface layer covalently bonded onto a polyetherurethane substrate. A database of 1850 sample spectra was examined. Spectrochemical matrix-effect unknowns, such as the nonuniform and molecularly novel polysulfobetaine-polyetherurethane interface, were found to be minimal. A partial least squares regression analysis of the database spectra versus their thicknesses as calculated by the method described yielded an estimate of precision of ±52 nm.
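
    The matching step, finding the minimum-distance row of the synthetic spectral matrix, can be sketched with a toy spectrum model. The two Gaussian "pure component" bands and the exponential depth-weighting standing in for the optics-derived synthetic spectra are assumptions; the real method builds the matrix from measured ATR FT-IR and refractive-index spectra of the pure components.

```python
import numpy as np

wn = np.linspace(800, 1800, 200)                   # wavenumber axis, cm^-1
substrate = np.exp(-0.5 * ((wn - 1100) / 40) ** 2)  # toy substrate band
surface = np.exp(-0.5 * ((wn - 1500) / 30) ** 2)    # toy surface-layer band
d_pen = 1200.0                                      # nm, toy evanescent decay constant

def synthetic_spectrum(t_nm):
    """Toy synthetic spectrum for a surface layer t_nm thick on the substrate."""
    w = 1.0 - np.exp(-t_nm / d_pen)   # fraction of signal from the surface layer
    return w * surface + (1.0 - w) * substrate

# synthetic spectral matrix for thicknesses 0 to 10,000 nm
thick_grid = np.arange(0, 10001, 50)
library = np.array([synthetic_spectrum(t) for t in thick_grid])

def estimate_thickness(spectrum):
    """Thickness of the minimum-Euclidean-distance library spectrum."""
    d = np.linalg.norm(library - spectrum, axis=1)
    return thick_grid[np.argmin(d)]

rng = np.random.default_rng(3)
sample = synthetic_spectrum(2375.0) + rng.normal(0.0, 0.005, wn.size)
t_hat = estimate_thickness(sample)
```

    A single measured spectrum thus maps to a thickness in one pass over the precomputed matrix, which is what makes the approach fast enough for in-process use; accuracy is bounded by how faithful the synthetic matrix is, as the abstract's "high precision but unknown accuracy" phrasing emphasizes.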

  20. Improved stove programs need robust methods to estimate carbon offsets

    OpenAIRE

    Johnson, Michael; Edwards, Rufus; Masera, Omar

    2010-01-01

    Current standard methods result in significant discrepancies in carbon offset accounting compared to approaches based on representative community-based subsamples, which provide more realistic assessments at reasonable cost. Perhaps more critically, neither of the currently approved methods incorporates the uncertainties inherent in estimates of emission factors or non-renewable fuel usage (fNRB). Since emission factors and fNRB contribute 25% and 47%, respectively, to the overall uncertainty in ...

  1. Estimation of water percolation by different methods using TDR

    Directory of Open Access Journals (Sweden)

    Alisson Jadavi Pereira da Silva

    2014-02-01

    Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate percolation estimated by time domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and the change in water storage in the soil profile measured at 16 moisture-measurement points over different time intervals. A sandy clay soil in a drainage lysimeter was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was conducted. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated from continuous TDR monitoring, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation over short time intervals without predetermination of soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained by the Darcy-Buckingham equation for percolation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
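
    The storage-change approach can be sketched directly: integrate the 16 moisture readings over depth to get profile storage, and attribute its decline between readings to percolation (the surface is covered, so evaporation is zero). The depths, moisture values and time interval below are hypothetical.

```python
import numpy as np

z = np.linspace(0.05, 0.80, 16)   # 16 TDR measurement depths (m), hypothetical layout

def storage(theta):
    """Profile water storage (m), trapezoidal integration of moisture over depth."""
    return np.sum(np.diff(z) * (theta[1:] + theta[:-1]) / 2.0)

def percolation_rate(theta_before, theta_after, dt_h):
    """Drainage flux (mm/h) from the drop in stored water, assuming the surface
    is covered so evaporation is zero and all storage loss leaves as percolation."""
    return 1000.0 * (storage(theta_before) - storage(theta_after)) / dt_h

theta_t0 = np.full(16, 0.42)      # volumetric moisture just after saturation
theta_t1 = np.full(16, 0.40)      # one measurement interval later
q = percolation_rate(theta_t0, theta_t1, dt_h=10.0)
```

    A uniform 0.02 m³/m³ drop over a 0.75 m profile in 10 h corresponds to 1.5 mm/h of drainage; note that nothing here requires the retention curve or hydraulic conductivity, which is the advantage the study highlights over the K(θ)-based estimates.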

  2. Comparing different methods for estimating radiation dose to the conceptus

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)

    2017-02-15

    To compare different methods available in the literature for estimating radiation dose to the conceptus (D_conceptus) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D_conceptus was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D_uterus, was calculated as an alternative for D_conceptus, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D_uterus and D_conceptus was studied. Dose to the conceptus and percent error with respect to D_conceptus was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5%, but applicability was limited. Estimating an accurate D_conceptus requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)

  3. Stability estimates for hp spectral element methods for general ...

    Indian Academy of Sciences (India)

    We establish basic stability estimates for a non-conforming h-p spectral element method which allows for simultaneous mesh refinement and variable polynomial degree. The spectral element functions are non-conforming if the boundary conditions are Dirichlet. For problems with mixed boundary conditions they are ...

  4. A simple method for estimating thermal response of building ...

    African Journals Online (AJOL)

    This paper develops a simple method for estimating the thermal response of building materials in the tropical climatic zone using the basic heat equation. The efficacy of the developed model has been tested with data from three West African cities, namely Kano (lat. 12.1 ºN) Nigeria, Ibadan (lat. 7.4 ºN) Nigeria and Cotonou ...

  5. Methods to estimate breeding values in honey bees

    NARCIS (Netherlands)

    Brascamp, E.W.; Bijma, P.

    2014-01-01

    Background Efficient methodologies based on animal models are widely used to estimate breeding values in farm animals. These methods are not applicable in honey bees because of their mode of reproduction. Observations are recorded on colonies, which consist of a single queen and thousands of workers

  6. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  7. Benefits of EMU Participation : Estimates using the Synthetic Control Method

    NARCIS (Netherlands)

    Verstegen, Loes; van Groezen, Bas; Meijdam, Lex

    2017-01-01

    This paper investigates quantitatively the benefits from participation in the Economic and Monetary Union for individual Euro area countries. Using the synthetic control method, we estimate how real GDP per capita would have developed for the EMU member states, if those countries had not joined the

  8. Lidar method to estimate emission rates from extended sources

    Science.gov (United States)

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  9. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  10. The relative efficiency of three methods of estimating herbage mass ...

    African Journals Online (AJOL)

    The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...

  11. Dual ant colony operational modal analysis parameter estimation method

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. In contrast to experimental modal analysis, the input signal is generated by the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification; some operate in the time domain and use correlation functions, others in the frequency domain and use spectral density functions. While some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting the parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  12. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden)]; Tengstroem, B [District General Hospital, Skoevde (Sweden)]

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.
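
    The single-point idea above lends itself to a short sketch. The regression coefficients `a` and `b` below are purely illustrative placeholders, not the published values; `count_5min` and `count_3h` stand for the measured activities of the two samples.

    ```python
    import math

    def predict_gfr(count_5min, count_3h, a=-50.0, b=10.0):
        """Normalized 'single-point' GFR sketch: the late sample is divided
        by the early one to cancel the injected dose, and GFR is predicted
        from the logarithm of that ratio via a linear regression.
        a and b are illustrative placeholders, not published coefficients."""
        normalized = count_3h / count_5min
        return a * math.log(normalized) + b
    ```

    With these placeholder coefficients, slower clearance (a larger normalized late-sample value) maps to a lower predicted GFR, matching the qualitative behaviour described in the abstract.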

  13. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and entails a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  14. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties of the design variables; common results are estimates of a response density, which also implies estimates of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been

  15. Methods for Measuring and Estimating Methane Emission from Ruminants

    Directory of Open Access Journals (Sweden)

    Jørgen Madsen

    2012-04-01

    Full Text Available This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods (chambers/respiration chambers, the SF6 tracer technique and the in vitro gas production technique) and the newer CO2 methods are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim and on the equipment, knowledge, time and money available, but the interpretation of results obtained with a given method can be improved if knowledge about its disadvantages and advantages is used in the planning of experiments.

  16. A simple method for estimating the entropy of neural activity

    International Nuclear Information System (INIS)

    Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo

    2013-01-01

    The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy—which is a measure of the computational power of the neural population—cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual ‘naive’ entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data, in order to check its accuracy; we also compare its performance with that of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods. (paper)
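
    The 'naive' lower bound mentioned above is just the plug-in entropy of the observed pattern frequencies; a minimal sketch (the hybrid upper-bound estimator is not reproduced here):

    ```python
    from collections import Counter
    from math import log2

    def naive_entropy(patterns):
        """Plug-in ('naive') entropy estimate in bits from observed activity
        patterns. In the undersampled regime this systematically
        underestimates the true entropy, hence it serves as a lower bound."""
        counts = Counter(patterns)
        n = len(patterns)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    # e.g. two equiprobable patterns -> 1 bit
    naive_entropy(["01", "10", "01", "10"])  # 1.0
    ```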

  17. Uranium solution mining cost estimating technique: means for rapid comparative analysis of deposits

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    Twelve graphs provide a technique for determining relative cost ranges for uranium solution mining projects. The use of the technique can provide a consistent framework for rapid comparative analysis of various properties of mining situations. The technique is also useful to determine the sensitivities of cost figures to incremental changes in mining factors or deposit characteristics

  18. New method and installation for rapid determination of radon diffusion coefficient in various materials

    International Nuclear Information System (INIS)

    Tsapalov, Andrey; Gulabyants, Loren; Livshits, Mihail; Kovler, Konstantin

    2014-01-01

    The mathematical apparatus and the experimental installation for the rapid determination of the radon diffusion coefficient in various materials are developed. A single test lasts no longer than 18 h and allows testing numerous materials, such as gaseous and liquid media, as well as soil, concrete and radon-proof membranes, in which the diffusion coefficient of radon may vary over an extremely wide range, from 1·10⁻¹² to 5·10⁻⁵ m²/s. The uncertainty of the radon diffusion coefficient estimate depends on the permeability of the sample and varies from about 5% (for the most permeable materials) to 40% (for less permeable materials, such as radon-proof membranes). - Highlights: • A new method and installation for determination of the radon diffusion coefficient D are developed. • The measured D-values vary over an extremely wide range, from 5×10⁻⁵ to 1×10⁻¹² m²/s. • The materials include water, air, soil, building materials and radon-proof membranes. • The duration of a single test does not exceed 18 h. • The measurement uncertainty varies from 5% (in permeable materials) to 40% (in radon gas barriers)

  19. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  20. Limitations of the time slide method of background estimation

    International Nuclear Information System (INIS)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis

    2010-01-01

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
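
    The time-slide idea can be sketched as follows, assuming trigger lists of event times from two detectors. The stride between slides must exceed the coincidence window and, per the paper's caveat, should also exceed the fluctuation time scales of the data; the function names here are illustrative.

    ```python
    def count_coincidences(times_a, times_b, window):
        """Count pairs (a, b) with |a - b| <= window (simple O(n*m) scan)."""
        return sum(1 for a in times_a for b in times_b if abs(a - b) <= window)

    def timeslide_background(times_a, times_b, window, n_slides, stride):
        """Estimate the mean accidental coincidence count by time-shifting
        detector B's triggers by multiples of `stride` and counting the
        coincidences that survive each unphysical shift."""
        counts = []
        for k in range(1, n_slides + 1):
            shifted = [b + k * stride for b in times_b]
            counts.append(count_coincidences(times_a, shifted, window))
        return sum(counts) / n_slides
    ```

    As the abstract notes, the variance of this background estimate saturates with the number of slides, so adding ever more shifts stops improving the precision.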

  1. Limitations of the time slide method of background estimation

    Energy Technology Data Exchange (ETDEWEB)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis, E-mail: mwas@lal.in2p3.f [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France)

    2010-10-07

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.

  2. A new DOD and DOA estimation method for MIMO radar

    Science.gov (United States)

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2018-04-01

    The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on an oblique projection operator and Toeplitz matrix reconstruction is proposed. Through Toeplitz matrix reconstruction, non-stationary noise is transformed into Gaussian white noise, and the oblique projection operator is then used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation accuracy and source overload.
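
    One common form of Toeplitz matrix reconstruction, averaging a sample covariance matrix along its diagonals, can be sketched as below. This illustrates the general operation, not necessarily the exact reconstruction operator used in the paper.

    ```python
    import numpy as np

    def toeplitzify(R):
        """Project a square covariance matrix onto the Toeplitz set by
        averaging each diagonal; a common reconstruction step used to
        restore the structure an ideal array covariance would have."""
        n = R.shape[0]
        T = np.zeros_like(R)
        for k in range(-(n - 1), n):
            d = np.diagonal(R, offset=k).mean()   # mean of the k-th diagonal
            idx = np.arange(max(0, -k), min(n, n - k))
            T[idx, idx + k] = d                   # write it back on that diagonal
        return T
    ```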

  3. Review of methods for level density estimation from resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-01-01

    A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)

  4. Dental age estimation using Willems method: A digital orthopantomographic study

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed

    2014-01-01

    Full Text Available In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Development of the mandibular teeth (from central incisor to second molar) in the left quadrant was assessed and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years, which is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular tooth development can be used to generate a mean DA using the Willems method, as well as an estimated age range for an individual of unknown CA.

  5. NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION

    International Nuclear Information System (INIS)

    Brown, Robert A.; Soummer, Remi

    2010-01-01

    We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade fuel.

  6. COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY

    Directory of Open Access Journals (Sweden)

    V. L. Adzhienko

    2014-01-01

    Full Text Available The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict a financial crisis in a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.

  7. ACCELERATED METHODS FOR ESTIMATING THE DURABILITY OF PLAIN BEARINGS

    Directory of Open Access Journals (Sweden)

    Myron Czerniec

    2014-09-01

    Full Text Available The paper presents methods for determining the durability of slide bearings. The developed methods speed up the calculation by a factor of up to 100,000 compared with the accurate solution obtained with the generalized cumulative model of wear. The paper determines the accuracy of the estimated bearing durability as a function of the size of the blocks of constant contact-interaction conditions between a shaft with small out-of-roundness and a bush with a circular contour. An approximate dependence is given for determining accurate durability using either a more accurate or an additional method.

  8. The scope of application of incremental rapid prototyping methods in foundry engineering

    Directory of Open Access Journals (Sweden)

    M. Stankiewicz

    2010-01-01

    Full Text Available The article presents the scope of application of selected incremental Rapid Prototyping methods in the process of manufacturing casting models, casting moulds and casts. The Rapid Prototyping methods (SL, SLA, FDM, 3DP, JS) are predominantly used for the production of models and model sets for casting moulds. The Rapid Tooling methods, such as ZCast-3DP, ProMetalRCT and VoxelJet, enable the fabrication of casting moulds in an incremental process. The application of the RP methods in cast production makes it possible to speed up the prototype preparation process. This is particularly vital for elements of complex shapes. The time required to manufacture the model, the mould and the cast proper may vary from a few to several dozen hours.

  9. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    Science.gov (United States)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method for estimating the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen equals the ratio of the gas solubilities of nitrogen and oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could theoretically be carried out at higher pressures and elevated
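
    A minimal numeric sketch of the estimation step described above. The solubility coefficient and the nitrogen-to-oxygen ratio below are illustrative assumptions, not values from the article.

    ```python
    def dissolved_air(p_o2, henry_o2, n2_o2_ratio=1.8):
        """Estimate total dissolved air (volume of gas per volume of fluid)
        from the measured O2 partial pressure.

        p_o2        : measured partial pressure of O2 in the fluid (kPa)
        henry_o2    : assumed solubility coefficient for O2 in the fluid
                      (volume fraction per kPa); illustrative value only
        n2_o2_ratio : assumed ratio of dissolved N2 to dissolved O2,
                      standing in for the solubility ratio used above
        """
        v_o2 = henry_o2 * p_o2      # dissolved oxygen
        v_n2 = v_o2 * n2_o2_ratio   # dissolved nitrogen, scaled from O2
        return v_o2 + v_n2
    ```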

  10. Estimation of body fluids with bioimpedance spectroscopy: state of the art methods and proposal of novel methods

    International Nuclear Information System (INIS)

    Buendia, R; Seoane, F; Lindecrantz, K; Bosaeus, I; Gil-Pita, R; Johannsson, G; Ellegård, L; Ward, L C

    2015-01-01

    Determination of body fluids is a common practice in the study of disease mechanisms and treatments. Bioimpedance spectroscopy (BIS) methods are non-invasive, inexpensive and rapid alternatives to reference methods such as tracer dilution. However, they are indirect and their robustness and validity are unclear. In this article, state of the art methods are reviewed, their drawbacks identified and new methods proposed. All methods were tested on a clinical database of patients receiving growth hormone replacement therapy. Results indicated that most BIS methods are similarly accurate (e.g. <0.5 ± 3.0% mean percentage difference for total body water) for estimation of body fluids. A new model for calculation is proposed that performs equally well for all fluid compartments (total body water, extra- and intracellular water). It is suggested that the main source of error in extracellular water estimation is anisotropy, in total body water estimation the uncertainty associated with intracellular resistivity, and in determination of intracellular water a combination of both. (paper)

  11. Magnetic susceptibility: a proxy method of estimating increased pollution

    International Nuclear Information System (INIS)

    Kluciarova, D.; Gregorova, D.; Tunyi, I.

    2004-01-01

    A need for rapid and inexpensive (proxy) methods of outlining areas exposed to increased pollution by atmospheric particulates of industrial origin has led scientists in various fields to use and validate different non-traditional (non-chemical) techniques. Among them, soil magnetometry seems to be a suitable tool. The method is based on the knowledge that ferrimagnetic particles, namely magnetite, are produced from pyrite during the combustion of fossil fuel. Besides combustion processes, magnetic particles can also originate from road traffic, for example, or can be carried in various waste-water outlets. In our study we examine magnetic susceptibility as a convenient, rapid and non-destructive measure of the concentration of (ferri)magnetic minerals. Measurements were made with a KLY-2 Kappabridge. The concentration of ferrimagnetic minerals in different soils is linked to pollution sources. Higher χ values were observed in soils in the territory of Istebne (47 383 × 10⁻⁶ SI). The susceptibility anomaly may be caused by particular geological circumstances and can be related to a high content of ferromagnetic minerals in the host rocks. Positive correlations of magnetic susceptibility are conditioned by industrial contamination, mainly from metal-working factories and from traffic. The proposed method can be successfully applied in determining heavy-metal pollution of soils in city territories. (authors)

  12. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
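
    The relationship "ORR is obtained from OLS by adding a shrinkage term", which the abstract extends to derive MUR from URR, can be sketched directly. This shows the standard Hoerl-Kennard estimator, not the MUR estimator itself.

    ```python
    import numpy as np

    def ols(X, y):
        """Ordinary least squares: solves (X'X) b = X'y."""
        return np.linalg.solve(X.T @ X, X.T @ y)

    def ridge(X, y, k):
        """Ordinary ridge regression (Hoerl-Kennard): (X'X + kI)^{-1} X'y.
        k = 0 recovers OLS; k > 0 shrinks the coefficients to stabilise
        them under multicollinearity, at the cost of bias."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    ```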

  13. A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media

    Science.gov (United States)

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...

  14. Methods to estimate irrigated reference crop evapotranspiration - a review.

    Science.gov (United States)

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of the crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in determining the water requirements of crops and in irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed ones, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools for estimating the evapotranspiration and water requirement of crops, which is essential information for designing or choosing the best water management practices. In this paper the most commonly used models/approaches, which are suitable for estimating the daily water requirement of agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
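
    As one concrete example of the empirical temperature-based models such reviews cover, the Hargreaves-Samani formulation can be sketched as follows. The coefficient 0.0023 and the offset 17.8 are the commonly quoted values for this equation; treat them as indicative rather than authoritative here.

    ```python
    def hargreaves_et0(t_mean, t_max, t_min, ra):
        """Hargreaves-Samani reference evapotranspiration (mm/day).

        t_mean, t_max, t_min : daily mean, max and min air temperature (degrees C)
        ra : extraterrestrial radiation expressed in mm/day of
             equivalent evaporation (depends on latitude and day of year)
        """
        return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5
    ```

    Temperature-based formulas like this trade accuracy for data availability; the physically based alternatives the review mentions (e.g. Penman-Monteith) additionally require radiation, humidity and wind data.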

  15. A simple method for estimation of phosphorous in urine

    International Nuclear Information System (INIS)

    Chaudhary, Seema; Gondane, Sonali; Sawant, Pramilla D.; Rao, D.D.

    2016-01-01

    Following internal contamination with 32P, it is preferentially eliminated from the body in urine. It is estimated by in-situ precipitation of ammonium molybdo-phosphate (AMP) in urine followed by gross beta counting. The amount of AMP formed in-situ depends on the amount of stable phosphorous (P) present in the urine; hence, it was essential to generate information regarding the urinary excretion of stable P. If the amount of P excreted is significant, the amount of AMP formed would correspondingly increase, leading to absorption of some of the β particles. The present study was taken up for the estimation of daily urinary excretion of P using the phospho-molybdate spectrophotometry method. A few urine samples received from radiation workers were analyzed and, based on the observed range of stable P in urine, the volume of sample required for 32P estimation was finalized.

  16. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
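
    The double-integration step in such a pipeline can be sketched as follows. This is a deliberately naive version (our own, not one of the benchmarked schemes) that removes linear velocity drift using the zero-velocity assumption at stride boundaries:

```python
import numpy as np

def integrate_stride(acc, fs):
    """Naive double integration of gravity-free acceleration over one
    stride, with linear velocity de-drifting that assumes the foot is
    at rest at stride start and end (zero-velocity assumption)."""
    dt = 1.0 / fs
    vel = np.cumsum(acc) * dt                       # first integration
    # Remove linear drift so that vel[0] == vel[-1] == 0
    drift = np.linspace(vel[0], vel[-1], len(vel))
    vel = vel - drift
    pos = np.cumsum(vel) * dt                       # second integration
    return vel, pos

fs = 100.0
t = np.arange(0, 1.0, 1 / fs)
acc = np.sin(2 * np.pi * t)   # toy forward acceleration profile
vel, pos = integrate_stride(acc, fs)
assert abs(vel[-1]) < 1e-9    # de-drifting enforces rest at stride end
```

    The schemes compared in the paper refine exactly these two stages: how orientation (and hence gravity removal) is estimated, and how integration drift is suppressed.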

  17. A method to estimate stellar ages from kinematical data

    Science.gov (United States)

    Almeida-Fernandes, F.; Rocha-Pinto, H. J.

    2018-05-01

    We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS) that contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the stellar age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for the estimation of rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, which is an M8V star, obtaining the age of t(UVW) = 12.50(+0.29 - 6.23) Gyr.
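
    The two point estimates mentioned, the most likely age and the expected age, can be read off a gridded PDF as below; the Gaussian-shaped toy PDF is ours, not the kinematical PDF of the paper:

```python
import numpy as np

def summarize_age_pdf(ages, pdf):
    """Two point estimates from an age PDF sampled on a grid:
    the most likely age (mode) and the expected age (mean)."""
    dx = ages[1] - ages[0]
    pdf = pdf / (pdf.sum() * dx)                 # normalize to unit area
    mode = ages[np.argmax(pdf)]                  # most likely age
    expected = (ages * pdf).sum() * dx           # probability-weighted mean
    return mode, expected

ages = np.linspace(0.0, 14.0, 1401)              # Gyr grid
pdf = np.exp(-0.5 * ((ages - 5.0) / 2.0) ** 2)   # toy bell-shaped PDF
mode, expected = summarize_age_pdf(ages, pdf)
assert abs(mode - 5.0) < 0.02
```

    For strongly asymmetric kinematical PDFs, as in the Trappist-1 example, the two estimates can differ substantially, which is why the paper reports asymmetric uncertainties.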

  18. Comparison of different methods for estimation of potential evapotranspiration

    International Nuclear Information System (INIS)

    Nazeer, M.

    2010-01-01

    Evapotranspiration can be estimated with different available methods. The aim of this research study is to compare and evaluate the originally measured potential evapotranspiration from a Class A pan against the Hargreaves equation, the Penman equation, the Penman-Monteith equation, and the FAO56 Penman-Monteith equation. The evaporation rate recorded from the pan was greater than that of the stated methods. For each evapotranspiration method, results were compared against mean monthly potential evapotranspiration (PET) from pan data according to FAO (ET_o = K_pan x E_pan), computed from daily measured data of twenty-five years (1984-2008). On the basis of statistical analysis, the differences between the pan data and the FAO56 Penman-Monteith method are not considered to be very significant (R^2 = 0.98) at 95% confidence and prediction intervals. All methods require accurate weather data for precise results; for the purpose of this study, the past twenty-five years of data were analyzed and used, including maximum and minimum air temperature, relative humidity, wind speed, sunshine duration and rainfall. Based on linear regression analysis, the FAO56 PMM ranked first (R^2 = 0.98), followed by the Hargreaves method (R^2 = 0.96), the Penman-Monteith method (R^2 = 0.94) and the Penman method (R^2 = 0.93). Obviously, using the FAO56 Penman-Monteith method with precise climatic variables for ET_o estimation is more reliable than the other alternative methods; Hargreaves is simpler, relies only on air temperature data, and can be used as an alternative to the FAO56 Penman-Monteith method if other climatic data are missing or unreliable. (author)
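
    The FAO pan conversion used above as the reference is a single multiplicative coefficient; a minimal sketch (the K_pan value of 0.7 is a typical illustrative figure, not the study's calibrated value):

```python
def pan_et0(e_pan, k_pan=0.7):
    """FAO pan method: ET_o = K_pan * E_pan (mm/day).
    K_pan depends on pan siting, humidity and wind; 0.7 is a
    typical mid-range value for a Class A pan (an assumption here)."""
    return k_pan * e_pan

# 8 mm/day of pan evaporation -> 5.6 mm/day reference ET
assert abs(pan_et0(8.0) - 5.6) < 1e-9
```
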

  19. Vegetation index methods for estimating evapotranspiration by remote sensing

    Science.gov (United States)

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially distributed ET estimates.

  20. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples

  1. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.
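
    The K2 association search at the heart of the method above scores candidate parent sets with a closed-form marginal likelihood. A compact sketch of that scoring function for fully observed discrete data (our own minimal implementation, not the authors' code):

```python
from math import lgamma
from collections import Counter

def k2_log_score(data, child, parents):
    """Log K2 score of `child` given candidate `parents`, for fully
    observed discrete data given as a list of dicts. Higher is better.
    Per parent configuration j: (r-1)!/(N_ij + r - 1)! * prod_k N_ijk!,
    computed in log space via lgamma."""
    child_vals = {row[child] for row in data}
    r = len(child_vals)                       # cardinality of the child
    counts = Counter()                        # N_ijk
    parent_counts = Counter()                 # N_ij
    for row in data:
        j = tuple(row[p] for p in parents)
        counts[(j, row[child])] += 1
        parent_counts[j] += 1
    score = 0.0
    for j, n_ij in parent_counts.items():
        score += lgamma(r) - lgamma(n_ij + r)
        for k in child_vals:
            score += lgamma(counts[(j, k)] + 1)
    return score

# Toy data: B copies A; C is independent of both
data = [{"A": i % 2, "B": i % 2, "C": (i % 4) // 2} for i in range(40)]
# K2 prefers the true parent A over the irrelevant C
assert k2_log_score(data, "B", ["A"]) > k2_log_score(data, "B", ["C"])
```

    The greedy K2 heuristic then adds, for each node, the parent that most improves this score until no addition helps, which is how the BN structure is built without a human expert.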

  2. A Novel Nonlinear Parameter Estimation Method of Soft Tissues

    Directory of Open Access Journals (Sweden)

    Qianqian Tong

    2017-12-01

    Full Text Available The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM). Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM) algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 Newton, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
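
    A self-adapting Levenberg–Marquardt loop of the kind described can be sketched in a few lines. This toy version (ours, not the paper's solver) fits a two-parameter exponential and adapts the damping factor after each accepted or rejected step:

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, max_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt for least squares: min ||f(p)||^2.
    The damping lam is adapted each iteration: decreased when a step
    reduces the residual, increased (and the step rejected) otherwise."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = f(p)
        J = jac(p)
        A = J.T @ J
        g = J.T @ r
        # Marquardt scaling: damp along the diagonal of J'J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        p_new = p + step
        if np.sum(f(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5    # accept, trust the model more
        else:
            lam *= 2.0                   # reject, damp harder
    return p

# Fit y = a * exp(b * x) to noiseless synthetic data (a=2, b=-1)
x = np.linspace(0, 2, 30)
y = 2.0 * np.exp(-1.0 * x)
f = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p = levenberg_marquardt(f, jac, p0=[1.0, 0.0])
assert np.allclose(p, [2.0, -1.0], atol=1e-3)
```

    In the paper, the residual function is the gap between measured and finite-element-predicted nodal displacements, and good initial parameters come from the linear model.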

  3. Hexographic Method of Complex Town-Planning Terrain Estimate

    Science.gov (United States)

    Khudyakov, A. Ju

    2017-11-01

    The article deals with the vital problem of a complex town-planning analysis based on the “hexographic” graphic-analytic method, makes a comparison with conventional terrain estimate methods and contains examples of the method's application. It discloses a procedure for the author's estimate of restrictions and the building of a mathematical model which reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential. An unlimited number of estimated factors can be used. The method can be used for the integrated assessment of urban areas. In addition, it can be used for preliminary evaluation of a territory's commercial attractiveness in the preparation of investment projects. The technique yields simple, informative graphics. Graphical interpretation is straightforward for experts, and a definite advantage is that the results are readily perceived by non-professionals as well. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also

  4. Estimating surface acoustic impedance with the inverse method.

    Science.gov (United States)

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings and in sound field simulations. Those methods require the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to define boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  5. Interpretation of the method of images in estimating superconducting levitation

    International Nuclear Information System (INIS)

    Perez-Diaz, Jose Luis; Garcia-Prada, Juan Carlos

    2007-01-01

    Among different papers devoted to superconducting levitation of a permanent magnet over a superconductor using the method of images, there is a discrepancy of a factor of two when estimating the lift force. This is not a minor matter but an interesting fundamental question that contributes to understanding the physical phenomena of 'imaging' on a superconductor surface. We solve it, make clear the physical behavior underlying it, and suggest the reinterpretation of some previous experiments

  6. New method to estimate the frequency stability of laser signals

    International Nuclear Information System (INIS)

    McFerran, J.J.; Maric, M.; Luiten, A.N.

    2004-01-01

    A frequent challenge in the scientific and commercial use of lasers is the need to determine the frequency stability of the output optical signal. In this article we present a new method to estimate this quantity while avoiding the complexity of the usual technique. The new technique displays the result in terms of the usual time domain measure of frequency stability: the square root Allan variance
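
    The time-domain measure referred to, the (square root) Allan variance, can be computed from fractional-frequency samples as below (a minimal non-overlapping estimator, our own sketch, not the new technique of the article):

```python
import numpy as np

def allan_deviation(y, tau_m=1):
    """Non-overlapping Allan deviation of fractional-frequency data y,
    averaged over bins of tau_m samples:
    sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    n = len(y) // tau_m
    ybar = np.mean(np.reshape(y[: n * tau_m], (n, tau_m)), axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(1)
y = rng.normal(size=100000)      # white frequency noise, sigma = 1
# For white FM noise the Allan deviation at the base tau equals sigma
adev = allan_deviation(y, tau_m=1)
assert abs(adev - 1.0) < 0.05
```

    Repeating the calculation for increasing tau_m traces out the stability-versus-averaging-time curve on which laser frequency stability is usually reported.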

  7. Diffeomorphic Iterative Centroid Methods for Template Estimation on Large Datasets

    OpenAIRE

    Cury, Claire; Glaunès, Joan Alexis; Colliot, Olivier

    2014-01-01

    International audience; A common approach for the analysis of anatomical variability relies on the estimation of a template representative of the population. The Large Deformation Diffeomorphic Metric Mapping (LDDMM) is an attractive framework for that purpose. However, template estimation using LDDMM is computationally expensive, which is a limitation for the study of large datasets. This paper presents an iterative method which quickly provides a centroid of the population in the shape space. This centr...

  8. Method for developing cost estimates for generic regulatory requirements

    International Nuclear Information System (INIS)

    1985-01-01

    The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost for a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.
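
    Step (5), aggregating individual plant costs over the plant lifetime into a present value, reduces to discounted summation; a minimal sketch (the discount rate and cost figures are illustrative assumptions, not values from the report):

```python
def npv_cost(annual_costs, rate=0.05, base_year=0):
    """Net present value of a stream of yearly costs, discounted to
    base_year at a real discount rate (5% is illustrative)."""
    return sum(c / (1.0 + rate) ** (t - base_year)
               for t, c in enumerate(annual_costs))

# One-time implementation cost plus three years of operating cost
total = npv_cost([100.0, 10.0, 10.0, 10.0])
assert 125.0 < total < 128.0   # 100 + 9.52 + 9.07 + 8.64
```

    Step (6) then sums such plant-level NPVs across all affected plants, plus the generic (industry-wide and NRC) costs.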

  9. Geometric estimation method for x-ray digital intraoral tomosynthesis

    Science.gov (United States)

    Li, Liang; Yang, Yao; Chen, Zhiqiang

    2016-06-01

    It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.

  10. Rapid estimation of 4DCT motion-artifact severity based on 1D breathing-surrogate periodicity

    Energy Technology Data Exchange (ETDEWEB)

    Li, Guang, E-mail: lig2@mskcc.org; Caraveo, Marshall [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Wei, Jie [Department of Computer Science, City College of New York, New York, New York 10031 (United States); Rimner, Andreas; Wu, Abraham J.; Goodman, Karyn A. [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Yorke, Ellen [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States)

    2014-11-01

    Purpose: Motion artifacts are common in patient four-dimensional computed tomography (4DCT) images, leading to an ill-defined tumor volume with large variations for radiotherapy treatment and a poor foundation with low imaging fidelity for studying respiratory motion. The authors developed a method to estimate 4DCT image quality by establishing a correlation between the severity of motion artifacts in 4DCT images and the periodicity of the corresponding 1D respiratory waveform (1DRW) used for phase binning in 4DCT reconstruction. Methods: Discrete Fourier transformation (DFT) was applied to analyze 1DRW periodicity. The breathing periodicity index (BPI) was defined as the sum of the largest five Fourier coefficients, ranging from 0 to 1. Distortional motion artifacts (excluding blurring) of cine-scan 4DCT at the junctions of adjacent couch positions around the diaphragm were classified in three categories: incomplete, overlapping, and duplicate anatomies. To quantify these artifacts, discontinuity of the diaphragm at the junctions was measured in distance and averaged along six directions in three orthogonal views. Artifacts per junction (APJ) across the entire diaphragm were calculated in each breathing phase and the phase-averaged APJ, defined as motion-artifact severity (MAS), was obtained for each patient. To make MAS independent of patient-specific motion amplitude, two new MAS quantities were defined: MAS_D is normalized to the maximum diaphragmatic displacement and MAS_V is normalized to the mean diaphragmatic velocity (the breathing period was obtained from DFT analysis of 1DRW). Twenty-six patients’ free-breathing 4DCT images and corresponding 1DRW data were studied. Results: Higher APJ values were found around midventilation and full inhalation while the lowest APJ values were around full exhalation. The distribution of MAS is close to Poisson distribution with a mean of 2.2 mm. The BPI among the 26 patients was calculated with a value

  11. Rapid estimation of 4DCT motion-artifact severity based on 1D breathing-surrogate periodicity

    International Nuclear Information System (INIS)

    Li, Guang; Caraveo, Marshall; Wei, Jie; Rimner, Andreas; Wu, Abraham J.; Goodman, Karyn A.; Yorke, Ellen

    2014-01-01

    Purpose: Motion artifacts are common in patient four-dimensional computed tomography (4DCT) images, leading to an ill-defined tumor volume with large variations for radiotherapy treatment and a poor foundation with low imaging fidelity for studying respiratory motion. The authors developed a method to estimate 4DCT image quality by establishing a correlation between the severity of motion artifacts in 4DCT images and the periodicity of the corresponding 1D respiratory waveform (1DRW) used for phase binning in 4DCT reconstruction. Methods: Discrete Fourier transformation (DFT) was applied to analyze 1DRW periodicity. The breathing periodicity index (BPI) was defined as the sum of the largest five Fourier coefficients, ranging from 0 to 1. Distortional motion artifacts (excluding blurring) of cine-scan 4DCT at the junctions of adjacent couch positions around the diaphragm were classified in three categories: incomplete, overlapping, and duplicate anatomies. To quantify these artifacts, discontinuity of the diaphragm at the junctions was measured in distance and averaged along six directions in three orthogonal views. Artifacts per junction (APJ) across the entire diaphragm were calculated in each breathing phase and the phase-averaged APJ, defined as motion-artifact severity (MAS), was obtained for each patient. To make MAS independent of patient-specific motion amplitude, two new MAS quantities were defined: MAS_D is normalized to the maximum diaphragmatic displacement and MAS_V is normalized to the mean diaphragmatic velocity (the breathing period was obtained from DFT analysis of 1DRW). Twenty-six patients’ free-breathing 4DCT images and corresponding 1DRW data were studied. Results: Higher APJ values were found around midventilation and full inhalation while the lowest APJ values were around full exhalation. The distribution of MAS is close to Poisson distribution with a mean of 2.2 mm. The BPI among the 26 patients was calculated with a value ranging from 0
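
    The BPI computation described (DFT of the 1D respiratory waveform, then summing the five largest Fourier coefficients) can be sketched as below. The normalization that keeps BPI in [0, 1] is our assumption, since the paper's exact convention is not reproduced here:

```python
import numpy as np

def breathing_periodicity_index(w):
    """BPI sketch: magnitude spectrum of the 1D respiratory waveform
    (mean removed, DC bin dropped), normalized to unit sum;
    BPI = sum of the five largest coefficients, in [0, 1]."""
    mag = np.abs(np.fft.rfft(w - np.mean(w)))[1:]
    mag = mag / np.sum(mag)
    return float(np.sum(np.sort(mag)[-5:]))

t = np.linspace(0, 60, 1500)                       # 60 s of breathing
regular = np.sin(2 * np.pi * 0.25 * t)             # perfectly periodic
rng = np.random.default_rng(2)
irregular = regular + 1.5 * rng.normal(size=t.size)  # erratic breathing
# A periodic trace concentrates its spectrum in a few bins -> higher BPI
assert breathing_periodicity_index(regular) > breathing_periodicity_index(irregular)
```

    A high BPI thus indicates a regular breathing pattern, which the study correlates with fewer 4DCT binning artifacts.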

  12. [New non-volumetric method for estimating peroperative blood loss].

    Science.gov (United States)

    Tachoires, D; Mourot, F; Gillardeau, G

    1979-01-01

    The authors have developed a new method for the estimation of peroperative blood loss by measurement of the haematocrit of a fluid obtained by diluting the blood from swabs in a known volume of isotonic saline solution. This value, referred to a nomogram, may be used to assess the volume of blood impregnating the compresses, in relation to the pre-operative or present haematocrit of the patient, by direct reading. The precision of the method is discussed. The results obtained justified its routine application in surgery in children, in patients with cardiac failure, and in all cases requiring precise compensation of peroperative blood loss.

  13. A new method to estimate genetic gain in annual crops

    Directory of Open Access Journals (Sweden)

    Flávio Breseghello

    1998-12-01

    Full Text Available The genetic gain obtained by breeding programs to improve quantitative traits may be estimated by using data from regional trials. A new statistical method for this estimate is proposed, comprising four steps: (a) joint analysis of the regional trial data using a generalized linear model to obtain adjusted genotype means and the covariance matrix of these means for the whole studied period; (b) calculation of the arithmetic mean of the adjusted genotype means, exclusively for the group of genotypes evaluated each year; (c) direct year-to-year comparison of the arithmetic means; and (d) estimation of the mean genetic gain by regression. Using the generalized least squares method, a weighted estimate of the mean genetic gain during the period is calculated. This method permits a better cancellation of genotype x year and genotype x trial/year interactions, thus resulting in more precise estimates. It can be applied to unbalanced data, allowing the estimation of genetic gain in series of multilocational trials.
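
    Step (d), regressing the year-wise means on year, can be sketched as follows (ordinary rather than generalized least squares for brevity, and with invented illustrative numbers):

```python
import numpy as np

def mean_genetic_gain(years, year_means):
    """Estimate mean annual genetic gain as the slope of a
    least-squares regression of yearly mean performance on year.
    (OLS sketch; the paper weights by the covariance matrix, i.e. GLS.)"""
    slope, _intercept = np.polyfit(years, year_means, 1)
    return slope

years = np.array([1990, 1991, 1992, 1993, 1994])
means = np.array([3.0, 3.1, 3.25, 3.3, 3.45])   # t/ha, illustrative
gain = mean_genetic_gain(years, means)
assert 0.10 < gain < 0.12                        # ~0.11 t/ha per year
```

    The generalized least squares version down-weights years whose adjusted means are poorly estimated, which is what makes the published estimator more precise on unbalanced trial series.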

  14. Rapid high temperature field test method for evaluation of geothermal calcite scale inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Asperger, R.G.

    1982-08-01

    A test method is described which allows the rapid field testing of calcite scale inhibitors in high-temperature geothermal brines. Five commercial formulations, chosen on the basis of laboratory screening tests, were tested in brines with low total dissolved solids at ca. 500°F. Four were found to be effective; of these, two were found to be capable of removing recently deposited scale. One chemical was tested in the full-flow brine line for 6 weeks. It was shown to stop a severe surface scaling problem at the well's control valve, thus proving the viability of the rapid test method. (12 refs.)

  15. A rapid method for monitoring the hydrodeoxygenation of coal-derived naphtha

    Energy Technology Data Exchange (ETDEWEB)

    Farnand, B.A.; Coulombe, S.; Smiley, G.T.; Fairbridge, C.

    1988-01-01

    A bonded polar poly(ethylene glycol) capillary column has been used for the identification and quantification of the phenolic components in synthetic crude naphthas. This provides a rapid and routine method for the determination of phenolic oxygen content with results comparable to combustion and neutron activation methods. The method is most useful in monitoring the removal of phenolic oxygen by hydroprocessing. 11 refs., 1 fig. 1 tab.

  16. SIMPLE METHOD FOR ESTIMATING POLYCHLORINATED BIPHENYL CONCENTRATIONS ON SOILS AND SEDIMENTS USING SUBCRITICAL WATER EXTRACTION COUPLED WITH SOLID-PHASE MICROEXTRACTION. (R825368)

    Science.gov (United States)

    A rapid method for estimating polychlorinated biphenyl (PCB) concentrations in contaminated soils and sediments has been developed by coupling static subcritical water extraction with solid-phase microextraction (SPME). Soil, water, and internal standards are placed in a seale...

  17. SCoPE: an efficient method of Cosmological Parameter Estimation

    International Nuclear Information System (INIS)

    Das, Santanu; Souradeep, Tarun

    2014-01-01

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis, on the one hand, help us to understand the workability of SCoPE better; on the other hand, they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
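
    Delayed rejection, the first of the accelerations mentioned, retries a rejected move with a second proposal and a corrected acceptance ratio. Below is a one-dimensional sketch of the Tierney–Mira two-stage rule with Gaussian random-walk proposals (our own toy, far simpler than SCoPE itself):

```python
import numpy as np

def dr_metropolis(logp, x0, n, scale=1.0, rng=None):
    """Random-walk Metropolis with one delayed-rejection stage.
    After a first-stage rejection, a smaller second proposal is tried
    using the Tierney-Mira two-stage acceptance ratio."""
    rng = rng or np.random.default_rng()
    q1 = lambda a, b: np.exp(-0.5 * ((b - a) / scale) ** 2)  # RW kernel
    x, chain = float(x0), []
    for _ in range(n):
        y1 = x + scale * rng.normal()
        a1 = min(1.0, np.exp(logp(y1) - logp(x)))
        if rng.random() < a1:
            x = y1                                  # stage-1 acceptance
        else:
            y2 = x + 0.5 * scale * rng.normal()     # smaller second try
            a1_rev = min(1.0, np.exp(logp(y1) - logp(y2)))
            num = np.exp(logp(y2) - logp(x)) * q1(y2, y1) * (1.0 - a1_rev)
            den = q1(x, y1) * (1.0 - a1)
            if den > 0.0 and rng.random() < min(1.0, num / den):
                x = y2                              # stage-2 acceptance
        chain.append(x)
    return np.array(chain)

logp = lambda x: -0.5 * x * x      # standard normal target
chain = dr_metropolis(logp, 3.0, 20000, rng=np.random.default_rng(3))
samples = chain[5000:]             # discard burn-in
assert abs(np.mean(samples)) < 0.2
assert 0.8 < np.std(samples) < 1.2
```

    The second-stage correction preserves detailed balance while raising the overall acceptance rate, which is the effect SCoPE exploits on the cosmological posterior.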

  18. Methods for estimating low-flow statistics for Massachusetts streams

    Science.gov (United States)

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. 
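
The drainage-area ratio method lends itself to a one-line computation; the sketch below (the function name and the illustrative numbers are ours, not from the report) also enforces the 0.3-1.5 ratio range recommended above.

```python
def drainage_area_ratio(q_index, a_index, a_ungaged):
    """Transfer a low-flow statistic from a gaged index site to an ungaged
    site by scaling with the ratio of drainage areas. Per the report, this
    is recommended only when the ratio lies between 0.3 and 1.5."""
    ratio = a_ungaged / a_index
    if not 0.3 <= ratio <= 1.5:
        raise ValueError("drainage-area ratio %.2f outside 0.3-1.5 range" % ratio)
    return q_index * ratio

# Hypothetical example: 7-day, 10-year low flow of 2.4 cfs at a 100 sq mi
# index site, transferred to an 80 sq mi ungaged site -> 1.92 cfs
q_ungaged = drainage_area_ratio(2.4, 100.0, 80.0)
```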

  19. Estimation methods for process holdup of special nuclear materials

    International Nuclear Information System (INIS)

    Pillay, K.K.S.; Picard, R.R.; Marshall, R.S.

    1984-06-01

    The US Nuclear Regulatory Commission sponsored a research study at the Los Alamos National Laboratory to explore the possibilities of developing statistical estimation methods for materials holdup at highly enriched uranium (HEU)-processing facilities. Attempts at using historical holdup data from processing facilities and selected holdup measurements at two operating facilities confirmed the need for high-quality data and reasonable control over process parameters in developing statistical models for holdup estimations. A major effort was therefore directed at conducting large-scale experiments to demonstrate the value of statistical estimation models from experimentally measured data of good quality. Using data from these experiments, we developed statistical models to estimate residual inventories of uranium in large process equipment and facilities. Some of the important findings of this investigation are the following: prediction models for the residual holdup of special nuclear material (SNM) can be developed from good-quality historical data on holdup; holdup data from several of the equipment used at HEU-processing facilities, such as air filters, ductwork, calciners, dissolvers, pumps, pipes, and pipe fittings, readily lend themselves to statistical modeling of holdup; holdup profiles of process equipment such as glove boxes, precipitators, and rotary drum filters can change with time; therefore, good estimation of residual inventories in these types of equipment requires several measurements at the time of inventory; although measurement of residual holdup of SNM in large facilities is a challenging task, reasonable estimates of the hidden inventories of holdup to meet the regulatory requirements can be accomplished through a combination of good measurements and the use of statistical models. 44 references, 62 figures, 43 tables

  20. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  1. Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus).

    Science.gov (United States)

    Stark, Danica J; Vaughan, Ian P; Ramirez Saldivar, Diana A; Nathan, Senthilvel K S S; Goossens, Benoit

    2017-01-01

    The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24-165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes.
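
The grid-cell method mentioned above can be sketched compactly: overlay a square grid on the GPS fixes and sum the area of occupied cells. The function below is a minimal illustration (the names, cell size and coordinates are our assumptions, not from the study).

```python
def grid_cell_home_range(fixes, cell_size=100.0):
    """Grid-cell home-range estimate: count the distinct grid cells that
    contain at least one GPS fix and convert the occupied area to hectares.
    `fixes` is a list of (x, y) coordinates in metres; `cell_size` is the
    grid spacing in metres."""
    occupied = {(int(x // cell_size), int(y // cell_size)) for x, y in fixes}
    area_m2 = len(occupied) * cell_size ** 2
    return area_m2 / 10_000.0  # square metres -> hectares
```

Note the method's sensitivity to fix loss: every cell counts equally, so dropping fixes in sparsely visited cells directly shrinks the estimate, which is one reason the study found biased random bridges more robust.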

  2. Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus).

    Directory of Open Access Journals (Sweden)

    Danica J Stark

    Full Text Available The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24-165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes.

  3. Rapid analysis method for the determination of 14C specific activity in irradiated graphite.

    Directory of Open Access Journals (Sweden)

    Vidmantas Remeikis

    Full Text Available 14C is one of the limiting radionuclides used in the categorization of radioactive graphite waste; this categorization is crucial in selecting the appropriate graphite treatment/disposal method. We propose a rapid analysis method for 14C specific activity determination in small graphite samples in the 1-100 μg range. The method applies an oxidation procedure to the sample, which extracts 14C from the different carbonaceous matrices in a controlled manner. Because this method enables fast online measurement and 14C specific activity evaluation, it can be especially useful for characterizing 14C in irradiated graphite when dismantling graphite moderator and reflector parts, or when sorting radioactive graphite waste from decommissioned nuclear power plants. The proposed rapid method is based on graphite combustion and the subsequent measurement of both CO2 and 14C, using a commercial elemental analyser and the semiconductor detector, respectively. The method was verified using the liquid scintillation counting (LSC) technique. The uncertainty of this rapid method is within the acceptable range for radioactive waste characterization purposes. The 14C specific activity determination procedure proposed in this study takes approximately ten minutes, comparing favorably to the more complicated and time consuming LSC method. This method can be potentially used to radiologically characterize radioactive waste or used in biomedical applications when dealing with the specific activity determination of 14C in the sample.

  4. Rapid analysis method for the determination of 14C specific activity in irradiated graphite.

    Science.gov (United States)

    Remeikis, Vidmantas; Lagzdina, Elena; Garbaras, Andrius; Gudelis, Arūnas; Garankin, Jevgenij; Plukienė, Rita; Juodis, Laurynas; Duškesas, Grigorijus; Lingis, Danielius; Abdulajev, Vladimir; Plukis, Artūras

    2018-01-01

    14C is one of the limiting radionuclides used in the categorization of radioactive graphite waste; this categorization is crucial in selecting the appropriate graphite treatment/disposal method. We propose a rapid analysis method for 14C specific activity determination in small graphite samples in the 1-100 μg range. The method applies an oxidation procedure to the sample, which extracts 14C from the different carbonaceous matrices in a controlled manner. Because this method enables fast online measurement and 14C specific activity evaluation, it can be especially useful for characterizing 14C in irradiated graphite when dismantling graphite moderator and reflector parts, or when sorting radioactive graphite waste from decommissioned nuclear power plants. The proposed rapid method is based on graphite combustion and the subsequent measurement of both CO2 and 14C, using a commercial elemental analyser and the semiconductor detector, respectively. The method was verified using the liquid scintillation counting (LSC) technique. The uncertainty of this rapid method is within the acceptable range for radioactive waste characterization purposes. The 14C specific activity determination procedure proposed in this study takes approximately ten minutes, comparing favorably to the more complicated and time consuming LSC method. This method can be potentially used to radiologically characterize radioactive waste or used in biomedical applications when dealing with the specific activity determination of 14C in the sample.

  5. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behaviour: parameter values are obtained by minimizing a criterion describing the total squared error between the state-vector coordinates and precisely observed values at different points in time. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving such problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. An algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two parameter estimation problems, differing in their mathematical models, are presented. In the first example, a linear mathematical model describes changes in the parameters of a chemical reaction; in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each example, calculation results from all three optimization methods are given, together with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The deduced approximate parameter values differ only slightly from the best known solutions, which were obtained differently. To refine the results, one should apply hybrid schemes that combine classical optimization methods of zero, first and second orders and ...
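
The estimation scheme described above (minimize the total squared error of model outputs against observations, under box constraints) can be illustrated with a toy example. The sketch below uses a simple scatter-and-contract random search in the spirit of "Big Bang - Big Crunch" (not the authors' implementation) to recover the decay rate of dy/dt = -k·y from noiseless observations; the model, data and settings are our illustrative choices.

```python
import math
import random

def simulate(k, t_obs, y0=1.0):
    # Analytic solution of dy/dt = -k*y, standing in for a numerical ODE solver
    return [y0 * math.exp(-k * t) for t in t_obs]

def sq_error(k, t_obs, y_obs):
    # Total squared error between model output and observations
    return sum((m - o) ** 2 for m, o in zip(simulate(k, t_obs), y_obs))

def big_crunch_search(t_obs, y_obs, lo=0.0, hi=2.0, n_iter=60, pop=30, seed=0):
    """Toy 'Big Bang - Big Crunch'-style search: scatter candidates at random
    (big bang), contract toward the current best (big crunch), and repeat with
    a shrinking radius. The box [lo, hi] mimics the parallelepiped bound."""
    rng = random.Random(seed)
    best = min((rng.uniform(lo, hi) for _ in range(pop)),
               key=lambda k: sq_error(k, t_obs, y_obs))
    radius = (hi - lo) / 2
    for _ in range(n_iter):
        cands = [min(max(best + rng.gauss(0, radius), lo), hi) for _ in range(pop)]
        best = min(cands + [best], key=lambda k: sq_error(k, t_obs, y_obs))
        radius *= 0.9  # contraction step
    return best
```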

  6. Priors engaged in long-latency responses to mechanical perturbations suggest a rapid update in state estimation.

    Directory of Open Access Journals (Sweden)

    Frédéric Crevecoeur

    Full Text Available In every motor task, our brain must handle external forces acting on the body. For example, riding a bike on cobblestones or skating on irregular surface requires us to appropriately respond to external perturbations. In these situations, motor predictions cannot help anticipate the motion of the body induced by external factors, and direct use of delayed sensory feedback will tend to generate instability. Here, we show that to solve this problem the motor system uses a rapid sensory prediction to correct the estimated state of the limb. We used a postural task with mechanical perturbations to address whether sensory predictions were engaged in upper-limb corrective movements. Subjects altered their initial motor response in ∼60 ms, depending on the expected perturbation profile, suggesting the use of an internal model, or prior, in this corrective process. Further, we found trial-to-trial changes in corrective responses indicating a rapid update of these perturbation priors. We used a computational model based on Kalman filtering to show that the response modulation was compatible with a rapid correction of the estimated state engaged in the feedback response. Such a process may allow us to handle external disturbances encountered in virtually every physical activity, which is likely an important feature of skilled motor behaviour.
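
The Kalman-filter correction underlying the model can be shown in scalar form. This is a generic textbook update, not the authors' specific limb-state model: the prior is the predicted limb state, and the measurement is the delayed sensory feedback.

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman correction: blend the predicted state (prior, with
    variance p_prior) with a sensory measurement z of variance r.
    Returns the corrected state estimate and its reduced variance."""
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior) # state corrected toward measurement
    p_post = (1 - k) * p_prior           # uncertainty shrinks after the update
    return x_post, p_post
```

With equal prior and measurement variances the gain is 0.5, so the estimate lands halfway between prediction and feedback; a sharper prior (smaller p_prior) pulls the estimate toward the prediction, mirroring the prior-dependent response modulation reported above.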

  7. A novel method of rapidly modeling optical properties of actual photonic crystal fibres

    International Nuclear Information System (INIS)

    Li-Wen, Wang; Shu-Qin, Lou; Wei-Guo, Chen; Hong-Lei, Li

    2010-01-01

    The flexible structure of photonic crystal fibre not only offers novel optical properties but also brings difficulties in maintaining the fibre structure during the fabrication process, which inevitably causes the optical properties of the resulting fibre to deviate from the designed ones. Therefore, a method of evaluating the optical properties of the actual fibre is necessary for application purposes. Up to now, the methods employed to measure the properties of an actual photonic crystal fibre have often required long fibre samples or complex, expensive equipment. To our knowledge, there are few studies on modelling an actual photonic crystal fibre and evaluating its properties rapidly. In this paper, a novel method, based on a combination of digital image processing and the finite element method, is proposed to rapidly model the optical properties of an actual photonic crystal fibre. Two kinds of photonic crystal fibres made by Crystal Fiber A/S are modelled. The numerical results confirm that the proposed method is simple, rapid and accurate for evaluating the optical properties of an actual photonic crystal fibre without requiring complex equipment. (rapid communication)

  8. [Experimental rationale for the parameters of a rapid method for oxidase activity determination].

    Science.gov (United States)

    Butorina, N N

    2010-01-01

    Experimental rationale is provided for the parameters of a rapid (1-2 min) test to concurrently determine the oxidase activity of all bacteria grown on a membrane filter after water filtration. It was first ascertained that the oxidase reagents, aqueous solutions of tetramethyl-p-phenylenediamine dihydrochloride and dimethyl-p-phenylenediamine dihydrochloride, exert no effect on the viability and enzymatic activity of bacteria after one hour of contact. The algorithm of the rapid oxidase activity test has been improved: the allowable time for bacteria to be in contact with the oxidase reagents, and procedures for minimizing the effect on bacterial biochemical activity following the contact. An accelerated method based on lactose medium with tergitol 7 and Endo agar has been devised to determine coliform bacteria by applying the rapid oxidase test; the time to a final response is 18-24 hours. The method has been included in GOST 52426-2005.

  9. Ambit determination method in estimating rice plant population density

    Directory of Open Access Journals (Sweden)

    Abu Bakar, B.,

    2017-11-01

    Full Text Available Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density is monitored to ensure that correct crop management decisions are taken. The conventional method of determining plant population is to manually count the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time consuming, labour intensive and costly. An alternative fast estimation method was developed to overcome this issue. The method relies on measuring the outer circumference, or ambit, of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that square frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation was obtained for both 50 DAS and 70 DAS. The model was then verified by taking 100 samples with the latching strap for 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.
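
The ambit idea reduces to calibrating a simple regression of tiller count against the measured circumference. The sketch below uses hypothetical calibration pairs (the numbers are illustrative, not the MR219 field data).

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical calibration pairs: (ambit in cm, hand-counted tillers)
ambit   = [40.0, 55.0, 60.0, 72.0, 80.0]
tillers = [12.0, 18.0, 20.0, 25.0, 28.0]
a, b = fit_line(ambit, tillers)

def predict(circumference_cm):
    # Estimated tiller count for a new frame from its measured ambit
    return a + b * circumference_cm
```

In practice a separate calibration would be fitted per sampling day (50 DAS and 70 DAS), since tiller geometry changes as the crop matures.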

  10. The Software Cost Estimation Method Based on Fuzzy Ontology

    Directory of Open Access Journals (Sweden)

    Plecka Przemysław

    2014-12-01

    Full Text Available In the course of the sales process for Enterprise Resource Planning (ERP) systems, it often turns out that the standard system must be extended or changed (modified) according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional work. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as the stage of trade talks. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding this cost, or the margin of safety. One method that gives more accurate results at the stage of trade talks is based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only information about the value of the work, but also about the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.

  11. Stress estimation in reservoirs using an integrated inverse method

    Science.gov (United States)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate this initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
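
As a toy version of the inversion loop, the sketch below fits a single linear stress gradient to hypothetical wellbore picks, with a plain random search standing in for CMA-ES and the analytic forward model standing in for the finite element solve; all numbers and names are illustrative, not from the study.

```python
import random

# Hypothetical wellbore stress picks (depth in m, minimum stress in MPa),
# standing in for values deduced from leak-off tests and breakouts
data = [(500.0, 8.2), (1200.0, 19.9), (2100.0, 34.5)]

def misfit(grad):
    # Squared mismatch between the modelled stress grad*z and the picks;
    # in the real method this would be a full FEM forward solve
    return sum((grad * z - s) ** 2 for z, s in data)

def random_search(lo=0.0, hi=0.05, n=2000, seed=3):
    """Stochastic optimiser (a crude stand-in for CMA-ES) choosing the
    stress gradient in MPa/m that best fits the wellbore observations."""
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(n)), key=misfit)
```

The real inversion optimises many parameters at once (piecewise linear boundary tractions per face), which is exactly where CMA-ES's covariance adaptation pays off over naive random search.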

  12. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
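
The described procedure can be reproduced in one dimension: Fourier-transform the image, regress the log squared norm against squared frequency, recover the Gaussian PSF width from the slope, and evaluate the MTF. The sketch below uses a point source blurred by a known Gaussian PSF so the recovery can be checked (a naive DFT keeps it dependency-free; real images would additionally need noise handling and frequency-range selection).

```python
import cmath
import math

def dft_mag2(x):
    # Naive O(n^2) DFT magnitude-squared; fine for a short demo signal
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * k * m / n)
                    for k in range(n))) ** 2 for m in range(n)]

def estimate_sigma(image_1d):
    """Regress log|F|^2 against squared spatial frequency; for a Gaussian
    PSF the plot is linear with slope -4*pi^2*sigma^2."""
    n = len(image_1d)
    mag2 = dft_mag2(image_1d)
    # Low frequencies only, away from the numerical noise floor
    u = [(m / n) ** 2 for m in range(1, 11)]
    v = [math.log(mag2[m]) for m in range(1, 11)]
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    slope = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
            sum((a - mu) ** 2 for a in u)
    return math.sqrt(-slope / (4 * math.pi ** 2))

def mtf(sigma, f):
    # MTF of a Gaussian PSF at spatial frequency f (cycles/pixel)
    return math.exp(-2 * math.pi ** 2 * sigma ** 2 * f ** 2)

# Point source blurred by a Gaussian PSF (sigma = 2 pixels)
n, sigma_true = 64, 2.0
image = [math.exp(-((k - n // 2) ** 2) / (2 * sigma_true ** 2)) for k in range(n)]
sigma_est = estimate_sigma(image)
```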

  13. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated with two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
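
The simplest possible 1d-decomposition is a product of per-dimension kernel density estimates, i.e., a full-independence assumption. The sketch below shows that special case only, not the paper's flexible decomposition framework; every estimation is performed in 1d, so the curse of dimensionality never enters.

```python
import math

def kde_1d(samples, bandwidth=0.5):
    """Gaussian kernel density estimator in one dimension."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

def product_density(data):
    """Estimate a d-dimensional density as the product of 1d KDEs fitted
    to each coordinate -- the full-independence 1d-decomposition."""
    marginals = [kde_1d(list(col)) for col in zip(*data)]
    return lambda point: math.prod(p(x) for p, x in zip(marginals, point))
```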

  14. Methods for cost estimation in software project management

    Science.gov (United States)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which the processes used in the software development field have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but a group of scientists holds that it can be solved using already-known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in the research area of estimating overall project costs, together with a description of the existing software development process models. The last part proposes a basic mathematical model for genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking the software product life cycle as a basis along with the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and product life cycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
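
A minimal genetic algorithm for this kind of cost model can be sketched as follows: it fits the two parameters of a basic COCOMO-style curve, effort = a * KLOC**b, to synthetic project data. The data, operators and settings are our illustrative choices, not the paper's genetic-programming formulation.

```python
import random

# Hypothetical project data: (size in KLOC, effort in person-months),
# generated here from the basic COCOMO organic-mode curve 2.4 * KLOC**1.05
projects = [(k, 2.4 * k ** 1.05) for k in (5, 10, 20, 50)]

def fitness(ind):
    # Negated sum of squared prediction errors (higher is better)
    a, b = ind
    return -sum((a * k ** b - e) ** 2 for k, e in projects)

def evolve(gens=150, pop_size=40, seed=7):
    """Minimal genetic algorithm: truncation selection of the top quarter,
    then Gaussian mutation of the (a, b) chromosome to refill the population."""
    rng = random.Random(seed)
    pop = [(rng.uniform(1, 5), rng.uniform(0.5, 1.5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]
        pop = elite + [(max(0.1, p[0] + rng.gauss(0, 0.1)),
                        max(0.1, p[1] + rng.gauss(0, 0.02)))
                       for p in elite for _ in range(3)]
    return max(pop, key=fitness)
```

A crossover operator and a tree-shaped chromosome (as in true genetic programming) would be the natural next steps; this sketch keeps only selection and mutation.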

  15. A rapid method for screening arrayed plasmid cDNA library by PCR

    International Nuclear Information System (INIS)

    Hu Yingchun; Zhang Kaitai; Wu Dechang; Li Gang; Xiang Xiaoqiong

    1999-01-01

    Objective: To develop a PCR-based method for the rapid and effective screening of an arrayed plasmid cDNA library. Methods: The plasmid cDNA library was arrayed and screened by PCR with a particular set of primers. Results: Four positive clones were obtained in about one week. Conclusion: This method can be applied to screening not only normal cDNA clones but also cDNA clones containing small-size fragments. It offers significant advantages over the traditional screening method in terms of sensitivity, specificity and efficiency

  16. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    ... Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed

  17. Estimating Return on Investment in Translational Research: Methods and Protocols

    Science.gov (United States)

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  18. Estimating return on investment in translational research: methods and protocols.

    Science.gov (United States)

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  19. Webometrics: Some Critical Issues of WWW Size Estimation Methods

    Directory of Open Access Journals (Sweden)

    Srinivasan Mohana Arunachalam

    2018-04-01

    Full Text Available The number of webpages on the Internet has increased tremendously over the last two decades; however, only a part of them is indexed by the various search engines. This small portion is the indexable web and can usually be reached from a search engine. Search engines play a big role in making the World Wide Web accessible to the end user, and how much of the World Wide Web is accessible depends on the size of the search engine’s index. Researchers have proposed several ways to estimate the size of the indexable web using search engines, with and without privileged access to the search engine’s database. Our report provides a summary of the methods used over the last two decades to estimate the size of the World Wide Web, and describes how this knowledge can be used in other tasks concerning the World Wide Web.
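One family of estimators in this literature infers total index size from the overlap between two engines' result sets, in the spirit of capture-recapture population estimation. A minimal sketch, with hypothetical counts and the simplifying assumption that the two engines sample pages independently and uniformly:

```python
def lincoln_petersen(n_a, n_b, n_overlap):
    """Capture-recapture estimate of a total population size.

    n_a, n_b: pages found by engines A and B for a sample of queries;
    n_overlap: pages found by both. Assumes independent, uniform coverage.
    """
    if n_overlap == 0:
        raise ValueError("no overlap: estimator is undefined")
    return n_a * n_b / n_overlap

# Hypothetical counts: engine A finds 1000 pages, B finds 800, 200 shared.
print(lincoln_petersen(1000, 800, 200))  # -> 4000.0
```

Real studies of this kind must additionally correct for query bias and for non-independence between engines.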

  20. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    Full Text Available In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split-ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated through measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  1. Estimation of metallic impurities in uranium by carrier distillation method

    International Nuclear Information System (INIS)

    Page, A.G.; Godbole, S.V.; Deshkar, S.B.; Joshi, B.D.

    1976-01-01

    An emission spectrographic method has been standardised for the estimation of twenty-two metallic impurities in uranium using carrier-distillation technique. Silver chloride with a concentration of 5% has been used as the carrier and palladium and gallium are used as internal standards. Precision and accuracy determinations of the synthetic samples indicate 6-15% deviation for most of the elements. Using the method described here, five uranium reference samples received from C.E.A.-France were analysed. The detection limits obtained for Cd, Co and W are lower than those reported in the literature while limits for the remaining elements are comparable to the values reported. The method is suitable for the chemical quality control analysis of uranium used for the Fast Breeder Test Reactor (FBTR) fuel. (author)

  2. Method for estimating boiling temperatures of crude oils

    International Nuclear Information System (INIS)

    Jones, R.K.

    1996-01-01

    Evaporation is often the dominant mechanism for mass loss during the first few days following an oil spill. The initial boiling point of the oil and the rate at which the boiling point changes as the oil evaporates are needed to initialize some computer models used in spill response. The lack of available boiling point data often limits the usefulness of these models in actual emergency situations. A new computational method was developed to estimate the temperature at which a crude oil boils as a function of the fraction evaporated using only standard distillation data, which are commonly available. This method employs established thermodynamic rules and approximations, and was designed to be used with automated spill-response models. Comparisons with measurements show a strong correlation between results obtained with this method and measured values
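At its core, such a method maps the fraction evaporated to a boiling temperature using tabulated distillation data. A minimal sketch of that mapping by linear interpolation (the data below are hypothetical, and the thermodynamic corrections of the actual method are omitted):

```python
import bisect

def boiling_temperature(fraction_evaporated, distillation_curve):
    """Linearly interpolate the boiling temperature (deg C) at a given
    evaporated fraction from standard distillation data.

    distillation_curve: sorted list of (fraction, temperature) pairs,
    e.g. from a standard distillation report. Values here are hypothetical.
    """
    fractions = [f for f, _ in distillation_curve]
    temps = [t for _, t in distillation_curve]
    i = bisect.bisect_left(fractions, fraction_evaporated)
    if i == 0:
        return temps[0]          # below the first cut point
    if i >= len(fractions):
        return temps[-1]         # beyond the last cut point
    f0, f1 = fractions[i - 1], fractions[i]
    t0, t1 = temps[i - 1], temps[i]
    return t0 + (t1 - t0) * (fraction_evaporated - f0) / (f1 - f0)

curve = [(0.0, 40.0), (0.1, 90.0), (0.3, 180.0), (0.5, 280.0)]
print(boiling_temperature(0.2, curve))  # -> 135.0
```

A spill-response model would call such a lookup repeatedly as the evaporated fraction grows over time.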

  3. Considerations for Task Analysis Methods and Rapid E-Learning Development Techniques

    Directory of Open Access Journals (Sweden)

    Dr. Ismail Ipek

    2014-02-01

    Full Text Available The purpose of this paper is to provide basic dimensions for rapid training development in e-learning courses in education and business. It begins by defining task analysis, how to select tasks for analysis, and task analysis methods for instructional design. First, learning and instructional technologies as visions of the future are discussed. Second, the importance of task analysis methods in rapid e-learning is considered, along with learning technologies for asynchronous and synchronous e-learning development. Finally, rapid instructional design concepts and e-learning design strategies are defined and clarified with examples; that is, all the steps for effective task analysis and rapid training development based on learning and instructional design approaches, such as m-learning and other delivery systems, are discussed. As a result, the concept of task analysis, rapid e-learning development strategies and the essentials of online course design are discussed, alongside learner interface design features for learners and designers.

  4. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second

  5. Rapid in vivo screening method for the evaluation of new anti ...

    African Journals Online (AJOL)

    Rapid in vivo screening method for the evaluation of new anti helicobacter ... Six to eight week-old mice pre-treated (7 days) with Amoxicillin/Metronidazole (25 ... These findings were used as a mouse model of Helicobacter pylori infection to ...

  6. Simple rapid methods for freezing hybridomas in 96-well microculture plates.

    Science.gov (United States)

    Wells, D E; Price, P J

    1983-04-15

    Macroscopic hybridoma colonies were frozen and recovered in a good state of viability in 96-well microculture plates using 2 freezing procedures. These methods offer convenient and rapid means of preserving hybridomas and will permit laboratories developing monoclonal antibodies to distribute workloads to more manageable levels without discarding possibly valuable hybridomas.

  7. A critical analysis of methods for rapid and nondestructive determination of wood density in standing trees

    Science.gov (United States)

    Shan Gao; Xiping Wang; Michael C. Wiemann; Brian K. Brashaw; Robert J. Ross; Lihai Wang

    2017-01-01

    Key message Field methods for rapid determination of wood density in trees have evolved from increment borer, torsiometer, Pilodyn, and nail withdrawal into sophisticated electronic tools of resistance drilling measurement. A partial resistance drilling approach coupled with knowledge of internal tree density distribution may...

  8. Estimating Aquifer Transmissivity Using the Recession-Curve-Displacement Method in Tanzania’s Kilombero Valley

    Directory of Open Access Journals (Sweden)

    William Senkondo

    2017-12-01

    Full Text Available Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint) of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T) from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min) and for one of the two sub-basins tested (0.16 m2/min). On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.

  9. Rapid radiological characterization method based on the use of dose coefficients

    International Nuclear Information System (INIS)

    Dulama, C.; Toma, Al.; Dobrin, R.; Valeca, M.

    2010-01-01

    Intervention actions in case of radiological emergencies and exploratory radiological surveys require rapid methods for evaluating the range and extent of contamination. When a simple and homogeneous radionuclide composition characterizes the radioactive contamination, surrogate measurements can be used to reduce the costs of laboratory analyses and to speed up decision support. A dose-rate-measurement-based methodology can be used in conjunction with adequate dose coefficients to assess radionuclide inventories and to calculate dose projections for various intervention scenarios. The paper presents the results obtained for dose coefficients in some particular exposure geometries and the methodology used for deriving dose-rate guidelines from the activity-concentration upper levels specified as contamination limits. All calculations were performed using the commercial software MicroShield from Grove Software Inc. A test case was selected to meet the conditions of EPA Federal Guidance Report No. 12 (FGR 12) concerning the evaluation of dose coefficients for external exposure from contaminated soil, and the obtained results were compared to the values given in that document. The geometries considered as test cases are: contaminated ground surface (infinite, extended, homogeneous surface contamination) and soil contaminated to a depth of 15 cm. As shown by the results, the values agree within a 50% relative difference for most of the cases. The greatest discrepancies were observed for the depth-contamination simulation and for radionuclides with complicated gamma emission; this is due to the different approaches of MicroShield and FGR 12. A case study is presented for validation of the methodology, in which both dose-rate measurements and laboratory analyses were performed on an extended quasi-homogeneous NORM contamination. The dose-rate estimations obtained by applying the dose coefficients to the radionuclide concentrations
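Deriving a dose-rate guideline from activity-concentration limits amounts to a weighted sum of per-nuclide dose coefficients for the chosen exposure geometry. A minimal sketch; the nuclides, limits, and coefficient values below are placeholders, not FGR 12 or MicroShield data:

```python
def dose_rate_guideline(concentration_limits, dose_coefficients):
    """Derive a dose-rate screening level (uSv/h) from activity-concentration
    upper limits (Bq/kg) and per-nuclide dose coefficients
    ((uSv/h) per (Bq/kg)) for a fixed exposure geometry.

    All numbers used below are hypothetical placeholders.
    """
    return sum(concentration_limits[n] * dose_coefficients[n]
               for n in concentration_limits)

limits = {"Cs-137": 1000.0, "Co-60": 200.0}    # Bq/kg (hypothetical)
coeffs = {"Cs-137": 1.0e-4, "Co-60": 4.0e-4}   # (uSv/h)/(Bq/kg) (hypothetical)
print(round(dose_rate_guideline(limits, coeffs), 4))  # -> 0.18
```

A measured dose rate below such a screening level would then indicate compliance with all of the underlying concentration limits for that nuclide mix.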

  10. Advances in Time Estimation Methods for Molecular Data.

    Science.gov (United States)

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first-generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second-generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third-generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth-generation approaches (since 2012) allow rates to vary from branch to branch, but need neither prior selection of a statistical model to describe the rate variation nor the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth-generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second-generation studies are similar to those of third- and fourth-generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third- and fourth-generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data

  11. A Rapid Method for Measuring Strontium-90 Activity in Crops in China

    Science.gov (United States)

    Pan, Lingjing; Yu, Guobing; Wen, Deyun; Chen, Zhi; Sheng, Liusi; Liu, Chung-King; Xu, X. George

    2017-09-01

    A rapid method for measuring Sr-90 activity in crop ashes is presented. Liquid scintillation counting, combined with ion-exchange columns of 4,4'(5')-di-t-butylcyclohexano-18-crown-6, is used to determine the activity of Sr-90 in crops. The chemical yields of the procedure are quantified using gravimetric analysis. The conventional method, which uses ion-exchange resin with HDEHP, cannot completely remove all the bismuth when comparatively large amounts of lead and bismuth are present in the samples; this is overcome by the rapid method. The chemical yield of this method is about 60% and the MDA for Sr-90 is found to be 2.32 Bq/kg. The whole procedure, together with spectrum analysis to determine the activity, takes only about one day, a large improvement over the conventional method. A modified conventional method is also described here to verify the results of the rapid one. These two methods can meet the different needs of daily monitoring and emergency situations.
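A quoted MDA such as 2.32 Bq/kg follows from the counting conditions; detection limits of this kind are commonly computed with the Currie formula. A hedged sketch with hypothetical counting parameters, not the paper's actual conditions:

```python
import math

def currie_mda(background_counts, efficiency, chemical_yield,
               count_time_s, sample_mass_kg):
    """Minimum detectable activity (Bq/kg) via the Currie expression
    MDA = (2.71 + 4.65*sqrt(B)) / (eps * Y * t * m).

    All parameter values used below are illustrative only.
    """
    numerator = 2.71 + 4.65 * math.sqrt(background_counts)
    return numerator / (efficiency * chemical_yield
                        * count_time_s * sample_mass_kg)

# Hypothetical: 100 background counts, 90% counting efficiency,
# 60% chemical yield, 1 h count of a 10 g ash aliquot.
print(round(currie_mda(100, 0.90, 0.60, 3600, 0.010), 2))  # -> 2.53
```

Longer count times, larger aliquots, or better yields all drive the MDA down, which is the trade-off a rapid method must balance against turnaround time.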

  12. Rapid HPLC-MS method for the simultaneous determination of tea catechins and folates.

    Science.gov (United States)

    Araya-Farias, Monica; Gaudreau, Alain; Rozoy, Elodie; Bazinet, Laurent

    2014-05-14

    An effective and rapid HPLC-MS method for the simultaneous separation of the eight most abundant tea catechins, gallic acid, and caffeine was developed. These compounds were rapidly separated within 9 min by a linear gradient elution using a Zorbax SB-C18 packed with sub 2 μm particles. This methodology did not require preparative and semipreparative HPLC steps. In fact, diluted tea samples can be easily analyzed using HPLC-MS as described in this study. The use of mass spectrometry detection for quantification of catechins ensured a higher specificity of the method. The percent relative standard deviation was generally lower than 4 and 7% for most of the compounds tested in tea drinks and tea extracts, respectively. Furthermore, the method provided excellent resolution for folate determination alone or in combination with catechins. To date, no HPLC method able to discriminate catechins and folates in a quick analysis has been reported in the literature.

  13. A rapid method for soil cement design : Louisiana slope value method.

    Science.gov (United States)

    1964-03-01

    The current procedure used by the Louisiana Department of Highways for laboratory design of cement stabilized soil base and subbase courses is taken from standard AASHO test methods, patterned after Portland Cement Association criteria. These methods...

  14. Evaluation of rapid methods for in-situ characterization of organic contaminant load and biodegradation rates in winery wastewater.

    Science.gov (United States)

    Carvallo, M J; Vargas, I; Vega, A; Pizarro, G; Pastén, P

    2007-01-01

    Rapid methods for the in-situ evaluation of the organic load have recently been developed and successfully implemented in municipal wastewater treatment systems. Their direct application to winery wastewater treatment is questionable due to substantial differences between municipal and winery wastewater. We critically evaluate the use of UV-VIS spectrometry, buffer capacity testing (BCT), and respirometry as rapid methods to determine organic load and biodegradation rates of winery wastewater. We tested three types of samples: actual and treated winery wastewater, synthetic winery wastewater, and samples from a biological batch reactor. Not surprisingly, respirometry gave a good estimation of biodegradation rates for substrate of different complexities, whereas UV-VIS and BCT did not provide a quantitative measure of the easily degradable sugars and ethanol, typically the main components of the COD in the influent. However, our results strongly suggest that UV-VIS and BCT can be used to identify and estimate the concentration of complex substrates in the influent and soluble microbial products (SMP) in biological reactors and their effluent. Furthermore, the integration of UV-VIS spectrometry, BCT, and mathematical modeling was able to differentiate between the two components of SMPs: substrate utilization associated products (UAP) and biomass associated products (BAP). Since the effluent COD in biologically treated wastewaters is composed primarily by SMPs, the quantitative information given by these techniques may be used for plant control and optimization.

  15. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    Energy Technology Data Exchange (ETDEWEB)

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    of complex issues that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage-function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of the study. The report describes the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of the scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each focusing on a different subject area.

  16. Rapid estimation of the organic sulphur content of kerogens, coals and asphaltenes by pyrolysis-gas chromatography

    NARCIS (Netherlands)

    Sinninghe Damsté, J.S.; Eglinton, T.I.; Kohnen, M.E.L.; Leeuw, J.W. de

    1990-01-01

    A pyrolysis-gas chromatographic (py-g.c.) method for estimation of the Sorg/C ratio in kerogens and other forms of sedimentary macromolecular organic matter is described. The method is based upon flash pyrolysis at 610 °C for 10 s and areal integration of the FID peaks attributed to

  17. A method for fast energy estimation and visualization of protein-ligand interaction

    Science.gov (United States)

    Tomioka, Nobuo; Itai, Akiko; Iitaka, Yoichi

    1987-10-01

    A new computational and graphical method for facilitating ligand-protein docking studies is developed on a three-dimensional computer graphics display. Various physical and chemical properties inside the ligand binding pocket of a receptor protein, whose structure is elucidated by X-ray crystal analysis, are calculated on three-dimensional grid points and are stored in advance. By utilizing those tabulated data, it is possible to estimate the non-bonded and electrostatic interaction energy and the number of possible hydrogen bonds between protein and ligand molecules in real time during an interactive docking operation. The method also provides a comprehensive visualization of the local environment inside the binding pocket. With this method, it becomes easier to find a roughly stable geometry of ligand molecules, and one can therefore make a rapid survey of the binding capability of many drug candidates. The method will be useful for drug design as well as for the examination of protein-ligand interactions.
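The table-lookup idea described above, precomputing a property on a 3-D grid over the binding pocket and then interpolating at each ligand-atom position during interactive docking, can be sketched with trilinear interpolation. This is an illustrative sketch of the lookup step only, not the authors' actual energy terms:

```python
import numpy as np

def grid_energy(grid, origin, spacing, point):
    """Trilinearly interpolate a precomputed potential grid at an atom position.

    grid: 3-D array of a property tabulated on grid points;
    origin, spacing: grid geometry. A sketch of the table-lookup idea.
    """
    u = (np.asarray(point, dtype=float) - np.asarray(origin, dtype=float)) / spacing
    i0 = np.floor(u).astype(int)   # lower corner of the enclosing cell
    f = u - i0                     # fractional position inside the cell
    e = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight of each of the 8 cell corners
                w = (f[0] if dx else 1 - f[0]) * \
                    (f[1] if dy else 1 - f[1]) * \
                    (f[2] if dz else 1 - f[2])
                e += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return e

# Linear test field: interpolation reproduces x + y + z exactly.
g = np.fromfunction(lambda i, j, k: i + j + k, (4, 4, 4))
print(grid_energy(g, origin=(0, 0, 0), spacing=1.0, point=(1.5, 0.5, 2.0)))  # -> 4.0
```

Because the grids are computed once per protein, each docking pose costs only a handful of lookups per atom, which is what makes the real-time estimate feasible.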

  18. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Full Text Available Introduction: Implementing watershed practices to decrease the effects of soil erosion requires estimating the sediment output of the watershed. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there is some uncertainty in estimating sediment using a sediment rating curve. In this research, the bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations (Motorkhane, Miyane Tonel Shomare 7, Stor, and Glinak) constructed on the Ayghdamosh, Ghrangho, GhezelOzan, and Shahrod rivers, respectively. Data were randomly divided into a training set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, different parameter sets were sampled randomly from a prior probability distribution. For each station, using the sampled parameter sets and the selected rating-curve equation, suspended sediment concentration values were estimated many times (100000 to 400000 times). With respect to a likelihood function and a subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, the observed suspended sediment and discharge vectors were resampled with replacement B (set to
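The bootstrap branch described above, refitting the rating curve to resampled (discharge, concentration) pairs and taking percentiles of the resulting predictions, can be sketched generically as follows. The power-law form C = a*Q^b and all names are assumptions in line with standard rating-curve practice, not the paper's exact procedure:

```python
import numpy as np

def bootstrap_rating_curve_ci(discharge, concentration, q_new,
                              n_boot=2000, seed=0):
    """95% bootstrap confidence interval for the suspended sediment
    concentration predicted at discharge q_new by a log-log rating
    curve C = a * Q**b refit to each resample."""
    rng = np.random.default_rng(seed)
    x = np.log(np.asarray(discharge, dtype=float))
    y = np.log(np.asarray(concentration, dtype=float))
    n = len(x)
    preds = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)               # resample pairs with replacement
        b, log_a = np.polyfit(x[idx], y[idx], 1)  # slope b, intercept log(a)
        preds[i] = np.exp(log_a + b * np.log(q_new))
    return np.percentile(preds, [2.5, 97.5])
```

With noisy data, the interval width reflects sampling uncertainty in the fitted curve; the GLUE branch instead samples parameter sets directly and filters them with a likelihood threshold.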

  19. Rapid-Viability PCR Method for Detection of Live, Virulent Bacillus anthracis in Environmental Samples ▿

    OpenAIRE

    Létant, Sonia E.; Murphy, Gloria A.; Alfaro, Teneile M.; Avila, Julie R.; Kane, Staci R.; Raber, Ellen; Bunt, Thomas M.; Shah, Sanjiv R.

    2011-01-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real...

  20. Apparatus and method for rapid separation and detection of hydrocarbon fractions in a fluid stream

    Science.gov (United States)

    Sluder, Charles S.; Storey, John M.; Lewis, Sr., Samuel A.

    2013-01-22

    An apparatus and method for rapid fractionation of hydrocarbon phases in a sample fluid stream are disclosed. Examples of the disclosed apparatus and method include an assembly of elements in fluid communication with one another, including one or more valves and at least one sorbent chamber for removing certain classes of hydrocarbons and detecting the remaining fractions using a detector. The respective ratios of hydrocarbons are determined by comparison with a non-separated fluid stream.

  1. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    International Nuclear Information System (INIS)

    Sabourin, D; Snakenborg, D; Dufva, M

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation. The interconnection block method is scalable, flexible and supports high interconnection density. The average pressure limit of the interconnection block was near 5.5 bar and all individual results were well above the 2 bar threshold considered applicable to most microfluidic applications.

  2. Rapid qualitative research methods during complex health emergencies: A systematic review of the literature.

    Science.gov (United States)

    Johnson, Ginger A; Vindrola-Padros, Cecilia

    2017-09-01

    The 2013-2016 Ebola outbreak in West Africa highlighted both the successes and limitations of social science contributions to emergency response operations. An important limitation lay in the rapid and effective communication of study findings. A systematic review was carried out to explore how rapid qualitative methods have been used during global health emergencies to understand which methods are commonly used, how they are applied, and the difficulties faced by social science researchers in the field. We also assess their value and benefit for health emergencies. The review findings are used to propose recommendations for qualitative research in this context. Peer-reviewed articles and grey literature were identified through six online databases. An initial search was carried out in July 2016 and updated in February 2017. The PRISMA checklist was used to guide the reporting of methods and findings. The articles were assessed for quality using the MMAT and AACODS checklists. From an initial search yielding 1444 articles, 22 articles met the criteria for inclusion. Thirteen of the articles were qualitative studies and nine used a mixed-methods design. The purposes of the rapid studies included: identification of the causes of the outbreak, and assessment of infrastructure, control strategies, health needs and health facility use. The studies varied in duration (from 4 days to 1 month). The main limitations identified by the authors were: the low quality of the collected data, small sample sizes, and little time for cross-checking facts with other data sources to reduce bias. Rapid qualitative methods were seen as beneficial in highlighting context-specific issues that need to be addressed locally, population-level behaviors influencing health service use, and organizational challenges in response planning and implementation. Recommendations for carrying out rapid qualitative research in this context included the early designation of community leaders as a point of

  3. Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

    OpenAIRE

    Hadi Farhadian; Homayoon Katibeh

    2015-01-01

    In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections. The results of the above-mentioned methods were compared with one another. The study shows reasonable agreement among the results of all the methods except for two sections of the tunnel. In these t...

  4. Twitter as Information Source for Rapid Damage Estimation after Major Earthquakes

    Science.gov (United States)

    Eggert, Silke; Fohringer, Joachim

    2014-05-01

    Natural disasters like earthquakes require a fast response from local authorities. Well-trained rescue teams have to be available, equipment and technology have to be set up and ready, and information has to be directed to the right positions so that headquarters can manage the operation precisely. The main goal is to reach the most affected areas in a minimum of time. But even with the best preparation for these cases, there will always be uncertainty about what really happened in the affected area. Modern geophysical sensor networks provide high-quality data. These measurements, however, only map disjoint values at their respective locations for a limited set of parameters. Using observations of witnesses is one approach to enhancing the values measured by sensors ("humans as sensors"). These observations are increasingly disseminated via social media platforms. These "social sensors" offer several advantages over common sensors, e.g. high mobility, high versatility of captured parameters, and rapid distribution of information. Moreover, the amount of data offered by social media platforms is quite extensive. We analyze messages distributed via Twitter after major earthquakes to get rapid information on what eyewitnesses report from the epicentral area. We use this information to (a) quickly learn about damage and losses to support fast disaster response and (b) densify geophysical networks in areas with sparse information to gain a more detailed insight into felt intensities. We present a case study from the Mw 7.1 Philippines (Bohol) earthquake that happened on Oct. 15, 2013. We extract Twitter messages, so-called tweets, containing one or more specified keywords from the semantic field of "earthquake" and use them for further analysis. For the time frame of Oct. 15 to Oct. 18 we obtain a database of 50,000 tweets in total, of which 2,900 are geo-localized and 470 have a photo attached. Analyses for both national level and locally for
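The keyword and geo-location filtering step described above can be sketched as follows; the keyword list and the record fields are illustrative, not the study's actual query terms or data model:

```python
import re

# Illustrative keyword list from the semantic field of "earthquake";
# the study's actual list is larger and multilingual.
EARTHQUAKE_TERMS = re.compile(r"\b(earthquake|quake|lindol|temblor)\b", re.IGNORECASE)

def filter_quake_tweets(tweets):
    """Keep tweets matching earthquake-related keywords and split off
    those that carry usable coordinates, mimicking the keyword +
    geo-location filtering described in the abstract."""
    matched = [t for t in tweets if EARTHQUAKE_TERMS.search(t["text"])]
    geolocated = [t for t in matched if t.get("coords") is not None]
    return matched, geolocated

tweets = [
    {"text": "Strong earthquake felt in Bohol!", "coords": (9.8, 124.2)},
    {"text": "Traffic is terrible today", "coords": None},
    {"text": "That quake knocked things off shelves", "coords": None},
]
matched, geo = filter_quake_tweets(tweets)
print(len(matched), len(geo))  # -> 2 1
```

The geo-localized subset is the one that can densify sensor networks spatially; non-localized matches still contribute to the overall damage picture.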

  5. Methods on estimation of the evaporation from water surface

    International Nuclear Information System (INIS)

    Trajanovska, Lidija; Tanushevska, Dushanka; Aleksovska, Nina

    2001-01-01

    The Earth's entire water supply depends closely on the hydrological cycle, which connects water circulation along the Earth-atmosphere route through evaporation, precipitation and runoff. Evaporation occurs wherever the atmosphere is unsaturated with water vapor (i.e., wherever humidity is in short supply), and it depends on the climatic conditions of the region. The purpose of this paper is to determine a method for estimating the evaporation from natural water surfaces in our areas, that is, to determine it as exactly as possible. (Original)

  6. Rapid methods for the extraction and archiving of molecular grade fungal genomic DNA.

    Science.gov (United States)

    Borman, Andrew M; Palmer, Michael; Johnson, Elizabeth M

    2013-01-01

    The rapid and inexpensive extraction of fungal genomic DNA that is of sufficient quality for molecular approaches is central to the molecular identification, epidemiological analysis, taxonomy, and strain typing of pathogenic fungi. Although many commercially available and in-house extraction procedures do eliminate the majority of contaminants that commonly inhibit molecular approaches, the inherent difficulties in breaking fungal cell walls lead to protocols that are labor intensive and that routinely take several hours to complete. Here we describe several methods that we have developed in our laboratory that allow the extremely rapid and inexpensive preparation of fungal genomic DNA.

  7. Rapid method to determine actinides and 89/90Sr in limestone and marble samples

    International Nuclear Information System (INIS)

    Maxwell, S.L.; Culligan, Brian; Hutchison, J.B.; Utsey, R.C.; Sudowe, Ralf; McAlister, D.R.

    2016-01-01

    A new method for the determination of actinides and radiostrontium in limestone and marble samples has been developed that utilizes a rapid sodium hydroxide fusion to digest the sample. Following rapid pre-concentration steps to remove sample matrix interferences, the actinides and 89/90Sr are separated using extraction chromatographic resins and measured radiometrically. The advantages of sodium hydroxide fusion versus other fusion techniques will be discussed. This approach has a sample preparation time for limestone and marble samples of <4 h. (author)

  8. Application of a rapid screening method to detect irradiated meat in Brazil

    International Nuclear Information System (INIS)

    Villavicencio, A.L.C.H.; Mancini-Filho, J.; Delincee, H.

    2000-01-01

    Based on the enormous potential for food irradiation in Brazil, and to ensure free consumer choice, there is a need to find a convenient and rapid method for the detection of irradiated food. Since treatment with ionising radiation causes DNA fragmentation, the analysis of DNA damage might be promising. In this paper, the DNA Comet Assay was used to identify exotic meat (boar, jacare and capybara) irradiated with 60Co gamma rays. The applied radiation doses were 0, 1.5, 3.0 and 4.5 kGy. Analysis of the DNA migration enabled a rapid identification of the radiation treatment.

  9. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    Science.gov (United States)

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2logTop for earthquakes with 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored at short epicentral distances. Application of the method to an extremely large earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
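The scaling reported above, Mw proportional to 2logTop, can be illustrated with a toy calculation. The intercept C below is a hypothetical calibration constant, not the paper's fitted value:

```python
import math

# Toy illustration of Mw ~ 2*log10(Top): the calibration constant C is
# hypothetical; in practice it is fitted empirically to observed data.
C = 4.0  # hypothetical intercept

def magnitude_from_top(top_seconds, c=C):
    """Estimate magnitude from the onset-to-peak time Top (seconds)."""
    return 2.0 * math.log10(top_seconds) + c

# Doubling Top raises the estimate by 2*log10(2), about 0.6 magnitude units.
m1 = magnitude_from_top(10.0)
m2 = magnitude_from_top(20.0)
```

The useful property for early warning is that the increment per doubling of Top is fixed by the slope, independent of the intercept.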

  10. Rapid estimation of glucosinolate thermal degradation rate constants in leaves of Chinese kale and broccoli (Brassica oleracea) in two seasons.

    Science.gov (United States)

    Hennig, Kristin; Verkerk, Ruud; Bonnema, Guusje; Dekker, Matthijs

    2012-08-15

    Kinetic modeling was used as a tool to quantitatively estimate glucosinolate thermal degradation rate constants. Literature shows that thermal degradation rates differ in different vegetables. Well-characterized plant material, leaves of broccoli and Chinese kale plants grown in two seasons, was used in the study. It was shown that a first-order reaction is appropriate to model glucosinolate degradation independent from the season. No difference in degradation rate constants of structurally identical glucosinolates was found between broccoli and Chinese kale leaves when grown in the same season. However, glucosinolate degradation rate constants were highly affected by the season (20-80% increase in spring compared to autumn). These results suggest that differences in glucosinolate degradation rate constants can be due to variation in environmental as well as genetic factors. Furthermore, a methodology to estimate rate constants rapidly is provided to enable the analysis of high sample numbers for future studies.
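The first-order kinetics used above imply C(t) = C0·exp(-kt), so k can be recovered as the slope of ln(C/C0) versus t. A minimal sketch with synthetic data (not the study's measurements):

```python
import math

# Estimate a first-order degradation rate constant k from concentration-time
# data, assuming C(t) = C0 * exp(-k*t).  Data points below are synthetic.
def first_order_k(times, concentrations):
    """Least-squares slope of ln(C/C0) vs t, forced through the origin."""
    c0 = concentrations[0]
    num = sum(t * math.log(c / c0) for t, c in zip(times, concentrations))
    den = sum(t * t for t in times)
    return -num / den

times = [0.0, 10.0, 20.0, 30.0]  # minutes
conc = [1.0, math.exp(-0.05 * 10), math.exp(-0.05 * 20), math.exp(-0.05 * 30)]
k = first_order_k(times, conc)   # recovers the k = 0.05 per minute used above
```

With noisy real data one would fit slope and intercept jointly and inspect residuals before accepting the first-order model.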

  11. Application of a rapid screening method to detect irradiated meat in Brazil

    International Nuclear Information System (INIS)

    Villavicencio, A.L.C.H.; Delincee, H.

    1998-01-01

    Complete text of publication follows. Based on the enormous potential for food irradiation in Brazil, and to ensure free consumer choice, there is a need to find a convenient and rapid method for the detection of irradiated food. Since treatment with ionizing radiation causes DNA fragmentation, the analysis of DNA damage might be promising. In fact, DNA fragmentation measured in single cells by agarose gel electrophoresis - the DNA Comet Assay - has been shown to offer great potential as a rapid tool to detect whether a wide variety of foodstuffs has been radiation processed. However, more work is needed to exploit the full potential of this promising technique. In this paper, the DNA Comet Assay was used to identify exotic meat (boar, jacare and capybara) irradiated with 60Co gamma rays. The applied radiation doses were 0, 1.5, 3.0 and 4.5 kGy. Analysis of the DNA migration enabled a rapid identification of the radiation treatment.

  12. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations

  13. Estimation of creatinine in Urine sample by Jaffe's method

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.

    2012-01-01

    In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one comprising all the voids collected during a 24-h period; however, as this is technically difficult, overnight urine samples collected by the workers are analyzed. These overnight urine samples are collected over 10-16 h; in the absence of any specific information, however, a 12-h duration is assumed and the observed results are corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz. measurement of the specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is the final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized and its application for the estimation of creatinine in urine samples collected from occupational workers. The chemical procedure for the estimation of creatinine in bioassay samples was standardized and applied successfully to bioassay samples collected from the workers. The creatinine excretion rate observed for these workers is lower than that reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and to study the variability in the creatinine coefficient for the same individual based on the analysis of samples collected over different durations.
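The creatinine normalization idea above can be sketched as a simple scaling: the measured activity is multiplied by the ratio of an expected daily creatinine excretion to the creatinine found in the sample. The reference value and measurements below are illustrative assumptions, not values from the study:

```python
# Hedged sketch of creatinine-based normalization of an overnight urine sample.
# The reference daily creatinine excretion (~1.7 g/day) and the measured
# values are illustrative assumptions, not the study's data.
REFERENCE_CREATININE_G_PER_DAY = 1.7

def daily_excretion(activity_in_sample, creatinine_in_sample_g,
                    reference=REFERENCE_CREATININE_G_PER_DAY):
    """Scale the measured activity to a 24-h equivalent using creatinine."""
    return activity_in_sample * reference / creatinine_in_sample_g

# A sample containing 0.85 g creatinine represents about half a day's output,
# so the measured activity is doubled.
est = daily_excretion(10.0, 0.85)
```

In practice the reference excretion varies with sex, age and muscle mass, which is exactly the variability in the creatinine coefficient the abstract says is under study.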

  14. Estimation and Validation of RapidEye-Based Time-Series of Leaf Area Index for Winter Wheat in the Rur Catchment (Germany)

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2015-03-01

    Full Text Available Leaf Area Index (LAI) is an important variable for numerous processes in various disciplines of bio- and geosciences. In situ measurements are the most accurate source of LAI among the LAI measuring methods, but they have the limitation of being labor intensive and site specific. For spatially explicit applications (from regional to continental scales), satellite remote sensing is a promising source for obtaining LAI at different spatial resolutions. However, satellite-derived LAI measurements using empirical models require calibration and validation with in situ measurements. In this study, we attempted to validate a direct LAI retrieval method from remotely sensed images (RapidEye) with in situ LAI (LAIdestr). Remote sensing LAI (LAIrapideye) values were derived using different vegetation indices, namely SAVI (Soil Adjusted Vegetation Index) and NDVI (Normalized Difference Vegetation Index). Additionally, the applicability of the newly available red-edge band (RE) was analyzed through the Normalized Difference Red-Edge index (NDRE) and the Soil Adjusted Red-Edge index (SARE). The LAIrapideye obtained from the vegetation indices with the red-edge band showed better correlation with LAIdestr (r = 0.88 and Root Mean Square Deviation, RMSD = 1.01 & 0.92). This study also investigated the need to apply radiometric/atmospheric correction methods to the time-series of RapidEye Level 3A data prior to LAI estimation. Analysis of the RapidEye Level 3A data set showed that application of the radiometric/atmospheric correction did not improve the correlation of the estimated LAI with in situ LAI.
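The vegetation indices named above are simple band ratios. A minimal sketch with illustrative reflectance values (RapidEye-specific band handling is not reproduced here):

```python
# The vegetation indices used in the LAI retrieval, computed from band
# reflectances in [0, 1].  Band values below are illustrative only.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red-Edge index."""
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, l=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor l."""
    return (1 + l) * (nir - red) / (nir + red + l)

v = ndvi(0.6, 0.2)
r = ndre(0.6, 0.3)
s = savi(0.6, 0.2)
```

An empirical LAI model then regresses in situ LAI against one of these indices; the abstract's point is that the red-edge variants correlated best.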

  15. Effectiveness of Rapid Cooling as a Method of Euthanasia for Young Zebrafish (Danio rerio).

    Science.gov (United States)

    Wallace, Chelsea K; Bright, Lauren A; Marx, James O; Andersen, Robert P; Mullins, Mary C; Carty, Anthony J

    2018-01-01

    Despite increased use of zebrafish (Danio rerio) in biomedical research, consistent information regarding appropriate euthanasia methods, particularly for embryos, is sparse. Current literature indicates that rapid cooling is an effective method of euthanasia for adult zebrafish, yet consistent guidelines regarding zebrafish younger than 6 mo are unavailable. This study was performed to distinguish the age at which rapid cooling is an effective method of euthanasia for zebrafish and the exposure times necessary to reliably euthanize zebrafish using this method. Zebrafish at 3, 4, 7, 14, 16, 19, 21, 28, 60, and 90 d postfertilization (dpf) were placed into an ice water bath for 5, 10, 30, 45, or 60 min (n = 12 to 40 per group). In addition, zebrafish were placed in ice water for 12 h (age ≤14 dpf) or 30 s (age ≥14 dpf). After rapid cooling, fish were transferred to a recovery tank and the number of fish alive at 1, 4, and 12-24 h after removal from ice water was documented. Euthanasia was defined as a failure when evidence of recovery was observed at any point after removal from ice water. Results showed that younger fish required prolonged exposure to rapid cooling for effective euthanasia, with the required exposure time decreasing as fish age. Although younger fish required long exposure times, animals became immobilized immediately upon exposure to the cold water, and behavioral indicators of pain or distress rarely occurred. We conclude that zebrafish 14 dpf and younger require as long as 12 h, those 16 to 28 dpf of age require 5 min, and those older than 28 dpf require 30 s minimal exposure to rapid cooling for reliable euthanasia.

  16. Study on color difference estimation method of medicine biochemical analysis

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis in medicine is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for different detection items or different degrees of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, so that further analysis and diagnosis of the urine are obtained. Color is a three-dimensional physical variable related to perception, while reflectance is a one-dimensional variable; therefore, the color-difference estimation method in urine testing can achieve better precision and convenience than the conventional test method based on one-dimensional reflectance, and it can produce an accurate diagnosis. A digital camera can easily take an image of the urine test paper and is used to carry out the urine biochemical analysis conveniently. In the experiment, the color image of the urine test paper was taken by a popular color digital camera and saved on a computer on which a simple color space conversion (RGB -> XYZ -> L*a*b*) and the calculation software were installed. The test sample is graded according to intelligent detection of quantitative color. The images taken each time were saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses that are related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations and families, so its application prospects are extensive.
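The RGB -> XYZ -> L*a*b* chain above, followed by a color difference, can be sketched as below. The abstract does not specify the camera's color space, so sRGB primaries and the D65 white point are assumed here, and the CIE76 Delta E is used as the color-difference measure:

```python
import math

# Sketch of the RGB -> XYZ -> L*a*b* conversion plus a CIE76 color
# difference (Delta E).  sRGB primaries and the D65 white point are
# assumptions; the paper does not specify the camera's color space.
def srgb_to_lab(r, g, b):
    """Convert sRGB values in [0, 1] to CIE L*a*b* (D65 white point)."""
    def lin(c):  # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.dist(lab1, lab2)

white = srgb_to_lab(1.0, 1.0, 1.0)  # close to (100, 0, 0)
```

Grading a test strip then reduces to computing delta_e between the photographed patch and each standard threshold color and taking the nearest one.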

  17. Estimation of citicoline sodium in tablets by difference spectrophotometric method

    Directory of Open Access Journals (Sweden)

    Sagar Suman Panda

    2013-01-01

    Full Text Available Aim: The present work deals with the development and validation of a novel, precise, and accurate spectrophotometric method for the estimation of citicoline sodium (CTS) in tablets. This spectrophotometric method is based on the principle that CTS shows two different forms, differing in absorption spectra, in basic and acidic medium. Materials and Methods: The present work was carried out on a Shimadzu 1800 Double Beam UV-visible spectrophotometer. Difference spectra were generated using 10 mm quartz cells over the range of 200-400 nm. The solvents used were 0.1 M NaOH and 0.1 M HCl. Results: The maximum and minimum in the difference spectra of CTS were found to be at 239 nm and 283 nm, respectively. The amplitude was calculated from the maximum and minimum of the spectrum. The drug follows linearity in the range of 1-50 μg/ml (R2 = 0.999). The average % recovery from the tablet formulation was found to be 98.47%. The method was validated as per the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use ICH Q2(R1) Validation of Analytical Procedures: Text and Methodology guidelines. Conclusion: This method is simple and inexpensive. Hence it can be applied for the determination of the drug in pharmaceutical dosage forms.
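Quantitation in a difference-spectrophotometric method reduces to measuring the peak-to-trough amplitude and inverting a linear calibration. A minimal sketch; the slope, intercept and absorbance readings below are hypothetical, not the paper's calibration:

```python
# Sketch of quantitation from a difference spectrum: the amplitude between
# the maximum (239 nm) and minimum (283 nm) is mapped to concentration via
# a linear calibration.  Slope, intercept and readings are hypothetical.
def amplitude(abs_at_max, abs_at_min):
    """Peak-to-trough amplitude of the difference spectrum."""
    return abs_at_max - abs_at_min

def concentration(amp, slope=0.02, intercept=0.0):
    """Invert a hypothetical linear calibration amp = slope*conc + intercept."""
    return (amp - intercept) / slope

c = concentration(amplitude(0.75, 0.15))  # amplitude 0.6 maps to 30 (ug/ml)
```

The reported R2 = 0.999 over 1-50 ug/ml is what justifies this simple linear inversion within that range.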

  18. Introduction to the methods of estimating nuclear power generating costs

    Energy Technology Data Exchange (ETDEWEB)

    1961-11-01

    The present report prepared by the Agency with the guidance and assistance of a panel of experts from Member States, the names of whom will be found at the end of this report, represents the first step in the methods of cost evaluation. The main objectives of the report are: (1) The preparation of a full list of the cost items likely to be encountered so that the preliminary estimates for a given nuclear power system can be relied upon in deciding on its economic merits. (2) A survey of the methods currently used for the estimation of the generating costs of the power produced by a nuclear station. The survey is intended for a wide audience ranging from engineers to public officials with an interest in the prospects of nuclear power. An attempt has therefore been made to refrain from detailed technical discussions in order to make the presentation easily understandable to readers with only a very general knowledge of the principles of nuclear engineering. 3 figs, tabs.

  19. A Qualitative Method to Estimate HSI Display Complexity

    International Nuclear Information System (INIS)

    Hugo, Jacques; Gertman, David

    2013-01-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation

  20. A Qualitative Method to Estimate HSI Display Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)

    2013-04-15

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  1. Three methods for estimating a range of vehicular interactions

    Science.gov (United States)

    Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana

    2018-02-01

    We present three different approaches to estimating the number of preceding cars that influence the decision-making procedure of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second one evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third one operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All of the methods introduced reveal that the minimum number of actively followed vehicles is two, which supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimations used is surprisingly credible. In all cases we have found that the interaction range (the number of actively followed vehicles) drops with traffic density. Whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.

  2. Introduction to the methods of estimating nuclear power generating costs

    International Nuclear Information System (INIS)

    1961-01-01

    The present report prepared by the Agency with the guidance and assistance of a panel of experts from Member States, the names of whom will be found at the end of this report, represents the first step in the methods of cost evaluation. The main objectives of the report are: (1) The preparation of a full list of the cost items likely to be encountered so that the preliminary estimates for a given nuclear power system can be relied upon in deciding on its economic merits. (2) A survey of the methods currently used for the estimation of the generating costs of the power produced by a nuclear station. The survey is intended for a wide audience ranging from engineers to public officials with an interest in the prospects of nuclear power. An attempt has therefore been made to refrain from detailed technical discussions in order to make the presentation easily understandable to readers with only a very general knowledge of the principles of nuclear engineering. 3 figs, tabs

  3. Method of estimating investment decisions effectiveness in power engineering

    International Nuclear Information System (INIS)

    Kamrat, W.

    1996-01-01

    A new concept for determining efficient power plant investment decision-making is proposed. The results of research on capital expenditures for the building and modernization of power plants are presented. The model introduced is based on the well-known Annual Cost Model, which is modified by adding annual risk costs. The formula for annual costs is thus: K = Kf + Kv + Kr, where Kf are the annual fixed costs, Kv the annual variable costs, and Kr the annual risk costs. The annual risk costs can be calculated by the expression Kr = ei x Kc, where ei is the investment risk factor and Kc the levelized capital investment. The risk factor was created on the basis of some elements of the taxonometric method with a high level of estimation probability. The essential problem is the selection of investment risk variables, the most important of which are economic, financial, technical, social, political and legal. These variables create a multidimensional space. A so-called 'ideal' model of the power plant is created taking into account capacity, type, fuel used, etc. The values of the multidimensional risk factor ei lie within limits and make it possible to rank the planned plants according to the estimated level of risk. This method can be used not only for risk evaluation in power engineering but also for investment efficiency studies in different industrial branches.
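The modified annual-cost model above is a one-line computation once the risk factor is known. A minimal sketch with illustrative figures (all numbers are hypothetical):

```python
# Sketch of the modified annual-cost model: K = Kf + Kv + Kr,
# with Kr = ei * Kc.  All figures below are illustrative.
def annual_cost(k_fixed, k_variable, risk_factor, levelized_capital):
    """Total annual cost including the annual risk cost Kr = ei * Kc."""
    k_risk = risk_factor * levelized_capital
    return k_fixed + k_variable + k_risk

k = annual_cost(k_fixed=120.0, k_variable=80.0,
                risk_factor=0.05, levelized_capital=400.0)  # 120 + 80 + 20
```

Ranking candidate plants then amounts to comparing their K values, with the spread driven by each plant's estimated risk factor ei.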

  4. Visual and colorimetric methods for rapid determination of total tannins in vegetable raw materials

    Directory of Open Access Journals (Sweden)

    S. P. Kalinkina

    2016-01-01

    Full Text Available The article is dedicated to the development of a rapid colorimetric method for determining the amount of tannins in aqueous extracts of vegetable raw materials. The sorption-colorimetric test is based on the sorption of tannins onto polyurethane foam impregnated with FeCl3, which produces reaction products colored black and green on the foam surface; these are then determined in the sorbent matrix. Selectivity of the tannin determination is achieved through the specific interaction of the polyphenols with iron(III) ions. The conditions of the sorption-colorimetric method were established: the concentration of iron(III) chloride impregnated into the polyurethane foam; the sorbent mass in the analytical cartridge; its degree of reagent loading; and the phase contact time. Color scales were developed for the visual determination of the amount of tannins in terms of gallic acid. The scales were digitized using the computer program "Sorbfil TLC", excluding a subjective assessment of the color intensity of the test scale. The amount of tannins in aqueous extracts of vegetable raw materials was determined by the rapid method using tablets and analytical cartridges. The results of the test determination of tannins with visual and densitometric registration of the analytical signal were compared to known methods. A metrological evaluation of the results of determining the amount of tannins by the sorption rapid colorimetric methods was carried out. The time for the visual and densitometric rapid determination of tannins, including sample preparation, is 25-30 minutes, and the relative error does not exceed 28%. The developed test methods for quantifying the tannin content make it possible to dispense with sophisticated analytical equipment, carry out the analysis under non-laboratory conditions, and do not require highly skilled personnel.

  5. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese, perform its cross-cultural adaptation, and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
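The internal-reliability statistic reported above, Cronbach's alpha, is computed from item and total-score variances. A minimal sketch on a small synthetic item-response matrix (not the study's data):

```python
# Cronbach's alpha for an item-response matrix (rows = respondents,
# columns = items).  The response matrix below is synthetic.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """alpha = N/(N-1) * (1 - sum of item variances / variance of totals)."""
    n_items = len(rows[0])
    item_vars = [variance([r[j] for r in rows]) for j in range(n_items)]
    total_var = variance([sum(r) for r in rows])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

rows = [[1, 1, 1], [0, 0, 0], [1, 1, 0], [0, 1, 1]]
alpha = cronbach_alpha(rows)
```

The "alpha if item deleted" figures in the abstract come from recomputing this on the matrix with one column removed at a time.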

  6. Estimation of absorbed doses on the basis of cytogenetic methods

    International Nuclear Information System (INIS)

    Shevchenko, V.A.; Rubanovich, A.V.; Snigiryova, G.P.

    1998-01-01

    Long-term studies in the field of radiation cytogenetics have resulted in the discovery of the relationship between the induction of chromosome aberrations and the type of ionizing radiation, its intensity and dose. This has served as the basis of biological dosimetry, an area of application of the revealed relationship, and has been used in practice to estimate absorbed doses in people exposed to emergency irradiation. The necessity of using the methods of biological dosimetry became most pressing in connection with the Chernobyl accident in 1986, as well as with other radiation situations that occurred in the nuclear industry of the former USSR. The materials presented in our works demonstrate the possibility of applying cytogenetic methods for assessing absorbed doses in populations of different regions exposed to radiation as a result of accidents at nuclear facilities (Chernobyl; the village of Muslymovo on the Techa river; the Three Mile Island nuclear power station in the USA, where an accident occurred in 1979). Fundamentally new possibilities for retrospective dose assessment are provided by the FISH method, which permits the assessment of absorbed doses several decades after the exposure occurred. In addition, the application of this method makes it possible to reconstruct the dynamics of unstable chromosome aberrations (dicentrics and centric rings), which is important for further improvement of the method of biological dosimetry based on the analysis of unstable chromosome aberrations. The purpose of our presentation is a brief description of the cytogenetic methods used in biological dosimetry, a consideration of the statistical methods of data analysis, and a description of concrete examples of their application. (J.P.N.)
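In biological dosimetry the dose is commonly reconstructed by inverting a calibration curve that relates aberration yield to dose; for low-LET radiation a linear-quadratic form Y = c + a*D + b*D^2 is typical. A hedged sketch with illustrative coefficients (not a published calibration curve):

```python
import math

# Hedged sketch of dose reconstruction from a dicentric yield using a
# linear-quadratic calibration Y = c + a*D + b*D^2.  The coefficients
# are illustrative, not a published calibration curve.
def dose_from_yield(y, c=0.001, a=0.03, b=0.06):
    """Solve b*D^2 + a*D + (c - y) = 0 for the positive root D (in Gy)."""
    disc = a * a - 4 * b * (c - y)
    return (-a + math.sqrt(disc)) / (2 * b)

# Yield produced by 2 Gy under this curve, then inverted back to the dose:
y = 0.001 + 0.03 * 2 + 0.06 * 4  # yield at D = 2 Gy
d = dose_from_yield(y)           # recovers ~2.0 Gy
```

A full analysis would also propagate the Poisson counting uncertainty on the observed yield into a confidence interval on D.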

  7. Rainfall estimation by inverting SMOS soil moisture estimates: A comparison of different methods over Australia

    Science.gov (United States)

    Brocca, Luca; Pellarin, Thierry; Crow, Wade T.; Ciabatta, Luca; Massari, Christian; Ryu, Dongryeol; Su, Chun-Hsu; Rüdiger, Christoph; Kerr, Yann

    2016-10-01

    Remote sensing of soil moisture has reached a level of maturity and accuracy for which the retrieved products can be used to improve hydrological and meteorological applications. In this study, the soil moisture product from the Soil Moisture and Ocean Salinity (SMOS) satellite is used for improving satellite rainfall estimates obtained from the Tropical Rainfall Measuring Mission multisatellite precipitation analysis product (TMPA) using three different "bottom up" techniques: SM2RAIN, the Soil Moisture Analysis Rainfall Tool, and the Antecedent Precipitation Index Modification. The implementation of these techniques aims at improving the well-known "top down" rainfall estimate derived from TMPA products (version 7) available in near real time. Ground observations provided by the Australian Water Availability Project are considered as a separate validation data set. The three algorithms are calibrated against the gauge-corrected TMPA reanalysis product, 3B42, and used for adjusting the TMPA real-time product, 3B42RT, using SMOS soil moisture data. The study area covers the entire Australian continent, and the analysis period ranges from January 2010 to November 2013. Results show that all the SMOS-based rainfall products improve the performance of 3B42RT, even at the daily time scale (unlike in previous investigations). The major improvements are obtained in terms of estimation of accumulated rainfall, with a reduction of the root-mean-square error of more than 25%. Also, in terms of temporal dynamics (correlation) and rainfall detection (categorical scores), the SMOS-based products provide slightly better results with respect to 3B42RT, even though the relative performance between the methods is not always the same. The strengths and weaknesses of each algorithm and the spatial variability of their performances are identified in order to indicate the ways forward for this promising research activity.
Results show that the integration of bottom up and top down approaches
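The "bottom up" idea behind SM2RAIN is that rainfall can be inverted from the soil-water balance: rainfall roughly equals the storage change Z*ds/dt plus a drainage term a*s^b, with s the relative saturation. The sketch below is a simplified illustration of that idea; all parameter values and the soil-moisture series are invented, not the study's calibrated values:

```python
# Hedged sketch of the SM2RAIN-style inversion: p(t) ~ Z*ds/dt + a*s**b,
# where s is relative saturation, Z a water capacity (mm), and a, b drainage
# parameters.  All values below are illustrative, not calibrated.
def sm2rain(sm_series, dt=1.0, z=80.0, a=5.0, b=3.0):
    """Estimate rainfall between consecutive soil-moisture observations."""
    rain = []
    for s0, s1 in zip(sm_series, sm_series[1:]):
        s_mid = 0.5 * (s0 + s1)
        p = z * (s1 - s0) / dt + a * s_mid ** b
        rain.append(max(p, 0.0))  # negative estimates are clipped to zero
    return rain

rain = sm2rain([0.30, 0.45, 0.40])  # a wetting step, then a dry-down
```

The dry-down interval yields no rainfall (storage loss exceeds drainage), while the wetting step is attributed to precipitation; the real algorithms add evapotranspiration handling and parameter calibration against a reference product such as 3B42.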

  8. Public-Private Investment Partnerships: Efficiency Estimation Methods

    Directory of Open Access Journals (Sweden)

    Aleksandr Valeryevich Trynov

    2016-06-01

    Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to the more efficient use of budgetary resources. The author proposed a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based upon a synthesis of the approaches used to evaluate projects implemented in the private and public sectors and, in contrast to the existing methods, allows the indirect (multiplicative) effect arising during the implementation of a project to be taken into account. In the article, to estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed. The matrix is based on the data of the Sverdlovsk region for 2013. In the article, the genesis of the balance models of economic systems is presented. The evolution of balance models in the Russian (Soviet) and foreign sources from their emergence up to now is traced. It is shown that SAMs are widely used around the world for a wide range of applications, primarily to assess the impact of various exogenous factors on the regional economy. In order to refine the estimates of the multiplicative effects, the disaggregation of the "industry" account of the social accounting matrix was carried out in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to take the particular characteristics of the industry of the estimated investment project into account. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
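The multiplier calculation that a SAM supports is, at its core, a Leontief-type inversion: total output x solves (I - A)x = d for a final-demand change d, and the excess of x over d is the indirect (multiplicative) effect. A minimal two-sector sketch; the coefficient matrix is illustrative, not the Sverdlovsk-region SAM:

```python
# Hedged sketch of a SAM-style multiplier calculation: solve (I - A) x = d
# for a 2x2 coefficient matrix A.  The matrix is illustrative, not the
# Sverdlovsk-region SAM.
def leontief_multipliers(a, d):
    """Solve (I - A) x = d for a 2x2 coefficient matrix A via direct inverse."""
    m = [[1 - a[0][0], -a[0][1]],
         [-a[1][0], 1 - a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]

# A one-unit demand shock to sector 1 raises total output by more than one
# unit: the excess is the indirect (multiplicative) effect.
x = leontief_multipliers([[0.2, 0.1], [0.3, 0.25]], [1.0, 0.0])
```

The disaggregation by OKVED mentioned above corresponds to enlarging A so that the shocked row matches the project's specific industry.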

  9. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    Applicability of the bootstrap method is investigated for estimating the statistical error of the Feynman-α method, one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be estimated simply from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements. Using a resampling technique called the bootstrap method, the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error from only a single measurement of reactor noise. In order to validate the proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in the nuclear fuel, at the Kyoto University Criticality Assembly. Through this measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
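
    The resampling idea can be sketched as follows; the Poisson gate counts and the simple variance-to-mean statistic are stand-ins for real reactor-noise data, not the authors' dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for neutron counts per counting gate from a single
# reactor-noise measurement (real data would come from the detector).
counts = rng.poisson(lam=12.0, size=2000)

def feynman_y(c):
    """Feynman Y statistic: variance-to-mean ratio minus one."""
    return c.var(ddof=1) / c.mean() - 1.0

# Bootstrap: resample the gates with replacement and recompute Y.
n_boot = 1000
boot_y = np.array([
    feynman_y(rng.choice(counts, size=counts.size, replace=True))
    for _ in range(n_boot)
])

y_hat = feynman_y(counts)
std_err = boot_y.std(ddof=1)                          # bootstrap standard error
ci_low, ci_high = np.percentile(boot_y, [2.5, 97.5])  # 95 % confidence interval
```

    The point is that the spread of the bootstrap replicates estimates the statistical error without repeating the physical measurement.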

  10. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, together with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
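
    The underlying Green-Kubo relation (κ proportional to the time integral of the heat-flux autocorrelation function) can be illustrated on a synthetic time series; the AR(1) "heat flux" below is a stand-in for MD output, and all physical prefactors (V, k_B, T) are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic heat-flux time series standing in for MD output: an AR(1)
# process with a known correlation time (illustrative only).
n, phi = 100_000, 0.95
noise = rng.normal(size=n)
J = np.empty(n)
J[0] = noise[0]
for t in range(1, n):
    J[t] = phi * J[t - 1] + noise[t]

def autocorr(x, max_lag):
    """Unbiased sample autocorrelation function up to max_lag."""
    x = x - x.mean()
    m = x.size
    return np.array([np.dot(x[:m - k], x[k:]) / (m - k) for k in range(max_lag)])

# Green-Kubo: kappa is proportional to the time integral of the heat-flux
# autocorrelation function; here the time step is dt = 1, so the integral
# is a plain sum over lags (truncated at a finite correlation window).
acf = autocorr(J, max_lag=300)
kappa_gk = acf.sum()
```

    For this AR(1) toy model the exact integral is sigma^2 / ((1 - phi^2)(1 - phi)), about 205, so the estimate also illustrates how slowly GK integrals converge, which is the cost problem the authors' accelerated formalism addresses.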

  11. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved level by level downwards. While the relationships of connectors and mapping constraints among different modules are established, the definition of size and texture parameters is also completed. A standardized process is produced to support the practical operation of top-down rapid modeling of virtual humans. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  12. Estimation of Anthocyanin Content of Berries by NIR Method

    International Nuclear Information System (INIS)

    Zsivanovits, G.; Ludneva, D.; Iliev, A.

    2010-01-01

    Anthocyanin contents of fruits were estimated by a VIS spectrophotometer and compared with spectra measured by an NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol using NIR is easier, faster and non-destructive. NIR spectra were recorded in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals from Bentham Instruments Ltd (GB), and a photometric camera created at the Canning Research Institute were used. An important feature of this camera is the possibility of simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed with CAMO Unscrambler 9.1 software using the PCA, PLS and PCR methods. Based on the analyzed spectra, qualitative and quantitative calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.
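
    A chemometric calibration of the kind described (PCA/PCR on spectra) can be sketched on synthetic data; the band shape, wavelength grid and concentrations below are invented for illustration, not measured berry spectra:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for NIR spectra (600-1100 nm, 10 nm step = 51 bands):
# each spectrum is a concentration-dependent absorption band plus noise.
wavelengths = np.arange(600, 1110, 10)
n_samples = 40
conc = rng.uniform(0.5, 5.0, n_samples)            # "anthocyanin", mg/g
peak = np.exp(-((wavelengths - 700) / 60.0) ** 2)  # assumed band shape
X = conc[:, None] * peak[None, :] + rng.normal(0, 0.02, (n_samples, 51))

# Principal component regression (PCR): project mean-centred spectra onto
# the leading principal components, then fit least squares on the scores.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                             # first 3 PCs
design = np.column_stack([np.ones(n_samples), scores])
coef, *_ = np.linalg.lstsq(design, conc, rcond=None)

pred = design @ coef
r2 = 1 - ((conc - pred) ** 2).sum() / ((conc - conc.mean()) ** 2).sum()
```

    With clean synthetic data the fit is nearly perfect; real calibrations report lower but still useful R².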

  13. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
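
    Of the regression models listed, Deming regression has a simple closed form; a minimal sketch, with invented method-comparison data, is:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept; lam is the assumed ratio of
    the measurement-error variances of the two methods (var_y / var_x)."""
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(3)
truth = rng.uniform(2, 20, 200)                    # true analyte levels
x = truth + rng.normal(0, 0.3, 200)                # method 1, with error
y = 1.08 * truth + 0.4 + rng.normal(0, 0.3, 200)   # method 2: 8 % proportional bias

slope, intercept = deming(x, y)
```

    Unlike ordinary least squares, Deming regression allows for measurement error in both methods, which is why it is a standard choice in method-comparison studies.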

  14. Method for estimating road salt contamination of Norwegian lakes

    Science.gov (United States)

    Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle

    2013-04-01

    Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement methodology to estimate the chloride concentration in any given water body in Norway. This goal is feasible if the complexity of solute transport in the landscape is simplified. The idea was to keep computations as simple as possible in order to increase the spatial resolution of the input functions. The first simplification was to treat all roads exposed to regular salt application as steady-state sources of sodium chloride. This is valid if new road salt is applied before previous contamination is removed through precipitation; the main reason for this assumption is the significant retention capacity of vegetation, organic matter and soil. The second simplification was that the groundwater table is close to the surface. This assumption is valid for a major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass of NaCl per unit time) and conditional estimates of the normal water balance (volume of water per unit time) to calculate the steady-state chloride concentration along the lake perimeter. The spatial resolution of mass load and estimated concentration along the lake perimeter was 25 m x 25 m, while the water balance had 1 km x 1 km resolution. The method was validated for a limited number of Norwegian lakes, and estimation results have been compared to observations. Initial results indicate significant overlap between measurements and estimations, but only for lakes where road salt is the major contributor to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the
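
    The steady-state concentration estimate amounts to dividing a constant mass load by the water flux through the catchment; a sketch with invented numbers (not values from the study) is:

```python
# Steady-state chloride concentration from a constant road-salt load and
# the catchment water balance. All numbers are illustrative assumptions.
road_km = 4.0                  # salted road length draining to the lake
load_per_km = 8_000.0          # kg NaCl per km per year (assumed)
runoff_mm = 600.0              # annual runoff depth (assumed)
area_km2 = 12.0                # catchment area (assumed)

cl_fraction = 35.45 / (35.45 + 22.99)            # mass fraction of Cl in NaCl
cl_load_kg = road_km * load_per_km * cl_fraction  # kg Cl per year
water_m3 = runoff_mm / 1000.0 * area_km2 * 1e6    # m3 of water per year

# kg/m3 equals g/L, so multiply by 1000 to get mg/L.
conc_mg_l = cl_load_kg / water_m3 * 1000.0
```

    The study's refinement is doing this calculation on a 25 m grid of mass loads against a 1 km grid of water balance, rather than with lumped numbers as here.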

  15. Earthquake magnitude estimation using the τ c and P d method for earthquake early warning systems

    Science.gov (United States)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

    Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also the most difficult parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by the KiK-net in Japan, and aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τ c and P d methods. The standard deviations of the magnitudes calculated with these two formulas are ±0.65 and ±0.56, respectively. The P d value can also be used to estimate the peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. In order to ensure the stability and reliability of the magnitude estimation results, we propose a compatibility test based on the natures of these two parameters. The reliability of the early warning information is significantly improved through this test.
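
    The two parameters can be computed from the early portion of a displacement record; the synthetic waveform below stands in for a real strong-motion record, and the regression coefficients mentioned in the final comment are placeholders, not the paper's fitted values:

```python
import numpy as np

# Synthetic displacement record standing in for the first 3 s after the
# P-wave arrival (a decaying sinusoid; illustrative only).
fs = 100.0                                 # sampling rate, Hz
t = np.arange(0, 3.0, 1.0 / fs)
f0 = 2.0                                   # dominant frequency, Hz
u = 0.01 * np.exp(-t) * np.sin(2 * np.pi * f0 * t)   # displacement, m
v = np.gradient(u, 1.0 / fs)               # velocity by differentiation

# tau_c: characteristic period from the ratio of integrated squared
# velocity to integrated squared displacement.
r = np.sum(v ** 2) / np.sum(u ** 2)
tau_c = 2.0 * np.pi / np.sqrt(r)

# P_d: peak displacement amplitude within the 3 s window.
p_d = np.max(np.abs(u))

# A magnitude estimate then comes from an empirical regression such as
# M = a * log10(tau_c) + b, with a and b fitted to a catalog (placeholder).
```

    For this 2 Hz test signal tau_c recovers roughly the 0.5 s dominant period, which is the behaviour the method relies on: larger events produce longer-period initial P-wave motion.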

  16. Rapid determination of tannins in tanning baths by adaptation of BSA method.

    Science.gov (United States)

    Molinari, R; Buonomenna, M G; Cassano, A; Drioli, E

    2001-01-01

    A rapid and reproducible method for the determination of tannins in vegetable tanning baths is proposed as a modification of the BSA method for grain tannins described in the literature. The protein BSA was used instead of the leather powder employed in the Filter Method, which is adopted in Italy and various other countries of Central Europe. In this rapid method the tannin content is determined by means of a spectrophotometric reading rather than the gravimetric analysis of the Filter Method. The BSA method, which belongs to the mixed methods (those using both precipitation and complexation of tannins), consists of the selective precipitation by BSA of tannins from a solution that also contains non-tannins, the dissolution of the precipitate, and the quantification of the free tannin amount by its complexation with Fe(III) in hydrochloric acid solutions. The absorbance values, read at 522 nm, were expressed in terms of tannic acid concentration by using a calibration curve made with standard solutions of tannic acid; these were correlated with the results obtained by using the Filter Method.
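
    The final quantification step, reading a concentration off a tannic acid calibration curve, can be sketched as follows; the standard concentrations and absorbances are invented for illustration:

```python
import numpy as np

# Hypothetical calibration standards (tannic acid, mg/mL) and their
# absorbance at 522 nm after Fe(III) complexation (invented values).
conc_std = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
abs_std = np.array([0.02, 0.13, 0.24, 0.47, 0.93])

# Fit the least-squares line A = m*c + b, then invert it for an unknown.
m, b = np.polyfit(conc_std, abs_std, 1)

abs_sample = 0.35                       # absorbance of an unknown bath sample
conc_sample = (abs_sample - b) / m      # tannic acid equivalents, mg/mL
```

    This inversion of a linear calibration is what replaces the Filter Method's gravimetric step and makes the assay fast.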

  17. Estimating recharge at yucca mountain, nevada, usa: comparison of methods

    International Nuclear Information System (INIS)

    Flint, A. L.; Flint, L. E.; Kwicklis, E. M.; Fabryka-Martin, J. T.; Bodvarsson, G. S.

    2001-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  18. Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods

    Science.gov (United States)

    Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.

    2002-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
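
    One of the listed approaches, the chloride mass balance, reduces to a one-line calculation; the numbers below are illustrative assumptions, not site data:

```python
# Chloride mass balance: at steady state the chloride delivered by
# precipitation (wet plus dry deposition) equals the chloride carried
# downward by recharge, so R = P * Cl_precip / Cl_porewater.
# All values below are invented for illustration.
precip_mm = 170.0        # annual precipitation, mm/year
cl_precip = 0.62         # mg/L chloride in precipitation
cl_pore = 35.0           # mg/L chloride in unsaturated-zone pore water

recharge_mm = precip_mm * cl_precip / cl_pore   # mm/year
```

    High pore-water chloride implies low recharge (most water evaporates, leaving salt behind), which is why the method suits arid sites like Yucca Mountain.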

  19. Application of Rapid Prototyping Methods to High-Speed Wind Tunnel Testing

    Science.gov (United States)

    Springer, A. M.

    1998-01-01

    This study was undertaken in MSFC's 14-Inch Trisonic Wind Tunnel to determine if rapid prototyping methods could be used in the design and manufacture of high-speed wind tunnel models in direct testing applications, and if these methods would reduce model design/fabrication time and cost while providing models of high enough fidelity to provide adequate aerodynamic data, and of sufficient strength to survive the test environment. The rapid prototyping methods used to construct wind tunnel models in a wing-body-tail configuration were: fused deposition modeling using both ABS plastic and PEEK as building materials, stereolithography using the photopolymer SL-5170, selective laser sintering using glass-reinforced nylon, and laminated object manufacturing using plastic reinforced with glass and 'paper'. This study revealed good agreement among the SLA model, the metal model with an FDM-ABS nose, the metal model with an SLA nose, and the all-metal model for most operating conditions, while the FDM-ABS data diverged at higher loading conditions. Data from the initial SLS model showed poor agreement due to problems in post-processing, resulting in a different configuration. A second SLS model was tested and showed relatively good agreement. It can be concluded that rapid prototyping models show promise in preliminary aerodynamic development studies at subsonic, transonic, and supersonic speeds.

  20. Multivariate regression methods for estimating velocity of ictal discharges from human microelectrode recordings

    Science.gov (United States)

    Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.

    2017-08-01

    Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically
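
    A regression-based velocity estimator of the kind evaluated here can be sketched as follows: regress per-electrode discharge times on electrode coordinates, and read the speed and direction off the fitted slowness vector. The grid geometry and timing data below are synthetic, not patient recordings:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 10x10 microelectrode grid at 0.4 mm pitch, crossed by a
# plane wave travelling at 0.3 m/s, 30 degrees from the x axis; the
# per-electrode times mimic negative-peak times of one discharge.
pitch = 0.4e-3
xs, ys = np.meshgrid(np.arange(10) * pitch, np.arange(10) * pitch)
x, y = xs.ravel(), ys.ravel()
true_speed, ang = 0.3, np.deg2rad(30)
slowness_true = np.array([np.cos(ang), np.sin(ang)]) / true_speed
t = x * slowness_true[0] + y * slowness_true[1] + rng.normal(0, 2e-4, x.size)

# Multivariate regression of arrival time on electrode position gives the
# slowness vector; speed is the reciprocal of its norm.
A = np.column_stack([np.ones(x.size), x, y])
beta, *_ = np.linalg.lstsq(A, t, rcond=None)
s_vec = beta[1:]
speed = 1.0 / np.linalg.norm(s_vec)                       # m/s
direction_deg = np.degrees(np.arctan2(s_vec[1], s_vec[0]))
```

    Replacing the least-squares fit with least absolute deviations, as the paper suggests, makes the estimate robust to outlier electrodes.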

  1. Technical note: Rapid image-based field methods improve the quantification of termite mound structures and greenhouse-gas fluxes

    Directory of Open Access Journals (Sweden)

    P. A. Nauer

    2018-06-01

    Full Text Available Termite mounds (TMs) mediate biogeochemical processes with global relevance, such as turnover of the important greenhouse gas methane (CH4). However, the complex internal and external morphology of TMs impedes an accurate quantitative description. Here we present two novel field methods, photogrammetry (PG) and cross-sectional image analysis, to quantify the external and internal mound structure of 29 TMs of three termite species. Photogrammetry was used to measure epigeal volume (VE), surface area (AE) and mound basal area (AB) by reconstructing 3-D models from digital photographs, and was compared against a water-displacement method and the conventional approach of approximating TMs by simple geometric shapes. To describe TM internal structure, we introduce TM macro- and micro-porosity (θM and θμ), the volume fractions of macroscopic chambers and of microscopic pores in the wall material, respectively. Macro-porosity was estimated using image analysis of single TM cross sections, and compared against full X-ray computed tomography (CT) scans of 17 TMs. For these TMs we present complete pore fractions to assess species-specific differences in internal structure. The PG method yielded VE nearly identical to the water-displacement method, while approximation of TMs by simple geometric shapes led to errors of 4–200 %. Likewise, using PG substantially improved the accuracy of CH4 emission estimates, by 10–50 %. Comprehensive CT scanning revealed that the investigated TMs have species-specific ranges of θM and θμ, but similar total porosity. Image analysis of single TM cross sections produced good estimates of θM for species with thick walls and evenly distributed chambers. The new image-based methods allow rapid and accurate quantitative characterisation of TMs to answer ecological, physiological and biogeochemical questions. The PG method should be applied when measuring greenhouse-gas emissions from TMs to avoid large errors from inadequate shape

  2. TWO METHODS FOR REMOTE ESTIMATION OF COMPLETE URBAN SURFACE TEMPERATURE

    Directory of Open Access Journals (Sweden)

    L. Jiang

    2017-09-01

    Full Text Available Complete urban surface temperature (TC) is a key parameter for evaluating the energy exchange between the urban surface and the atmosphere. At the present stage, estimation of TC still needs detailed 3D structure information of the urban surface; however, it is often difficult to obtain the geometric structure and the corresponding component temperatures of the urban surface, so a concise and efficient method for estimating TC by remote sensing is still lacking. Based on four typical urban surface scale models, combined with the Envi-met model, thermal radiant directionality forward modeling and a kernel model, we analyzed the hourly component temperatures and the radiation temperature in each direction over a complete day and night cycle for two seasons, summer and winter, and calculated the hemispherical integral temperature and TC. The following conclusions are obtained by examining the relationship of directional radiation temperature, hemispherical integral temperature and TC: (1) There is an optimal angle at which the radiation temperature in a single observation direction approaches TC, when the viewing zenith angle is 45–60° and the viewing azimuth is near the plane perpendicular to the solar principal plane; the mean absolute difference is about 1.1 K in the daytime. (2) With several (3–5) directional temperatures at different view angles, the thermal radiation directionality kernel model can more accurately calculate a hemispherical integral temperature close to TC; the mean absolute error is about 1.0 K in the daytime. This study proposes simple and effective strategies for estimating TC by remote sensing, which are expected to improve the quantitative level of remote sensing of the urban thermal environment.

  3. Collaborative validation of a rapid method for efficient virus concentration in bottled water

    DEFF Research Database (Denmark)

    Schultz, Anna Charlotte; Perelle, Sylvie; Di Pasquale, Simona

    2011-01-01

    Enteric viruses, including norovirus (NoV) and hepatitis A virus (HAV), have emerged as a major cause of waterborne outbreaks worldwide. Due to their low infectious doses and low concentrations in water samples, an efficient and rapid virus concentration method is required for routine control. Three newly developed methods, A, B and C, for virus concentration in bottled water were compared against the reference method D: (A) Convective Interaction Media (CIM) monolithic chromatography; filtration of viruses followed by (B) direct lysis of viruses on the membrane or (C) concentration of viruses by ultracentrifugation; and (D) concentration of viruses by ultrafiltration. Each method (A, B and C) was assessed for its efficacy in recovering 10-fold dilutions of HAV and feline calicivirus (FCV) spiked into bottles of 1.5 L of mineral water. Within the tested characteristics, all the new methods showed better performance than method D...

  4. Fuji apple storage time rapid determination method using Vis/NIR spectroscopy

    Science.gov (United States)

    Liu, Fuqi; Tang, Xuxiang

    2015-01-01

    A rapid method for determining the storage time of Fuji apples using visible/near-infrared (Vis/NIR) spectroscopy was studied in this paper. Vis/NIR diffuse reflection spectroscopy responses of samples were measured for 6 days. The spectroscopy data were processed by stochastic resonance (SR). Principal component analysis (PCA) was utilized to analyze the original spectroscopy data and the signal-to-noise ratio (SNR) eigen value. Results demonstrated that PCA could not fully discriminate the Fuji apples using the original spectroscopy data. The SNR spectrum clearly classified all apple samples, and PCA using the SNR spectrum successfully discriminated the apple samples. Therefore, Vis/NIR spectroscopy is effective for rapid discrimination of Fuji apple storage time. The proposed method is also promising for condition safety control and management in food and environmental laboratories. PMID:25874818
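
    The PCA-based discrimination step can be sketched on synthetic spectra; the band shapes and the storage-induced spectral shift below are invented, and the stochastic resonance preprocessing is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Vis/NIR-like spectra for apples stored 1 day vs 6 days:
# storage adds weight to a second, shifted absorbance band (an invented
# effect for illustration, not a measured ripening signature).
bands = np.linspace(500, 1000, 80)
base = np.exp(-((bands - 680) / 90.0) ** 2)
shift = np.exp(-((bands - 720) / 90.0) ** 2)

def batch(n, w):
    """n noisy spectra with storage-effect weight w."""
    return base + w * shift + rng.normal(0, 0.01, (n, bands.size))

day1, day6 = batch(15, 0.05), batch(15, 0.30)
X = np.vstack([day1, day6])

# PCA via SVD of the mean-centred spectra; project onto the first two PCs.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc = Xc @ Vt[:2].T

# If the storage effect dominates the variance, the two groups separate
# along PC1: the between-group gap exceeds the within-group spread.
gap = abs(pc[:15, 0].mean() - pc[15:, 0].mean())
spread = pc[:15, 0].std() + pc[15:, 0].std()
```

    In the paper, this separation only appeared after SR preprocessing boosted the signal-to-noise ratio; here the synthetic effect is strong enough to separate directly.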

  5. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people that enables them to carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to the estimation of an employee's professionalism, both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from that of the personal characteristics of the employee that determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as of the final results of its implementation against criteria of efficiency and reliability. The proposed methodological approach to the estimation of an employee's professionalism makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker so as to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  6. Strengths and limitations of period estimation methods for circadian data.

    Directory of Open Access Journals (Sweden)

    Tomasz Zielinski

    Full Text Available A key step in the analysis of circadian data is to make an accurate estimate of the underlying period. There are many different techniques and algorithms for determining period, all with different assumptions and differing levels of complexity. Choosing which algorithm, which implementation and which measures of accuracy to use presents many pitfalls, especially for the non-expert. We have developed the BioDare system, an online service allowing data-sharing (including public dissemination), data-processing and analysis. Circadian experiments are the main focus of BioDare, hence period analysis is a major feature of the system. Six methods have been incorporated into BioDare: Enright and Lomb-Scargle periodograms, FFT-NLLS, mFourfit, MESA and Spectrum Resampling. Here we review those six techniques, explain the principles behind each algorithm and evaluate their performance. In order to quantify the methods' accuracy, we examine the algorithms against artificial mathematical test signals and model-generated mRNA data. Our re-implementation of each method in Java allows meaningful comparisons of the computational complexity and computing time associated with each algorithm. Finally, we provide guidelines on which algorithms are most appropriate for which data types, and recommendations on experimental design to extract optimal data for analysis.
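
    One of the six methods, the Lomb-Scargle periodogram, is readily available in SciPy; a minimal sketch on synthetic, unevenly sampled data (not BioDare data) is:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(6)

# Unevenly sampled "luminescence" time course with a 24.6 h rhythm
# (synthetic stand-in for a circadian reporter measurement).
t = np.sort(rng.uniform(0, 120, 200))          # sampling times, hours
period_true = 24.6
y = np.sin(2 * np.pi * t / period_true) + rng.normal(0, 0.3, t.size)

# Scan candidate periods from 18 to 34 h; Lomb-Scargle handles irregular
# sampling directly, with no interpolation onto an even grid.
periods = np.linspace(18, 34, 400)
ang_freqs = 2 * np.pi / periods                # lombscargle wants omega
power = lombscargle(t, y - y.mean(), ang_freqs)
period_est = periods[np.argmax(power)]
```

    Tolerance of uneven sampling is exactly why Lomb-Scargle is favoured over a plain FFT for circadian time courses with missed readings.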

  7. New methods for rapid data acquisition of contaminated land cover after NPP accident

    International Nuclear Information System (INIS)

    Hulka, J.; Cespirova, I.

    2008-01-01

    The aim of the research project is the analysis of modern, rapid and reliable data acquisition methods for agricultural countermeasures, feed-stuff restrictions and clean-up of large contaminated areas after an NPP accident. The acquisition of reliable agricultural data, especially based on satellite technology, and the analysis of landscape contamination (based on computer codes vs. in situ measurements, airborne and/or terrestrial mapping of contamination) are discussed. (authors)

  8. A rapid method for establishment of a reverse genetics system for canine parvovirus.

    Science.gov (United States)

    Yu, Yongle; Su, Jun; Wang, Jigui; Xi, Ji; Mao, Yaping; Hou, Qiang; Zhang, Xiaomei; Liu, Weiquan

    2017-12-01

    Canine parvovirus (CPV) is an important and highly prevalent pathogen of dogs that causes acute hemorrhagic enteritis. Here, we describe a rapid method for the construction and characterization of a full-length infectious clone (rCPV) of CPV. Feline kidney (F81) cells were transfected with rCPV incorporating an engineered EcoR I site that served as a genetic marker. The rescued virus was indistinguishable from the wild-type virus in its biological properties.

  9. New methods for rapid data acquisition of contaminated land cover after NPP accident

    International Nuclear Information System (INIS)

    Hulka, J.; Cespirova, I.

    2009-01-01

    The aim of the research project is the analysis of modern, rapid and reliable data acquisition methods for agricultural countermeasures, feed-stuff restrictions and clean-up of large contaminated areas after an NPP accident. The acquisition of reliable agricultural data, especially based on satellite technology, and the analysis of landscape contamination (based on computer codes vs. in situ measurements, airborne and/or terrestrial mapping of contamination) are discussed. (authors)

  10. A simple, rapid and inexpensive screening method for the identification of Pythium insidiosum.

    Science.gov (United States)

    Tondolo, Juliana Simoni Moraes; Loreto, Erico Silva; Denardi, Laura Bedin; Mario, Débora Alves Nunes; Alves, Sydney Hartz; Santurio, Janio Morais

    2013-04-01

    Growth of Pythium insidiosum mycelia around minocycline disks (30 μg) did not occur within 7 days of incubation at 35 °C when the isolates were grown on Sabouraud, corn meal, Mueller-Hinton or RPMI agar. This technique offers a simple and rapid method for the differentiation of P. insidiosum from true filamentous fungi. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. A rapid method for the determination of some antihypertensive and antipyretic drugs by thermometric titrimetry.

    Science.gov (United States)

    Abbasi, U M; Chand, F; Bhanger, M I; Memon, S A

    1986-02-01

    A simple and rapid method is described for the direct thermometric determination of milligram amounts of methyldopa, propranolol hydrochloride, 1-phenyl-3-methylpyrazolone (MPP) and 2,3-dimethyl-1-phenylpyrazol-5-one (phenazone) in the presence of excipients. The compounds are reacted with N-bromosuccinimide and the heat of reaction is used to determine the end-point of the titration. The time required is approximately 2 min, and the accuracy is analytically acceptable.

  12. Evaluation of cost estimates and calculation methods used by SKB

    International Nuclear Information System (INIS)

    1994-01-01

    The Swedish Nuclear Fuel Management Co. (SKB) has estimated the costs of decommissioning the Swedish nuclear power plants and managing the nuclear wastes in a 'traditional' manner, i.e. by handling uncertainties through percentage additions: a 'normal' addition is used for uncertainties in specified technical systems, and 'extra' additions are used for systems uncertainties. An alternative method is suggested that applies top-down principles to uncertainties successively, giving higher precision as knowledge accumulates. This type of calculation can help project managers identify and deal with areas common to different partial projects. A first step in this direction would be to perform sensitivity analyses for the most important calculation parameters. 21 refs

  13. Methods for estimating risks to nuclear power plants from shipping

    International Nuclear Information System (INIS)

    Walker, D.H.; Hartman, M.G.; Robbins, T.R.

    1975-01-01

    Nuclear power plants sited on land near shipping lanes or offshore can be exposed to potential risks if there is nearby ship or barge traffic which involves the transport of hazardous cargo. Methods that have been developed for estimating the degree of risk are summarized. Of concern are any accidents which could lead to a release or spill of the hazardous cargo, or to an explosion. A probability of occurrence of the order of 10⁻⁷ per year is a general guideline which has been used to judge whether or not the risk from hazards created by accidents is acceptable. This guideline has been followed in the risk assessment discussed in this paper. 19 references

  14. A novel sample preparation method using rapid nonheated saponification method for the determination of cholesterol in emulsified foods.

    Science.gov (United States)

    Jeong, In-Seek; Kwak, Byung-Man; Ahn, Jang-Hyuk; Leem, Donggil; Yoon, Taehyung; Yoon, Changyong; Jeong, Jayoung; Park, Jung-Min; Kim, Jin-Man

    2012-10-01

    In this study, nonheated saponification was employed as a novel, rapid, and easy sample preparation method for the determination of cholesterol in emulsified foods. Cholesterol content was analyzed using gas chromatography with a flame ionization detector (GC-FID). The cholesterol extraction method was optimized for maximum recovery from baby food and infant formula. Under these conditions, the optimum extraction solvent was 10 mL ethyl ether per 1 to 2 g sample, and the saponification solution was 0.2 mL KOH in methanol. The cholesterol content in the products was determined to be within the certified range of certified reference materials (CRMs), NIST SRM 1544 and SRM 1849. The results of the recovery test performed using spiked materials were in the range of 98.24% to 99.45%, with a relative standard deviation (RSD) between 0.83% and 1.61%. This method could be used to reduce sample pretreatment time and is expected to provide an accurate determination of cholesterol in emulsified food matrices such as infant formula and baby food. A novel, rapid, and easy sample preparation method using nonheated saponification was developed for cholesterol detection in emulsified foods. Recovery tests of CRMs were satisfactory, and the recoveries of spiked materials were accurate and precise. This method was effective and decreased the time required for analysis by 5-fold compared to the official method. © 2012 Institute of Food Technologists®

  15. Rapid column extraction method for actinides and strontium in fish and other animal tissue samples

    International Nuclear Information System (INIS)

    Maxwell III, S.L.; Faison, D.M.

    2008-01-01

    The analysis of actinides and radiostrontium in animal tissue samples is very important for environmental monitoring. There is a need to measure actinide isotopes and strontium with very low detection limits in animal tissue samples, including fish, deer, hogs, beef and shellfish. A new, rapid separation method has been developed that allows the measurement of plutonium, neptunium, uranium, americium, curium and strontium isotopes in large animal tissue samples (100-200 g) with high chemical recoveries and effective removal of matrix interferences. This method uses stacked TEVA®, TRU® and DGA® resin cartridges from Eichrom Technologies (Darien, IL, USA), which allow the rapid separation of plutonium (Pu), neptunium (Np), uranium (U), americium (Am) and curium (Cm) on a single multi-stage column, combined with alpha spectrometry. Strontium is collected on Sr Resin® from Eichrom Technologies. After acid digestion and furnace heating of the animal tissue samples, the actinides and ⁸⁹/⁹⁰Sr are separated using column extraction chromatography. This method has been shown to be effective over a wide range of animal tissue matrices. Vacuum box cartridge technology with rapid flow rates is used to minimize sample preparation time. (author)

  16. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results across experiments, projects and laboratories. Methods that allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets, related by known biological function or designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
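    The group-wise enrichment test that GSMA performs can be approximated by a rank-based comparison of each gene set against the remaining genes of a dataset. The sketch below is an illustrative reimplementation under that assumption (the function and variable names are hypothetical, not the GSMA software's API), using a Mann-Whitney U test as the enrichment statistic, which may differ from the statistic the authors chose:

    ```python
    from scipy.stats import mannwhitneyu

    def gene_set_matrix_analysis(datasets, gene_sets):
        """For each (gene set, dataset) pair, test whether the set's
        expression changes are shifted relative to the remaining genes."""
        pvals = {}
        for ds_name, changes in datasets.items():   # changes: {gene: fold change}
            for gs_name, members in gene_sets.items():
                member_set = set(members)
                in_set = [changes[g] for g in members if g in changes]
                rest = [v for g, v in changes.items() if g not in member_set]
                _, p = mannwhitneyu(in_set, rest, alternative="two-sided")
                pvals[(gs_name, ds_name)] = p
        return pvals
    ```

    Because the statistic is computed per (gene set, dataset) pair, the same gene sets can be polled against many datasets in one call, which mirrors the matrix-style polling described in the abstract.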

  17. A direct and rapid method to determine cyanide in urine by capillary electrophoresis.

    Science.gov (United States)

    Zhang, Qiyang; Maddukuri, Naveen; Gong, Maojun

    2015-10-02

    Cyanides are poisonous chemicals that widely exist in nature and industrial processes as well as accidental fires. Rapid and accurate determination of cyanide exposure would facilitate forensic investigation, medical diagnosis, and chronic cyanide monitoring. Here, a rapid and direct method was developed for the determination of cyanide ions in urinary samples. This technique was based on an integrated capillary electrophoresis system coupled with laser-induced fluorescence (LIF) detection. Cyanide ions were derivatized with naphthalene-2,3-dicarboxaldehyde (NDA) and a primary amine (glycine) for LIF detection. Three separate reagents, NDA, glycine, and cyanide sample, were mixed online, which secured uniform conditions between samples for cyanide derivatization and reduced the risk of precipitate formation in the mixtures. Conditions were optimized; the derivatization was completed in 2-4 min, and the separation was observed in 25 s. The limit of detection (LOD) was 4.0 nM at a 3-fold signal-to-noise ratio for standard cyanide in buffer. The cyanide levels in urine samples from smokers and non-smokers were determined using the method of standard addition, which demonstrated a significant difference in cyanide levels between the two groups. The developed method was rapid and accurate, and is anticipated to be applicable to cyanide detection in wastewater with appropriate modification. Published by Elsevier B.V.
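    The abstract quantifies urinary cyanide by the method of standard addition: the signal is measured after spiking the sample with known analyte amounts, a line is fitted, and the magnitude of its x-intercept gives the unknown concentration. A minimal sketch of that calculation (function name illustrative; the paper's exact calibration procedure may differ):

    ```python
    import numpy as np

    def standard_addition(added_conc, signals):
        """Fit signal vs. added concentration; the magnitude of the
        x-intercept of the fitted line is the unknown's concentration."""
        slope, intercept = np.polyfit(added_conc, signals, 1)
        return intercept / slope
    ```

    For example, if the instrument response is proportional to total analyte, spiking 0, 1, 2 and 3 units into a sample containing 5 units returns 5 from the fit.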

  18. Application of pulse spectro- zonal luminescent method for the rapid method of material analysis

    International Nuclear Information System (INIS)

    Lisitsin, V.M.; Oleshko, V.I.; Yakovlev, A.N.

    2004-01-01

    Full text: Luminescent methods of analysis cover a large range of substances, since luminescence can be excited in the overwhelming majority of nonmetals. The analytical possibilities of luminescent methods can be substantially expanded by pulsed excitation and registration of luminescence spectra with time-resolved methods. The most promising approach is excitation by high-current electron-beam pulses of nanosecond duration, for the following reasons: the excitation is produced by a deeply penetrating ionizing radiation; the radiation pulse has a high power, up to 10⁸ W, but an energy of no more than 1 J; and the pulse has nanosecond duration. Electrons with energies of 300-400 keV penetrate to a depth of a few tenths of a millimetre, i.e. they excite the sample volumetrically, so the luminescence excited by an electron beam carries information about the bulk properties of the substance. The high excitation density makes it possible to detect and study centres (defects) with a small luminescence yield and to analyse weakly luminescent objects. The appearance of new effects can also be useful for material analysis. Information can be obtained from the change of the spectral composition of the luminescence over time after the end of the excitation pulse and from the kinetics of luminescence decay. The point is that the radiation energy is absorbed mainly by the matrix, and the electronic excitations are then transferred to the luminescence centres (defects) of the lattice. The luminescence spectrum can therefore change repeatedly after the creation of electronic excitations, carrying information about the centres (defects) that are the most effective emitters at a given time. Hence, studying the change of emission spectra over time provides an additional way of discriminating the information on the centres of a

  19. Renal parenchyma thickness: a rapid estimation of renal function on computed tomography

    International Nuclear Information System (INIS)

    Kaplon, Daniel M.; Lasser, Michael S.; Sigman, Mark; Haleblian, George E.; Pareek, Gyan

    2009-01-01

    Purpose: To define the relationship between renal parenchyma thickness (RPT) on computed tomography and renal function on nuclear renography in chronically obstructed renal units (ORUs), and to define a minimal thickness ratio associated with adequate function. Materials and Methods: Twenty-eight consecutive patients undergoing both nuclear renography and CT during a six-month period between 2004 and 2006 were included. All patients with a diagnosis of unilateral obstruction were included for analysis. RPT was measured in the following manner: the parenchyma thickness at three discrete levels of each kidney was measured using calipers on a CT workstation, and the mean of these three measurements was defined as RPT. The RPT ratio of the ORUs to the non-obstructed renal units (NORUs) was calculated and compared to the function observed on MAG-3 Lasix renography. Results: A total of 28 patients were evaluated. Mean parenchyma thickness was 1.82 cm and 2.25 cm in the ORUs and NORUs, respectively. The mean relative renal function of ORUs was 39%. Linear regression analysis revealed a significant correlation (coefficient 0.48) between renogram function and RPT ratio. A thickness ratio of 0.68 correlated with 20% renal function. Conclusion: RPT on computed tomography appears to be a powerful predictor of relative renal function in ORUs. Assessment of RPT is a useful and readily available clinical tool for surgical decision making (renal salvage therapy versus nephrectomy) in patients with ORUs. (author)
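    The RPT measurement described above, the mean of three caliper readings per kidney and the ratio between the obstructed and non-obstructed sides, can be sketched as follows (hypothetical helper functions, not the authors' software):

    ```python
    def mean_rpt(readings):
        """Mean renal parenchyma thickness (cm) from three caliper readings
        taken at three discrete levels of one kidney."""
        if len(readings) != 3:
            raise ValueError("expected three measurements per kidney")
        return sum(readings) / 3.0

    def rpt_ratio(obstructed_readings, nonobstructed_readings):
        """Ratio of obstructed to non-obstructed mean parenchyma thickness."""
        return mean_rpt(obstructed_readings) / mean_rpt(nonobstructed_readings)
    ```

    With the study's mean values (1.82 cm obstructed, 2.25 cm non-obstructed) the ratio is about 0.81; the abstract reports that a ratio of 0.68 corresponded to 20% relative function.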

  20. A rapid, sensitive, and cost-efficient assay to estimate viability of potato cyst nematodes.

    Science.gov (United States)

    van den Elsen, Sven; Ave, Maaike; Schoenmakers, Niels; Landeweert, Renske; Bakker, Jaap; Helder, Johannes

    2012-02-01

    Potato cyst nematodes (PCNs) are quarantine organisms, and they belong to the economically most relevant pathogens of potato worldwide. Methodologies to assess the viability of their cysts, which can contain 200 to 500 eggs protected by the hardened cuticle of a dead female, are either time and labor intensive or lack robustness. We present a robust and cost-efficient viability assay based on loss of membrane integrity upon death. This assay uses trehalose, a disaccharide present at a high concentration in the perivitelline fluid of PCN eggs, as a viability marker. Although this assay can detect a single viable egg, the limit of detection for regular field samples was higher, ≈10 viable eggs, due to background signals produced by other soil components. On the basis of 30 nonviable PCN samples from The Netherlands, a threshold level was defined (ΔA(trehalose) = 0.0094) below which the presence of >10 viable eggs is highly unlikely (true for ≈99.7% of the observations). This assay can easily be combined with a subsequent DNA-based species determination. The presence of trehalose is a general phenomenon among cyst nematodes; therefore, this method can probably be used for (for example) soybean, sugar beet, and cereal cyst nematodes as well.
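    The decision rule implied by the abstract, comparing the trehalose absorbance change against the empirically defined threshold of 0.0094, reduces to a simple check (illustrative code; the assay's real readout pipeline is more involved):

    ```python
    # Threshold from the abstract: below this absorbance change, the presence
    # of >10 viable eggs is highly unlikely (~99.7% of observations).
    TREHALOSE_THRESHOLD = 0.0094

    def sample_likely_viable(delta_a_trehalose, threshold=TREHALOSE_THRESHOLD):
        """True when the trehalose signal suggests more than ~10 viable eggs."""
        return delta_a_trehalose >= threshold
    ```

    In practice the background signal from other soil components sets the ~10-egg detection limit, even though the chemistry can detect a single viable egg.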

  1. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol.2

    International Nuclear Information System (INIS)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. The radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios

  2. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    Science.gov (United States)

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
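    The core idea, fitting one mixture model to features from valid iris regions and one to occluded regions, then labeling each pixel by whichever model assigns it higher likelihood, can be sketched with scikit-learn's standard `GaussianMixture`. This is a simplification: the paper uses FJ-GMMs, which select the number of components automatically, and Gabor filter bank features rather than the raw feature vectors assumed here.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_region_models(valid_feats, occluded_feats, n_components=2, seed=0):
        """Fit one Gaussian mixture per region class (valid vs. occluded)."""
        gm_valid = GaussianMixture(n_components, random_state=seed).fit(valid_feats)
        gm_occl = GaussianMixture(n_components, random_state=seed).fit(occluded_feats)
        return gm_valid, gm_occl

    def predict_mask(feats, gm_valid, gm_occl):
        """1 where the 'valid iris' model is more likely than the
        'occluded' model, else 0 (one label per feature vector/pixel)."""
        return (gm_valid.score_samples(feats)
                > gm_occl.score_samples(feats)).astype(np.uint8)
    ```

    The likelihood-ratio decision per pixel is what makes this a generative approach, in contrast to the hand-tuned thresholds of rule-based mask estimation.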

  3. Three rapid methods for determination 90Sr in milk samples using liquid scintillation spectrometry

    International Nuclear Information System (INIS)

    Abbasisiara, F.; Attarilar, N.; Afshar, N.

    2006-01-01

    The strontium radionuclide ⁹⁰Sr is one of the main long-lived components of the radioactive fallout from past atmospheric nuclear tests and from nuclear accidents such as Chernobyl. Owing to the chemical and biochemical similarities between strontium and calcium, more than 99% of incorporated strontium is efficiently deposited in bone tissue and teeth, and, characterized by long physical and biological half-lives, it may damage the bone marrow. Since determination of this radionuclide is often time consuming, rapid determination methods, especially for emergency situations, are always desirable. In this work, three rapid methods for the determination of this radionuclide in milk samples are evaluated. All of the methods include two major steps: (1) separation of strontium from fats and proteins, which can be performed by drying (for fresh milk samples), ashing and leaching with nitric acid, or by using exchange or chelating resins with a strong affinity for alkaline-earth cations, such as Dowex 50W-X8; and (2) separation of ⁹⁰Sr or its daughter product ⁹⁰Y. In two of the methods, ⁹⁰Sr is determined by extracting the daughter nuclide ⁹⁰Y with the organic extractant tributyl phosphate (TBP), followed by Cherenkov counting of the extracted ⁹⁰Y. The third method is based on separation of the radionuclide using a crown ether (Sr-Spec resin). The detailed radiochemical procedures and the advantages and disadvantages of each method are explained in the full-text paper. (authors)

  4. Rapid assessment of health needs in mass emergencies: review of current concepts and methods.

    Science.gov (United States)

    Guha-Sapir, D

    1991-01-01

    The increase in the number of natural disasters and their impact on population is of growing concern to countries at risk and agencies involved in health and humanitarian action. The numbers of persons killed or disabled as a result of earthquakes, cyclones, floods and famines have reached record levels in the last decade. Population density, rampant urbanization and climatic changes have brought about risk patterns that are exposing larger and larger sections of populations in developing countries to life-threatening natural disasters. Despite substantial spending on emergency relief, the approaches to relief remain largely ad hoc and amateurish, resulting generally in inappropriate and/or delayed action. In recent years, mass emergencies of the kind experienced in Bangladesh or the Sahelian countries have highlighted the importance of rapid assessment of health needs for better allocation of resources and relief management. As a result, the development of techniques for rapid assessment of health needs has been identified as a priority for effective emergency action. This article sketches the health context of disasters in terms of mortality and morbidity patterns; it describes initial assessment techniques currently used and their methodological biases and constraints; it also discusses assessment needs which vary between different types of disasters and the time frame within which assessments are undertaken. Earthquakes, cyclones, famines, epidemics or refugees all have specific risk profiles and emergency conditions which differ for each situation. Vulnerability to mortality changes according to age and occupation, for earthquakes and famines. These risk factors then have significant implications for the design of rapid assessment protocols and checklists. Experiences from the field in rapid survey techniques and estimation of death rates are discussed, with emphasis on the need for a reliable denominator even for the roughest assessment. Finally, the

  5. Liquid Chromatography with Electrospray Ionization and Tandem Mass Spectrometry Applied in the Quantitative Analysis of Chitin-Derived Glucosamine for a Rapid Estimation of Fungal Biomass in Soil

    Directory of Open Access Journals (Sweden)

    Madelen A. Olofsson

    2016-01-01

    This method employs liquid chromatography-tandem mass spectrometry to rapidly quantify chitin-derived glucosamine for estimating fungal biomass. Analyte retention was achieved using hydrophilic interaction liquid chromatography with a zwitterionic stationary phase (ZIC-HILIC) and isocratic elution with 60% 5 mM ammonium formate buffer (pH 3.0) and 40% ACN. Inclusion of muramic acid and its chromatographic separation from glucosamine enabled calculation of the bacterial contribution to the latter. Galactosamine, an isobaric isomer of glucosamine found in significant amounts in soil samples, was also investigated. The two isomers form the same precursor and product ions and could not be chromatographically separated using this rapid method. Instead, glucosamine and galactosamine were distinguished mathematically, using the linear relationships describing the differences in product ion intensities for the two analytes. The m/z transitions of 180 → 72 and 180 → 84 were applied for the detection of glucosamine and galactosamine, and that of 252 → 126 for muramic acid. Limits of detection were in the nanomolar range for all included analytes. The total analysis time was 6 min, providing a high-throughput method.
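    The mathematical distinction between the two isomers amounts to solving a small linear system: each MRM transition's intensity is modeled as a linear combination of the two analyte concentrations, weighted by per-analyte response factors. A sketch with made-up response factors (the real factors must be calibrated from pure glucosamine and galactosamine standards; the paper's exact calibration model may differ):

    ```python
    import numpy as np

    # Hypothetical response factors (signal per unit concentration) of each
    # isomer on the two shared MRM transitions.
    RESPONSE = np.array([[1.00, 0.40],   # transition 180 -> 72
                         [0.30, 0.90]])  # transition 180 -> 84

    def deconvolve_isomers(intensity_72, intensity_84, factors=RESPONSE):
        """Solve the 2x2 linear system for [glucosamine, galactosamine]."""
        return np.linalg.solve(factors, np.array([intensity_72, intensity_84]))
    ```

    The approach works because the two isomers, while producing the same ions, produce them in different relative intensities, so the 2x2 system is well conditioned.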

  6. Rapid instrumental and separation methods for monitoring radionuclides in food and environmental samples. Progress report

    International Nuclear Information System (INIS)

    Bhat, I.S.; Shukla, V.K.; Singh, A.N.; Nair, C.K.G.; Hingorani, S.B.; Dey, N.N.; Jha, S.K.; Rao, D.D.

    1995-01-01

    When activity levels are low, direct gamma counting of milk and water samples takes a very long time; an initial concentration step increases the sensitivity. ¹³¹I in aqueous samples can be concentrated by absorption on AgCl under acidic conditions. For milk, initial treatment with TCA, separation of the precipitated casein, and stirring the acidified (dil. HNO₃) clear solution with about 500 mg AgCl transfers essentially all the ¹³¹I (more than 95%) to the AgCl, which can be counted in a well-crystal gamma spectrometer. For water samples, acidification and direct stirring with AgCl absorbs all the ¹³¹I onto the AgCl. About half an hour of stirring has been found sufficient to give reproducible results, and the total time required is about 3 h. For ¹³⁷Cs, the aqueous solution is acidified with HNO₃ and stirred with ammonium phosphomolybdate (AMP). After an hour of AMP settling, decantation, filtration and centrifuging yield the AMP ready for counting in a gamma spectrometer with a well-type detector; the analysis can be completed within 2 h. AgCl concentration of ¹³¹I and AMP concentration of ¹³⁷Cs reduce the counting time significantly. These methods have been used for the analysis of sea water and milk samples. Methods are being standardised for solvent-extraction separation of Pu, Am and Cm from preconcentrated environmental samples and direct counting of the organic extract by liquid scintillation counting. For Pu determination, solvent extraction with TTA, back-extraction and re-extraction into 5% D2EHPA followed by direct liquid scintillation counting of Pu alphas is planned; this will significantly reduce the time required for Pu analysis. After bringing the sample into solution, this separation step can be carried out within 1.5 to 2 h. With Instagel scintillator cocktail in the Packard 1550 LSS, Pu-239 counting had 70% efficiency with a 5.3 cpm background.
    Pu-239 estimated in a few sediment samples gave results by both the LSS method and Si

  7. A Microfluidic Channel Method for Rapid Drug-Susceptibility Testing of Pseudomonas aeruginosa.

    Directory of Open Access Journals (Sweden)

    Yoshimi Matsumoto

    The recent global increase in the prevalence of antibiotic-resistant bacteria and the lack of development of new therapeutic agents emphasize the importance of selecting appropriate antimicrobials for the treatment of infections. However, to date, fully accelerated drug susceptibility testing methods have not been developed, despite the availability of rapid identification methods. We propose an innovative rapid method for drug susceptibility testing of Pseudomonas aeruginosa that provides results within 3 h. The drug susceptibility testing microfluidic (DSTM) device was prepared using soft lithography. It consists of five sets of four microfluidic channels sharing one inlet slot; the four channels are gathered in a small area, permitting simultaneous microscopic observation. Antimicrobials were pre-introduced into each channel and dried before use. Bacterial suspensions in cation-adjusted Mueller-Hinton broth were introduced from the inlet slot and incubated for 3 h. Susceptibilities were microscopically evaluated on the basis of differences in cell numbers and shapes between drug-treated and control cells, using dedicated software. The results for 101 clinically isolated strains of P. aeruginosa obtained using the DSTM method strongly correlated with results obtained using the ordinary microbroth dilution method. Ciprofloxacin, meropenem, ceftazidime, and piperacillin caused elongation in susceptible cells, while meropenem also induced spheroplast and bulge formation. Morphological observation could alternatively be used to determine the susceptibility of P. aeruginosa to these drugs, although amikacin had little effect on cell shape. The rapid determination of bacterial drug susceptibility using the DSTM method could also be applicable to other pathogenic species, and it could easily be introduced into clinical laboratories without the need for expensive instrumentation.

  8. Identification of new biomarker of radiation exposure for establishing rapid, simplified biodosimetric method

    International Nuclear Information System (INIS)

    Iizuka, Daisuke; Kawai, Hidehiko; Kamiya, Kenji; Suzuki, Fumio; Izumi, Shunsuke

    2014-01-01

    Until now, counting chromosome aberrations has been the most accurate method for evaluating radiation doses. However, this method is time consuming and requires skill in identifying chromosome aberrations, so it would be difficult to apply to the large numbers of people who might be exposed to ionizing radiation. From this viewpoint, the establishment of rapid, simplified biodosimetric methods for triage is anticipated. With the development of mass spectrometry and the identification of new molecules such as microRNAs (miRNAs), it is conceivable that new molecular biomarkers of radiation exposure can be identified using newly developed mass spectrometry techniques. In this review article, some of our results are presented, including changes in proteins (among them changes in glycosylation), peptides, metabolites and miRNAs after radiation exposure. (author)

  9. Rapid Determination of Isomeric Benzoylpaeoniflorin and Benzoylalbiflorin in Rat Plasma by LC-MS/MS Method

    Directory of Open Access Journals (Sweden)

    Chuanqi Zhou

    2017-01-01

    Benzoylpaeoniflorin (BP) is a potential therapeutic agent against oxidative-stress-related Alzheimer's disease. In this study, a rapid, selective, and sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed to determine BP in rat plasma while distinguishing it from a monoterpene isomer, benzoylalbiflorin (BA). The method showed a linear response from 1 to 1000 ng/mL (r > 0.9950). Interday and intraday precision ranged from 2.03 to 12.48%, and accuracy values ranged from −8.00 to 10.33%. Each run of the method can be completed in 4 minutes. The LC-MS/MS method was validated for specificity, linearity, precision, accuracy, recovery, and stability, and was found acceptable for bioanalytical application. Finally, this fully validated method was successfully applied to a pharmacokinetic study in rats following oral administration.

  10. Multifrequency Excitation Method for Rapid and Accurate Dynamic Test of Micromachined Gyroscope Chips

    Directory of Open Access Journals (Sweden)

    Yan Deng

    2014-10-01

    A novel multifrequency excitation (MFE) method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE) method, the time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with equal amplitudes and an initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The existing LabVIEW-based test system using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
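    A multifrequency excitation signal of the kind described, equal amplitudes with a prescribed initial phase distribution, can be synthesized as a sum of tones. The sketch below uses a quadratic (Newman-style) phase schedule, one common way to realize a linearly increasing phase difference between adjacent tones while keeping the composite signal's crest factor low; the paper's exact phase design may differ:

    ```python
    import numpy as np

    def multifrequency_signal(freqs_hz, fs_hz, duration_s, amplitude=1.0):
        """Sum of equal-amplitude tones; the quadratic phase schedule
        spreads the tones' phases so their peaks do not all align."""
        t = np.arange(int(fs_hz * duration_s)) / fs_hz
        n = len(freqs_hz)
        phases = np.pi * np.arange(n) ** 2 / n  # Newman-style phases
        sig = sum(amplitude * np.cos(2 * np.pi * f * t + p)
                  for f, p in zip(freqs_hz, phases))
        return t, sig
    ```

    Exciting all frequency bins of interest at once is what lets a single short record replace a minutes-long frequency sweep; the response at each frequency is then read off the FFT of the measured output.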

  11. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive comparative study of the geostatistical ore reserve estimation method against conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result of this comparative study favors the use of geostatistics in most cases, because the method has lived up to its theoretical claims. A good exposition of the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and third study objectives: to assess the potential benefits of introducing the geostatistical method into the current state of the art in uranium reserve estimation, and to help generate acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide
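    Of the conventional estimators compared, method (b), inverse distance squared weighting, is simple to state: each sample contributes to the estimate at a target point with weight 1/d². An illustrative sketch (not the report's computer program):

    ```python
    import numpy as np

    def idw_estimate(sample_xy, sample_vals, target_xy, power=2):
        """Inverse-distance-weighted grade estimate at one target point."""
        d = np.linalg.norm(np.asarray(sample_xy, dtype=float)
                           - np.asarray(target_xy, dtype=float), axis=1)
        if np.any(d == 0):                     # target coincides with a sample
            return float(np.asarray(sample_vals)[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.dot(w, sample_vals) / np.sum(w))
    ```

    Method (c) generalizes this by using direction-dependent weights, while kriging (the geostatistical alternative) derives the weights from a fitted variogram instead of a fixed distance power.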

  12. A simple and rapid method of purification of impure plutonium oxide

    International Nuclear Information System (INIS)

    Michael, K.M.; Rakshe, P.R.; Dharmpurikar, G.R.; Thite, B.S.; Lokhande, Manisha; Sinalkar, Nitin; Dakshinamoorthy, A.; Munshi, S.K.; Dey, P.K.

    2007-01-01

    Impure plutonium oxides are conventionally purified by dissolution in HNO₃ in the presence of HF, followed by ion-exchange separation and oxalate precipitation. The method is tedious, and the use of HF enhances corrosion of plant equipment. A simple and rapid method has been developed for purification of the oxide by leaching with various reagents such as demineralized (DM) water, NaOH and oxalic acid. A combination of DM water followed by hot leaching with 0.4 M oxalic acid brought the impurity levels in the oxide down to the level required for fuel fabrication. (author)

  13. Rapid method for protein quantitation by Bradford assay after elimination of the interference of polysorbate 80.

    Science.gov (United States)

    Cheng, Yongfeng; Wei, Haiming; Sun, Rui; Tian, Zhigang; Zheng, Xiaodong

    2016-02-01

    The Bradford assay is one of the most common methods for measuring protein concentrations. However, some pharmaceutical excipients, such as detergents, interfere with the Bradford assay even at low concentrations. Protein precipitation can be used to overcome sample incompatibility with protein quantitation, but the rate of protein recovery after acetone precipitation is only about 70%. In this study, we found that sucrose not only increased the rate of protein recovery after 1 h acetone precipitation, but also did not interfere with the Bradford assay. We therefore developed a method for rapid protein quantitation in protein drugs even when they contain interfering substances. Copyright © 2015 Elsevier Inc. All rights reserved.
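    The quantitation step of a Bradford assay reads an unknown concentration off a linear standard curve of absorbance versus known standards. A hedged sketch of that calculation (the BSA concentrations and A595 readings below are invented, not the authors' data):

```python
# Fit a linear standard curve (absorbance vs. concentration) by least
# squares, then invert it to quantify an unknown sample.
def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx   # slope, intercept

conc = [0.0, 0.25, 0.5, 1.0, 1.5]       # mg/mL BSA standards (illustrative)
a595 = [0.00, 0.12, 0.24, 0.48, 0.72]   # A595 readings (illustrative)
m, b = linfit(conc, a595)

unknown_a595 = 0.36
print(round((unknown_a595 - b) / m, 2))  # back-calculated mg/mL → 0.75
```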

  14. A rapid method for reprocessing paraffin-embedded histological sections for electron microscopy

    International Nuclear Information System (INIS)

    Hernandez Chavarri, F.; Vargas Montero, M.; Rivera, P.; Carranza, A.

    2000-01-01

    A simple and rapid method is described for re-processing light microscopy paraffin sections so that they can be observed under transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The paraffin-embedded tissue is sectioned and deparaffinized in toluene, then exposed to osmium vapor under microwave irradiation using a domestic microwave oven. The tissues are embedded in epoxy resin, polymerized and ultrathin sectioned. The method requires a relatively short time (about 30 minutes for TEM and 15 for SEM) and preserves reasonable ultrastructural quality for diagnostic purposes. (Author)

  15. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+, are included. The method involves algebraically combining seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often-used free-energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.
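    The coupled system of seven nonlinear equilibrium relations and four linear balance equations is too large for a short sketch, but the flavor of a single equilibrium relation can be shown with one dissociation reaction. Assuming an ideal-gas law-of-mass-action form for O2 <-> 2 O (the Kp value below is arbitrary, not from the paper):

```python
# With degree of dissociation a at total pressure p (atm), partial
# pressures give Kp = 4*a^2*p / (1 - a^2).  The full method couples
# seven such nonlinear relations with four linear mass-balance and
# charge-neutrality equations; here we solve just this one by bisection.
def alpha_from_kp(kp, p, tol=1e-12):
    f = lambda a: 4.0 * a * a * p / (1.0 - a * a) - kp
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:          # f is monotone increasing on [0, 1)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

alpha = alpha_from_kp(kp=1.0, p=1.0)   # exact answer is sqrt(1/5)
print(round(alpha, 4))                 # → 0.4472
```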

  16. A method to estimate the ageing of a cooling tower

    International Nuclear Information System (INIS)

    Barnel, Nathalie; Courtois, Alexis; Ilie, Petre-Lazar

    2006-09-01

    This paper deals with cooling tower ageing. Our contribution is a method to determine which part of the strain measured on site we are able to predict by means of simulations. As a result, we map a gap indicator onto the structure. Calculations have been performed in three configurations, and comparing the values obtained in the three cases helps determine which lines of research are worth pursuing. Indeed, the gap indicator reveals that: - THM cannot be considered the main and only ageing mechanism when towers older than 10 years are examined; at least creep has to be taken into account too; - The gap indicator is sensitive to initial hydration conditions. The drying process before bringing the tower into service should be estimated properly, taking into account the different construction steps; - Comparing different thermal conditions reveals that meteorological conditions have a significant influence on the results, so it would be useful to differentiate the sunny and shaded parts of the tower when measurements are made; - A large part of the values obtained can be explained by construction defects; a study of this particular problem seems essential. The four items mentioned should be considered as perspectives for improving the present simulation method. (authors)

  17. IMPROVING THE METHODS OF ESTIMATION OF THE UNIT TRAIN EFFECTIVENESS

    Directory of Open Access Journals (Sweden)

    Dmytro KOZACHENKO

    2016-09-01

    The article presents the results of studies of freight transportation by unit trains. It is aimed at developing methods for evaluating the efficiency of unit train dispatch on the basis of full-scale experiments. Car turnover duration is a random variable when dispatching single cars and car groups, as well as when dispatching them as part of a unit train. The existing methodologies for evaluating the efficiency of unit train make-up are based on calculation procedures whose results can contain significant errors. This work presents a methodology that makes it possible to evaluate the efficiency of unit train shipments by processing the results of experimental trips with the methods of mathematical statistics. This approach provides probabilistic estimates of rolling stock use efficiency for different approaches to the organization of car traffic volumes, and establishes the effect for each of the participants in the transportation process.
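    The statistical evaluation of experimental trips can be sketched as a comparison of mean turnover times with confidence intervals. All turnover times below are invented, and the t-value is an illustrative 95% value for 7 degrees of freedom, not the article's data:

```python
import statistics as st

# Hypothetical car turnover times (hours) from experimental trips:
single_car = [96, 104, 110, 99, 121, 108, 115, 102]
unit_train = [78, 85, 81, 90, 76, 88, 83, 80]

def mean_ci(xs, t=2.365):   # t for 7 d.f., 95% level (illustrative)
    m = st.mean(xs)
    half = t * st.stdev(xs) / len(xs) ** 0.5
    return m, m - half, m + half

m1, lo1, hi1 = mean_ci(single_car)
m2, lo2, hi2 = mean_ci(unit_train)
# Mean saving per car and whether the intervals are disjoint:
print(round(m1 - m2, 2), hi2 < lo1)
```

    With real experimental data the same computation yields the probabilistic estimate of the unit-train effect that the article describes.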

  18. Human factors estimation methods in nuclear power plant

    International Nuclear Information System (INIS)

    Takano, Kenichi; Yoshino, Kenji; Nagasaka, Akihiko; Ishii, Keichiro; Nakasa, Hiroyasu

    1985-01-01

    To improve operational and maintenance work reliability, it is necessary for workers to maintain their performance at a consistently high level, which reduces mistaken judgements and operations. This paper involves the development and evaluation of a ''Multi-Purpose Physiological Information Measurement System'' to estimate human performance and condition quantitatively. The following items are covered: (1) The physiological signals most suitable for measuring worker performance in a nuclear power plant are selected, allowing non-disturbing, ambulatory, continuous, multi-channel measurement. (2) The relatively important physiological signals (electrocardiogram, respirometric functions and EMG (electromyogram) pulse rate) are measured with real-time monitoring functions. (3) The measurement conditions and analysis methods are optimized using a noise-cut function and a DC drift cutting method. (4) As an example, when different weights are loaded on the arm during stretch-bend motion, the EMG signal measured and analysed by this system shows that the EMG pulse rate and maximum amplitude are related to the loaded weight. (author)

  19. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low sea bed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
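    The paper's Bayesian filter reconstructs a full cross-sectional temperature field; as a greatly reduced sketch of the same predict/update idea, a scalar Kalman filter can track a single wall temperature under an assumed exponential-cooling model. All numbers below (sea-bed temperature, cooling factor, noise variances) are illustrative, not from the paper:

```python
import random

random.seed(0)
T_inf, a = 4.0, 0.95     # sea-bed temperature (C), cooling factor per step
Q, R = 0.01, 0.25        # process / measurement noise variances

truth, x, P = 60.0, 55.0, 4.0   # true temp, estimate, estimate variance
errs = []
for _ in range(50):
    truth = T_inf + a * (truth - T_inf)        # true cooling toward T_inf
    z = truth + random.gauss(0.0, R ** 0.5)    # noisy external-wall sensor
    x = T_inf + a * (x - T_inf)                # predict state
    P = a * a * P + Q                          # predict variance
    K = P / (P + R)                            # Kalman gain
    x += K * (z - x)                           # update with measurement
    P *= 1.0 - K
    errs.append(abs(x - truth))
print(max(errs[-10:]) < 1.0)   # estimate tracks truth closely once settled
```

    In the paper the estimated temperatures then feed the optimal controller that keeps the fluid above the deposit-formation temperature.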

  20. N-nitrosodimethylamine in drinking water using a rapid, solid-phase extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, S W.D. [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Koester, C J [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Taguchi, V Y [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Wang, D T [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Palmentier, J P.F.P. [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Hong, K P [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch

    1995-12-01

    A simple, rapid method for the extraction of N-nitrosodimethylamine (NDMA) from drinking and surface waters was developed using Ambersorb 572. Development of an alternative to classical liquid-liquid extraction techniques was necessary to handle the workload presented by implementation of a provincial guideline of 9 ppt for drinking water and a regulatory level of 200 ppt for effluents. The granular adsorbent Ambersorb 572 was used to extract the NDMA from the water in the sample bottle; the NDMA was then extracted from the Ambersorb 572 with dichloromethane in the autosampler vial. Method characteristics include a precision of 4% for replicate analyses, an accuracy of 6% at 10 ppt and a detection limit of 1.0 ppt NDMA in water. Comparative data between the Ambersorb 572 method and liquid-liquid extraction showed excellent agreement (average difference of 12%). With the Ambersorb 572 method, dichloromethane use has been reduced by a factor of 1,000 and productivity has been increased by a factor of 3-4. Monitoring of a drinking water supply showed concentrations of NDMA changing rapidly from day to day. (orig.)

  1. An optimized rapid bisulfite conversion method with high recovery of cell-free DNA.

    Science.gov (United States)

    Yi, Shaohua; Long, Fei; Cheng, Juanbo; Huang, Daixin

    2017-12-19

    Methylation analysis of cell-free DNA is an encouraging tool for tumor diagnosis, monitoring and prognosis. The sensitivity of methylation analysis is critical because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis are based on the difference in bisulfite-mediated deamination between cytosine and 5-methylcytosine. However, the recovery of bisulfite-converted DNA with current methods is very poor for methylation analysis of cell-free DNA. We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA: a rapid deamination step and alkaline desulfonation were combined with purification of the DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were investigated by droplet digital PCR. The optimized reaction achieves complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than current methods. The method allows high recovery from low levels of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.
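    The two figures of merit in the abstract, recovery and conversion efficiency, reduce to simple ratios of droplet digital PCR copy counts. A sketch with invented counts (not the authors' data):

```python
# Recovery and conversion efficiency from ddPCR copy counts.
input_copies = 1000.0      # copies of target loaded into the conversion
recovered_copies = 650.0   # copies detected after desulfonation/clean-up
converted = 648.0          # fully C->T converted molecules among those
unconverted = 2.0          # molecules with residual unconverted cytosines

recovery = recovered_copies / input_copies
efficiency = converted / (converted + unconverted)
print(f"{recovery:.0%} recovery, {efficiency:.1%} conversion")
```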

  2. Reliability and validity of the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30).

    Science.gov (United States)

    Peker, Kadriye; Köse, Taha Emre; Güray, Beliz; Uysal, Ömer; Erdem, Tamer Lütfi

    2017-04-01

    To culturally adapt the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30) for Turkish-speaking adult dental patients and to evaluate its psychometric properties. After translation and cross-cultural adaptation, TREALD-30 was tested in a sample of 127 adult patients who attended a dental school clinic in Istanbul. Data were collected through clinical examinations and self-completed questionnaires, including TREALD-30, the Oral Health Impact Profile (OHIP), the Rapid Estimate of Adult Literacy in Medicine (REALM), two health literacy screening questions, and socio-behavioral characteristics. Psychometric properties were examined using Classical Test Theory (CTT) and Rasch analysis. Internal consistency (Cronbach's alpha = 0.91) and test-retest reliability (intraclass correlation coefficient = 0.99) were satisfactory for TREALD-30, and it exhibited good convergent and predictive validity. Monthly family income, years of education, dental flossing, health literacy, and health literacy skills were found to be the strongest predictors of patients' oral health literacy (OHL). Confirmatory factor analysis (CFA) confirmed a two-factor model. The Rasch model explained 37.9% of the total variance in this dataset. In addition, TREALD-30 had eleven misfitting items, which indicated evidence of multidimensionality. The reliability indices provided by the Rasch analysis (person separation reliability = 0.91 and expected-a-posteriori/plausible reliability = 0.94) indicated that TREALD-30 had acceptable reliability. TREALD-30 showed satisfactory psychometric properties and may be used to identify patients with low OHL. Socio-demographic factors, oral health behaviors and health literacy skills should be taken into account when planning future studies to assess OHL in both clinical and community settings.
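    The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from an item-response matrix. A self-contained sketch on a small hypothetical 0/1 matrix (rows = respondents, columns = items scored correct/incorrect, as in a word-recognition instrument; not the study's data):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/total variance)."""
    k, n = len(rows[0]), len(rows)
    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [
    [1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1],
    [0, 0, 0, 0], [1, 0, 1, 0], [1, 1, 1, 0],
]
print(round(cronbach_alpha(responses), 2))  # → 0.71
```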

  3. A rapid Salmonella detection method involving thermophilic helicase-dependent amplification and a lateral flow assay.

    Science.gov (United States)

    Du, Xin-Jun; Zhou, Tian-Jiao; Li, Ping; Wang, Shuo

    2017-08-01

    Salmonella is a major foodborne pathogen that is widespread in the environment and can cause serious human and animal disease. Since conventional culture methods to detect Salmonella are time-consuming and laborious, rapid and accurate techniques to detect this pathogen are critically important for food safety and diagnosing foodborne illness. In this study, we developed a rapid, simple and portable Salmonella detection strategy that combines thermophilic helicase-dependent amplification (tHDA) with a lateral flow assay to provide a detection result based on visual signals within 90 min. Performance analyses indicated that the method had detection limits for DNA and pure cultured bacteria of 73.4-80.7 fg and 35-40 CFU, respectively. Specificity analyses showed no cross reactions with Escherichia coli, Staphylococcus aureus, Listeria monocytogenes, Enterobacter aerogenes, Shigella and Campylobacter jejuni. The results for detection in real food samples showed that 1.3-1.9 CFU/g or 1.3-1.9 CFU/mL of Salmonella in contaminated chicken products and infant nutritional cereal could be detected after 2 h of enrichment. The same amount of Salmonella in contaminated milk could be detected after 4 h of enrichment. This tHDA-strip can be used for the rapid detection of Salmonella in food samples and is particularly suitable for use in areas with limited equipment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Estimation methods of eco-environmental water requirements: Case study

    Institute of Scientific and Technical Information of China (English)

    YANG Zhifeng; CUI Baoshan; LIU Jingling

    2005-01-01

    Supplying water of adequate quantity and quality to the ecological environment is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the conception and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of the study on the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortage is also calculated. According to the related programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The results indicate that the minimum and suitable eco-environmental water requirements fluctuate with differences in function setting and in the reference standard of water resources, as does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10^10 m^3 to 1.02×10^11 m^3, the suitable water requirement ranges from 6.45×10^10 m^3 to 1.78×10^11 m^3, and the water shortage ranges from 9.1×10^9 m^3 to 2.16×10^10 m^3 under the minimum water requirement and from 3.07×10^10 m^3 to 7.53×10^10 m^3 under the suitable water requirement. According to the different values of the water shortage, water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10^10-1.73×10^11 m^3, 5.99×10^10-2.09×10^11 m^3, and 7.44×10^10-2.52×10^11 m^3, respectively.

  5. Hardware architecture design of a fast global motion estimation method

    Science.gov (United States)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3×3 patch in the template image, a 5×5 area containing the warped 3×3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies across pyramid levels, all patches are processed in terms of 3×3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory and, for a sequence with a resolution of 352×288 and a frame rate of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimum memory bandwidth requirement is appreciated.

  6. Infrared thermography method for fast estimation of phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Palomo Del Barrio, Elena [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Cadoret, Régis [Centre National de la Recherche Scientifique, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Daranlot, Julien [Solvay, Laboratoire du Futur, 178 Av du Dr Schweitzer, 33608 Pessac (France); Achchaq, Fouzia, E-mail: fouzia.achchaq@u-bordeaux.fr [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France)

    2016-02-10

    Highlights: • Infrared thermography is proposed to determine phase diagrams in record time. • Phase boundaries are detected by means of emissivity changes during heating. • Transition lines are identified by using singular value decomposition techniques. • Different binary systems have been used for validation purposes. - Abstract: Phase change materials (PCM) are widely used today in thermal energy storage applications. Pure PCMs are rarely used because their melting points are seldom suitable; mixtures are preferred instead. The search for suitable mixtures, preferably eutectics, is often a tedious and time-consuming task which requires the determination of phase diagrams. In order to accelerate this screening step, a new method for estimating phase diagrams in record time (1-3 h) has been established and validated. A sample composed of small droplets of mixtures with different compositions (as many as necessary to give good coverage of the phase diagram) is deposited on a flat substrate and cooled down to ambient temperature so that all droplets crystallize. The plate is then heated at a constant rate up to a temperature sufficiently high to melt all the small crystals. The heating process is imaged using an infrared camera. An appropriate method based on the singular value decomposition technique has been developed to analyze the recorded images and determine the transition lines of the phase diagram. The method has been applied to determine several simple eutectic phase diagrams, and the results have been validated by comparison with phase diagrams obtained from Differential Scanning Calorimetry measurements and from thermodynamic modelling.
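    The image-analysis step can be sketched as follows: each infrared frame becomes a row of a matrix, and an emissivity step at a phase transition appears as a jump in the leading temporal component of a singular value decomposition. The synthetic data and pure-Python power iteration below are our own illustration, not the authors' implementation:

```python
import math

def top_right_singular(A, iters=200):
    """Leading right singular vector via power iteration on A^T A."""
    P = len(A[0])
    v = [1.0] * P
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(P)) for row in A]            # A v
        v = [sum(A[t][j] * w[t] for t in range(len(A))) for j in range(P)] # A^T w
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v

T, P, step_frame = 80, 9, 50
# Synthetic stack: slow heating ramp everywhere, plus an emissivity step
# at `step_frame` on four of the nine pixels (the melting droplets).
frames = [[0.02 * t + (0.8 if t >= step_frame and j < 4 else 0.0)
           for j in range(P)] for t in range(T)]
means = [sum(frames[t][j] for t in range(T)) / T for j in range(P)]
A = [[frames[t][j] - means[j] for j in range(P)] for t in range(T)]  # centre columns

v = top_right_singular(A)
score = [sum(A[t][j] * v[j] for j in range(P)) for t in range(T)]  # temporal score
jumps = [abs(score[t + 1] - score[t]) for t in range(T - 1)]
print(jumps.index(max(jumps)) + 1)  # detected transition frame
```

    Mapping each detected transition frame back to the heating schedule gives the transition temperature for that droplet composition, which is how the transition lines of the diagram are assembled.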

  7. The performance studies of DKDP crystals grown by a rapid horizontal growth method

    Science.gov (United States)

    Xie, Xiaoyi; Qi, Hongji; Wang, Bin; Wang, Hu; Chen, Duanyang; Shao, Jianda

    2018-04-01

    A deuterated potassium dihydrogen phosphate (DKDP) crystal with about 70% deuterium level was grown by a rapid horizontal growth method using independently designed equipment that includes a continuous filtration system. The cooling program during crystal growth was designed with self-developed software that tracks the size of the growing crystal in real time. The crystal structure, optical performance and laser induced damage threshold (LIDT) of this DKDP crystal were investigated in this paper. The deuterium concentration of the crystal was confirmed by the neutron diffraction technique, which is effective for determining deuteration level over its complete range. The dielectric property was measured to evaluate the perfection of the lattice. Transmittance and LIDT measurements were further carried out to evaluate the optical and functional properties of this DKDP crystal grown by the rapid horizontal growth technique. Overall, the detailed characterization showed that the 70% deuterated KDP crystal grown in this way is of relatively good quality.

  8. Standardization of HPTLC method for the estimation of oxytocin in edibles.

    Science.gov (United States)

    Rani, Roopa; Medhe, Sharad; Raj, Kumar Rohit; Srivastava, Manmohan

    2013-12-01

    Adulteration of foodstuffs is regarded as a major social evil and a persistent problem in society. In this study, a rapid, reliable and cost-effective high-performance thin-layer chromatography (HPTLC) method has been established for the estimation of oxytocin (an adulterant) in vegetables, fruits and milk samples. Oxytocin is one of the most frequently used adulterants, added to vegetables and fruits to increase the growth rate and to enhance milk production from lactating animals. Standardization of the method was based on optimization of the mobile phase, stationary phase and saturation time. The mobile phase used was MeOH:ammonia (pH 6.8), the optimized stationary phase was silica gel, and the saturation time was 5 min. The method was validated by testing its linearity, accuracy, precision, repeatability and limits of detection and quantification. The proposed method is simple, rapid and specific, and was successfully employed for qualitative and quantitative monitoring of oxytocin content in edible products.

  9. A rapid and specific titrimetric method for the precise determination of plutonium using redox indicator

    International Nuclear Information System (INIS)

    Chitnis, R.T.; Dubey, S.C.

    1976-01-01

    A simple and rapid method for the determination of plutonium in plutonium nitrate solution, and its application to Purex process solutions, is discussed. The method involves the oxidation of plutonium to Pu(VI) with argentic oxide, followed by destruction of the excess argentic oxide by means of sulphamic acid. The determination of plutonium is completed by adding ferrous ammonium sulphate solution, which reduces Pu(VI) to Pu(IV), and titrating the excess ferrous with standard potassium dichromate solution using sodium diphenylamine sulphonate as the internal indicator. The effect on the final titration of the various reagents added during the oxidation and reduction of plutonium has been investigated. The method works satisfactorily for the analysis of plutonium in the range of 0.5 to 5 mg, and its precision is found to be within 0.1%. (author)
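    The back-titration arithmetic implied by the procedure follows from the redox stoichiometry: reducing one mol of Pu(VI) to Pu(IV) consumes 2 mol of Fe(II), and one mol of dichromate oxidizes 6 mol of Fe(II). The volumes and molarities below are invented for illustration:

```python
# Back-calculate the plutonium mass in an aliquot from the titration.
fe_vol_ml, fe_molar = 5.0, 0.01     # ferrous ammonium sulphate added
cr_vol_ml, cr_molar = 1.00, 0.002   # K2Cr2O7 used to reach the end point
M_PU = 239.05                        # g/mol, Pu-239

n_fe = fe_vol_ml / 1000 * fe_molar              # total Fe(II), mol
n_fe_excess = 6 * cr_vol_ml / 1000 * cr_molar   # Fe(II) left over, mol
n_pu = (n_fe - n_fe_excess) / 2                 # Pu(VI) reduced to Pu(IV)
print(round(n_pu * M_PU * 1000, 2))             # mg Pu in the aliquot → 4.54
```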

  10. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    Directory of Open Access Journals (Sweden)

    Satoshi Ezoe

    BACKGROUND: Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan: for the first time in Japan, we combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who had registered with a nationwide internet-research agency. The participants were asked how many members of particular groups with known population sizes (firefighters, police officers, and military personnel) they knew as acquaintances, and how many of their acquaintances they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for male acquaintances only. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence in this study were comparable to previous survey results based on the direct-estimation method. Estimating population sizes by combining an internet survey with the network scale-up method appears to be an effective approach in terms of rapidity, simplicity, and low cost as compared with more conventional methods.
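    The network scale-up arithmetic reduces to two ratios: the reports about groups of known size calibrate the personal network size c, and c then scales the reported MSM acquaintances up to a population estimate. A sketch with invented numbers (the group sizes and survey means below are illustrative, not the study's data):

```python
N_MALE = 60_000_000                   # reference (male) population size
known_groups = {                      # true sizes of calibration groups
    "firefighters": 160_000,
    "police": 290_000,
    "military": 250_000,
}
# Survey means: acquaintances reported per respondent, per group:
mean_reported = {"firefighters": 0.5, "police": 0.8, "military": 0.7}
mean_msm_reported = 0.08              # MSM acquaintances per respondent

# Personal network size: scale total reports by total true group size.
c = N_MALE * sum(mean_reported.values()) / sum(known_groups.values())
# Hidden-population estimate: invert the same proportionality for MSM.
msm_size = mean_msm_reported / c * N_MALE
print(round(c, 1), round(100 * msm_size / N_MALE, 3))  # network size, % MSM
```

    The transmission-error adjustment mentioned in the abstract then corrects for acquaintances whose MSM status is not known to the respondent.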

  11. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    Science.gov (United States)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for estimating environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected to provide a good balance between accuracy and fairly rapid turnaround times, to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine and navigational equipment.

  12. New modelling method for fast reactor neutronic behaviours analysis; Nouvelles methodes de modelisation neutronique des reacteurs rapides de quatrieme Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jacquet, P.

    2011-05-23

    Due to the safety criteria governing the development of fourth-generation reactor cores, neutronics simulation tools must be more accurate than ever before. The first part of this report covers every step of fast reactor neutronics simulation as implemented in the current reference code, ECCO. For fast reactors meeting fourth-generation criteria, the ability of the models to describe the self-shielding phenomenon, to simulate neutron leakage in a lattice of fuel assemblies and to produce representative macroscopic cross sections is evaluated. The second part of this thesis is dedicated to the simulation of fast reactor cores with steel reflectors, which require the development of advanced condensation and homogenization methods. Several methods are proposed and compared on a typical case: the ZONA2B core of the MASURCA mock-up reactor. (author)

  13. Development of iodimetric redox method for routine estimation of ascorbic acid from fresh fruit and vegetables

    International Nuclear Information System (INIS)

    Munir, M.; Baloch, A. K.; Khan, W. A.; Ahmad, F.; Jamil, M.

    2013-01-01

    An iodimetric method (Im) was developed for the rapid estimation of ascorbic acid in fresh fruit and vegetables. Its efficiency was compared with that of the standard dye method (Dm) using a variety of model solutions and aqueous extracts of fresh fruit and vegetables of different colours. The Im gave consistently accurate and precise results for both colourless and coloured model solutions and for fruit/vegetable extracts, with standard deviations (SD) in the ranges ±0.013 to ±0.405 and ±0.019 to ±0.428, respectively, and no significant difference between replicates. The Dm also performed satisfactorily for colourless model solutions and extracts (SD ±0.235 to ±0.309) but produced unsatisfactory results (±0.464 to ±3.281) for their coloured counterparts, with severe overestimates accumulating from 52% to 197% as the nutrient concentration fell from 3.0 mg/10 mL to 0.5 mg/10 mL. On the basis of its precision and reliability, the Im technique is recommended for adoption in general laboratories for the routine estimation of ascorbic acid in fruit and vegetables of any shade. (author)
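By way of illustration, the arithmetic behind an iodimetric estimate of this kind can be sketched as follows. The one-to-one I2/ascorbic acid stoichiometry is the textbook redox assumption; the function name and all numeric values are hypothetical, not taken from the paper:

```python
# Illustrative back-calculation for an iodimetric titration of ascorbic acid.
# Assumed stoichiometry: one mole of I2 oxidizes one mole of ascorbic acid.
ASCORBIC_ACID_MM = 176.12  # molar mass of ascorbic acid, g/mol

def ascorbic_acid_mg(titrant_ml, iodine_molarity, aliquot_ml, extract_ml):
    """Ascorbic acid (mg) in the whole extract, from the titrant volume
    needed to reach the endpoint on a single aliquot."""
    moles_i2 = titrant_ml / 1000.0 * iodine_molarity     # mol I2 consumed
    mg_in_aliquot = moles_i2 * ASCORBIC_ACID_MM * 1000.0 # mg in the aliquot
    return mg_in_aliquot * extract_ml / aliquot_ml       # scale to full extract

# Example: 2.0 mL of 0.005 M iodine on a 10 mL aliquot of a 100 mL extract.
total_mg = ascorbic_acid_mg(2.0, 0.005, 10.0, 100.0)
```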

  14. Seismic Methods of Identifying Explosions and Estimating Their Yield

    Science.gov (United States)

    Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.

    2014-12-01

    Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al., 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and to estimate their yield accurately, we need to put our empirical methods on a firmer physical footing. The goals of current research are to improve our physical understanding of the mechanisms by which explosions generate S- and surface waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites, including the DPRK and the former Nevada Test Site (now the Nevada National Security Site, NNSS). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source-material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand source characteristics. Our goal is to improve our explosion models.

  15. Rapid-viability PCR method for detection of live, virulent Bacillus anthracis in environmental samples.

    Science.gov (United States)

    Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R

    2011-09-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
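The cycle-threshold comparison at the heart of the RV-PCR approach can be sketched as follows: live organisms replicate during incubation, so the post-incubation cycle threshold (Ct) drops relative to the pre-incubation Ct. The ΔCt cutoff of 6 cycles and the function name are illustrative assumptions, not the validated protocol values:

```python
# Hypothetical sketch of an RV-PCR viability call based on the Ct shift.
def rv_pcr_call(ct_before, ct_after, delta_ct_cutoff=6.0):
    """Return True if the Ct shift indicates live organisms.

    ct_before / ct_after: real-time PCR cycle thresholds measured on
    aliquots taken before and after incubation; None means no
    amplification was detected within the run.
    delta_ct_cutoff: illustrative threshold, not the validated value.
    """
    if ct_after is None:      # no amplification after incubation: no growth
        return False
    if ct_before is None:     # below detection before, positive after: growth
        return True
    return (ct_before - ct_after) >= delta_ct_cutoff

# A sample whose Ct falls from 32 to 24 cycles after incubation would be
# scored as containing live organisms.
print(rv_pcr_call(32.0, 24.0))  # prints: True
```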

  16. A rapid colorimetric screening method for vanillic acid and vanillin-producing bacterial strains.

    Science.gov (United States)

    Zamzuri, N A; Abd-Aziz, S; Rahim, R A; Phang, L Y; Alitheen, N B; Maeda, T

    2014-04-01

    To isolate a bacterial strain capable of biotransforming ferulic acid, a major component of lignin, into vanillin and vanillic acid by a rapid colorimetric screening method. For the production of vanillin, a natural aroma compound, we attempted to isolate a potential strain using a simple screening method based on the pH change resulting from the degradation of ferulic acid. The strain Pseudomonas sp. AZ10 UPM gave a clearly positive result, producing an intense yellow colour change on the assay plate by day 1. The biotransformation of ferulic acid into vanillic acid by the AZ10 strain gave a yield (Yp/s) of 1.08 mg mg(-1) and a productivity (Pr) of 53.1 mg L(-1) h(-1). New investigations of lignin degradation revealed that the strain was not able to produce vanillin and vanillic acid directly from lignin; however, lignin partially digested by a mixed enzymatic treatment allowed the strain to produce 30.7 mg L(-1) of vanillic acid and 1.94 mg L(-1) of biovanillin. (i) The rapid colorimetric screening method allowed the isolation of a biovanillin producer using ferulic acid as the sole carbon source. (ii) Enzymatic treatment partially digested lignin, which could then be utilized by the strain to produce biovanillin and vanillic acid. To the best of our knowledge, this is the first study reporting the use of a rapid colorimetric screening method for bacterial strains producing vanillin and vanillic acid from ferulic acid. © 2013 The Society for Applied Microbiology.
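The yield and productivity figures quoted above follow from the standard bioprocess definitions, which can be written down as a minimal sketch (assumed textbook formulas, not the authors' calculation):

```python
# Standard bioprocess metrics (hedged: textbook definitions).
def product_yield(product_mg, substrate_consumed_mg):
    """Yp/s: mg of product formed per mg of substrate consumed."""
    return product_mg / substrate_consumed_mg

def volumetric_productivity(product_mg_per_l, hours):
    """Pr: mg of product per litre of culture per hour of cultivation."""
    return product_mg_per_l / hours

# Illustrative numbers: 108 mg product from 100 mg ferulic acid gives
# Yp/s = 1.08 mg mg(-1); 531 mg/L accumulated over 10 h gives
# Pr = 53.1 mg L(-1) h(-1).
y = product_yield(108.0, 100.0)
p = volumetric_productivity(531.0, 10.0)
```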

  17. [Accuracy of three methods for the rapid diagnosis of oral candidiasis].

    Science.gov (United States)

    Lyu, X; Zhao, C; Yan, Z M; Hua, H

    2016-10-09

    Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red-stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratios, consistency, predictive values and area under the curve (AUC)) of each of the three methods was assessed by comparing the results with the gold standard (a combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: The Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). The Congo red-stained smear of saliva (or concentrated oral rinse) displayed the highest overall diagnostic efficacy (79.0% sensitivity, 80.6% specificity, Youden's index 0.60, positive likelihood ratio 4.08, negative likelihood ratio 0.26, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red-stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.
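All of the efficacy measures listed above derive from a 2x2 confusion table against the gold standard. A minimal sketch of the standard formulas, with counts chosen only to be consistent with the Congo red figures reported (they are not the study's raw data):

```python
# Standard diagnostic-accuracy metrics from a 2x2 confusion table.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)               # sensitivity
    spec = tn / (tn + fp)               # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "youden": sens + spec - 1,      # Youden's index
        "lr_pos": sens / (1 - spec),    # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,    # negative likelihood ratio
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # "consistency"
    }

# Illustrative counts for 124 patients, consistent with the reported
# Congo red results (49+13 candidiasis-positive, 50+12 negative):
m = diagnostic_metrics(tp=49, fp=12, fn=13, tn=50)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # prints: 0.79 0.806
```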

  18. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods

    Science.gov (United States)

    Stergioulas, Nikolaos; Friedman, John L.

    1995-05-01

    We present the first direct comparison of codes based on two different numerical methods for constructing rapidly rotating relativistic stars. A code based on the Komatsu-Eriguchi-Hachisu (KEH) method (Komatsu et al. 1989), written by Stergioulas, is compared to the Butterworth-Ipser code (BI), as modified by Friedman, Ipser, & Parker. We compare models obtained by each method and evaluate the accuracy and efficiency of the two codes. The agreement is surprisingly good, and error bars in the published numbers for maximum frequencies based on BI are dominated not by the code inaccuracy but by the number of models used to approximate a continuous sequence of stars. The BI code is faster per iteration, and it converges more rapidly at low density, while KEH converges more rapidly at high density; KEH also converges in regions where BI does not, allowing one to compute some models unstable against collapse that are inaccessible to the BI code. A relatively large discrepancy recently reported (Eriguchi et al. 1994) for models based on the Friedman-Pandharipande equation of state is found to arise from the use of two different versions of the equation of state. For two representative equations of state, the two-dimensional space of equilibrium configurations is displayed as a surface in a three-dimensional space of angular momentum, mass, and central density. We find, for a given equation of state, that equilibrium models with maximum values of mass, baryon mass, and angular momentum are (generically) either all unstable to collapse or all stable. In the first case, the stable model with maximum angular velocity is also the model with maximum mass, baryon mass, and angular momentum. In the second case, the stable models with maximum values of these quantities are all distinct. Our implementation of the KEH method will be available as a public domain program for interested users.

  19. Note: Non-invasive optical method for rapid determination of alignment degree of oriented nanofibrous layers

    Energy Technology Data Exchange (ETDEWEB)

    Pokorny, M.; Rebicek, J. [R&D Department, Contipro Biotech s.r.o., 561 02 Dolni Dobrouc (Czech Republic)]; Klemes, J. [R&D Department, Contipro Pharma a.s., 561 02 Dolni Dobrouc (Czech Republic)]; Kotzianova, A. [R&D Department, Contipro Pharma a.s., 561 02 Dolni Dobrouc (Czech Republic); Department of Chemistry, Faculty of Science, Masaryk University, Kamenice 5, CZ-62500 Brno (Czech Republic)]; Velebny, V. [R&D Department, Contipro Biotech s.r.o., 561 02 Dolni Dobrouc (Czech Republic); R&D Department, Contipro Pharma a.s., 561 02 Dolni Dobrouc (Czech Republic)]

    2015-10-15

    This paper presents a rapid, non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam with a wavelength of 632.8 nm is directed at, and passes through, a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible as a diffraction image formed on a projection screen, or can be obtained from the measured intensities of the laser beam transmitted through the sample, which depend on the angle between the main polarization direction of the beam and the alignment axis of the nanofibers. Both optical methods were verified on polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter 470 nm) with random, single-axis-aligned and crossed structures. The results match those of commonly used methods based on the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them, but also makes it possible to analyse much larger areas, up to several square millimetres, at the same time.

  20. Note: Non-invasive optical method for rapid determination of alignment degree of oriented nanofibrous layers

    International Nuclear Information System (INIS)

    Pokorny, M.; Rebicek, J.; Klemes, J.; Kotzianova, A.; Velebny, V.

    2015-01-01

    This paper presents a rapid, non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam with a wavelength of 632.8 nm is directed at, and passes through, a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible as a diffraction image formed on a projection screen, or can be obtained from the measured intensities of the laser beam transmitted through the sample, which depend on the angle between the main polarization direction of the beam and the alignment axis of the nanofibers. Both optical methods were verified on polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter 470 nm) with random, single-axis-aligned and crossed structures. The results match those of commonly used methods based on the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them, but also makes it possible to analyse much larger areas, up to several square millimetres, at the same time.