WorldWideScience

Sample records for simple estimation method

  1. A simple and rapid method to estimate radiocesium in man

    International Nuclear Information System (INIS)

    Kindl, P.; Steger, F.

    1990-09-01

    A simple and rapid method for monitoring internal contamination with radiocesium in man was developed. The method is based on measurements of the γ-rays emitted from the muscular parts between the thighs, using a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity and the results are explained and discussed. (Authors)

  2. Fill rate estimation in periodic review policies with lost sales using simple methods

    Energy Technology Data Exchange (ETDEWEB)

    Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.

    2016-07-01

    Purpose: The exact estimation of the fill rate in the lost-sales case is complex and time consuming, so simple and suitable methods are needed that inventory managers can actually use. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of the different on-hand stock levels, from which the fill rate is computed afterwards. Findings: The proposed method outperforms the other methods examined and remains relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.
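
    A minimal sketch of the two-step idea above (not the paper's specific approximation): once the probabilities of the on-hand stock level at the start of a period have been estimated, the expected fill rate follows as the expected satisfied demand divided by the expected demand. The discrete stock and demand distributions below are invented for illustration.

      def fill_rate(stock_probs, demand_probs):
          # Expected satisfied demand divided by expected demand, given
          # estimated probabilities of the starting on-hand stock level
          # and a discrete per-period demand distribution.
          expected_demand = sum(d * p for d, p in demand_probs.items())
          expected_served = sum(
              ps * pd * min(s, d)
              for s, ps in stock_probs.items()
              for d, pd in demand_probs.items()
          )
          return expected_served / expected_demand

      stock_probs = {0: 0.1, 5: 0.3, 10: 0.6}   # P(on-hand stock level)
      demand_probs = {0: 0.2, 4: 0.5, 8: 0.3}   # P(period demand)
      print(round(fill_rate(stock_probs, demand_probs), 3))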

  3. A simple method for estimating the entropy of neural activity

    International Nuclear Information System (INIS)

    Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo

    2013-01-01

    The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy—which is a measure of the computational power of the neural population—cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual ‘naive’ entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data, in order to check its accuracy; we also compare its performance to that of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods. (paper)
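
    The lower bound mentioned above is just the plug-in ('naive') entropy of the observed pattern frequencies, which is simple enough to sketch; the maximum-entropy upper bound requires the full machinery of the paper and is not reproduced here. This is illustrative code, not the authors' implementation.

      from collections import Counter
      import math

      def naive_entropy_bits(patterns):
          # Plug-in ("naive") entropy, in bits, of a list of hashable
          # activity patterns, e.g. tuples of 0/1 spike indicators.
          counts = Counter(patterns)
          n = sum(counts.values())
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      # Example: 3-neuron binary words observed over repeated time bins.
      observed = [(0, 0, 1), (0, 0, 1), (1, 0, 1), (0, 0, 0), (0, 0, 1)]
      print(round(naive_entropy_bits(observed), 3))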

  4. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden)]; Tengstroem, B [District General Hospital, Skoevde (Sweden)]

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 µCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to both predictive power and clinical applicability.
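
    The normalized 'single-point' idea lends itself to a very short sketch: the late sample is divided by the early (t = 5 min) sample, and GFR is predicted from the logarithm of that ratio with a linear regression. The coefficients below are placeholders, not the published regression constants.

      import math

      def gfr_single_point(c_early, c_late, a=-50.0, b=20.0):
          # Normalize the late sample by the early (t ~ 5 min) sample and
          # predict GFR (ml/min/1.73 m^2) from the logarithm of that ratio
          # with a linear fit. a and b are arbitrary placeholders, NOT the
          # published regression constants.
          normalized = c_late / c_early
          return a * math.log(normalized) + b

      # Hypothetical plasma counts: late sample at ~25% of the early one.
      print(round(gfr_single_point(c_early=1000.0, c_late=250.0), 1))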

  5. A simple method for estimating thermal response of building ...

    African Journals Online (AJOL)

    This paper develops a simple method for estimating the thermal response of building materials in the tropical climatic zone using the basic heat equation. The efficacy of the developed model has been tested with data from three West African cities, namely Kano (lat. 12.1 ºN) Nigeria, Ibadan (lat. 7.4 ºN) Nigeria and Cotonou ...

  6. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from limited unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple (one-variable) linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. The proposed method should be helpful for practicing hydrogeologists and hydrologists.

  7. A New and Simple Method for Crosstalk Estimation in Homogeneous Trench-Assisted Multi-Core Fibers

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2014-01-01

    A new and simple method for inter-core crosstalk estimation in homogeneous trench-assisted multi-core fibers is presented. The crosstalk calculated by this method agrees well with experimental measurement data for two kinds of fabricated 12-core fibers.

  8. Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration

    Science.gov (United States)

    Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy

    2017-01-01

    Direct measurement of actual evapotranspiration (ET) provides quantification of this large component of the hydrologic budget, but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if “calibrated” to direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual proportion of ET to rainfall indicates a strong correlation (r² = 0.86) to annual rainfall; the ratio increases linearly with decreasing rainfall. Monthly ET rates correlated closely (r² = 0.84) to the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show positive potential in the humid Florida climate given the ready availability of historical rainfall and PET.
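
    The simplest form of such a 'calibration' is an ordinary least-squares regression of tower-measured ET against the surrogate (for example, the monthly USGS PET product), which can then be applied where only the surrogate is available. The sketch below is generic; the study reports correlations rather than this exact regression, and the numbers are invented.

      def fit_linear(x, y):
          # Ordinary least-squares fit y ~= a*x + b.
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
              sum((xi - mx) ** 2 for xi in x)
          return a, my - a * mx

      # Toy monthly values (mm): PET from the gridded product, ET from the tower.
      pet = [90.0, 110.0, 140.0, 160.0]
      et = [60.0, 75.0, 95.0, 110.0]
      a, b = fit_linear(pet, et)
      print(round(a, 3), round(b, 2))  # then apply ET_est = a*PET + b elsewhere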

  9. A simple method to estimate radiation interception by nursery stock conifers: a case study of eastern white cedar

    International Nuclear Information System (INIS)

    Pronk, A.A.; Goudriaan, J.; Stilma, E.; Challa, H.

    2003-01-01

    A simple method was developed to estimate the fraction of radiation intercepted by small eastern white cedar plants (Thuja occidentalis ‘Brabant’). The method, which describes the crop canopy as rows of cuboids, was compared with methods used for estimating radiation interception by crops with homogeneous canopies and by crops grown in rows. The extinction coefficient k was determined for different plant arrangements, and an average k-value of 0.48 ± 0.03 (R² = 0.89) was used in the calculations. Effects of changing plant characteristics and inter- and intra-row plant distances were explored. At low plant densities and an LAI of 1, the fraction of radiation intercepted estimated with the method for rows of cuboids was up to 20% lower, and that estimated for row crops up to 8% lower, than the estimate obtained with the method for homogeneous canopies. The fraction of radiation intercepted by small plants of Thuja occidentalis ‘Brabant’ was best estimated by the simple method described in this paper.
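
    For the homogeneous-canopy benchmark against which the cuboid-row method is compared, interception follows a Beer's-law-type relation, f = 1 - exp(-k·LAI). The sketch below uses the average k = 0.48 reported above; it illustrates only that benchmark, not the cuboid-row calculation itself.

      import math

      def fraction_intercepted_homogeneous(lai, k=0.48):
          # Beer's-law-type interception for a homogeneous canopy,
          # f = 1 - exp(-k * LAI), with the average extinction coefficient
          # k = 0.48 reported in the abstract.
          return 1.0 - math.exp(-k * lai)

      print(round(fraction_intercepted_homogeneous(lai=1.0), 3))  # ~0.38 at LAI = 1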

  10. A simple method to estimate radiation interception by nursery stock conifers: a case study of eastern white cedar

    NARCIS (Netherlands)

    Pronk, A.A.; Goudriaan, J.; Stilma, E.S.C.; Challa, H.

    2003-01-01

    A simple method was developed to estimate the fraction radiation intercepted by small eastern white cedar plants (Thuja occidentalis 'Brabant'). The method, which describes the crop canopy as rows of cuboids, was compared with methods used for estimating radiation interception by crops with

  11. A simple method for estimating the size of nuclei on fractal surfaces

    Science.gov (United States)

    Zeng, Qiang

    2017-10-01

    Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. It yields three different regimes governing the equations for estimating the nucleation site density. Nuclei that are large enough eliminate the effect of the fractal structure, while nuclei that are small enough make the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density associated with both the fractal parameters and the size of the nuclei in a coupled manner. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, through which a number of promising applications in related fields can be envisioned.

  12. Methods for estimating wake flow and effluent dispersion near simple block-like buildings

    International Nuclear Information System (INIS)

    Hosker, R.P. Jr.

    1981-05-01

    This report is intended as an interim guide for those who routinely face air quality problems associated with near-building exhaust stack placement and height, and the resulting concentration patterns. Available data and methods for estimating wake flow and effluent dispersion near isolated block-like structures are consolidated. The near-building and wake flows are described, and quantitative estimates for frontal eddy size, height and extent of roof and wake cavities, and far wake behavior are provided. Concentration calculation methods for upwind, near-building, and downwind pollutant sources are given. For an upwind source, it is possible to estimate the required stack height, and to place upper limits on the likely near-building concentration. The influences of near-building source location and characteristics relative to the building geometry and orientation are considered. Methods to estimate effective stack height, upper limits for concentration due to flush roof vents, and the effect of changes in rooftop stack height are summarized. Current wake and wake cavity models are presented. Numerous graphs of important expressions have been prepared to facilitate computations and quick estimates of flow patterns and concentration levels for specific simple buildings. Detailed recommendations for additional work are given

  13. Estimation method of finger tapping dynamics using simple magnetic detection system.

    Science.gov (United States)

    Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo

    2010-05-01

    We have developed a simple method for estimating a finger-tapping dynamics model, in order to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured the finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. From the frequency response of the ratio of acceleration to velocity, we estimated the mechanical impedance parameters: the resistance (friction coefficient) and the compliance (stiffness). We found two dynamics models, one for the maximum open position and one for the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, whereas the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.

  14. Estimation method of finger tapping dynamics using simple magnetic detection system

    Science.gov (United States)

    Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo

    2010-05-01

    We have developed a simple method for estimating a finger-tapping dynamics model, in order to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured the finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. From the frequency response of the ratio of acceleration to velocity, we estimated the mechanical impedance parameters: the resistance (friction coefficient) and the compliance (stiffness). We found two dynamics models, one for the maximum open position and one for the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, whereas the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.

  15. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

    Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...

  16. Development of a simple estimation tool for LMFBR construction cost

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Kinoshita, Izumi

    1999-01-01

    A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that reduces the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimates equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that reduction of the sodium handling area is particularly effective in reducing construction costs. (author)

  17. A simple procedure to estimate reactivity with good noise filtering characteristics

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2014-01-01

    Highlights: • A new and simple on-line reactivity estimation method is proposed. • The estimator has robust noise filtering characteristics. • The noise filtering is equivalent to that of conventional reactivity meters. • The new estimator eliminates the burden of selecting optimum filter constants. • The new estimator's performance is assessed without and with measurement noise. - Abstract: A new and simple on-line reactivity estimation method is proposed. The estimator has robust noise filtering characteristics without the use of complex filters. The noise filtering capability is equivalent to or better than that of a conventional estimator based on Inverse Point Kinetics (IPK). The new estimator can also eliminate the burden of selecting optimum filter time constants, such as would be required for the IPK-based estimator, or noise covariance matrices, which are needed if the extended Kalman filter (EKF) technique is used. In this paper, the new estimation method is introduced and its performance assessed without and with measurement noise.

  18. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)]

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  19. A simple, reproducible and sensitive spectrophotometric method to estimate microalgal lipids

    Energy Technology Data Exchange (ETDEWEB)

    Chen Yimin [ChELSI Institute, Department of Chemical and Biological Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom); Vaidyanathan, Seetharaman, E-mail: s.vaidyanathan@sheffield.ac.uk [ChELSI Institute, Department of Chemical and Biological Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom)

    2012-04-29

    Highlights: ► FAs released from lipids form complex with Cu-TEA in chloroform. ► The FA-Cu-TEA complex gives strong absorbance at 260 nm. ► The absorbance is sensitive and independent of C-atom number in the FAs (10-18). ► Microalgal lipid extract and pure FA (such as C16) can both be used as standards. - Abstract: Quantification of total lipids is a necessity for any study of lipid production by microalgae, especially given the current interest in microalgal carbon capture and biofuels. In this study, we employed a simple yet sensitive method to indirectly measure the lipids in microalgae by measuring the fatty acids (FA) after saponification. The fatty acids were reacted with triethanolamine-copper salts (TEA-Cu) and the ternary TEA-Cu-FA complex was detected at 260 nm using a UV-visible spectrometer without any colour developer. The results showed that this method could be used to analyse low levels of lipids in the range of nano-moles from as little as 1 mL of microalgal culture. Furthermore, the structure of the TEA-Cu-FA complex and related reaction process are proposed to better understand this assay. There is no special instrument required and the method is very reproducible. To the best of our knowledge, this is the first report of the use of UV absorbance of copper salts with FA as a method to estimate lipids in algal cultures. It will pave the way for a more convenient assay of lipids in microalgae and can readily be expanded for estimating lipids in other biological systems.

  20. A simple, reproducible and sensitive spectrophotometric method to estimate microalgal lipids

    International Nuclear Information System (INIS)

    Chen Yimin; Vaidyanathan, Seetharaman

    2012-01-01

    Highlights: ► FAs released from lipids form complex with Cu–TEA in chloroform. ► The FA–Cu–TEA complex gives strong absorbance at 260 nm. ► The absorbance is sensitive and independent of C-atom number in the FAs (10–18). ► Microalgal lipid extract and pure FA (such as C16) can both be used as standards. - Abstract: Quantification of total lipids is a necessity for any study of lipid production by microalgae, especially given the current interest in microalgal carbon capture and biofuels. In this study, we employed a simple yet sensitive method to indirectly measure the lipids in microalgae by measuring the fatty acids (FA) after saponification. The fatty acids were reacted with triethanolamine–copper salts (TEA–Cu) and the ternary TEA–Cu–FA complex was detected at 260 nm using a UV–visible spectrometer without any colour developer. The results showed that this method could be used to analyse low levels of lipids in the range of nano-moles from as little as 1 mL of microalgal culture. Furthermore, the structure of the TEA–Cu–FA complex and related reaction process are proposed to better understand this assay. There is no special instrument required and the method is very reproducible. To the best of our knowledge, this is the first report of the use of UV absorbance of copper salts with FA as a method to estimate lipids in algal cultures. It will pave the way for a more convenient assay of lipids in microalgae and can readily be expanded for estimating lipids in other biological systems.

  1. Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group

    2004-07-01

    Due to the occurrence of large earthquakes such as the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Problems exist in using nonlinear analysis to evaluate the safety of dams, including the large influence of the assumed material properties on the results and the large differences in results between damage estimation models and analysis programs. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of crack progression in concrete gravity dams with different shapes, obtained using a nonlinear dynamic analysis method. The study concludes that if a simple linear dynamic analysis is appropriately conducted to estimate the tensile stress at potential locations of crack initiation, the damage due to cracks can be roughly predicted. 4 refs., 1 tab., 13 figs.

  2. SIMPLE AND STRONGLY CONSISTENT ESTIMATOR OF STABLE DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Cira E. Guevara Otiniano

    2016-06-01

    Stable distributions are extensively used to analyse the returns of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal, so it can be used to initialize computationally intensive procedures such as maximum likelihood. We tested the efficacy of the estimator by the Monte Carlo method with random samples of size n. We also include applications to three data sets.

  3. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
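
    One reading of the principle described above: for a solid of revolution, the volume is two-thirds of the circumscribing cylinder (Archimedes), so V ≈ (2/3)·A·w, where A is the 2D silhouette area and w the width transverse to the axis, multiplied by an 'unellipticity' correction for non-elliptical profiles. The sketch below follows that reading and treats the correction as a free parameter; it is not a verbatim reproduction of the published formula.

      import math

      def biovolume_estimate(area, width, unellipticity=1.0):
          # V ~= (2/3) * A * w, with A the 2D silhouette area and w its
          # width transverse to the axis of rotation; 'unellipticity'
          # stands in for the paper's correction coefficient (treated
          # here as a free parameter, 1.0 for an elliptical outline).
          return (2.0 / 3.0) * area * width * unellipticity

      # Sanity check against a sphere of radius 5 (silhouette area pi*25, width 10):
      print(round(biovolume_estimate(math.pi * 25.0, 10.0), 1))   # 523.6
      print(round(4.0 / 3.0 * math.pi * 5.0 ** 3, 1))             # 523.6 (true volume)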

  4. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  5. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in practical applications because of its simplicity and popularity. To support this practice, this study examines the mean squared errors of r and of several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, which supports recommending r as an effect size index for the strength of the linear association between two variables. Related issues of estimating the squared simple correlation coefficient are also considered.
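
    To illustrate the kind of comparison the article makes, the sketch below computes the plain sample r alongside a common approximately unbiased adjustment of the Olkin-Pratt type, r·(1 + (1 - r²)/(2(n - 3))); these are standard textbook estimators, not necessarily the exact formulas examined in the article.

      import math

      def sample_r(x, y):
          # Pearson product-moment correlation coefficient.
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
          sxx = sum((a - mx) ** 2 for a in x)
          syy = sum((b - my) ** 2 for b in y)
          return sxy / math.sqrt(sxx * syy)

      def approx_unbiased_r(r, n):
          # First-order Olkin-Pratt-type correction to Pearson's r.
          return r * (1.0 + (1.0 - r ** 2) / (2.0 * (n - 3)))

      x = [1.0, 2.0, 3.0, 4.0, 5.0]
      y = [1.2, 1.9, 3.2, 3.8, 5.1]
      r = sample_r(x, y)
      print(round(r, 4), round(approx_unbiased_r(r, len(x)), 4))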

  6. Application of a simple parameter estimation method to predict effluent transport in the Savannah River

    International Nuclear Information System (INIS)

    Hensel, S.J.; Hayes, D.W.

    1993-01-01

    A simple parameter estimation method has been developed to determine the dispersion and velocity parameters associated with stream/river transport. The unsteady one-dimensional Burgers' equation was chosen as the model equation, and the method has been applied to recent Savannah River dye tracer studies. The computed Savannah River transport coefficients compare favorably with documented values, and the time/concentration curves calculated from these coefficients compare well with the actual tracer data. The coefficients were then used predictively with Savannah River tritium concentration data obtained during the December 1991 accidental tritium discharge from the Savannah River Site. The peak tritium concentration at the intersection of Highway 301 and the Savannah River was underpredicted by only 5% using the coefficients computed from the dye data.

  7. A simple method for estimating the convection- dispersion equation ...

    African Journals Online (AJOL)

    Jane

    2011-08-31

    Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ... diffusion-type model for longitudinal mixing of fluids in flow.

  8. A simple method for estimation of phosphorus in urine

    International Nuclear Information System (INIS)

    Chaudhary, Seema; Gondane, Sonali; Sawant, Pramilla D.; Rao, D.D.

    2016-01-01

    Following internal contamination with ³²P, it is preferentially eliminated from the body in urine. It is estimated by in-situ precipitation of ammonium molybdo-phosphate (AMP) in urine followed by gross beta counting. The amount of AMP formed in situ depends on the amount of stable phosphorus (P) present in the urine, and hence it was essential to generate information regarding the urinary excretion of stable P. If the amount of P excreted is significant, the amount of AMP formed would correspondingly increase, leading to absorption of some of the β particles. The present study was taken up to estimate the daily urinary excretion of P using the phospho-molybdate spectrophotometry method. A few urine samples received from radiation workers were analyzed, and based on the observed range of stable P in urine, the volume of sample required for ³²P estimation was finalized.

  9. Fall in hematocrit per 1000 parasites cleared from peripheral blood: a simple method for estimating drug-related fall in hematocrit after treatment of malaria infections.

    Science.gov (United States)

    Gbotosho, Grace Olusola; Okuboyejo, Titilope; Happi, Christian Tientcha; Sowunmi, Akintunde

    2014-01-01

    A simple method to estimate the antimalarial drug-related fall in hematocrit (FIH) after treatment of Plasmodium falciparum infections in the field is described. The method involves numeric estimation of the relative difference in hematocrit between baseline (pretreatment) and the first 1 or 2 days after treatment began as the numerator and the corresponding relative difference in parasitemia as the denominator, expressed per 1000 parasites cleared from peripheral blood. Using the method showed that FIH/1000 parasites cleared from peripheral blood (cpb) at 24 or 48 hours was similar in artemether-lumefantrine and artesunate-amodiaquine-treated children (0.09; 95% confidence interval, 0.052-0.138 vs 0.10; 95% confidence interval, 0.069-0.139%; P = 0.75). FIH/1000 parasites cpb differed significantly in patients with higher parasitemias, and FIH/1000 parasites cpb was similar in anemic and nonanemic children. Estimation of FIH/1000 parasites cpb is simple, allows estimation of the relatively conserved hematocrit during treatment, and can be used in both observational studies and clinical trials involving antimalarial drugs.

  10. A simple finite element method for linear hyperbolic problems

    International Nuclear Information System (INIS)

    Mu, Lin; Ye, Xiu

    2017-01-01

    Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrarily shaped polygons/polyhedra. An error estimate is established. Extensive numerical examples demonstrate the robustness and flexibility of the method.

  11. Noninvasive and simple method for the estimation of myocardial metabolic rate of glucose by PET and 18F-FDG

    International Nuclear Information System (INIS)

    Takahashi, Norio; Tamaki, Nagara; Kawamoto, Masahide

    1994-01-01

    To estimate the regional myocardial metabolic rate of glucose (rMRGlu) with positron emission tomography (PET) and 2-[¹⁸F]fluoro-2-deoxy-D-glucose (FDG), a noninvasive, simple method has been investigated using dynamic PET imaging in 14 patients with ischemic heart disease. This imaging approach uses a blood time-activity curve (TAC) derived from a region of interest (ROI) drawn over dynamic PET images of the left ventricle (LV), left atrium (LA) and aorta. Patlak graphic analysis was used to estimate k₁k₃/(k₂+k₃) from serial plasma and myocardial radioactivities. The FDG counts ratio between whole blood and plasma was relatively constant (0.91±0.02), both over time and among different patients. Although TACs derived from dynamic PET images gradually increased in the later phase due to spillover from the myocardium into the cavity, there was good agreement between the estimated K complex values obtained from arterial blood sampling and from dynamic PET imaging (LV r=0.95, LA r=0.96, aorta r=0.98). These results demonstrate the practical usefulness of a simplified and noninvasive method for the estimation of rMRGlu in humans by PET. (author)
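
    Patlak graphic analysis itself is standard: plotting Cm(t)/Cp(t) against the normalized integral of the plasma curve and taking the slope of the late, linear portion gives the influx constant Ki = k₁k₃/(k₂+k₃). The sketch below is a generic implementation with invented late-phase values, not the paper's processing chain.

      def patlak_slope(t, cp, cm):
          # Patlak plot: y = Cm(t)/Cp(t) versus x = (integral of Cp up to t)/Cp(t).
          # The slope of the (assumed linear) plot estimates Ki = k1*k3/(k2+k3).
          # Selecting the linear late frames is left to the user.
          integral, ints = 0.0, []
          for i in range(len(t)):
              if i > 0:
                  integral += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
              ints.append(integral)
          x = [ints[i] / cp[i] for i in range(len(t))]
          y = [cm[i] / cp[i] for i in range(len(t))]
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          num = sum((a - mx) * (b - my) for a, b in zip(x, y))
          den = sum((a - mx) ** 2 for a in x)
          return num / den

      # Toy late-phase frames (time in min, plasma and tissue activities):
      t = [10.0, 20.0, 30.0, 40.0]
      cp = [100.0, 80.0, 70.0, 65.0]
      cm = [150.0, 200.0, 240.0, 270.0]
      print(round(patlak_slope(t, cp, cm), 4))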

  12. Simple and Reliable Method to Estimate the Fingertip Static Coefficient of Friction in Precision Grip.

    Science.gov (United States)

    Barrea, Allan; Bulens, David Cordova; Lefevre, Philippe; Thonnard, Jean-Louis

    2016-01-01

    The static coefficient of friction (µstatic) plays an important role in dexterous object manipulation. Minimal normal force (i.e., grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), µstatic actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between µstatic and NF easily. Therefore, we propose a new method allowing the simple and reliable measurement of the fingertip µstatic at different NF levels, as well as an algorithm for determining µstatic from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. µstatic is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between µstatic and NF. Our method allows the continuous estimation of µstatic as a function of NF during dexterous manipulation, based on the relationship between µstatic and NF measured before manipulation.
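
    Two pieces of the described analysis are simple enough to sketch: taking µstatic as the ratio of tangential to normal force at slip onset, and fitting the negative power law µstatic = k·NF^(-a) by least squares in log-log space. Slip detection, the harder part, is assumed to have been done elsewhere, and the toy data are invented.

      import math

      def mu_static_at_slip(tangential_forces, normal_forces, slip_index):
          # Static coefficient of friction as the tangential/normal force
          # ratio at the detected slip onset (slip detection not shown).
          return tangential_forces[slip_index] / normal_forces[slip_index]

      def fit_negative_power_law(nf, mu):
          # Least-squares fit of mu = k * NF**(-a) in log-log space.
          lx = [math.log(v) for v in nf]
          ly = [math.log(v) for v in mu]
          n = len(nf)
          mx, my = sum(lx) / n, sum(ly) / n
          slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
                  sum((a - mx) ** 2 for a in lx)
          k = math.exp(my - slope * mx)
          return k, -slope  # mu ~= k * NF**(-a)

      # Toy data: mu rising at low normal force (N).
      nf = [0.5, 1.0, 2.0, 4.0]
      mu = [1.6, 1.2, 0.9, 0.7]
      print(fit_negative_power_law(nf, mu))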

  13. Observation of lens aberrations for high resolution electron microscopy II: Simple expressions for optimal estimates

    Energy Technology Data Exchange (ETDEWEB)

    Saxton, W. Owen, E-mail: wos1@cam.ac.uk

    2015-04-15

    This paper lists simple closed-form expressions estimating aberration coefficients (defocus, astigmatism, three-fold astigmatism, coma / misalignment, spherical aberration) on the basis of image shift or diffractogram shape measurements as a function of injected beam tilt. Simple estimators are given for a large number of injected tilt configurations, optimal in the sense of least-squares fitting of all the measurements, and so better than most reported previously. Standard errors are given for most, allowing different approaches to be compared. Special attention is given to the measurement of the spherical aberration, for which several simple procedures are given, and the effect of foreknowledge of this on other aberration estimates is noted. Details and optimal expressions are also given for a new and simple method of analysis, requiring measurements of the diffractogram mirror axis direction only, which are simpler to make than the focus and astigmatism measurements otherwise required. - Highlights: • Optimal estimators for CTEM lens aberrations are more accurate and/or use fewer observations. • Estimators have been found for defocus, astigmatism, three-fold astigmatism, coma and spherical aberration. • Estimators have been found relying on diffractogram shape, image shift and diffractogram orientation only, for a variety of beam tilts. • The standard error for each estimator has been found.

  14. Rapid, Simple, and Sensitive Spectrofluorimetric Method for the Estimation of Ganciclovir in Bulk and Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    Garima Balwani

    2013-01-01

    A new, simple, rapid, sensitive, accurate, and affordable spectrofluorimetric method was developed and validated for the estimation of ganciclovir in bulk as well as in marketed formulations. The method was based on measuring the native fluorescence of ganciclovir in 0.2 M hydrochloric acid buffer of pH 1.2 at 374 nm after excitation at 257 nm. The calibration graph was found to be rectilinear in the concentration range of 0.25–2.00 μg mL⁻¹. The limit of quantification and limit of detection were found to be 0.029 μg mL⁻¹ and 0.010 μg mL⁻¹, respectively. The method was fully validated for various parameters according to ICH guidelines. The results demonstrated that the procedure is accurate, precise, and reproducible (relative standard deviation <2%) and can be successfully applied for the determination of ganciclovir in its commercial capsules with an average percentage recovery of 101.31 ± 0.90.

  15. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties under simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than several known estimators, including the Gupta and Shabbir (2008) estimator.
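
    For context, the classical ratio estimator that ratio-type estimators of this kind build on uses a known population mean of the auxiliary variable x: ȳ_R = ȳ·(X̄/x̄). The sketch below shows only this baseline, not the class of estimators proposed in the paper.

      def ratio_estimator_mean(y_sample, x_sample, x_pop_mean):
          # Classical ratio estimator of the population mean of y under
          # simple random sampling: ybar_R = ybar * (Xbar / xbar), where
          # Xbar is the known population mean of the auxiliary variable.
          n = len(y_sample)
          ybar = sum(y_sample) / n
          xbar = sum(x_sample) / n
          return ybar * (x_pop_mean / xbar)

      y = [12.0, 15.0, 11.0, 14.0]      # study variable (sample)
      x = [100.0, 130.0, 90.0, 120.0]   # auxiliary variable (sample)
      print(round(ratio_estimator_mean(y, x, x_pop_mean=115.0), 3))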

  16. A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs

    Directory of Open Access Journals (Sweden)

    Eren Halil ÖZBERK

    2017-03-01

    In contrast with previous studies, this study employed various test designs (simple and complex) that allow the evaluation of overall ability score estimation across multiple real test conditions. Four factors were manipulated: the test design, the number of items per dimension, the correlation between dimensions and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated with the M3PL compensatory multidimensional IRT model with the specified correlations. MCAT composite ability score accuracy was evaluated using the absolute bias (ABSBIAS), the correlation and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension and the correlation between dimensions had significant effects on the item selection methods for overall score estimation. For the simple structure test design, the V1 item selection method had the lowest absolute bias for both long and short tests when estimating overall scores. As the model becomes more complex, the KL item selection method performed better than the other two item selection methods.

  17. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  18. A simple, direct method for x-ray scatter estimation and correction in digital radiography and cone-beam CT

    International Nuclear Information System (INIS)

    Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.

    2006-01-01

    X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling

  19. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  20. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)]

    2014-07-15

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE^organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE^organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE^organ were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1–0.4) for both the chest and abdominopelvic regions, respectively. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE^organ, was compared to
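
    The described use of the correlation factors reduces to a multiplication, organ dose ≈ CF_SSDE^organ × SSDE, once the factor for the organ (and weight subcategory) is known. The factors in the sketch below are placeholders, not values from the study.

      # Hypothetical correlation factors (CF_SSDE^organ); the study derives
      # these per organ and per weight subcategory. The numbers below are
      # placeholders for illustration only.
      CF_SSDE_ORGAN = {"liver": 0.9, "lungs": 1.1}

      def estimate_organ_dose(ssde_mgy, organ):
          # Organ dose estimate: patient-specific SSDE multiplied by the
          # organ's correlation factor.
          return CF_SSDE_ORGAN[organ] * ssde_mgy

      print(estimate_organ_dose(ssde_mgy=5.0, organ="liver"))  # mGy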

  1. A simple method for estimating the length density of convoluted tubular systems.

    Science.gov (United States)

    Ferraz de Carvalho, Cláudio A; de Campos Boldrini, Silvia; Nishimaru, Flávio; Liberti, Edson A

    2008-10-01

    We present a new method for estimating the length density (Lv) of convoluted tubular structures exhibiting an isotropic distribution. Although the traditional equation Lv = 2Q/A is used, the parameter Q is obtained by considering the collective perimeters of tubular sections. This measurement is converted to a standard model of the structure, assuming that all cross-sections are approximately circular and have an average perimeter similar to that of actual circular cross-sections observed in the same material. The accuracy of this method was tested in eight experiments using hollow macaroni bent into helical shapes. After measuring the length of the macaroni segments, they were boiled and randomly packed into cylindrical volumes along with an aqueous suspension of gelatin and India ink. The solidified blocks were cut into slices 1.0 cm thick and 33.2 cm² in area (A). The total perimeter of the macaroni cross-sections so revealed was stereologically estimated using a test system of straight parallel lines. Given Lv and the reference volume, the total length of macaroni in each section could be estimated. Additional corrections were made for the changes induced by boiling, and the off-axis position of the thread used to measure length. No statistical difference was observed between the corrected estimated values and the actual lengths. This technique is useful for estimating the length of capillaries, renal tubules, and seminiferous tubules.
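
    A sketch of the estimator as described: the effective number of transects Q is taken as the collective perimeter of all tubular profiles divided by the average perimeter of a truly circular cross-section, and Lv = 2Q/A then gives length per unit volume. The numbers below are invented for illustration.

      def length_density(total_perimeter, mean_circular_perimeter, section_area):
          # Q = summed profile perimeter / average circular-profile perimeter,
          # then Lv = 2Q/A (length of tubule per unit volume of tissue).
          q = total_perimeter / mean_circular_perimeter
          return 2.0 * q / section_area

      # Example: 120 mm of summed profile perimeter, circular profiles of
      # about 1.5 mm perimeter on average, in a 3320 mm^2 section.
      print(round(length_density(120.0, 1.5, 3320.0), 4))  # mm of tubule per mm^3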

  2. Tax planning: analysis between national simple and the estimated gain

    OpenAIRE

    Bassoli, Marlene Kempfer; Somma, Giovana Mattioli

    2010-01-01

    This study was initiated by the need to define the true legal status of tax planning in Brazil. A comparative method between the estimated gain regime and the national simple (Simples Nacional) regime is used to clarify tax avoidance induced by the law and, above all, to demonstrate the possibility of a reduced tax burden and tax savings for companies, under the framework of a State governed by the rule of law that honors the principles of strict legality and closed typicality. At first, the article focuses on Tax Planning, tal...

  3. Accuracy of two simple methods for estimation of thyroidal 131I kinetics for dosimetry-based treatment of Graves' disease

    International Nuclear Information System (INIS)

    Traino, A. C.; Xhafa, B.

    2009-01-01

    One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal ¹³¹I kinetics in patients. Even though the fixed activity administration method does not optimize the therapy, giving often too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of ¹³¹I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic ¹³¹I administration and the second on one measurement 4 h after such an administration and a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete ¹³¹I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences in the thyroid absorbed doses between those derived by each of the two simpler methods and the 'reference' value (derived by more complete uptake measurements following the therapeutic ¹³¹I administration), with 20% median and 40% 90-percentile differences for the first method (i.e., based on two thyroid uptake measurements at 4 and 24 h after ¹³¹I administration) and 25% median and 45% 90-percentile differences for the second method (i.e., based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.

  4. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J

    1967-01-01

    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  5. A simple method to estimate the episode and programme sensitivity of breast cancer screening programmes.

    Science.gov (United States)

    Zorzi, Manuel; Guzzinati, Stefano; Puliti, Donella; Paci, Eugenio

    2010-01-01

The estimation of breast cancer screening sensitivity is a major aim in the quality assessment of screening programmes. The proportional incidence method for the estimation of the sensitivity of breast cancer screening programmes is rarely used because it requires estimates of the underlying incidence rates. We present a method to estimate episode and programme sensitivity of screening programmes, based solely on cancers detected within screening cycles (excluding breast cancer cases at the prevalent screening round) and on the number of incident cases in the total target population (steady state). The assumptions, strengths and limitations of the method are discussed. An example of the calculation of episode and programme sensitivities is given, on the basis of data from the IMPACT study, a large observational study of breast cancer screening programmes in Italy. The programme sensitivity from the fifth year of screening onwards ranged between 41% and 48% of the total number of cases in the target population. At steady state, episode sensitivity was 0.70, with a trend across age groups: lowest values in women aged 50-54 years (0.52) and highest in those aged 65-69 (0.77). The method is a very serviceable tool for estimating sensitivity in service screening programmes, and the results are comparable with those of other methods of estimation.

  6. Entropy estimates for simple random fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    1995-01-01

We consider the problem of determining the maximum entropy of a discrete random field on a lattice subject to certain local constraints on symbol configurations. The results are expected to be of interest in the analysis of digitized images and two-dimensional codes. We present some examples of binary and ternary fields with simple constraints. Exact results on the entropies are known only in a few cases, but we present close bounds and estimates that are computationally efficient.

  7. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.

  8. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are true matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
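
    The abstract above gives the sampling logic only in outline; the sketch below is one hedged reading of it, with invented strata, counts, and review results, showing how sampled clerical reviews can be scaled up to dataset-wide precision and recall estimates:

```python
# Hedged sketch (not the authors' code): sample record-pairs from each similarity-score
# stratum, clerically review the samples, then scale the reviewed counts back up.
# Strata boundaries, counts and review outcomes below are made up for illustration.

def estimate_precision_recall(strata, cutoff):
    """strata: list of dicts with keys
         'score'     - representative similarity score of the stratum
         'n_pairs'   - total record-pairs in the stratum
         'n_sampled' - pairs clerically reviewed
         'n_true'    - reviewed pairs judged to be true matches
       cutoff: pairs with score >= cutoff are accepted as links."""
    tp = fp = fn = 0.0
    for s in strata:
        match_rate = s['n_true'] / s['n_sampled']   # estimated from the review sample
        est_true = match_rate * s['n_pairs']        # scaled to the whole stratum
        if s['score'] >= cutoff:                    # accepted links
            tp += est_true
            fp += s['n_pairs'] - est_true
        else:                                       # rejected pairs (possible misses)
            fn += est_true
    return tp / (tp + fp), tp / (tp + fn)           # (precision, recall)

strata = [
    {'score': 0.95, 'n_pairs': 50000, 'n_sampled': 200, 'n_true': 198},
    {'score': 0.80, 'n_pairs': 20000, 'n_sampled': 200, 'n_true': 150},
    {'score': 0.60, 'n_pairs': 30000, 'n_sampled': 200, 'n_true': 20},  # below cutoff
]
print(estimate_precision_recall(strata, cutoff=0.75))
```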

9. Note on a simple test method for estimating JIc

    International Nuclear Information System (INIS)

    Whipple, T.A.; McHenry, H.I.

    1980-01-01

Fracture toughness testing is generally a time-consuming and expensive procedure; therefore, there has been a significant amount of effort directed toward developing an inexpensive and rapid method of estimating the fracture toughness of materials. In this paper, a simple method for estimating JIc through the use of small, notched, bend bars is evaluated. The test only involves the measurement of the energy necessary to fracture the sample. Initial tests on Fe-18Cr-3Ni-13Mn and 304L stainless steel at 76 and 4 K have yielded results consistent with other fracture toughness tests, for materials in the low- to medium-toughness range

  10. A simple algorithm for estimation of source-to-detector distance in Compton imaging

    International Nuclear Information System (INIS)

    Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.

    2008-01-01

Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at the Los Alamos National Laboratory and simulated data of the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum likelihood because it is based on simple back projections of Compton scatter data

  11. Comparison of eight degree-days estimation methods in four agroecological regions in Colombia

    Directory of Open Access Journals (Sweden)

    Daniel Rodríguez Caicedo

    2012-01-01

Eight methods were used to estimate degree-days in four Colombian localities. Four methods have been previously proposed in the literature: the Simple Sine, Double Sine, Simple Triangle, and Double Triangle methods. The other four methods are proposed in this research: Simple Logistic, Double Logistic, Simple Normal, and Double Normal. The estimation of degree-days from hourly temperature values was used as the reference standard method, and the four localities from which the temperature values were taken were the municipalities of Cajicá (Cundinamarca), Santa Elena (Antioquia), Carepa (Urabá Antioqueño), and Ciudad Bolivar (Zona Cafetera Antioqueña). Degree-days obtained by all methods under study were compared through linear regression to those obtained by the reference standard method. There were differences in the correlation of each method to the reference when compared within each region and among regions. The Simple Logistic and Double Logistic methods showed the best performance, with acceptable R² values and considerably lower bias than the other methods. The poorest fit was found in Cajicá, where the average R² was 0.571. For the regions of Santa Elena and Carepa, the average R² values were 0.756 and 0.733, respectively. The best fit was found in Ciudad Bolivar, with an average R² of 0.826.
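
    For readers unfamiliar with the reference methods named above, the following sketch implements one textbook formulation of the Simple Triangle method for daily degree-days; it is illustrative only, is not taken from the study's code, and the base temperature is an arbitrary example value:

```python
# Hedged sketch of a common Simple Triangle degree-day formulation.

def simple_triangle_degree_days(t_min, t_max, t_base):
    """Daily degree-days assuming the temperature trace is a triangle
    rising from t_min to t_max and back."""
    if t_base >= t_max:                        # never above the threshold
        return 0.0
    if t_base <= t_min:                        # always above the threshold
        return (t_max + t_min) / 2.0 - t_base
    # threshold crossed during the day: area of the triangle tip above t_base
    return (t_max - t_base) ** 2 / (2.0 * (t_max - t_min))

print(simple_triangle_degree_days(t_min=12.0, t_max=26.0, t_base=10.0))  # 9.0
print(simple_triangle_degree_days(t_min=8.0, t_max=26.0, t_base=10.0))   # ~7.11
```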

  12. A simple method for estimating the ventilatory response to CO2 in infants.

    Science.gov (United States)

    Brady, J P; Durand, M; McCann, E

    1983-04-01

We report a new noninvasive method for the estimation of the CO2 response in uncooperative infants. By comparing the changes in inspired minus alveolar PO2 breathing air and 4% CO2, an indirect estimate of the increase in alveolar ventilation can be obtained. This report is a comparison of 3 possible indirect methods: changes in inspired minus alveolar PCO2 (ΔAIPCO2), changes in inspired minus alveolar PO2 (ΔIAPO2), and changes in transcutaneous PO2 (TcPO2), with the standard steady-state method for the ventilatory response to CO2. Twenty-one comparisons were made, 16 on 7 preterm infants, and 5 on an older child (at 2.5 and at 4 yr of age) with central hypoventilation syndrome. We found that changes in ΔIAPO2 gave the best correlation with changes in minute ventilation (r = 0.83, p < 0.001), that changes in ΔAIPCO2 were less valid (r = 0.66, p < 0.001), and that there was no correlation with changes in TcPO2. We conclude that changes in inspired PO2 minus alveolar PO2 can be used in uncooperative infants to estimate the ventilatory response to CO2.

  13. Substoichiometric method in the simple radiometric analysis

    International Nuclear Information System (INIS)

    Ikeda, N.; Noguchi, K.

    1979-01-01

    The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)

  14. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...

  15. A simple method to approximate liver size on cross-sectional images using living liver models

    International Nuclear Information System (INIS)

    Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.

    2009-01-01

Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: On 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging), a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters by two readers and implementing the equation: Vol_estimated = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10% and in 92% of cases <15% compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
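
    The quoted equation translates directly into code; the sketch below simply evaluates Vol_estimated = cc × vd × cor × 0.31 for a set of invented example diameters:

```python
# Direct transcription of the diameter-based approximation quoted above
# (diameters in cm give volume in cm^3). Example diameters are invented.

def approximate_liver_volume(cc_cm, vd_cm, cor_cm):
    return cc_cm * vd_cm * cor_cm * 0.31

print(approximate_liver_volume(cc_cm=18.0, vd_cm=15.0, cor_cm=20.0))  # ~1674 cm^3
```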

  16. Drift estimation from a simple field theory

    International Nuclear Information System (INIS)

    Mendes, F. M.; Figueiredo, A.

    2008-01-01

    Given the outcome of a Wiener process, what can be said about the drift and diffusion coefficients? If the process is stationary, these coefficients are related to the mean and variance of the position displacements distribution. However, if either drift or diffusion are time-dependent, very little can be said unless some assumption about that dependency is made. In Bayesian statistics, this should be translated into some specific prior probability. We use Bayes rule to estimate these coefficients from a single trajectory. This defines a simple, and analytically tractable, field theory.

  17. A New, Simple Method for Estimating Pleural Effusion Size on CT Scans

    Science.gov (United States)

    Moy, Matthew P.; Berko, Netanel S.; Godelman, Alla; Jain, Vineet R.; Haramati, Linda B.

    2013-01-01

    Background: There is no standardized system to grade pleural effusion size on CT scans. A validated, systematic grading system would improve communication of findings and may help determine the need for imaging guidance for thoracentesis. Methods: CT scans of 34 patients demonstrating a wide range of pleural effusion sizes were measured with a volume segmentation tool and reviewed for qualitative and simple quantitative features related to size. A classification rule was developed using the features that best predicted size and distinguished among small, moderate, and large effusions. Inter-reader agreement for effusion size was assessed on the CT scans for three groups of physicians (radiology residents, pulmonologists, and cardiothoracic radiologists) before and after implementation of the classification rule. Results: The CT imaging features found to best classify effusions as small, moderate, or large were anteroposterior (AP) quartile and maximum AP depth measured at the midclavicular line. According to the decision rule, first AP-quartile effusions are small, second AP-quartile effusions are moderate, and third or fourth AP-quartile effusions are large. In borderline cases, AP depth is measured with 3-cm and 10-cm thresholds for the upper limit of small and moderate, respectively. Use of the rule improved interobserver agreement from κ = 0.56 to 0.79 for all physicians, 0.59 to 0.73 for radiology residents, 0.54 to 0.76 for pulmonologists, and 0.74 to 0.85 for cardiothoracic radiologists. Conclusions: A simple, two-step decision rule for sizing pleural effusions on CT scans improves interobserver agreement from moderate to substantial levels. PMID:23632863
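
    A hedged sketch of the two-step rule summarized above follows; because the abstract does not fully specify how borderline cases are identified, the depth thresholds are applied here only when a case is explicitly flagged as borderline, so treat this as an illustration rather than the validated rule:

```python
# Hedged sketch of the two-step pleural effusion sizing rule (illustrative only).

def effusion_size(ap_quartile, ap_depth_cm=None, borderline=False):
    """ap_quartile: 1-4, anteroposterior quartile reached by the effusion.
    ap_depth_cm: maximum AP depth at the midclavicular line (used if borderline)."""
    if borderline and ap_depth_cm is not None:
        if ap_depth_cm <= 3.0:      # upper limit of small (assumed reading of the rule)
            return "small"
        if ap_depth_cm <= 10.0:     # upper limit of moderate
            return "moderate"
        return "large"
    if ap_quartile == 1:
        return "small"
    if ap_quartile == 2:
        return "moderate"
    return "large"

print(effusion_size(ap_quartile=2))                                    # moderate
print(effusion_size(ap_quartile=2, ap_depth_cm=2.5, borderline=True))  # small
```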

18. A simple route to maximum-likelihood estimates of two-locus recombination fractions under inequality restrictions

    Indian Academy of Sciences (India)

Journal of Genetics, Volume 94, Issue 3, September 2015, pp. 479-481. A simple route to maximum-likelihood estimates of two-locus recombination fractions under inequality restrictions. Iain L. Macdonald, Philasande Nkalashe. Research Note.

  19. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    Science.gov (United States)

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all the 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81, and 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.

  20. Estimation method for volumes of hot spots created by heavy ions

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Kanazawa, Satoshi; Kajii, Yuji

    1999-01-01

A simple and convenient method for estimating the volumes of hot spots is described, based on the ratio of the hot-spot volume to that of a cone having the same length and bottom radius as the hot spot. This calculation method is useful for the study of the damage-producing mechanism in hot spots, and is also convenient for estimating the electron-hole densities in plasma columns created by heavy ions in semiconductor detectors. (author)

  1. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS is validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit scheme is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042 and t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (= 1.24 mm/day) than P-M (= 1.82 mm/day) and with an R² value of 0.82.

  2. Simple method for direct crown base height estimation of individual conifer trees using airborne LiDAR data.

    Science.gov (United States)

    Luo, Laiping; Zhai, Qiuping; Su, Yanjun; Ma, Qin; Kelly, Maggi; Guo, Qinghua

    2018-05-14

Crown base height (CBH) is an essential tree biophysical parameter for many applications in forest management, forest fuel treatment, wildfire modeling, ecosystem modeling and global climate change studies. Accurate and automatic estimation of CBH for individual trees is still a challenging task. Airborne light detection and ranging (LiDAR) provides reliable and promising data for estimating CBH. Various methods have been developed to calculate CBH indirectly using regression-based means from airborne LiDAR data and field measurements. However, little attention has been paid to directly calculate CBH at the individual tree scale in mixed-species forests without field measurements. In this study, we propose a new method for directly estimating individual-tree CBH from airborne LiDAR data. Our method involves two main strategies: 1) removing noise and understory vegetation for each tree; and 2) estimating CBH by generating percentile ranking profile for each tree and using a spline curve to identify its inflection points. These two strategies lend our method the advantages of no requirement of field measurements and being efficient and effective in mixed-species forests. The proposed method was applied to a mixed conifer forest in the Sierra Nevada, California and was validated by field measurements. The results showed that our method can directly estimate CBH at individual tree level with a root-mean-squared error of 1.62 m, a coefficient of determination of 0.88 and a relative bias of 3.36%. Furthermore, we systematically analyzed the accuracies among different height groups and tree species by comparing with field measurements. Our results implied that taller trees had relatively higher uncertainties than shorter trees. Our findings also show that the accuracy for CBH estimation was the highest for black oak trees, with an RMSE of 0.52 m. The conifer species results were also good with uniformly high R² ranging from 0.82 to 0.93. In general, our method has

  3. Development of a new method for estimating visceral fat area with multi-frequency bioelectrical impedance

    International Nuclear Information System (INIS)

    Nagai, Masato; Komiya, Hideaki; Mori, Yutaka; Ohta, Teruo; Kasahara, Yasuhiro; Ikeda, Yoshio

    2008-01-01

Excessive visceral fat area (VFA) is a major risk factor in such conditions as cardiovascular disease. In assessing VFA, computed tomography (CT) is adopted as the gold standard; however, this method is cost intensive and involves radiation exposure. In contrast, the bioelectrical impedance (BI) method for estimating body composition is simple and noninvasive, and thus its potential application in VFA assessment is being studied. To overcome the difference in obtained impedance due to measurement conditions, we developed a more precise estimation method by selecting the optimum body posture, electrode arrangement, and frequency. The subjects were 73 healthy volunteers, 37 men and 36 women, who underwent CT scans to assess VFA and who were measured for anthropometry parameters, subcutaneous fat layer thickness, abdominal tissue area, and impedance. Impedance was measured by the tetrapolar impedance method using multi-frequency BI. Multiple regression analysis was conducted to estimate VFA. The results revealed a strong correlation between VFA observed by CT and VFA estimated by impedance (r=0.920). The regression equation accurately classified VFA≥100 cm2 in 13 out of 14 men and 1 of 1 woman. Moreover, it classified VFA≥100 cm2 or <100 cm2 in 3 out of 4 men and 1 of 1 woman misclassified by waist circumference (W), which was adopted as a simple index to evaluate VFA. Therefore, using this simple and convenient method for estimating VFA, we obtained an accurate assessment of VFA using the BI method. (author)

  4. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimation of the marginal likelihood. To resolve this problem, a thermodynamic method is used to conduct multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method of using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
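
    The thermodynamic idea sketched above (multiple MCMC runs at heating coefficients between zero and one, followed by integration over the coefficient) can be illustrated on a toy problem; the normal-normal model, sampler settings, and synthetic data below are my own invention and are unrelated to the study's groundwater models:

```python
# Hedged sketch of thermodynamic (power-posterior) estimation of the log marginal likelihood.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=20)            # synthetic data, known sigma = 1

def log_like(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)   # N(0, 1) prior

def mcmc_mean_loglike(beta, n_iter=10000, step=0.5):
    """Metropolis sampler targeting prior * likelihood**beta;
    returns the average log-likelihood at this heating coefficient."""
    theta = 0.0
    lp = log_prior(theta) + beta * log_like(theta)
    trace = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_prior(prop) + beta * log_like(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        trace.append(log_like(theta))
    return np.mean(trace[n_iter // 2:])       # discard the first half as burn-in

betas = np.linspace(0.0, 1.0, 11)
means = [mcmc_mean_loglike(b) for b in betas]

# trapezoidal integration of E_beta[log L] over beta gives log(marginal likelihood)
log_ml = sum((betas[i + 1] - betas[i]) * (means[i] + means[i + 1]) / 2.0
             for i in range(len(betas) - 1))
print(f"thermodynamic log marginal likelihood ~ {log_ml:.2f}")
```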

  5. A simple high-performance liquid chromatographic method for the estimation of boswellic acids from the market formulations containing Boswellia serrata extract.

    Science.gov (United States)

    Shah, Shailesh A; Rathod, Ishwarsinh S; Suhagia, Bhanubhai N; Pandya, Saurabh S; Parmar, Vijay K

    2008-09-01

    A simple, rapid, and reproducible reverse-phase high-performance liquid chromatographic method is developed for the estimation of boswellic acids, the active constituents in Boswellia serrata oleo-gum resin. The chromatographic separation is performed using a mobile phase consisting of acetonitrile-water (90:10, % v/v) adjusted to pH 4 with glacial acetic acid on a Kromasil 100 C18 analytical column with flow rate of 2.0 mL/min and detection at 260 nm. The elution times are 4.30 and 7.11 min for 11-keto beta-boswellic acid (11-KBA) and 3-acetyl 11-keto beta-boswellic acid (A-11-KBA), respectively. The calibration curve is linear in the 11.66-58.30 microg/mL and 6.50-32.50 microg/mL range for 11-KBA and A-11-KBA, respectively. The limits of detection are 2.33 microg/mL and 1.30 microg/mL for 11-KBA and A-11-KBA, respectively. The mean recoveries are 98.24% to 104.17% and 94.12% to 105.92% for 11-KBA and A-11-KBA, respectively. The inter- and intra-day variation coefficients are less than 5%. The present method is successfully applied for the estimation of boswellic acids from the market formulations containing Boswellia serrata extract.

  6. A simple method for estimating potential source term bypass fractions from confinement structures

    International Nuclear Information System (INIS)

    Kalinich, D.A.; Paddleford, D.F.

    1997-01-01

Confinement structures house many of the operating processes at the Savannah River Site (SRS). Under normal operating conditions, a confinement structure in conjunction with its associated ventilation systems prevents the release of radiological material to the environment. However, under potential accident conditions, the performance of the ventilation systems and the integrity of the structure may be challenged. In order to calculate the radiological consequences associated with a potential accident (e.g. fires, explosions, spills, etc.), it is necessary to determine the fraction of the source term initially generated by the accident that escapes from the confinement structure to the environment. While it would be desirable to estimate the potential bypass fraction using sophisticated control-volume/flow-path computer codes (e.g. CONTAIN, MELCOR, etc.) in order to take as much credit as possible for the mitigative effects of the confinement structure, there are many instances where using such codes is not tractable due to limits on the level of effort allotted to perform the analysis. Moreover, the current review environment, with its emphasis on deterministic/bounding versus probabilistic/best-estimate analysis, discourages using analytical techniques that require the consideration of a large number of parameters. Discussed herein is a simplified control-volume/flow-path approach for calculating the source term bypass fraction that is amenable to solution in a spreadsheet or with a commercial mathematical solver (e.g. MathCad or Mathematica). It considers the effects of wind and fire pressure gradients on the structure, ventilation system operation, and Halon discharges. Simple models are used to characterize the engineered and non-engineered flow paths. By making judicious choices for the limited set of problem parameters, the results from this approach can be defended as bounding and conservative

  7. Slip estimation methods for proprioceptive terrain classification using tracked mobile robots

    CSIR Research Space (South Africa)

    Masha, Ditebogo F

    2017-11-01

Recent work has shown that proprioceptive measurements such as terrain slip can be used for terrain classification. This paper investigates the suitability of four simple slip estimation methods for differentiating between indoor and outdoor terrain...

  8. Comparison of different methods in estimating potential evapotranspiration at Muda Irrigation Scheme of Malaysia

    Directory of Open Access Journals (Sweden)

    Sobri Harun

    2012-04-01

Evapotranspiration (ET) is a complex process in the hydrological cycle that influences the quantity of runoff and thus the irrigation water requirements. Numerous methods have been developed to estimate potential evapotranspiration (PET). Unfortunately, most of the reliable PET methods are parameter-rich models and therefore not feasible for application in data-scarce regions. On the other hand, the accuracy and reliability of simple PET models vary widely according to regional climate conditions. The objective of the present study was to evaluate the performance of three temperature-based and three radiation-based simple ET methods in estimating historical ET and projecting future ET at the Muda Irrigation Scheme at Kedah, Malaysia. The performance was measured by comparing those methods with the parameter-intensive Penman-Monteith method. It was found that radiation-based methods gave better performance compared to temperature-based methods in the estimation of ET in the study area. Future ET simulated from projected climate data obtained through a statistical downscaling technique also showed that radiation-based methods can project ET values closer to those projected by the Penman-Monteith method. It is expected that the study will guide the selection of suitable methods for estimating and projecting ET in accordance with the availability of meteorological data.

  9. Estimating the uncertainty of damage costs of pollution: A simple transparent method and typical results

    International Nuclear Information System (INIS)

    Spadaro, Joseph V.; Rabl, Ari

    2008-01-01

Whereas the uncertainty of environmental impacts and damage costs is usually estimated by means of a Monte Carlo calculation, this paper shows that most (and in many cases all) of the uncertainty calculation involves products and/or sums of products and can be accomplished with an analytic solution which is simple and transparent. We present our own assessment of the component uncertainties and calculate the total uncertainty for the impacts and damage costs of the classical air pollutants; results for a Monte Carlo calculation for the dispersion part are also shown. The distribution of the damage costs is approximately lognormal and can be characterized in terms of geometric mean μg and geometric standard deviation σg, implying that the confidence interval is multiplicative. We find that for the classical air pollutants σg is approximately 3 and the 68% confidence interval is [μg/σg, μg·σg]. Because the lognormal distribution is highly skewed for large σg, the median is significantly smaller than the mean. We also consider the case where several lognormally distributed damage costs are added, for example to obtain the total damage cost due to all the air pollutants emitted by a power plant, and we find that the relative error of the sum can be significantly smaller than the relative errors of the summands. Even though the distribution for such sums is not exactly lognormal, we present a simple lognormal approximation that is quite adequate for most applications
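
    The multiplicative bookkeeping described above can be reproduced in a few lines; in the sketch below the three component factors and their geometric standard deviations are purely illustrative, and the combination assumes independent lognormal factors as in the text:

```python
# Hedged sketch: for a damage cost that is a product of independent lognormal factors,
# the geometric means multiply and the squared log geometric standard deviations add.
import math

def combine_product(factors):
    """factors: list of (geometric_mean, geometric_std_dev) for each multiplicative term."""
    mu_g = math.prod(m for m, _ in factors)
    sigma_g = math.exp(math.sqrt(sum(math.log(s) ** 2 for _, s in factors)))
    return mu_g, sigma_g

# invented example: dispersion, exposure-response and valuation factors
mu_g, sigma_g = combine_product([(1.0, 1.8), (1.0, 2.0), (1.0, 1.5)])
print(f"sigma_g of product = {sigma_g:.2f}")   # ~2.7, i.e. "approximately 3"
print(f"68% confidence interval = [{mu_g / sigma_g:.2f}, {mu_g * sigma_g:.2f}] x central value")
```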

  10. Permanent Magnet Flux Online Estimation Based on Zero-Voltage Vector Injection Method

    DEFF Research Database (Denmark)

    Xie, Ge; Lu, Kaiyuan; Kumar, Dwivedi Sanjeet

    2015-01-01

In this paper, a simple signal injection method is proposed for sensorless control of PMSM at low speed, which ideally requires one voltage vector only for position estimation. The proposed method is easy to implement, resulting in a low computation burden. No filters are needed for extracting...

  11. Evaluating the performance of simple estimators for probit models with two dummy endogenous regressors

    DEFF Research Database (Denmark)

    Holm, Anders; Nielsen, Jacob Arendt

    2013-01-01

This study considers the small sample performance of approximate but simple two-stage estimators for probit models with two endogenous binary covariates. Monte Carlo simulations show that all the considered estimators, including the simulated maximum-likelihood (SML) estimation, of the trivariate ...

  12. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general, the simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via a simple random sampling procedure, though it makes calculations much simpler, is often violated in practice, as the data do not always yield a simple random ...
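
    As a reference point for the simplest case mentioned above, the sketch below computes the plug-in Gini index from a simple random sample of invented incomes; the ranked-set and systematic-sampling variants discussed in the record are not reproduced here:

```python
# Hedged sketch: sample Gini index from a simple random sample (invented values).

def gini_index(values):
    """Sample Gini index, G = (2 * sum(i * x_(i)) / (n * sum(x))) - (n + 1) / n,
    with x_(i) the values sorted in ascending order and i running from 1 to n."""
    x = sorted(values)
    n = len(x)
    cum = sum((i + 1) * xi for i, xi in enumerate(x))
    return 2.0 * cum / (n * sum(x)) - (n + 1.0) / n

print(round(gini_index([12, 15, 20, 28, 35, 60, 110]), 3))  # ~0.407
```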

  13. Simple and efficient methods for the accurate evaluation of patterning effects in ultrafast photonic switches

    DEFF Research Database (Denmark)

    Xu, Jing; Ding, Yunhong; Peucheret, Christophe

    2011-01-01

Although patterning effects (PEs) are known to be a limiting factor of ultrafast photonic switches based on semiconductor optical amplifiers (SOAs), a simple approach for their evaluation in numerical simulations and experiments is missing. In this work, we experimentally investigate and verify ... as well as the operation bit rate. Furthermore, a simple and effective method for probing the maximum PEs is demonstrated, which may relieve the computational effort or the experimental difficulties associated with the use of long PRBSs for the simulation or characterization of SOA-based switches. Good agreement with conventional PRBS characterization is obtained. The method is suitable for quick and systematic estimation and optimization of the switching performance.

  14. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

Yang Chizhong filtering and inferential measurement is a new method used for variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formula, convenient in application, rapid in calculation, accurate in results, and less expensive, with high economic benefits. The procedure and experience in the application of this method and the preliminary evaluation of its results are mainly described

  15. Combining the triangle method with thermal inertia to estimate regional evapotranspiration

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge; Nørgaard, Anette

    2008-01-01

    Spatially distributed estimates of evaporative fraction and actual evapotranspiration are pursued using a simple remote sensing technique based on a remotely sensed vegetation index (NDVI) and diurnal changes in land surface temperature. The technique, known as the triangle method, is improved by...

  16. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method.
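
    The dose estimate described above reduces to a one-line calculation once the contour fraction and correction factor are known; the sketch below uses invented example values and variable names and is not the study's software:

```python
# Hedged sketch: mean heart dose = fraction of the outlined cardiac contour inside the
# radiation field x prescribed mediastinal dose / CT-derived correction factor.

def mean_heart_dose(fraction_in_field, prescribed_dose_gy, correction_factor):
    return fraction_in_field * prescribed_dose_gy / correction_factor

# e.g. 85% of the heart contour in-field, 36 Gy mediastinal prescription,
# hypothetical correction factor of 1.0
print(f"{mean_heart_dose(0.85, 36.0, 1.0):.1f} Gy")
```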

  17. Simple gas chromatographic method for furfural analysis.

    Science.gov (United States)

    Gaspar, Elvira M S M; Lopes, João F

    2009-04-03

A new, simple, gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water-soluble foods, using direct immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate (RSD ...). The determination of these furfurals will contribute to characterising and quantifying their presence in the human diet.

  18. A simple method for assigning genomic grade to individual breast tumours

    International Nuclear Information System (INIS)

    Wennmalm, Kristian; Bergh, Jonas

    2011-01-01

The prognostic value of grading in breast cancer can be increased with microarray technology, but proposed strategies are disadvantaged by the use of specific training data or parallel microscopic grading. Here, we investigate the performance of a method that uses no information outside the breast profile of interest. In 251 profiled tumours we optimised a method that achieves grading by comparing rank means for genes predictive of high- and low-grade biology; a simpler method that allows for truly independent estimation of accuracy. Validation was carried out in 594 patients derived from several independent data sets. We found that accuracy was good: for low-grade (G1) tumors 83-94%, for high-grade (G3) tumors 74-100%. In keeping with the aim of improved grading, two groups of intermediate-grade (G2) cancers with significantly different outcome could be discriminated. This validates the concept of microarray-based grading in breast cancer, and provides a more practical method to achieve it. A simple R script for grading is available in an additional file. Clinical implementation could achieve better estimation of recurrence risk for 40 to 50% of breast cancer patients

  19. A simple method for assigning genomic grade to individual breast tumours

    Directory of Open Access Journals (Sweden)

    Bergh Jonas

    2011-07-01

Background: The prognostic value of grading in breast cancer can be increased with microarray technology, but proposed strategies are disadvantaged by the use of specific training data or parallel microscopic grading. Here, we investigate the performance of a method that uses no information outside the breast profile of interest. Results: In 251 profiled tumours we optimised a method that achieves grading by comparing rank means for genes predictive of high- and low-grade biology; a simpler method that allows for truly independent estimation of accuracy. Validation was carried out in 594 patients derived from several independent data sets. We found that accuracy was good: for low-grade (G1) tumors 83-94%, for high-grade (G3) tumors 74-100%. In keeping with the aim of improved grading, two groups of intermediate-grade (G2) cancers with significantly different outcome could be discriminated. Conclusion: This validates the concept of microarray-based grading in breast cancer, and provides a more practical method to achieve it. A simple R script for grading is available in an additional file. Clinical implementation could achieve better estimation of recurrence risk for 40 to 50% of breast cancer patients.

20. Methods for estimation of internal dose of the public from dietary intake

    International Nuclear Information System (INIS)

    Zhu Hongda

    1987-01-01

Following the issue of its Publication 26, ICRP has successively published its Publication 30 to reflect the great changes and improvements made in the Basic Recommendations since July 1979. In Part 1 of Publication 30, ICRP recommended a new method for internal dose estimation and presented some important data. In this report, a comparison is made among methods for estimation of the internal dose of the public from dietary intake. They include: (1) the new method suggested by ICRP; (2) the simple and convenient method using transfer factors under equilibrium conditions; (3) the methods based on the similarities of several radionuclides to their chemical analogs. It is concluded that the first method is better than the others and should be used from now on

  1. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    Science.gov (United States)

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a

  2. Application of a simple analytical model to estimate effectiveness of radiation shielding for neutrons

    International Nuclear Information System (INIS)

    Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.

    1993-01-01

    Neutron dose equivalent rates have been measured for 800-MeV proton beam spills at the Los Alamos Meson Physics Facility. Neutron detectors were used to measure the neutron dose levels at a number of locations for each beam-spill test, and neutron energy spectra were measured for several beam-spill tests. Estimates of expected levels for various detector locations were made using a simple analytical model developed for 800-MeV proton beam spills. A comparison of measurements and model estimates indicates that the model is reasonably accurate in estimating the neutron dose equivalent rate for simple shielding geometries. The model fails for more complicated shielding geometries, where indirect contributions to the dose equivalent rate can dominate

  3. Fusion rule estimation using vector space methods

    International Nuclear Information System (INIS)

    Rao, N.S.V.

    1997-01-01

In a system of N sensors, the sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ ℝ according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) such that Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : ℝ^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitute a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f to f* is feasible. We estimate the sample size sufficient to ensure that f provides a close approximation to f* with a high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f can be easily computed by well-known least-squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks
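
    The least-squares route mentioned in the abstract can be illustrated with a small simulation; the affine basis, the simulated sensors, and all settings below are assumptions made for the example rather than details from the paper:

```python
# Hedged sketch: fit the coefficients of a fusion rule f(Y) over a finite-dimensional
# basis (here, affine functions of the N sensor outputs) by ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
n, N = 500, 3                                   # training samples, sensors
X = rng.uniform(0.0, 1.0, size=n)               # true input
Y = X[:, None] + rng.normal(0.0, 0.1, (n, N))   # noisy sensor outputs

Phi = np.column_stack([np.ones(n), Y])          # basis: [1, Y1, ..., YN]
coef, *_ = np.linalg.lstsq(Phi, X, rcond=None)  # least-squares fusion rule

def fusion_rule(y):
    return float(np.clip(coef[0] + coef[1:] @ y, 0.0, 1.0))  # map back into [0, 1]

print(fusion_rule(np.array([0.42, 0.38, 0.45])))
```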

  4. A simple tool for estimating city-wide annual electrical energy savings from cooler surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Pomerantz, Melvin; Rosado, Pablo J.; Levinson, Ronnen

    2015-12-01

    We present a simple method to estimate the maximum possible electrical energy saving that might be achieved by increasing the albedo of surfaces in a large city. We restrict this to the “indirect effect”, the cooling of outside air that lessens the demand for air conditioning (AC). Given the power demand of the electric utilities and data about the city, we can use a single linear equation to estimate the maximum savings. For example, the result for an albedo change of 0.2 of pavements in a typical warm city in California, such as Sacramento, is that the saving is less than about 2 kWh per m2 per year. This may help decision makers choose which heat island mitigation techniques are economical from an energy-saving perspective.

  5. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.

  6. Simple method for calculating island widths

    International Nuclear Information System (INIS)

    Cary, J.R.; Hanson, J.D.; Carreras, B.A.; Lynch, V.E.

    1989-01-01

    A simple method for calculating magnetic island widths has been developed. This method uses only information obtained from integrating along the closed field line at the island center. Thus, this method is computationally less intensive than the usual method of producing surfaces of section of sufficient detail to locate and resolve the island separatrix. This method has been implemented numerically and used to analyze the buss work islands of ATF. In this case the method proves to be accurate to at least within 30%. 7 refs

  7. A simple model to estimate the optimal doping of p - Type oxide superconductors

    Directory of Open Access Journals (Sweden)

    Adir Moysés Luiz

    2008-12-01

Oxygen doping of superconductors is discussed. Doping high-Tc superconductors with oxygen seems to be more efficient than other doping procedures. Using the assumption of double valence fluctuations, we present a simple model to estimate the optimal doping of p-type oxide superconductors. The experimental values of oxygen content for optimal doping of the most important p-type oxide superconductors can be accounted for adequately using this simple model. We expect that our simple model will encourage further experimental and theoretical researches in superconducting materials.

  8. Simple Calculation Programs for Biology Immunological Methods

    Indian Academy of Sciences (India)

Simple Calculation Programs for Biology Immunological Methods. Computation of Ab/Ag Concentration from ELISA data. Graphical Method; Raghava et al., 1992, J. Immunol. Methods 153: 263. Determination of affinity of Monoclonal Antibody. Using non-competitive ...

  9. Simulation methods to estimate design power: an overview for applied research.

    Science.gov (United States)

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
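
    The simulation recipe reviewed above looks like the following for the simplest case, a two-arm individually randomized trial with a continuous outcome; the effect size, standard deviation, and sample size are illustrative and are not taken from the article's sanitation or nutrition examples:

```python
# Hedged sketch: estimate power by repeatedly simulating the trial and counting rejections.
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, effect, sd, alpha=0.05, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, control)   # two-sided two-sample t-test
        rejections += p < alpha
    return rejections / n_sim

print(simulated_power(n_per_arm=60, effect=0.5, sd=1.0))  # close to the analytic ~0.78
```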

  10. A simple method to retrospectively estimate patient dose-area product for chest tomosynthesis examinations performed using VolumeRAD

    Energy Technology Data Exchange (ETDEWEB)

    Båth, Magnus, E-mail: magnus.bath@vgregion.se; Svalkvist, Angelica [Department of Radiation Physics, Institute of Clinical Sciences, The Sahlgrenska Academy at University of Gothenburg, Gothenburg SE-413 45, Sweden and Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg SE-413 45 (Sweden); Söderman, Christina [Department of Radiation Physics, Institute of Clinical Sciences, The Sahlgrenska Academy at University of Gothenburg, Gothenburg SE-413 45 (Sweden)

    2014-10-15

    Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.

  11. A simple method to retrospectively estimate patient dose-area product for chest tomosynthesis examinations performed using VolumeRAD

    International Nuclear Information System (INIS)

    Båth, Magnus; Svalkvist, Angelica; Söderman, Christina

    2014-01-01

    Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis

  12. A Convenient Method for Estimation of the Isotopic Abundance in Uranium Bearing Samples

    International Nuclear Information System (INIS)

    Al-Saleh, F.S.; Al-Mukren, Alj.H.; Farouk, M.A.

    2008-01-01

    A convenient and simple method for estimating the isotopic abundance in some uranium-bearing samples by gamma-ray spectrometry is developed, using a hyperpure germanium spectrometer and a standard uranium sample of known isotopic abundance.

  13. An extensive study on a simple method estimating response spectrum based on a simulated spectrum

    International Nuclear Information System (INIS)

    Sato, H.; Komazaki, M.; Ohori, M.

    1977-01-01

    The procedure is briefly described in the paper. For each peak of the response spectrum of the earthquake motion, the component corresponding to the respective predominant ground period is taken. The acceleration amplification factor of a building structure at each of these predominant periods is obtained from the spectrum of a simulated earthquake with a single predominant period. The weight of each component in the summation of these amplification factors is set so as to satisfy the ratio among the magnitudes of the spectral peaks, and the summation is carried out using the principle of the square root of the sum of squares (SRSS). The procedure is easily applied to estimate the spectrum of a building appendage structure. The method is then extended to a multi-storey building structure and an appendage to this building. The analysis considers a two-storey structure whose first mode has an upper-to-lower amplitude ratio of 2 to 1, so that the mode shape is an inverted triangle. The behaviour of the system is treated in normal coordinates. The amplification factors due to the two predominant ground periods are estimated for the first natural frequency; in this step the method developed for the single-degree-of-freedom system is directly applicable. The same method is used for the second natural frequency. The amplification factors thus estimated for the modes of the respective natural frequencies, after multiplying the excitation coefficient of each mode by the corresponding factor, are again combined by the SRSS principle.
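
    The square-root-of-sum-of-squares (SRSS) combination used repeatedly in this procedure is straightforward to express in code. The sketch below combines hypothetical modal amplification factors, each weighted by an assumed excitation coefficient; the numbers are illustrative only and are not taken from the study.

```python
import math

def srss(values):
    """Square root of the sum of squares of the given components."""
    return math.sqrt(sum(v * v for v in values))

# Hypothetical amplification factors for two modes and their excitation coefficients.
amplification = [3.0, 1.8]
excitation = [1.2, 0.4]

combined = srss([a * c for a, c in zip(amplification, excitation)])
print(f"Combined amplification factor (SRSS): {combined:.2f}")
```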

  14. A Simple Preparation Method for Diphosphoimidazole

    DEFF Research Database (Denmark)

    Rosenberg, T.

    1964-01-01

    A simple method for the preparation of diphosphoimidazole is presented that involves direct phosphorylation of imidazole by phosphorus oxychloride in alkaline aqueous solution. Details are given on the use of diphosphoimidazole in preparing sodium phosphoramidate and certain phosphorylated amino...

  15. A validated HPTLC method for estimation of moxifloxacin hydrochloride in tablets.

    Science.gov (United States)

    Dhillon, Vandana; Chaudhary, Alok Kumar

    2010-10-01

    A simple HPTLC method having high accuracy, precision and reproducibility was developed for the routine estimation of moxifloxacin hydrochloride in tablets available on the market and was validated for various parameters according to ICH guidelines. Moxifloxacin hydrochloride was estimated at 292 nm by densitometry using Silica gel 60 F254 as the stationary phase and a premix of methylene chloride:methanol:strong ammonia solution:acetonitrile (10:10:5:10) as the mobile phase. The method was found to be linear in the range of 9-54 nanograms with a correlation coefficient >0.99. The regression equation was: AUC = 65.57 × (amount in nanograms) + 163 (r² = 0.9908).
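
    Using the reported calibration line AUC = 65.57 × (amount in nanograms) + 163, the amount in an unknown spot can be estimated by inverting that line, as in the sketch below; the measured AUC value is a hypothetical example.

```python
SLOPE = 65.57      # AUC units per nanogram, from the reported regression
INTERCEPT = 163.0  # AUC units

def amount_from_auc(auc):
    """Invert the reported calibration line to estimate the amount in nanograms."""
    return (auc - INTERCEPT) / SLOPE

# Hypothetical measured peak area for an unknown spot.
print(f"Estimated amount: {amount_from_auc(2500.0):.1f} ng")
```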

  16. A simple method for human peripheral blood monocyte Isolation

    Directory of Open Access Journals (Sweden)

    Marcos C de Almeida

    2000-04-01

    Full Text Available We describe a simple method using percoll gradient for isolation of highly enriched human monocytes. High numbers of fully functional cells are obtained from whole blood or buffy coat cells. The use of simple laboratory equipment and a relatively cheap reagent makes the described method a convenient approach to obtaining human monocytes.

  17. A simple method to retrospectively estimate patient dose-area product for chest tomosynthesis examinations performed using VolumeRAD.

    Science.gov (United States)

    Båth, Magnus; Söderman, Christina; Svalkvist, Angelica

    2014-10-01

    The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.

  18. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    Science.gov (United States)

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.
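
    Because the chlorophyll content is estimated by linear regression on the measured reflectance and transmittance, the fitting step can be sketched as follows. The calibration arrays are hypothetical placeholders standing in for measurements referenced against a spectrophotometer or SPAD meter; this is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration data: [reflectance, transmittance] per leaf.
X = np.array([[0.10, 0.05],
              [0.12, 0.08],
              [0.08, 0.04],
              [0.15, 0.10],
              [0.09, 0.06]])
# Hypothetical reference chlorophyll contents (e.g. mg per m^2).
y = np.array([420.0, 350.0, 480.0, 300.0, 440.0])

model = LinearRegression().fit(X, y)

# Estimate chlorophyll for a new leaf from its reflectance/transmittance.
new_leaf = np.array([[0.11, 0.07]])
print(f"Estimated chlorophyll content: {model.predict(new_leaf)[0]:.1f}")
```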

  19. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves

    Directory of Open Access Journals (Sweden)

    Madaín Pérez-Patricio

    2018-02-01

    Full Text Available This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that—in terms of accuracy and processing speed—the proposed algorithm outperformed many of the previous vision-based approach methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.

  20. Simple Calculation Programs for Biology Other Methods

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Other Methods. Hemolytic potency of drugs. Raghava et al., (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms 16SrRNA. graphical display of restriction and fragment map of ...

  1. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
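
    The accuracy of any sample size formula for simple logistic regression can be checked empirically by simulation, in the spirit of the comparisons described above. The sketch below simulates a logistic model with one standard normal covariate and estimates power for a hypothetical odds ratio, prevalence at the covariate mean, and sample size; it is not the authors' modified formula.

```python
import numpy as np
import statsmodels.api as sm

def lr_power(n, odds_ratio, prevalence_at_mean, alpha=0.05, n_sims=1000, seed=2):
    """Empirical power of the Wald test for one standard normal covariate."""
    rng = np.random.default_rng(seed)
    beta1 = np.log(odds_ratio)
    beta0 = np.log(prevalence_at_mean / (1 - prevalence_at_mean))
    rejections = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        rejections += (fit.pvalues[1] < alpha)
    return rejections / n_sims

# Hypothetical scenario: OR = 1.5 per SD of the covariate, 20% prevalence, n = 300.
print(lr_power(n=300, odds_ratio=1.5, prevalence_at_mean=0.2))
```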

  2. A Simple and Improved HPLC-PDA Method for Simultaneous Estimation of Fexofenadine and Pseudoephedrine in Extended Release Tablets by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Ruhul Kayesh

    2017-01-01

    Full Text Available A simple RP-HPLC method has been developed for simultaneous estimation of fexofenadine and pseudoephedrine in their extended release tablet. The method was developed based on statistical design of experiments (DoE) and Response Surface Methodology. Separation was achieved on a double end-capped C18 column (250 mm × 4 mm, 5 μm). In this experiment, two components of the mobile phase, namely acetonitrile (% v/v) and methanol (% v/v), were the factors whereas retention and resolution of the chromatographic peaks were the responses. The effects of different compositions of the factors on the corresponding responses were investigated. The optimum chromatographic condition for the current case was found as an isocratic mobile phase consisting of 20 mM phosphate buffer (pH 6.8), acetonitrile and methanol in a ratio of 50 : 36 : 14 (% v/v) at a flow rate of 1 mL/min for 7 minutes. The retention of pseudoephedrine and fexofenadine was found to be 2.6 min and 4.7 min, respectively. The method was validated according to the ICH and FDA guidelines and various validation parameters were determined. Also, forced degradation studies in acid, base, oxidation, and reduction media and in thermal condition were performed to establish specificity and stability-indicating property of this method. Practical applicability of this method was checked in extended release tablets available in Bangladeshi market.

  3. RSMASS: A simple model for estimating reactor and shield masses

    International Nuclear Information System (INIS)

    Marshall, A.C.; Aragon, J.; Gallup, D.

    1987-01-01

    A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations

  4. A Simple Method for Estimating the Economic Cost of Productivity Loss Due to Blindness and Moderate to Severe Visual Impairment.

    Science.gov (United States)

    Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge

    2015-01-01

    To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
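
    A minimal sketch of the minimum wage (MW) calculation under the stated assumptions follows: only the 50-64-year fraction (taken as half of the ≥50 years prevalence) is treated as of working age, blindness is assumed to cause loss of the full wage, and MSVI a 30% loss. All input numbers are hypothetical placeholders, not figures from the study.

```python
def productivity_loss(prevalent_cases_50_plus, annual_wage, loss_fraction):
    """Annual productivity loss (US$) under the simple model's assumptions."""
    working_age_cases = 0.5 * prevalent_cases_50_plus  # half assumed to be 50-64 years
    return working_age_cases * annual_wage * loss_fraction

# Hypothetical inputs: 200,000 blind and 600,000 MSVI cases aged >= 50,
# minimum wage of US$12,000 per year.
cob = productivity_loss(200_000, 12_000, loss_fraction=1.0)     # blindness: full wage lost
comsvi = productivity_loss(600_000, 12_000, loss_fraction=0.3)  # MSVI: 30% of wage lost
print(f"COB: ${cob / 1e9:.2f} billion, COMSVI: ${comsvi / 1e9:.2f} billion")
```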

  5. A simple in vitro test tube method for estimating the bioavailability of phosphorus in feed ingredients for swine.

    Science.gov (United States)

    Bollinger, David W; Tsunoda, Atsushi; Ledoux, David R; Ellersieck, Mark R; Veum, Trygve L

    2004-04-07

    A simplified in vitro test tube (TT) method was developed to estimate the percentage of available P in feed ingredients for swine. The entire digestion procedure with the TT method consists of three consecutive enzymatic digestions carried out in a 50-mL conical test tube: (1) Pre-digestion with endo-xylanase and beta-glucanase for 1 h, (2) peptic digestion for 2 h, and (3) pancreatic digestion for 2 or 4 h. The TT method is simpler and much easier to perform compared to the dialysis tubing (DT) method, because dialysis tubing is not used. Reducing sample size from 1.0 to 0.25 g for the TT method improved results. In conclusion, the accuracy and validity of the TT method is equal to that of our more complicated DT method (r = 0.97, P < 0.001), designed to mimic the digestive system of swine, for estimating the availability of P in plant-origin feed ingredients.

  6. A pilot study of a simple screening technique for estimation of salivary flow.

    Science.gov (United States)

    Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru

    2009-09-01

    The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for determining decreased salivary flow. A novel assay system comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper and a coloring reagent, based on the color reaction of iodine-starch and the theory of paper chromatography, was designed. We investigated the relationship between resting whole salivary flow rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803; P < 0.001), suggesting that this simple technique may be useful for screening for decreased salivary flow, for example in bedridden and disabled elderly people.

  7. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have a potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in Arabidopsis thaliana plant. The result provides confirmation that the mathematical model constructed satisfactorily agrees with the time-series datasets of seven metabolite concentrations.

  8. Determining Service Life of Respirator Cartridges Using a Simple and Practical Method: Case Study in a Car Manufacturing Industry

    Directory of Open Access Journals (Sweden)

    A.M Rashidi

    2012-01-01

    Full Text Available Background and aims: To ensure that air-purifying respirators perform properly in protecting against workplace contaminants, respirator cartridges must be changed before the end of their service life. The aim of this study was to determine the service life of organic vapor cartridges using a simple and practical method in a spray painting booth of a car manufacturing industry. Methods: NIOSH MultiVapor software was used to estimate the service life of respirator cartridges based on workplace conditions and cartridge specifications. The adequacy of the estimated service life was investigated using an apparatus for field testing of cartridges in the workplace. Results: The results showed that the existing schedule for changing the respirator cartridges is not effective and no longer provides adequate protection for sprayers against organic contaminants while working in the painting booth, and that the cartridges need to be changed before the end of their estimated service life (every 4 hours). Conclusion: NIOSH MultiVapor has acceptable efficiency for determining respirator cartridge service life and could be used as a simple and practical method in the workplace. Moreover, the service life estimated by this software was confirmed by the cartridge field test apparatus.

  9. UV Spectrophotometric Method for the Estimation of Itopride Hydrochloride in Pharmaceutical Formulation

    OpenAIRE

    K. R. Gupta; R. R. Joshi; R. B. Chawla; S. G. Wadodkar

    2010-01-01

    Three simple, precise and economical UV methods have been developed for the estimation of itopride hydrochloride in pharmaceutical formulations. Itopride hydrochloride in distilled water shows the maximum absorbance at 258.0 nm (Method A) and in first order derivative spectra of the same shows sharp peak at 247.0 nm, when n = 1 (Method B). Method C utilises area under curve (AUC) in the wavelength range from 262.0-254.0 nm for analysis of itopride hydrochloride. The drug was found to obey Bee...

  10. Simple and inexpensive method for CT-guided stereotaxy

    Energy Technology Data Exchange (ETDEWEB)

    Wester, K; Sortland, O; Hauglie-Hanssen, E

    1981-01-01

    A simple and inexpensive method for CT-guided stereotaxy is described. The method requires neither sophisticated computer programs nor additional stereotactic equipment, such as special head holders for the CT, and can be easily obtained without technical assistance. The method is designed to yield the vertical coordinates.

  11. A simple method to estimate restoration volume as a possible predictor for tooth fracture.

    Science.gov (United States)

    Sturdevant, J R; Bader, J D; Shugars, D A; Steet, T C

    2003-08-01

    Many dentists cite the fracture risk posed by a large existing restoration as a primary reason for their decision to place a full-coverage restoration. However, there is poor agreement among dentists as to when restoration placement is necessary because of the inability to make objective measurements of restoration size. The purpose of this study was to compare a new method to estimate restoration volumes in posterior teeth with analytically determined volumes. True restoration volume proportion (RVP) was determined for 96 melamine typodont teeth: 24 each of maxillary second premolar, mandibular second premolar, maxillary first molar, and mandibular first molar. Each group of 24 was subdivided into 3 groups to receive an O, MO, or MOD amalgam preparation design. Each preparation design was further subdivided into 4 groups of increasingly larger size. The density of amalgam used was calculated according to ANSI/ADA Specification 1. The teeth were weighed before and after restoration with amalgam. Restoration weight was calculated, and the density of amalgam was used to calculate restoration volume. A liquid pycnometer was used to calculate coronal volume after sectioning the anatomic crown from the root horizontally at the cementoenamel junction. True RVP was calculated by dividing restoration volume by coronal volume. An occlusal photograph and a bitewing radiograph were made of each restored tooth to provide 2 perpendicular views. Each image was digitized, and software was used to measure the percentage of the anatomic crown restored with amalgam. Estimated RVP was calculated by multiplying the percentage of the anatomic crown restored from the 2 views together. Pearson correlation coefficients were used to compare estimated RVP with true RVP. The Pearson correlation coefficient of true RVP with estimated RVP was 0.97 overall (P < 0.001), indicating that the method provides a valid estimate of the volume of restorative material in coronal tooth structure. The fact that it can be done in a nondestructive manner makes it attractive for clinical use.
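
    The estimation step itself is a one-line calculation: the estimated restoration volume proportion is the product of the fractions of the anatomic crown restored in the two perpendicular views. The sketch below uses hypothetical image-analysis measurements.

```python
def estimated_rvp(occlusal_fraction, bitewing_fraction):
    """Estimated restoration volume proportion from two perpendicular views."""
    return occlusal_fraction * bitewing_fraction

# Hypothetical image-analysis results: 45% of the crown restored in the occlusal view,
# 60% in the bitewing view.
print(f"Estimated RVP: {estimated_rvp(0.45, 0.60):.2f}")
```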

  12. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
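
    To make the idea concrete, the sketch below builds a null distribution of peak cross-correlations between pairs of unrelated simulated light curves with a power-law power spectral density. It uses evenly sampled data and a simplified random-phase realization rather than the full treatment of uneven sampling, interpolation and windowing described in the paper, and the observed peak value is a hypothetical placeholder.

```python
import numpy as np

def simulate_red_noise(n, beta, rng):
    """Evenly sampled light curve with a power-law PSD, S(f) ~ f^-beta."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    lc = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (lc - lc.mean()) / lc.std()

def peak_cross_correlation(a, b):
    """Maximum of the normalized cross-correlation over all lags."""
    return (np.correlate(a, b, mode="full") / len(a)).max()

rng = np.random.default_rng(3)
n, beta, n_sims = 512, 2.0, 1000

# Null distribution of peak cross-correlations between unrelated red-noise curves.
null_peaks = np.array([peak_cross_correlation(simulate_red_noise(n, beta, rng),
                                              simulate_red_noise(n, beta, rng))
                       for _ in range(n_sims)])

observed_peak = 0.45  # hypothetical peak measured between two real light curves
p_value = np.mean(null_peaks >= observed_peak)
print(f"Chance probability of a peak >= {observed_peak}: {p_value:.3f}")
```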

  13. A simple time-delayed method to control chaotic systems

    International Nuclear Information System (INIS)

    Chen Maoyin; Zhou Donghua; Shang Yun

    2004-01-01

    Based on the adaptive iterative learning strategy, a simple time-delayed controller is proposed to stabilize unstable periodic orbits (UPOs) embedded in chaotic attractors. This controller includes two parts: one is a linear feedback part; the other is an adaptive iterative learning estimation part. Theoretical analysis and numerical simulation show the effectiveness of this controller

  14. The estimated possibilities of process monitoring in milk production by the simple thermodynamic sensors

    Directory of Open Access Journals (Sweden)

    Martin Adámek

    2016-12-01

    Full Text Available The characterization and monitoring of thermal processes in thermodynamic systems can be performed using thermodynamic sensors (TDS). The basic principle of the thermodynamic sensor can be applied in many different settings (e.g., monitoring of frictional heat, thermal radiation, or contamination of cleaning fluid). One application area in which the thermodynamic sensor could find new use is the production of milk products such as cheese, yogurt and kefir. This paper describes the estimated possibilities, advantages and disadvantages of using thermodynamic sensors in dairy production, together with simple experiments for characterizing and monitoring basic operations in the milk production process. Milk products are commonly made by fermentation or renneting. The final stages of the fermentation and renneting processes are usually determined by sensory evaluation, pH measurement or analytical methods, and the exact completion time of fermentation depends on many parameters and is often company know-how. A fast, clean, simple, non-analytical and non-contact method for monitoring these processes and determining their final stages does not currently exist. In this work, fermentation, renneting and yoghurt-making processes were characterized and measured with thermodynamic sensors. Yeast activity was measured in a first series of experiments; processes in milk production were measured in a second series. The first results of these simple experiments show that thermodynamic sensors might be used to determine the time behaviour of these processes. Milk products (cheese, yogurt, kefir, etc.) therefore open up a new application area in which the thermodynamic sensor can be used.

  15. Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2008-04-01

    Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.

  16. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the respective control and synchronization of chaos systems. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation and the desired error dynamics is inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables a very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.

  17. Validation of a simple and inexpensive method for the quantitation of infarct in the rat brain

    Directory of Open Access Journals (Sweden)

    C.L.R. Schilichting

    2004-04-01

    Full Text Available A gravimetric method was evaluated as a simple, sensitive, reproducible, low-cost alternative to quantify the extent of brain infarct after occlusion of the medial cerebral artery in rats. In ether-anesthetized rats, the left medial cerebral artery was occluded for 1, 1.5 or 2 h by inserting a 4-0 nylon monofilament suture into the internal carotid artery. Twenty-four hours later, the brains were processed for histochemical triphenyltetrazolium chloride (TTC) staining and quantitation of the ischemic infarct. In each TTC-stained brain section, the ischemic tissue was dissected with a scalpel and fixed in 10% formalin at 0ºC until its total mass could be estimated. The mass (mg) of the ischemic tissue was weighed on an analytical balance and compared to its volume (mm³), estimated either by plethysmometry using platinum electrodes or by computer-assisted image analysis. Infarct size as measured by the weighing method (mg), reported as a percent (%) of the affected (left) hemisphere, correlated closely with volume (mm³), also reported as %, estimated by computerized image analysis (r = 0.88; P < 0.001; N = 10) or by plethysmography (r = 0.97-0.98; P < 0.0001; N = 41). This degree of correlation was maintained between different experimenters. The method was also sensitive for detecting the effect of different ischemia durations on infarct size (P < 0.005; N = 23) and the effect of drug treatments in reducing the extent of brain damage (P < 0.005; N = 24). The data suggest that, in addition to being simple and low cost, the weighing method is a reliable alternative for quantifying brain infarct in animal models of stroke.

  18. Estimation of Duloxetine Hydrochloride in Pharmaceutical Formulations by RP-HPLC Method

    OpenAIRE

    Patel, Sejal K.; Patel, N. J.; Patel, K. M.; Patel, P. U.; Patel, B. H.

    2008-01-01

    A simple, specific, accurate and precise method, namely reverse-phase high performance liquid chromatography, was developed for the estimation of duloxetine HCl in pharmaceutical formulations. For the high performance liquid chromatography method, a Phenomenex C-18, 5 µm column (250 x 4.6 mm i.d.) was used in isocratic mode, with a mobile phase containing 0.01 M phosphate buffer (pH 5.5):acetonitrile (60:40 v/v), the final pH adjusted to 5.5±0.02 with phosphoric acid. The flow rate w...

  19. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software tool that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. The second is an image segmentation method, which achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  20. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    Science.gov (United States)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve were divided into three categories (simple approximations, artificial neural network-based approaches and continuum damage mechanics models), examined, and assessed for accuracy in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. The simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in the estimation of fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation, but it requires more experimental data for calibration than the approaches using simple approximations. Because of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, the group of parametric equations categorized as simple approximations was found to be the easiest for practical use, with their applicability having already been verified for a broad range of materials.
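
    As an example of the simple-approximation category, one commonly used parametric estimate is Manson's universal slopes relation, which builds a strain-life curve from monotonic tensile properties alone. The sketch below is a generic illustration of that relation with hypothetical material properties; it is not the calibrated model or the material data from this study.

```python
import math

def universal_slopes_strain_range(n_f, sigma_u, elastic_modulus, reduction_in_area):
    """Total strain range at n_f cycles from Manson's universal slopes relation."""
    ductility = math.log(1.0 / (1.0 - reduction_in_area))  # true fracture ductility
    elastic = 3.5 * (sigma_u / elastic_modulus) * n_f ** -0.12
    plastic = ductility ** 0.6 * n_f ** -0.6
    return elastic + plastic

# Hypothetical monotonic properties for a high-strength steel (MPa, MPa, fraction).
for n_f in (1e3, 1e4, 1e5):
    d_eps = universal_slopes_strain_range(n_f, sigma_u=1000.0,
                                          elastic_modulus=210e3, reduction_in_area=0.55)
    print(f"N_f = {n_f:8.0f}  total strain range = {d_eps:.4f}")
```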

  1. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    Science.gov (United States)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
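
    A generic way to sketch such a two-stage estimate is to chain the ideal rocket equation over both stages and subtract an allowance for gravity, steering and drag losses. The code below is a hedged illustration of this kind of parametric sizing with placeholder values; it is not the MAV model or the loss estimates from the paper.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_delta_v(isp_s, m_initial, m_final):
    """Ideal (Tsiolkovsky) delta-v delivered by one stage."""
    return isp_s * G0 * math.log(m_initial / m_final)

# Hypothetical two-stage vehicle: (Isp [s], propellant mass [kg], dry mass [kg]).
stages = [(290.0, 220.0, 40.0), (300.0, 80.0, 20.0)]
payload = 15.0   # kg, e.g. the sample container
losses = 350.0   # m/s, assumed allowance for finite burns, steering and drag

mass = payload + sum(prop + dry for _, prop, dry in stages)
liftoff_mass = mass
total_dv = 0.0
for isp, prop, dry in stages:
    total_dv += stage_delta_v(isp, mass, mass - prop)
    mass -= prop + dry  # burn the propellant, then drop the empty stage

print(f"Lift-off mass: {liftoff_mass:.0f} kg")
print(f"Net delta-v after losses: {total_dv - losses:.0f} m/s")
```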

  2. A simple formula for estimating global solar radiation in central arid deserts of Iran

    International Nuclear Information System (INIS)

    Sabziparvar, Ali A.

    2008-01-01

    Over the last two decades, simple radiation models have been an attractive means of estimating daily solar radiation in arid and semi-arid deserts such as those in Iran, where solar observation sites are sparse. In Iran, most of the models used so far have been validated only for a few specific locations based on short-term solar observations. In this work, three different radiation models (Sabbagh, Paltridge, Daneshyar) have been revised to predict the climatology of monthly average daily solar radiation on horizontal surfaces in various cities in the central arid deserts of Iran. The modifications consist of including altitude, the monthly total number of dusty days and the seasonal variation of the Sun-Earth distance. A new height-dependent formula is proposed based on MBE, MABE, MPE and RMSE statistical analysis. It is shown that the revised Sabbagh method can be a good estimator of global solar radiation in arid and semi-arid deserts, with an average error of less than 2%, and yields more accurate predictions than previous studies. The data required by the suggested method are usually available at most meteorological sites; for locations where some of the input data are not reported, an alternative approach is presented. (author)

  3. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
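
    A minimal sketch of the RLS recursion itself follows: at each step the gain, the coefficient estimate and the covariance are updated, exactly as in a Kalman filter for a constant state observed through changing regressors. The regression problem, noise level and initialization constants are hypothetical.

```python
import numpy as np

def rls(X, y, lam=1.0, delta=1000.0):
    """Recursive least squares estimate of regression coefficients.

    lam is the forgetting factor (1.0 corresponds to ordinary least squares);
    delta sets the initial covariance P = delta * I.
    """
    n_features = X.shape[1]
    theta = np.zeros(n_features)
    P = delta * np.eye(n_features)
    for x, target in zip(X, y):
        k = P @ x / (lam + x @ P @ x)              # gain vector (Kalman gain analogue)
        theta = theta + k * (target - x @ theta)   # update estimate with the innovation
        P = (P - np.outer(k, x) @ P) / lam         # update the covariance matrix
    return theta

# Hypothetical data: y = 2*x1 - 1*x2 + noise.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.standard_normal(200)
print(rls(X, y))
```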

  4. New Vehicle Detection Method with Aspect Ratio Estimation for Hypothesized Windows

    Directory of Open Access Journals (Sweden)

    Jisu Kim

    2015-12-01

    Full Text Available All kinds of vehicles have different ratios of width to height, which are called the aspect ratios. Most previous works, however, use a fixed aspect ratio for vehicle detection (VD). The use of a fixed vehicle aspect ratio for VD degrades the performance. Thus, the estimation of a vehicle aspect ratio is an important part of robust VD. Taking this idea into account, a new on-road vehicle detection system is proposed in this paper. The proposed method estimates the aspect ratio of the hypothesized windows to improve the VD performance. Our proposed method uses an Aggregate Channel Feature (ACF) and a support vector machine (SVM) to verify the hypothesized windows with the estimated aspect ratio. The contribution of this paper is threefold. First, the estimation of vehicle aspect ratio is inserted between the HG (hypothesis generation) and the HV (hypothesis verification). Second, a simple HG method named a signed horizontal edge map is proposed to speed up VD. Third, a new measure is proposed to represent the overlapping ratio between the ground truth and the detection results. This new measure is used to show that the proposed method is better than previous works in terms of robust VD. Finally, the Pittsburgh dataset is used to verify the performance of the proposed method.

  5. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    Science.gov (United States)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.

  6. A simple model to estimate the impact of sea-level rise on platform beaches

    Science.gov (United States)

    Taborda, Rui; Ribeiro, Mónica Afonso

    2015-04-01

    Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified in providing adequate predictive methods to use for platform beaches. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are higher for beaches with wide berms and when the shore platform develops at shallow depths. The application of the proposed model to Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.

  7. A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon

    Science.gov (United States)

    D.J. McGarvey; J.M. Johnston

    2011-01-01

    Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...

  8. A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout

    Energy Technology Data Exchange (ETDEWEB)

    Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering

    2003-06-01

    The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.

  9. Bubble nucleation in simple and molecular liquids via the largest spherical cavity method

    International Nuclear Information System (INIS)

    Gonzalez, Miguel A.; Abascal, José L. F.; Valeriani, Chantal; Bresme, Fernando

    2015-01-01

    In this work, we propose a methodology to compute bubble nucleation free energy barriers using trajectories generated via molecular dynamics simulations. We follow the bubble nucleation process by means of a local order parameter, defined by the volume of the largest spherical cavity (LSC) formed in the nucleating trajectories. This order parameter simplifies considerably the monitoring of the nucleation events, as compared with previous approaches which require ad hoc criteria to classify the atoms and molecules as liquid or vapor. The combination of the LSC and the mean first passage time technique can then be used to obtain the free energy curves. Upon computation of the cavity distribution function, the nucleation rate and free-energy barrier can then be obtained. We test our method against recent computations of bubble nucleation in simple liquids and water at negative pressures, and we obtain free-energy barriers in good agreement with the previous works. The LSC method provides a versatile and computationally efficient route to estimate the volume of critical bubbles and the nucleation rate, and to compute bubble nucleation free energies in both simple and molecular liquids.

  10. A simple approximation method for dilute Ising systems

    International Nuclear Information System (INIS)

    Saber, M.

    1996-10-01

    We describe a simple approximate method to analyze dilute Ising systems. The method takes into consideration the fluctuations of the effective field, and is based on a probability distribution of random variables which correctly accounts for all the single site kinematic relations. It is shown that the simplest approximation gives satisfactory results when compared with other methods. (author). 12 refs, 2 tabs

  11. Point Estimation Method of Electromagnetic Flowmeters Life Based on Randomly Censored Failure Data

    Directory of Open Access Journals (Sweden)

    Zhen Zhou

    2014-08-01

    Full Text Available This paper analyzes the characteristics of enterprise after-sale service records as a source of field failure data and summarizes the types of field data. The maximum likelihood estimation method and the least squares method are presented; because of the complexity and difficulty of processing field failure data, a Monte Carlo simulation method is also proposed. Monte Carlo simulation, the relatively simple calculation method of the three, is effective, and its results are close to those of the other two methods. Through an analysis of the after-sale service records of a specific electromagnetic flowmeter enterprise, the paper illustrates the effectiveness of these field failure data processing methods.

  12. Utilising temperature differences as constraints for estimating parameters in a simple climate model

    International Nuclear Information System (INIS)

    Bodman, Roger W; Karoly, David J; Enting, Ian G

    2010-01-01

    Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.

  13. A simple and efficient algorithm to estimate daily global solar radiation from geostationary satellite data

    International Nuclear Information System (INIS)

    Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin

    2011-01-01

    Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.
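
    The regression step of such an approach can be sketched with a small feed-forward network, as below. The feature matrix is a random placeholder standing in for satellite channel signals (e.g., principal components plus surface elevation) and the target stands in for ground-measured daily GSR; this is not the authors' trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical training set: 1000 days/sites x 6 predictors
# (stand-ins for satellite channel components plus surface elevation).
X = rng.standard_normal((1000, 6))
y = 15.0 + X @ rng.uniform(0.5, 2.0, 6) + rng.normal(0, 1.0, 1000)  # "daily GSR"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer gives a three-layer feed-forward network (input, hidden, output).
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```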

  14. Spectrophotometric estimation of ethamsylate and mefenamic acid from a binary mixture by dual wavelength and simultaneous equation methods

    OpenAIRE

    Goyal Anju; Singhvi I

    2008-01-01

    Two simple, accurate, economical and reproducible spectrophotometric methods for the simultaneous estimation of a two-component drug mixture of ethamsylate and mefenamic acid in a combined tablet dosage form have been developed. The first method involves forming and solving simultaneous equations using 287.6 nm and 313.2 nm as the two wavelengths. The second method is based on the two-wavelength calculation; the two wavelengths selected for the estimation of ethamsylate were 274.4 nm and 301.2 nm...
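
    The simultaneous-equation step amounts to solving a 2 × 2 linear system in which the absorbance of the mixture at each wavelength is the sum of absorptivity × concentration terms for the two drugs. The absorptivity and absorbance values below are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical A(1%, 1 cm) absorptivities at the two analytical wavelengths.
# Rows: 287.6 nm and 313.2 nm; columns: ethamsylate, mefenamic acid.
absorptivity = np.array([[450.0, 120.0],
                         [ 80.0, 390.0]])

# Hypothetical measured absorbances of the diluted tablet extract at the two wavelengths.
mixture_absorbance = np.array([0.62, 0.48])

# Solve absorptivity @ c = A for the concentrations (g/100 mL), then report in mg/100 mL.
concentrations = np.linalg.solve(absorptivity, mixture_absorbance)
for name, c in zip(["ethamsylate", "mefenamic acid"], concentrations):
    print(f"{name}: {c * 1000:.3f} mg/100 mL")
```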

  15. A combined method to estimate the appropriate age value of closed uranium-lead system

    International Nuclear Information System (INIS)

    Malakhov, S.S.

    1982-01-01

    A new method is proposed for obtaining the appropriate age values of closed uranium-lead systems, taking into account the total independent information delivered by spectral and lead-isotope analyses. A simple mathematical apparatus which permits geochronological interpretation of samples using miniature computers is considered and suggested. A simple estimation formula for determining the age of uranium-lead systems under the assumption of model development of the isotope ratios of ordinary lead is derived and tested against factual data.

  16. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    Science.gov (United States)

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  17. Thermalization calorimetry: A simple method for investigating glass transition and crystallization of supercooled liquids

    DEFF Research Database (Denmark)

    Jakobsen, Bo; Sanz, Alejandro; Niss, Kristine

    2016-01-01

    We present a simple method for fast and cheap thermal analysis of supercooled glass-forming liquids. This "Thermalization Calorimetry" technique is based on monitoring the temperature and its rate of change during heating or cooling of a sample for which the thermal power input comes from heat conduction through an insulating material. The technique is useful for studying supercooled liquids and their crystallization, e.g., for locating the glass transition and melting point(s), as well as for investigating the stability against crystallization and estimating the relative change in specific heat between the solid and liquid phases at the glass transition.

  18. A graphical method for estimating the tunneling factor for mode conversion processes

    International Nuclear Information System (INIS)

    Swanson, D.G.

    1994-01-01

    The fundamental parameter characterizing the strength of any mode conversion process is the tunneling parameter, which is typically determined from a model dispersion relation which is transformed into a differential equation. Here a graphical method is described which gives the tunneling parameter from quantities directly measured from a simple graph of the dispersion relation. The accuracy of the estimate depends only on the accuracy of the measurements

  19. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    OpenAIRE

    Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin

    2008-01-01

    http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation between the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation between the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under an assumption of a free-air anomaly consisting ...
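
    Nettleton's criterion can be expressed compactly in code: scan candidate reduction densities and keep the one that minimizes the correlation between the Bouguer anomaly and topography. The sketch below uses a synthetic profile with an assumed true density; it illustrates the criterion, not the authors' implementation.

      import numpy as np

      G = 6.674e-11   # gravitational constant (SI)

      def nettleton_density(free_air_mgal, elevation_m, densities=np.arange(1800, 2900, 10)):
          """Return the density (kg/m3) minimizing |corr(Bouguer anomaly, topography)|."""
          best_rho, best_r = None, np.inf
          for rho in densities:
              slab_mgal = 2 * np.pi * G * rho * elevation_m * 1e5   # simple Bouguer correction
              bouguer = free_air_mgal - slab_mgal
              r = abs(np.corrcoef(bouguer, elevation_m)[0, 1])
              if r < best_r:
                  best_rho, best_r = rho, r
          return best_rho

      # Synthetic test: free-air anomaly generated with a "true" density of 2300 kg/m3.
      rng = np.random.default_rng(1)
      elev = rng.uniform(100.0, 500.0, size=50)
      fa = 2 * np.pi * G * 2300.0 * elev * 1e5 + rng.normal(0.0, 0.2, size=50)
      print(nettleton_density(fa, elev))   # recovers a value close to 2300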

  20. Simple analytical expression for crosstalk estimation in homogeneous trench-assisted multi-core fibers

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2014-01-01

    An analytical expression for the mode coupling coefficient in homogeneous trench-assisted multi-core fibers is derived, which has a simple relationship with the one in normal step-index structures. The amount of inter-core crosstalk reduction (in dB) with trench-assisted structures compared to the one with normal step-index structures can then be written by a simple expression. Comparison with numerical simulations confirms that the obtained analytical expression has very good accuracy for crosstalk estimation. The crosstalk properties in trench-assisted multi-core fibers, such as crosstalk dependence on core pitch and wavelength-dependent crosstalk, can be obtained by this simple analytical expression.

  1. A simple score for estimating the long-term risk of fracture in patients with multiple sclerosis

    DEFF Research Database (Denmark)

    Bazelier, M. T.; van Staa, T. P.; Uitdehaag, B. M. J.

    2012-01-01

    Objective: To derive a simple score for estimating the long-term risk of osteoporotic and hip fracture in individual patients with MS. Methods: Using the UK General Practice Research Database linked to the National Hospital Registry (1997-2008), we identified patients with incident MS (n = 5,494). They were matched 1:6 by year of birth, sex, and practice with patients without MS (control subjects). Cox proportional hazards models were used to calculate the long-term risk of osteoporotic and hip fracture. We fitted the regression model with general and specific risk factors, and the final Cox model was converted into integer risk scores. Results: In comparison with the FRAX calculator, our risk score contains several new risk factors that have been linked with fracture, which include MS, use of antidepressants, use of anticonvulsants, history of falling, and history of fatigue. We estimated the 5- and 10-year risks of osteoporotic and hip fracture.

  2. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  3. Effective thermal conductivity estimate of heterogenous media by a lattice Boltzmann method

    Energy Technology Data Exchange (ETDEWEB)

    Arab, M.R.; Pateyron, B.; El Ganaoui, M.; Labbe, J.C. [Limoges Univ., Limoges (France). Science des Procedes Ceramiques et de Traitements de Surface

    2009-07-01

    Statistical lattice Boltzmann methods (LBM) are often used to simulate isothermal fluid flow for problems with complex geometry or porous structures. This study used an LBM algorithm to evaluate the effective thermal conductivity (ETC) of simple 2-D configurations. The LBM algorithm was also used to estimate the ETC of a porous structure. The Bhatnagar-Gross-Krook approximation was used to determine the discrete form of the Boltzmann equation for a single phase flow. A comparison with the finite element method (FEM) was also conducted. Results of the study demonstrated that the LBM algorithm accurately simulates the phenomena of heat and mass transfer for both the simple 2-D configurations as well as the porous media. The tool will be used to determine the influence of thermal contact resistance on heat transfer. 6 refs., 1 tab., 7 figs.

  4. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    Science.gov (United States)

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods of non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established by the denoised FOCs vector, and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of FOCs are integrated in the SMV model, and are approximately estimated in a simple way. A necessary condition regarding the number of identifiable sources of our method is presented: in order to uniquely identify all sources, the number of sources K must fulfill K ≤ (M⁴ − 2M³ + 7M² − 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to associated parameters, and has maximum identifiability O(M⁴), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
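
    The identifiability bound quoted above is easy to evaluate for a given array size; the snippet below simply tabulates it for a few illustrative values of M.

      def max_identifiable_sources(M: int) -> int:
          """Upper bound on the number of sources K from the condition
          K <= (M^4 - 2*M^3 + 7*M^2 - 6*M) / 8 for an M-sensor array."""
          return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

      for M in (4, 6, 8):
          print(M, max_identifiable_sources(M))   # 27, 135, 434 identifiable sources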

  5. Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods

    Science.gov (United States)

    Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail

    2018-03-01

    Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (~10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average); however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation). In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and ...

  6. Gallium determination with Rhodamine B: a simple method

    International Nuclear Information System (INIS)

    Queiroz, R.R.U. de.

    1981-01-01

    A simple method is described for determining gallium with Rhodamine B, based on a modification of the method proposed by Onishi and Sandell. The complex (RH)GaCl4 is extracted with a benzene-ethyl acetate mixture (3:1 v/v) from an aqueous medium 6 M in hydrochloric acid. The interference of foreign ions is studied. (C.G.C.)

  7. Bioanalytical HPTLC Method for Estimation of Zolpidem Tartrate from Human Plasma

    OpenAIRE

    Abhay R. Shirode; Bharti G. Jadhav; Vilasrao J. Kadam

    2016-01-01

    A simple and selective high performance thin layer chromatographic (HPTLC) method was developed and validated for the estimation of zolpidem tartrate from human plasma using eperisone hydrochloride as an internal standard (IS). Analyte and IS were extracted from human plasma by liquid liquid extraction (LLE) technique. The Camag HPTLC system, employed with software winCATS (ver.1.4.1.8) was used for the proposed bioanalytical work. Planar chromatographic development was carried out with the h...

  8. Comparison of different methods for estimation of potential evapotranspiration

    International Nuclear Information System (INIS)

    Nazeer, M.

    2010-01-01

    Evapotranspiration can be estimated with different available methods. The aim of this research study was to compare and evaluate the potential evapotranspiration measured with a Class A pan against the Hargreaves equation, the Penman equation, the Penman-Monteith equation, and the FAO56 Penman-Monteith equation. The evaporation rate recorded from the pan was greater than that given by the stated methods. For each evapotranspiration method, results were compared against the mean monthly potential evapotranspiration (PET) from pan data according to FAO (ETo = Kpan × Epan), using daily data recorded over twenty-five years (1984-2008). On the basis of statistical analysis, the differences between the pan data and the FAO56 Penman-Monteith method are not considered to be very significant (R² = 0.98) at 95% confidence and prediction intervals. All methods require accurate weather data for precise results; for the purpose of this study, the past twenty-five years of data were analyzed and used, including maximum and minimum air temperature, relative humidity, wind speed, sunshine duration and rainfall. Based on linear regression analysis, the FAO56 Penman-Monteith method ranked first (R² = 0.98), followed by the Hargreaves method (R² = 0.96), the Penman-Monteith method (R² = 0.94) and the Penman method (R² = 0.93). Obviously, using the FAO56 Penman-Monteith method with precise climatic variables is more reliable for ETo estimation than the other alternative methods; the Hargreaves method is simpler, relies only on air temperature data, and can be used as an alternative to the FAO56 Penman-Monteith method if other climatic data are missing or unreliable. (author)
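
    Of the methods compared above, the Hargreaves (Hargreaves-Samani) equation is the simplest to reproduce, since it needs only air temperature and extraterrestrial radiation. A minimal sketch follows; the example temperatures and radiation value are illustrative, and Ra would normally be computed from latitude and day of year.

      import math

      def hargreaves_et0(tmax_c, tmin_c, ra_mj_m2_day):
          """Hargreaves-Samani reference evapotranspiration (mm/day).
          The factor 0.408 converts Ra from MJ m-2 day-1 to equivalent mm/day."""
          tmean = (tmax_c + tmin_c) / 2.0
          return 0.0023 * 0.408 * ra_mj_m2_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

      print(round(hargreaves_et0(32.0, 18.0, 38.0), 2))   # roughly 5.7 mm/day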

  9. A simple and efficient electrochemical reductive method for ...

    Indian Academy of Sciences (India)

    Administrator

    This approach opens up a new, practical and green reducing method to prepare large-scale graphene. ... has the following significant advantages: (1) It is simple to operate. ...

  10. Solution of the Schrödinger equation in one dimension by a simple method for a simple step potential

    International Nuclear Information System (INIS)

    Ertik, H.

    2005-01-01

    The transmission and reflection coefficients for the simple step barrier potential were calculated by a simple method. Their values were entirely different from those often encountered in the literature. In particular, in the case where the total energy is equal to the barrier potential, a value of 0.20 was obtained for the reflection coefficient, whereas it is zero in the literature. This may be considered an interesting point.
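
    For background only (this is the standard textbook treatment, not the method or the result reported in the record above), the reflection and transmission coefficients for a one-dimensional potential step with E > V0 follow directly from the wavenumbers on either side; the sketch below uses natural units and illustrative energies.

      import numpy as np

      def step_coefficients(E, V0, m=1.0, hbar=1.0):
          """Textbook R and T for a 1-D potential step, valid for E > V0."""
          k1 = np.sqrt(2.0 * m * E) / hbar          # wavenumber before the step
          k2 = np.sqrt(2.0 * m * (E - V0)) / hbar   # wavenumber beyond the step
          R = ((k1 - k2) / (k1 + k2)) ** 2
          T = 4.0 * k1 * k2 / (k1 + k2) ** 2        # R + T = 1
          return R, T

      print(step_coefficients(E=2.0, V0=1.0))       # energies in natural units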

  11. Hexographic Method of Complex Town-Planning Terrain Estimate

    Science.gov (United States)

    Khudyakov, A. Ju

    2017-11-01

    The article deals with the vital problem of complex town-planning analysis based on the "hexographic" graphic-analytic method, compares it with conventional terrain estimation methods, and contains examples of the method's application. It describes the author's procedure for estimating restrictions and building a mathematical model which reflects not only conventional town-planning restrictions but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential, and an unlimited number of estimated factors can be used. The method can be used for the integrated assessment of urban areas. In addition, it can serve for preliminary evaluation of the commercial attractiveness of a territory in the preparation of investment projects. The technique results in simple, informative graphics whose interpretation is straightforward for experts. A definite advantage is that the results are readily perceived by non-professionals as well. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education "South Ural State University (National Research University)", FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also ...

  12. A Simple HPLC Bioanalytical Method for the Determination of ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, accurate, and precise high performance chromatography (HPLC) method with spectrophotometric detection for the determination of doxorubicin hydrochloride in rat plasma. Methods: Doxorubicin hydrochloride and daunorubicin hydrochloride (internal standard, IS) were separated on a C18 ...

  13. Oral histories: a simple method of assigning chronological age to isotopic values from human dentine collagen.

    Science.gov (United States)

    Beaumont, Julia; Montgomery, Janet

    2015-01-01

    Stable isotope ratios of carbon (δ(13)C) and nitrogen (δ(15)N) in bone and dentine collagen have been used for over 30 years to estimate palaeodiet, subsistence strategy, breastfeeding duration and migration within burial populations. Recent developments in dentine microsampling allow improved temporal resolution for dietary patterns. A simple method is proposed which could be applied to human teeth to estimate chronological age represented by dentine microsamples in the direction of tooth growth, allowing comparison of dietary patterns between individuals and populations. The method is tested using profiles from permanent and deciduous teeth of two individuals. Using a diagrammatic representation of dentine development by approximate age for each human tooth (based on the Queen Mary University of London Atlas), this study estimated the age represented by each dentine section. Two case studies are shown: comparison of M1 and M2 from a 19th century individual from London, England, and identification of an unknown tooth from an Iron Age female adult from Scotland. The isotopic profiles demonstrate that variations in consecutively-forming teeth can be aligned using this method to extend the dietary history of an individual or identify an unknown tooth by matching the profiles.

  14. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight matrix.

  15. A simple method for multiday imaging of slice cultures.

    Science.gov (United States)

    Seidl, Armin H; Rubel, Edwin W

    2010-01-01

    The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.

  16. Simple Synthesis Method for Alumina Nanoparticle

    Directory of Open Access Journals (Sweden)

    Daniel Damian

    2017-11-01

    Globally, the steady increase of the human population, the expansion of urban areas and excessive industrialization, including in agriculture, have caused not only the depletion of non-renewable resources but also a rapid deterioration of the environment, with a negative impact on water quality, soil productivity and, of course, quality of life in general. This paper aims to prepare size-controlled nanoparticles of aluminum oxide using a simple synthesis method. The morphology and dimensions of the nanomaterial were investigated using modern analytical techniques: SEM/EDAX and XRD.

  17. Efficacy of bi-component cocrystals and simple binary eutectics screening using heat of mixing estimated under super cooled conditions.

    Science.gov (United States)

    Cysewski, Piotr

    2016-07-01

    The values of excess heat characterizing sets of 493 simple binary eutectic mixtures and 965 cocrystals were estimated under supercooled liquid conditions. A confusion matrix was applied as a predictive analytical tool for distinguishing between the two subsets. Among the seven considered levels of computation, the BP-TZVPD-FINE approach was found to be the most precise in terms of the lowest percentage of misclassified positive cases. The much less computationally demanding AM1 and PM7 semiempirical quantum chemistry methods are likewise worth considering for estimation of heat of mixing values. Despite the intrinsic limitations of the approach of modeling miscibility in the solid state based on component affinities in liquids under supercooled conditions, it is possible to define adequate criteria for classifying coformer pairs as simple binary eutectics or cocrystals. The precision of the prediction was found to be 12.8%, which is quite acceptable bearing in mind the simplicity of the approach. However, tuning the theoretical screening to such precision implies the exclusion of many positive cases, and this wastage exceeds 31% of cocrystals classified as false negatives. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Percutaneous Method of Management of Simple Bone Cyst

    Directory of Open Access Journals (Sweden)

    O. P. Lakhwani

    2013-01-01

    Introduction. Simple bone cysts or unicameral bone cysts are benign osteolytic lesions seen in the metadiaphysis of long bones in growing children. Various treatment modalities with variable outcomes have been described in the literature. This case report illustrates the surgical technique of a minimally invasive method of treatment. Case Study. A 14-year-old boy was diagnosed with an active simple bone cyst of the proximal humerus with pathological fracture. The patient was treated by minimally invasive percutaneous curettage with a titanium elastic nail (TENS) and allogenic bone grafting mixed with bone marrow under image intensifier guidance. Results. The pathological fracture healed and the allograft filling the cavity was well taken up. The patient achieved full range of motion with a successful outcome. Conclusion. The minimally invasive percutaneous method using an elastic intramedullary nail gives the benefit of curettage, cyst decompression and stabilization of the fracture. Allogenic bone graft fills the cavity, and healing of the lesion occurs by osteointegration. This method may be considered, with the advantage of a minimally invasive technique, in the treatment of benign cystic lesions of bone; the level of evidence was therapeutic level V.

  19. Percutaneous Method of Management of Simple Bone Cyst

    Science.gov (United States)

    Lakhwani, O. P.

    2013-01-01

    Introduction. Simple bone cysts or unicameral bone cysts are benign osteolytic lesions seen in the metadiaphysis of long bones in growing children. Various treatment modalities with variable outcomes have been described in the literature. This case report illustrates the surgical technique of a minimally invasive method of treatment. Case Study. A 14-year-old boy was diagnosed with an active simple bone cyst of the proximal humerus with pathological fracture. The patient was treated by minimally invasive percutaneous curettage with a titanium elastic nail (TENS) and allogenic bone grafting mixed with bone marrow under image intensifier guidance. Results. The pathological fracture healed and the allograft filling the cavity was well taken up. The patient achieved full range of motion with a successful outcome. Conclusion. The minimally invasive percutaneous method using an elastic intramedullary nail gives the benefit of curettage, cyst decompression and stabilization of the fracture. Allogenic bone graft fills the cavity, and healing of the lesion occurs by osteointegration. This method may be considered, with the advantage of a minimally invasive technique, in the treatment of benign cystic lesions of bone; the level of evidence was therapeutic level V. PMID:23819089

  20. A simple method for α determination

    International Nuclear Information System (INIS)

    Ho Manh Dung; Seung Yeon Cho

    2003-01-01

    The α term is a primary parameter that is used to indicate the deviation of the epithermal neutron distribution in the k0-standardization method of neutron activation analysis, k0-NAA. The calculation of α using a mathematical procedure is a challenge for some researchers. The calculation of α by the 'bare-triple monitor' method is possible using the dedicated commercial software KAYZERO®/SOLCOI®. However, when this software is not available in the laboratory it is possible to carry out the calculation of α by applying a simple iterative linear regression in any spreadsheet. This approach is described. The experimental data used in the example were obtained by the irradiation of a set of suitable monitors in the NAA no. 1 irradiation channel of the HANARO research reactor (KAERI, Korea). The results obtained by this iterative linear regression method agree well with the results calculated by the validated mathematical method. (author)

  1. Comparison of chlorzoxazone one-sample methods to estimate CYP2E1 activity in humans

    DEFF Research Database (Denmark)

    Kramer, Iza; Dalhoff, Kim; Clemmesen, Jens O

    2003-01-01

    OBJECTIVE: Comparison of a one-sample with a multi-sample method (the metabolic fractional clearance) to estimate CYP2E1 activity in humans. METHODS: Healthy, male Caucasians (n=19) were included. The multi-sample fractional clearance (Cl(fe)) of chlorzoxazone was compared with one-time-point clearance estimation (Cl(est)) at 3, 4, 5 and 6 h. Furthermore, the metabolite/drug ratios (MRs) estimated from one-time-point samples at 1, 2, 3, 4, 5 and 6 h were compared with Cl(fe). RESULTS: The concordance between Cl(est) and Cl(fe) was highest at 6 h. The minimal mean prediction error (MPE) of Cl(est) ... The one-sample estimates, Cl(est) at 3 h or 6 h, and MR at 3 h, can serve as reliable markers of CYP2E1 activity. The one-sample clearance method is an accurate, renal function-independent measure of the intrinsic activity; it is simple to use and easily applicable to humans.

  2. Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods

    Science.gov (United States)

    Rosso, Abbey; Neitlich, Peter; Smith, Robert J.

    2014-01-01

    Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
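
    The biomass-to-volume fits described above are zero-intercept (through-the-origin) regressions, which reduce to a one-line estimator for the slope. The numbers below are invented for illustration; they are not the study's data.

      import numpy as np

      # Cover-weighted volume index per plot and the corresponding harvested biomass.
      volume_index = np.array([0.5, 1.2, 2.0, 3.1, 4.4])      # illustrative units
      biomass_g_m2 = np.array([180.0, 420.0, 710.0, 1050.0, 1500.0])

      # Zero-intercept least squares: slope = sum(x*y) / sum(x*x).
      slope = np.sum(volume_index * biomass_g_m2) / np.sum(volume_index ** 2)
      print(f"bulk-density-like slope: {slope:.1f} g m-2 per unit volume index")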

  5. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.

  6. Simple equation method for nonlinear partial differential equations and its applications

    Directory of Open Access Journals (Sweden)

    Taher A. Nofal

    2016-04-01

    In this article, we focus on the exact solution of some nonlinear partial differential equations (NLPDEs), namely the Kadomtsev-Petviashvili (KP) equation, the (2+1)-dimensional breaking soliton equation and the modified generalized Vakhnenko equation, by using the simple equation method. In the simple equation method the trial condition is the Bernoulli equation or the Riccati equation. It has been shown that the method provides a powerful mathematical tool for solving nonlinear wave equations in mathematical physics and engineering problems.

  7. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    International Nuclear Information System (INIS)

    Moore, B; Brady, S; Kaufman, R; Mirro, A

    2014-01-01

    Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 mon to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1−0.4) for both the chest and abdominopelvic regions, respectively. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE

  8. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, B; Brady, S; Kaufman, R [St Jude Children' s Research Hospital, Memphis, TN (United States); Mirro, A [Washington University, St. Louis, MO (United States)

    2014-06-15

    Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 mon to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1−0.4) for both the chest and abdominopelvic regions, respectively. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
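
    The organ-dose estimate described in the two records above reduces to multiplying a patient's SSDE by an organ-specific correlation factor. The factors and SSDE value in the sketch below are placeholders, not the study's published values.

      # Hypothetical (organ dose / SSDE) correlation factors for fully covered organs.
      correlation_factors = {"liver": 0.9, "lung": 1.1}   # illustrative only

      def estimate_organ_dose(patient_ssde_mgy: float, organ: str) -> float:
          """Organ dose (mGy) ~ correlation factor x patient SSDE."""
          return correlation_factors[organ] * patient_ssde_mgy

      print(estimate_organ_dose(12.0, "liver"))   # e.g. a patient SSDE of 12 mGy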

  9. Simple approximation for estimating centerline gamma absorbed dose rates due to a continuous Gaussian plume

    International Nuclear Information System (INIS)

    Overcamp, T.J.; Fjeld, R.A.

    1987-01-01

    A simple approximation for estimating the centerline gamma absorbed dose rates due to a continuous Gaussian plume was developed. To simplify the integration of the dose integral, this approach makes use of the Gaussian cloud concentration distribution. The solution is expressed in terms of the I1 and I2 integrals which were developed for estimating long-term dose due to a sector-averaged Gaussian plume. Estimates of tissue absorbed dose rates for the new approach and for the uniform cloud model were compared to numerical integration of the dose integral over a Gaussian plume distribution

  10. The use of maturity method in estimating concrete strength

    International Nuclear Information System (INIS)

    Salama, A.E.; Abd El-Baky, S.M.; Ali, E.E.; Ghanem, G.M.

    2005-01-01

    Prediction of the early-age strength of concrete is essential for modern concrete construction as well as for the manufacturing of structural parts. Safe and economic scheduling of such critical operations as form removal and reshoring, application of post-tensioning or other mechanical treatment, and in-process transportation and rapid delivery of products should all be based upon a good grasp of the strength development of the concrete in use. For many years, it has been proposed that the strength of concrete can be related to a simple mathematical function of time and temperature, so that strength could be assessed by calculation without mechanical testing. Such functions are used to compute what is called the 'maturity' of concrete, and the computed value is believed to correlate with the strength of the concrete. With its simplicity and low cost, the application of the maturity concept as an in situ testing method has received wide attention and found its use in engineering practice. This research work investigates the use of the maturity method in estimating concrete strength. An experimental program was designed to estimate concrete strength by using the maturity method, with different concrete mixes made from available local materials. Ordinary Portland cement, crushed stone, silica fume, fly ash and admixtures with different contents were used. All the specimens were exposed to different curing temperatures (10, 25 and 40 degree C), in order to get a simplified expression of maturity that accounts for the influence of temperature. Mix designs and charts obtained from this research can be used as guide information for estimating concrete strength by using the maturity method.
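
    The maturity index referred to above is conventionally computed with the Nurse-Saul function, a sum of (temperature minus datum temperature) over time. The sketch below shows that calculation; the temperature history and datum value are illustrative assumptions, not this study's calibration.

      def nurse_saul_maturity(temps_c, dt_hours=1.0, datum_temp_c=-10.0):
          """Nurse-Saul maturity index (degC-hours): M = sum((T - T0) * dt) for T > T0."""
          return sum(max(t - datum_temp_c, 0.0) * dt_hours for t in temps_c)

      # 72 hourly readings at a constant 20 degC with a -10 degC datum temperature.
      print(nurse_saul_maturity([20.0] * 72))   # 2160 degC-hours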

  11. UV Spectrophotometric Method for the Estimation of Itopride Hydrochloride in Pharmaceutical Formulation

    Directory of Open Access Journals (Sweden)

    K. R. Gupta

    2010-01-01

    Three simple, precise and economical UV methods have been developed for the estimation of itopride hydrochloride in pharmaceutical formulations. Itopride hydrochloride in distilled water shows maximum absorbance at 258.0 nm (Method A), and its first-order derivative spectrum shows a sharp peak at 247.0 nm when n = 1 (Method B). Method C utilises the area under the curve (AUC) in the wavelength range 262.0-254.0 nm for the analysis of itopride hydrochloride. The drug was found to obey the Beer-Lambert law in the concentration range of 5-50 μg/mL for all three proposed methods. The results of the analysis were validated statistically, and recovery studies were found to be satisfactory.

  12. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.

  13. A method for estimation of fatigue properties from hardness of materials through construction of expert system

    International Nuclear Information System (INIS)

    Jeon, Woo Soo; Song, Ji Ho

    2001-01-01

    An expert system for estimation of fatigue properties from simple tensile data of material is developed, considering nearly all important estimation methods proposed so far, i.e., 7 estimation methods. The expert system is developed to utilize for the case of only hardness data available. The knowledge base is constructed with production rules and frames using an expert system shell, UNIK. Forward chaining is employed as a reasoning method. The expert system has three functions including the function to update the knowledge base. The performance of the expert system is tested using the 54 ε-N curves consisting of 381 ε-N data points obtained for 22 materials. It is found that the expert system developed has excellent performance especially for steel materials, and reasonably good for aluminum alloys

  14. Simple Calculation Programs for Biology Methods in Molecular ...

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Methods in Molecular Biology. GMAP: A program for mapping potential restriction sites. RE sites in ambiguous and non-ambiguous DNA sequence; Minimum number of silent mutations required for introducing a RE site; Set ...

  15. A simple visual estimation of food consumption in carnivores.

    Directory of Open Access Journals (Sweden)

    Katherine R Potgieter

    Belly-size ratings or belly scores are frequently used in carnivore research as a method of rating whether and how much an animal has eaten. This method provides only a rough ordinal measure of fullness and does not quantify the amount of food an animal has consumed. Here we present a method for estimating the amount of meat consumed by individual African wild dogs Lycaon pictus. We fed 0.5 kg pieces of meat to wild dogs being temporarily held in enclosures and measured the corresponding change in belly size using lateral side photographs taken perpendicular to the animal. The ratio of belly depth to body length was positively related to the mass of meat consumed and provided a useful estimate of the consumption. Similar relationships could be calculated to determine amounts consumed by other carnivores, thus providing a useful tool in the study of feeding behaviour.

  16. A simple daily soil-water balance model for estimating the spatial and temporal distribution of groundwater recharge in temperate humid areas

    Science.gov (United States)

    Dripps, W.R.; Bradbury, K.R.

    2007-01-01

    Quantifying the spatial and temporal distribution of natural groundwater recharge is usually a prerequisite for effective groundwater modeling and management. As flow models become increasingly utilized for management decisions, there is an increased need for simple, practical methods to delineate recharge zones and quantify recharge rates. Existing models for estimating recharge distributions are data intensive, require extensive parameterization, and take a significant investment of time in order to establish. The Wisconsin Geological and Natural History Survey (WGNHS) has developed a simple daily soil-water balance (SWB) model that uses readily available soil, land cover, topographic, and climatic data in conjunction with a geographic information system (GIS) to estimate the temporal and spatial distribution of groundwater recharge at the watershed scale for temperate humid areas. To demonstrate the methodology and the applicability and performance of the model, two case studies are presented: one for the forested Trout Lake watershed of north central Wisconsin, USA and the other for the urban-agricultural Pheasant Branch Creek watershed of south central Wisconsin, USA. Overall, the SWB model performs well and presents modelers and planners with a practical tool for providing recharge estimates for modeling and water resource planning purposes in humid areas. © Springer-Verlag 2007.
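
    A minimal sketch of the daily soil-water balance idea (not the WGNHS SWB code itself): water in excess of an assumed soil storage capacity, after actual evapotranspiration is removed, becomes recharge. All parameter values below are illustrative.

      def daily_swb(precip_mm, pet_mm, soil_capacity_mm=50.0):
          """Return daily recharge (mm) from precipitation and potential ET series."""
          storage, recharge = 0.0, []
          for p, e in zip(precip_mm, pet_mm):
              available = storage + p
              aet = min(e, available)                 # actual ET limited by available water
              storage = available - aet
              surplus = max(storage - soil_capacity_mm, 0.0)
              storage -= surplus                      # surplus above capacity drains downward
              recharge.append(surplus)
          return recharge

      print(daily_swb([10.0, 0.0, 25.0, 0.0, 40.0], [3.0, 3.0, 2.0, 4.0, 3.0]))
      # -> [0.0, 0.0, 0.0, 0.0, 10.0]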

  17. Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method

    Science.gov (United States)

    Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.

    2017-08-01

    Stealth technology manages the many signatures of a target; most radar systems use the radar cross section (RCS) for discriminating targets and classifying them with regard to stealth. In wartime, a target's RCS has to be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects like a cylinder, a truncated cone (frustum) and a circular flat plate is estimated with respect to parameters like size, frequency and aspect angle. Due to the difficulty of predicting the RCS exactly, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, and the optical region has its own strengths and weaknesses. Therefore, the analysis given in this study is purely based on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method for determining the RCS of simple models. In this study, not only the RCS of simple models but also that of missile-shaped and rocket-shaped models obtained by cascading the simple objects, together with their backscatter, has been computed using Matlab simulation. Rectangular plots are obtained for RCS in dBsm versus aspect angle for simple and missile-shaped objects using Matlab simulation. The treatment of RCS in this study is based on narrow-band operation.
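
    As a pointer to the flavor of physical-optics results used above, the broadside RCS of a flat plate has the closed form sigma = 4*pi*A^2/lambda^2. The plate size and frequency below are arbitrary examples, and this is only the normal-incidence special case rather than the full aspect-angle computation in the study.

      import math

      def flat_plate_rcs_broadside(area_m2: float, freq_hz: float) -> float:
          """Physical-optics broadside RCS (m2) of a flat plate: 4*pi*A^2 / lambda^2."""
          lam = 3.0e8 / freq_hz
          return 4.0 * math.pi * area_m2 ** 2 / lam ** 2

      sigma = flat_plate_rcs_broadside(1.0, 10e9)          # 1 m2 plate at 10 GHz
      print(f"{10.0 * math.log10(sigma):.1f} dBsm")        # about 41 dBsm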

  18. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the level of unrecorded alcohol consumption. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  19. A simple method to estimate cold shortening in beef

    Directory of Open Access Journals (Sweden)

    Riana Jordão Barrozo Heinemann

    2002-04-01

    The negative influence of cold shortening on meat texture is well known. Because of that, the determination of the extent of muscle contraction represents an important analytical tool for the optimization of industrial procedures. In this work, two microscopy methodologies for evaluating cold shortening were compared. Biceps femoris, Longissimus dorsi and Semimembranosus muscles from nine cattle carcasses with three different fat thickness grades were analyzed in paired fashion by both methods. The Longissimus dorsi muscle showed the shortest sarcomere length while the Semimembranosus muscle showed the longest one (p<0.05). No significant difference was found between the two methods (p>0.05), which suggests the possibility of using the simpler method for cold shortening evaluation.

  20. A SIMPLE METHOD FOR THE EXTRACTION AND QUANTIFICATION OF PHOTOPIGMENTS FROM SYMBIODINIUM SPP.

    Science.gov (United States)

    John E. Rogers and Dragoslav Marcovich. Submitted. Simple Method for the Extraction and Quantification of Photopigments from Symbiodinium spp.. Limnol. Oceanogr. Methods. 19 p. (ERL,GB 1192). We have developed a simple, mild extraction procedure using methanol which, when...

  1. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  2. A Simple Method for Retrieving Understory NDVI in Sparse Needleleaf Forests in Alaska Using MODIS BRDF Data

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2014-12-01

    Global products of leaf area index (LAI) usually show large uncertainties in sparsely vegetated areas because the understory contribution is not negligible in reflectance modeling for the case of low to intermediate canopy cover. Therefore, many efforts have been made to include understory properties in LAI estimation algorithms. Compared with the conventional data bank method, estimation of forest understory properties from satellite data is superior in studies at a global or continental scale over long periods. However, implementation of the current remote sensing method based on multi-angular observations is complicated. As an alternative, a simple method to retrieve understory NDVI (NDVIu) for sparse boreal forests was proposed in this study. The method is based on the fact that the bidirectional variation in NDVIu is smaller than that in canopy-level NDVI. To retrieve NDVIu for a certain pixel, linear extrapolation was applied using pixels within a 5 × 5 target-pixel-centered window. The NDVI values were reconstructed from the MODIS BRDF data corresponding to eight different solar-view angles. NDVIu was estimated as the average of the NDVI values corresponding to the position at which the stand NDVI had the smallest angular variation. Validation with a noise-free simulation data set yielded high agreement between estimated and true NDVIu, with R² and RMSE of 0.99 and 0.03, respectively. Using the MODIS BRDF data, we achieved an estimate of NDVIu close to the in situ measured value (0.61 vs. 0.66 for estimate and measurement, respectively) and reasonable seasonal patterns of NDVIu in 2010 to 2013. The results imply a potential application of the retrieved NDVIu to improve the estimation of overstory LAI for sparse boreal forests and ultimately to benefit studies on carbon cycle modeling over high-latitude areas.

  3. A simple and secure method to fix laparoscopic trocars in children.

    Science.gov (United States)

    Yip, K F; Tam, P K H; Li, M K W

    2006-04-01

    We introduce a simple method of fixing trocars to the abdominal wall in children. Before anchoring the trocar, a piece of Tegaderm polyurethane adhesive (3M Healthcare, St. Paul, Minnesota) is attached to the trocar. A silk stitch is anchored to neighboring skin, and then transfixed over the shaft of the trocar through the adhesive. Both inward and outward movement of the trocar can be restrained. This method is simple, fast, secure, and can be applied to trocars of any size.

  4. The efficiency of the centroid method compared to a simple average

    DEFF Research Database (Denmark)

    Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke

    Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.

  5. A fast and simple method to estimate relative, hyphal tensile-strength of filamentous fungi used to assess the effect of autophagy

    DEFF Research Database (Denmark)

    Quintanilla, Daniela; Chelius, Cynthia; Iambamrung, Sirasa

    2018-01-01

    Fungal hyphal strength is an important phenotype which can have a profound impact on bioprocess behavior. Until now, there has been no efficient method which allows its characterization. Currently available methods are very time consuming, thus compromising their applicability in strain selection and process development. To overcome this issue, a method for fast, easy and statistically verified quantification of relative hyphal tensile strength was developed. It involves off-line fragmentation in a high-shear mixer followed by quantification of fragment size using laser diffraction. The particle size distribution (PSD) is determined, with analysis time on the order of minutes. Plots of the PSD 90th percentile versus time allow estimation of the specific fragmentation rate. This novel method is demonstrated by estimating relative hyphal strength during growth in control conditions and rapamycin...

  6. A simple finite element method for boundary value problems with a Riemann–Liouville derivative

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Lu, Xiliang; Zhou, Zhi

    2016-01-01

    © 2015 Elsevier B.V. All rights reserved. We consider a boundary value problem involving a Riemann–Liouville fractional derivative of order α ∈ (3/2, 2) on the unit interval (0,1). The standard Galerkin finite element approximation converges slowly due to the presence of the singularity term x^(α−1) in the solution representation. In this work, we develop a simple technique, by transforming it into a second-order two-point boundary value problem with nonlocal low-order terms, whose solution can reconstruct directly the solution to the original problem. The stability of the variational formulation and the optimal regularity pickup of the solution are analyzed. A novel Galerkin finite element method with piecewise linear or quadratic finite elements is developed, and L²(D) error estimates are provided. The approach is then applied to the corresponding fractional Sturm–Liouville problem, and error estimates of the eigenvalue approximations are given. Extensive numerical results fully confirm our theoretical study.

  8. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    Science.gov (United States)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

    The accuracy of the estimation of optical aberrations by measuring the distorted wave front using a Hartmann-Shack wave front sensor (HSWS) is mainly dependent upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest-spot centroid, first-moment centroid, weighted center of gravity and intensity-weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimates are sensitive to the influence of reflections, scattered light and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for the estimation of optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing and adaptive windowing). As an example we use the aberrations of the human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye to emulate myopic and hypermetropic defocus values up to 2 Diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications to estimate aberrations within the typical clinically acceptable margin of a quarter Diopter, when certain pre-processing steps to reduce the impact of external factors are used.
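
    The record compares standard centroiding estimators; the sketch below illustrates two of them (plain first-moment centroid and a thresholded variant) on a synthetic sub-aperture image. It is an illustrative NumPy implementation, not the authors' code; the spot position, window size and threshold fraction are assumptions.

```python
import numpy as np

def first_moment_centroid(spot):
    """Plain first-moment (centre-of-mass) centroid of a sub-aperture image."""
    y, x = np.indices(spot.shape)
    total = spot.sum()
    return (x * spot).sum() / total, (y * spot).sum() / total

def thresholded_centroid(spot, frac=0.2):
    """First-moment centroid after subtracting a threshold, a common
    pre-processing step to suppress background and stray light."""
    clipped = np.clip(spot - frac * spot.max(), 0.0, None)
    return first_moment_centroid(clipped)

# toy sub-aperture: Gaussian spot plus uniform background noise
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((xx - 20.3) ** 2 + (yy - 11.7) ** 2) / (2 * 2.0 ** 2))
spot += 0.05 * rng.random(spot.shape)

print(first_moment_centroid(spot))   # pulled towards the image centre by background
print(thresholded_centroid(spot))    # closer to the true spot position (x=20.3, y=11.7)
```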

  9. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

    For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns. Crown Copyright (C...

  10. A simple estimation of the renal plasma flow

    International Nuclear Information System (INIS)

    Shinpo, Takako

    1987-01-01

    The renal plasma flow is conventionally determined from the excretion ratio to urine using a 131I-Hippuran renogram. In this report, we propose the renal clearance, the product of the disappearance rate coefficient and the maximum counts of the bladder, as a simple quantitative measure of renal plasma flow. The disappearance rate coefficient was calculated by fitting an exponential function to the initial slope of the disappearance curve of the heart. The renal clearance was compared with the renal plasma flow calculated by the conventional method; the results gave a high correlation coefficient of r = 0.91. The renal clearance can be calculated easily and offers useful renogram information. (author)
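
    As an illustration of the proposed index (disappearance rate coefficient times maximum bladder counts), the sketch below fits an exponential to a simulated early heart-curve segment. The renogram values, time window and units are hypothetical; this is not the authors' code.

```python
import numpy as np

def disappearance_rate(t, heart_counts):
    """Rate coefficient from the initial slope of the cardiac disappearance
    curve, assuming a single exponential C(t) = C0 * exp(-k t) over the
    early part of the renogram."""
    slope, _ = np.polyfit(t, np.log(heart_counts), 1)
    return -slope

# hypothetical renogram data (arbitrary count units)
t_early = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # minutes after injection
heart = 1000.0 * np.exp(-0.12 * t_early)         # simulated heart ROI counts
bladder_max = 4200.0                             # maximum bladder counts

k = disappearance_rate(t_early, heart)
clearance_index = k * bladder_max                # the proposed simple index
print(f"k = {k:.3f} /min, clearance index = {clearance_index:.0f}")
```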

  11. Evaluation of ductile tearing in a cracked component with a simple method (Js)

    International Nuclear Information System (INIS)

    Moulin, D.; Drubay, B.; Clement, G.; Nedelec, M.

    1995-01-01

    In the nuclear industry, it is increasingly common to perform fracture assessments of defective structures made of ductile material with the help of elastoplastic fracture mechanics relying on the parameter J. Several engineering methods have been developed in the past to calculate this parameter. These results were used to develop a practical procedure, denoted the Js method, which simply gives J as a function of the elastically calculated Je and a plastic correction factor. This method has been introduced in the A16 rule developed jointly by CEA-EdF and Novatome for fast breeder reactors, in particular in order to evaluate the loading at crack instability taking into account ductile tearing. The determination of initiation has already been presented. The determination of the loading at crack instability is examined here through two simple but representative examples using the simplified estimation of J. Predicted loadings at crack instability are compared with experimental results. This study was carried out as part of a cooperative program with the Institut de Protection et de Surete Nucleaire of the CEA. (author) 12 refs., 10 figs

  12. Simple feed-forward active control method for suppressing the shock response of a flexible cantilever beam

    International Nuclear Information System (INIS)

    Shin, Kihong; Pyo, Sangho; Lee, Young-Sup

    2009-01-01

    In this paper a 'simple' active control method (without using an error sensor and an adaptive algorithm) is proposed for reducing the residual vibration of a flexible cantilever beam excited by a shock impulse. It is assumed that the shock input can be measured and always occurs on the same point of the beam. In this case, it is shown that a much simpler active control strategy than conventional methods can be used if the system is well identified. The proposed method is verified experimentally with consideration of some practical aspects: the control performance with respect to the control point in time and the choice of frequency response function (FRF) estimators to cope with measurement noise. Experimental results show that a large attenuation of the residual vibration can be achieved using the proposed method. (technical note)

  13. A simple three dimensional wide-angle beam propagation method

    Science.gov (United States)

    Ma, Changbao; van Keuren, Edward

    2006-05-01

    The development of three dimensional (3-D) waveguide structures for chip scale planar lightwave circuits (PLCs) is hampered by the lack of effective 3-D wide-angle (WA) beam propagation methods (BPMs). We present a simple 3-D wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme along with a new 3-D wave equation splitting method. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation and comparing them with analytical solutions.

  14. An Inexpensive and Simple Method to Demonstrate Soil Water and Nutrient Flow

    Science.gov (United States)

    Nichols, K. A.; Samson-Liebig, S.

    2011-01-01

    Soil quality, soil health, and soil sustainability are concepts that are being widely used but are difficult to define and illustrate, especially to a non-technical audience. The objectives of this manuscript were to develop simple and inexpensive methodologies to both qualitatively and quantitatively estimate water infiltration rates (IR),…

  15. A Simple Method of Spectrum Processing for β-ray Measurement without Pretreatment

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Jun Woo; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-10-15

    Radioactivity analysis of β-emitting radionuclides is important because of the risk of overexposure. γ-rays have been measured with conventional detectors such as NaI(Tl) or high-purity germanium (HPGe) detectors, but β-rays are hard to detect with those detectors because of their short range. Liquid scintillation counting (LSC) has therefore been used to measure the radioactivity of pure beta emitters; although LSC has high efficiency for detecting low-energy β-rays, it produces a large amount of organic waste. To address this problem, the characterization of β-ray measurement in a plastic scintillator was carried out in this study. There have been some studies on using plastic scintillators to measure β-rays without the liquid scintillation method. A plastic scintillator is advantageous for β-ray detection because of its relatively low effective atomic number. β-ray and γ-ray spectra in a cylindrical plastic scintillator were analyzed, and a method for separating the β-ray spectrum was suggested. The method was verified with a chi-square test estimating the difference between the calculated and measured spectra, and it was successfully applied using a disc source. In future work, practical radioactive sources will be used to acquire pulse-height spectra. The method can be used for the measurement of pure β emitters without pretreatment once it has been verified for practical purposes.
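
    A minimal sketch of the kind of chi-square comparison mentioned above, between a measured and a calculated pulse-height spectrum. The binning, counts and degrees-of-freedom handling are assumptions for illustration only.

```python
import numpy as np

def reduced_chi_square(measured, calculated, n_fitted=0):
    """Pearson chi-square between measured and calculated pulse-height
    spectra, per degree of freedom; bins with zero expected counts are skipped."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    mask = calculated > 0
    chi2 = np.sum((measured[mask] - calculated[mask]) ** 2 / calculated[mask])
    return chi2 / (mask.sum() - n_fitted)

# toy 5-bin spectra (counts); a value of order one indicates agreement within statistics
print(reduced_chi_square([102, 250, 420, 180, 60], [110, 240, 400, 190, 55]))
```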

  16. Spectrophotometric estimation of ethamsylate and mefenamic Acid from a binary mixture by dual wavelength and simultaneous equation methods.

    Science.gov (United States)

    Goyal, Anju; Singhvi, I

    2008-01-01

    Two simple, accurate, economical and reproducible spectrophotometric methods for the simultaneous estimation of a two-component drug mixture of ethamsylate and mefenamic acid in combined tablet dosage form have been developed. The first method involves forming and solving simultaneous equations using 287.6 nm and 313.2 nm as the two wavelengths. The second method is based on the dual-wavelength calculation; the two wavelengths selected for the estimation of ethamsylate were 274.4 nm and 301.2 nm, while those for mefenamic acid were 304.8 nm and 320.4 nm. Both methods obey Beer's law in the concentration ranges employed. The results of analysis were validated statistically and by recovery studies.
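
    For the simultaneous-equation (Vierordt) step, the core calculation is solving a 2x2 linear system built from the absorbances of the mixture and the absorptivities of the pure components at the two wavelengths. The sketch below is illustrative only; the absorptivity matrix and absorbance values are hypothetical and would in practice come from calibration standards at 287.6 nm and 313.2 nm.

```python
import numpy as np

def simultaneous_equation(absorbances, e_matrix, path_length=1.0):
    """Solve the simultaneous equations A = (E * b) @ c for the concentrations c
    of a two-component mixture measured at two wavelengths.
    e_matrix[i, j] is the absorptivity of component j at wavelength i."""
    return np.linalg.solve(e_matrix * path_length, np.asarray(absorbances, dtype=float))

# hypothetical absorptivities per unit concentration and path length
E = np.array([[0.052, 0.031],     # 287.6 nm: ethamsylate, mefenamic acid
              [0.018, 0.046]])    # 313.2 nm: ethamsylate, mefenamic acid
A = np.array([0.415, 0.322])      # measured absorbances of the mixture

c_etham, c_mefenamic = simultaneous_equation(A, E)
print(c_etham, c_mefenamic)       # concentrations in the calibration units
```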

  17. Simple methods of aligning four-circle diffractometers with crystal reflections

    Energy Technology Data Exchange (ETDEWEB)

    Mitsui, Y [Tokyo Univ. (Japan). Faculty of Pharmaceutical Sciences

    1979-08-01

    Simple methods of aligning four-circle diffractometers with crystal reflections are devised. They provide the means to check (1) the perpendicularity of the χ plane to the incident beam, (2) the zero point of 2θ and the linearity of the focus-χ center-receiving aperture, and (3) the zero point of χ.

  18. Simple PVT quantitative method of Kr under high pure N2 condition

    International Nuclear Information System (INIS)

    Li Xuesong; Zhang Zibin; Wei Guanyi; Chen Liyun; Zhai Lihua

    2005-01-01

    A simple PVT quantitative method for Kr in high-purity N2 was studied. The pressure, volume and temperature of the sample gas were measured by three individual methods to obtain the total sample amount with good uncertainty. The ratio of Kr/N2 could be measured with a GAM 400 quadrupole mass spectrometer, so the quantity of Kr could be calculated from the two measured quantities above. This method is suited to the quantitative analysis of other simply composed noble gas samples in a high-purity carrier gas. (authors)

  19. A simple dissolved metals mixing method to produce high-purity MgTiO3 nanocrystals

    International Nuclear Information System (INIS)

    Pratapa, Suminar; Baqiya, Malik A.; Istianah,; Lestari, Rina; Angela, Riyan

    2014-01-01

    A simple dissolved-metals mixing method has been effectively used to produce high-purity MgTiO3 (MT) nanocrystals. The method involves mixing independently dissolved magnesium and titanium metal powders in hydrochloric acid, followed by calcination. The phase purity and nanocrystallinity were determined from laboratory X-ray diffraction data, on which Rietveld-based analyses were performed. The results showed that the method yielded only one type of magnesium titanate powder, i.e. MgTiO3, with no Mg2TiO4 or MgTi2O5 phases. The presence of residual rutile or periclase was controlled by adding up to 5 mol% excess Mg in the stoichiometric mixing. The method also resulted in MT nanocrystals with an estimated average crystallite size of 76±2 nm after calcination at 600°C and 150±4 nm at 800°C. A transmission electron micrograph confirmed the formation of the nanocrystallites.

  20. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  1. Spectrophotometric methods for simultaneous estimation of pantoprazole and itopride hydrochloride in capsules

    Directory of Open Access Journals (Sweden)

    Krishna R. Gupta

    2010-12-01

    Full Text Available Three simple, accurate and economical methods for the simultaneous estimation of pantoprazole and itopride hydrochloride in two-component solid dosage forms have been developed. The proposed methods employ the simultaneous equation method (Method A), the absorbance ratio method (Method B) and the multicomponent mode of analysis (Method C). All these methods utilize distilled water as the solvent, in which pantoprazole shows maximum absorbance at 289.0 nm and itopride hydrochloride at 258.0 nm; the drugs also show an isoabsorptive point at 270.0 nm. For the multicomponent method, sampling wavelengths of 289.0 nm, 270.0 nm and 239.5 nm were selected. All these methods showed linearity in the ranges of 4-20 µg/mL and 15-75 µg/mL for pantoprazole and itopride hydrochloride, respectively. The results of analysis have been validated statistically and by recovery studies.

  2. A Simple Method Using a Topography Correction Coefficient for Estimating Daily Distribution of Solar Irradiance in Complex Terrain

    International Nuclear Information System (INIS)

    Yun, J.I.

    2009-01-01

    Accurate solar radiation data are critical to evaluate major physiological responses of plants. For most upland crops and orchard plants growing in complex terrain, however, it is not easy for farmers or agronomists to access solar irradiance data. Here we suggest a simple method using a topographical coefficient based on sun-slope geometry to estimate daily solar irradiance on any sloping surface from global solar radiation measured at a nearby weather station. An hourly solar irradiance ratio (Wi) between the sloping and horizontal surface is defined as the product of the relative solar intensity (ki) and the slope irradiance ratio (ri) at an hourly interval. ki is the ratio of hourly solar radiation to the 24-hour cumulative radiation on a horizontal surface under clear-sky conditions. ri is the ratio of clear-sky radiation on a given slope to that on a horizontal reference. The daily coefficient for slope correction is simply the sum of Wi for each date. We calculated daily solar irradiance at 8 side-slope locations circumventing a cone-shaped parasitic volcano (ca. 570 m diameter for the bottom circle and 90 m bottom-to-top height) by multiplying these coefficients by the global solar radiation measured horizontally. Comparison with the measured slope irradiance from April 2007 to March 2008 resulted in a root mean square error (RMSE) of 1.61 MJ m−2 for the whole period, but the RMSE for April to October (i.e., the major cropping season in Korea) was much lower and satisfied the 5% error tolerance for radiation measurement. The RMSE was smallest in October regardless of slope aspect, and the aspect-dependent variation of RMSE was greatest in November. Annual variation in RMSE was greatest on north- and south-facing slopes, followed by southwest, southeast, and northwest slopes in decreasing order. Once the coefficients are prepared, global solar radiation data from nearby stations can be easily converted to the solar irradiance map at landscape
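
    A minimal sketch of the coefficient described above (Wi = ki * ri summed over the day, then applied to the measured horizontal global radiation). The hourly clear-sky values and the measured daily total below are hypothetical placeholders; they would come from a clear-sky model and a nearby weather station.

```python
import numpy as np

def daily_slope_coefficient(clear_horiz, clear_slope):
    """Daily topography correction coefficient: sum over hours of
    W_i = k_i * r_i, where k_i is the hourly fraction of daily clear-sky
    radiation on a horizontal surface and r_i is the ratio of clear-sky
    radiation on the slope to that on the horizontal."""
    clear_horiz = np.asarray(clear_horiz, dtype=float)
    clear_slope = np.asarray(clear_slope, dtype=float)
    k = clear_horiz / clear_horiz.sum()                      # relative solar intensity
    r = np.where(clear_horiz > 0, clear_slope / np.where(clear_horiz > 0, clear_horiz, 1.0), 0.0)
    return float(np.sum(k * r))

# hypothetical hourly clear-sky irradiance (MJ m-2) for one day
clear_horiz = [0, 0.2, 0.8, 1.5, 2.1, 2.4, 2.4, 2.1, 1.5, 0.8, 0.2, 0]
clear_slope = [0, 0.1, 0.6, 1.3, 2.0, 2.5, 2.6, 2.4, 1.8, 1.0, 0.3, 0]

coeff = daily_slope_coefficient(clear_horiz, clear_slope)
measured_global = 14.2                        # MJ m-2, daily total at the station
print(coeff, coeff * measured_global)         # coefficient and estimated slope irradiance
```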

  3. A comparison of two methods for estimating the technical costs of external beam radiation therapy

    International Nuclear Information System (INIS)

    Hayman, James A.; Lash, Kathy A.; Tao, May L.; Halman, Marc A.

    2000-01-01

    Purpose: To accurately assess the cost-effectiveness of treatment with external beam radiation, it is necessary to have accurate estimates of its cost. One of the most common methods for estimating technical costs has been to convert Medicare charges into costs using Medicare Cost-to-Charge Ratios (CCR). More recently, health care organizations have begun to invest in sophisticated cost-accounting systems (CAS) that are capable of providing procedure-specific cost estimates. The purpose of this study was to examine whether these competing approaches result in similar cost estimates for four typical courses of external beam radiation therapy (EBRT). Methods and Materials: Technical costs were estimated for the following treatment courses: 1) a palliative 'simple' course of 10 fractions using a single field without blocks; 2) a palliative 'complex' course of 10 fractions using two opposed fields with custom blocks; 3) a curative course of 30 fractions for breast cancer using tangent fields followed by an electron beam boost; and 4) a curative course of 35 fractions for prostate cancer using CT-planning and a 4-field technique. Costs were estimated using the CCR approach by multiplying the number of units of each procedure billed by its Medicare charge and CCR and then summing these costs. Procedure-specific cost estimates were obtained from a cost-accounting system, and overall costs were then estimated for the CAS approach by multiplying the number of units billed by the appropriate unit cost estimate and then summing these costs. All costs were estimated using data from 1997. The analysis was also repeated using data from another academic institution to estimate their costs using the CCR and CAS methods, as well as the appropriate relative value units (RVUs) and conversion factor from the 1997 Medicare Fee Schedule to estimate Medicare reimbursement for the four treatment courses. Results: The estimated technical costs for the CCR vs. CAS approaches for the four

  4. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).

  5. A simple method for generating exactly solvable quantum mechanical potentials

    CERN Document Server

    Williams, B W

    1993-01-01

    A simple transformation method permitting the generation of exactly solvable quantum mechanical potentials from special functions solving second-order differential equations is reviewed. This method is applied to Gegenbauer polynomials to generate an attractive radial potential. The relationship of this method to the determination of supersymmetric quantum mechanical superpotentials is discussed, and the superpotential for the radial potential is also derived. (author)

  6. Simple expressions to estimate the consequences of a RIA in a PWR

    International Nuclear Information System (INIS)

    Riverola Gurruchaga, J.

    2010-01-01

    The analysis of reactivity insertion accidents (RIA) for the current reactor fleet is gaining importance. Owing to the reconsideration of the clad failure mechanisms evidenced in experiments over the past two decades, a significant change in the regulatory environment is expected. Verification of the revised criteria for core coolability and clad integrity, taking into consideration PCMI or ballooning phenomena, will require the adoption of advanced calculation methods that take advantage of 3D kinetics and a more realistic simulation basis than today. However, these methods entail the use of relatively complex codes whose results are sometimes difficult to compare with those obtained by other authors and methods. In the present paper, we review the most important parameters related to the likely acceptance criteria and present simple expressions for fuel temperature, pulse width, and fuel enthalpy during the transient. These expressions have been derived from the Nordheim-Fuchs theoretical model, simplified appropriately in terms of its fundamental parameters, such as ejected rod worth, delayed neutron fraction and heat flux peaking factor, i.e. y = f(ρ, β, Fq, ...), and finally adjusted by regression against results obtained by the author with a complete conservative RELAP-PARCS model and by other authors using advanced codes in the literature. These expressions are generally valid for typical PWRs with three and four loops, 12 and 14 feet active length, and up-to-date fuel designs. Because of their simplicity, these expressions are no substitute for a complete analysis, but they allow expected values to be estimated and trends to be analyzed. Finally, examples of the application to actual Spanish core reloads are provided. (authors)
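
    For orientation, the sketch below evaluates the standard textbook Nordheim-Fuchs relations for pulse width and adiabatic temperature rise under a prompt-only, adiabatic approximation; it does not reproduce the author's regressed expressions, and all parameter values are hypothetical.

```python
# Textbook Nordheim-Fuchs pulse characteristics (prompt-only, adiabatic feedback).
rho = 1.6e-2        # ejected rod worth (dk/k), assumed
beta = 6.5e-3       # delayed neutron fraction, assumed
Lambda = 2.0e-5     # prompt neutron generation time (s), assumed
alpha_D = 3.0e-5    # magnitude of Doppler feedback (dk/k per K), assumed

omega0 = (rho - beta) / Lambda            # initial inverse period (1/s)
fwhm = 3.52 / omega0                      # pulse full width at half maximum (s)
delta_T = 2.0 * (rho - beta) / alpha_D    # adiabatic fuel temperature rise (K)

print(f"pulse width ~ {fwhm * 1e3:.1f} ms, temperature rise ~ {delta_T:.0f} K")
```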

  7. A recommended procedure for estimating the cosmic-ray spectral parameter of a simple power law

    CERN Document Server

    Howell, L W

    2002-01-01

    A simple power law model with a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV. Two procedures for estimating α1 -- the method of moments and maximum likelihood (ML) -- are developed and their statistical performance is compared. The ML procedure is shown to be the superior approach and is then generalized for application to real cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution and the inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector, as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives.
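
    The ML estimator below is the standard closed form for an unbounded power law above a threshold energy; the report's extensions (finite energy range, detector response, trade studies) are not reproduced, and the sample generation is purely a self-test with assumed values.

```python
import numpy as np

def ml_spectral_index(energies, e_min):
    """Maximum-likelihood spectral index for dN/dE proportional to E^(-alpha)
    above e_min, assuming an unbounded power law and an ideal detector."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    return 1.0 + e.size / np.sum(np.log(e / e_min))

# quick self-test: draw from a power law with alpha = 2.7 by inverse transform sampling
rng = np.random.default_rng(1)
alpha_true, e_min = 2.7, 1.0
u = rng.random(200_000)
sample = e_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
print(ml_spectral_index(sample, e_min))   # should be close to 2.7
```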

  8. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of the response, and a 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  9. Estimation of ibuprofen and famotidine in tablets by second order derivative spectrophotometry method

    Directory of Open Access Journals (Sweden)

    Dimal A. Shah

    2017-02-01

    Full Text Available A simple and accurate method for the analysis of ibuprofen (IBU) and famotidine (FAM) in their combined dosage form was developed using second-order derivative spectrophotometry. IBU and FAM were quantified using second-derivative responses at 272.8 nm and 290 nm in the spectra of their solutions in methanol. The calibration curves were linear in the concentration ranges of 100–600 μg/mL for IBU and 5–25 μg/mL for FAM. The method was validated and found to be accurate and precise. The developed method was successfully applied for the estimation of IBU and FAM in their combined dosage form.

  10. Simple statistical methods for software engineering data and patterns

    CERN Document Server

    Pandian, C Ravindranath

    2015-01-01

    Although there are countless books on statistics, few are dedicated to the application of statistical methods to software engineering. Simple Statistical Methods for Software Engineering: Data and Patterns fills that void. Instead of delving into overly complex statistics, the book details simpler solutions that are just as effective and connect with the intuition of problem solvers.Sharing valuable insights into software engineering problems and solutions, the book not only explains the required statistical methods, but also provides many examples, review questions, and case studies that prov

  11. Relationship of transpiration and evapotranspiration to solar radiation and spectral reflectance in soybean [Glycine max] canopies: A simple method for remote sensing of canopy transpiration

    International Nuclear Information System (INIS)

    Choi, E.N.; Inoue, Y.

    2004-01-01

    The study investigated the diurnal and seasonal dynamics of evapotranspiration (ET) and transpiration (Tr) in a soybean canopy, as well as the relationships among ET, Tr, solar radiation and remotely sensed spectral reflectance. The eddy covariance method (ECM) and the stem heat balance method (SHBM) were used for independent measurement of ET and Tr, respectively. Micrometeorological, soil and spectral reflectance data were acquired for the entire growing season. The instantaneous values of canopy Tr estimated by the SHBM and of ET by the ECM were well synchronized with each other, and both were strongly affected by solar radiation. The daily values of canopy Tr increased rapidly with increasing leaf area index (LAI) and approached ET even at low LAI values of 1.5-2. The daily values of ET were moderately correlated with global solar radiation (Rs), and more closely with the potential evapotranspiration (ETp) estimated by the 'radiation method', which supports the effectiveness of the simple radiation method for estimating evapotranspiration. The ratio Tr/ET, as well as the ratio of ground heat flux (G) to Rs (G/Rs), was closely related to LAI, and LAI was a key variable in determining the energy partitioning between soil and vegetation. It was clearly shown that a remotely sensed vegetation index such as SAVI (soil-adjusted vegetation index) is effective for estimating LAI and further useful for directly estimating energy partitioning between soil and vegetation. G and Tr/ET were both well estimated by the vegetation index. It was concluded that the combination of a simple radiation method with remotely sensed information can provide useful information on energy partitioning and Tr/ET in vegetation canopies.

  12. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Full Text Available Three new simple, economic spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method involves determination of nabumetone at its absorption maximum of 330 nm, the second method uses the area under the curve in the wavelength range of 326-334 nm, and the third method uses the first-order derivative spectrum with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 for the absorption maximum, area under the curve and first-order derivative methods, respectively. Results of analysis were validated statistically and by performing recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.

  13. The Application Research of Inverse Finite Element Method for Frame Deformation Estimation

    Directory of Open Access Journals (Sweden)

    Yong Zhao

    2017-01-01

    Full Text Available A frame deformation estimation algorithm is investigated for the purpose of real-time control and health monitoring of flexible lightweight aerospace structures. The inverse finite element method (iFEM) for beam deformation estimation was recently proposed by Gherlone and his collaborators. The methodology uses a least-squares principle involving the section strains of Timoshenko theory for stretching, torsion, bending and transverse shearing. The proposed methodology is based on strain-displacement relations only, without invoking force equilibrium. Thus, the displacement fields can be reconstructed without knowledge of the structural mode shapes, material properties or applied loading. In this paper, the number of locations where the section strains are evaluated in the iFEM is discussed first, and the algorithm is subsequently investigated through a simply supported beam and an experimental aluminum wing-like frame model under end-node force loading. The estimation results from the iFEM are compared with reference displacements from optical measurement and computational analysis, and the accuracy of the estimation is quantified by the root-mean-square error and the percentage difference error.

  14. Accurate and simple method for quantification of hepatic fat content using magnetic resonance imaging: a prospective study in biopsy-proven nonalcoholic fatty liver disease.

    Science.gov (United States)

    Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki

    2010-12-01

    To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
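
    For context, the sketch below evaluates the common textbook fat signal fraction for double-echo (in-phase/opposed-phase) chemical-shift imaging. It is not the study's calibrated procedure: the phantom calibration is not reproduced, T2* decay is ignored, and the ROI signal values are hypothetical.

```python
def fat_signal_fraction(s_in_phase, s_opposed_phase):
    """Simple fat signal fraction from double-echo chemical-shift imaging,
    FF = (S_IP - S_OP) / (2 * S_IP); ignores T2* decay and phantom calibration."""
    return (s_in_phase - s_opposed_phase) / (2.0 * s_in_phase)

print(fat_signal_fraction(420.0, 300.0))   # hypothetical ROI means -> about 0.14
```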

  15. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, namely stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for variance replication. Simple random sampling with sample sizes of 462 to 561 gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained a moderate variance with a sample size of 467. With jackknife and systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952 the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives the minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.

  16. The modified simple equation method for solving some fractional ...

    Indian Academy of Sciences (India)

    ... and processes in various areas of natural science. Thus, many effective and powerful methods have been established and improved. In this study, we establish exact solutions of the time-fractional biological population model equation and the nonlinear fractional Klein–Gordon equation by using the modified simple equation ...

  17. A simple method of dosimetry for E-beam radiation

    International Nuclear Information System (INIS)

    Spencer, D.S.; Thalacker, V.P.; Chasman, J.N.; Siegel, S.

    1985-01-01

    A simple method utilizing a photochromic 'intensity label' for monitoring electron-beam sources was evaluated. The labels exhibit a color change upon exposure to UV or e-beam radiation. A correlation was found between absorbed energy and Gardner Color Index at low electron-beam doses. (author)

  18. Inventory estimation for nuclear fuel reprocessing systems

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.

    1987-01-01

    The accuracy of nuclear material accounting methods for nuclear fuel reprocessing facilities is limited by nuclear material inventory variations in the solvent extraction contactors, which affect the separation and purification of uranium and plutonium. Since in-line methods for measuring contactor inventory are not available, simple inventory estimation models are being developed for mixer-settler contactors operating at steady state with a view toward improving the accuracy of nuclear material accounting methods for reprocessing facilities. The authors investigated the following items: (1) improvements in the utility of the inventory estimation models, (2) extension of improvements to inventory estimation for transient nonsteady-state conditions during, for example, process upset or throughput variations, and (3) development of simple inventory estimation models for reprocessing systems using pulsed columns

  19. Simple and convenient method for culturing anaerobic bacteria.

    OpenAIRE

    Behbehani, M J; Jordan, H V; Santoro, D L

    1982-01-01

    A simple and convenient method for culturing anaerobic bacteria is described. Cultures can be grown in commercially available flasks normally used for preparation of sterile external solutions. A special disposable rubber flask closure maintains anaerobic conditions in the flask after autoclaving. Growth of a variety of anaerobic oral bacteria was comparable to that obtained after anaerobic incubation of broth cultures in Brewer Anaerobic Jars.

  20. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    Science.gov (United States)

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and for unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
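
    An illustrative sketch of the core idea (detecting linear dependencies among the columns of an output sensitivity matrix); it does not reproduce the paper's analytical derivations, and the toy model, function names and tolerance are assumptions.

```python
import numpy as np

def correlated_parameter_groups(sensitivity, tol=1e-8):
    """Detect linear dependencies among the columns of an output sensitivity
    matrix (rows: stacked outputs over time, columns: parameters). Near-zero
    singular values indicate non-identifiable parameter combinations; the
    corresponding right-singular vectors give their coefficients."""
    u, s, vt = np.linalg.svd(sensitivity, full_matrices=False)
    deficient = s < tol * s.max()
    return s, vt[deficient]           # singular values and null-space directions

# toy example: the first two sensitivity columns are identical, so only the
# sum of parameters p1 + p2 is identifiable
t = np.linspace(0.0, 5.0, 50)
S = np.column_stack([np.exp(-t), np.exp(-t), np.cos(t)])
sing_vals, null_dirs = correlated_parameter_groups(S)
print(sing_vals)     # one singular value is ~0
print(null_dirs)     # ~[0.707, -0.707, 0] -> p1 and p2 are pairwise correlated
```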

  1. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  2. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    Science.gov (United States)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h-1Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
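
    A minimal sketch of one SORT realization for a single sub-volume, assuming the reference dN/dz is represented by a sample of reference redshifts; the selection-function correction and the repetition over overlapping sub-volumes described above are omitted, and all input values are hypothetical.

```python
import numpy as np

def sort_recover(z_uncertain, z_reference, rng):
    """One SORT realization for a single sub-volume: draw as many redshifts
    from the reference distribution as there are uncertain measurements and
    assign them so that the rank order of the original measurements is kept."""
    z_uncertain = np.asarray(z_uncertain, dtype=float)
    draws = np.sort(rng.choice(z_reference, size=z_uncertain.size, replace=True))
    recovered = np.empty_like(draws)
    recovered[np.argsort(z_uncertain)] = draws   # preserve the original rank order
    return recovered

rng = np.random.default_rng(42)
z_ref = rng.normal(0.5, 0.1, 10_000)         # stand-in for the reference dN/dz
z_obs = np.array([0.42, 0.55, 0.48, 0.61])   # noisy redshifts in the sub-volume
print(sort_recover(z_obs, z_ref, rng))
```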

  3. A simple method for one-loop renormalization in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2013-08-01

    We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.

  4. A simple approach to estimate soil organic carbon and soil co/sub 2/ emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (soil organic carbon) and soil CO2 (carbon dioxide) emission are among the indicators of carbon sequestration and hence of global climate change. Researchers in developed countries benefit from advanced technologies to estimate C (carbon) sequestration. However, access to the latest technologies has always been challenging in developing countries for conducting such estimates. This paper presents a simple and comprehensive approach for estimating SOC and soil CO2 emission from arable and forest soils. The approach includes various protocols that can be followed in laboratories of research organizations or academic institutions equipped with basic research instruments and technology. The protocols involve soil sampling, sample analysis for selected properties, and the use of the widely tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO2 emission on a short- and long-term basis for global climate change assessment studies. (author)

  5. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated keff. In the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated keff. (author)

  6. A simple method for DNA isolation from Xanthomonas spp.

    Directory of Open Access Journals (Sweden)

    Gomes Luiz Humberto

    2000-01-01

    Full Text Available A simple DNA isolation method was developed with routine chemicals that yields preparations of high quality and integrity when compared to some of the most well-known protocols. The method described does not require the use of lysing enzymes or a water bath, and the DNA is obtained within 40 minutes. The amount of nucleic acid extracted (measured in terms of absorbance at 260 nm) from strains of Xanthomonas spp., Pseudomonas spp. and Erwinia spp. was two to five times higher than that of the most commonly used method.

  7. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    Science.gov (United States)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative to the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented and the mathematical model and calculation procedure, which are used to estimate response function based on HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases, which are calculated from the HHT time-frequency method, are generally more stable and reliable than those determined from the simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter minimises the estimation bias caused by the non-stationary characteristics of the MT data.

  8. A simple and rapid method for high-resolution visualization of single-ion tracks

    Directory of Open Access Journals (Sweden)

    Masaaki Omichi

    2014-11-01

    Full Text Available Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N,N'-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  9. A simple and rapid method for high-resolution visualization of single-ion tracks

    Energy Technology Data Exchange (ETDEWEB)

    Omichi, Masaaki [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017 (Japan); Choi, Wookjin; Sakamaki, Daisuke; Seki, Shu, E-mail: seki@chem.eng.osaka-u.ac.jp [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Tsukuda, Satoshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai, Miyagi 980-8577 (Japan); Sugimoto, Masaki [Japan Atomic Energy Agency, Takasaki Advanced Radiation Research Institute, Gunma, Gunma 370-1292 (Japan)

    2014-11-15

    Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N, N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  10. Tracing and quantifying groundwater inflow into lakes using a simple method for radon-222 analysis

    Directory of Open Access Journals (Sweden)

    T. Kluge

    2007-09-01

    Full Text Available Due to its high activities in groundwater, the radionuclide 222Rn is a sensitive natural tracer to detect and quantify groundwater inflow into lakes, provided the comparatively low activities in the lakes can be measured accurately. Here we present a simple method for radon measurements in the low-level range down to 3 Bq m−3, appropriate for groundwater-influenced lakes, together with a concept to derive inflow rates from the radon budget in lakes. The analytical method is based on a commercially available radon detector and combines the advantages of established procedures with regard to efficient sampling and sensitive analysis. Large volume (12 l) water samples are taken in the field and analyzed in the laboratory by equilibration with a closed air loop and alpha spectrometry of radon in the gas phase. After successful laboratory tests, the method has been applied to a small dredging lake without surface in- or outflow in order to estimate the groundwater contribution to the hydrological budget. The inflow rate calculated from a 222Rn balance for the lake is around 530 m³ per day, which is comparable to the results of previous studies. In addition to the inflow rate, the vertical and horizontal radon distribution in the lake provides information on the spatial distribution of groundwater inflow to the lake. The simple measurement and sampling technique encourages further use of radon to examine groundwater-lake water interaction.
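
    The quantification step can be pictured with a steady-state radon balance in which groundwater inflow is the only radon source and radioactive decay plus gas exchange to the atmosphere are the sinks; the sketch below solves that balance for the inflow rate. The balance form and all numerical values are illustrative assumptions, not the parameters of the study lake.

    ```python
    # Illustrative steady-state 222Rn balance for a lake:
    #   Q_gw * C_gw  =  lambda * C_lake * V  +  k_gas * C_lake * A
    # solved for the groundwater inflow rate Q_gw (m^3/day).
    import math

    LAMBDA_RN = math.log(2) / 3.82      # 222Rn decay constant, 1/day (half-life 3.82 d)

    def groundwater_inflow(c_lake, c_gw, volume, area, k_gas):
        """c_lake, c_gw in Bq/m^3; volume in m^3; area in m^2; k_gas in m/day."""
        sinks = LAMBDA_RN * c_lake * volume + k_gas * c_lake * area
        return sinks / c_gw

    # Hypothetical numbers, for illustration only:
    print(groundwater_inflow(c_lake=50.0, c_gw=5000.0,
                             volume=2.0e5, area=4.0e4, k_gas=0.5))   # ~560 m^3/day
    ```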

  11. Lowest-order constrained variational method for simple many-fermion systems

    International Nuclear Information System (INIS)

    Alexandrov, I.; Moszkowski, S.A.; Wong, C.W.

    1975-01-01

    The authors study the potential energy of many-fermion systems calculated by the lowest-order constrained variational (LOCV) method of Pandharipande. Two simple two-body interactions are used. For a simple hard-core potential in a dilute Fermi gas, they find that the Huang-Yang exclusion correction can be used to determine a healing distance. The result is close to the older Pandharipande prescription for the healing distance. For a hard core plus attractive exponential potential, the LOCV result agrees closely with the lowest-order separation method of Moszkowski and Scott. They find that the LOCV result has a shallow minimum as a function of the healing distance at the Moszkowski-Scott separation distance. The significance of the absence of a Brueckner dispersion correction in the LOCV result is discussed. (Auth.)

  12. Age-specific effective doses for pediatric MSCT examinations at a large children's hospital using DLP conversion coefficients: a simple estimation method

    International Nuclear Information System (INIS)

    Thomas, Karen E.; Wang, Bo

    2008-01-01

    There is a need for an easily accessible method for effective dose estimation in pediatric CT. The aim was to estimate effective doses for a variety of pediatric neurological and body CT examinations in five age groups using recently published age- and region-specific dose length product (DLP) to effective dose conversion coefficients. A retrospective review was performed of 1,431 consecutive CT scans over a 12-week period using age- and weight-adjusted CT protocols. Age- and region-specific DLP to effective dose conversion coefficients were applied to console-displayed DLP data. Effective dose estimates for single-phase head CT scans in the neonatal and 1-, 5-, 10- and 15-year-old age groups were 4.2, 3.6, 2.4, 2.0 and 1.4 mSv, respectively. For abdomen/pelvis CT scans the corresponding effective doses were 13.1, 11.1, 8.4, 8.9 and 5.9 mSv. The range of pediatric CT effective doses is wide, from ultralow dose protocols (<1 mSv) to extended-coverage body examinations (10-15 mSv). Age- and region-specific pediatric DLP to effective dose conversion coefficients provide an accessible and user-friendly method for estimating pediatric CT effective doses that is available to radiologists working without medical physics support. (orig.)
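
    The arithmetic behind such estimates is a single multiplication, E ≈ k(age, region) × DLP. The sketch below wires this up with a small lookup table; the k values are placeholders, and real use requires the published age- and region-specific conversion coefficients.

    ```python
    # Effective dose from console-displayed DLP:  E (mSv) = k(age, region) * DLP (mGy*cm)
    # The k values below are placeholders for illustration; real work must use the
    # published age- and region-specific conversion coefficients.
    K_COEFF = {                      # mSv per mGy*cm, hypothetical values
        ("head", "5y"): 0.004,
        ("abdomen_pelvis", "5y"): 0.020,
    }

    def effective_dose(dlp_mgy_cm, region, age_group):
        return K_COEFF[(region, age_group)] * dlp_mgy_cm

    print(effective_dose(480.0, "abdomen_pelvis", "5y"))  # 9.6 mSv with these placeholders
    ```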

  13. A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes

    Science.gov (United States)

    Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh

    2018-01-01

    Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and log_e(Age), we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
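
    A minimal sketch of the described fit, under the assumption of synthetic (Q, age) pairs: regress ln(age) on Q by least squares and exponentiate to obtain age = a·exp(b·Q). The data below are fabricated placeholders that only demonstrate the transformation, not stellar measurements.

    ```python
    # Fit age = a * exp(b * Q) via the log-linear form ln(age) = ln(a) + b*Q.
    import numpy as np

    Q = np.array([0.1, 0.4, 0.8, 1.2, 1.6, 2.0])          # residual FUV index (made up)
    age_gyr = np.array([0.6, 1.0, 2.1, 3.8, 6.5, 11.0])   # ages in Gyr (made up)

    b, ln_a = np.polyfit(Q, np.log(age_gyr), 1)           # slope, intercept of ln(age) vs Q
    a = np.exp(ln_a)

    def age_from_q(q):
        return a * np.exp(b * q)

    print(round(float(a), 3), round(float(b), 3), round(float(age_from_q(1.0)), 2))
    ```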

  14. Better Fire Emissions Estimates for Tricky Species Illustrated with a Simple Empirical Burn-to-Sample Plume Model

    Science.gov (United States)

    Chatfield, R. B.; Andreae, M. O.; Lareau, N.

    2017-12-01

    Methodologies for estimating emission factors (EFs) and broader emission relationships (ERs) (e.g., for O3 production or aerosol absorption) have been difficult to make accurate and convincing; this is largely due to non-fire effects on both CO2 and fire-emitted trace species. We present a new view of these multiple effects as they affect downwind tracer samples observed by aircraft in NASA's ARCTAS and SEAC4RS airborne missions. This view leads to our method for estimating ERs and EFs that allows spatially detailed views focusing on individual samples, a Mixed Effects Emission Ratio Technique (MERET). We concentrate on presenting a generalized viewpoint: a simple idealized model of a fire plume entraining air from near-flames upward and then outward to a sampling point, a view based on observations of typical situations. The actual evolution of a plume can depend intricately on the full history of entrainment, the entrained concentration levels of CO2 and tracer species, and mixing. Observations suggest that our simple plume model, with just two (analyzed) values for entrained CO2 and one or potentially two values for environmental concentrations of each tracer, can serve surprisingly well for mixed-effects regression estimates. Such detail appears imperative for long-lived gases like CH4, CO, and N2O. In particular, it is difficult to distinguish fire-sourced emissions from air entrained near the flames, entrained in a way proportional to fire intensity. These entrained concentrations may vary significantly from those later in the plume's evolution. In addition, such detail also highlights the behavior of emissions that react on the path to sampling, e.g. fire-sourced or entrained urban NOx. Some caveats regarding poor sampling situations, and some warning signs, based on this empirical plume description and on MERET analyses, are demonstrated. Some information is available when multiple tracers are analyzed. MERET estimates for ERs of short-lived and of these long-lived species are ...

  15. Derivative Spectrophotometric Method for Estimation of Antiretroviral Drugs in Fixed Dose Combinations

    Science.gov (United States)

    P.B., Mohite; R.B., Pandhare; S.G., Khanage

    2012-01-01

    Purpose: Lamivudine and zidovudine are nucleoside analogues used as antiretroviral agents. Both drugs are available in tablet dosage forms with doses of 150 mg for LAM and 300 mg for ZID, respectively. Method: The method employed is based on first-order derivative spectroscopy. Wavelengths of 279 nm and 300 nm were selected for the estimation of Lamivudine and Zidovudine, respectively, by taking the first-order derivative spectra. The concentration of both drugs was determined by the proposed method. The results of the analysis have been validated statistically and by recovery studies as per ICH guidelines. Result: Both drugs obey Beer's law in the concentration range 10-50 μg mL-1 for LAM and ZID, with regression coefficients of 0.9998 and 0.9999, intercepts of –0.0677 and –0.0043, and slopes of 0.0457 and 0.0391 for LAM and ZID, respectively. The accuracy and reproducibility results are close to 100% with RSD below 2%. Conclusion: Simple, accurate, precise, sensitive and economical procedures for the simultaneous estimation of Lamivudine and Zidovudine in tablet dosage form have been developed. PMID:24312779
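
    The core of a first-order derivative method is differentiating the absorbance spectrum with respect to wavelength and reading the derivative amplitude at the working wavelengths; the sketch below does this for a synthetic spectrum and converts the signal to concentration with a linear calibration. The spectrum shape and calibration constants are illustrative assumptions, not the reported values.

    ```python
    # First-order derivative spectrophotometry sketch: evaluate dA/dlambda at a working
    # wavelength, then convert to concentration with a previously established calibration.
    import numpy as np

    wavelengths = np.arange(220.0, 340.0, 1.0)                   # nm
    absorbance = np.exp(-((wavelengths - 279.0) / 15.0) ** 2)    # synthetic absorption band

    first_derivative = np.gradient(absorbance, wavelengths)      # dA/dlambda

    def derivative_signal(target_nm):
        return first_derivative[np.argmin(np.abs(wavelengths - target_nm))]

    # Hypothetical calibration for one drug: |dA/dlambda| = slope * C + intercept
    slope, intercept = 0.0021, 0.0001
    signal = abs(derivative_signal(300.0))
    concentration_ug_ml = (signal - intercept) / slope
    print(signal, concentration_ug_ml)
    ```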

  16. Development of a novel and simple method to evaluate disintegration of rapidly disintegrating tablets.

    Science.gov (United States)

    Hoashi, Yohei; Tozuka, Yuichi; Takeuchi, Hirofumi

    2013-01-01

    The purpose of this study was to develop and test a novel and simple method for evaluating the disintegration time of rapidly disintegrating tablets (RDTs) in vitro, since the conventional disintegration test described in the pharmacopoeia produces poor results due to the difference of its environmental conditions from those of an actual oral cavity. Six RDTs prepared in our laboratory and 5 types of commercial RDTs were used as model formulations. Using our original apparatus, a good correlation was observed between in vivo and in vitro disintegration times by adjusting the height from which the solution was dropped to 8 cm and the weight of the load to 10 or 20 g. Properties of RDTs, such as the pattern of their disintegration process, can be assessed by varying the load. These findings confirmed that our proposed method for an in vitro disintegration test apparatus is an excellent one for estimating the disintegration time and the disintegration profile of RDTs.

  17. A simple method for the rapid assessment of the qualitative ESR response of fossil samples to laboratory irradiation

    International Nuclear Information System (INIS)

    Gruen, Rainer

    2006-01-01

    A simple and effective method is proposed for the analysis of the qualitative response of ESR spectra to dosing. The method comprises the alignment of the spectra and subtraction of the natural spectrum from those that were subsequently irradiated in the laboratory. In that way, the ESR response to environmental radiation can be compared to the ESR response to laboratory radiation. As illustrated for some tooth enamel and mollusk shell samples (both materials are frequently used in dating and accident dosimetry), the method is very effective for identifying regions where the two radiation regimes generate qualitatively the same dose response, as well as for isolating radiation-insensitive signals that interfere with those used for dose estimation.

  18. A simple method for validation and verification of pipettes mounted on automated liquid handlers

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Anders Johannes; Frøslev, Tobias Guldberg

    We have implemented a simple method for validation and verification of the performance of pipettes mounted on automated liquid handlers as necessary for laboratories accredited under ISO 17025. An 8-step serial dilution of Orange G was prepared in quadruplicates in a flat-bottom 96-well microtiter plate. In conclusion, we have set up a simple solution for the continuous validation of automated liquid handlers used for accredited work. The method is cheap, simple and easy to use for aqueous solutions, but requires a spectrophotometer that can read microtiter plates.

  19. Simple future weather files for estimating heating and cooling demand

    DEFF Research Database (Denmark)

    Cox, Rimante Andrasiunaite; Drews, Martin; Rode, Carsten

    2015-01-01

    … useful estimates of future energy demand of a building. Experimental results based on both the degree-day method and dynamic simulations suggest that this is indeed the case. Specifically, heating demand estimates were found to be within a few per cent of one another, while estimates of cooling demand were slightly more varied. This variation was primarily due to the very few hours of cooling that were required in the region examined. Errors were found to be most likely when the air temperatures were close to the heating or cooling balance points, where the energy demand was modest and even relatively large errors might thus result in only modest absolute errors in energy demand.

  20. A simple immunoblotting method after separation of proteins in agarose gel

    DEFF Research Database (Denmark)

    Koch, C; Skjødt, K; Laursen, I

    1985-01-01

    A simple and sensitive method for immunoblotting of proteins after separation in agarose gels is described. It involves transfer of proteins onto nitrocellulose paper simply by diffusion through pressure, a transfer which only takes about 10 min. By this method we have demonstrated the existence ...

  1. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Science.gov (United States)

    Djenidi, L.; Antonia, R. A.

    2012-10-01

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩, based on the collapse of velocity spectra normalized by Kolmogorov scales (the first similarity hypothesis of K41). The examination of spectra, including DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number Rλ is sufficiently large. The method is in fact applied to the lower-wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher-wavenumber end of this range. The use of spectral data (30 ≤ Rλ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall.

  2. A simple method of fitting ill-conditioned polynomials to data

    International Nuclear Information System (INIS)

    Buckler, A.N.; Lawrence, J.

    1979-04-01

    A very simple transformation of the independent variable x is shown to cure the ill-conditioning when some polynomial series are fitted to given Y values. Numerical examples are given to illustrate the power of the method. (author)
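
    The report's particular transformation is not reproduced in this record; a common cure of the same kind is to centre and scale x before fitting, which the sketch below illustrates by comparing the condition numbers of the Vandermonde matrices before and after the transformation. All data are synthetic.

    ```python
    # Centre and scale the independent variable before a polynomial fit, a simple
    # transformation that tames the ill-conditioning of fits over a narrow range far from zero.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(1000.0, 1010.0, 50)                    # badly conditioned as-is
    y = 2.0 + 0.5 * (x - 1005.0) + 0.01 * (x - 1005.0) ** 3 + rng.normal(0, 0.01, x.size)

    t = (x - x.mean()) / (x.max() - x.min())               # transformed variable, ~[-0.5, 0.5]

    deg = 5
    print(np.linalg.cond(np.vander(x, deg + 1)))           # enormous
    print(np.linalg.cond(np.vander(t, deg + 1)))           # modest

    coeffs = np.polyfit(t, y, deg)                         # stable fit in the new variable
    print(np.max(np.abs(y - np.polyval(coeffs, t))))       # residual check
    ```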

  3. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method is capable of accurately estimating the remaining number of software defects for on-demand-type software that directly affects safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed.
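
    The record does not give the exact model equations, so the sketch below uses the classical Goel-Okumoto NHPP form m(t) = a(1 − e^(−bt)) as a stand-in, fits it to hypothetical cumulative failure counts by ordinary least squares (the paper itself uses Bayesian inference), and evaluates the implied reliability over a mission time.

    ```python
    # Stand-in SRGM: Goel-Okumoto NHPP, m(t) = a * (1 - exp(-b*t)).
    # Reliability over mission time x after test time T:  R(x|T) = exp(-(m(T+x) - m(T))).
    import numpy as np
    from scipy.optimize import curve_fit

    def mean_value(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical cumulative failure counts observed at test times (weeks):
    t_obs = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    n_obs = np.array([4, 7, 9, 11, 12, 13, 13, 14], dtype=float)

    (a_hat, b_hat), _ = curve_fit(mean_value, t_obs, n_obs, p0=(15.0, 0.5))

    def reliability(x, T):
        return np.exp(-(mean_value(T + x, a_hat, b_hat) - mean_value(T, a_hat, b_hat)))

    print(a_hat, b_hat, reliability(x=1.0, T=8.0))
    ```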

  4. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    Science.gov (United States)

    Mellas, John

    2016-05-01

    … practitioner with a new tool to estimate real-time K in AKI with enough precision to predict the severity of the renal injury, including progression, stabilization, or improvement in azotemia. It is the author's belief that this simple method improves on RIFLE, AKIN, and KDIGO for estimating the degree of renal impairment in AKI and allows a more accurate estimate of K in AKI.

  5. Research and Development of Methods for Estimating Physicochemical Properties of Organic Compounds of Environmental Concern. Part 2.

    Science.gov (United States)

    1981-06-01

    …lar orbital considerations. The basis of light scattering in Raman spectroscopy is the dipole moment induced in the molecule when hit by the… APPENDICES: Simple Linear Regression; B-1 Scatter Plot of Twenty-five Observations of Two Variables; B-2 Violation of Regression Assumptions… parachor values computed by the Sugden method and the McGowan method, respectively. Example 12-2: Estimate the normal boiling point for nicotine.

  6. A simple Ultraviolet spectrophotometric method for the determination of etoricoxib in dosage formulations

    Directory of Open Access Journals (Sweden)

    Shipra Singh

    2012-01-01

    Full Text Available The present study was undertaken to develop a validated, rapid, simple, and low-cost ultraviolet (UV) spectrophotometric method for estimating Etoricoxib (ETX) in pharmaceutical formulations. The analysis was performed at λmax 233 nm using 0.1 M HCl as blank/diluent. The proposed method was validated according to International Conference on Harmonization (ICH) guidelines, including parameters such as linearity, accuracy, precision, reproducibility, and specificity. The proposed method was also used to assess the content of ETX in two commercial brands on the Indian market. Beer's law was obeyed in the concentration range of 0.1-0.5 μg/ml, and the regression equation was Y = 0.418x + 0.018. The mean accuracy values for 0.1 μg/ml and 0.2 μg/ml concentrations of ETX were found to be 99.76 ± 0.52% and 99.12 ± 0.84%, respectively, and the relative standard deviation (RSD) of interday and intraday measurements was less than 2%. The developed method was suitable and specific for the analysis of ETX even in the presence of common excipients. The method was applied to two different marketed brands, and ETX contents were 98.5 ± 0.56% and 99.33 ± 0.44% of the labeled claim, respectively. The proposed method was validated as per ICH guidelines and statistically good results were obtained. This method can be employed for routine analysis of ETX in bulk and commercial formulations.
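
    Applying the reported calibration is a one-line rearrangement: with Y = 0.418x + 0.018 relating absorbance to concentration in μg/ml, a sample's concentration follows from its measured absorbance. The absorbance reading and the dilution factor in the sketch are assumed values for illustration.

    ```python
    # Back-calculate ETX concentration from absorbance using the reported line Y = 0.418*x + 0.018.
    SLOPE, INTERCEPT = 0.418, 0.018

    def etx_concentration_ug_per_ml(absorbance):
        return (absorbance - INTERCEPT) / SLOPE

    # Assumed example: a tablet solution diluted 1:1000 reads A = 0.150 at 233 nm.
    a_measured = 0.150
    conc_diluted = etx_concentration_ug_per_ml(a_measured)    # ~0.316 ug/ml
    print(conc_diluted, conc_diluted * 1000)                  # concentration before dilution
    ```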

  7. A simple Ultraviolet spectrophotometric method for the determination of etoricoxib in dosage formulations.

    Directory of Open Access Journals (Sweden)

    S Singh

    2012-01-01

    Full Text Available The present study was undertaken to develop a validated, rapid, simple, and low-cost ultraviolet (UV) spectrophotometric method for estimating Etoricoxib (ETX) in pharmaceutical formulations. The analysis was performed at λmax 233 nm using 0.1 M HCl as blank/diluent. The proposed method was validated according to International Conference on Harmonization (ICH) guidelines, including parameters such as linearity, accuracy, precision, reproducibility, and specificity. The proposed method was also used to assess the content of ETX in two commercial brands on the Indian market. Beer's law was obeyed in the concentration range of 0.1-0.5 μg/ml, and the regression equation was Y = 0.418x + 0.018. The mean accuracy values for 0.1 μg/ml and 0.2 μg/ml concentrations of ETX were found to be 99.76 ± 0.52% and 99.12 ± 0.84%, respectively, and the relative standard deviation (RSD) of interday and intraday measurements was less than 2%. The developed method was suitable and specific for the analysis of ETX even in the presence of common excipients. The method was applied to two different marketed brands, and ETX contents were 98.5 ± 0.56% and 99.33 ± 0.44% of the labeled claim, respectively. The proposed method was validated as per ICH guidelines and statistically good results were obtained. This method can be employed for routine analysis of ETX in bulk and commercial formulations.

  8. Exploring Simple Algorithms for Estimating Gross Primary Production in Forested Areas from Satellite Data

    Directory of Open Access Journals (Sweden)

    Ramakrishna R. Nemani

    2012-01-01

    Full Text Available Algorithms that use remotely-sensed vegetation indices to estimate gross primary production (GPP), a key component of the global carbon cycle, have gained a lot of popularity in the past decade. Yet despite the amount of research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of different vegetation indices from the Moderate Resolution Imaging Spectroradiometer (MODIS) in capturing the seasonal and the annual variability of GPP estimates from an optimal network of 21 FLUXNET forest tower sites. The tested indices include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation absorbed by plant canopies (FPAR). Our results indicated that single vegetation indices captured 50–80% of the variability of tower-estimated GPP, but no one index performed universally well in all situations. In particular, EVI outperformed the other MODIS products in tracking seasonal variations in tower-estimated GPP, but annual mean MODIS LAI was the best estimator of the spatial distribution of annual flux-tower GPP (GPP = 615 × LAI − 376, where GPP is in g C/m2/year). This simple algorithm rehabilitated earlier approaches linking ground measurements of LAI to flux-tower estimates of GPP and produced annual GPP estimates comparable to the MODIS 17 GPP product. As such, remote sensing-based estimates of GPP continue to offer a useful alternative to estimates from biophysical models, and the choice of the most appropriate approach depends on whether the estimates are required at annual or sub-annual temporal resolution.
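
    The reported annual relationship is simple enough to apply directly; the sketch below wraps GPP = 615 × LAI − 376 (g C/m2/year) and evaluates it for a few assumed annual-mean LAI values that stand in for MODIS retrievals.

    ```python
    # Annual GPP (g C / m^2 / year) from annual mean LAI, per the reported regression.
    def annual_gpp_from_lai(lai):
        return 615.0 * lai - 376.0

    for lai in (1.0, 2.5, 4.0):          # placeholder annual-mean LAI values
        print(lai, annual_gpp_from_lai(lai))
    ```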

  9. The simple modelling method for storm- and grey-water quality ...

    African Journals Online (AJOL)

    The simple modelling method for storm- and grey-water quality management applied to Alexandra settlement. ... objectives optimally consist of educational programmes, erosion and sediment control, street sweeping, removal of sanitation system overflows, impervious cover reduction, downspout disconnections, removal of ...

  10. Method-related estimates of sperm vitality.

    Science.gov (United States)

    Cooper, Trevor G; Hellenkemper, Barbara

    2009-01-01

    Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.

  11. A Simple Method for Identifying the Acromioclavicular Joint During Arthroscopic Procedures

    OpenAIRE

    Javed, Saqib; Heasley, Richard; Ravenscroft, Matt

    2013-01-01

    Arthroscopic acromioclavicular joint excision is performed via an anterior portal and is technically demanding. We present a simple method for identifying the acromioclavicular joint during arthroscopic procedures.

  12. Thermalization calorimetry: A simple method for investigating glass transition and crystallization of supercooled liquids

    Directory of Open Access Journals (Sweden)

    Bo Jakobsen

    2016-05-01

    Full Text Available We present a simple method for fast and cheap thermal analysis on supercooled glass-forming liquids. This “Thermalization Calorimetry” technique is based on monitoring the temperature and its rate of change during heating or cooling of a sample for which the thermal power input comes from heat conduction through an insulating material, i.e., is proportional to the temperature difference between sample and surroundings. The monitored signal reflects the sample’s specific heat and is sensitive to exo- and endothermic processes. The technique is useful for studying supercooled liquids and their crystallization, e.g., for locating the glass transition and melting point(s), as well as for investigating the stability against crystallization and estimating the relative change in specific heat between the solid and liquid phases at the glass transition.

  13. Development and Validation of UV-Visible Spectrophotometric Methods for Simultaneous Estimation of Thiocolchicoside and Dexketoprofen in Bulk and Tablet Dosage Form

    OpenAIRE

    M. T. Harde; S. B. Jadhav; D. L. Dharam; P. D. Chaudhari

    2012-01-01

    Two simple, accurate, precise and economical UV spectrophotometric methods were developed and validated for the simultaneous estimation of Thiocolchicoside and Dexketoprofen in bulk and in tablet dosage form. The methods employed were Method 1, an absorbance correction method, and Method 2, a first-order derivative spectroscopic method. In Method 1, absorbance is measured at two wavelengths: 370 nm, at which Dexketoprofen has no absorbance, and 255 nm, at which both drugs have considerable absorbance. In me...

  14. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
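
    A minimal sketch of the special case discussed above: fit a main-terms Poisson working model to simulated two-arm trial data and read the coefficient on the treatment indicator as an estimate of the marginal log rate ratio. The simulated data-generating process, covariate and sample size are assumptions made only so the example runs; it does not reproduce the paper's analysis.

    ```python
    # Main-terms Poisson working model in a simulated randomized trial; the coefficient on
    # the treatment indicator estimates the marginal log rate ratio.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    treat = rng.integers(0, 2, n)                 # randomized 1:1
    x = rng.normal(size=n)                        # baseline covariate
    rate = np.exp(0.2 + 0.5 * treat + 0.8 * x)    # true outcome-generating rates
    y = rng.poisson(rate)

    X = sm.add_constant(np.column_stack([treat, x]))     # intercept + main terms only
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

    log_rate_ratio = fit.params[1]                # coefficient on the treatment indicator
    print(log_rate_ratio, np.exp(log_rate_ratio))
    ```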

  15. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  16. Standardization of HPTLC method for the estimation of oxytocin in edibles.

    Science.gov (United States)

    Rani, Roopa; Medhe, Sharad; Raj, Kumar Rohit; Srivastava, Manmohan

    2013-12-01

    Adulteration in food stuff has been regarded as a major social evil and is a mind-boggling problem in society. In this study, a rapid, reliable and cost-effective high-performance thin-layer chromatography (HPTLC) method has been established for the estimation of oxytocin (adulterant) in vegetables, fruits and milk samples. Oxytocin is one of the most frequently used adulterants, added to vegetables and fruits to increase the growth rate and also to enhance milk production from lactating animals. The standardization of the method was based on optimization of the mobile phase, stationary phase and saturation time. The mobile phase used was MeOH:ammonia (pH 6.8), the optimized stationary phase was silica gel, and the saturation time was 5 min. The method was validated by testing its linearity, accuracy, precision, repeatability and limits of detection and quantification. Thus, the proposed method is simple, rapid and specific, and was successfully employed for quality and quantity monitoring of oxytocin content in edible products.

  17. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
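
    One simple reading of the length-based idea is that, at a common perfusion pressure, each outlet's flow is proportional to the length of the artery it terminates, so its resistance scales inversely with that length. The sketch below distributes a total coronary resistance over outlets on that assumption; the total resistance and branch lengths are illustrative values, and the proportionality rule as written here is a simplification of the published formulation.

    ```python
    # Distribute a total coronary resistance over outlets, assuming outlet flow ~ vessel length
    # (so, at a common driving pressure, outlet resistance ~ 1 / length).
    def outlet_resistances(r_total, lengths_mm):
        total_len = sum(lengths_mm)
        return [r_total * total_len / L for L in lengths_mm]

    lengths = [120.0, 80.0, 60.0]          # hypothetical branch lengths (mm)
    r_total = 1.0                          # total resistance, arbitrary units
    r_outlets = outlet_resistances(r_total, lengths)

    # Check: the parallel combination recovers the total resistance.
    print(r_outlets, 1.0 / sum(1.0 / r for r in r_outlets))
    ```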

  18. A simplified dynamic method for field capacity estimation and its parameter analysis

    Institute of Scientific and Technical Information of China (English)

    Zhen-tao CONG; Hua-fang LÜ; Guang-heng NI

    2014-01-01

    This paper presents a simplified dynamic method based on the definition of field capacity. Two soil hydraulic characteristics models, the Brooks-Corey (BC) model and the van Genuchten (vG) model, and four soil data groups were used in this study. The relative drainage rate, which is a unique parameter and independent of the soil type in the simplified dynamic method, was analyzed using the pressure-based method with a matric potential of −1/3 bar and the flux-based method with a drainage flux of 0.005 cm/d. As a result, the relative drainage rate of the simplified dynamic method was determined to be 3% per day. This was verified by the similar field capacity results estimated with the three methods for most soils suitable for cultivating plants. In addition, the drainage time calculated with the simplified dynamic method was two to three days, which agrees with the classical definition of field capacity. We recommend the simplified dynamic method with a relative drainage rate of 3% per day due to its simple application and clearly physically-based concept.

  19. Standstill Estimation of Electrical Parameters in Induction Motors Using an Optimal Input Signal

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Vadstrup, P.

    1995-01-01

    The paper suggests a simple off-line method to obtain accurate estimates of the resistances and inductances of the induction motor.

  20. Estimation of gingival crevicular blood glucose level for the screening of diabetes mellitus: A simple yet reliable method.

    Science.gov (United States)

    Parihar, Sarita; Tripathi, Richik; Parihar, Ajit Vikram; Samadi, Fahad M; Chandra, Akhilesh; Bhavsar, Neeta

    2016-01-01

    This study was designed to assess the reliability of blood glucose level estimation in gingival crevicular blood (GCB) for screening diabetes mellitus. 70 patients were included in the study. A randomized, double-blind clinical trial was performed. Among these, 39 patients were diabetic (including 4 patients who were diagnosed during the study) and the remaining 31 patients were non-diabetic. GCB obtained during routine periodontal examination was analyzed with a glucometer to determine the blood glucose level. The same patients underwent finger-stick blood (FSB) glucose level estimation with a glucometer and venous blood (VB) glucose level estimation with a standardized laboratory method as per American Diabetes Association guidelines [1]. All three blood glucose levels were compared. Periodontal parameters were also recorded, including the gingival index (GI) and probing pocket depth (PPD). A strong positive correlation (r) was observed between the glucose levels of GCB and those of FSB and VB, with values of 0.986 and 0.972 in the diabetic group and 0.820 and 0.721 in the non-diabetic group. The mean values of GI and PPD were also higher in the diabetic group than in the non-diabetic group, with a statistically significant difference. GCB can thus be used for the estimation of blood glucose level, as its values were closest to the glucose levels estimated from VB. The technique is safe, easy to perform and non-invasive to the patient, and can increase the frequency of diagnosing diabetes during routine periodontal therapy.

  1. Some simple applications of probability models to birth intervals

    International Nuclear Information System (INIS)

    Shrestha, G.

    1987-07-01

    An attempt has been made in this paper to apply some simple probability models to birth intervals under the assumption of constant fecundability and varying fecundability among women. The parameters of the probability models are estimated by using the method of moments and the method of maximum likelihood. (author). 9 refs, 2 tabs

  2. A simple mathematical method to estimate ammonia emission from in-house windrowing of poultry litter.

    Science.gov (United States)

    Ro, Kyoung S; Szogi, Ariel A; Moore, Philip A

    2018-05-12

    In-house windrowing between flocks is an emerging sanitary management practice to partially disinfect the built-up litter in broiler houses. However, this practice may also increase ammonia (NH3) emission from the litter due to the increase in litter temperature. The objectives of this study were to develop mathematical models to estimate NH3 emission rates from broiler houses practicing in-house windrowing between flocks. Equations to estimate mass-transfer areas for different shapes of windrowed litter (triangular, rectangular, and semi-cylindrical prisms) were developed. Using these equations, the heights of windrows yielding the smallest mass-transfer area were estimated. A smaller mass-transfer area is preferred as it reduces both emission rates and heat loss. The heights yielding the minimum mass-transfer area were 0.8 and 0.5 m for triangular and rectangular windrows, respectively. Only one height (0.6 m) was theoretically possible for semi-cylindrical windrows because the base and the height were not independent. The mass-transfer areas were integrated with published process-based mathematical models to estimate the total house NH3 emission rates during in-house windrowing of poultry litter. The NH3 emission rate change calculated from the integrated model compared well with the observed values, except for the very high initial NH3 emission rate caused by mechanically disturbing the litter to form the windrows. This approach can be used to conveniently estimate broiler house NH3 emission rates during in-house windrowing between flocks by simply measuring litter temperatures.
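
    One way to reproduce the geometric part of the triangular case: assuming a fixed cross-sectional litter area and that only the two slanted faces are exposed, scanning the exposed area per unit windrow length over candidate heights locates the minimum. Both assumptions and the cross-sectional area value below are illustrative, not necessarily the paper's exact formulation.

    ```python
    # Exposed (mass-transfer) area per unit length of a triangular-prism windrow with a
    # fixed cross-sectional area A_cs: base b = 2*A_cs/h, exposed area S(h) = 2*sqrt((b/2)^2 + h^2).
    import math

    def exposed_area_per_length(height_m, a_cs_m2):
        half_base = a_cs_m2 / height_m
        return 2.0 * math.sqrt(half_base**2 + height_m**2)

    a_cs = 0.64                                         # assumed cross-sectional area (m^2)
    heights = [0.1 * k for k in range(3, 16)]           # scan 0.3 m .. 1.5 m
    best = min(heights, key=lambda h: exposed_area_per_length(h, a_cs))
    print(best, exposed_area_per_length(best, a_cs))    # minimum occurs near h = sqrt(A_cs)
    ```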

  3. A Simple DTC-SVM method for Matrix Converter Drives Using a Deadbeat Scheme

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede; Lee, Kwang-Won

    2005-01-01

    In this paper, a simple direct torque control (DTC) method for sensorless matrix converter drives is proposed, which is characterized by a simple structure, minimal torque ripple and unity input power factor. Also a good sensorless speed-control performance in the low speed operation is obtained,...

  4. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    Directory of Open Access Journals (Sweden)

    Satoshi Ezoe

    Full Text Available BACKGROUND: Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
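
    A bare-bones version of the scale-up arithmetic: estimate each respondent's personal network size from how many people they know in groups of known size, then scale the reported number of MSM acquaintances up to the reference population. The group sizes, responses and adjustment factor below are invented placeholders, not the survey's figures.

    ```python
    # Network scale-up sketch: c_i = (sum of known-group acquaintances / sum of group sizes) * N,
    # then hidden-population prevalence ~= mean(m_i / c_i), optionally adjusted for under-reporting.
    import numpy as np

    N = 50_000_000                                         # reference population size (assumed)
    group_sizes = np.array([160_000, 250_000, 230_000])    # known subpopulations (placeholders)

    # Each row: one respondent's counts of acquaintances in the known groups.
    known_counts = np.array([[1, 2, 1],
                             [0, 1, 3],
                             [2, 0, 1]], dtype=float)
    msm_counts = np.array([0, 1, 0], dtype=float)          # acquaintances understood to be MSM

    c = known_counts.sum(axis=1) / group_sizes.sum() * N   # personal network sizes
    prevalence = np.mean(msm_counts / c)                   # naive scale-up prevalence
    visibility_adjust = 10.0                               # placeholder transmission-error factor
    print(c, prevalence, prevalence * visibility_adjust)
    ```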

  5. Raman Microspectrometry: An Alternative Method of Age Estimation from Dentin and Cementum

    Directory of Open Access Journals (Sweden)

    Karuna Kumari

    2017-10-01

    Full Text Available Introduction: Raman spectroscopy is a simple, quick, sensitive and non-destructive form of tissue examination that provides vital data about the structure, molecular composition and interactions within a sample. Human hard tissues such as teeth and bone are able to resist decay long after other tissues are lost and thus have valuable forensic importance. Aim: To ascertain the known age of teeth by analysing dentin and cementum using Raman microspectrometry and to assess the accuracy of age estimation by comparing dentin with cementum. Materials and Methods: Sound extracted permanent tooth specimens (40) from individuals aged between 12 and 74 years were collected and sectioned longitudinally, and different dentinal and cemental areas were analysed by Raman microspectrometry. The spectra of dentin and cementum were used as predictors for age estimation. For each sample, ratios were obtained for dentin and cementum areas, and Pearson's correlation coefficient was calculated. Ratios which had a correlation coefficient greater than 0.40 were used for further statistical analysis. This led to the selection of ratios only for dentin areas, and it allowed us to develop a regression formula. The Partial Least Squares (PLS) regression method was used for computing our model. Results: A significant correlation was observed between the actual chronological age and the predicted age of the individual using dentinal areas of the tooth. Close agreement with the estimated result was achieved, with an error of three years between predicted and actual chronological age. Conclusion: Raman microspectrometry may be considered as an alternative to the conventional method of age estimation and contribute to the identification of individuals.
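
    The statistical step can be sketched with scikit-learn's PLSRegression: regress age on a matrix of spectral intensity ratios and predict the age of a new sample. The ratio matrix and ages below are random placeholders standing in for the Raman-derived predictors.

    ```python
    # Partial least squares regression of age on spectral intensity ratios (placeholder data).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_samples, n_ratios = 40, 12
    X = rng.normal(size=(n_samples, n_ratios))                       # stand-in dentin band ratios
    true_w = rng.normal(size=n_ratios)
    age = X @ true_w * 5.0 + 40.0 + rng.normal(0, 3.0, n_samples)    # synthetic ages

    pls = PLSRegression(n_components=3)
    pls.fit(X, age)

    new_ratios = rng.normal(size=(1, n_ratios))
    print(pls.predict(new_ratios))                                   # predicted age for a new spectrum
    ```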

  6. 12 CFR 334.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... STATEMENTS OF GENERAL POLICY FAIR CREDIT REPORTING Affiliate Marketing § 334.25 Reasonable and simple methods... or processed at an Internet Web site, if the consumer agrees to the electronic delivery of... opt-out under the Act, and the affiliate marketing opt-out under the Act, by a single method, such as...

  7. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    Science.gov (United States)

    Stephenson, W. Kirk

    2009-01-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)

  8. An approximate method to estimate the minimum critical mass of fissile nuclides

    International Nuclear Information System (INIS)

    Wright, R.Q.; Jordan, W.C.

    1999-01-01

    When evaluating systems in criticality safety, it is important to approximate the answer before any analysis is performed. There is currently interest in establishing the minimum critical parameters for fissile actinides. The purpose is to describe the OB-1 method for estimating the minimum critical mass for thermal systems based on one-group calculations and 235 U spheres fully reflected by water. The observation is made that for water-moderated, well-thermalized systems, the transport and leakage from the system are dominated by water. Under these conditions two fissile mixtures will have nearly the same critical volume provided the infinite media multiplication factor (k ∞ ) for the two systems is the same. This observation allows for very simple estimates of critical concentration and mass as a function of the hydrogen-to-fissile (H/X) moderation ratio by comparison to the known 235 U system

  9. A Simple and Automatic Method for Locating Surgical Guide Hole

    Science.gov (United States)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method for automatically locating the surgical guide hole, which can reduce the reliance on operator experience and improve the design efficiency and quality of the surgical guide. Little literature can be found on this topic, and the paper proposes a novel and simple method to solve this problem. In this paper, a local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system well represents the dental anatomical features, and the center axis of the objective tooth (which coincides with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified by comparison against two types of benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and an automatic approach using the popular commercial package Simplant (used in a few hospitals). The stress distributions of both benchmarks and the proposed method during chewing and biting are analyzed; the stress distribution is visually shown and plotted as a graph. The results show that the proposed method yields a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.

  10. Method of Factor Extraction and Simple Structure of Data from Diverse Scientific Areas.

    Science.gov (United States)

    Thorndike, Robert M.

    To study the applicability of simple structure logic for factorial data from scientific disciplines outside psychology, four correlation matrices from each of six scientific areas were factor analyzed by five factoring methods. Resulting factor matrices were compared on two objective criteria of simple structure before and after rotation.…

  11. A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.

    Science.gov (United States)

    Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H

    2017-07-01

    Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.

  12. Evaporation estimation of rift valley lakes: comparison of models.

    Science.gov (United States)

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with acceptable level of accuracy. Remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and surface energy balance approach using remote sensing was studied. The Simple Method and a remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the models outputs to that of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods show that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
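
    For orientation, the Simple (Abtew) Method reduces to a single equation, E = K·Rs/λ, with E the open-water evaporation in mm/day, Rs the incident solar radiation in MJ m-2 day-1 and λ ≈ 2.45 MJ/kg the latent heat of vaporization; the sketch below applies it to an assumed radiation value. The coefficient K used here is an assumed nominal value and should be checked against the source.

    ```python
    # Simple (Abtew) Method for open-water evaporation: E = K * Rs / lambda,
    # with E in mm/day, Rs in MJ m^-2 day^-1 and lambda ~ 2.45 MJ/kg.
    LAMBDA_MJ_PER_KG = 2.45
    K_COEFF = 0.53        # dimensionless coefficient; value assumed here, verify against the source

    def abtew_evaporation_mm_per_day(rs_mj_m2_day):
        return K_COEFF * rs_mj_m2_day / LAMBDA_MJ_PER_KG

    print(abtew_evaporation_mm_per_day(22.0))   # ~4.8 mm/day for an assumed Rs of 22 MJ m^-2 day^-1
    ```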

  13. Evaporation Estimation of Rift Valley Lakes: Comparison of Models

    Directory of Open Access Journals (Sweden)

    Tibebe Dessalegne

    2009-12-01

    Full Text Available Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with acceptable level of accuracy. Remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and surface energy balance approach using remote sensing was studied. The Simple Method and a remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the models outputs to that of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods show that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.

  14. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram could be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample size at 90% and 99% confidence level, respectively, can also be obtained by just multiplying 0.70 and 1.75 with the number obtained for the 95% confidence level. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. This can also be applied for reverse calculations. This nomogram is not applicable for testing of the hypothesis set-up and is applicable only when both diagnostic test and gold standard results have a dichotomous category.

  15. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method was given, and the linear property of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and the simulation results were given at last. The theory and simulations show that, under the condition of short-data and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)

  16. Simple mechanical parameters identification of induction machine using voltage sensor only

    International Nuclear Information System (INIS)

    Horen, Yoram; Strajnikov, Pavel; Kuperman, Alon

    2015-01-01

    Highlights: • A simple low cost algorithm for induction motor mechanical parameters estimation is proposed. • Voltage sensing only is performed; speed sensor is not required. • The method is suitable for both wound rotor and squirrel cage motors. - Abstract: A simple low cost algorithm for induction motor mechanical parameters estimation without speed sensor is presented in this paper. Estimation is carried out by recording stator terminal voltage during natural braking and subsequent offline curve fitting. The algorithm allows accurately reconstructing mechanical time constant as well as loading torque speed dependency. Although the mathematical basis of the presented method is developed for wound rotor motors, it is shown to be suitable for squirrel cage motors as well. The algorithm is first tested by reconstruction of simulation model parameters and then by processing measurement results of several motors. Simulation and experimental results support the validity of the proposed algorithm

  17. Estimation of citicoline sodium in tablets by difference spectrophotometric method

    Directory of Open Access Journals (Sweden)

    Sagar Suman Panda

    2013-01-01

    Full Text Available Aim: The present work deals with the development and validation of a novel, precise, and accurate spectrophotometric method for the estimation of citicoline sodium (CTS) in tablets. This spectrophotometric method is based on the principle that CTS shows two different forms that differ in their absorption spectra in basic and acidic media. Materials and Methods: The work was carried out on a Shimadzu 1800 double-beam UV-visible spectrophotometer. Difference spectra were generated using 10 mm quartz cells over the range of 200-400 nm. The solvents used were 0.1 M NaOH and 0.1 M HCl. Results: The maxima and minima in the difference spectra of CTS were found to be at 239 nm and 283 nm, respectively. The amplitude was calculated from the maxima and minima of the spectrum. The drug follows linearity in the range of 1-50 μg/ml (R2 = 0.999). The average % recovery from the tablet formulation was found to be 98.47%. The method was validated as per the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use: ICH Q2(R1) Validation of Analytical Procedures: Text and Methodology guidelines. Conclusion: This method is simple and inexpensive. Hence it can be applied for the determination of the drug in pharmaceutical dosage forms.

  18. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited mainly by the superimposition of the object structure under investigation and by artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, and phase-contrast imaging as an alternative image-contrast modality, have been investigated extensively in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme in which the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by the proposed method, which considerably improves image visibility in radiography.

  19. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but it is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced people), ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from censuses or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site, and most of these reported people per household rather than per structure. The precision of the imagery-based method relative to reference population figures depended on the site layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more recent applications. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs, and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.

  20. A method for estimating the diffuse attenuation coefficient (KdPAR) from paired temperature sensors

    Science.gov (United States)

    Read, Jordan S.; Rose, Kevin C.; Winslow, Luke A.; Read, Emily K.

    2015-01-01

    A new method for estimating the diffuse attenuation coefficient for photosynthetically active radiation (KdPAR) from paired temperature sensors was derived. We show that in cases where the attenuation of penetrating shortwave solar radiation is the dominant source of temperature change, time series measurements of water temperature at two depths (z1 and z2) are related to one another by a linear scaling factor (a). KdPAR can then be estimated by the simple equation KdPAR = ln(a)/(z2 − z1). A suggested workflow is presented that outlines procedures for calculating KdPAR according to this paired temperature sensor (PTS) method. The method is best suited to conditions in which radiative temperature gains are large relative to physical noise. These conditions occur frequently on water bodies with low wind and/or high KdPAR, but the method can also be used on other types of lakes during periods of low wind and/or where spatially redundant measurements of temperature are available. The optimal vertical placement of temperature sensors according to a priori knowledge of KdPAR is also described. This information can be used to inform the design of future sensor deployments using the PTS method or of campaigns where characterizing sub-daily changes in temperature is important. The PTS method provides a novel way to characterize light attenuation in aquatic ecosystems without expensive radiometric equipment or the user subjectivity inherent in Secchi depth measurements. It also enables the estimation of KdPAR at higher frequencies than many manual monitoring programs allow.
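
    The following sketch illustrates the PTS idea under one plausible sign/ordering convention (temperature changes at the shallow sensor equal a times those at the deeper sensor): estimate the scaling factor a by least squares and apply KdPAR = ln(a)/(z2 − z1). The synthetic heating history is invented for the demo, and the function is not the authors' published workflow.

```python
import numpy as np

def kd_par_pts(temp_shallow, temp_deep, z_shallow, z_deep):
    """Paired-temperature-sensor (PTS) style estimate of Kd(PAR).

    Assumes radiative heating dominates, so temperature changes at the two
    depths are related by a linear scaling factor a (shallow = a * deep).
    The convention used here is one plausible choice, not necessarily the
    exact one of Read et al. (2015).
    """
    d_shallow = np.diff(temp_shallow)          # temperature changes at z_shallow
    d_deep = np.diff(temp_deep)                # temperature changes at z_deep
    # least-squares scaling factor a in d_shallow ~ a * d_deep
    a = np.dot(d_deep, d_shallow) / np.dot(d_deep, d_deep)
    return np.log(a) / (z_deep - z_shallow)

# synthetic example: heating proportional to exp(-Kd * z), true Kd = 1.2 m^-1
kd_true, z1, z2 = 1.2, 0.5, 2.0
t = np.linspace(0, 6, 200)                     # hours
heating = np.cumsum(np.ones_like(t)) * 0.01    # arbitrary surface heating history
temp1 = 20 + heating * np.exp(-kd_true * z1)
temp2 = 20 + heating * np.exp(-kd_true * z2)
print(kd_par_pts(temp1, temp2, z1, z2))        # ~ 1.2
```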

  1. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.
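
    As a toy sketch (not the authors' implementation), the snippet below splits a number of homogeneous subtasks among machines in proportion to their currently available capacity; `available_capacity` is an assumed input describing the current load state of each machine.

```python
def decompose_workload(n_subtasks, available_capacity):
    """Split n_subtasks homogeneous subtasks among machines in proportion
    to their currently available capacity (illustrative heuristic only)."""
    total = sum(available_capacity)
    shares = [int(n_subtasks * c / total) for c in available_capacity]
    # hand out any rounding remainder, starting with the least-loaded machines
    for i in sorted(range(len(shares)), key=lambda k: -available_capacity[k]):
        if sum(shares) >= n_subtasks:
            break
        shares[i] += 1
    return shares

print(decompose_workload(100, [1.0, 0.5, 0.25]))  # e.g. [58, 28, 14]
```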

  2. Double-lock technique: a simple method to secure abdominal wall closure

    International Nuclear Information System (INIS)

    Jategaonkar, P.A.; Yadav, S.P.

    2013-01-01

    Secure closure of a laparotomy incision remains an important aspect of any abdominal operation, with the aim of avoiding postoperative morbidity and hastening the patient's recovery. Depending on the operator's preference and experience, it may be done by continuous or interrupted methods using either a non-absorbable or a delayed-absorbable suture. We describe a simple, secure and quick technique of abdominal wall closure in which a continuous suture is inter-locked doubly after every third bite. This simple and easy-to-use mass closure technique can be mastered by any member of the surgical team and does not need an assistant. It amalgamates the advantages of both the continuous and the interrupted methods of closure. To our knowledge, such a technique has not been reported in the literature. (author)

  3. A Combined State of Charge Estimation Method for Lithium-Ion Batteries Used in a Wide Ambient Temperature Range

    Directory of Open Access Journals (Sweden)

    Fei Feng

    2014-05-01

    Full Text Available Ambient temperature is a significant factor that influences the characteristics of lithium-ion batteries and can produce adverse effects on state of charge (SOC) estimation. In this paper, an integrated SOC algorithm that combines an advanced ampere-hour counting (Adv Ah) method and a multistate open-circuit voltage (multi OCV) method, denoted as “Adv Ah + multi OCV”, is proposed. Ah counting is a simple and general method for estimating SOC. However, the available capacity and coulombic efficiency in this method are influenced by the operating states of the battery, such as temperature and current, thereby causing SOC estimation errors. To address this problem, an enhanced Ah counting method that adjusts the available capacity and coulombic efficiency according to temperature is proposed for the SOC calculation. Moreover, the battery SOCs at different temperatures can be mutually converted in accordance with the capacity loss. To compensate for the accumulating errors in Ah counting caused by the low precision of current sensors and the lack of an accurate initial SOC, the OCV method is used for calibration and as a complement. Given the variation of available capacities at different temperatures, rated/non-rated OCV–SOC curves are established to estimate the initial SOCs in accordance with the Ah counting SOCs. Two dynamic tests, namely constant- and alternated-temperature tests, are employed to verify the combined method at different temperatures. The results indicate that our method can provide effective and accurate SOC estimation at different ambient temperatures.
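
    A schematic sketch of the two ingredients, temperature-adjusted Ah counting plus an OCV-based initial SOC, is given below. The capacity, coulombic-efficiency and OCV–SOC tables are made-up placeholders; real batteries need the battery-specific maps described in the paper.

```python
import numpy as np

# illustrative, made-up lookup tables (temperature in deg C)
TEMPS = np.array([-10.0, 0.0, 25.0, 45.0])
CAPACITY_AH = np.array([1.6, 1.8, 2.0, 2.05])     # available capacity vs temperature
COULOMB_EFF = np.array([0.95, 0.97, 0.99, 0.99])  # coulombic efficiency vs temperature

def initial_soc_from_ocv(ocv, ocv_curve, soc_grid):
    """OCV-based calibration: invert a (rated or non-rated) OCV-SOC curve."""
    return np.interp(ocv, ocv_curve, soc_grid)

def soc_ah_counting(soc0, current, dt, temperature):
    """Temperature-adjusted ampere-hour counting (discharge current positive)."""
    soc = soc0
    for i_k, t_k in zip(current, temperature):
        cap = np.interp(t_k, TEMPS, CAPACITY_AH)      # Ah available at this temperature
        eta = np.interp(t_k, TEMPS, COULOMB_EFF)
        soc -= eta * i_k * dt / (cap * 3600.0)        # dt in seconds
    return soc

# usage: rest OCV of 3.7 V read against a toy OCV-SOC table, then Ah counting
soc_grid = np.linspace(0.0, 1.0, 11)
ocv_curve = 3.0 + 1.2 * soc_grid                      # toy monotone OCV-SOC relation
soc0 = initial_soc_from_ocv(3.7, ocv_curve, soc_grid)
current = np.full(3600, 1.0)                          # 1 A discharge for one hour
temperature = np.full(3600, 25.0)
print(soc_ah_counting(soc0, current, dt=1.0, temperature=temperature))
```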

  4. Optimized theory for simple and molecular fluids.

    Science.gov (United States)

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  5. Some new, simple and efficient stereological methods and their use in pathological research and diagnosis

    DEFF Research Database (Denmark)

    Gundersen, H J; Bendtsen, T F; Korbo, L

    1988-01-01

    Stereology is a set of simple and efficient methods for the quantitation of three-dimensional microscopic structures which is specifically tuned to provide reliable data from sections. Within the last few years, a number of new methods have been developed which are of special interest to pathologists; the methods are invariably simple and easy to use.

  6. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on PLS block modeling as well as on the simple interval calculation (SIC) methods of interval prediction and object status classification. It is proposed to employ a series of expanding PLS/SIC models in order to support on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes correcting actions for quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC), as it also employs historical process data.

  7. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin

    DEFF Research Database (Denmark)

    Thoisen, Christina Vinum; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making the extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. Filters with the cryptophyte cells were frozen (−80 °C) and phosphate buffer was added for extraction at 4 °C, followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes. •Minimal usage of equipment and chemicals, and low labor costs. •Applicable for industrial and biological purposes.

  8. Simple method to generate and fabricate stochastic porous scaffolds

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Nan, E-mail: y79nzw@163.com; Gao, Lilan; Zhou, Kuntao

    2015-11-01

    Considerable effort has been made to generate regular porous structures (RPSs) using function-based methods, although little effort has been made for constructing stochastic porous structures (SPSs) using the same methods. In this short communication, we propose a straightforward method for SPS construction that is simple in terms of methodology and the operations used. Using our method, we can obtain a SPS with functionally graded, heterogeneous and interconnected pores, target pore size and porosity distributions, which are useful for applications in tissue engineering. The resulting SPS models can be directly fabricated using additive manufacturing (AM) techniques. - Highlights: • Random porous structures are constructed based on their regular counterparts. • Functionally graded random pores can be constructed easily. • The scaffolds can be directly fabricated using additive manufacturing techniques.

  9. A simple method for principal strata effects when the outcome has been truncated due to death.

    Science.gov (United States)

    Chiba, Yasutaka; VanderWeele, Tyler J

    2011-04-01

    In randomized trials with follow-up, outcomes such as quality of life may be undefined for individuals who die before the follow-up is complete. In such settings, restricting analysis to those who survive can give rise to biased outcome comparisons. An alternative approach is to consider the "principal strata effect" or "survivor average causal effect" (SACE), defined as the effect of treatment on the outcome among the subpopulation that would have survived under either treatment arm. The authors describe a very simple technique that can be used to assess the SACE. They give both a sensitivity analysis technique and conditions under which a crude comparison provides a conservative estimate of the SACE. The method is illustrated using data from the ARDSnet (Acute Respiratory Distress Syndrome Network) clinical trial comparing low-volume ventilation and traditional ventilation methods for individuals with acute respiratory distress syndrome.

  10. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method that adapts to the speed of the signal change. In principle, the method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. The method can be used in nuclear electronics for research into the characteristics of detectors, the shape of the pulse signal, and the pulse and transient characteristics of inertial signal-processing systems.

  11. Time Skew Estimator for Dual-Polarization QAM Transmitters

    DEFF Research Database (Denmark)

    Medeiros Diniz, Júlio César; Da Ros, Francesco; Jones, Rasmus Thomas

    2017-01-01

    A simple method for joint estimation of a transmitter's in-phase/quadrature and inter-polarization time skew is proposed and experimentally demonstrated. The method is based on clock tone extraction from a photodetected signal and a genetic algorithm. The maximum estimation error was 0.5 ps.

  12. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  14. Simple method for correct enumeration of Staphylococcus aureus

    DEFF Research Database (Denmark)

    Haaber, J.; Cohn, M. T.; Petersen, A.

    2016-01-01

    culture. When grown in such liquid cultures, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical...... and laboratory S. aureus isolates and that aggregation may introduce significant bias when applying standard enumeration methods on S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give...

  15. Simple method for quantification of gadolinium magnetic resonance imaging contrast agents using ESR spectroscopy.

    Science.gov (United States)

    Takeshita, Keizo; Kinoshita, Shota; Okazaki, Shoko

    2012-01-01

    To develop an estimation method for gadolinium magnetic resonance imaging (MRI) contrast agents, the effect of the concentration of Gd compounds on the ESR spectrum of a nitroxyl radical was examined. A solution of either 4-oxo-2,2,6,6-tetramethylpiperidine-N-oxyl (TEMPONE) or 4-hydroxy-2,2,6,6-tetramethylpiperidine-N-oxyl (TEMPOL) was mixed with a solution of the Gd compound and the ESR spectrum was recorded. Increasing the concentration of the gadolinium-diethylenetriamine pentaacetic acid chelate (Gd-DTPA), an MRI contrast agent, increased the peak-to-peak line widths of the ESR spectra of the nitroxyl radicals, accompanied by a decrease in their signal heights. A linear relationship was observed between the concentration of Gd-DTPA and the line width of the ESR signal, up to approximately 50 mmol/L Gd-DTPA, with a high correlation coefficient. The response of TEMPONE was 1.4 times higher than that of TEMPOL, as evaluated from the slopes of the lines. The response differed slightly among Gd compounds; the slopes of the calibration curves for aqua[N,N-bis[2-[(carboxymethyl)[(methylcarbamoyl)methyl]amino]ethyl]glycinato(3-)]gadolinium hydrate (Gd-DTPA-BMA) (6.22 μT·L/mmol) and gadolinium-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid chelate (Gd-DOTA) (6.62 μT·L/mmol) were steeper than the slope for Gd-DTPA (5.45 μT·L/mmol), whereas the slope for gadolinium chloride (4.94 μT·L/mmol) was less steep than that for Gd-DTPA. The method is simple to apply. The results indicate that it is useful for rough estimation of the concentration of Gd contrast agents if calibration is carried out with each standard compound. It was also found that a plot of the reciprocal square root of the signal height against the concentration of the contrast agent can be used for the estimation if a constant volume of sample solution is taken and measured at the same position in the ESR cavity every time.
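
    The calibration step implied by the linear line-width/concentration relationship can be sketched as below; the calibration points and the 300 μT sample reading are invented placeholders, not the paper's data.

```python
import numpy as np

# hypothetical calibration data: [Gd] in mmol/L vs ESR peak-to-peak line width in uT
conc_std = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
width_std = np.array([160.0, 215.0, 269.0, 325.0, 378.0, 433.0])   # made-up numbers

slope, intercept = np.polyfit(conc_std, width_std, 1)               # linear calibration

def estimate_concentration(line_width_uT):
    """Invert the linear calibration to estimate the Gd-agent concentration."""
    return (line_width_uT - intercept) / slope

print(f"slope = {slope:.2f} uT*L/mmol, sample ~ {estimate_concentration(300.0):.1f} mmol/L")
```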

  16. Simple Screening Methods for Drought and Heat Tolerance in Cowpea

    International Nuclear Information System (INIS)

    Singh, B. B.

    2000-10-01

    Success in breeding for drought tolerance has not been as pronounced as for other traits. This is partly due to the lack of simple, cheap and reliable screening methods to select drought-tolerant plants/progenies from segregating populations, and partly due to the complexity of the factors involved in drought tolerance. Measuring drought tolerance through physiological parameters is expensive, time consuming and difficult to use for screening large numbers of lines and segregating populations. Since several factors/mechanisms (in shoot and root) operate independently and/or jointly to enable plants to cope with drought stress, drought tolerance appears to be a complex trait. However, if these factors/mechanisms can be separated and studied individually, the components leading to drought tolerance appear less complex and may be easier for breeders to manipulate. We have developed a simple box screening method for shoot drought tolerance in cowpea, which eliminates the effects of roots and permits non-destructive visual identification of shoot dehydration tolerance. We have also developed a 'root-box pin-board' method to study the two-dimensional root architecture of individual plants. Using these methods, we have identified two mechanisms of shoot drought tolerance in cowpea that are controlled by single dominant genes, as well as major differences in root architecture among cowpea varieties. Combining a deep and dense root system with shoot dehydration tolerance results in highly drought-tolerant plants.

  17. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with the wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is the conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows a more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two approaches with the two methods. To be able to use both the Monte Carlo simulation and the design contours method, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.

  18. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    Directory of Open Access Journals (Sweden)

    Hendra Gunawan

    2014-06-01

    Full Text Available http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation between the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation between the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated with both methods on simple 2D synthetic models, under the assumption that the free-air anomaly consists of an effect of topography, an intracrustal effect, and an isostatic compensation. Based on the simulation results, Bouguer density estimates were then investigated for a 2005 gravity survey of the La Soufriere Volcano-Guadeloupe area (Antilles Islands). The Bouguer density based on the Parasnis approach is 2.71 g/cm3 for the whole area, except the edifice area where the average topographic density estimate is 2.21 g/cm3, whereas the Bouguer density estimate from the previous gravity survey of 1975 is 2.67 g/cm3. The uncertainty of the Bouguer density estimate for La Soufriere Volcano is about 0.1 g/cm3. For the studied area, the density deduced from refraction seismic data is coherent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
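
    A sketch of a Parasnis-style estimate is given below: regress the free-air anomaly on the Bouguer slab term per unit density, so the fitted slope is the topographic density (equivalently, the density that decorrelates the Bouguer anomaly from the Bouguer correction). The factor 0.04193 mGal per (g/cm3 per m) is the standard infinite-slab constant; the profile data are synthetic, not the La Soufriere survey.

```python
import numpy as np

TWO_PI_G = 0.04193   # mGal per (g/cm^3 * m): infinite Bouguer slab factor

def parasnis_density(free_air_anomaly, elevation):
    """Parasnis-style estimate: slope of free-air anomaly vs. slab term per unit density
    (a synthetic sketch, not the processing actually applied to the survey data)."""
    slab_per_density = TWO_PI_G * elevation            # mGal per (g/cm^3)
    A = np.vstack([slab_per_density, np.ones_like(elevation)]).T
    (rho, regional), *_ = np.linalg.lstsq(A, free_air_anomaly, rcond=None)
    return rho

# synthetic profile with true topographic density 2.7 g/cm^3 and a constant regional field
h = np.linspace(100.0, 900.0, 50)                      # station elevations (m)
faa = 2.7 * TWO_PI_G * h + 15.0 + np.random.normal(0.0, 0.3, h.size)
print(round(parasnis_density(faa, h), 2))              # ~ 2.7
```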

  19. A simple and efficient method for isolating small RNAs from different plant species

    Directory of Open Access Journals (Sweden)

    de Folter Stefan

    2011-02-01

    Full Text Available Abstract Background Small RNAs have emerged over the last decade as key regulators of diverse biological processes in eukaryotic organisms. To identify and study small RNAs, good and efficient protocols are necessary to isolate them, which may sometimes be challenging due to the composition of specific tissues of certain plant species. Here we describe a simple and efficient method to isolate small RNAs from different plant species. Results We developed a simple and efficient method to isolate small RNAs from different plant species by first comparing different total RNA extraction protocols and then streamlining the best one, finally resulting in a small RNA extraction method that does not require a prior total RNA extraction and is not based on the commercially available TRIzol® Reagent or columns. This small RNA extraction method works well not only for plant tissues with high polysaccharide content, like cactus, agave, banana, and tomato, but also for plant species like Arabidopsis or tobacco. Furthermore, the obtained small RNA samples were successfully used in northern blot assays. Conclusion Here we provide a simple and efficient method to isolate small RNAs from different plant species, such as cactus, agave, banana, tomato, Arabidopsis, and tobacco; the small RNAs from this simplified and low-cost method are suitable for downstream handling such as northern blot assays.

  20. A simple method for determining split renal function from dynamic {sup 99m}Tc-MAG3 scintigraphic data

    Energy Technology Data Exchange (ETDEWEB)

    Wesolowski, Michal J.; Watson, Gage; Wanasundara, Surajith N.; Babyn, Paul [University of Saskatchewan, Department of Medical Imaging, Saskatoon, SK (Canada); Conrad, Gary R. [University of Kentucky College of Medicine, Department of Radiology, Lexington, KY (United States); Samal, Martin [Charles University Prague and the General University Hospital in Prague, Department of Nuclear Medicine, First Faculty of Medicine, Praha 2 (Czech Republic); Wesolowski, Carl A. [University of Saskatchewan, Department of Medical Imaging, Saskatoon, SK (Canada); Memorial University of Newfoundland, Department of Radiology, St. John's, NL (Canada)

    2016-03-15

    Commonly used methods for determining split renal function (SRF) from dynamic scintigraphic data require extrarenal background subtraction and additional correction for intrarenal vascular activity. The use of these additional regions of interest (ROIs) can produce inaccurate results and be challenging, e.g. if the heart is out of the camera field of view. The purpose of this study was to evaluate a new method for determining SRF called the blood pool compensation (BPC) technique, which is simple to implement, does not require extrarenal background correction and intrinsically corrects for intrarenal vascular activity. In the BPC method SRF is derived from a parametric plot of the curves generated by one blood-pool and two renal ROIs. Data from 107 patients who underwent {sup 99m}Tc-MAG3 scintigraphy were used to determine SRF values. Values calculated using the BPC method were compared to those obtained with the integral (IN) and Patlak-Rutland (PR) techniques using Bland-Altman plotting and Passing-Bablok regression. The interobserver variability of the BPC technique was also assessed for two observers. The SRF values obtained with the BPC method did not differ significantly from those obtained with the PR method and showed no consistent bias, while SRF values obtained with the IN method showed significant differences with some bias in comparison to those obtained with either the PR or BPC method. No significant interobserver variability was found between two observers calculating SRF using the BPC method. The BPC method requires only three ROIs to produce reliable estimates of SRF, was simple to implement, and in this study yielded statistically equivalent results to the PR method with appreciable interobserver agreement. As such, it adds a new reliable method for quality control of monitoring relative kidney function. (orig.)

  1. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin.

    Science.gov (United States)

    Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making the extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. The cryptophyte cells on the filters were disrupted at -80 °C and phosphate buffer was added for extraction at 4 °C, followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes.•Minimal usage of equipment and chemicals, and low labor costs.•Applicable for industrial and biological purposes.

  2. A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.

    Science.gov (United States)

    Kamat, D R; Savant, V V; Sathyanarayana, D N

    1995-03-01

    A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)(2)HP(2)O(7)]4H(2)O and [Co(en)(2)H(2)P(3)O(10)]2H(2)O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate, followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.

  3. Reverse survival method of fertility estimation: An evaluation

    Directory of Open Access Journals (Sweden)

    Thomas Spoorenberg

    2014-07-01

    Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth histories to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency with the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the differences implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that is 'reverse survived' determines the consistency of the estimates. The contribution of the method to the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data.

  4. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
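
    A small simulation in the spirit of this comparison is sketched below for two of the estimators (maximum likelihood and moment estimators of the two-parameter exponential), scored by MSE; the formulas are the standard textbook ones and the paper's full set of seven methods is not reproduced.

```python
import numpy as np

def mle(x):
    """Maximum likelihood for the two-parameter exponential: location = min, scale = mean - min."""
    mu = x.min()
    return mu, x.mean() - mu

def moments(x):
    """Moment estimators: scale from the standard deviation, location from the mean."""
    theta = x.std(ddof=1)
    return x.mean() - theta, theta

rng = np.random.default_rng(1)
mu_true, theta_true, n, reps = 2.0, 1.5, 30, 5000
est = {"MLE": [], "ME": []}
for _ in range(reps):
    x = mu_true + rng.exponential(theta_true, n)
    est["MLE"].append(mle(x))
    est["ME"].append(moments(x))

for name, values in est.items():
    v = np.array(values)
    mse = ((v - [mu_true, theta_true]) ** 2).mean(axis=0)
    print(f"{name}: MSE(location) = {mse[0]:.4f}, MSE(scale) = {mse[1]:.4f}")
```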

  5. A simple encapsulation method for organic optoelectronic devices

    International Nuclear Information System (INIS)

    Sun Qian-Qian; An Qiao-Shi; Zhang Fu-Jun

    2014-01-01

    The performances of organic optoelectronic devices, such as organic light emitting diodes and polymer solar cells, have rapidly improved in the past decade. The stability of organic optoelectronic devices has become a key problem for further development. In this paper, we report a simple encapsulation method for organic optoelectronic devices using a parafilm, based on ternary polymer solar cells (PSCs). The power conversion efficiencies (PCE) of PSCs with and without encapsulation decrease from 2.93% to 2.17% and from 2.87% to 1.16%, respectively, after 168 hours of degradation under an ambient environment. The stability of PSCs can thus be enhanced by encapsulation with a parafilm. The encapsulation method is a competitive choice for organic optoelectronic devices, owing to its low cost and compatibility with flexible devices. (atomic and molecular physics)

  6. Two methods for estimating limits to large-scale wind power generation.

    Science.gov (United States)

    Miller, Lee M; Brunsell, Nathaniel A; Mechem, David B; Gans, Fabian; Monaghan, Andrew J; Vautard, Robert; Keith, David W; Kleidon, Axel

    2015-09-08

    Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits the generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy using the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10^5 km^2 region in the central United States. The WRF simulations yield a maximum generation of 1.1 We m^-2, whereas the VKE method predicts the time series while underestimating the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night, when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree better, to within 20%, because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 We m^-2, with VKE capturing this combination in a comparatively simple way.

  7. Lake and Reservoir Evaporation Estimation: Sensitivity Analysis and Ranking Existing Methods

    Directory of Open Access Journals (Sweden)

    maysam majidi

    2016-02-01

    Full Text Available Introduction: Harvested water is commonly stored in dams, but up to approximately half of it may be lost to evaporation, a huge waste of resources. Estimating evaporation from lakes and reservoirs is not a simple task, as a number of factors can affect the evaporation rate, notably the climate and physiography of the water body and its surroundings. Several methods are currently used to predict evaporation from meteorological data in open water reservoirs. Based on their accuracy and simplicity of application, each of these methods has advantages and disadvantages. Although the evaporation pan method is well known to have significant uncertainties both in magnitude and timing, it is extensively used in Iran because of its simplicity. The evaporation pan provides a measurement of the combined effect of temperature, humidity, wind speed and solar radiation on evaporation. However, it may not be adequate for the reservoir operation/development and water accounting strategies for managing drinking water in arid and semi-arid conditions, which require accurate evaporation estimates. There has not been a consensus on which methods are better to employ, due to the lack of important long-term measured data such as temperature profiles, radiation and heat fluxes in most lakes and reservoirs in Iran. Consequently, we initiated this research to find the best cost-effective evaporation method, with possibly fewer data requirements, for our study area, i.e. the Doosti dam reservoir, which is located in a semi-arid region of Iran. Materials and Methods: Our study site was the Doosti dam reservoir, located between the Iranian and Turkmenistani borders, which was constructed by the Ministry of Water and Land Reclamation of the Republic of Turkmenistan and the Khorasan Razavi Regional Water Board of the Islamic Republic of Iran. Meteorological data including maximum and minimum air temperature and evaporation from a class A pan

  8. Estimating front-wave velocity of infectious diseases: a simple, efficient method applied to bluetongue.

    Science.gov (United States)

    Pioz, Maryline; Guis, Hélène; Calavas, Didier; Durand, Benoît; Abrial, David; Ducrot, Christian

    2011-04-20

    Understanding the spatial dynamics of an infectious disease is critical when attempting to predict where and how fast the disease will spread. We illustrate an approach using a trend-surface analysis (TSA) model combined with a spatial error simultaneous autoregressive model (SAR(err) model) to estimate the speed of diffusion of bluetongue (BT), an infectious disease of ruminants caused by bluetongue virus (BTV) and transmitted by Culicoides. In a first step to gain further insight into the spatial transmission characteristics of BTV serotype 8, we used 2007-2008 clinical case reports in France and TSA modelling to identify the major directions and speed of disease diffusion. We accounted for spatial autocorrelation by combining TSA with a SAR(err) model, which led to a trend SAR(err) model. Overall, BT spread from north-eastern to south-western France. The average trend SAR(err)-estimated velocity across the country was 5.6 km/day. However, velocities differed between areas and time periods, varying between 2.1 and 9.3 km/day. For more than 83% of the contaminated municipalities, the trend SAR(err)-estimated velocity was less than 7 km/day. Our study was a first step in describing the diffusion process for BT in France. To our knowledge, it is the first to show that BT spread in France was primarily local and consistent with the active flight of Culicoides and local movements of farm animals. Models such as the trend SAR(err) models are powerful tools to provide information on direction and speed of disease diffusion when the only data available are date and location of cases.
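
    A minimal trend-surface sketch (ordinary least squares only; the SAR(err) error structure used in the paper is not included) is shown below: fit the first-report date as a polynomial surface of the coordinates and take the local front velocity as the inverse of the gradient magnitude. The coordinates and dates are synthetic.

```python
import numpy as np

def tsa_velocity(x_km, y_km, t_days, degree=2):
    """Fit t = f(x, y) by least squares and return local front velocities (km/day)
    as 1 / |grad f| at each observation (plain OLS; no spatial error model)."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x_km**i * y_km**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, t_days, rcond=None)
    # analytic partial derivatives of the fitted polynomial surface
    dfdx = sum(c * i * x_km**(i - 1) * y_km**j for c, (i, j) in zip(coef, terms) if i > 0)
    dfdy = sum(c * j * x_km**i * y_km**(j - 1) for c, (i, j) in zip(coef, terms) if j > 0)
    slowness = np.hypot(dfdx, dfdy)            # days per km along the steepest direction
    return 1.0 / slowness

# synthetic epidemic spreading from north-east to south-west at ~5 km/day
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 300, 200), rng.uniform(0, 300, 200)
t = (600 - x - y) / (5.0 * np.sqrt(2)) + rng.normal(0, 1, 200)
print(np.median(tsa_velocity(x, y, t)))        # ~ 5 km/day
```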

  9. A statistical method to estimate low-energy hadronic cross sections

    Science.gov (United States)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball being formed. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final state probabilities. Additionally, we examined the formation of strange and charmed mesons as well, using existing data to fit the relevant model parameters.

  10. Human factors estimation methods in nuclear power plant

    International Nuclear Information System (INIS)

    Takano, Kenichi; Yoshino, Kenji; Nagasaka, Akihiko

    1986-01-01

    The definitions and models of mental workload are investigated; the simplest and most reasonable one is the single-channel model, in which the channel has a limited capacity. The capacity depends on the time associated with the brain's information processing, such as recognition by eyes or ears, judgement based on memory or experience, and action. In this paper the mental workload is defined as the relative time needed for such information processing compared to the total capacity. Based on the above definitions, a model experiment was carried out; the main task was the simple addition of two digits displayed on a CRT, with the presentation speed varied from 10 cycles/min to 60 cycles/min. Four techniques to measure the mental workload, (1) the task time analysis method, (2) the physiological method, (3) the secondary task method, and (4) the subjective method, were examined with respect to sensitivity and validity. The values obtained by the physiological, secondary task and subjective methods were compared to the task time analysis results, because the task time analysis method is most faithful to the definitions. (author)

  11. Determination of Urine Albumin by New Simple High-Performance Liquid Chromatography Method.

    Science.gov (United States)

    Klapkova, Eva; Fortova, Magdalena; Prusa, Richard; Moravcova, Libuse; Kotaska, Karel

    2016-11-01

    A simple high-performance liquid chromatography (HPLC) method was developed for the determination of albumin in patients' urine samples without coeluting proteins, and it was compared with the immunoturbidimetric determination of albumin. Urine albumin is an important biomarker in diabetic patients, but part of it is immuno-nonreactive. Albumin was determined by HPLC with UV detection at 280 nm on a Zorbax 300SB-C3 column. Immunoturbidimetric analysis was performed using a commercial kit on the automatic biochemistry analyzer COBAS INTEGRA® 400, Roche Diagnostics GmbH, Mannheim, Germany. The HPLC method was fully validated. No significant interference from other proteins (transferrin, α-1-acid glycoprotein, α-1-antichymotrypsin, antitrypsin, hemopexin) was found. The results from 301 urine samples were compared with the immunochemical determination. We found a statistically significant difference between these methods (P = 0.0001, Mann-Whitney test). A new simple HPLC method was thus developed for the determination of urine albumin without coeluting proteins. Our data indicate that the HPLC method is highly specific and more sensitive than immunoturbidimetry. © 2016 Wiley Periodicals, Inc.

  12. A simple method suitable to study de novo root organogenesis

    Directory of Open Access Journals (Sweden)

    Xiaodong eChen

    2014-05-01

    Full Text Available De novo root organogenesis is the process in which adventitious roots regenerate from detached or wounded plant tissues or organs. In tissue culture, appropriate types and concentrations of plant hormones in the medium are critical for inducing adventitious roots. However, in natural conditions, regeneration from detached organs is likely to rely on endogenous hormones. To investigate the actions of endogenous hormones and the molecular mechanisms guiding de novo root organogenesis, we developed a simple method to imitate natural conditions for adventitious root formation by culturing Arabidopsis thaliana leaf explants on B5 medium without additive hormones. Here we show that the ability of the leaf explants to regenerate roots depends on the age of the leaf and on certain nutrients in the medium. Based on these observations, we provide examples of how this method can be used in different situations, and how it can be optimized. This simple method could be used to investigate the effects of various physiological and molecular changes on the regeneration of adventitious roots. It is also useful for tracing cell lineage during the regeneration process by differential interference contrast observation of β-glucuronidase staining, and by live imaging of proteins labeled with fluorescent tags.

  13. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.

  14. Synthesis of cerium oxide (CeO{sub 2}) nanoparticles using simple co-precipitation method

    Energy Technology Data Exchange (ETDEWEB)

    Farahmandjou, M.; Zarinkamar, M.; Firoozabadi, T. P., E-mail: farahamndjou@iauvaramin.ac.ir [Islamis Azad University, Varamin-Phisva Branch, Department of Physics, Varamin (Iran, Islamic Republic of)

    2016-11-01

    Synthesis of cerium oxide (CeO{sub 2}) nanoparticles was studied by a new and simple co-precipitation method. The cerium oxide nanoparticles were synthesized using cerium nitrate and potassium carbonate precursors. Their physicochemical properties were characterized by high resolution transmission electron microscopy (HRTEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), energy dispersive spectroscopy (EDS), Fourier transform infrared spectroscopy (FTIR) and UV-Vis spectrophotometry. The XRD pattern showed the cubic structure of the cerium oxide nanoparticles. The average particle size of CeO{sub 2} was around 20 nm, as estimated by the XRD technique and direct HRTEM observations. The surface morphological studies from SEM and TEM depicted spherical particles with the formation of clusters. The sharp peaks in the FTIR spectrum confirmed the existence of the CeO{sub 2} stretching mode, and the absorbance peak of the UV-Vis spectrum indicated a bandgap energy of 3.26 eV. (Author)

  15. Estimating return levels from maxima of non-stationary random sequences using the Generalized PWM method

    Directory of Open Access Journals (Sweden)

    P. Ribereau

    2008-12-01

    Full Text Available Since the pioneering work of Landwehr et al. (1979), Hosking et al. (1985) and their collaborators, the Probability Weighted Moments (PWM) method has been very popular, simple and efficient for estimating the parameters of the Generalized Extreme Value (GEV) distribution when modeling the distribution of maxima (e.g., annual maxima of precipitation) in the Identically and Independently Distributed (IID) context. When the IID assumption is not satisfied, a flexible alternative, the Maximum Likelihood Estimation (MLE) approach, offers an elegant way to handle non-stationarities by letting the GEV parameters be time dependent. Despite its qualities, the MLE applied to the GEV distribution does not always provide accurate return level estimates, especially for small sample sizes or heavy tails. These drawbacks are particularly pronounced in some non-stationary situations. To reduce these negative effects, we propose to extend the PWM method to a more general framework that enables us to model temporal covariates and provide accurate GEV-based return levels. Theoretical properties of our estimators are discussed. Small and moderate sample size simulations in a non-stationary context are analyzed, and two brief applications to annual maxima of CO2 and seasonal maxima of cumulated daily precipitation are presented.
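
    For reference, the classical stationary PWM estimator of Hosking et al. (1985) can be sketched as follows (the paper's extension to temporal covariates is not reproduced). The shape parameter k below follows Hosking's convention, i.e. k equals minus the shape parameter of the other common GEV parameterization.

```python
import numpy as np
from math import gamma, log

def gev_pwm(sample):
    """Classical PWM estimates of the stationary GEV parameters (Hosking et al., 1985).
    Returns (location xi, scale alpha, shape k) for
    F(x) = exp(-[1 - k (x - xi)/alpha]**(1/k))."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    c = (2 * b1 - b0) / (3 * b2 - b0) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c**2                      # Hosking's approximation
    alpha = (2 * b1 - b0) * k / (gamma(1 + k) * (1 - 2.0**(-k)))
    xi = b0 + alpha * (gamma(1 + k) - 1) / k
    return xi, alpha, k

# check on synthetic Gumbel-like maxima (shape close to zero)
rng = np.random.default_rng(2)
annual_maxima = 10.0 + 2.0 * rng.gumbel(size=200)
print(gev_pwm(annual_maxima))
```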

  16. SIMPLE METHOD FOR ESTIMATING POLYCHLORINATED BIPHENYL CONCENTRATIONS ON SOILS AND SEDIMENTS USING SUBCRITICAL WATER EXTRACTION COUPLED WITH SOLID-PHASE MICROEXTRACTION. (R825368)

    Science.gov (United States)

    A rapid method for estimating polychlorinated biphenyl (PCB) concentrations in contaminated soils and sediments has been developed by coupling static subcritical water extraction with solid-phase microextraction (SPME). Soil, water, and internal standards are placed in a seale...

  17. A simple transformation independent method for outlier definition.

    Science.gov (United States)

    Johansen, Martin Berg; Christensen, Peter Astrup

    2018-04-10

    The definition and elimination of outliers is a key element for medical laboratories establishing or verifying reference intervals (RIs), especially as the inclusion of just a few outlying observations may seriously affect the determination of the reference limits. Many methods have been developed for the definition of outliers. Several of these methods were developed for the normal distribution, and data often require transformation before outlier elimination. We have developed a non-parametric, transformation-independent outlier definition. The new method relies on drawing reproducible histograms, done by using defined bin sizes above and below the median. The method is compared to the method recommended by CLSI/IFCC, which uses the Box-Cox transformation (BCT) and Tukey's fences for outlier definition. The comparison is done on eight simulated distributions and an indirect clinical dataset. The comparison on simulated distributions shows that, without outliers added, the recommended method in general defines fewer outliers. However, when outliers are added on one side, the proposed method often produces better results. With outliers on both sides the methods are equally good. Furthermore, it is found that the presence of outliers affects the BCT and subsequently the limits determined by the currently recommended method. This is especially seen in skewed distributions. The proposed outlier definition reproduced current RI limits on clinical data containing outliers. We find our simple, transformation-independent outlier detection method as good as or better than the currently recommended methods.
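
    Since the comparison baseline is fully specified, the sketch below implements that baseline, a Box-Cox transformation followed by Tukey's fences, rather than the authors' histogram-based definition, whose bin-size rules are not given in the abstract.

```python
import numpy as np
from scipy import stats

def tukey_boxcox_outliers(values, k=1.5):
    """Flag outliers with Tukey's fences after a Box-Cox transformation
    (the CLSI/IFCC-style baseline the paper compares against).
    Requires strictly positive values, as Box-Cox does."""
    x = np.asarray(values, dtype=float)
    z, _lambda = stats.boxcox(x)
    q1, q3 = np.percentile(z, [25, 75])
    iqr = q3 - q1
    mask = (z < q1 - k * iqr) | (z > q3 + k * iqr)
    return x[mask]

rng = np.random.default_rng(3)
data = np.concatenate([rng.lognormal(3.0, 0.3, 500), [120.0, 150.0]])  # two high outliers
print(tukey_boxcox_outliers(data))
```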

  18. Using container weights to determine irrigation needs: A simple method

    Science.gov (United States)

    R. Kasten Dumroese; Mark E. Montville; Jeremiah R. Pinto

    2015-01-01

    Proper irrigation can reduce water use, water waste, and incidence of disease. Knowing when to irrigate plants in container nurseries can be determined by weighing containers. This simple method is quantifiable, which is a benefit when more than one worker is responsible for irrigation. Irrigation is necessary when the container weighs some target as a proportion of...

  19. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the well-known Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of the soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximization algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is then simply given by the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and thus the required simulation run-time, even at very low BER.
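
    A schematic version of the idea is sketched below: fit a Gaussian mixture to the soft outputs of one transmitted symbol and evaluate the error probability analytically from the fitted components. A fixed number of components replaces the mutual-information-based model-order selection described in the abstract, and the soft samples are synthetic.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def ber_from_soft_samples(soft, n_components=3):
    """Estimate P(soft < 0) for samples that correspond to a transmitted '+1':
    fit a Gaussian mixture to the soft outputs, then sum the component-wise
    Gaussian tail probabilities below the decision threshold 0."""
    gm = GaussianMixture(n_components=n_components, random_state=0)
    gm.fit(soft.reshape(-1, 1))
    means = gm.means_.ravel()
    sigmas = np.sqrt(gm.covariances_.ravel())      # 'full' covariances of 1-D data
    return float(np.sum(gm.weights_ * norm.cdf(0.0, loc=means, scale=sigmas)))

# toy soft demodulator output for the '+1' symbol at moderate SNR
rng = np.random.default_rng(4)
soft = 1.0 + 0.35 * rng.standard_normal(20000)
print(ber_from_soft_samples(soft))                 # close to norm.cdf(-1/0.35) ~ 2.1e-3
```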

  20. A Comparison of Simple Methods to Incorporate Material Temperature Dependency in the Green's Function Method for Estimating Transient Thermal Stresses in Thick-Walled Power Plant Components.

    Science.gov (United States)

    Rouse, James; Hyde, Christopher

    2016-01-06

    The threat of thermal fatigue is an increasing concern for thermal power plant operators due to the increasing tendency to adopt "two-shifting" operating procedures. Thermal plants are likely to remain part of the energy portfolio for the foreseeable future and are under societal pressures to generate in a highly flexible and efficient manner. The Green's function method offers a flexible approach to determine reference elastic solutions for transient thermal stress problems. In order to simplify integration, it is often assumed that Green's functions (derived from finite element unit temperature step solutions) are temperature independent (this is not the case due to the temperature dependency of material parameters). The present work offers a simple method to approximate a material's temperature dependency using multiple reference unit solutions and an interpolation procedure. Thermal stress histories are predicted and compared for realistic temperature cycles using distinct techniques. The proposed interpolation method generally performs as well as (if not better than) the optimum single Green's function or the previously suggested weighting function technique (particularly for large temperature increments). Coefficients of determination are typically above 0.96, and peak stress differences between true and predicted datasets are always less than 10 MPa.
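
    The superposition behind the Green's function approach can be sketched as a discrete Duhamel sum: unit-step stress responses tabulated at a few reference temperatures are interpolated to the instantaneous metal temperature and convolved with the temperature increments of the transient. The reference curves, time grid and linear interpolation rule below are placeholders for illustration, not the paper's calibrated data.

      import numpy as np

      # Unit-step Green's functions (stress response to a unit temperature step), tabulated at
      # two reference temperatures -- placeholder curves for illustration only.
      t = np.linspace(0.0, 600.0, 601)             # time grid, s (assumption)
      G_low = 80.0 * np.exp(-t / 120.0)            # MPa per K at the lower reference temperature
      G_high = 60.0 * np.exp(-t / 150.0)           # MPa per K at the higher reference temperature
      T_low, T_high = 300.0, 550.0                 # reference temperatures, degC (assumption)

      def green(temperature):
          """Interpolate the unit-step response between the two reference solutions."""
          w = np.clip((temperature - T_low) / (T_high - T_low), 0.0, 1.0)
          return (1.0 - w) * G_low + w * G_high

      # Example transient: a ramp from 300 to 550 degC over 300 s, then a hold.
      T_history = np.minimum(300.0 + t * (250.0 / 300.0), 550.0)

      # Discrete Duhamel superposition of the interpolated unit responses.
      sigma = np.zeros_like(t)
      for i in range(1, len(t)):
          dT = T_history[i] - T_history[i - 1]
          if dT != 0.0:
              sigma[i:] += dT * green(T_history[i])[: len(t) - i]

      print(f"peak thermal stress estimate: {sigma.max():.1f} MPa")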

  1. Coalescent methods for estimating phylogenetic trees.

    Science.gov (United States)

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

  2. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    Applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in the nuclear fuel, at the Kyoto University Criticality Assembly. Through this measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
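
    The resampling step can be illustrated generically: gate-wise counts from a single noise measurement are resampled with replacement, the statistic of interest is recomputed for each replicate, and the spread of the replicates gives the standard deviation and confidence interval. The Y statistic used below (variance-to-mean ratio minus one, one ingredient of a Feynman-α analysis) and the synthetic Poisson counts are assumptions for illustration.

      import numpy as np

      def feynman_y(counts):
          """Variance-to-mean ratio minus one for a set of counting-gate totals."""
          counts = np.asarray(counts, dtype=float)
          return counts.var(ddof=1) / counts.mean() - 1.0

      def bootstrap_error(counts, statistic, n_boot=2000, seed=0):
          """Bootstrap standard deviation and 95% confidence interval of a statistic."""
          rng = np.random.default_rng(seed)
          counts = np.asarray(counts)
          replicates = np.array([
              statistic(rng.choice(counts, size=counts.size, replace=True))
              for _ in range(n_boot)
          ])
          return replicates.std(ddof=1), np.percentile(replicates, [2.5, 97.5])

      # Synthetic gate counts standing in for a single reactor-noise measurement (assumption).
      gate_counts = np.random.default_rng(42).poisson(lam=20.0, size=500)
      sd, (lo, hi) = bootstrap_error(gate_counts, feynman_y)
      print(f"Y = {feynman_y(gate_counts):.3f}, bootstrap sd = {sd:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")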

  3. Radiation transport benchmarks for simple geometries with void regions using the spherical harmonics method

    International Nuclear Information System (INIS)

    Kobayashi, K.

    2009-01-01

    In 2001, an international cooperation on 3D radiation transport benchmarks for simple geometries with void regions was performed under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions; six contributions used the discrete ordinates method and only two used the spherical harmonics method. The 3D spherical harmonics program FFT3, based on the finite Fourier transformation method, has been improved for this presentation, and benchmark solutions for the 2D and 3D simple geometries with void regions obtained by FFT2 and FFT3 are given, showing fairly good accuracy. (authors)

  4. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    Science.gov (United States)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over its lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.

  5. Simple measurement of 14C in the environment using gel suspension method

    International Nuclear Information System (INIS)

    Wakabayashi, Genichiro; Oura, Hirotaka; Nagao, Kenjiro; Okai, Tomio; Matoba, Masaru; Kakiuchi, Hideki; Momoshima, Noriyuki; Kawamura, Hidehisa

    1999-01-01

    A gel suspension method using N-lauroyl-L-glutamic-α,γ-dibutylamide as gelling agent and calcium carbonate as the sample form was studied and proved to be a simpler method for measuring 14C in the environment than the ordinary method. Sample volumes of 100, 20 and 7 ml could accommodate 3.6, 0.72 and 0.252 g of carbon, respectively. When the 100 ml and 20 ml vials contained the maximum amount of carbon, the lower limits of detection were about 0.3 dpm/g-C and 0.5 dpm/g-C, respectively. These values showed that the method is able to determine 14C in the environment. Sample readings have remained constant for two years or more, indicating that samples prepared by this method are suitable for repeat measurement and long-term storage. Many samples prepared from the same calcium carbonate showed almost the same values. The concentrations of 14C in the growth rings of a tree and in rice from the environment were determined, and the results agreed with reference values. From these results, this method is a simpler way to measure 14C in the environment than the ordinary method and can be applied to determine 14C in and around nuclear installations. (S.Y.)

  6. A simple and rapid method for measurement of 10B-para-boronophenylalanine in the blood for boron neutron capture therapy using fluorescence spectrophotometry

    International Nuclear Information System (INIS)

    Kashino, Genro; Fukutani, Satoshi; Suzuki, Minoru

    2009-01-01

    10B derived from 10B-para-boronophenylalanine (BPA) and from 10B-borocaptate sodium (BSH) has been detected in blood samples of patients undergoing boron neutron capture therapy (BNCT) using a prompt gamma ray spectrometer and the Inductively Coupled Plasma (ICP) method, respectively. However, the concentration of each compound cannot be ascertained because boron atoms in both molecules are the target in these assays. Here, we propose a simple and rapid method to measure only BPA by detecting fluorescence based on the characteristics of phenylalanine. 10B concentrations of blood samples from humans or mice were estimated from the fluorescence intensities at 275 nm of BPA excited by light of wavelength 257 nm using a fluorescence spectrophotometer. The relationship between fluorescence and BPA concentration showed a positive linear correlation. Moreover, we established adequate conditions for BPA measurement in blood samples containing BPA, and the 10B concentrations of blood samples from BPA-treated mice estimated by our method were similar to those obtained by the ICP method. This new assay will be useful for estimating the BPA concentration in blood samples obtained from patients undergoing BNCT, especially when BSH and BPA are used in combination. (author)

  7. Electrochemical Impedance Spectroscopy—A Simple Method for the Characterization of Polymer Inclusion Membranes Containing Aliquat 336

    Science.gov (United States)

    O'Rourke, Michelle; Duffy, Noel; De Marco, Roland; Potter, Ian

    2011-01-01

    Electrochemical impedance spectroscopy (EIS) has been used to estimate the non-frequency-dependent (static) dielectric constants of base polymers such as poly(vinyl chloride) (PVC), cellulose triacetate (CTA) and polystyrene (PS). Polymer inclusion membranes (PIMs) containing different amounts of PVC or CTA, along with the room temperature ionic liquid Aliquat 336 and plasticizers such as tris(butoxyethyl) phosphate (TBEP), dioctyl sebacate (DOS) and 2-nitrophenyl octyl ether (NPOE), have been investigated. In this study, the complex and abstract method of EIS has been applied in a simple and easy-to-use way, so as to make the method accessible to membrane scientists and engineers who may not possess the detailed knowledge of electrochemistry and interfacial science needed for a rigorous interpretation of EIS results. The EIS data reported herein are internally consistent with a percolation threshold in the dielectric constant at high concentrations of Aliquat 336, which illustrates the suitability of the EIS technique since membrane percolation with ion exchangers is a well-known phenomenon. PMID:24957616

  8. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.

  9. A simple enzymic method for the synthesis of [32P]phosphoenolpyruvate

    International Nuclear Information System (INIS)

    Parra, F.

    1982-01-01

    A rapid and simple enzymic method is described for the synthesis of [32P]phosphoenolpyruvate from [32P]Pi, with a reproducible yield of 74%. The final product was shown to be a good substrate for pyruvate kinase (EC 2.7.1.40). (author)

  10. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  11. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  12. When are genetic methods useful for estimating contemporary abundance and detecting population trends?

    Science.gov (United States)

    David A. Tallmon; Dave Gregovich; Robin S. Waples; C. Scott Baker; Jennifer Jackson; Barbara L. Taylor; Eric Archer; Karen K. Martien; Fred W. Allendorf; Michael K. Schwartz

    2010-01-01

    The utility of microsatellite markers for inferring population size and trend has not been rigorously examined, even though these markers are commonly used to monitor the demography of natural populations. We assessed the ability of a linkage disequilibrium estimator of effective population size (Ne) and a simple capture-recapture estimator of abundance (N) to quantify...

  13. Utility of Penman-Monteith, Priestley-Taylor, reference evapotranspiration, and pan evaporation methods to estimate pasture evapotranspiration

    Science.gov (United States)

    Sumner, D.M.; Jacobs, J.M.

    2005-01-01

    Actual evapotranspiration (ETa) was measured at 30-min resolution over a 19-month period (September 28, 2000-April 23, 2002) from a nonirrigated pasture site in Florida, USA, using eddy correlation methods. The relative magnitude of measured ETa (about 66% of long-term annual precipitation at the study site) indicates the importance of accurate ETa estimates for water resources planning. The time and cost associated with direct measurements of ETa and the rarity of historical measurements of ETa make the use of methods relying on more easily obtainable data desirable. Several such methods (Penman-Monteith (PM), modified Priestley-Taylor (PT), reference evapotranspiration (ET0), and pan evaporation (Ep)) were related to measured ETa using regression methods to estimate PM bulk surface conductance, PT α, the ET0 vegetation coefficient, and the Ep pan coefficient. The PT method, where the PT α is a function of green-leaf area index (LAI) and solar radiation, provided the best relation with ETa (standard error (SE) for daily ETa of 0.11 mm). The PM method, in which the bulk surface conductance was a function of net radiation and vapor-pressure deficit, was slightly less effective (SE=0.15 mm) than the PT method. Vegetation coefficients for the ET0 method (SE=0.29 mm) were found to be a simple function of LAI. Pan coefficients for the Ep method (SE=0.40 mm) were found to be a function of LAI and Ep. Historical or future meteorological, LAI, and pan evaporation data from the study site could be used, along with the relations developed within this study, to provide estimates of ETa in the absence of direct measurements of ETa. Additionally, relations among PM, PT, and ET0 methods and ETa can provide estimates of ETa in other, environmentally similar, pasture settings for which meteorological and LAI data can be obtained or estimated. © 2004 Elsevier B.V. All rights reserved.
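
    For orientation, the sketch below evaluates the standard Priestley-Taylor form ET = α · Δ/(Δ+γ) · (Rn − G)/λ, with α treated as an adjustable coefficient as in the study. The constant values and the example inputs are illustrative assumptions, not the regression relations calibrated in the paper.

      import math

      def priestley_taylor_et(t_air_c, rn_mj_m2_day, g_mj_m2_day=0.0, alpha=1.26):
          """Daily Priestley-Taylor evapotranspiration estimate in mm/day.

          t_air_c      : mean air temperature, degC
          rn_mj_m2_day : net radiation, MJ m-2 day-1
          g_mj_m2_day  : soil heat flux, MJ m-2 day-1
          alpha        : Priestley-Taylor coefficient (treated here as a tunable parameter)
          """
          lam = 2.45                   # latent heat of vaporization, MJ kg-1 (approximate)
          gamma = 0.066                # psychrometric constant, kPa degC-1 (approximate)
          # Slope of the saturation vapour-pressure curve, kPa degC-1 (FAO-56 form).
          es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
          delta = 4098.0 * es / (t_air_c + 237.3) ** 2
          return alpha * delta / (delta + gamma) * (rn_mj_m2_day - g_mj_m2_day) / lam

      # Example: a warm day with 14 MJ m-2 day-1 of net radiation (assumed values).
      print(f"ET estimate: {priestley_taylor_et(26.0, 14.0):.2f} mm/day")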

  14. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    Science.gov (United States)

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted with five tracer test results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
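
    A minimal sketch of the tanks-in-series picture that the CSTR-based methods build on is given below: with first-order Chick-Watson inactivation and first-order disinfectant decay, survival is propagated through N equal completely mixed tanks. The rate constants and residence time are illustrative assumptions, not values from the study, and the sketch is not the Pseg method itself.

      import math

      def n_cstr_log_inactivation(n_tanks, total_residence_time, c_inlet, k_inactivation, k_decay):
          """Log10 inactivation through N equal CSTRs in series.

          Assumes first-order Chick-Watson kinetics (rate = k * C) for the organisms and
          first-order decay of the disinfectant, with steady-state mass balances per tank.
          """
          tau = total_residence_time / n_tanks          # residence time per tank, min
          survival, c = 1.0, c_inlet
          for _ in range(n_tanks):
              c = c / (1.0 + k_decay * tau)                       # disinfectant leaving this tank, mg/L
              survival *= 1.0 / (1.0 + k_inactivation * c * tau)  # organism survival in this tank
          return -math.log10(survival)

      # Example: 5 tanks, 30 min total residence time, 1.0 mg/L inlet chlorine (assumed values).
      print(f"{n_cstr_log_inactivation(5, 30.0, 1.0, k_inactivation=0.5, k_decay=0.1):.2f} log10 inactivation")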

  15. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a preestablished sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model based methods, which tend to

  16. On the Methods for Estimating the Corneoscleral Limbus.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized with high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric ANOVA (Analysis of Variance) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.

  17. A simple method for generation of back-ground-free gamma-ray spectra

    International Nuclear Information System (INIS)

    Kawarasaki, Y.

    1976-01-01

    A simple and versatile method of generating background-free γ-ray spectra is presented. This method is equivalent to the generation of a continuous background baseline over the entire energy range of spectra corresponding to the original ones obtained with a Ge(Li) detector. These background curves cannot generally be expressed in a single, simple analytic form nor in the form of a power series. The background-free spectra thus obtained make it feasible to assign many tiny peaks at the stage of visual inspection of the spectra, which is difficult to do with the original ones. The automatic peak-finding and peak area calculation procedures are both applicable to these background-free spectra. Examples of the application are illustrated. The effect of peak-shape distortion is also discussed. (Auth.)

  18. New Stability Indicating RP-HPLC Method for the Estimation of Cefpirome Sulphate in Bulk and Pharmaceutical Dosage Forms

    OpenAIRE

    Rao, Kareti Srinivasa; Kumar, Keshar Nargesh; Joydeep, Datta

    2011-01-01

    A simple stability indicating reversed-phase HPLC method was developed and subsequently validated for estimation of Cefpirome sulphate (CPS) present in pharmaceutical dosage forms. The proposed RP-HPLC method utilizes a LiChroCART-Lichrosphere100, C18 RP column (250 mm × 4 mm × 5 µm) in an isocratic separation mode with a mobile phase consisting of methanol and water in the proportion of 50:50% (v/v), at a flow rate of 1 ml/min, and the effluent was monitored at 270 nm. The retention time of CPS wa...

  19. An automated background estimation procedure for gamma ray spectra

    International Nuclear Information System (INIS)

    Tervo, R.J.; Kennett, T.J.; Prestwich, W.V.

    1983-01-01

    An objective and simple method has been developed to estimate the background continuum in Ge gamma ray spectra. Requiring no special procedures, the method is readily automated. Based upon the inherent statistical properties of the experimental data itself, nodes, which represent background samples, are located and used to produce an estimate of the continuum. A simple procedure to interpolate between nodes is reported, and a range of rather typical experimental data is presented. All information necessary to implement this technique is given, including the relevant properties of various factors involved in its development. (orig.)
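
    A minimal sketch of the node-and-interpolation idea (not the authors' exact statistical criterion) is shown below: channels whose contents are consistent with Poisson fluctuations of the local median are accepted as background nodes, and the continuum is the linear interpolation between them. The window width and the 2-sigma acceptance test are assumptions.

      import numpy as np

      def estimate_continuum(spectrum, window=20):
          """Estimate a background continuum for a gamma-ray spectrum.

          A channel is accepted as a background node if its content does not exceed the local
          median by more than two Poisson standard deviations; the continuum is the linear
          interpolation between accepted nodes.
          """
          spectrum = np.asarray(spectrum, dtype=float)
          channels = np.arange(spectrum.size)
          nodes = []
          for i in channels:
              lo, hi = max(0, i - window), min(spectrum.size, i + window + 1)
              local_median = np.median(spectrum[lo:hi])
              if spectrum[i] <= local_median + 2.0 * np.sqrt(max(local_median, 1.0)):
                  nodes.append(i)
          nodes = np.array(nodes)
          return np.interp(channels, nodes, spectrum[nodes])

      # Synthetic spectrum: smooth continuum plus two Gaussian peaks and Poisson noise.
      rng = np.random.default_rng(3)
      x = np.arange(1024)
      truth = (200.0 * np.exp(-x / 800.0)
               + 300.0 * np.exp(-0.5 * ((x - 300) / 4.0) ** 2)
               + 150.0 * np.exp(-0.5 * ((x - 700) / 5.0) ** 2))
      spectrum = rng.poisson(truth)
      net = spectrum - estimate_continuum(spectrum)   # background-free spectrum for peak search
      print(f"net counts near the channel-300 peak: {net[295:306].sum():.0f}")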

  20. Comparison of methods for estimating carbon in harvested wood products

    International Nuclear Information System (INIS)

    Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel

    2009-01-01

    There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method), and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from the estimates obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and the production approaches and tends to underestimate carbon accumulation with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in this order. A sensitivity analysis showed that using the "best" available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)

  1. Simple Stacking Methods for Silicon Micro Fuel Cells

    Directory of Open Access Journals (Sweden)

    Gianmario Scotti

    2014-08-01

    Full Text Available We present two simple methods, with parallel and serial gas flows, for the stacking of microfabricated silicon fuel cells with integrated current collectors, flow fields and gas diffusion layers. The gas diffusion layer is implemented using black silicon. In the two stacking methods proposed in this work, the fluidic apertures and gas flow topology are rotationally symmetric and enable us to stack fuel cells without an increase in the number of electrical or fluidic ports or interconnects. Thanks to this simplicity and the structural compactness of each cell, the obtained stacks are very thin (~1.6 mm for a two-cell stack). We have fabricated two-cell stacks with two different gas flow topologies and obtained an open-circuit voltage (OCV) of 1.6 V and a power density of 63 mW·cm−2, proving the viability of the design.

  2. Development of a simple method for the immobilization of anti-thyroxine antibody on polystyrene tubes for use in the measurement of total thyroxine in serum

    International Nuclear Information System (INIS)

    Rani Gnanasekar; Shalaka Paradkar; Vijay Kadwad; Ketaki Bapat; Grace Samuel; Sachdev, S.S.; Sivaprasad, N.

    2015-01-01

    We describe a simple method for the immobilisation of anti-thyroxine antibody onto the surface of polystyrene tubes and a simple assay format for the quantitative estimation of total thyroxine in serum. The immobilisation of anti-thyroxine antibody was achieved through passive adsorption of normal rabbit gamma globulin and anti-rabbit antibody raised in goat as immune bridges. This procedure ensured minimal consumption of the primary and secondary antibodies, which were used as neat sera without precipitation or affinity purification. The developed assay system using these antibody coated tubes covers a range of 0-240 ng/mL of thyroxine with intra- and inter-assay variations of less than 10%. (author)

  3. Simple method of obtaining the band strengths in the electronic spectra of diatomic molecules

    International Nuclear Information System (INIS)

    Gowda, L.S.; Balaji, V.N.

    1977-01-01

    It is shown that the relative band strengths of diatomic molecules, for which the product of the Franck-Condon factor and the r-centroid is approximately equal to 1 for the (0,0) band, can be determined by a simple method that is in good agreement with the smoothed array of experimental values. Such values for the Swan bands of the C2 molecule are compared with the band strengths obtained by the simple method. It is noted that the Swan bands are one of the outstanding features of R- and N-type stars and of the heads of comets

  4. A Simple Combinatorial Codon Mutagenesis Method for Targeted Protein Engineering.

    Science.gov (United States)

    Belsare, Ketaki D; Andorfer, Mary C; Cardenas, Frida S; Chael, Julia R; Park, Hyun June; Lewis, Jared C

    2017-03-17

    Directed evolution is a powerful tool for optimizing enzymes, and mutagenesis methods that improve enzyme library quality can significantly expedite the evolution process. Here, we report a simple method for targeted combinatorial codon mutagenesis (CCM). To demonstrate the utility of this method for protein engineering, CCM libraries were constructed for cytochrome P450 BM3, pfu prolyl oligopeptidase, and the flavin-dependent halogenase RebH; 10-26 sites were targeted for codon mutagenesis in each of these enzymes, and libraries with a tunable average of 1-7 codon mutations per gene were generated. Each of these libraries provided improved enzymes for their respective transformations, which highlights the generality, simplicity, and tunability of CCM for targeted protein engineering.

  5. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah

    2011-01-01

    Full Text Available Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantage is the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the disposal of large volumes of waste reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a colorimeter which is light emitting diode (LED) based. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. Statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 gm/dl to 20 gm/dl. The non-cyanide methods have a precision of ± 0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength instruments or with colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods with the reference was excellent (r=0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous and did not affect the reliability of data determination, and the cost was less than that of the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe and quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers

  6. The simple method to co-register planar image with photograph

    International Nuclear Information System (INIS)

    Jang, Sung June; Kim, Seok Ki; Kang, Keon Wook

    2005-01-01

    Generally, scintigraphic images present highly specific functional information, but planar nuclear medicine images can contain only limited information on the anatomical landmarks required to identify a lesion. In this study, we applied a simple fusion method for planar scintigraphy and plain photography and validated the technique with our own software. We used three fiducial markers containing Tc-99m. We obtained the planar image with a single head gamma camera (ARGUS ADAC laboratory, USA) and the photograph using a general digital camera (CANON JAPAN). The coordinates of the three markers were obtained in the photograph and in the planar scintigraphy image. Based on these points, we applied an affine transformation and then fused the two images. To evaluate the precision, we compared markers at different depths. To find the depth of a lesion, images were acquired at different angles and the real depth was compared with the geometrically calculated depth. At the same depth as the markers, each discordance was less than 1 mm. When the photographs were taken at distances of 1 m and 2 m, a point 30 cm off the center was discordant by 5 mm and 2 mm, respectively. We used this method for localization of remnant thyroid tissue on I-131 whole body scans with photo images. This simple method to co-register planar images with photographs was reliable and easy to use. With this method, we could localize lesions on the planar scintigraphy more accurately with other planar images (i.e. photographs) and predict the depth of the lesion without tomographic imaging
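
    The core registration step can be sketched in a few lines: a 2D affine transformation has six unknowns, so three non-collinear fiducial marker pairs determine it exactly, after which any coordinate in one image can be mapped into the other before fusion. The coordinate values below are made up for illustration.

      import numpy as np

      def affine_from_fiducials(src_pts, dst_pts):
          """Solve the 2x3 affine transform mapping three source points onto three targets."""
          src = np.asarray(src_pts, dtype=float)    # shape (3, 2), e.g. photograph coordinates
          dst = np.asarray(dst_pts, dtype=float)    # shape (3, 2), e.g. scintigram coordinates
          A = np.hstack([src, np.ones((3, 1))])     # homogeneous source coordinates [x, y, 1]
          params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # exact for non-collinear points
          return params.T                            # 2x3 matrix [[a, b, tx], [c, d, ty]]

      def apply_affine(M, pts):
          pts = np.asarray(pts, dtype=float)
          return pts @ M[:, :2].T + M[:, 2]

      # Hypothetical fiducial coordinates in the photograph and in the planar scintigram.
      photo_fiducials = [(120, 80), (400, 95), (260, 330)]
      scinti_fiducials = [(30, 22), (100, 25), (65, 84)]
      M = affine_from_fiducials(photo_fiducials, scinti_fiducials)
      print(apply_affine(M, [(260, 200)]))           # map an arbitrary photo point into scintigram space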

  7. Multiscale methods coupling atomistic and continuum mechanics: analysis of a simple case

    OpenAIRE

    Blanc , Xavier; Le Bris , Claude; Legoll , Frédéric

    2007-01-01

    The description and computation of fine scale localized phenomena arising in a material (during nanoindentation, for instance) is a challenging problem that has given birth to many multiscale methods. In this work, we propose an analysis of a simple one-dimensional method that couples two scales, the atomistic one and the continuum mechanics one. The method includes an adaptive criterion in order to split the computational domain into two subdomains, that are described...

  8. A simple method for affinity purification of radiolabeled monoclonal antibodies

    Energy Technology Data Exchange (ETDEWEB)

    Juweid, M; Sato, J; Paik, C; Onay-Basaran, S; Weinstein, J N; Neumann, R D [National Cancer Inst., Bethesda, MD (United States)

    1993-04-01

    A simple method is described for affinity purification of radiolabeled antibodies using glutaraldehyde-fixed tumor target cells. The cell-bound antibody fraction is removed from the cells by an acid wash and then immediately subjected to buffer-exchange chromatography. The method was applied to the D3 murine monoclonal antibody which binds to a 290 kDa antigen on the surface of Line 10 guinea pig carcinoma cells. No alteration in the molecular size profile was detected after acid washing. Purification resulted in a significant increase in immunoreactivity by an average of 14 ± 47% (SD; range 4-30%). (author).

  9. Analysis of a Kalman filter based method for on-line estimation of atmospheric dispersion parameters using radiation monitoring data

    DEFF Research Database (Denmark)

    Drews, Martin; Lauritzen, Bent; Madsen, Henrik

    2005-01-01

    A Kalman filter method is discussed for on-line estimation of radioactive release and atmospheric dispersion from a time series of off-site radiation monitoring data. The method is based on a state space approach, where a stochastic system equation describes the dynamics of the plume model parameters, and the observables are linked to the state variables through a static measurement equation. The method is analysed for three simple state space models using experimental data obtained at a nuclear research reactor. Compared to direct measurements of the atmospheric dispersion, the Kalman filter estimates are found to agree well with the measured parameters, provided that the radiation measurements are spread out in the cross-wind direction. For less optimal detector placement it proves difficult to distinguish variations in the source term and plume height; yet the Kalman filter yields consistent...
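
    A generic linear Kalman filter with the structure described (a stochastic system equation for the state, a static measurement equation for the observables) can be sketched as below. The random-walk state model, the measurement matrix and the noise covariances are placeholders, not the plume-model parametrisation used in the paper.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a linear Kalman filter.

          x, P : state estimate and its covariance
          z    : measurement vector
          F, H : state transition and measurement matrices
          Q, R : process and measurement noise covariances
          """
          x_pred = F @ x                       # predict through the stochastic system equation
          P_pred = F @ P @ F.T + Q
          S = H @ P_pred @ H.T + R             # update with the static measurement equation
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Toy example: track two slowly varying parameters observed through a fixed 3x2 matrix.
      rng = np.random.default_rng(7)
      F, Q = np.eye(2), 1e-3 * np.eye(2)                     # random-walk dynamics (assumption)
      H = np.array([[1.0, 0.2], [0.5, 1.0], [0.3, 0.8]])
      R = 0.05 * np.eye(3)
      x, P = np.zeros(2), np.eye(2)
      true_state = np.array([1.0, 0.5])
      for _ in range(50):
          z = H @ true_state + 0.2 * rng.standard_normal(3)
          x, P = kalman_step(x, P, z, F, H, Q, R)
      print("estimated state:", np.round(x, 2))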

  10. Is simple nephrectomy truly simple? Comparison with the radical alternative.

    Science.gov (United States)

    Connolly, S S; O'Brien, M Frank; Kunni, I M; Phelan, E; Conroy, R; Thornhill, J A; Grainger, R

    2011-03-01

    The Oxford English Dictionary defines the term "simple" as "easily done" and "uncomplicated". We tested the validity of this terminology in relation to open nephrectomy surgery. Retrospective review of 215 patients undergoing open, simple (n = 89) or radical (n = 126) nephrectomy in a single university-affiliated institution between 1998 and 2002. Operative time (OT), estimated blood loss (EBL), operative complications (OC) and length of stay in hospital (LOS) were analysed. Statistical analysis employed Fisher's exact test and Stata Release 8.2. Simple nephrectomy was associated with shorter OT (mean 126 vs. 144 min; p = 0.002), reduced EBL (mean 729 vs. 859 cc; p = 0.472), lower OC (9 vs. 17%; p = 0.087), and shorter LOS (mean 6 vs. 8 days; p < 0.001). All parameters suggest favourable outcome for the simple nephrectomy group, supporting the use of this terminology. This implies "simple" nephrectomies are truly easier to perform, with fewer complications, than their radical counterpart.

  11. Simple and Inexpensive Methods Development for Determination of Venlafaxine Hydrochloride from Its Solid Dosage Forms by Visible Spectrophotometry

    Directory of Open Access Journals (Sweden)

    K. Raghubabu

    2012-01-01

    Full Text Available Two simple, sensitive and cost-effective visible spectrophotometric methods (M1 and M2) have been developed for the determination of venlafaxine hydrochloride in bulk and tablet dosage forms. Method M1 is based on the formation of a green colored coordination complex of the drug with cobalt thiocyanate, which is quantitatively extractable into nitrobenzene with an absorption maximum of 626.4 nm. Method M2 involves internal salt formation of aconitic anhydride, the dehydration product of citric acid [CIA], with acetic anhydride [Ac2O] to form a colored chromogen with an absorption maximum of 561.2 nm. The calibration graph is linear over the concentration ranges of 10-50 µg/mL and 8-24 µg/mL for methods M1 and M2, respectively. The proposed methods are applied to commercially available tablets, and the results are statistically compared with those obtained by the reference method and validated by recovery studies. The results are found satisfactory and reproducible. These methods are applied successfully for the estimation of venlafaxine hydrochloride in the presence of other ingredients usually present in dosage forms.

  12. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...

  13. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    Science.gov (United States)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
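
    The integral-equation estimator itself is not reproduced here; as a point of reference, the sketch below simply shows the bi-exponential pulse model and a conventional nonlinear least-squares fit to noisy samples, i.e. the kind of baseline such fast methods aim to replace. The amplitude, time constants, start time and noise level are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp_pulse(t, amplitude, tau_rise, tau_decay, t0):
          """Bi-exponential pulse model: flat before t0, then a difference of two exponentials."""
          shifted = np.clip(t - t0, 0.0, None)
          return amplitude * (np.exp(-shifted / tau_decay) - np.exp(-shifted / tau_rise))

      # Synthetic preamplifier pulse: 10 ns rise, 200 ns decay, starting at 50 ns (assumed values).
      t = np.linspace(0.0, 1000.0, 1000)                       # ns
      truth = (1.0, 10.0, 200.0, 50.0)
      rng = np.random.default_rng(5)
      samples = biexp_pulse(t, *truth) + 0.01 * rng.standard_normal(t.size)

      # Conventional baseline: nonlinear least-squares fit of all four parameters.
      popt, _ = curve_fit(biexp_pulse, t, samples, p0=(0.8, 5.0, 150.0, 40.0))
      print("fitted (amplitude, tau_rise, tau_decay, t0):", np.round(popt, 2))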

  14. A simple method of screening for metabolic bone disease

    International Nuclear Information System (INIS)

    Broughton, R.B.K.; Evans, W.D.

    1982-01-01

    The purpose of this investigation was to find a simple method, to be used as an adjunct to the conventional bone scintigram, that could differentiate between decreased bone metabolism or mass (i.e., osteoporosis), normal bone, and the group of conditions of increased bone metabolism or mass, namely osteomalacia, renal osteodystrophy, hyperparathyroidism and Paget's disease. Fogelman's method, using the bone to soft tissue ratios from region-of-interest analysis at 4 hours post injection, was adopted. Initial experience in measuring the count rate density in lumbar vertebrae at 1 hr post injection during conventional bone scintigraphy appears to give a clear indication of the overall rate of bone metabolism. The advantage over whole body retention methods is that the scan performed at the end of the metabolic study will reveal localized bone disease that might otherwise not be anticipated

  15. Exploring simple assessment methods for lighting quality with architecture and design students

    DEFF Research Database (Denmark)

    Madsen, Merete

    2006-01-01

    that cannot be assessed by simple equations or rules-of-thumb. Balancing the many and often contradictory aspects of energy efficiency and high quality lighting design is a complex undertaking, not just for students. The work described in this paper is one result of an academic staff exchange between the Schools of Architecture in Copenhagen and Victoria University of Wellington (New Zealand). The authors explore two approaches to teaching students simple assessment methods that can contribute to making more informed decisions about the luminous environment and its quality. One approach deals with the assessment of luminance ratios in relation to computer work and presents in that context some results from an experiment undertaken to introduce the concept of luminance ratios and preferred luminance ranges to architecture students. In the other approach a Danish method for assessing the luminance

  16. On the reliability of a simple method for scoring phenotypes to estimate heritability: A case study with pupal color in Heliconius erato phyllis , Fabricius 1775 (Lepidoptera, Nymphalidae

    Directory of Open Access Journals (Sweden)

    Adriano Andrejew Ferreira

    2009-01-01

    Full Text Available In this paper, two methods for assessing the degree of melanization of pupal exuviae from the butterfly Heliconius erato phyllis, Fabricius 1775 (Lepidoptera, Nymphalidae, Heliconiini) are compared. In the first, qualitative method, the exuviae were classified by scoring the degree of melanization, whereas in the second, quantitative method, the exuviae were classified by optical density followed by analysis with appropriate software. The heritability (h2) of the degree of melanization was estimated by regression and analysis of variance. The estimates of h2 were similar with both methods, indicating that the qualitative method could be particularly suitable for field work. The low estimates obtained for heritability may have resulted from the small sample size (n = 7-18 broods, including the parents) or from the allocation-priority hypothesis, in which pupal color would be a lower priority trait compared to morphological traits and adequate larval development.

  17. 12 CFR 717.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... simple methods for exercising an opt-out right do not include— (i) Requiring the consumer to write his or... out. (a) In general. You must not use eligibility information about a consumer that you receive from an affiliate to make a solicitation to the consumer about your products or services, unless the...

  18. A simple red-ox titrimetric method for the evaluation of photo ...

    Indian Academy of Sciences (India)

    Unknown

    tal conditions in a relatively short duration in R&D laboratories having basic analytical facilities. The method suggested here could also be adopted to study the photocatalytic activity of other transition metal oxide based catalysts. For establishing this technique, we have monitored a simple one-electron transfer red-ox ...

  19. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  20. A simple method for the prevention of endometrial autolysis in hysterectomy specimens

    OpenAIRE

    Houghton, J P; Roddy, S; Carroll, S; McCluggage, W G

    2004-01-01

    Aims: Uteri are among the most common surgical pathology specimens. Assessment of the endometrium is often difficult because of pronounced tissue autolysis. This study describes a simple method to prevent endometrial autolysis and aid in interpretation of the endometrium.

  1. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  2. A simple method of fabricating mask-free microfluidic devices for biological analysis.

    KAUST Repository

    Yi, Xin; Kodzius, Rimantas; Gong, Xiuqing; Xiao, Kang; Wen, Weijia

    2010-01-01

    We report a simple, low-cost, rapid, and mask-free method to fabricate two-dimensional (2D) and three-dimensional (3D) microfluidic chip for biological analysis researches. In this fabrication process, a laser system is used to cut through paper

  3. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy

  4. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  5. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. Therefore, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.
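
    For reference, the classical Product-Limit (Kaplan-Meier) estimator mentioned above can be written in a few lines; the sketch takes observed times with censoring flags and returns the stepwise reliability estimate. It illustrates the PL baseline, not the advanced method proposed in the paper, and the example lifetimes are made up.

      import numpy as np

      def product_limit(times, failed):
          """Kaplan-Meier (Product-Limit) estimate of the reliability function R(t).

          times  : observed times (failure or right-censoring)
          failed : 1 if the observation is a failure, 0 if right-censored
          Returns the failure times and the reliability just after each of them.
          """
          times = np.asarray(times, dtype=float)
          failed = np.asarray(failed, dtype=int)
          order = np.lexsort((1 - failed, times))   # sort by time, failures before censorings at ties
          times, failed = times[order], failed[order]

          reliability, t_points, r_points = 1.0, [], []
          n_at_risk = len(times)
          for t, d in zip(times, failed):
              if d == 1:
                  reliability *= 1.0 - 1.0 / n_at_risk   # one failure among the units still at risk
                  t_points.append(t)
                  r_points.append(reliability)
              n_at_risk -= 1                             # failure or censoring leaves the risk set
          return np.array(t_points), np.array(r_points)

      # Hypothetical lifetimes (hours); a flag of 0 marks a right-censored unit.
      t_obs = [12, 15, 22, 22, 30, 41, 41, 55, 60, 71]
      d_obs = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
      for ti, ri in zip(*product_limit(t_obs, d_obs)):
          print(f"R({ti:.0f}+) = {ri:.3f}")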

  6. A simple method to evaluate the composition of tissue-equivalent phantom materials

    International Nuclear Information System (INIS)

    Geske, G.

    1977-01-01

    A method is described for calculating the composition of phantom materials with given density and radiation-physical parameters, mixed from components whose chemical composition and effective specific volumes are known. The method is illustrated by an example of a simple composition with three components. The results of this example and some experimental details that must be considered are discussed. (orig.) [de]

  7. Methods to estimate the genetic risk

    International Nuclear Information System (INIS)

    Ehling, U.H.

    1989-01-01

    The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events. The probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only can Mendelian mutations be quantified, but also other types of genetic disorders. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, the estimation of the genetic component of congenital anomalies, anomalies expressed later, and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected in 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the reactor accident at Chernobyl in the vicinity of Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens are emphasized. (author)

  8. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...

  9. A new approach for estimation of the axial velocity using ultrasound

    DEFF Research Database (Denmark)

    Munk, Peter; Jensen, Jørgen Arendt

    2000-01-01

    for the data segment. The benefit of this method is an estimate of the mean axial velocity which is independent of the center frequency of the propagating ultrasound pulse. The estimate will only depend on fs and fprf. Results of the estimation method are presented based on both simple generated RF harmonic...

  10. Validity of a Simple Method for Measuring Force-Velocity-Power Profile in Countermovement Jump.

    Science.gov (United States)

    Jiménez-Reyes, Pedro; Samozino, Pierre; Pareja-Blanco, Fernando; Conceição, Filipe; Cuadrado-Peñafiel, Víctor; González-Badillo, Juan José; Morin, Jean-Benoît

    2017-01-01

    To analyze the reliability and validity of a simple computation method to evaluate force (F), velocity (v), and power (P) output during a countermovement jump (CMJ) suitable for use in field conditions, and to verify the validity of this computation method for determining the CMJ force-velocity (F-v) profile (including unloaded and loaded jumps) in trained athletes. Sixteen high-level male sprinters and jumpers performed maximal CMJs under 6 different load conditions (0-87 kg). A force plate sampling at 1000 Hz was used to record vertical ground-reaction force and derive vertical-displacement data during CMJ trials. For each condition, mean F, v, and P of the push-off phase were determined from both force-plate data (reference method) and simple computation measures based on body mass, jump height (from flight time), and push-off distance, and used to establish the linear F-v relationship for each individual. Mean absolute bias values were 0.9% (± 1.6%), 4.7% (± 6.2%), 3.7% (± 4.8%), and 5% (± 6.8%) for F, v, P, and the slope of the F-v relationship (SFv), respectively. Both methods showed high correlations for F-v-profile-related variables (r = .985-.991). Finally, all variables computed from the simple method showed high reliability, with ICC > .980 and low CV; the simple computation requires only that body mass, push-off distance, and jump height are known.
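
    The simple computation referred to can be sketched from the three field measures it relies on (body mass, jump height and push-off distance). The equations below follow a commonly cited formulation that treats the push-off as a phase of constant acceleration (mean force from the work-energy balance, mean velocity as half the take-off velocity); they and the example numbers are stated as assumptions rather than as the study's exact procedure.

      import math

      def simple_fvp(body_mass_kg, jump_height_m, push_off_distance_m, g=9.81):
          """Mean force, velocity and power of a CMJ push-off from three field measures.

          Assumes the push-off can be treated as constant acceleration:
          F = m*g*(h/hpo + 1), v = sqrt(g*h/2), P = F*v.
          """
          f_mean = body_mass_kg * g * (jump_height_m / push_off_distance_m + 1.0)
          v_mean = math.sqrt(g * jump_height_m / 2.0)
          return f_mean, v_mean, f_mean * v_mean

      # Hypothetical athlete: 75 kg, 0.42 m jump height (from flight time), 0.38 m push-off distance.
      F, v, P = simple_fvp(75.0, 0.42, 0.38)
      print(f"F = {F:.0f} N, v = {v:.2f} m/s, P = {P:.0f} W")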

  11. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

    A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require prior statistical knowledge of the channel and avoids the inversion of a large matrix by using the fast Fourier transform (FFT). The computational complexity is therefore reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and of conventional LMMSE estimation are derived. Numerical results show that the NMSE of the proposed method is very close to that of conventional LMMSE estimation, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of conventional LMMSE estimation in terms of bit error rate (BER).
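    The paper's exact derivation is not reproduced in the record. As a rough illustration of how FFT operations can stand in for a large matrix inverse, the hypothetical sketch below denoises least-squares pilot estimates by transforming them to the time domain, keeping only an assumed number of channel taps, and transforming back; the channel length and pilot model are assumptions, not the authors' algorithm.

```python
import numpy as np

def dft_smoothed_channel_estimate(y_pilots, x_pilots, n_taps):
    """FFT-based smoothing of least-squares OFDM channel estimates.

    y_pilots : received pilot symbols (length-N complex array)
    x_pilots : transmitted pilot symbols (length-N complex array)
    n_taps   : assumed maximum channel impulse-response length (L << N)
    """
    h_ls = y_pilots / x_pilots      # per-subcarrier least-squares estimate
    h_time = np.fft.ifft(h_ls)      # to the (approximate) time domain
    h_time[n_taps:] = 0.0           # discard taps that carry mostly noise
    return np.fft.fft(h_time)       # smoothed frequency response

# toy usage: random 2-tap channel, unit-modulus pilots on N = 64 subcarriers
rng = np.random.default_rng(0)
N, L = 64, 4
h = np.zeros(N, complex)
h[:2] = rng.standard_normal(2) + 1j * rng.standard_normal(2)
H = np.fft.fft(h)
x = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))
y = H * x + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
H_hat = dft_smoothed_channel_estimate(y, x, L)
print(np.mean(np.abs(H_hat - H) ** 2))   # NMSE-like error
```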

  12. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
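    The abstract does not detail the analytic geometry it uses. A common single-camera construction, sketched below under the assumptions of a calibrated pinhole camera (intrinsics K), a known camera pose and a target lying on the ground plane z = 0, back-projects the clicked pixel and intersects the ray with that plane; this is a generic illustration, not necessarily the authors' exact procedure.

```python
import numpy as np

def pixel_to_ground_point(u, v, K, R, cam_pos):
    """Back-project pixel (u, v) and intersect the ray with the ground plane z = 0.

    K       : 3x3 camera intrinsic matrix
    R       : 3x3 rotation, columns = world directions of the camera x, y, z axes
    cam_pos : camera position in world coordinates (length-3)
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R @ ray_cam                              # ray in world frame
    t = -cam_pos[2] / ray_world[2]                       # parameter where z = 0
    return cam_pos + t * ray_world

# toy example: camera 0.3 m above the floor, optical axis tilted 30 degrees downward
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
th = np.deg2rad(30)
R = np.array([[1.0,  0.0,         0.0],
              [0.0, -np.sin(th),  np.cos(th)],
              [0.0, -np.cos(th), -np.sin(th)]])
# a click at the image centre maps to a point roughly 0.52 m ahead on the floor
print(pixel_to_ground_point(320, 240, K, R, np.array([0.0, 0.0, 0.3])))
```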

  13. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  14. Evaluation of three paediatric weight estimation methods in Singapore.

    Science.gov (United States)

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: the Broselow-Luten (BL) tape, the Advanced Paediatric Life Support (APLS) formula (estimated weight (kg) = 2 × (age + 4)) and the Luscombe formula (estimated weight (kg) = 3 × age + 7). We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while the patient's rounded-down age (in years) was used to derive estimated weights using the APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using the Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
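    The two age-based formulas are given explicitly in the abstract, and Bland-Altman statistics are used to compare them with true weights. The sketch below implements both formulas and the percentage-difference summary; the sign convention (positive = underestimation) and the use of true weight as the denominator are assumptions inferred from the abstract, and the example weights are invented.

```python
import numpy as np

def apls_weight(age_years):      # APLS: weight (kg) = 2 * (age + 4)
    return 2.0 * (age_years + 4)

def luscombe_weight(age_years):  # Luscombe: weight (kg) = 3 * age + 7
    return 3.0 * age_years + 7

def bland_altman_percent(true_w, est_w):
    """Mean percentage difference (MPD) and 95% limits of agreement.

    Positive values mean the formula underestimates the true weight,
    matching the sign convention implied in the abstract."""
    pct = 100.0 * (true_w - est_w) / true_w   # assumed denominator: true weight
    mpd = pct.mean()
    half_width = 1.96 * pct.std(ddof=1)
    return mpd, mpd - half_width, mpd + half_width

ages = np.array([1, 3, 5, 8])                 # rounded-down ages (years)
true_w = np.array([10.2, 14.8, 19.5, 27.0])   # hypothetical measured weights (kg)
print(bland_altman_percent(true_w, apls_weight(ages)))
print(bland_altman_percent(true_w, luscombe_weight(ages)))
```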

  15. A simple method of chaos control for a class of chaotic discrete-time systems

    International Nuclear Information System (INIS)

    Jiang Guoping; Zheng Weixing

    2005-01-01

    In this paper, a simple method is proposed for chaos control for a class of discrete-time chaotic systems. The proposed method is built upon state feedback control and the ergodicity characteristic of chaos. The feedback gain matrix of the controller is designed using a simple criterion, so that the control parameters can be selected via the pole placement technique of linear control theory. The new controller only uses the state variable for control and does not require the target equilibrium point in the feedback path. Moreover, the proposed control method can not only overcome the so-called 'odd eigenvalues number limitation' of delayed feedback control, but also drive the chaotic systems to the specified equilibrium points. The effectiveness of the proposed method is demonstrated on a two-dimensional discrete-time chaotic system
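    The record does not give the paper's design criterion or example system. The sketch below illustrates the ingredients it names (state feedback, pole placement, reliance on ergodicity) on the Hénon map, used here purely as an assumed stand-in for a two-dimensional discrete-time chaotic system; the gain is obtained with SciPy's pole-placement routine and control is switched on only once the free-running orbit wanders near the target fixed point.

```python
import numpy as np
from scipy.signal import place_poles

a, b = 1.4, 0.3                                   # classic Henon parameters
# fixed point of x' = 1 - a x^2 + y, y' = b x
xf = (-(1 - b) + np.sqrt((1 - b) ** 2 + 4 * a)) / (2 * a)
yf = b * xf
fp = np.array([xf, yf])

A = np.array([[-2 * a * xf, 1.0],                 # Jacobian at the fixed point
              [b,           0.0]])
B = np.array([[1.0], [0.0]])                      # control enters the x-equation
K = place_poles(A, B, [0.2, -0.2]).gain_matrix    # closed-loop poles inside the unit circle

x = np.array([0.1, 0.1])
for n in range(2000):
    u = 0.0
    if np.linalg.norm(x - fp) < 0.05:             # ergodicity: wait until the orbit is close
        u = -(K @ (x - fp)).item()                # local linear state feedback
    x = np.array([1 - a * x[0] ** 2 + x[1] + u, b * x[0]])
print(x, fp)                                      # orbit should have settled at the fixed point
```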

  16. 12 CFR 571.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... CREDIT REPORTING Affiliate Marketing § 571.25 Reasonable and simple methods of opting out. (a) In general... out, such as a form that can be electronically mailed or processed at an Internet Web site, if the... (15 U.S.C. 6801 et seq.), the affiliate sharing opt-out under the Act, and the affiliate marketing opt...

  17. 16 CFR 680.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... AFFILIATE MARKETING § 680.25 Reasonable and simple methods of opting out. (a) In general. You must not use... a form that can be electronically mailed or processed at an Internet Web site, if the consumer..., 15 U.S.C. 6801 et seq., the affiliate sharing opt-out under the Act, and the affiliate marketing opt...

  18. A Simple Method to Measure Nematodes' Propulsive Thrust and the Nematode Ratchet.

    Science.gov (United States)

    Bau, Haim; Yuan, Jinzhou; Raizen, David

    2015-11-01

    Since the propulsive thrust of microorganisms provides a more sensitive indicator of an animal's health and response to drugs than motility, a simple, high-throughput, direct measurement of the thrust is desired. Taking advantage of the fact that the nematode C. elegans is heavier than water, we devised a simple method to determine the propulsive thrust of the animals by monitoring their velocity when swimming along an inclined plane. We find that the swimming velocity is a linear function of the sine of the inclination angle. This method allows us to determine, among other things, the animals' propulsive thrust as a function of genotype, drugs, and age. Furthermore, taking advantage of the animals' inability to swim up a stiff incline, we constructed a sawtooth ratchet-like track that restricts the animals to swim in a predetermined direction. This research was supported, in part, by NIH NIA Grant 5R03AG042690-02.

  19. The generation of simple compliance boundaries for mobile communication base station antennas using formulae for SAR estimation.

    Science.gov (United States)

    Thors, B; Hansson, B; Törnevik, C

    2009-07-07

    In this paper, a procedure is proposed for generating simple and practical compliance boundaries for mobile communication base station antennas. The procedure is based on a set of formulae for estimating the specific absorption rate (SAR) in certain directions around a class of common base station antennas. The formulae, given for both whole-body and localized SAR, require as input the frequency, the transmitted power and knowledge of antenna-related parameters such as dimensions, directivity and half-power beamwidths. With knowledge of the SAR in three key directions it is demonstrated how simple and practical compliance boundaries can be generated outside of which the exposure levels do not exceed certain limit values. The conservativeness of the proposed procedure is discussed based on results from numerical radio frequency (RF) exposure simulations with human body phantoms from the recently developed Virtual Family.

  20. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques that exploit the invariance property in the time domain, is first used to estimate the pitch frequencies of multiple harmonic signals. Using the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained from the shift-invariance structure in the spatial domain. Compared to existing state-of-the-art algorithms, the proposed method, which requires no 2-D search, is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed...
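    Joint pitch and DOA estimation as described in the paper is more involved than can be reproduced here. As a minimal illustration of the shift-invariance step on which ESPRIT rests, the sketch below estimates the frequencies of a one-dimensional harmonic signal; the Hankel-matrix construction, subspace dimension and toy signal are assumptions.

```python
import numpy as np

def esprit_freqs(x, p, m=20):
    """Estimate p sinusoidal frequencies (cycles/sample) from a 1-D signal x
    using the shift-invariance (ESPRIT) idea on a Hankel data matrix."""
    N = len(x)
    H = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # m x (N-m+1) snapshots
    R = H @ H.conj().T / H.shape[1]                          # sample covariance
    w, V = np.linalg.eigh(R)
    Es = V[:, -p:]                                           # p dominant eigenvectors
    # shift invariance: Es[1:] ≈ Es[:-1] @ Phi; eigenvalues of Phi are exp(j 2 pi f)
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    return np.sort(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi))

# toy harmonic signal: fundamental 0.05 plus its second harmonic, in noise
n = np.arange(400)
x = (np.exp(2j * np.pi * 0.05 * n) + 0.7 * np.exp(2j * np.pi * 0.10 * n)
     + 0.05 * (np.random.randn(400) + 1j * np.random.randn(400)))
print(esprit_freqs(x, p=2))   # approximately [0.05, 0.10]
```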

  1. Ecological toxicity estimation of solid waste products of Tekely Ore Mining and Processing Enterprise of OJSC 'Kaztsink' using biological testing methods

    International Nuclear Information System (INIS)

    Vetrinskaya, N.I.; Goldobina, E.A.; Kosmukhambetov, A.R.; Kulikova, O.V.; Ismailova, Zh.B.; Gurikova, N.D.; Kozlova, N.V.

    2001-01-01

    Results are presented of the estimation of solid waste products using biological testing methods on test objects at different levels of phylogenetic development (simple aquatic animals, algae, higher aquatic plants). A correlation is found between the lead and zinc content of the leaching extract and the response of all test objects. It is concluded that a comprehensive, rapid and economical analysis based on biological testing methods is needed for monitoring industrial waste products and other man-made pollutants. (author)

  2. Use of simple transport equations to estimate waste package performance requirements

    International Nuclear Information System (INIS)

    Wood, B.J.

    1982-01-01

    A method of developing waste package performance requirements for specific nuclides is described. The method is based on: Federal regulations concerning permissible concentrations in solution at the point of discharge to the accessible environment; a simple and conservative transport model; baseline and potential worst-case release scenarios. Use of the transport model enables calculation of maximum permissible release rates within a repository in basalt for each of the scenarios. The maximum permissible release rates correspond to performance requirements for the engineered barrier system. The repository was assumed to be constructed in a basalt layer. For the cases considered, including a well drilled into an aquifer 1750 m from the repository center, little significant advantage is obtained from a 1000-yr as opposed to a 100-yr waste package. A 1000-yr waste package is of importance only for nuclides with half-lives much less than 100 yr which travel to the accessible environment in much less than 1000 yr. Such short travel times are extremely unlikely for a mined repository. Among the actinides, the most stringent maximum permissible release rates are for ²³⁶U and ²³⁴U. A simple solubility calculation suggests, however, that these performance requirements can be readily met by the engineered barrier system. Under the reducing conditions likely to occur in a repository located in basalt, uranium would be sufficiently insoluble that no solution could contain more than about 0.01% of the maximum permissible concentration at saturation. The performance requirements derived from the one-dimensional modeling approach are conservative by at least one to two orders of magnitude. More quantitative three-dimensional modeling at specific sites should enable relaxation of the performance criteria derived in this study. 12 references, 8 figures, 8 tables

  3. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin

    OpenAIRE

    Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making the extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equip...

  4. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    Science.gov (United States)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used because of their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on the IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed SoH estimation method. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. The method shows great potential for practical application, as it only requires static charging curves and can easily be implemented in a battery management system (BMS).
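    As a rough sketch of the processing chain described (incremental-capacity curve, Gaussian smoothing, linear regression on a feature-of-interest position), the code below uses an assumed FOI (the main dQ/dV peak voltage), an arbitrary filter width and hypothetical ageing data; the authors' specific FOIs and regression coefficients are not given in the record.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def ic_peak_position(voltage, capacity, sigma=5):
    """Return the voltage of the main incremental-capacity (dQ/dV) peak
    after Gaussian smoothing of the IC curve."""
    dq_dv = np.gradient(capacity, voltage)           # raw IC curve
    dq_dv_smooth = gaussian_filter1d(dq_dv, sigma)   # suppress measurement noise
    return voltage[np.argmax(dq_dv_smooth)]

def fit_soh_model(peak_voltages, capacities):
    """Linear regression linking the feature-of-interest position to capacity."""
    slope, intercept = np.polyfit(peak_voltages, capacities, 1)
    return lambda v_peak: slope * v_peak + intercept

# hypothetical ageing data: FOI position (V) vs. measured capacity (Ah)
v_peaks = np.array([3.62, 3.64, 3.66, 3.68])
caps = np.array([20.0, 19.4, 18.7, 18.1])
soh_from_ic = fit_soh_model(v_peaks, caps)
print(soh_from_ic(3.65))   # capacity estimate for a new partial charging curve
```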

  5. Present status and future of simple measuring methods; Suishitsu kan`i sokutei gijutsu no genjo to hatten no hoko

    Energy Technology Data Exchange (ETDEWEB)

    Urano, K.; Ishii, S. [Yokohama National University, Yokohama (Japan)

    1998-05-10

    This paper discusses simple measuring methods for water quality. Water quality is measured for various purposes, and it is not always necessary to use official methods that require considerable labor and cost; simple measuring methods are often adopted instead. By applying simple methods, the number of measuring points and the measurement frequency in the main discharge processes can be increased, allowing more detailed evaluation and monitoring. Safety checks of landfill waste and soil pollutants become easier, and daily inspection as well as atmospheric measurements during accidents and disasters can be carried out more readily. Simple methods must offer easy and rapid operation; sensitivity, accuracy and reproducibility suited to the purpose; small apparatus; low measurement cost; and safety with respect to any harmful reagents used. Most simple methods are based on color reactions and include tests using test papers, pack tests, colorimetric tests, and tests using a photoelectric colorimeter. Bacteria detection includes titration methods using dropping bottles, tablets and syringes, and tests using test papers. A measuring kit for the enzyme immunoassay method is commercially available. Cooperation among experts, on-site measuring operators and citizens is also indispensable. 7 refs., 3 figs., 2 tabs.

  6. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method.

  7. Keeping it simple: flowering plants tend to retain, and revert to, simple leaves.

    Science.gov (United States)

    Geeta, R; Dávalos, Liliana M; Levy, André; Bohs, Lynn; Lavin, Mathew; Mummenhoff, Klaus; Sinha, Neelima; Wojciechowski, Martin F

    2012-01-01

    • A wide range of factors (developmental, physiological, ecological) with unpredictable interactions control variation in leaf form. Here, we examined the distribution of leaf morphologies (simple and complex forms) across angiosperms in a phylogenetic context to detect patterns in the directions of changes in leaf shape. • Seven datasets (diverse angiosperms and six nested clades, Sapindales, Apiales, Papaveraceae, Fabaceae, Lepidium, Solanum) were analysed using maximum likelihood and parsimony methods to estimate asymmetries in rates of change among character states. • Simple leaves are most frequent among angiosperm lineages today, were inferred to be ancestral in angiosperms and tended to be retained in evolution (stasis). Complex leaves slowly originated ('gains') and quickly reverted to simple leaves ('losses') multiple times, with a significantly greater rate of losses than gains. Lobed leaves may be a labile intermediate step between different forms. The nested clades showed mixed trends; Solanum, like the angiosperms in general, had higher rates of losses than gains, but the other clades had higher rates of gains than losses. • The angiosperm-wide pattern could be taken as a null model to test leaf evolution patterns in particular clades, in which patterns of variation suggest clade-specific processes that have yet to be investigated fully. © 2011 The Authors. New Phytologist © 2011 New Phytologist Trust.

  8. A simple model for the estimation of rain-induced attenuation along earth-space paths at millimeter wavelengths

    Science.gov (United States)

    Stutzman, W. L.; Dishman, W. K.

    1982-01-01

    A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.

  9. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using powell parameter fitting

    Science.gov (United States)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate the MTF, such as the pinhole method, the slit method and so on. Among them, the knife-edge method is efficient, easy to use and recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the accuracy of the Edge Spread Function (ESF) fit, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Because of its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, reducing the sensitivity to the initial values. Numerical simulation results show the accuracy and robustness of the optimized algorithm under different SNRs, edge directions and tilt angles. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard ISO 12233 knife-edge method for MTF estimation.
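    The sketch below mirrors the described pipeline only loosely: a Fermi-function edge-spread model fitted with SciPy's Powell minimizer, numerical differentiation to the line-spread function, and an FFT to obtain the MTF. The particular Fermi parameterization, initial guesses and toy edge data are assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fermi_esf(x, a, b, c, d):
    """Fermi-function model of the edge spread function (assumed form)."""
    return a / (1.0 + np.exp(-(x - b) / c)) + d

def mtf_from_edge(x, esf_samples):
    """Fit the Fermi ESF with Powell's method, differentiate to the LSF,
    and take the FFT magnitude to obtain an MTF estimate."""
    def loss(p):
        return np.sum((fermi_esf(x, *p) - esf_samples) ** 2)
    p0 = [np.ptp(esf_samples),                       # amplitude guess
          x[np.argmax(np.gradient(esf_samples))],    # edge location guess
          1.0, esf_samples.min()]
    p = minimize(loss, p0, method="Powell").x
    lsf = np.gradient(fermi_esf(x, *p), x)           # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                              # normalize to 1 at zero frequency

# toy edge: blurred step plus noise
x = np.arange(-32.0, 32.0)
esf = 1.0 / (1.0 + np.exp(-(x - 0.3) / 1.8)) + 0.01 * np.random.randn(x.size)
print(mtf_from_edge(x, esf)[:5])
```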

  10. Simple method for absolute calibration of geophones, seismometers, and other inertial vibration sensors

    International Nuclear Information System (INIS)

    Kann, Frank van; Winterflood, John

    2005-01-01

    A simple but powerful method is presented for calibrating geophones, seismometers, and other inertial vibration sensors, including passive accelerometers. The method requires no cumbersome or expensive fixtures such as shaker platforms and can be performed using a standard instrument commonly available in the field. An absolute calibration is obtained using the reciprocity property of the device, based on the standard mathematical model for such inertial sensors. It requires only simple electrical measurement of the impedance of the sensor as a function of frequency to determine the parameters of the model and hence the sensitivity function. The method is particularly convenient if one of these parameters, namely the suspended mass is known. In this case, no additional mechanical apparatus is required and only a single set of impedance measurements yields the desired calibration function. Moreover, this measurement can be made with the device in situ. However, the novel and most powerful aspect of the method is its ability to accurately determine the effective suspended mass. For this, the impedance measurement is made with the device hanging from a simple spring or flexible cord (depending on the orientation of its sensitive axis). To complete the calibration, the device is weighed to determine its total mass. All the required calibration parameters, including the suspended mass, are then determined from a least-squares fit to the impedance as a function of frequency. A demonstration using both a 4.5 Hz geophone and a 1 Hz seismometer shows that the method can yield accurate absolute calibrations with an error of 0.1% or better, assuming no a priori knowledge of any parameters

  11. A Comparison of Simple Methods to Incorporate Material Temperature Dependency in the Green’s Function Method for Estimating Transient Thermal Stresses in Thick-Walled Power Plant Components

    Directory of Open Access Journals (Sweden)

    James Rouse

    2016-01-01

    The threat of thermal fatigue is an increasing concern for thermal power plant operators due to the increasing tendency to adopt “two-shifting” operating procedures. Thermal plants are likely to remain part of the energy portfolio for the foreseeable future and are under societal pressure to generate in a highly flexible and efficient manner. The Green’s function method offers a flexible approach to determine reference elastic solutions for transient thermal stress problems. In order to simplify integration, it is often assumed that Green’s functions (derived from finite element unit temperature step solutions) are temperature independent; this is not the case, because of the temperature dependency of material parameters. The present work offers a simple method to approximate a material’s temperature dependency using multiple reference unit solutions and an interpolation procedure. Thermal stress histories are predicted and compared for realistic temperature cycles using the distinct techniques. The proposed interpolation method generally performs as well as (if not better than) the optimum single Green’s function or the previously suggested weighting function technique, particularly for large temperature increments. Coefficients of determination are typically above 0.96, and peak stress differences between true and predicted datasets are always less than 10 MPa.
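    The paper's formulation is not reproduced in the record; the sketch below is a generic Duhamel-type superposition in which the unit-temperature-step stress response (Green's function) is linearly interpolated between two assumed reference temperatures at each load step. Function names, reference solutions and the toy temperature ramp are all hypothetical.

```python
import numpy as np

def thermal_stress_history(times, temps, ref_temps, ref_greens):
    """Superpose unit-temperature-step responses to predict a transient thermal
    stress history, interpolating the Green's function between reference
    temperatures at each load step.

    times      : time grid (s)
    temps      : metal temperature at each time (same length as times)
    ref_temps  : increasing temperatures of the reference unit-step FE solutions
    ref_greens : list of arrays, unit-step stress responses on the same time grid
    """
    stress = np.zeros_like(times)
    for k in range(1, len(times)):
        dT = temps[k] - temps[k - 1]
        if dT == 0.0:
            continue
        # interpolate the unit-step response between the bracketing references
        w = np.interp(temps[k], ref_temps, np.arange(len(ref_temps)))
        lo, hi = int(np.floor(w)), int(np.ceil(w))
        g = (1 - (w - lo)) * ref_greens[lo] + (w - lo) * ref_greens[hi]
        lag = len(times) - k                   # response starts at the step time
        stress[k:] += dT * g[:lag]
    return stress

# toy usage with two hypothetical unit-step responses (MPa per K)
t = np.linspace(0, 100, 101)
g_300C = 50 * (1 - np.exp(-t / 10))
g_500C = 40 * (1 - np.exp(-t / 15))
T_hist = 300 + 2.0 * t                         # simple heating ramp
print(thermal_stress_history(t, T_hist, [300.0, 500.0], [g_300C, g_500C])[-1])
```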

  12. Simple, reliable, and nondestructive method for the measurement of vacuum pressure without specialized equipment.

    Science.gov (United States)

    Yuan, Jin-Peng; Ji, Zhong-Hua; Zhao, Yan-Ting; Chang, Xue-Fang; Xiao, Lian-Tuan; Jia, Suo-Tang

    2013-09-01

    We present a simple, reliable, and nondestructive method for the measurement of vacuum pressure in a magneto-optical trap. The vacuum pressure is verified to be proportional to the collision rate constant between cold atoms and the background gas with a coefficient k, which can be calculated by means of the simple ideal gas law. The rate constant for loss due to collisions with all background gases can be derived from the total collision loss rate by a series of loading curves of cold atoms under different trapping laser intensities. The presented method is also applicable for other cold atomic systems and meets the miniaturization requirement of commercial applications.

  13. Assessment of Westinghouse Hanford Company methods for estimating radionuclide release from ground disposal of waste water at the N Reactor sites

    International Nuclear Information System (INIS)

    1988-09-01

    This report summarizes the results of an independent assessment by Golder Associates, Inc. of the methods used by Westinghouse Hanford Company (Westinghouse Hanford) and its predecessors to estimate the annual offsite release of radionuclides from ground disposal of cooling and other process waters from the N Reactor at the Hanford Site. This assessment was performed by evaluating the present and past disposal practices and radionuclide migration data within the context of the hydrology, geology, and physical layout of the N Reactor disposal site. The conclusions and recommendations are based upon the available data and simple analytical calculations. Recommendations are provided for conducting more refined analyses and for continued field data collection in support of estimating annual offsite releases. Recommendations are also provided for simple operational and structural measures that should reduce the quantities of radionuclides leaving the site. 5 refs., 9 figs., 1 tab

  14. An engineering method to estimate the junction temperatures of light-emitting diodes in multiple LED application

    International Nuclear Information System (INIS)

    Fu, Xing; Hu, Run; Luo, Xiaobing

    2014-01-01

    Acquiring the junction temperature of a light emitting diode (LED) is essential for performance evaluation, but it is hard to obtain in multiple-LED applications. In this paper, an engineering method is presented to estimate the junction temperatures of LEDs in multiple-LED applications. The method is mainly based on an analytical model, and it can be easily applied with some simple measurements. Simulations and experiments were conducted to prove the feasibility of the method; the deviations between the results obtained by the present method and those from simulations and experiments are less than 2% and 3%, respectively. In the final part of this study, the engineering method was used to analyze the thermal resistances of a street lamp. The material of the lead frame was found to affect the system thermal resistance most strongly, and the choice of solder material depended strongly on the material of the lead frame.

  15. A simple bacterial turbidimetric method for detection of some radurized foods

    International Nuclear Information System (INIS)

    Gautam, S.; Sharma, Arun; Thomas, Paul

    1998-01-01

    A simple and quick method for detection of irradiated food is proposed. The method is based on the principle of microbial contribution to the development of turbidity in a clear medium. It employs measurement of absorbance at 600 nm of the medium after the test commodity has been suspended and shaken in it for a fixed interval. The differences in the bacterial turbidity from irradiated and nonirradiated samples are quite marked so as to allow identification of the irradiated foods like fish, lamb meat, chicken and mushroom. (author)

  16. Accurate and simple measurement method of complex decay schemes radionuclide activity

    International Nuclear Information System (INIS)

    Legrand, J.; Clement, C.; Bac, C.

    1975-01-01

    A simple method for the measurement of the activity is described. It consists of using a well-type sodium iodide crystal whose efficiency for monoenergetic photons has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that the efficiency is very high, near 100%. The associated uncertainty is low, in spite of the large uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the ¹⁵²Eu primary reference.

  17. A SIMPLE AND EFFECTIVE CURSIVE WORD SEGMENTATION METHOD

    NARCIS (Netherlands)

    nicchiotti, G.; Rimassa, S.; Scagliola, C.

    2004-01-01

    A simple procedure for cursive word oversegmentation is presented, which is based on the analysis of the handwritten profiles and on the extraction of "white holes". It follows the policy of using simple rules on complex data and sophisticated rules on simpler data. Experimental results show

  18. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  19. Population Estimation with Mark and Recapture Method Program

    International Nuclear Information System (INIS)

    Limohpasmanee, W.; Kaewchoung, W.

    1998-01-01

    Population estimation provides important information required for insect control planning, especially for control with SIT. Moreover, it can be used to evaluate the efficiency of the control method. Because of the complexity of the calculations, population estimation with mark and recapture methods has not been used widely. This program was therefore developed in QBasic to make the calculations accurate and easier. The program covers 6 methods, following Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's methods. The results were compared with the original methods and found to be accurate and easier to apply
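    The record names the Seber, Jolly-Seber, Jackson, Ito, Hamada and Yamamura estimators without giving their formulas. As a minimal illustration of the mark-and-recapture idea only, the sketch below implements the classic Lincoln-Petersen estimator and its bias-corrected Chapman form, which are simpler than the methods covered by the program.

```python
def lincoln_petersen(marked, captured, recaptured):
    """Closed-population size estimate N ≈ M * C / R."""
    return marked * captured / recaptured

def chapman(marked, captured, recaptured):
    """Bias-corrected (Chapman) version of the Lincoln-Petersen estimator."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# example: 200 insects marked and released, 150 captured later, 30 of them marked
print(lincoln_petersen(200, 150, 30))   # 1000.0
print(chapman(200, 150, 30))            # about 978.1
```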

  20. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, based on nonlinear studentized residuals from the iterative NLLS estimator, is also proposed, and an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.

  1. A novel method for the angiographic estimation of the percentage of spleen volume embolized during partial splenic embolization

    International Nuclear Information System (INIS)

    Ou, Ming-Ching; Chuang, Ming-Tsung; Lin, Xi-Zhang; Tsai, Hong-Ming; Chen, Shu-Yuan; Liu, Yi-Sheng

    2013-01-01

    Purpose: To evaluate the efficacy of estimating the volume of spleen embolized in partial splenic embolization (PSE) by measuring the diameters of the splenic artery and its branches. Materials and methods: A total of 43 liver cirrhosis patients (mean age, 62.19 ± 9.65 years) with thrombocytopenia were included. Among these, 24 patients underwent a follow-up CT scan, which allowed the angiographic estimate to be compared with the measured embolized splenic volume. The estimated splenic embolization volume was calculated by a method based on the diameters of the splenic artery and its branches, measured on 2D angiographic images. Embolization was performed with gelatin sponges. Patients underwent follow-up with serial measurement of blood counts and liver function tests. The actual volume of embolized spleen was determined by computed tomography (CT), measuring the volumes of embolized and non-embolized spleen two months after PSE. Results: PSE was performed without immediate major complications. The mean WBC count significantly increased from 3.81 ± 1.69 × 10³/mm³ before PSE to 8.56 ± 3.14 × 10³/mm³ at 1 week after PSE (P < 0.001). The mean platelet count significantly increased from 62.00 ± 22.62 × 10³/mm³ before PSE to 95.40 ± 46.29 × 10³/mm³ 1 week after PSE (P < 0.001). The measured embolization ratio was positively correlated with the estimated embolization ratio (Spearman's rho [ρ] = 0.687, P < 0.001). The mean difference between the actual embolization ratio and the estimated embolization ratio was 16.16 ± 8.96%. Conclusions: The approach provides a simple method to quantitatively estimate the embolized splenic volume, with a correlation of measured to estimated embolization ratio of Spearman's ρ = 0.687

  2. Study on color difference estimation method of medicine biochemical analysis

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a color that corresponds to the detection item and to the degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, allowing further analysis and diagnosis of the urine. Color is a three-dimensional physical variable related to perception, whereas reflectance is one-dimensional; therefore, estimating the color difference in a urine test can give better precision and convenience than the conventional one-dimensional reflectance test and support an accurate diagnosis. A digital camera can easily capture an image of the urine test paper and is used to carry out the urine biochemical analysis conveniently. In the experiment, color images of urine test paper were taken with a popular color digital camera and saved on a computer on which a simple color space conversion (RGB → XYZ → L*a*b*) and the calculation software were installed. Test samples are graded according to intelligent detection of quantitative color. The images taken at each time point are saved on the computer, so that the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations and at home, so its application prospects are extensive.
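    The abstract names the RGB → XYZ → L*a*b* conversion without giving formulas. The sketch below is a standard sRGB/D65 implementation with the CIE76 ΔE colour difference; the paper may have used a different white point, camera calibration or ΔE variant, so treat it as illustrative only.

```python
import numpy as np

# sRGB (D65) to XYZ matrix and D65 reference white
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([95.047, 100.0, 108.883])

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to CIE L*a*b* (D65)."""
    c = np.asarray(rgb, float) / 255.0
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)   # linearize
    xyz = 100.0 * (M @ c)
    t = xyz / WHITE
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(rgb1, rgb2):
    """CIE76 color difference between two sRGB colors."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# compare a test-pad color against a reference-chart color (invented values)
print(delta_e((186, 145, 90), (172, 120, 70)))
```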

  3. Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition

    Science.gov (United States)

    Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.

    2016-12-01

    Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
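    As a minimal illustration of the spectral mixture analysis building block that MESMA iterates over candidate endmember sets, the sketch below unmixes a single pixel spectrum against one fixed set of GV/NPV/substrate endmembers using non-negative least squares; the endmember spectra and band count are invented for the example, and the operational MESMA workflow is considerably richer.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Estimate GV / NPV / substrate fractions for one pixel spectrum by
    non-negative least squares against a set of endmember spectra, then
    renormalize so the fractions sum to one."""
    E = np.column_stack(endmembers)   # bands x endmembers
    f, _ = nnls(E, pixel)             # non-negative abundances
    return f / f.sum()

# toy 4-band example with assumed endmember spectra (reflectance)
gv  = np.array([0.05, 0.08, 0.45, 0.50])   # green vegetation
npv = np.array([0.20, 0.25, 0.30, 0.35])   # non-photosynthetic vegetation
sub = np.array([0.30, 0.35, 0.40, 0.42])   # substrate
pixel = 0.5 * gv + 0.3 * npv + 0.2 * sub + 0.002 * np.random.randn(4)
print(unmix(pixel, [gv, npv, sub]))         # roughly recovers the mixing fractions
```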

  4. A Study of Simple α Source Preparation Using a Micro-coprecipitation Method

    International Nuclear Information System (INIS)

    Lee, Myung Ho; Park, Taehong; Song, Byung Chul; Park, Jong Ho; Song, Kyuseok

    2012-01-01

    This study presents a rapid and simple α-source preparation method for a radioactive waste sample. The recovery of ²³⁹Pu, ²³²U and ²⁴³Am using a micro-coprecipitation method was over 95%. The α-peak resolution obtained with the micro-coprecipitation method is sufficient to discriminate the Pu and Am isotopes from one another. The determination of the Pu and Am isotopes using the micro-coprecipitation method was applied to a radioactive waste sample, and the activity concentrations obtained were similar to those obtained using the electrodeposition method

  5. A new method for robust video watermarking resistant against key estimation attacks

    Science.gov (United States)

    Mitekin, Vitaly

    2015-12-01

    This paper presents a new method for high-capacity robust digital video watermarking, together with algorithms for embedding and extraction of the watermark based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating two-dimensional "noise-like" watermarking patterns also significantly decreases the watermark collision probability (i.e., the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research provided in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10¹⁴ watermark extraction attempts (compared with 10⁴-10⁶ for known robust watermarking schemes). Experimental results also show that at the lowest embedding intensity the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.

  6. A Simple Estimation of Coupling Loss Factors for Two Flexible Subsystems Connected via Discrete Interfaces

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2016-01-01

    A simple formula is proposed to estimate the Statistical Energy Analysis (SEA) coupling loss factors (CLFs) for two flexible subsystems connected via discrete interfaces. First, the dynamic interactions between two discretely connected subsystems are described as a set of intermodal coupling stiffness terms. It is then found that if both subsystems have high modal density and the interface points all act independently, the intermodal dynamic couplings become dominated by those between different subsystem mode sets. When ensemble- and frequency-averaged, the intermodal coupling stiffness terms reduce to a function of the characteristic dynamic properties of each subsystem, the subsystem mass and the number of interface points. The results can thus be accommodated within the framework of conventional SEA theory to yield a simple CLF formula. Meanwhile, the approach allows the weak-coupling region between the two SEA subsystems to be identified simply and explicitly. The consistency of the present technique with, and its differences from, traditional wave-based SEA solutions are discussed. Finally, numerical examples are given to illustrate the good performance of the present technique.

  7. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower-variance estimates than weighted nonlinear regression for the parameters of mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. They establish that artificial neural networks are powerful tools for estimating parameters of simple compartmental models. (author)
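    As an illustrative sketch of the idea (not the authors' network or noise model), the code below trains a small feed-forward network to map simulated noisy biexponential kinetic curves directly to their four parameters; the sampling times, parameter ranges, noise level and architecture are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0.1, 60.0, 40)            # assumed sampling times (min)

def biexp(p):
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# simulate training curves with known parameters and proportional noise
n_train = 5000
params = np.column_stack([rng.uniform(0.5, 5, n_train),      # a1
                          rng.uniform(0.1, 1, n_train),      # k1
                          rng.uniform(0.1, 2, n_train),      # a2
                          rng.uniform(0.005, 0.05, n_train)])  # k2
curves = np.array([biexp(p) for p in params])
curves *= 1 + 0.02 * rng.standard_normal(curves.shape)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(curves, params)                   # kinetic data in, parameters out

# estimate the parameters of a new noisy curve without iterative regression
true_p = np.array([2.0, 0.4, 0.8, 0.02])
noisy = biexp(true_p) * (1 + 0.02 * rng.standard_normal(t.size))
print(net.predict(noisy.reshape(1, -1)), true_p)
```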

  8. TWO METHODS FOR REMOTE ESTIMATION OF COMPLETE URBAN SURFACE TEMPERATURE

    Directory of Open Access Journals (Sweden)

    L. Jiang

    2017-09-01

    Complete urban surface temperature (TC) is a key parameter for evaluating the energy exchange between the urban surface and the atmosphere. At the present stage, the estimation of TC still requires detailed 3D structure information of the urban surface; however, it is often difficult to obtain the geometric structure and composition of the urban surface together with the corresponding component temperatures, so a concise and efficient remote sensing method for estimating TC is still lacking. Based on four typical urban surface scale models, combined with the Envi-met model, thermal radiation directionality forward modeling and a kernel model, we analyzed the hourly component temperatures and the directional radiation temperatures over a complete day-and-night cycle for two seasons (summer and winter), and calculated the hemispherical integral temperature and TC. By examining the relationship between directional radiation temperature, hemispherical integral temperature and TC, the following conclusions are obtained: (1) for a single observation direction, there is an optimal angle at which the radiation temperature approaches TC, namely a viewing zenith angle of 45-60° with the viewing azimuth near the plane perpendicular to the solar principal plane; the average absolute difference is about 1.1 K in the daytime. (2) Using several (3-5) directional temperatures at different view angles together with the thermal radiation directionality kernel model, the hemispherical integral temperature close to TC can be calculated more accurately, with a mean absolute error of about 1.0 K in the daytime. This study proposes simple and effective strategies for estimating TC by remote sensing, which are expected to improve the quantitative level of remote sensing of the urban thermal environment.

  9. New simple method for fast and accurate measurement of volumes

    International Nuclear Information System (INIS)

    Frattolillo, Antonio

    2006-01-01

    A new simple method is presented, which allows us to measure in just a few minutes, but with reasonable accuracy (less than 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. For this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. The variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, carefully measured by means of a high-precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly recovers its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The proposed method does not require the test gas to be rigorously held at a constant temperature, which is a huge simplification compared with the complex arrangements commonly used in metrology (gas expansion method), which can give extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process. The
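    The record describes the pressure balance only in words; under the isothermal ideal-gas assumption it reduces to the following bookkeeping (symbols are ours, not the paper's): the variable volume V_v initially holds gas at pressure P_0, the gas expands into the evacuated unknown volume V_u, and the feedback then contracts the variable volume by ΔV until P_0 is restored.

```latex
% Isothermal ideal-gas balance for the variable-volume method (assumed notation)
\begin{align*}
  P_0 V_v &= P_1 \,(V_v + V_u)
      && \text{expansion into the evacuated unknown volume}\\
  P_0 V_v &= P_0 \,\bigl((V_v - \Delta V) + V_u\bigr)
      && \text{after the feedback restores } P_0\\
  \Rightarrow\; V_u &= \Delta V
      && \text{the measured contraction gives the unknown volume directly}
\end{align*}
```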

  10. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Energy Technology Data Exchange (ETDEWEB)

    Djenidi, L.; Antonia, R.A. [The University of Newcastle, School of Engineering, Newcastle, NSW (Australia)

    2012-10-15

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (C R (Doklady) Acad Sci R R SS, NS 30:301-305, 1941) (or K41), which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on DNS spectra, points to this scaling being valid also at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower-wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher-wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis holds; this would exclude, for example, the region near a wall. (orig.)

  11. Infrared thermography method for fast estimation of phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Palomo Del Barrio, Elena [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Cadoret, Régis [Centre National de la Recherche Scientifique, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Daranlot, Julien [Solvay, Laboratoire du Futur, 178 Av du Dr Schweitzer, 33608 Pessac (France); Achchaq, Fouzia, E-mail: fouzia.achchaq@u-bordeaux.fr [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France)

    2016-02-10

    Highlights: • Infrared thermography is proposed to determine phase diagrams in record time. • Phase boundaries are detected by means of emissivity changes during heating. • Transition lines are identified by using Singular Value Decomposition techniques. • Different binary systems have been used for validation purposes. - Abstract: Phase change materials (PCM) are widely used today in thermal energy storage applications. Pure PCMs are rarely used because of non adapted melting points. Instead of them, mixtures are preferred. The search of suitable mixtures, preferably eutectics, is often a tedious and time consuming task which requires the determination of phase diagrams. In order to accelerate this screening step, a new method for estimating phase diagrams in record time (1–3 h) has been established and validated. A sample composed by small droplets of mixtures with different compositions (as many as necessary to have a good coverage of the phase diagram) deposited on a flat substrate is first prepared and cooled down to ambient temperature so that all droplets crystallize. The plate is then heated at constant heating rate up to a sufficiently high temperature for melting all the small crystals. The heating process is imaged by using an infrared camera. An appropriate method based on singular values decomposition technique has been developed to analyze the recorded images and to determine the transition lines of the phase diagram. The method has been applied to determine several simple eutectic phase diagrams and the reached results have been validated by comparison with the phase diagrams obtained by Differential Scanning Calorimeter measurements and by thermodynamic modelling.

  12. Simultaneous Estimation of Sitagliptin and Metformin in ...

    African Journals Online (AJOL)

    A rapid, simple, specific and precise high-performance thin-layer chromatography (HPTLC) method was developed for the simultaneous estimation of sitagliptin (STG) and metformin (MET) content in a fixed dose pharmaceutical formulation and also in bulk drug. In the developed method, aluminium backed silica gel 60 ...

  13. A simple micro-photometric method for urinary iodine determination.

    Science.gov (United States)

    Grimm, Gabriele; Lindorfer, Heidelinde; Kieweg, Heidi; Marculescu, Rodrig; Hoffmann, Martha; Gessl, Alois; Sager, Manfred; Bieglmayer, Christian

    2011-10-01

    Urinary iodide concentration (UIC) is useful to evaluate nutritional iodine status. In clinical settings, UIC helps to exclude blocking of the thyroid gland by excessive endogenous iodine if diagnostic or therapeutic administration of radio-iodine is indicated. Therefore, this study established a simple test for the measurement of UIC. UIC was analyzed in urine samples of 200 patients. Samples were pre-treated at 95°C for 45 min with ammonium persulfate in a thermal cycler, followed by a photometric Sandell-Kolthoff reaction (SK) carried out in microtiter plates. For method comparison, UIC was analyzed in 30 samples by inductively coupled plasma mass spectrometry (ICP-MS) as a reference method. Incubation conditions were optimized with respect to recovery. The photometric test correlated well with the reference method (SK=0.91*ICP-MS+1, r=0.962) and presented a functional sensitivity of 20 μg/L. UIC of patient samples covered a broad range of values. The photometric test provides satisfactory results and can be performed with the basic equipment of a clinical laboratory.

  14. A simple two-step method to fabricate highly transparent ITO/polymer nanocomposite films

    International Nuclear Information System (INIS)

    Liu, Haitao; Zeng, Xiaofei; Kong, Xiangrong; Bian, Shuguang; Chen, Jianfeng

    2012-01-01

    Highlights: ► A simple two-step method without a further surface modification step was employed. ► ITO nanoparticles were easily and uniformly dispersed in the polymer matrix. ► The ITO/polymer nanocomposite film had high transparency and UV/IR blocking properties. - Abstract: Transparent functional indium tin oxide (ITO)/polymer nanocomposite films were fabricated via a simple two-step approach. Firstly, functional monodisperse ITO nanoparticles were synthesized via a facile nonaqueous solvothermal method using a bifunctional chemical agent (N-methyl-pyrrolidone, NMP) as both the reaction solvent and the surface modifier. Secondly, the ITO/acrylic polyurethane (PUA) nanocomposite films were fabricated by a simple sol-solution mixing method without any further surface modification step as is often employed traditionally. Flower-like ITO nanoclusters of about 45 nm in diameter were monodispersed in ethyl acetate, and each nanocluster was assembled from nearly spherical nanoparticles with a primary size of 7–9 nm in diameter. The ITO nanoclusters exhibited excellent dispersibility in the PUA polymer matrix, retaining their original size without any further agglomeration. When the loading of ITO nanoclusters reached 5 wt%, the transparent functional nanocomposite film featured a transparency of more than 85% in the visible light region (at 550 nm), while cutting off about 50% of near-infrared radiation at 1500 nm and blocking about 45% of UV at 350 nm. It has potential for transparent functional coating applications.

  15. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, but it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  16. A single-probe heat pulse method for estimating sap velocity in trees.

    Science.gov (United States)

    López-Bernal, Álvaro; Testi, Luca; Villalobos, Francisco J

    2017-10-01

    Available sap flow methods are still far from being simple, cheap and reliable enough to be used beyond very specific research purposes. This study presents and tests a new single-probe heat pulse (SPHP) method for monitoring sap velocity in trees using a single-probe sensor, rather than the multi-probe arrangements used up to now. Based on the fundamental conduction-convection principles of heat transport in sapwood, convective velocity (Vh) is estimated from the temperature increase in the heater after the application of a heat pulse (ΔT). The method was validated against measurements performed with the compensation heat pulse (CHP) technique in field trees of six different species. To do so, a dedicated three-probe sensor capable of simultaneously applying both methods was produced and used. Experimental measurements in the six species showed an excellent agreement between SPHP and CHP outputs for moderate to high flow rates, confirming the applicability of the method. In relation to other sap flow methods, SPHP presents several significant advantages: it requires low power inputs, it uses technically simpler and potentially cheaper instrumentation, the physical damage to the tree is minimal and artefacts caused by incorrect probe spacing and alignment are removed. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  17. The simple procedure for the fluxgate magnetometers calibration

    Science.gov (United States)

    Marusenkov, Andriy

    2014-05-01

    Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated to an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use free rotation of the sensor in the calibration field, followed by complicated data processing procedures for the numerical solution of high-order equation sets. The chosen approach also exploits the Earth's magnetic field as the calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being rotated freely. This allows the use of very simple and straightforward linear computation formulas and, as a result, more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of ±45° and ±135° with respect to the total field vector. With this four-position approach the estimates of the non-orthogonality angles are invariant to the zero offsets and to non-linearity of the transfer functions of the components. The experimental justification of the proposed method by means of the
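
    A minimal numerical sketch of the scale-factor step described above: with one component aligned parallel and then anti-parallel to the total field F (measured, for example, by a scalar magnetometer), the zero offset cancels and the scale factor follows from a single linear formula. The numbers below are illustrative, not taken from the paper.

      import numpy as np

      def scale_factor(reading_parallel, reading_antiparallel, total_field):
          """Scale factor of one component from the two aligned positions.

          With the component parallel to the field the reading is k*F + b,
          anti-parallel it is -k*F + b, so the offset b cancels out:
              k = (R_par - R_anti) / (2*F)
          """
          return (reading_parallel - reading_antiparallel) / (2.0 * total_field)

      # Example: a scalar magnetometer gives F = 48650 nT, and the X component
      # reads 48710 nT and -48620 nT in the two aligned positions.
      F = 48650.0
      k_x = scale_factor(48710.0, -48620.0, F)
      print(f"X-axis scale factor: {k_x:.5f}")   # close to 1 for a well-calibrated axis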

  18. A simple and rapid method of purification of impure plutonium oxide

    International Nuclear Information System (INIS)

    Michael, K.M.; Rakshe, P.R.; Dharmpurikar, G.R.; Thite, B.S.; Lokhande, Manisha; Sinalkar, Nitin; Dakshinamoorthy, A.; Munshi, S.K.; Dey, P.K.

    2007-01-01

    Impure plutonium oxides are conventionally purified by dissolution in HNO3 in the presence of HF, followed by ion exchange separation and oxalate precipitation. The method is tedious, and the use of HF enhances corrosion of the plant equipment. A simple and rapid method has been developed for the purification of the oxide by leaching with various reagents such as DM water, NaOH and oxalic acid. A combination of DM water followed by hot leaching with 0.4 M oxalic acid could bring down the impurity levels in the oxide to the level required for fuel fabrication. (author)

  19. A simple method for percutaneous resection of osteoid osteoma

    International Nuclear Information System (INIS)

    Kamrani, Reza S.; Kiani, K.; Mazlouman, Shahriar J.

    2007-01-01

    To introduce a method that can be performed with the minimal equipment available to most orthopedic surgeons and precludes extensive anesthetic and ablative requirements. A percutaneous lead tunnel was first established in the cortex next to the nidus under computerized tomography guidance with local anesthesia; the nidus was then curetted in the operating room through the lead tunnel. The study was performed in Shariati Hospital in Tehran, Iran, from September 2002 to December 2005. Nineteen patients were treated with this method, with a 94.7% cure rate. The diagnosis was histologically confirmed in 16 cases (84.2%). Failure occurred in one patient. The patients had a mean follow-up of 13.5 months with no recurrence of symptoms and a mean hospitalization time of 1.6 days. This technique is simple, minimally invasive and effective. It needs no special equipment and provides material for tissue diagnosis. (author)

  20. A simple and accurate method for the quality control of the I.I.-DR apparatus using the CCD camera

    International Nuclear Information System (INIS)

    Igarashi, Hitoshi; Shiraishi, Akihisa; Kuraishi, Masahiko

    2000-01-01

    With the advancing development of CCD cameras, the I.I.-DR apparatus has been introduced into the x-ray fluoroscopy television system. Consequently, quality control of the system has become a complicated task. We developed a simple, accurate method for quality control of the I.I.-DR apparatus using the CCD camera. Experiments were performed separately for the imager system [laser imager, DDX (dynamic digital x-ray system)] and the imaging system (I.I., ND-filter, IRIS, CCD camera). Quality control of the imager system was done by simply examining both input and output characteristics with a sliding pattern. Quality control of the imaging system was conducted by estimating the AVE (average volume element), which was obtained using a phantom under constant conditions. The results indicated that this simplified method is useful as a weekly quality control check of the I.I.-DR apparatus using the CCD camera. (author)

  1. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signals and the video are transmitted via Wi-Fi. We show that, with a properly calibrated camera and the proposed prototype procedures, the user can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
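
    One plausible instance of such 'simple analytic geometry' is back-projecting the clicked pixel through a calibrated pinhole camera and intersecting the viewing ray with a known plane (here the ground plane). The sketch below is a hedged illustration under that assumption and is not necessarily the configuration used by the authors.

      import numpy as np

      def pixel_to_ground_point(u, v, K, R, cam_pos):
          """Intersect the viewing ray through pixel (u, v) with the plane z = 0.

          K       : 3x3 camera intrinsics, R: 3x3 camera-to-world rotation,
          cam_pos : camera position in world coordinates (x, y, z), z > 0.
          """
          ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
          ray_world = R @ ray_cam                              # ray in world frame
          t = -cam_pos[2] / ray_world[2]                       # reach the plane z = 0
          return cam_pos + t * ray_world

      # Example: camera 0.3 m above the ground, tilted 30 degrees downwards.
      K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
      tilt = np.deg2rad(30.0)
      # columns are the camera x, y, z axes expressed in world coordinates
      R = np.array([[1.0, 0.0, 0.0],
                    [0.0, -np.sin(tilt), np.cos(tilt)],
                    [0.0, -np.cos(tilt), -np.sin(tilt)]])
      print(pixel_to_ground_point(320.0, 300.0, K, R, np.array([0.0, 0.0, 0.3])))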

  2. An easy 'one tube' method to estimate viability of Cryptosporidium oocysts using real-time qPCR

    NARCIS (Netherlands)

    Paziewska-Harris, A.; Schoone, G.; Schallig, H. D. F. H.

    2016-01-01

    Viability estimation of the highly resistant oocysts of Cryptosporidium remains a key issue for the monitoring and control of this pathogen. We present here a simple 'one tube' quantitative PCR (qPCR) protocol for viability estimation using a DNA extraction protocol which preferentially solubilizes

  3. Investment Volatility: A Critique of Standard Beta Estimation and a Simple Way Forward

    OpenAIRE

    Chris Tofallis

    2011-01-01

    Beta is a widely used quantity in investment analysis. We review the common interpretations that are applied to beta in finance and show that the standard method of estimation - least squares regression - is inconsistent with these interpretations. We present the case for an alternative beta estimator which is more appropriate, as well as being easier to understand and to calculate. Unlike regression, the line fit we propose treats both variables in the same way. Remarkably, it provides a slo...
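
    The record is truncated before the estimator is spelled out; as a hedged illustration of a line fit that treats both variables in the same way, the sketch below uses the standardised-major-axis (geometric-mean regression) slope, i.e. the sign of the correlation times the ratio of standard deviations. This may not be Tofallis's exact formulation.

      import numpy as np

      def symmetric_beta(stock_returns, market_returns):
          """Slope of the standardised major axis (geometric-mean regression):
          sign of the correlation times the ratio of standard deviations.
          Unlike OLS, swapping the two variables simply inverts the slope."""
          r = np.corrcoef(stock_returns, market_returns)[0, 1]
          return np.sign(r) * np.std(stock_returns, ddof=1) / np.std(market_returns, ddof=1)

      # Example with simulated monthly returns
      rng = np.random.default_rng(1)
      market = rng.normal(0.005, 0.04, 120)
      stock = 1.3 * market + rng.normal(0.0, 0.03, 120)
      print(f"symmetric beta: {symmetric_beta(stock, market):.2f}")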

  4. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods

  5. Methods for the estimation of uranium ore reserves

    International Nuclear Information System (INIS)

    1985-01-01

    The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry

  6. A simple scintigraphic method for continuous monitoring of gastric emptying

    Energy Technology Data Exchange (ETDEWEB)

    Lipp, R.W.; Hammer, H.F.; Schnedl, W.; Dobnig, H.; Passath, A.; Leb, G.; Krejs, G.J. (Graz Univ. (Austria). Div. of Nuclear Medicine and Endocrinology)

    1993-03-01

    A new and simple scintigraphic method for the measurement of gastric emptying was developed and validated. The test meal consists of 200 g potato mash mixed with 0.5 g Dowex 2X8 particles (mesh 20-50) labelled with 37 MBq (1 mCi) technetium-99m. After ingestion of the meal, sequential dynamic 15-s anteroposterior exposures in the supine position are obtained for 90 min. A second recording sequence of 20 min is added after a 30-min interval. The results can be displayed as immediate cine-replay, as time-activity diagrams and/or as activity retention values. Complicated mathematical fittings are not necessary. The method lends itself equally to the testing of in- and outpatients. (orig.).

  7. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    Full Text Available A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects from very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.
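
    A minimal sketch of fitting an NHPP-type software reliability growth model to cumulative failure counts. The Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) and the toy data are assumptions used only for illustration; the paper's models, which add test cases as a covariate and Bayesian inference, are more elaborate.

      import numpy as np
      from scipy.optimize import curve_fit

      def goel_okumoto(t, a, b):
          """Expected cumulative number of defects found by time t."""
          return a * (1.0 - np.exp(-b * t))

      # Hypothetical cumulative failure counts at the end of each test week
      t = np.arange(1, 11, dtype=float)
      cum_failures = np.array([3, 6, 8, 11, 12, 14, 15, 15, 16, 17], dtype=float)

      (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cum_failures, p0=(20.0, 0.1))
      remaining = a_hat - goel_okumoto(t[-1], a_hat, b_hat)
      print(f"estimated total defects a = {a_hat:.1f}, remaining ≈ {remaining:.1f}")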

  8. A simple optical method for measuring the vibration amplitude of a speaker

    OpenAIRE

    UEDA, Masahiro; YAMAGUCHI, Toshihiko; KAKIUCHI, Hiroki; SUGA, Hiroshi

    1999-01-01

    A simple optical method has been proposed for measuring the vibration amplitude of a speaker vibrating with a frequency of approximately 10 kHz. The method is based on a multiple reflection between a vibrating speaker plane and a mirror parallel to that speaker plane. The multiple reflection can magnify a dispersion of the laser beam caused by the vibration, and easily make a measurement of the amplitude. The measuring sensitivity ranges between sub-microns and 1 mm. A preliminary experim...

  9. A simple method to downscale daily wind statistics to hourly wind data

    OpenAIRE

    Guo, Zhongling

    2013-01-01

    Wind is the principal driver in wind erosion models, and hourly wind speed data are generally required for precise wind erosion modeling. In this study, a simple method to generate hourly wind speed data from daily wind statistics (daily average and maximum wind speeds together, or daily average wind speed only) was established. A typical windy location with 3285 days (9 years) of measured hourly wind speed data was used to validate the downscaling method. The results showed that the over...
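
    The record does not state the downscaling formula itself, so the sketch below shows only one common simple scheme: a sinusoidal diurnal pattern scaled so that the hourly series reproduces the daily mean and maximum, with a mid-afternoon peak. It is an illustrative assumption, not the method validated in the paper.

      import numpy as np

      def hourly_from_daily(mean_speed, max_speed, peak_hour=15):
          """Generate 24 hourly wind speeds whose average equals the daily mean
          and whose maximum equals the daily maximum (illustrative scheme only).
          If max_speed - mean_speed exceeds mean_speed, negative values would
          appear and need separate handling."""
          hours = np.arange(24)
          shape = np.cos(2 * np.pi * (hours - peak_hour) / 24)   # in [-1, 1], peak at peak_hour
          amplitude = max_speed - mean_speed
          return mean_speed + amplitude * shape

      hourly = hourly_from_daily(mean_speed=5.0, max_speed=9.0)
      print(f"mean {hourly.mean():.2f} m/s, max {hourly.max():.2f} m/s")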

  10. Simulation Opportunity Index, A Simple and Effective Method to Boost the Hydrocarbon Recovery

    KAUST Repository

    Saputra, Wardana

    2016-01-01

    This paper describes how the SOI software serves as a simple, fast, and accurate way to obtain higher hydrocarbon production than the trial-and-error method and previous studies in two different fields located offshore Indonesia. On one hand, the proposed method could save money by minimizing the required number of wells. On the other hand, it could maximize profit by maximizing recovery.

  11. Online and Batch Supervised Background Estimation via L1 Regression

    KAUST Repository

    Dutta, Aritra

    2017-11-23

    We propose a surprisingly simple model for supervised video background estimation. Our model is based on $\ell_1$ regression. As existing methods for $\ell_1$ regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive experiments that our model and methods match or outperform the state-of-the-art online and batch methods in virtually all quantitative and qualitative measures.
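
    A minimal sketch of the iteratively reweighted least squares option mentioned above, applied to a toy one-parameter background estimation problem with sparse outliers; this is a generic IRLS routine with an assumed smoothing constant, not the authors' implementation.

      import numpy as np

      def l1_regression_irls(X, y, n_iter=50, eps=1e-6):
          """Minimise ||X w - y||_1 by iteratively reweighted least squares.

          Each iteration solves a weighted least-squares problem with weights
          1 / max(|residual|, eps), which approximates the L1 objective.
          """
          w = np.linalg.lstsq(X, y, rcond=None)[0]          # L2 start
          for _ in range(n_iter):
              r = X @ w - y
              weights = 1.0 / np.maximum(np.abs(r), eps)
              WX = X * weights[:, None]
              w = np.linalg.solve(X.T @ WX, WX.T @ y)
          return w

      # Toy background-estimation flavour: recover the constant background level
      # of one pixel across 200 frames corrupted by sparse foreground spikes.
      rng = np.random.default_rng(0)
      pixel_values = 100.0 + rng.normal(0, 1, 200)
      pixel_values[::10] += 80.0                            # sparse foreground spikes
      X = np.ones((200, 1))
      print(f"L2 estimate: {pixel_values.mean():.1f}, "
            f"L1 estimate: {l1_regression_irls(X, pixel_values)[0]:.1f}")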

  13. Assessment of the breast volume by a new simple formula

    Directory of Open Access Journals (Sweden)

    El-Oteify Mahmoud

    2006-01-01

    Full Text Available Background: With the recent introduction of improved techniques for plastic surgery of the breast and increased public awareness of these procedures, plastic surgeons are continuously trying to improve their methods and results. Assessment of the breast volume is an important issue prior to the use of breast implants in any aesthetic or reconstructive breast surgery. Previous methods to measure breast volume have included use of a simple bra and breast cup size, cumbersome fluid displacement, appliances and approximate visual estimation. Objectives: In this work we have tried to develop an easy method for assessment of the breast volume, for both the patient and the surgeon, through a simple mathematical formula. Materials and Methods: Fifty-two volunteers were included in this study. For each, general parameters including age, weight and height were recorded. Local breast measurements and water volume displacement were also recorded. Results: The collected data were statistically correlated. Using the analyzed data, the breast volume was calculated through a simple and direct formula on the basis of the breast circumference. Conclusion: Our method has, as its principle, the use of an accurate and simple formula based on only one measurement. This is easy for both the patient and the plastic surgeon. This equation not only offers a significant technical advantage for the surgeon, but also provides a universal standardization of the breast volume.

  14. The simple solutions concept: a useful approach to estimate deviation from ideality in solvent extraction

    International Nuclear Information System (INIS)

    Sorel, C.; Pacary, V.

    2010-01-01

    The solvent extraction systems devoted to uranium purification, from crude ore to spent fuel, involve concentrated solutions in which deviation from ideality cannot be neglected. The simple solutions concept, based on the behaviour of isopiestic solutions, has been applied to quantify the activity coefficients of metals and acids in the aqueous phase in equilibrium with the organic phase. This approach has been validated on various solvent extraction systems such as trialkylphosphates, malonamides and acidic extracting agents, both in batch experiments and in counter-current tests. Moreover, the concept has been successfully used to estimate the aqueous density, which is useful for quantifying volume variations and for assessing critical parameters such as the number density of nuclides. (author)

  15. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    Science.gov (United States)

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
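
    The two-point core of the method can be written in a few lines: the retention times of two reference substances measured on the local column define a linear map from standard retention times to predicted local ones. The numbers below are hypothetical, and the standardisation and validation steps of LCTRS are not reproduced here.

      def lctrs_predict(t_std_refs, t_obs_refs, t_std_target):
          """Two-point linear calibration of retention times.

          t_std_refs   : standard retention times (min) of the two reference substances
          t_obs_refs   : retention times of the same two substances on the local column
          t_std_target : standard retention time of the compound to be predicted
          """
          (s1, s2), (o1, o2) = t_std_refs, t_obs_refs
          slope = (o2 - o1) / (s2 - s1)
          intercept = o1 - slope * s1
          return slope * t_std_target + intercept

      # Hypothetical values: two references and one unknown peak
      print(f"predicted tR = {lctrs_predict((8.0, 20.0), (8.6, 21.5), 14.0):.2f} min")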

  16. Wrist arthrography: a simple method

    Energy Technology Data Exchange (ETDEWEB)

    Berna-Serna, Juan D.; Reus, Manuel; Alonso, Jose [Virgen de la Arrixaca University Hospital, Department of Radiology, El Palmar (Murcia) (Spain); Martinez, Francisco; Domenech-Ratto, Gines [University of Murcia, Department of Human Anatomy, Faculty of Medicine, Murcia (Spain)

    2006-02-01

    A technique of wrist arthrography is presented using an adhesive marker-plate with radiopaque coordinates to identify precisely sites for puncture arthrography of the wrist and to obviate the need for fluoroscopic guidance. Radiocarpal joint arthrography was performed successfully in all 24 cases, 14 in the cadaveric wrists and 10 in the live patients. The arthrographic procedure described in this study is simple, safe, and rapid, and has the advantage of precise localisation of the site for puncture without need for fluoroscopic guidance. (orig.)

  17. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...

  18. Filtration Isolation of Nucleic Acids: A Simple and Rapid DNA Extraction Method.

    Science.gov (United States)

    McFall, Sally M; Neto, Mário F; Reed, Jennifer L; Wagner, Robin L

    2016-08-06

    FINA, filtration isolation of nucleic acids, is a novel extraction method which utilizes vertical filtration via a separation membrane and absorbent pad to extract cellular DNA from whole blood in less than 2 min. The blood specimen is treated with detergent, mixed briefly and applied by pipet to the separation membrane. The lysate wicks into the blotting pad due to capillary action, capturing the genomic DNA on the surface of the separation membrane. The extracted DNA is retained on the membrane during a simple wash step wherein PCR inhibitors are wicked into the absorbent blotting pad. The membrane containing the entrapped DNA is then added to the PCR reaction without further purification. This simple method does not require laboratory equipment and can be easily implemented with inexpensive laboratory supplies. Here we describe a protocol for highly sensitive detection and quantitation of HIV-1 proviral DNA from 100 µl whole blood as a model for early infant diagnosis of HIV that could readily be adapted to other genetic targets.

  19. Sensitive and simple method for measuring wire tensions

    International Nuclear Information System (INIS)

    Atac, M.; Mishina, M.

    1982-08-01

    Measuring the tension of wires in drift chambers and multiwire proportional chambers after construction is an important process, because wires sometimes come loose after soldering, crimping or gluing. Wires with tensions below a required minimum value must be identified to prevent electrostatic instabilities. Several methods have been reported in which the wires were excited either with a sinusoidal current in a magnetic field or with a sinusoidal voltage electrostatically coupled to the wire, searching for the resonant frequency at which the wires vibrate mechanically. The vibration is then detected visually, optically, or with a magnetic pick-up directly touching the wires. These approaches are only applicable to the usual multiwire chamber with open access to the wire plane, and they need fairly large excitation currents to induce a detectable vibration in the wires. Here we report a very simple method that can be used for any type of wire chamber or proportional tube system for measuring wire tension. Only a very small excitation current is required to obtain a large enough signal, because the method detects the emf induced across the wire. A sine-wave oscillator and a digital voltmeter are sufficient, aside from a permanent magnet to provide the magnetic field around the wire. A useful application of this method to a large system is suggested
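
    Once the resonant frequency is found with such a setup, the tension follows from the standard stretched-string relation T = μ(2Lf)², assuming the fundamental mode is excited; the sketch below uses illustrative wire parameters.

      def wire_tension(resonant_freq_hz, length_m, mass_per_length_kg_m):
          """Tension of a wire from its fundamental resonance frequency.

          For a stretched string the fundamental is f = (1 / (2 L)) * sqrt(T / mu),
          so T = mu * (2 * L * f)**2.
          """
          return mass_per_length_kg_m * (2.0 * length_m * resonant_freq_hz) ** 2

      # Example: a 1.5 m long, 25 um diameter gold-plated tungsten wire
      # (mass per unit length roughly 9.5e-6 kg/m) resonating at 80 Hz.
      print(f"tension ≈ {wire_tension(80.0, 1.5, 9.5e-6):.3f} N")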

  20. Interior Gradient Estimates for Nonuniformly Parabolic Equations II

    Directory of Open Access Journals (Sweden)

    Lieberman Gary M

    2007-01-01

    Full Text Available We prove interior gradient estimates for a large class of parabolic equations in divergence form. Using some simple ideas, we prove these estimates for several types of equations that are not amenable to previous methods. In particular, we have no restrictions on the maximum eigenvalue of the coefficient matrix, and we obtain interior gradient estimates for the so-called false mean curvature equation.

  1. A simple criterion for determining the static friction force between nanowires and flat substrates using the most-bent-state method.

    Science.gov (United States)

    Hou, Lizhen; Wang, Shiliang; Huang, Han

    2015-04-24

    A simple criterion was developed to assess the appropriateness of the currently available models that estimate the static friction force between nanowires and substrates using the 'most-bent-state' method. Our experimental testing of the static friction force between Al2O3 nanowires and a Si substrate verified our theoretical analysis, as well as the establishment of the criterion. It was found that the models are valid only for bent nanowires in which the ratio of wire length to minimum curvature radius is no greater than 1. For cases in which this ratio is greater than 1, the static friction force was overestimated because the models neglect the effect of its tangential component.

  2. Study on Top-Down Estimation Method of Software Project Planning

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-guang; L(U) Ting-jie; ZHAO Yu-mei

    2006-01-01

    This paper studies a new software project planning method using actual project data in order to make software project plans more effective. From the perspective of systems theory, the new method regards a software project plan as an associative unit for study. In a top-down estimation of a software project, the Program Evaluation and Review Technique (PERT) and the analogy method are combined to estimate its size; effort estimates and specific schedules are then obtained according to the distribution of effort across phases. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods can lead to effective software project planning.
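
    For reference, the PERT component of such a combined estimate has a simple closed form, shown below with hypothetical size figures; the paper's analogy adjustments and phase-effort distributions are not reproduced.

      def pert_estimate(optimistic, most_likely, pessimistic):
          """Classical PERT expected value and standard deviation."""
          expected = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
          std_dev = (pessimistic - optimistic) / 6.0
          return expected, std_dev

      # Hypothetical size estimate in function points
      e, s = pert_estimate(optimistic=300, most_likely=420, pessimistic=600)
      print(f"expected size: {e:.0f} FP (sigma {s:.0f} FP)")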

  3. A simple headspace equilibration method for measuring dissolved methane

    Science.gov (United States)

    Magen, C; Lapham, L.L.; Pohlman, John W.; Marshall, Kristin N.; Bosman, S.; Casso, Michael; Chanton, J.P.

    2014-01-01

    Dissolved methane concentrations in the ocean are close to equilibrium with the atmosphere. Because methane is only sparingly soluble in seawater, measuring it without contamination is challenging for samples collected and processed in the presence of air. Several methods for analyzing dissolved methane are described in the literature, yet none has conducted a thorough assessment of the method yield, contamination issues during collection, transport and storage, and the effect of temperature changes and preservative. Previous extraction methods transfer methane from water to gas by either a "sparge and trap" or a "headspace equilibration" technique. The gas is then analyzed for methane by gas chromatography. Here, we revisit the headspace equilibration technique and describe a simple, inexpensive, and reliable method to measure methane in fresh and seawater, regardless of concentration. Within the range of concentrations typically found in surface seawaters (2-1000 nmol L-1), the yield of the method nears 100% of what is expected from solubility calculation following the addition of known amount of methane. In addition to being sensitive (detection limit of 0.1 ppmv, or 0.74 nmol L-1), this method requires less than 10 min per sample, and does not use highly toxic chemicals. It can be conducted with minimum materials and does not require the use of a gas chromatograph at the collection site. It can therefore be used in various remote working environments and conditions.

  4. Machine learning plus optical flow: a simple and sensitive method to detect cardioactive drugs

    Science.gov (United States)

    Lee, Eugene K.; Kurokawa, Yosuke K.; Tu, Robin; George, Steven C.; Khine, Michelle

    2015-07-01

    Current preclinical screening methods do not adequately detect cardiotoxicity. Using human induced pluripotent stem cell-derived cardiomyocytes (iPS-CMs), more physiologically relevant preclinical or patient-specific screening to detect potential cardiotoxic effects of drug candidates may be possible. However, one of the persistent challenges for developing a high-throughput drug screening platform using iPS-CMs is the need to develop a simple and reliable method to measure key electrophysiological and contractile parameters. To address this need, we have developed a platform that combines machine learning paired with brightfield optical flow as a simple and robust tool that can automate the detection of cardiomyocyte drug effects. Using three cardioactive drugs of different mechanisms, including those with primarily electrophysiological effects, we demonstrate the general applicability of this screening method to detect subtle changes in cardiomyocyte contraction. Requiring only brightfield images of cardiomyocyte contractions, we detect changes in cardiomyocyte contraction comparable to - and even superior to - fluorescence readouts. This automated method serves as a widely applicable screening tool to characterize the effects of drugs on cardiomyocyte function.
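
    A minimal sketch of the optical-flow half of such a pipeline: dense Farnebäck flow between consecutive brightfield frames, reduced to a mean motion-magnitude trace from which beat parameters could be extracted. The machine-learning stage and the specific parameter values here are assumptions, not taken from the paper.

      import numpy as np
      import cv2

      def contraction_signal(frames):
          """Mean optical-flow magnitude between consecutive grayscale frames.

          frames : sequence of 2-D uint8 images of the beating monolayer.
          Returns one value per frame transition; peaks correspond to the
          contraction and relaxation phases.
          """
          signal = []
          for prev, curr in zip(frames[:-1], frames[1:]):
              flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                                  0.5, 3, 15, 3, 5, 1.2, 0)
              magnitude = np.linalg.norm(flow, axis=2)
              signal.append(magnitude.mean())
          return np.array(signal)

      # Example with synthetic frames (in practice, read a brightfield video)
      rng = np.random.default_rng(0)
      frames = [rng.integers(0, 255, (128, 128), dtype=np.uint8) for _ in range(10)]
      print(contraction_signal(frames).shape)   # (9,)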

  5. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    Science.gov (United States)

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

    Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated 24-h excretion by multiplying the sodium-to-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that of 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increasing number of spot urine samples: 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
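
    A sketch of the 'simple mean method' as described: average the sodium-to-creatinine ratios of the available spot samples and multiply by predicted 24-h creatinine excretion. The prediction equation for 24-h creatinine is not given in the record, so it enters here simply as an input, and the numbers are hypothetical.

      def simple_mean_sodium(spot_na_mmol_l, spot_cr_mmol_l, predicted_24h_cr_mmol):
          """Estimated 24-h sodium excretion (mmol/day) from spot urine samples."""
          ratios = [na / cr for na, cr in zip(spot_na_mmol_l, spot_cr_mmol_l)]
          mean_ratio = sum(ratios) / len(ratios)
          return mean_ratio * predicted_24h_cr_mmol

      # Hypothetical three spot samples and a predicted creatinine excretion of 10 mmol/day
      print(f"{simple_mean_sodium([120, 150, 95], [8.0, 11.0, 6.5], 10.0):.0f} mmol/day")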

  6. Development of the simple evaluation method of the soil biomass by the ATP measurement

    Czech Academy of Sciences Publication Activity Database

    Urashima, Y.; Nakajima, M.; Kaneda, Satoshi; Murakami, T.

    2007-01-01

    Roč. 78, č. 2 (2007), s. 187-190 ISSN 0029-0610 Institutional research plan: CEZ:AV0Z60660521 Keywords : simple evaluation method * soil biomass * ATP measurement Subject RIV: EH - Ecology, Behaviour

  7. Development and Validation of Spectrophotometric Methods for Simultaneous Estimation of Valsartan and Hydrochlorothiazide in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Monika L. Jadhav

    2014-01-01

    Full Text Available Two UV-spectrophotometric methods have been developed and validated for simultaneous estimation of valsartan and hydrochlorothiazide in a tablet dosage form. The first method employed solving of simultaneous equations based on the measurement of absorbance at two wavelengths, 249.4 nm and 272.6 nm, the λmax values of valsartan and hydrochlorothiazide, respectively. The second method was the absorbance ratio method, which involves formation of the Q-absorbance equation at 258.4 nm (the isoabsorptive point) and at 272.6 nm (λmax of hydrochlorothiazide). The methods were found to be linear over the ranges of 5–30 µg/mL for valsartan and 4–24 μg/mL for hydrochlorothiazide, using 0.1 N NaOH as solvent. The mean percentage recovery was found to be 100.20% and 100.19% for the simultaneous equation method and 98.56% and 97.96% for the absorbance ratio method, for valsartan and hydrochlorothiazide, respectively, at three different levels of standard additions. The precision (intraday, interday) of the methods was within limits (RSD < 2%). It can be concluded that the two methods for simultaneous estimation of valsartan and hydrochlorothiazide in tablet dosage form are simple, rapid, accurate, precise and economical, and can be used successfully in the quality control of pharmaceutical formulations and other routine laboratory analysis.
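
    The simultaneous-equation step reduces to a 2×2 linear system, since the mixture absorbance at each wavelength is the sum of the two drugs' absorptivity × concentration contributions. The sketch below uses placeholder absorptivity values, not the calibrated values from this work.

      import numpy as np

      def two_component_concentrations(A1, A2, absorptivities):
          """Solve the simultaneous (Vierordt) equations for two drugs.

          A1, A2         : absorbances of the mixture at wavelengths 1 and 2
          absorptivities : 2x2 matrix [[ax1, ay1], [ax2, ay2]] of absorptivity
                           values of drug x and drug y at the two wavelengths
          Returns the concentrations (cx, cy) in the units used for calibration.
          """
          E = np.asarray(absorptivities, dtype=float)
          return np.linalg.solve(E, np.array([A1, A2], dtype=float))

      # Placeholder absorptivities (not the published values for VAL/HCTZ)
      E = [[0.045, 0.012],
           [0.010, 0.038]]
      cx, cy = two_component_concentrations(0.52, 0.41, E)
      print(f"valsartan ≈ {cx:.1f}, hydrochlorothiazide ≈ {cy:.1f} (µg/mL)")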

  8. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pä tzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  10. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  11. Plant-available soil water capacity: estimation methods and implications

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva

    2014-04-01

    Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in planning land use. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated with the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than by the other methods, and that even at the inflection point the estimates differed according to the model used. With the WP4-T psychrometer, the water content found for the permanent wilting point was significantly lower. We concluded that the estimation of the available water capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
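
    One of the field-capacity estimators mentioned above, the inflection point of the van Genuchten retention curve, has a closed form in the common Dexter-type formulation (on θ versus ln h): with m = 1 − 1/n, the effective saturation at the inflection is (1 + 1/m)^(−m). The sketch and the parameter values below are illustrative, and the paper's exact formulation may differ.

      def vg_theta(h, theta_r, theta_s, alpha, n):
          """van Genuchten water retention curve (alpha in 1/unit of h)."""
          m = 1.0 - 1.0 / n
          return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

      def inflection_point(theta_r, theta_s, alpha, n):
          """Suction and water content at the inflection point of theta(ln h)
          for the van Genuchten curve (Dexter-type formulation)."""
          m = 1.0 - 1.0 / n
          h_i = (1.0 / alpha) * (1.0 / m) ** (1.0 / n)
          theta_i = theta_r + (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-m)
          return h_i, theta_i

      # Hypothetical Oxisol-like parameters (alpha in 1/kPa)
      h_i, theta_i = inflection_point(theta_r=0.22, theta_s=0.58, alpha=0.5, n=1.6)
      print(f"inflection at h ≈ {h_i:.1f} kPa, theta ≈ {theta_i:.3f} m3/m3")
      print(f"theta at 10 kPa ≈ {vg_theta(10.0, 0.22, 0.58, 0.5, 1.6):.3f} m3/m3")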

  12. A simple and inexpensive method for genomic restriction mapping analysis

    International Nuclear Information System (INIS)

    Huang, C.H.; Lam, V.M.S.; Tam, J.W.O.

    1988-01-01

    The Southern blotting procedure for the transfer of DNA fragments from agarose gels to nitrocellulose membranes has revolutionized nucleic acid detection methods, and it forms the cornerstone of research in molecular biology. Basically, the method involves the denaturation of DNA fragments that have been separated on an agarose gel, the immobilization of the fragments by transfer to a nitrocellulose membrane, and the identification of the fragments of interest through hybridization to ³²P-labeled probes and autoradiography. While the method is sensitive and applicable to both genomic and cloned DNA, it suffers from the disadvantages of being time consuming and expensive, and fragments of greater than 15 kb are difficult to transfer. Moreover, although theoretically the nitrocellulose membrane can be washed and hybridized repeatedly using different probes, in practice, the membrane becomes brittle and difficult to handle after a few cycles. A direct hybridization method for pure DNA clones was developed in 1975 but has not been widely exploited. The authors report here a modification of their procedure as applied to genomic DNA. The method is simple, rapid, and inexpensive, and it does not involve transfer to nitrocellulose membranes

  13. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  14. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation

    OpenAIRE

    Pillai S; Singhvi I

    2008-01-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of two component drug mixture of itopride hydrochloride and rabeprazole sodium from combined capsule dosage form have been developed. First developed method involves formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as two wavelengths. Second method is based on two wavelength calculation, wavelengths selected for estimation of itopride hydro...

  15. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  16. A Simple Method for Measuring the Verticality of Small-Diameter Driven Wells

    DEFF Research Database (Denmark)

    Kjeldsen, Peter; Skov, Bent

    1994-01-01

    The presence of stones, solid waste, and other obstructions can deflect small-diameter driven wells during installation, leading to deviations of the well from its intended position. This could lead to erroneous results, especially for measurements of ground water levels by water level meters....... A simple method was developed to measure deviations from the intended positions of well screens and determine correction factors required for proper measurement of ground water levels in nonvertical wells. The method is based upon measurement of the hydrostatic pressure in the bottom of a water column...... ground water flow directions....

  17. A Nonmonotone Trust Region Method for Nonlinear Programming with Simple Bound Constraints

    International Nuclear Information System (INIS)

    Chen, Z.-W.; Han, J.-Y.; Xu, D.-C.

    2001-01-01

    In this paper we propose a nonmonotone trust region algorithm for optimization with simple bound constraints. Under mild conditions, we prove the global convergence of the algorithm. For the monotone case it is also proved that the correct active set can be identified in a finite number of iterations if the strict complementarity slackness condition holds, and so the proposed algorithm reduces finally to an unconstrained minimization method in a finite number of iterations, allowing a fast asymptotic rate of convergence. Numerical experiments show that the method is efficient

  18. Finite Element Method Application in Areal Rainfall Estimation Case Study; Mashhad Plain Basin

    Directory of Open Access Journals (Sweden)

    M. Irani

    2016-10-01

    7.08 software environment. The finite element method is a numerical procedure for obtaining solutions to many of the problems encountered in engineering analysis. First, it utilizes discrete elements to obtain the joint displacements and member forces of a structural framework and to estimate areal precipitation. Second, it uses continuum elements to obtain approximate solutions to heat transfer, fluid mechanics, and solid mechanics problems. Galerkin's method is used to develop the finite element equations for the field problems. It uses the same functions Ni(x) that were used in the approximating equations. This approach is the basis of the finite element method for problems involving first-derivative terms. It yields the same result as the variational method when applied to differential equations that are self-adjoint. Galerkin's method is relatively simple and eliminates bias by representing the relief with a suitable mathematical model and incorporating it into the integration. In this paper, two powerful techniques were introduced and applied in Galerkin's method: the use of interpolation functions to transform the shape of each element to a perfect square, and the use of Gaussian quadrature to calculate rainfall depth numerically. In this study, the Mashhad plain is divided into 40 quadrilateral elements. In each element, the rain gauges are situated at the nodes. The coordinates are given in UTM, where x and y are the horizontal coordinates and z is the vertical (altitude) coordinate. It was necessary at the outset to number the corner nodes in a set manner, and for the purposes of this paper an anticlockwise convention was adopted. Results and Discussion: This paper presented the estimation of mean precipitation (daily, monthly and annual) in the Mashhad plain by Galerkin's method, which was compared with the arithmetic mean, Thiessen, kriging and IDW methods. The values of Galerkin's method were computed with Matlab 7.08 software and those of Thiessen, kriging and IDW by
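
    A minimal sketch of the two numerical ingredients named above: bilinear interpolation functions that map an arbitrary quadrilateral element onto the reference square, and 2×2 Gaussian quadrature to integrate the interpolated rainfall depth over the element. Node ordering is anticlockwise, as in the paper; the element geometry and gauge values are illustrative.

      import numpy as np

      def element_rainfall_volume(xy, z):
          """Integrate bilinearly interpolated rainfall depth z over a quadrilateral.

          xy : (4, 2) corner coordinates, ordered anticlockwise
          z  : (4,) rainfall depths at the corners
          Returns the rainfall volume (depth * area) over the element.
          """
          g = 1.0 / np.sqrt(3.0)                       # 2x2 Gauss points on [-1, 1]
          gauss_pts = [(-g, -g), (g, -g), (g, g), (-g, g)]
          volume = 0.0
          for xi, eta in gauss_pts:                    # unit weights for the 2x2 rule
              N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                                   (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
              dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
              dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
              J = np.array([dN_dxi, dN_deta]) @ xy     # 2x2 Jacobian of the mapping
              volume += N @ z * abs(np.linalg.det(J))
          return volume

      # Illustrative element (km) with gauge depths (mm) at its four corners
      xy = np.array([[0.0, 0.0], [10.0, 0.0], [11.0, 9.0], [-1.0, 10.0]])
      z = np.array([12.0, 15.0, 9.0, 11.0])
      vol = element_rainfall_volume(xy, z)
      area = element_rainfall_volume(xy, np.ones(4))
      print(f"mean areal depth ≈ {vol / area:.2f} mm")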

  19. New method for estimating clustering of DNA lesions induced by physical/chemical mutagens using fluorescence anisotropy.

    Science.gov (United States)

    Akamatsu, Ken; Shikazono, Naoya; Saito, Takeshi

    2017-11-01

    We have developed a new method for estimating the localization of DNA damage, such as apurinic/apyrimidinic sites (APs), on DNA using fluorescence anisotropy. This method is aimed at characterizing clustered DNA damage produced by DNA-damaging agents such as ionizing radiation and genotoxic chemicals. A fluorescent probe with an aminooxy group (AlexaFluor488) was used to label APs. We prepared a pUC19 plasmid with APs by heating under acidic conditions as a model for damaged DNA, and subsequently labeled the APs. We found that the observed fluorescence anisotropy (r_obs) decreases as the average AP density (λ_AP: number of APs per base pair) increases, due to homo-FRET, and that the APs were randomly distributed. We applied this method to three DNA-damaging agents: ⁶⁰Co γ-rays, methyl methanesulfonate (MMS), and neocarzinostatin (NCS). We found that the r_obs-λ_AP relationships differed significantly between MMS and NCS. At low AP density (λ_AP < 0.001), the APs induced by MMS appeared not to be closely distributed, whereas those induced by NCS were remarkably clustered. In contrast, the AP clustering induced by ⁶⁰Co γ-rays was similar to, but potentially more likely to occur than, a random distribution. This simple method can be used to estimate the mutagenicity of ionizing radiation and genotoxic chemicals. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Improving evapotranspiration estimates in Mediterranean drylands

    DEFF Research Database (Denmark)

    Morillas, Laura; Leuning, Ray; Villagarcia, Luis

    2013-01-01

    An adaptation of a simple model for evapotranspiration (E) estimations in drylands based on remotely sensed leaf area index and the Penman-Monteith equation (PML model) (Leuning et al., 2008) is presented. Three methods for improving the consideration of soil evaporation influence in total evapo-...

  1. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies, and the physical properties needed to assess the viability of a potential biofuel are often not available; the only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years, most commonly for the estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups covered and their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
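
    For context, the classical linear group-contribution form that the neural-network model generalises is simply an intercept plus a sum of group counts times group contributions; the coefficients below are invented placeholders, not fitted values from this work.

      def group_contribution_estimate(group_counts, contributions, intercept):
          """Classical linear group-contribution estimate:
          property ≈ intercept + sum(count_g * contribution_g)."""
          return intercept + sum(group_counts.get(g, 0) * c for g, c in contributions.items())

      # Invented placeholder contributions for a cetane-number-like property
      contributions = {"CH3": 3.0, "CH2": 5.5, "ring_CH2": -2.0, "OH": -8.0}
      molecule = {"CH3": 2, "CH2": 6}          # e.g. an n-octane-like skeleton
      print(group_contribution_estimate(molecule, contributions, intercept=10.0))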

  2. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which also implies estimates of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this comes at the expense of added computational time compared to a single deterministic analysis, which yields one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive for estimating the response density parameters. Both are 2 of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed with funding from NASA Glenn Research Center (GRC). It can also be linked to other analysis programs; probabilistic fluid dynamics, fracture mechanics, and heat transfer are therefore only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
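
    A minimal sketch of an LHS-based reliability estimate on a toy limit-state function, using SciPy's Latin hypercube sampler; it illustrates the sampling idea only and has no connection to NESSUS itself. The distributions and the limit state are assumptions.

      import numpy as np
      from scipy.stats import qmc, norm

      # Toy limit state: failure when load exceeds resistance, g = R - L < 0
      def limit_state(R, L):
          return R - L

      sampler = qmc.LatinHypercube(d=2, seed=0)
      u = sampler.random(n=10_000)                          # stratified uniforms in [0, 1)^2
      R = norm.ppf(u[:, 0], loc=500.0, scale=40.0)          # resistance ~ N(500, 40)
      L = norm.ppf(u[:, 1], loc=350.0, scale=60.0)          # load ~ N(350, 60)
      g = limit_state(R, L)

      print(f"mean response {g.mean():.1f}, std {g.std():.1f}, "
            f"P(failure) ≈ {(g < 0).mean():.4f}")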

  3. A novel method for the angiographic estimation of the percentage of spleen volume embolized during partial splenic embolization

    Energy Technology Data Exchange (ETDEWEB)

    Ou, Ming-Ching; Chuang, Ming-Tsung [Department of Diagnostic Radiology, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China); Lin, Xi-Zhang [Department of Internal Medicine, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China); Tsai, Hong-Ming; Chen, Shu-Yuan [Department of Diagnostic Radiology, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China); Liu, Yi-Sheng, E-mail: taicheng100704@yahoo.com.tw [Department of Diagnostic Radiology, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China)

    2013-08-15

    Purpose: To evaluate the efficacy of estimating the volume of spleen embolized in partial splenic embolization (PSE) by measuring the diameters of the splenic artery and its branches. Materials and methods: A total of 43 liver cirrhosis patients (mean age, 62.19 ± 9.65 years) with thrombocytopenia were included. Among these, 24 patients underwent a follow-up CT scan, which showed a correlation between the angiographic estimate and the measured embolized splenic volume. Estimated splenic embolization volume was calculated by a method based on the diameters of the splenic artery and its branches. The diameters of each of the splenic arteries and branches were measured via 2D angiographic images. Embolization was performed with gelatin sponges. Patients underwent follow-up with serial measurement of blood counts and liver function tests. The actual volume of embolized spleen was determined by computed tomography (CT), measuring the volumes of embolized and non-embolized spleen two months after PSE. Results: PSE was performed without immediate major complications. The mean WBC count significantly increased from 3.81 ± 1.69 × 10{sup 3}/mm{sup 3} before PSE to 8.56 ± 3.14 × 10{sup 3}/mm{sup 3} at 1 week after PSE (P < 0.001). Mean platelet count significantly increased from 62.00 ± 22.62 × 10{sup 3}/mm{sup 3} before PSE to 95.40 ± 46.29 × 10{sup 3}/mm{sup 3} 1 week after PSE (P < 0.001). The measured embolization ratio was positively correlated with the estimated embolization ratio (Spearman's rho [ρ] = 0.687, P < 0.001). The mean difference between the actual embolization ratio and the estimated embolization ratio was 16.16 ± 8.96%. Conclusions: This approach provides a simple way to quantitatively estimate the embolized splenic volume, with a correlation between measured and estimated embolization ratios of Spearman's ρ = 0.687.

  4. W5″ Test: A simple method for measuring mean power output in the bench press exercise.

    Science.gov (United States)

    Tous-Fajardo, Julio; Moras, Gerard; Rodríguez-Jiménez, Sergio; Gonzalo-Skok, Oliver; Busquets, Albert; Mujika, Iñigo

    2016-11-01

    The aims of the present study were to assess the validity and reliability of a novel simple test [Five Seconds Power Test (W5″ Test)] for estimating the mean power output during the bench press exercise at different loads, and its sensitivity to detect training-induced changes. Thirty trained young men completed as many repetitions as possible in a time of ≈5 s at 25%, 45%, 65% and 85% of one-repetition maximum (1RM) in two test sessions separated by four days. The number of repetitions, linear displacement of the bar and time needed to complete the test were recorded by two independent testers, and a linear encoder was used as the criterion measure. For each load, the mean power output was calculated in the W5″ Test as mechanical work per time unit and compared with that obtained from the linear encoder. Subsequently, 20 additional subjects (10 training group vs. 10 control group) were assessed before and after completing a seven-week training programme designed to improve maximal power. Results showed that both assessment methods correlated highly in estimating mean power output at different loads (r range: 0.86-0.94; p bench press exercise in subjects who have previous resistance training experience.
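
    The core computation, mean power as mechanical work per unit time, can be sketched as below; whether only the concentric bar displacement is counted is an assumption here, and the numbers are illustrative rather than taken from the study.

```python
# Hedged arithmetic sketch of the W5" Test idea: mean power output computed as
# mechanical work per unit time. Counting only work against gravity over the
# concentric bar path is an assumption; values are illustrative.
g = 9.81          # m/s^2

def w5_mean_power(load_kg, bar_displacement_m, repetitions, elapsed_s):
    work_j = load_kg * g * bar_displacement_m * repetitions  # work against gravity
    return work_j / elapsed_s                                 # mean power in watts

# Example: 60 kg load, 0.45 m bar path, 5 repetitions completed in 5.2 s.
print(round(w5_mean_power(60, 0.45, 5, 5.2), 1), "W")
```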

  5. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in a three dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method is more dispersed than those obtained from the estimated angle based on Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
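
    A minimal sketch of the smoothing step, assuming a generic constant-velocity Kalman filter applied to one noisy marker coordinate before any rigid-body (PCT-style) fit; the motion model, tuning, and simulated data below are assumptions, not the authors' setup.

```python
# Illustrative constant-velocity Kalman filter used to smooth one noisy marker
# coordinate. Noise levels and the pendulum-like test signal are invented.
import numpy as np

def kalman_smooth_1d(z, dt=0.01, accel_std=60.0, meas_var=0.05):
    F = np.array([[1.0, dt], [0.0, 1.0]])              # position/velocity model
    H = np.array([[1.0, 0.0]])                         # only position is measured
    Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                 [dt**3 / 2, dt**2]])  # process noise
    R = np.array([[meas_var]])                         # measurement noise variance
    x = np.array([[z[0]], [0.0]])
    P = np.eye(2) * 10.0
    filtered = []
    for zk in z:
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        innov = zk - (H @ x)[0, 0]
        S = (H @ P @ H.T + R)[0, 0]
        K = P @ H.T / S                                # Kalman gain
        x = x + K * innov                              # update
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0, 0])
    return np.array(filtered)

t = np.arange(0.0, 4.0, 0.01)
true_angle = np.sin(2 * np.pi * 0.5 * t)                   # pendulum-like motion
measured = true_angle + 0.3 * np.sin(2 * np.pi * 8.0 * t)  # soft-tissue-like wobble
smoothed = kalman_smooth_1d(measured)
print("raw deviation     :", np.mean(np.abs(measured - true_angle)))
print("filtered deviation:", np.mean(np.abs(smoothed - true_angle)))
```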

  6. High precision estimation of inertial rotation via the extended Kalman filter

    Science.gov (United States)

    Liu, Lijun; Qi, Bo; Cheng, Shuming; Xi, Zairong

    2015-11-01

    Recent developments in technology have enabled atomic gyroscopes to become the most sensitive inertial sensors. Atomic spin gyroscopes essentially output an estimate of the inertial rotation rate to be measured. In this paper, we present a simple yet efficient estimation method, the extended Kalman filter (EKF), for the atomic spin gyroscope. Numerical results show that the EKF method is much more accurate than the steady-state estimation method, which is used in the most sensitive atomic gyroscopes at present. Specifically, the root-mean-squared errors obtained by the EKF method are at least 10³ times smaller than those obtained by the steady-state estimation method under the same response time.

  7. Fisher classifier and its probability of error estimation

    Science.gov (United States)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
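
    The general procedure, projecting onto Fisher's direction and estimating the probability of error with the leave-one-out method, can be sketched with off-the-shelf tools; the snippet below uses scikit-learn's linear discriminant analysis and synthetic data as stand-ins for the closed-form expressions derived in the paper.

```python
# Sketch of the idea: a Fisher-type linear classifier with its error
# probability estimated by leave-one-out cross-validation (synthetic data).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=80, n_features=5, n_informative=3,
                           n_classes=2, random_state=0)

clf = LinearDiscriminantAnalysis()      # Fisher-type linear classifier
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print("leave-one-out error estimate:", 1.0 - loo_acc)
```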

  8. Simulation-extrapolation method to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates, 1950-2003

    Energy Technology Data Exchange (ETDEWEB)

    Allodji, Rodrigue S.; Schwartz, Boris; Diallo, Ibrahima; Vathaire, Florent de [Gustave Roussy B2M, Radiation Epidemiology Group/CESP - Unit 1018 INSERM, Villejuif Cedex (France); Univ. Paris-Sud, Villejuif (France); Agbovon, Cesaire [Pierre and Vacances - Center Parcs Group, L' artois - Espace Pont de Flandre, Paris Cedex 19 (France); Laurier, Dominique [Institut de Radioprotection et de Surete Nucleaire (IRSN), DRPH, SRBE, Laboratoire d' epidemiologie, BP17, Fontenay-aux-Roses Cedex (France)

    2015-08-15

    Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that using the SIMEX method, the ERR/Gy is increased by about 29 % for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR 10{sup -4} person-years at 1 Gy (the linear term) is decreased by about 8 %, while the corrected quadratic term (EAR 10{sup -4} person-years/Gy{sup 2}) is increased by about 65 % for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will help to improve the risk estimates derived from LSS data and to make the development of radiation protection standards more reliable. (orig.)
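
    A generic SIMEX sketch (illustrating the technique itself, not the LSS dosimetry analysis): extra measurement error is simulated at increasing variance levels, the naive estimator is refit at each level, and the trend is extrapolated back to the no-error case.

```python
# Generic SIMEX sketch on a toy linear errors-in-variables problem. The data,
# error variance, and extrapolation choice are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, sigma_u = 2000, 0.5, 0.3
x_true = rng.normal(1.0, 0.5, n)
w = x_true + rng.normal(0.0, sigma_u, n)          # error-prone exposure surrogate
y = beta_true * x_true + rng.normal(0.0, 0.2, n)  # outcome (simple linear model)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = []
for lam in lambdas:
    # Average the naive slope over repeated simulated "remeasurements".
    b = [np.polyfit(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
         for _ in range(50)]
    betas.append(np.mean(b))

# Quadratic extrapolation of beta(lambda) back to lambda = -1 (no error).
coef = np.polyfit(lambdas, betas, 2)
print("naive estimate :", betas[0])
print("SIMEX estimate :", np.polyval(coef, -1.0))
```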

  9. Novel method for estimation of the indoor-to-outdoor airborne radioactivity ratio following the Fukushima Daiichi Nuclear Power Plant accident

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Yanliang, E-mail: hytyl@163.com [College of Physics and Electronic Engineering, Hengyang Normal University, Hengyang, Hunan Province (China); Ishikawa, Tetsuo [Fukushima Medical University, 1 Hikariga-oka, Fukushima (Japan); Janik, Miroslaw [Regulatory Science Research Program, National Institute of Radiological Sciences, Chiba (Japan); Tokonami, Shinji [Department of Radiation Physics, Institute of Radiation Emergency Medicine, Hirosaki University, Hirosaki, Aomori (Japan); Hosoda, Masahiro [Hirosaki University Graduate School of Health Science, Hirosaki, Aomori (Japan); Sorimachi, Atsuyuki [Fukushima Medical University, 1 Hikariga-oka, Fukushima (Japan); Kearfott, Kimberlee [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States)

    2015-12-01

    The accident at the Fukushima Daiichi Nuclear Power Plant (FDNPP) in Japan resulted in significant releases of fission products. While substantial data exist concerning outdoor air radioactivity following the accident, the resulting indoor radioactivity remains pure speculation without a proper method for estimating the ratio of the indoor to outdoor airborne radioactivity, termed the airborne sheltering factor (ASF). Lacking a meaningful value of the ASF, it is difficult to assess the inhalation doses to residents and evacuees even when outdoor radionuclide concentrations are available. A simple model was developed, and the key parameters needed to estimate the ASF were obtained through data fitting of selected indoor and outdoor airborne radioactivity measurement data obtained following the accident at a single location. Using the new model with values of the air exchange rate, interior air volume, and the inner surface area of the dwellings, the ASF can be estimated for a variety of dwelling types. Assessment of the inhalation dose to individuals readily follows from the value of the ASF, the person's indoor occupancy factor, and the measured outdoor radioactivity concentration. - Highlights: • The actual ASF of a dwelling is important for estimating the inhalation dose. • A simple model is developed to describe the ASF. • The key parameters of the ASF are obtained from NIRS measurements. • The ASF of any dwelling can be obtained with our model and the related parameters.
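
    One plausible single-compartment form consistent with the parameters listed in the abstract (air exchange rate, interior volume, inner surface area) is sketched below; it is an assumed box model for illustration, not necessarily the authors' exact formulation.

```python
# Assumed single-compartment box model: indoor air gains activity by
# infiltration and loses it by ventilation and deposition on inner surfaces.
# At steady state the indoor/outdoor ratio (ASF) follows from the loss terms.
def airborne_sheltering_factor(air_exchange_per_h, deposition_velocity_m_per_h,
                               inner_surface_m2, interior_volume_m3):
    loss_deposition = deposition_velocity_m_per_h * inner_surface_m2 / interior_volume_m3
    # Steady-state ratio of indoor to outdoor airborne concentration.
    return air_exchange_per_h / (air_exchange_per_h + loss_deposition)

# Example: 1.0 air changes per hour, assumed deposition velocity 0.1 m/h,
# 250 m2 of inner surfaces, 125 m3 of interior air (all hypothetical values).
print(round(airborne_sheltering_factor(1.0, 0.1, 250.0, 125.0), 2))
```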

  10. A new and simple gravimetric method for determination of uranium

    International Nuclear Information System (INIS)

    Saxena, A.K.

    1994-01-01

    A new and simple gravimetric method for determining uranium has been described. Using a known quantity of uranyl nitrate as the test solution, an alcoholic solution of 2-amino-2-methyl 1:3 propanediol (AMP) was added slowly. A yellow precipitate was obtained which was filtered through ashless filter paper, washed with alcohol, dried and ignited at 800 °C for 4 h. This gave a black powder which was shown by X-ray diffraction to be U3O8. The percentage error was found to be in the range -0.09 to +0.89%. (author). 8 refs., 1 tab

  11. A practical method of estimating stature of bedridden female nursing home patients.

    Science.gov (United States)

    Muncie, H L; Sobal, J; Hoopes, J M; Tenney, J H; Warren, J W

    1987-04-01

    Accurate measurement of stature is important for the determination of several nutritional indices as well as body surface area (BSA) for the normalization of creatinine clearances. Direct standing measurement of stature of bedridden elderly nursing home patients is impossible, and stature as recorded in the chart may not be valid. An accurate stature obtained by summing five segmental measurements was compared to the stature recorded in the patient's chart and calculated estimates of stature from measurement of a long bone (humerus, tibia, knee height). Estimation of stature from measurement of knee height was highly correlated (r = 0.93) to the segmental measurement of stature while estimates from other long-bone measurements were less highly correlated (r = 0.71 to 0.81). Recorded chart stature was poorly correlated (r = 0.37). Measurement of knee height provides a simple, quick, and accurate means of estimating stature for bedridden females in nursing homes.
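
    The regression idea can be sketched as a simple least-squares fit of stature against knee height on paired data; the data below are invented for illustration and the resulting coefficients are not the study's published equation.

```python
# Sketch: fit stature (from summed segmental measurements) against knee height,
# then use the fitted line to estimate stature for a new bedridden patient.
# The paired measurements below are hypothetical.
import numpy as np

knee_height_cm = np.array([45.2, 47.8, 50.1, 48.6, 46.3, 49.4, 51.0, 44.7])
stature_cm     = np.array([150.1, 155.3, 160.8, 157.0, 151.9, 158.8, 162.5, 149.0])

slope, intercept = np.polyfit(knee_height_cm, stature_cm, 1)
r = np.corrcoef(knee_height_cm, stature_cm)[0, 1]
print(f"stature ≈ {intercept:.1f} + {slope:.2f} × knee height  (r = {r:.2f})")

# Estimate for a new patient with a 47.0 cm knee height.
print(round(intercept + slope * 47.0, 1), "cm")
```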

  12. Simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation anisotropy

    International Nuclear Information System (INIS)

    Wang, Y.

    1996-01-01

    We present two simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation (CBR) anisotropy in inflationary models; one method uses a time-dependent transfer function, the other uses an approximate gravity-wave mode function which is a simple combination of the lowest order spherical Bessel functions. We compare the CBR anisotropy tensor multipole spectrum computed using our methods with the previous result of the highly accurate numerical method, the "Boltzmann" method. Our time-dependent transfer function is more accurate than the time-independent transfer function found by Turner, White, and Lindsey; however, we find that the transfer function method is only good for l ≲ 120. Using our approximate gravity-wave mode function, we obtain much better accuracy; the tensor multipole spectrum we find differs by less than 2% for l ≲ 50, less than 10% for l ≲ 120, and less than 20% for l ≤ 300 from the "Boltzmann" result. Our approximate graviton mode function should be quite useful in studying tensor perturbations from inflationary models. copyright 1996 The American Physical Society

  13. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng

    2015-01-01

    Full Text Available This paper proposes a method that utilizes AR model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal form of radar is complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical one, such as the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal, and the frequency shift keying with bi-phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, this paper adopts a method that fits an AR model to the complex signal and then estimates the power spectrum with the Burg algorithm. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
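
    A hedged sketch of the pipeline: fit an AR model with a textbook Burg recursion, form its power spectrum, and count spectral peaks as carrier frequencies. The AR order, FFT length, and peak threshold are assumptions, and a real-valued test signal is used for simplicity.

```python
# Textbook Burg recursion plus peak counting on the resulting AR spectrum.
# Order, FFT length, and threshold are illustrative choices.
import numpy as np

def burg_ar(x, order):
    """Burg recursion: returns AR coefficients [1, a1, ...] and noise power."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)             # zero-order prediction error power
    f, b = x[1:].copy(), x[:-1].copy()    # forward / backward prediction errors
    for m in range(order):
        k = -2.0 * np.dot(b, f) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff.
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= (1.0 - k * k)
        if m < order - 1:
            # tuple assignment: the right-hand side still uses the old f and b
            f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, e

def count_carriers(x, order=20, nfft=4096, rel_threshold=0.1):
    a, e = burg_ar(x, order)
    psd = e / np.abs(np.fft.rfft(a, nfft)) ** 2          # AR power spectrum
    interior = psd[1:-1]
    peaks = (interior > psd[:-2]) & (interior > psd[2:]) \
            & (interior > rel_threshold * psd.max())     # local maxima above threshold
    return int(peaks.sum())

# Synthetic pulse with two carrier frequencies (0.12 and 0.21 cycles/sample) plus noise.
rng = np.random.default_rng(0)
t = np.arange(4000)
sig = np.cos(2 * np.pi * 0.12 * t) + np.cos(2 * np.pi * 0.21 * t)
sig += 0.5 * rng.standard_normal(t.size)
print(count_carriers(sig))   # typically 2
```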

  14. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  15. A simple method to design non-collision relative orbits for close spacecraft formation flying

    Science.gov (United States)

    Jiang, Wei; Li, JunFeng; Jiang, FangHua; Bernelli-Zazzera, Franco

    2018-05-01

    A set of linearized relative motion equations of spacecraft flying on unperturbed elliptical orbits are specialized for particular cases, where the leader orbit is circular or equatorial. Based on these extended equations, we are able to analyze the relative motion regulation between a pair of spacecraft flying on arbitrary unperturbed orbits with the same semi-major axis in close formation. Given the initial orbital elements of the leader, this paper presents a simple way to design initial relative orbital elements of close spacecraft with the same semi-major axis, thus preventing collision under non-perturbed conditions. Considering the mean influence of J2 perturbation, namely secular J2 perturbation, we derive the mean derivatives of orbital element differences, and then expand them to first order. Thus the first order expansion of orbital element differences can be added to the relative motion equations for further analysis. For a pair of spacecraft that will never collide under non-perturbed situations, we present a simple method to determine whether a collision will occur when J2 perturbation is considered. Examples are given to prove the validity of the extended relative motion equations and to illustrate how the methods presented can be used. The simple method for designing initial relative orbital elements proposed here could be helpful to the preliminary design of the relative orbital elements between spacecraft in a close formation, when collision avoidance is necessary.

  16. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions, substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  17. Consumptive use of upland rice as estimated by different methods

    International Nuclear Information System (INIS)

    Chhabda, P.R.; Varade, S.B.

    1985-01-01

    The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif) as estimated by modified Penman, radiation, pan-evaporation and Hargreaves methods showed a variation from computed consumptive use estimated by the gravimetric method. The variability increased with an increase in the irrigation interval, and decreased with an increase in the level of N applied. The average variability was less in pan-evaporation method, which could reliably be used for estimating water requirement of upland rice if percolation losses are considered

  18. Simple and effective method for nuclear tellurium isomers separation from antimony cyclotron targets

    International Nuclear Information System (INIS)

    Bondarevskij, S.I.; Eremin, V.V.

    1999-01-01

    Simple and effective method of generation of tellurium nuclear isomers from irradiated on cyclotron metallic antimony is suggested. Basically this method consists in consideration of the big difference in volatilities of metallic forms of antimony, tin and tellurium. Heating of the tin-antimony alloy at 1200 K permits to separate about 90 % of produced quantity of 121m Te and 123m Te (in this case impurity of antimony radionuclides is not more than 1 % on activity) [ru

  19. A new method to study simple shear processing of wheat gluten-starch mixtures

    NARCIS (Netherlands)

    Peighambardoust, S.H.; Goot, A.J. van der; Hamer, R.J.; Boom, R.M.

    2004-01-01

    This article introduces a new method that uses a shearing device to study the effect of simple shear on the overall properties of pasta-like products made from commercial wheat gluten-starch (GS) blends. The shear-processed GS samples had a lower cooking loss (CL) and a higher swelling index (SI)

  20. A simple method for validation and verification of pipettes mounted on automated liquid handlers

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Anders Johannes; Frøslev, Tobias G

    2011-01-01

    We have implemented a simple, inexpensive, and fast procedure for validation and verification of the performance of pipettes mounted on automated liquid handlers (ALHs) as necessary for laboratories accredited under ISO 17025. A six- or seven-step serial dilution of OrangeG was prepared in quadruplicate ... are freely available. In conclusion, we have set up a simple, inexpensive, and fast solution for the continuous validation of ALHs used for accredited work according to the ISO 17025 standard. The method is easy to use for aqueous solutions but requires a spectrophotometer that can read microtiter plates.

  1. Methods for estimating low-flow statistics for Massachusetts streams

    Science.gov (United States)

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
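
    Two of the estimation tools described above can be shown in minimal form: a MOVE.1-type line fitted on log-transformed concurrent flows, and the drainage-area ratio scaling. The flows, drainage areas, and statistic values below are illustrative, not values from the report.

```python
# MOVE.1 transfers a flow statistic from an index streamgage to a partial-record
# station while maintaining variance; the drainage-area ratio method scales a
# statistic by basin area. All numbers below are hypothetical.
import numpy as np

def move1(index_flows, partial_record_flows, index_statistic):
    """MOVE.1 line fitted on log-transformed concurrent flows."""
    x = np.log10(index_flows)
    y = np.log10(partial_record_flows)
    slope = y.std(ddof=1) / x.std(ddof=1)
    return 10 ** (y.mean() + slope * (np.log10(index_statistic) - x.mean()))

def drainage_area_ratio(index_statistic, area_ungaged, area_index):
    return index_statistic * (area_ungaged / area_index)

# Concurrent low-flow measurements (m3/s) at the index gage and partial-record site.
qx = np.array([1.2, 0.8, 2.5, 0.6, 1.9, 0.9])
qy = np.array([0.30, 0.18, 0.70, 0.12, 0.55, 0.21])
print("statistic at partial-record site ≈", round(move1(qx, qy, 0.5), 3))
print("statistic at ungaged site        ≈", round(drainage_area_ratio(0.5, 38.0, 52.0), 3))
```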

  2. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    Science.gov (United States)

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. The problem of the second wind turbine – a note on a common but flawed wind power estimation method

    Directory of Open Access Journals (Sweden)

    A. Kleidon

    2012-06-01

    Full Text Available Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than

  4. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampling signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. Estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase average. Theoretical analysis shows that the proposed method suppresses the end effects of Hilbert transform effectively. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and has better performance of phase difference estimation than the correlation, Hilbert transform, and data extension-based correlation methods, which contribute to improving the measurement precision of the Coriolis mass flowmeter. (paper)
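
    A minimal sketch of the Hilbert-transform core of such an approach, assuming synthetic sinusoids with a non-integer number of periods; the data-extension and weighted-averaging steps described in the paper are omitted here.

```python
# Analytic signals via the Hilbert transform, then a correlation-based average
# phase difference. Frequencies, noise level, and phase are illustrative.
import numpy as np
from scipy.signal import hilbert

fs, f0, true_phase = 1000.0, 50.3, np.deg2rad(30.0)   # non-integer periods in the record
t = np.arange(0, 0.1, 1.0 / fs)
s1 = np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
s2 = np.sin(2 * np.pi * f0 * t - true_phase) + 0.05 * np.random.default_rng(2).standard_normal(t.size)

z1, z2 = hilbert(s1), hilbert(s2)
# Zero-lag cross-correlation of the analytic signals; its angle is an average
# phase difference, largely insensitive to amplitude differences and noise.
phase_diff = np.angle(np.sum(z1 * np.conj(z2)))
print(np.rad2deg(phase_diff))   # close to 30 degrees
```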

  5. A Simple Model of Global Aerosol Indirect Effects

    Science.gov (United States)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  6. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environment testing, the estimation of random vibration signals (RVS is an important technique for the airborne platform safety and reliability. However, the available methods including extreme value envelope method (EVEM, statistical tolerances method (STM and improved statistical tolerance method (ISTM require large samples and typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. Gray bootstrap method (GBM is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and estimated reliability. In addition, GBM is applied to estimating the single flight testing of certain aircraft. At last, in order to evaluate the estimated performance, GBM is compared with bootstrap method (BM and gray method (GM in testing analysis. The result shows that GBM has superiority for estimating dynamic signals with small samples and estimated reliability is proved to be 100% at the given confidence level.

  7. Development of spectrophotometric fingerprinting method for ...

    African Journals Online (AJOL)

    Selective and efficient analytical methods are required not only for quality assurance but also for authentication of herbal formulations. A simple, rapid and validated fingerprint method has been developed for estimation of piperine in 'Talisadi churna', a well-known herbal formulation in India. The estimation was carried out in two ...

  8. A simple, fast and low-cost turn-on fluorescence method for dopamine detection using in situ reaction

    International Nuclear Information System (INIS)

    Zhang, Xiulan; Zhu, Yonggang; Li, Xie; Guo, Xuhong; Zhang, Bo; Jia, Xin

    2016-01-01

    A simple, fast and low-cost method for dopamine (DA) detection based on turn-on fluorescence using resorcinol is developed. The rapid reaction between resorcinol and DA allows the detection to be performed within 5 min, and the reaction product (azamonardine) with high quantum yield generates strong fluorescence signal for sensitive optical detection. The detection exhibits a high sensitivity to DA with a wide linear range of 10 nM–20 μM and the limit of detection is estimated to be 1.8 nM (S/N = 3). This approach has been successfully applied to determine DA concentrations in human urine samples with satisfactory quantitative recovery of 97.84%–103.50%, which shows great potential in clinical diagnosis. - Highlights: • A turn-on fluorescence technique is developed for dopamine detection by using one-step selective reaction between resorcinol and dopamine. • The limit of detection is 1.8 nM (S/N = 3). • This detection could be completed within 5 min. • The method has been demonstrated to successfully detect dopamine in human urine samples with high recovery ratio of 97.84%–103.50%.

  9. A simple, fast and low-cost turn-on fluorescence method for dopamine detection using in situ reaction

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiulan [School of Chemistry and Chemical Engineering/Key Laboratory for Green Processing of Chemical Engineering of Xinjiang Bingtuan, Key Laboratory of Materials-Oriented Chemical Engineering of Xinjiang Uygur Autonomous Region, Engineering Research Center of Materials-Oriented Chemical Engineering of Xinjiang Bingtuan, Shihezi University, Shihezi, 832003 (China); Zhu, Yonggang [Microfluidics and Fluid Dynamics Laboratory, CSIRO Manufacturing, Private Bag 10, Clayton South, Victoria, 3168 (Australia); Li, Xie [School of Chemistry and Chemical Engineering/Key Laboratory for Green Processing of Chemical Engineering of Xinjiang Bingtuan, Key Laboratory of Materials-Oriented Chemical Engineering of Xinjiang Uygur Autonomous Region, Engineering Research Center of Materials-Oriented Chemical Engineering of Xinjiang Bingtuan, Shihezi University, Shihezi, 832003 (China); Guo, Xuhong [School of Chemistry and Chemical Engineering/Key Laboratory for Green Processing of Chemical Engineering of Xinjiang Bingtuan, Key Laboratory of Materials-Oriented Chemical Engineering of Xinjiang Uygur Autonomous Region, Engineering Research Center of Materials-Oriented Chemical Engineering of Xinjiang Bingtuan, Shihezi University, Shihezi, 832003 (China); State Key Laboratory of Chemical Engineering, East China University of Science and Technology, Shanghai, 200237 (China); Zhang, Bo [Key Laboratory of Xinjiang Phytomedicine Resources of Ministry of Education, School of Pharmacy, Shihezi University, Shihezi, 832000 (China); Jia, Xin, E-mail: jiaxin@shzu.edu.cn [School of Chemistry and Chemical Engineering/Key Laboratory for Green Processing of Chemical Engineering of Xinjiang Bingtuan, Key Laboratory of Materials-Oriented Chemical Engineering of Xinjiang Uygur Autonomous Region, Engineering Research Center of Materials-Oriented Chemical Engineering of Xinjiang Bingtuan, Shihezi University, Shihezi, 832003 (China); and others

    2016-11-09

    A simple, fast and low-cost method for dopamine (DA) detection based on turn-on fluorescence using resorcinol is developed. The rapid reaction between resorcinol and DA allows the detection to be performed within 5 min, and the reaction product (azamonardine) with high quantum yield generates strong fluorescence signal for sensitive optical detection. The detection exhibits a high sensitivity to DA with a wide linear range of 10 nM–20 μM and the limit of detection is estimated to be 1.8 nM (S/N = 3). This approach has been successfully applied to determine DA concentrations in human urine samples with satisfactory quantitative recovery of 97.84%–103.50%, which shows great potential in clinical diagnosis. - Highlights: • A turn-on fluorescence technique is developed for dopamine detection by using one-step selective reaction between resorcinol and dopamine. • The limit of detection is 1.8 nM (S/N = 3). • This detection could be completed within 5 min. • The method has been demonstrated to successfully detect dopamine in human urine samples with high recovery ratio of 97.84%–103.50%.

  10. Determination of Some Cephalosporins in Pharmaceutical Formulations by a Simple and Sensitive Spectrofluorimetric Method

    Directory of Open Access Journals (Sweden)

    Ali Abdollahi, Ahad Bavili-Tabrizi

    2016-03-01

    Full Text Available Background: Cephalosporins are among the safest and most effective broad-spectrum bactericidal antimicrobial agents prescribed by clinicians as antibiotics. Thus, the development of simple, sensitive and rapid analytical methods for their determination is attractive and desirable. Methods: A simple, rapid and sensitive spectrofluorimetric method was developed for the determination of cefixime, cefalexin and ceftriaxone in pharmaceutical formulations. The proposed method is based on the oxidation of these cephalosporins with cerium (IV) to produce cerium (III), whose fluorescence was monitored at 356 ± 3 nm after excitation at 254 ± 3 nm. Results: The variables affecting the oxidation of each cephalosporin with cerium (IV) were studied and optimized. Under the experimental conditions used, the calibration graphs were linear over the range 0.1-4 µg/mL. The limit of detection and limit of quantification were in the range 0.031-0.054 and 0.102-0.172 µg/mL, respectively. Intra- and inter-day assay precisions, expressed as the relative standard deviation (RSD), were lower than 5.6 and 6.8%, respectively. Conclusion: The proposed method was applied to the determination of the studied cephalosporins in pharmaceutical formulations with good recoveries in the range 91-110%.

  11. A method for the estimation of the enthalpy of formation of mixed oxides in Al2O3-Ln2O3 systems

    International Nuclear Information System (INIS)

    Vonka, P.; Leitner, J.

    2009-01-01

    A new method is proposed for the estimation of the enthalpy of formation (ΔoxH) of various Al2O3-Ln2O3 mixed oxides from the constituent binary oxides. Our method is based on Pauling's concept of electronegativity and, in particular, on the relation between the enthalpy of formation of a binary oxide and the difference between the electronegativities of the oxide-forming element and oxygen. This relation is extended to mixed oxides with a simple formula given for the calculation of ΔoxH. The parameters of this equation were fitted using published experimental values of ΔoxH derived from high-temperature oxide melt solution calorimetry. Using our proposed method, we obtained a standard deviation (σ) of 4.87 kJ mol⁻¹ for this data set. Taking into account regularities within the lanthanide series, we then estimated the ΔoxH values for Al2O3-Ln2O3 mixed oxides. The values estimated using our method were compared with those obtained by Aronson's and Zhuang's empirical methods, both of which give significantly poorer results. - Graphical abstract: Enthalpy of formation of Ln-Al-O oxides from the constituent binary ones.

  12. Inversely estimating the vertical profile of the soil CO2 production rate in a deciduous broadleaf forest using a particle filtering method.

    Science.gov (United States)

    Sakurai, Gen; Yonemura, Seiichiro; Kishimoto-Mo, Ayaka W; Murayama, Shohei; Ohtsuka, Toshiyuki; Yokozawa, Masayuki

    2015-01-01

    Carbon dioxide (CO2) efflux from the soil surface, which is a major source of CO2 from terrestrial ecosystems, represents the total CO2 production at all soil depths. Although many studies have estimated the vertical profile of the CO2 production rate, one of the difficulties in estimating the vertical profile is measuring diffusion coefficients of CO2 at all soil depths in a nondestructive manner. In this study, we estimated the temporal variation in the vertical profile of the CO2 production rate using a data assimilation method, the particle filtering method, in which the diffusion coefficients of CO2 were simultaneously estimated. The CO2 concentrations at several soil depths and CO2 efflux from the soil surface (only during the snow-free period) were measured at two points in a broadleaf forest in Japan, and the data were assimilated into a simple model including a diffusion equation. We found that there were large variations in the pattern of the vertical profile of the CO2 production rate between experiment sites: the peak CO2 production rate was at soil depths around 10 cm during the snow-free period at one site, but the peak was at the soil surface at the other site. Using this method to estimate the CO2 production rate during snow-cover periods allowed us to estimate CO2 efflux during that period as well. We estimated that the CO2 efflux during the snow-cover period (about half the year) accounted for around 13% of the annual CO2 efflux at this site. Although the method proposed in this study does not ensure the validity of the estimated diffusion coefficients and CO2 production rates, the method enables us to more closely approach the "actual" values by decreasing the variance of the posterior distribution of the values.
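
    A generic bootstrap particle filter sketch, reduced to a single scalar "production rate" tracked through noisy efflux-like observations; the paper's depth-resolved diffusion model and simultaneous estimation of diffusion coefficients are not reproduced.

```python
# Bootstrap particle filter: random-walk prediction, Gaussian likelihood
# weighting, multinomial resampling. State, noise levels, and data are toy
# stand-ins for the CO2 production problem.
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 100, 500
true_rate = 2.0 + np.cumsum(rng.normal(0, 0.05, T))     # slowly drifting truth
obs = true_rate + rng.normal(0, 0.3, T)                  # noisy efflux observations

particles = rng.normal(2.0, 1.0, n_particles)
estimates = []
for y in obs:
    particles += rng.normal(0, 0.1, n_particles)         # random-walk prediction
    w = np.exp(-0.5 * ((y - particles) / 0.3) ** 2)      # Gaussian likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))              # posterior mean
    idx = rng.choice(n_particles, n_particles, p=w)      # multinomial resampling
    particles = particles[idx]

print("mean absolute error:", np.mean(np.abs(np.array(estimates) - true_rate)))
```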

  13. A simple method of shower localization and identification in laterally segmented calorimeters

    International Nuclear Information System (INIS)

    Awes, T.C.; Obenshain, F.E.; Plasil, F.; Saini, S.; Young, G.R.; Sorensen, S.P.

    1992-01-01

    A method is proposed to calculate the first and second moments of the spatial distribution of the energy of electromagnetic and hadronic showers measured in laterally segmented calorimeters. The technique uses a logarithmic weighting of the energy fraction observed in the individual detector cells. It is fast and simple, requiring no fitting or complicated corrections for the position or angle of incidence. The method is demonstrated with GEANT simulations of a BGO detector array. The position resolution results and the e/π separation results are found to be equal or superior to those obtained with more complicated techniques. (orig.)
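
    The logarithmic weighting can be written compactly: each cell gets a weight w_i = max(0, w0 + ln(E_i/E_total)), and the shower position is the weighted mean of the cell centres. The cutoff parameter w0 and the toy energy deposits below are illustrative choices.

```python
# Logarithmic energy weighting for shower position in a segmented calorimeter.
# The cutoff parameter w0 and the 1D cell layout are illustrative.
import numpy as np

def shower_position(cell_positions, cell_energies, w0=4.0):
    e = np.asarray(cell_energies, dtype=float)
    x = np.asarray(cell_positions, dtype=float)
    w = np.maximum(0.0, w0 + np.log(e / e.sum()))   # small deposits get zero weight
    return np.sum(w * x) / np.sum(w)

# 1D strip of 5 cells (centres in cm) with a shower peaked near the second cell.
centres  = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
energies = np.array([0.12, 0.60, 0.22, 0.05, 0.01])
print(round(shower_position(centres, energies), 2))
```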

  14. Comparison of methods for estimating premorbid intelligence

    OpenAIRE

    Bright, Peter; van der Linde, Ian

    2018-01-01

    To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...

  15. A simple method for estimating potential relative radiation (PRR) for landscape-vegetation analysis.

    Science.gov (United States)

    Kenneth B. Jr. Pierce; Todd Lookingbill; Dean. Urban

    2005-01-01

    Radiation is one of the primary influences on vegetation composition and spatial pattern. Topographic orientation is often used as a proxy for relative radiation load due to its effects on evaporative demand and local temperature. Common methods for incorporating this information (i.e., site measures of slope and aspect) fail to include daily or annual changes in solar...

  16. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  17. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi' an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  18. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.

  19. A simple method for deriving functional MSCs and applied for osteogenesis in 3D scaffolds

    DEFF Research Database (Denmark)

    Zou, Lijin; Luo, Yonglun; Chen, Muwan

    2013-01-01

    We describe a simple method for bone engineering using biodegradable scaffolds with mesenchymal stem cells derived from human induced-pluripotent stem cells (hiPS-MSCs). The hiPS-MSCs expressed mesenchymal markers (CD90, CD73, and CD105), possessed multipotency characterized by tri-lineage differentiation (osteogenic, adipogenic, and chondrogenic), and lost pluripotency - as seen with the loss of markers OCT3/4 and TRA-1-81 - and tumorigenicity. However, these iPS-MSCs are still positive for the marker NANOG. We further explored the osteogenic potential of the hiPS-MSCs in synthetic polymer ... Our results suggest the iPS-MSCs derived by this simple method retain full osteogenic function and provide a new solution towards personalized orthopedic therapy in the future.

  20. A simple method for identification of irradiated spices

    International Nuclear Information System (INIS)

    Behere, A.; Desai, S.R.P.; Nair, P.M.; Rao, S.M.D.

    1992-01-01

    Thermoluminescence (TL) properties of curry powder, a salt-containing spice mixture, and three different ground spices, viz, chilli, turmeric and pepper, were compared with the TL of table salt. The spices other than curry powder did not exhibit characteristic TL in the absence of salt. Therefore studies were initiated to develop a simple and reliable method using common salt for distinguishing irradiated spices (10 kGy) from unirradiated ones under normal conditions of storage. Common salt exhibited a characteristic TL glow at 170 °C. However, when present in curry powder, the TL glow of salt showed a shift to 208 °C. It was further observed that upon storage up to 6 months, the TL of irradiated curry powder retained about 10% of the original intensity and still could be distinguished from the untreated samples. From our results it is evident that common salt could be used as an indicator either internally or externally in small sachets for incorporating into prepacked spices. (author)