WorldWideScience

Sample records for length estimator technique

  1. The PVC technique: a method to estimate the dissipation length scale in turbulent flows

    Science.gov (United States)

    Ho, Chih-Ming; Zohar, Yitshak

    1997-12-01

    A time-averaged length scale can be defined by a pair of successive turbulent-velocity derivatives, i.e. (d^n u(x)/dx^n)′ / (d^(n+1) u(x)/dx^(n+1))′. The length scale associated with the zeroth- and first-order derivatives, u′/u′_x, is the Taylor microscale. In isotropic turbulence, this scale is the average length between zero crossings of the velocity signal. The average length between zero crossings of the first velocity derivative, i.e. u′_x/u′_xx, can be reliably obtained by using the peak-valley-counting (PVC) technique. We have found that the most probable scale, rather than the average, equals the wavelength at the peak of the dissipation spectrum in a plane mixing layer (Zohar & Ho 1996). In this study, we experimentally investigate the generality of applying the PVC technique to estimate the dissipation scale in three basic turbulent shear flows: a flat-plate boundary layer, a wake behind a two-dimensional cylinder and a plane mixing layer. We also analytically explore the quantitative relationships among this length scale and the Kolmogorov and Taylor microscales.

  2. ESTIMATION OF STATURE BASED ON FOOT LENGTH

    Directory of Open Access Journals (Sweden)

    Vidyullatha Shetty

    2015-01-01

    Full Text Available BACKGROUND: Stature is the height of a person in the upright posture. It is an important measure of physical identity. Estimation of body height from its segments or dismembered parts has important applications in the identification of living or dead human bodies, or of remains recovered from disasters and similar situations. OBJECTIVE: Stature is an important indicator for identification. There are numerous means of establishing stature, and their significance lies in simplicity of measurement, applicability and accuracy of prediction. The aim of our study was to review the relationship between foot length and body height. METHODS: The present study reviews various prospective studies done to estimate stature. All measurements were taken using standard measuring devices and standard anthropometric techniques. RESULTS: This review shows a correlation between stature and foot dimensions; it is positive and statistically highly significant. Prediction of stature was found to be most accurate by multiple regression analysis. CONCLUSIONS: Stature and gender can be estimated from foot measurements. The study will help in medico-legal cases in establishing the identity of an individual, and will be useful for anatomists and anthropologists in calculating stature from foot length.

  3. Step Length Estimation Using Handheld Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Gérard Lachapelle

    2012-06-01

    Full Text Available In this paper a novel step length model using a handheld Micro Electrical Mechanical System (MEMS) is presented. It combines the user's step frequency and height with a set of three parameters for estimating step length. The model has been developed and trained using 12 different subjects: six men and six women. For reliable estimation of the step frequency with a handheld device, the frequency content of the handheld sensor's signal is extracted by applying the Short Time Fourier Transform (STFT) independently from the step detection process. The relationship between step and hand frequencies is analyzed for different hand motions and sensor carrying modes. For this purpose, the frequency content of synchronized signals collected with two sensors placed in the hand and on the foot of a pedestrian has been extracted. Performance of the proposed step length model is assessed in several field tests involving 10 test subjects different from the above 12. The percentages of error over the travelled distance using universal parameters and a set of parameters calibrated for each subject are compared. The fitted solutions show an error between 2.5 and 5% of the travelled distance, which is comparable with that achieved by models proposed in the literature for body-fixed sensors only.
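    Step-length models of this kind are typically linear in step frequency and scaled by the user's height. The sketch below shows that general shape with three parameters (a, b, c); the functional form and the parameter values are illustrative assumptions, not the fitted coefficients from this paper.

```python
def step_length(height_m, step_freq_hz, a=0.1, b=0.2, c=0.0):
    """Illustrative three-parameter step-length model: step length grows
    with step frequency, scaled by the user's height. The parameters
    a, b, c here are placeholders, not trained values."""
    return height_m * (a * step_freq_hz + b) + c

# A 1.75 m tall pedestrian walking at 1.8 steps per second
print(round(step_length(1.75, 1.8), 3))  # 0.665 m with these placeholder values
```

    In a calibrated system the three parameters would be fitted per user (or averaged into "universal" values), and travelled distance is accumulated as the sum of estimated step lengths.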

  4. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including...

  5. Estimation of ocular volume from axial length.

    Science.gov (United States)

    Nagra, Manbir; Gilmartin, Bernard; Logan, Nicola S

    2014-12-01

    To determine which biometric parameters provide optimum predictive power for ocular volume. Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm³) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal planes, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were -2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm³) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Apart from CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided an optimum R² value of 79.4% for TOV. Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Estimation of rainfall indices for the determination of length of ...

    African Journals Online (AJOL)

    2004) to determine the length of growing season and timing of arable crop cultivation in Ogun State. Secondary rainfall data was obtained from the four meteorological stations in the State and subjected to arithmetic calculations for estimation of ...

  7. Blind sequence-length estimation of low-SNR cyclostationary sequences

    CSIR Research Space (South Africa)

    Vlok, JD

    2014-06-01

    Full Text Available Estimation algorithm 1 takes the index k of the maximum value of the mean-square correlation sequence ρ(k) as the estimated sequence length N_est. The sequence length will therefore be estimated correctly if the peak of ρ(k) is located at k = N. Technique 1 can therefore only provide the correct answer as long as k = N is considered within the range of k. The positions of segments within the intercepted signal and the value of L will also influence the performance...

  8. STEREOLOGICAL ESTIMATION OF TUBULAR LENGTH FROM THIN VERTICAL SECTIONS

    Directory of Open Access Journals (Sweden)

    Helle V Clausen

    2011-05-01

    Full Text Available In this study tubular structures are represented by stem villous arteries of the human placenta. The architecture of the vascular tree in the human placenta makes it appropriate to use vertical histological sections. When describing tubules, estimates of total length and diameter are informative. The aim of the study was to derive a new stereological estimator of the total length of circular tubules observed in thin vertical sections. Design: Dual perfusion-fixed human placentas. Systematic, uniformly random sampling of vertical sections. Five-μm sections were stained with haematoxylin and eosin (H&E), and the vertical axis was identified in all sections. A test system with cycloid test lines was used. Since tubular surface area is proportional to length and diameter, S ∝ πdL, surface-weighting is equivalent to diameter-and-length-weighting. As each diameter (the extra weight) is known, one may eliminate the diameter-weighting by computing the harmonic mean diameter, which is thus the correct, length-weighted mean tubular diameter, d_L = d_hS. Surface area is estimated in the ordinary way from vertical sections, and with unbiased and robust estimates of S and d_hS respectively, total length may be estimated as L = S / (π·d_hS). Conclusion: A new stereological estimator of total length of a circular tubular structure observed in thin vertical sections is presented.
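    The estimator reduces to a simple computation once the surface area and the length-weighted mean diameter are in hand. A minimal numeric sketch, assuming ideal cylindrical tubules:

```python
import math

def harmonic_mean(values):
    # n / sum(1/x): down-weights large diameters, removing the surface weighting
    return len(values) / sum(1.0 / v for v in values)

def tubule_length(surface_area, surface_weighted_diameters):
    """L = S / (pi * d_hS), where d_hS is the harmonic mean of
    surface-weighted diameter samples (the length-weighted mean diameter)."""
    d_hs = harmonic_mean(surface_weighted_diameters)
    return surface_area / (math.pi * d_hs)

# Sanity check on one ideal cylinder, where S = pi * d * L exactly
d, L_true = 2.0, 100.0
S = math.pi * d * L_true
print(tubule_length(S, [d, d, d]))  # recovers 100.0
```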

  9. Estimation of gestational age from gall-bladder length.

    Science.gov (United States)

    Udaykumar, K; Udaykumar, Padmaja; Nagesh, K R

    2016-01-01

    Establishing a precise duration of gestation is vital in situations such as infanticide and criminal abortions. The present study attempted to estimate the gestational age of the foetus from gall-bladder length. Foetuses of various gestational age groups were dissected, and the length of the gall bladder was measured. The results were analysed, and a substantial degree of correlation was statistically confirmed. This novel method is helpful when the foetus is fragmented, putrefied or eviscerated, where this method can be used as an additional parameter to improve the accuracy of foetal age estimation. © The Author(s) 2015.

  10. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo(Consistent estimation of the order for Markov and hidden Markov chains(1990)) and E.Gassiat and S.Boucheron (Optimal error exp...

  11. Stature estimation from hand and phalanges lengths of Egyptians.

    Science.gov (United States)

    Habib, Sahar Refaat; Kamal, Nashwa Nabil

    2010-04-01

    Estimation of stature from extremities plays an important role in identifying the deceased in forensic examinations. This study examines the relationship between stature and hand and phalanges lengths among Egyptians. Stature, hand and phalanges lengths of 159 subjects, 82 males and 77 females (18-25years) were measured. Statistical analysis indicated that bilateral variation was insignificant for all measurements. Sex differences were significant for all measurements. Linear and multiple regression equations for stature estimation were calculated. Correlation coefficients were found to be positive, but little finger measurements of male and distal phalanges of female fingers were not correlated with stature. Regression equations were checked for accuracy by comparing the estimated stature and actual stature.

  12. Estimation of papaya leaf area using the central vein length

    Directory of Open Access Journals (Sweden)

    Campostrini Eliemar

    2001-01-01

    Full Text Available Four genotypes of papaya (Carica papaya L.), two from the 'Solo' group (Sunrise Solo and Improved Sunrise Solo line 72/12) and two from the 'Formosa' group (Tainung 02 and Known-You 01), grown in Macaé, RJ, Brazil (lat. 22°24' S, long. 41°42' W), were used in this study. Twenty-five mature leaves from each genotype were sampled four and five months after seedling transplant to the field to determine the length of the leaf central vein (LLCV) and the leaf area (LA). According to covariance analyses there were no significant differences in the slope and intercept of the mathematical models calculated for each genotype. Thus, a single mathematical model (Log LA = 0.315 + 1.85 Log LLCV, R² = 0.898) was adjusted to estimate LA from LLCV for the four genotypes. This single model can be applied to estimate LA for the four papaya genotypes using LLCV in the range from 0.25 to 0.60 m, for papaya trees 150 to 180 days after transplanting.
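    Applying the pooled model is straightforward. The sketch below assumes base-10 logarithms and the units given in the abstract (LLCV in m, LA therefore in m²); both are reasonable readings of the formula but are assumptions.

```python
import math

def papaya_leaf_area(llcv_m):
    """Leaf area from central-vein length via the pooled model
    Log LA = 0.315 + 1.85 Log LLCV (base-10 logs assumed)."""
    if not 0.25 <= llcv_m <= 0.60:
        raise ValueError("model fitted only for LLCV in [0.25, 0.60] m")
    return 10.0 ** (0.315 + 1.85 * math.log10(llcv_m))

print(round(papaya_leaf_area(0.40), 3))  # leaf area in m^2 for a 0.40 m vein
```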

  13. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  14. Techniques for estimating allometric equations.

    Science.gov (United States)

    Manaster, B J; Manaster, S

    1975-11-01

    Morphologists have long been aware that differential size relationships of variables can be of great value when studying shape. Allometric patterns have been the basis of many interpretations of adaptations, biomechanisms, and taxonomies. It is important that the parameters of the allometric equation be estimated as accurately as possible, since they are so commonly used in such interpretations. Since the error term may enter the allometric relation either exponentially or additively, there are at least two methods of estimating the parameters of the allometric equation. The most commonly used assumes exponentiality of the error term, and operates by forming a linear function through a logarithmic transformation and then solving by the method of ordinary least squares. On the other hand, if the error term enters the equation in an additive way, a nonlinear method may be used, searching the parameter space for those parameters which minimize the sum of squared residuals. Study of data on body weight and metabolism in birds explores the issues involved in discriminating between the two models by working through a specific example, and shows that these two methods of estimation can yield highly different results. Not only the minimized sum of squared residuals, but also the distribution and randomness of the residuals must be considered in determining which model more precisely estimates the parameters. In general there is no a priori way to tell which model will be best. Given the importance often attached to the parameter estimates, it may be well worth considerable effort to find which method of solution is appropriate for a given set of data.
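    The two estimation routes can be contrasted on synthetic data. This sketch (not the study's bird data) generates y = a·x^b with additive noise, then fits the parameters both by log-transform plus ordinary least squares and by direct minimization of the sum of squared residuals; for a fixed exponent b the optimal coefficient a has a closed form, so the nonlinear search reduces to a one-dimensional scan over b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic allometric data y = a * x**b with ADDITIVE error, the case
# where the log-transform model is technically misspecified
a_true, b_true = 2.0, 0.75
x = np.linspace(1.0, 20.0, 60)
y = a_true * x**b_true + rng.normal(0.0, 0.5, x.size)

# Method 1: assume multiplicative (exponential) error -> line fit on logs
b1, log_a1 = np.polyfit(np.log(x), np.log(y), 1)
a1 = np.exp(log_a1)

# Method 2: assume additive error -> minimize the sum of squared residuals;
# for fixed b the optimal a is closed-form, so scan b on a fine grid
def sse(b):
    a = np.sum(y * x**b) / np.sum(x**(2 * b))
    return np.sum((y - a * x**b) ** 2), a

bs = np.linspace(0.1, 2.0, 2000)
b2 = bs[int(np.argmin([sse(b)[0] for b in bs]))]
a2 = sse(b2)[1]

print(f"log-OLS fit:    a={a1:.3f}, b={b1:.3f}")
print(f"direct SSE fit: a={a2:.3f}, b={b2:.3f}")
```

    As the abstract argues, the two fits can disagree noticeably, and residual diagnostics, not convenience, should decide which error model applies.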

  15. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...

  17. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children's Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  18. A new method for sex estimation from maxillary suture length in a Thai population

    Science.gov (United States)

    Sinthubua, Apichat; Ruengdit, Sittiporn; Das, Srijit

    2017-01-01

    Sex estimation is one of the crucial procedures in building the biological profile from human skeletal remains. Knowing the sex of an unknown individual allows accurate and appropriate methods to be chosen for predicting age, stature and ancestry, and can even aid personal identification. The skull is among the most reliable skeletal elements for this purpose and is usually retained in both archaeological and forensic contexts. Although many morphological features and metric measurements of the skull have been studied for sexing, to the best of our knowledge there is no study on maxillary suture length for sex estimation. This study therefore aims to develop a new sex estimation method for a Thai population by determining three maxillary suture lengths: the anterior, transverse and posterior maxillary sutures, computed from pixel counts obtained from photographs of these sutures. The present study was conducted on 190 Thai bone samples, of which 96 were male and 94 female. An independent t test revealed statistically significant differences (P<0.01) between males and females in all maxillary suture measurements. Equations derived from the prediction model, which requires all three maxillary suture lengths, gave 76.84% accuracy under leave-one-out cross-validation; the percentage accuracies in predicting sex from these equations were relatively moderate. This study provides a novel and objective sex estimation method for Thais, and suggests that maxillary suture length can be applied to sex estimation. The new computerized technique will contribute basic knowledge and a method for sex estimation, especially when only the base of the skull is available in forensic circumstances. PMID:29354297

  19. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    Autonomous unmanned aerial vehicle (UAV) systems depend on state-estimation feedback to control flight operation. Correct state estimation improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS), with an Extended Kalman Filter (EKF) or a feedback controller applied. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  20. Can anchovy age structure be estimated from length distribution ...

    African Journals Online (AJOL)

    The analysis provides a new time-series of proportions-at-age 1, together with associated standard errors, for input into assessments of the resource. The results also caution against the danger of scientists reading more information into data than is really there. Keywords: anchovy, effective sample size, length distribution, ...

  1. Calcaneotalar ratio: a new concept in the estimation of the length of the calcaneus.

    Science.gov (United States)

    David, Vikram; Stephens, Terry J; Kindl, Radek; Ang, Andy; Tay, Wei-Han; Asaid, Rafik; McCullough, Keith

    2015-01-01

    Maintaining the calcaneal length after calcaneal fractures is vital to restoring the normal biomechanics of the foot, because the calcaneus acts as an important lever arm for the plantarflexors of the foot. However, estimation of the length of the calcaneus to be reconstructed in comminuted calcaneal fractures can be difficult. We propose a new method to reliably estimate the calcaneal length radiographically by defining the calcaneotalar length ratio. A total of 100 ankle radiographs with no fracture in the calcaneus or talus, taken in skeletally mature patients, were reviewed by 6 observers. The anteroposterior lengths of the calcaneus and talus were measured, and the calcaneotalar length ratio was determined. The ratio was then used to estimate the length of the calcaneus. Interobserver reliability was determined using Cronbach's α coefficient and Pearson's correlation coefficient. The mean length of the calcaneus was 75 ± 0.6 mm, and the mean length of the talus was 59 ± 0.5 mm. The calcaneotalar ratio was 1.3. Using this ratio and multiplying it by the talar length, the mean estimated length of the calcaneus was within 0.7 mm of the known calcaneal length. Cronbach's α coefficient and Pearson's correlation coefficient showed excellent interobserver reliability. The proposed calcaneotalar ratio is a new and reliable method to radiographically estimate the normal length of the calcaneus when reconstructing the calcaneus. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
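    With the reported mean ratio, the estimate itself is a one-line computation (a sketch of the paper's arithmetic, using the rounded ratio of 1.3):

```python
def estimate_calcaneal_length(talar_length_mm, ratio=1.3):
    """Estimated AP calcaneal length = calcaneotalar ratio x AP talar length."""
    return ratio * talar_length_mm

# Using the study's mean anteroposterior talar length of 59 mm
print(round(estimate_calcaneal_length(59.0), 1))  # 76.7 mm with the rounded ratio
```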

  2. Studies on the Estimation of Stature from Hand and Foot Length of an Individual

    Directory of Open Access Journals (Sweden)

    O. S. Saka

    2016-10-01

    Full Text Available Background: Estimation of stature from the hand and foot length of an individual is an essential study in personal identification. Aim and Objectives: This study set out to find the correlation of stature with hand and foot dimensions in both sexes, and to compare the sexes, in individuals from the Lautech Staff College, Ogbomoso, and the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, Nigeria. Material and Methods: A sample of 140 students and staff (70 male and 70 female) of the Lautech Staff College, Ogbomoso, and the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, aged 16-35 years, was considered, and measurements were taken for each of the parameters. Gender differences for the two parameters were determined using Student's t-test. Pearson's correlation coefficient (r) was used to examine the relationship between the two anthropometric parameters and standing height (stature). All measurements were made using standard anthropometric instruments and standard anthropometric techniques. Results: The findings of the study indicated that male mean values were not significantly different from female mean values in any measured parameter. The study showed significant (p<0.001) positive correlations of stature with hand length and foot length. Hand and foot length provide an accurate and reliable means of establishing the height of an individual. Conclusion: This study will be useful for forensic scientists, anthropologists and anatomists in establishing identity in medico-legal cases.

  3. Focal length estimation guided with object distribution on FocaLens dataset

    Science.gov (United States)

    Yan, Han; Zhang, Yu; Zhang, Shunli; Zhao, Sicong; Zhang, Li

    2017-05-01

    The focal length information of an image is indispensable for many computer vision tasks. In general, focal length can be obtained via camera calibration using specific planar patterns. However, for images taken by an unknown device, focal length can only be estimated from the image itself. Currently, most single-image focal length estimation methods make use of predefined geometric cues (such as vanishing points or parallel lines) to infer focal length, which constrains their application mainly to man-made scenes. Machine learning algorithms have demonstrated great performance in many computer vision tasks, but these methods are seldom used in the focal length estimation task, partially due to the shortage of labeled images for training the model. To bridge this gap, we first introduce a large-scale dataset, FocaLens, which is especially designed for single-image focal length estimation. Taking advantage of the FocaLens dataset, we also propose a new focal length estimation model, which exploits a multiscale detection architecture to encode object distributions in images to assist focal length estimation. Additionally, an online focal transformation approach is proposed to further promote the model's generalization ability. Experimental results demonstrate that the proposed model trained on FocaLens can not only achieve state-of-the-art results on scenes with distinct geometric cues but also obtain comparable results on scenes without them.

  4. Usefulness of telomere length in DNA from human teeth for age estimation.

    Science.gov (United States)

    Márquez-Ruiz, Ana Belén; González-Herrera, Lucas; Valenzuela, Aurora

    2018-03-01

    Age estimation is widely used to identify individuals in forensic medicine. However, the accuracy of the most commonly used procedures is markedly reduced in adulthood, and these methods cannot be applied in practice when morphological information is limited. Molecular methods for age estimation have been extensively developed in the last few years. The fact that telomeres shorten at each round of cell division has led to the hypothesis that telomere length can be used as a tool to predict age. The present study thus aimed to assess the correlation between telomere length measured in dental DNA and age, and the effect of sex and tooth type on telomere length; a further aim was to propose a statistical regression model to estimate the biological age based on telomere length. DNA was extracted from 91 tooth samples belonging to 77 individuals of both sexes and 15 to 85 years old and was used to determine telomere length by quantitative real-time PCR. Our results suggested that telomere length was not affected by sex and was greater in molar teeth. We found a significant correlation between age and telomere length measured in DNA from teeth. However, the equation proposed to predict age was not accurate enough for forensic age estimation on its own. Age estimation based on telomere length in DNA from tooth samples may be useful as a complementary method which provides an approximate estimate of age, especially when human skeletal remains are the only forensic sample available.

  5. Stature estimation from the length of the sternum in South Indian males: a preliminary study.

    Science.gov (United States)

    Menezes, Ritesh G; Kanchan, Tanuj; Kumar, G Pradeep; Rao, P P Jagadish; Lobo, Stany W; Uysal, Selma; Krishan, Kewal; Kalthur, Sneha G; Nagesh, K R; Shettigar, Sunder

    2009-11-01

    Estimation of stature is one of the important initial steps during forensic analysis of human skeletal remains. The aim of the present study was to derive a linear regression formula for estimating stature of adult South Indian males from the length of the sternum. The study included 35 male sternums of South Indian origin dissected from cadavers during medico-legal autopsies. The linear regression equation [Stature=117.784 + (3.429 x Sternal length)] was derived to estimate the stature from the length of the sternum. The correlation coefficient was 0.638. The standard error of the estimate was 5.64 cm. This preliminary study concludes that the length of the sternum can be used as a tool for stature estimation in adult South Indian males.
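    The reported regression is directly usable as a point estimator, keeping in mind the standard error of the estimate of 5.64 cm and that it applies only to adult South Indian males:

```python
def stature_from_sternum(sternal_length_cm):
    """Stature (cm) from sternal length (cm) for adult South Indian males:
    stature = 117.784 + 3.429 * sternal length (SEE 5.64 cm, r = 0.638)."""
    return 117.784 + 3.429 * sternal_length_cm

print(round(stature_from_sternum(15.0), 2))  # 169.22 cm for a 15 cm sternum
```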

  6. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and the results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly with accidents, with the number of plants and with the cumulative number of operating years. Using nine core damage accidents in electricity-producing plants as a data base, it is estimated that the probability that a plant has a serious flaw decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents decreased from 0.04 per reactor year to 0.0004 per reactor year

  7. Development of software and modification of Q-FISH protocol for estimation of individual telomere length in immunopathology.

    Science.gov (United States)

    Barkovskaya, M Sh; Bogomolov, A G; Knauer, N Yu; Rubtsov, N B; Kozlov, V A

    2017-04-01

    Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in the cells of the immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells, followed by digital image microscopy, allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA denaturation to get better DAPI-banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimations for sister chromatids, background fluorescence estimations, and correction of nonuniform light effects. The application of the developed software for analysis of telomere length in patients with rheumatoid arthritis was demonstrated.

  8. Estimating the minimum delay optimal cycle length based on a time-dependent delay formula

    Directory of Open Access Journals (Sweden)

    Ahmed Y. Zakariya

    2016-09-01

    Full Text Available For fixed time traffic signal control, the well-known Webster’s formula is widely used to estimate the minimum delay optimal cycle length. However, this formula overestimates the cycle length for high degrees of saturation. In this paper, we propose two regression formulas for estimating the minimum delay optimal cycle length based on a time-dependent delay formula as used in the Canadian Capacity Guide and the Highway Capacity Manual (HCM. For this purpose, we develop a search algorithm to determine the minimum delay optimal cycle length required for the regression analysis. Numerical results show that the proposed formulas give a better estimation for the optimal cycle length at high intersection flow ratios compared to Webster’s formula.
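
    Webster's formula referred to above is C0 = (1.5L + 5)/(1 − Y), where L is the total lost time per cycle (s) and Y the sum of the critical flow ratios. A minimal sketch (illustrative numbers) shows the divergence as Y approaches 1, which is the overestimation at high degrees of saturation noted in the abstract:

```python
def webster_optimal_cycle(lost_time_s, flow_ratio_sum):
    """Webster's minimum-delay cycle length C0 = (1.5*L + 5) / (1 - Y),
    where L is total lost time per cycle (s) and Y is the sum of the
    critical flow ratios q/s over the signal phases."""
    if not 0.0 <= flow_ratio_sum < 1.0:
        raise ValueError("Y must lie in [0, 1) for the formula to apply")
    return (1.5 * lost_time_s + 5.0) / (1.0 - flow_ratio_sum)

# The blow-up as Y -> 1 motivates the regression formulas proposed in the paper:
for y in (0.5, 0.7, 0.9, 0.95):
    print(f"Y = {y:.2f}: C0 = {webster_optimal_cycle(16.0, y):.1f} s")
```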

  9. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    Science.gov (United States)

    Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming

    2003-01-01

    This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  10. Preoperative estimation of tibial nail length--because size does matter.

    LENUS (Irish Health Repository)

    Galbraith, J G

    2012-11-01

    Selecting the correct tibial nail length is essential for satisfactory outcomes. Nails that are inserted and are found to be of inappropriate length should be removed. Accurate preoperative nail estimation has the potential to reduce intra-operative errors, operative time and radiation exposure.

  11. Cost analysis and estimating tools and techniques

    CERN Document Server

    Nussbaum, Daniel

    1990-01-01

    Changes in production processes reflect the technological advances permeating our products and services. U. S. industry is modernizing and automating. In parallel, direct labor is fading as the primary cost driver while engineering and technology related cost elements loom ever larger. Traditional, labor-based approaches to estimating costs are losing their relevance. Old methods require augmentation with new estimating tools and techniques that capture the emerging environment. This volume represents one of many responses to this challenge by the cost analysis profession. The Institute of Cost Analysis (ICA) is dedicated to improving the effectiveness of cost and price analysis and enhancing the professional competence of its members. We encourage and promote exchange of research findings and applications between the academic community and cost professionals in industry and government. The 1990 National Meeting in Los Angeles, jointly sponsored by ICA and the National Estimating Society (NES),...

  12. Transport-constrained extensions of collision and track length estimators for solutions of radiative transport problems

    International Nuclear Information System (INIS)

    Kong, Rong; Spanier, Jerome

    2013-01-01

    In this paper we develop novel extensions of collision and track length estimators for the complete space-angle solutions of radiative transport problems. We derive the relevant equations, prove that our new estimators are unbiased, and compare their performance with that of more conventional estimators. Such comparisons, based on numerical solutions of simple one-dimensional slab problems, indicate the potential superiority of the new estimators for a wide variety of more general transport problems.
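
    For background, the two conventional estimators that the paper extends can be illustrated on a toy one-dimensional, purely absorbing slab (a deliberate simplification; the paper treats full space-angle transport). Both estimators are unbiased for the slab-averaged scalar flux, with the track-length version typically showing lower variance:

```python
import numpy as np

# Mono-directional unit-strength beam entering a purely absorbing slab of
# thickness T with total cross-section SIGMA; the slab-averaged scalar flux
# is analytically (1 - exp(-SIGMA*T)) / (SIGMA*T).
SIGMA, T, N = 1.0, 1.0, 200_000
rng = np.random.default_rng(1)

s = rng.exponential(1.0 / SIGMA, size=N)       # sampled free-flight distances

# Track-length estimator: score the path length each particle lays down
# inside the slab (volume = T per unit cross-sectional area).
track = np.minimum(s, T)
flux_track = track.mean() / T

# Collision estimator: score 1/SIGMA at each collision inside the slab.
collided = s < T
flux_coll = (collided / SIGMA).sum() / (N * T)

analytic = (1.0 - np.exp(-SIGMA * T)) / (SIGMA * T)
print(flux_track, flux_coll, analytic)
```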

  13. Population estimation techniques for routing analysis

    International Nuclear Information System (INIS)

    Sathisan, S.K.; Chagari, A.K.

    1994-01-01

    A number of on-site and off-site factors affect the potential siting of a radioactive materials repository at Yucca Mountain, Nevada. Transportation-related issues such as route selection and design are among them. These involve evaluation of potential risks and impacts, including those related to population. Population characteristics (total population and density) are critical factors in the risk assessment, emergency preparedness and response planning, and ultimately in route designation. This paper presents an application of Geographic Information System (GIS) technology to facilitate such analyses. Specifically, techniques to estimate critical population information are presented. A case study using the highway network in Nevada is used to illustrate the analyses. TIGER coverages are used as the basis for population information at a block level. The data are then synthesized at tract, county and state levels of aggregation. Of particular interest are population estimates for various corridor widths along transport corridors -- ranging from 0.5 miles to 20 miles in this paper. A sensitivity analysis based on the level of data aggregation is also presented. The results of these analyses indicate that specific characteristics of the area and its population could be used as indicators to aggregate data appropriately for the analysis
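
    A drastically simplified, hypothetical version of the corridor analysis is sketched below. Real GIS work would use TIGER block geometries and network buffers; the route, block centroids and populations here are invented for illustration:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab (2-D numpy arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def corridor_population(route, centroids, pops, half_width):
    """Total population of blocks whose centroid lies within half_width of
    the polyline route (a rough stand-in for a GIS buffer query)."""
    total = 0.0
    for p, pop in zip(centroids, pops):
        d = min(point_segment_distance(p, route[i], route[i + 1])
                for i in range(len(route) - 1))
        if d <= half_width:
            total += pop
    return total

# Toy data: an L-shaped route and three population "blocks".
route = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
centroids = np.array([[5.0, 0.3], [10.5, 5.0], [0.0, 8.0]])
pops = np.array([1200.0, 800.0, 500.0])
print(corridor_population(route, centroids, pops, half_width=1.0))  # -> 2000.0
```

Widening the corridor (e.g. half_width=20.0) captures all three blocks, mirroring the sensitivity analysis over corridor widths described in the abstract.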

  14. Estimation of stature from the length of the sternum in South Indian females.

    Science.gov (United States)

    Menezes, Ritesh G; Nagesh, K R; Monteiro, Francis N P; Kumar, G Pradeep; Kanchan, Tanuj; Uysal, Selma; Rao, P P Jagadish; Rastogi, Prateek; Lobo, Stany W; Kalthur, Sneha G

    2011-08-01

    Estimation of stature is one of the principal elements in practical forensic casework involving examination of skeletal remains. The present study was undertaken to estimate stature from the length of the sternum in South Indian females using a linear regression equation. The material for the present study consisted of intact sternums belonging to adult females of South Indian origin aged between 25 and 35 years of age obtained during medico-legal autopsies. The length of the sternum was measured as the combined length of the manubrium and the mesosternum (body of the sternum) from the incisura jugularis (central suprasternal notch) to the mesoxiphoid junction along the mid-sagittal plane using vernier calipers. A linear regression equation [Stature = 111.599 + (3.316 × Length of the sternum)] was derived to estimate stature from the length of the sternum. The correlation coefficient was 0.639. The standard error of the estimate was 4.11 cm. The present study concludes that the length of the sternum is a reliable predictor of stature in adult South Indian females and can be used as a tool for stature estimation when better predictors of stature like the long bones of the limbs are not available when examining skeletal remains. Copyright © 2011 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
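
    The reported regression can be applied directly; the 15 cm sternum length below is only an illustrative input, not a value from the study:

```python
def stature_from_sternum(length_cm):
    """Regression reported in the study for South Indian females:
    Stature = 111.599 + 3.316 * sternum length, with a standard error
    of the estimate of 4.11 cm (r = 0.639)."""
    estimate = 111.599 + 3.316 * length_cm
    see = 4.11  # cm, reported standard error of the estimate
    return estimate, (estimate - see, estimate + see)

est, band = stature_from_sternum(15.0)
print(f"estimated stature: {est:.1f} cm, +/-1 SEE band: {band[0]:.1f}-{band[1]:.1f} cm")
```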

  15. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    In this paper, two novel methods based on the Estimate Merge Technique are proposed. The Estimate Merge Technique involves a process of obtaining a final estimate by the fusion of a posteriori estimates given by different nonlinear estimators, which are in turn driven by the towed array bearing-only measurements.

  16. Pedestrian Stride Length Estimation from IMU Measurements and ANN Based Algorithm

    Directory of Open Access Journals (Sweden)

    Haifeng Xing

    2017-01-01

    Full Text Available Pedestrian dead reckoning (PDR can be used for continuous position estimation when satellite or other radio signals are not available, and the accuracy of the stride length measurement is important. Current stride length estimation algorithms, including linear and nonlinear models, consider a few variable factors, and some rely on high precision and high cost equipment. This paper puts forward a stride length estimation algorithm based on a back propagation artificial neural network (BP-ANN, using a consumer-grade inertial measurement unit (IMU; it then discusses various factors in the algorithm. The experimental results indicate that the error of the proposed algorithm in estimating the stride length is approximately 2%, which is smaller than that of the frequency and nonlinear models. Compared with the latter two models, the proposed algorithm does not need to determine individual parameters in advance if the trained neural net is effective. It can, thus, be concluded that this algorithm shows superior performance in estimating pedestrian stride length.

  17. Node Detection and Internode Length Estimation of Tomato Seedlings Based on Image Analysis and Machine Learning

    Directory of Open Access Journals (Sweden)

    Kyosuke Yamamoto

    2016-07-01

    Full Text Available Seedling vigor in tomatoes determines the quality and growth of fruits and total plant productivity. It is well known that the salient effects of environmental stresses appear on the internode length; the length between adjoining main stem node (henceforth called node). In this study, we develop a method for internode length estimation using image processing technology. The proposed method consists of three steps: node detection, node order estimation, and internode length estimation. This method has two main advantages: (i) as it uses machine learning approaches for node detection, it does not require adjustment of threshold values even though seedlings are imaged under varying timings and lighting conditions with complex backgrounds; and (ii) as it uses affinity propagation for node order estimation, it can be applied to seedlings with different numbers of nodes without prior provision of the node number as a parameter. Our node detection results show that the proposed method can detect 72% of the 358 nodes in time-series imaging of three seedlings (recall = 0.72, precision = 0.78). In particular, the application of a general object recognition approach, Bag of Visual Words (BoVWs), enabled the elimination of many false positives on leaves occurring in the image segmentation based on pixel color, significantly improving the precision. The internode length estimation results had a relative error of below 15.4%. These results demonstrate that our method has the ability to evaluate the vigor of tomato seedlings quickly and accurately.

  18. Node Detection and Internode Length Estimation of Tomato Seedlings Based on Image Analysis and Machine Learning.

    Science.gov (United States)

    Yamamoto, Kyosuke; Guo, Wei; Ninomiya, Seishi

    2016-07-07

    Seedling vigor in tomatoes determines the quality and growth of fruits and total plant productivity. It is well known that the salient effects of environmental stresses appear on the internode length; the length between adjoining main stem node (henceforth called node). In this study, we develop a method for internode length estimation using image processing technology. The proposed method consists of three steps: node detection, node order estimation, and internode length estimation. This method has two main advantages: (i) as it uses machine learning approaches for node detection, it does not require adjustment of threshold values even though seedlings are imaged under varying timings and lighting conditions with complex backgrounds; and (ii) as it uses affinity propagation for node order estimation, it can be applied to seedlings with different numbers of nodes without prior provision of the node number as a parameter. Our node detection results show that the proposed method can detect 72% of the 358 nodes in time-series imaging of three seedlings (recall = 0.72, precision = 0.78). In particular, the application of a general object recognition approach, Bag of Visual Words (BoVWs), enabled the elimination of many false positives on leaves occurring in the image segmentation based on pixel color, significantly improving the precision. The internode length estimation results had a relative error of below 15.4%. These results demonstrate that our method has the ability to evaluate the vigor of tomato seedlings quickly and accurately.

  19. Effect of radiographic technique upon prediction of tooth length in intraoral radiography.

    Science.gov (United States)

    Bhakdinaronk, A; Manson-Hing, L R

    1981-01-01

    Evaluation of the effect of radiographic technique upon the prediction of tooth lengths for all major types of teeth indicated the following: 1. The paralleling technique using the Rinn XCP film holder, the paralleling technique using the hemostat with bite block, and the bisecting-the-angle technique using the Rinn XCP film holder were the most accurate systems. 2. There were no significant differences among the three most accurate techniques. 3. The paralleling technique using a 16-inch tube-to-patient distance was more accurate than the bisecting-the-angle technique using an 8-inch tube-to-patient distance. 4. A beam-guiding film holder produced a more accurate radiographic image than the other film holders used in the study. 5. For the buccal roots of maxillary molars, the bisecting-the-angle technique using the Rinn XCP film holder produced the least mean difference between radiographic image and tooth length. 6. The accuracy of predicting the length of mandibular molar teeth from the diagnostic radiograph was not affected by radiographic technique.

  20. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    International Nuclear Information System (INIS)

    El Kanawati, W; Létang, J M; Sarrut, D; Freud, N; Dauvergne, D; Pinto, M; Testa, É

    2015-01-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitters calculations in proton therapy. Prompt-γ emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role to help monitoring the treatment. However, the estimation of the number and the energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of the prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10 5 . (paper)

  1. Natural spline interpolation and exponential parameterization for length estimation of curves

    Science.gov (United States)

    Kozera, R.; Wilkołazka, M.

    2017-07-01

    This paper tackles the problem of estimating the length of a regular parameterized curve γ from an ordered sample of interpolation points in arbitrary Euclidean space by a natural spline. The corresponding tabular parameters are not given and are approximated by the so-called exponential parameterization (depending on λ ∈ [0, 1]). The respective convergence orders α(λ) for estimating the length of γ are established for curves sampled more-or-less uniformly. The numerical experiments confirm the slow convergence order α(λ) = 2 for all λ ∈ [0, 1) and the cubic order α(1) = 3 once the natural spline is used.
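
    A sketch of the length estimator under exponential parameterization, assuming SciPy's natural cubic spline is available. The sample curve (a semicircle) and point count are illustrative, not the paper's test cases:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_length(points, lam, n_fine=2000):
    """Estimate curve length from ordered samples: build the exponential
    parameterization t_{i+1} = t_i + |p_{i+1} - p_i|^lam, fit a natural
    cubic spline componentwise, and integrate its speed numerically."""
    points = np.asarray(points, dtype=float)
    chords = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chords ** lam)])
    spline = CubicSpline(t, points, bc_type='natural')
    tt = np.linspace(t[0], t[-1], n_fine)
    speed = np.linalg.norm(spline(tt, 1), axis=1)   # |gamma'(t)| on a fine grid
    return float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(tt)))

# Semicircle of radius 1 (true length = pi), sampled uniformly.
theta = np.linspace(0.0, np.pi, 20)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
print(spline_length(pts, lam=1.0), np.pi)
```

For uniform samples, λ = 1 (cumulative chord length) and λ = 0 (uniform knots) give near-identical estimates; the convergence-order gap in the paper emerges for non-uniform sampling.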

  2. Estimation of age structure of fish populations from length-frequency data

    International Nuclear Information System (INIS)

    Kumar, K.D.; Adams, S.M.

    1977-01-01

    A probability model is presented to determine the age structure of a fish population from length-frequency data. It is shown that when the age-length key is available, maximum-likelihood estimates of the age structure can be obtained. When the key is not available, approximate estimates of the age structure can be obtained. The model is used for determination of the age structure of populations of channel catfish and white crappie. Practical applications of the model to impact assessment are discussed
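
    When the age-length key is available, the basic computation is a matrix product of the key with the observed length-frequency counts. The key and counts below are hypothetical numbers for illustration:

```python
import numpy as np

# Hypothetical age-length key: rows are length bins, columns are ages;
# entry [l, a] = P(age = a | length in bin l), each row summing to 1.
age_length_key = np.array([
    [0.80, 0.20, 0.00],   # small fish: mostly age 1
    [0.25, 0.60, 0.15],   # medium fish
    [0.05, 0.35, 0.60],   # large fish: mostly age 3
])
length_counts = np.array([400.0, 350.0, 250.0])   # fish counted per length bin

# Estimated age composition: distribute each bin's count over the ages.
age_counts = length_counts @ age_length_key
print(age_counts)  # -> [420.  377.5 202.5]
```

When the key is unavailable, the paper's maximum-likelihood machinery replaces this direct product with approximate estimates.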

  3. Blood capillary length estimation from three-dimensional microscopic data by image analysis and stereology.

    Science.gov (United States)

    Kubínová, Lucie; Mao, Xiao Wen; Janáček, Jiří

    2013-08-01

    Studies of the capillary bed characterized by its length or length density are relevant in many biomedical studies. A reliable assessment of capillary length from two-dimensional (2D), thin histological sections is a rather difficult task as it requires physical cutting of such sections in randomized directions. This is often technically demanding, inefficient, or outright impossible. However, if 3D image data of the microscopic structure under investigation are available, methods of length estimation that do not require randomized physical cutting of sections may be applied. Two different rat brain regions were optically sliced by confocal microscopy and resulting 3D images processed by three types of capillary length estimation methods: (1) stereological methods based on a computer generation of isotropic uniform random virtual test probes in 3D, either in the form of spatial grids of virtual "slicer" planes or spherical probes; (2) automatic method employing a digital version of the Crofton relations using the Euler characteristic of planar sections of the binary image; and (3) interactive "tracer" method for length measurement based on a manual delineation in 3D of the axes of capillary segments. The presented methods were compared in terms of their practical applicability, efficiency, and precision.

  4. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    Science.gov (United States)

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  5. Standing Height and its Estimation Utilizing Foot Length Measurements in Adolescents from Western Region in Kosovo

    Directory of Open Access Journals (Sweden)

    Stevo Popović

    2017-10-01

    Full Text Available The purpose of this research is to examine standing height in both Kosovan genders in the Western Region as well as its association with foot length, as an alternative to estimating standing height. A total of 664 individuals (338 male and 326 female) participated in this research. The anthropometric measurements were taken according to the protocol of ISAK. The relationships between body height and foot length were determined using simple correlation coefficients at a ninety-five percent confidence interval. A comparison of means of standing height and foot length between genders was performed using a t-test. After that, a linear regression analysis was carried out to examine the extent to which foot length can reliably predict standing height. Results displayed that Western Kosovan males are 179.71±6.00cm tall and have a foot length of 26.73±1.20cm, while Western Kosovan females are 166.26±5.23cm tall and have a foot length of 23.66±1.06cm. The results show that both genders of Western Kosovans form a tall group, a little taller than the general Kosovan population. Moreover, foot length reliably predicts standing height in both genders, but not as reliably as arm span. This study also confirms the necessity of developing separate height models for each region in Kosovo, as the results from Western Kosovans do not correspond to the general values.

  6. A method for estimating age of Danish medieval sub-adults based on long bone length

    DEFF Research Database (Denmark)

    Primeau, C.; Lynnerup, Niels; Friis, Laila Saidane

    2012-01-01

    for aging archaeological Danish sub-adults from the medieval period based on diaphyseal lengths. The problem with using data on Danish samples, which have been derived from a different population, is the possibility of skewing age estimates. In this study 58 Danish archaeological sub-adults were examined...

  7. Estimating Transformation Length in Linear- to Minimum-Phase Transformation Using Cepstrums

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen

    1997-01-01

    response of the minimum-phase FIR-filter. The transformation length estimation is made using the absolute value of the dominating zero in the linear-phase FIR-filter and the maximum allowed amplitude and phase deviation of the disturbing function. Two examples are given. The first one verifies...

  8. The Grid Method in Estimating the Path Length of a Moving Animal

    NARCIS (Netherlands)

    Reddingius, J.; Schilstra, A.J.; Thomas, G.

    1983-01-01

    (1) The length of a path covered by a moving animal may be estimated by counting the number of times the animal crosses any line of a grid and applying a conversion factor. (2) Some factors are based on the expected distance through a randomly crossed square; another on the expected crossings of a
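
    For a square grid of spacing d and an isotropically oriented path, the Buffon-type conversion factor gives L ≈ πdN/4, where N is the number of grid-line crossings. A small simulation (assuming straight segments with uniform random orientation and offset, an idealization of an animal's path) checks the factor:

```python
import numpy as np

def estimate_path_length(n_crossings, grid_spacing):
    """Buffon-style conversion: for an isotropically oriented path on a
    square grid of spacing d, E[crossings] = 4*L / (pi*d), hence the
    path length estimate L_hat = pi * d * N / 4."""
    return np.pi * grid_spacing * n_crossings / 4.0

def count_grid_crossings(x0, y0, angle, seg_len, d):
    """Crossings of the lines x = k*d and y = k*d by one straight segment."""
    x1, y1 = x0 + seg_len * np.cos(angle), y0 + seg_len * np.sin(angle)
    nx = abs(np.floor(x1 / d) - np.floor(x0 / d))
    ny = abs(np.floor(y1 / d) - np.floor(y0 / d))
    return int(nx + ny)

rng = np.random.default_rng(7)
d, true_length, n_segs = 1.0, 10_000.0, 10_000
crossings = sum(
    count_grid_crossings(rng.uniform(0, d), rng.uniform(0, d),
                         rng.uniform(0, 2 * np.pi), true_length / n_segs, d)
    for _ in range(n_segs))
print(estimate_path_length(crossings, d), true_length)
```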

  9. Growth estimation of mangrove cockle Anadara tuberculosa (Mollusca: Bivalvia: application and evaluation of length-based methods

    Directory of Open Access Journals (Sweden)

    Luis A Flores

    2011-03-01

    Full Text Available Growth is one of the key processes in the dynamics of exploited resources, since it provides part of the information required for structured population models. Growth of the mangrove cockle, Anadara tuberculosa, was estimated through length-based methods (ELEFAN I and NSLCA) using diverse shell length intervals (SLI). The variability of the L∞, k and phi prime (Φ′) estimates and the effect of each sample were quantified by jackknife techniques. Results showed the same L∞ estimates from ELEFAN I and NSLCA across each SLI used, and all L∞ were within the expected range. On the contrary, k estimates differed between methods. Jackknife estimations uncovered the tendency of ELEFAN I to overestimate k with increases in SLI, and allowed the identification of differences in uncertainty (PE and CV) between both methods. The average values of Φ′ derived from NSLCA1.5 and length-age sources were similar and corresponded to ranges reported by other authors. Estimates of L∞, k and Φ′ from NSLCA1.5 were 85.97mm, 0.124/year and 2.953 with jackknife, and 86.36mm for L∞, 0.110/year for k and 2.914 for Φ′ without jackknife, respectively. Based on the observed evidence and the biology of the species, NSLCA is suggested to be used with jackknife and an SLI of 1.5mm as an ad hoc approach to estimate the growth parameters of mangrove cockle. Rev. Biol. Trop. 59(1): 159-170. Epub 2011 March 01.
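
    The phi prime values above are consistent with the standard growth performance index Φ′ = log10(k) + 2·log10(L∞) used to compare von Bertalanffy fits; for example, the without-jackknife estimates reproduce the reported 2.914 (small discrepancies for the other set can arise from rounding of the reported parameters):

```python
import math

def phi_prime(k_per_year, l_inf_mm):
    """Growth performance index phi' = log10(k) + 2*log10(L_inf),
    the standard index for comparing von Bertalanffy growth fits."""
    return math.log10(k_per_year) + 2.0 * math.log10(l_inf_mm)

# Parameters reported in the abstract (NSLCA, without jackknife):
print(round(phi_prime(0.110, 86.36), 3))  # -> 2.914, matching the abstract
```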

  10. Traffic volume estimation using network interpolation techniques.

    Science.gov (United States)

    2013-12-01

    The kriging method is a frequently used interpolation methodology in geography that estimates unknown values at given locations by accounting for the distances among observation points. When it is used in the transportation field, network distanc...

  11. Measured tube technique for ensuring the correct length of slippery artificial chordae in mitral valvuloplasty.

    Science.gov (United States)

    Matsui, Yoshiro; Kubota, Suguru; Sugiki, Hiroshi; Wakasa, Satoshi; Ooka, Tomonori; Tachibana, Tsuyoshi; Sasaki, Shigeyuki

    2011-09-01

    Mitral valvuloplasty using Gore-Tex (W.L. Gore & Associates, Inc, Flagstaff, AZ) as artificial chordae is often associated with difficulties in determining the length of the artificial chordae, as well as preventing knot slippage, especially for patients with broad anterior leaflet prolapse. We describe a simple technique that enables surgeons to easily determine the correct length of the artificial chordae and tie slippery knots without using a specific device. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  12. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
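
    A minimal sketch of the length-based idea, under the strong simplifying assumption that outlet flow is proportional to vessel length and resistance then follows from R = P/Q. The branch names, lengths and hemodynamic numbers below are purely illustrative, not the paper's calibrated model:

```python
def length_based_resistances(vessel_lengths_mm, total_flow_ml_s, perfusion_pressure_mmHg):
    """Sketch of the vessel length-based idea: a longer coronary artery feeds
    more daughter branches and a larger muscle mass, so outlet flow is taken
    proportional to vessel length; resistance then follows from R = P/Q.
    All inputs here are illustrative values only."""
    total_len = sum(vessel_lengths_mm)
    flows = [total_flow_ml_s * l / total_len for l in vessel_lengths_mm]
    resistances = [perfusion_pressure_mmHg / q for q in flows]
    return flows, resistances

# Three coronary branches (e.g. LAD, LCx, RCA) with invented lengths:
flows, res = length_based_resistances([150.0, 100.0, 120.0], 4.0, 90.0)
print(flows, res)
```

Note that resistance comes out inversely proportional to vessel length, so the longest branch receives the largest flow fraction.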

  13. Training Sequence Length Optimization for a Turbo-Detector Using Decision-Directed Channel Estimation

    Directory of Open Access Journals (Sweden)

    Imed Hadj Kacem

    2008-01-01

    Full Text Available We consider the problem of optimization of the training sequence length when a turbo-detector composed of a maximum a posteriori (MAP equalizer and a MAP decoder is used. At each iteration of the receiver, the channel is estimated using the hard decisions on the transmitted symbols at the output of the decoder. The optimal length of the training sequence is found by maximizing an effective signal-to-noise ratio (SNR taking into account the data throughput loss due to the use of pilot symbols.

  14. Applying fuzzy logic to estimate the parameters of the length-weight relationship

    Directory of Open Access Journals (Sweden)

    S. D. Bitar

    Full Text Available We evaluated three mathematical procedures to estimate the parameters of the relationship between weight and length for Cichla monoculus: least squares ordinary regression on log-transformed data, non-linear estimation using raw data and a mix of multivariate analysis and fuzzy logic. Our goal was to find an alternative approach that considers the uncertainties inherent to this biological model. We found that non-linear estimation generated more consistent estimates than least squares regression. Our results also indicate that it is possible to find consistent estimates of the parameters directly from the centers of mass of each cluster. However, the most important result is the intervals obtained with the fuzzy inference system.

  15. Applying fuzzy logic to estimate the parameters of the length-weight relationship.

    Science.gov (United States)

    Bitar, S D; Campos, C P; Freitas, C E C

    2016-05-03

    We evaluated three mathematical procedures to estimate the parameters of the relationship between weight and length for Cichla monoculus: least squares ordinary regression on log-transformed data, non-linear estimation using raw data and a mix of multivariate analysis and fuzzy logic. Our goal was to find an alternative approach that considers the uncertainties inherent to this biological model. We found that non-linear estimation generated more consistent estimates than least squares regression. Our results also indicate that it is possible to find consistent estimates of the parameters directly from the centers of mass of each cluster. However, the most important result is the intervals obtained with the fuzzy inference system.
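
    The first two of the three procedures can be compared on synthetic data (the fuzzy-logic step is omitted; the true parameters, sample size and noise model below are invented for illustration, with b near 3 corresponding to isometric growth):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic length-weight data, W = a * L^b with multiplicative noise.
a_true, b_true = 0.02, 3.0
L = rng.uniform(10.0, 60.0, size=300)
W = a_true * L ** b_true * rng.lognormal(0.0, 0.1, size=300)

# Procedure 1: ordinary least squares on log-transformed data.
slope, intercept = np.polyfit(np.log(L), np.log(W), 1)
a_log, b_log = np.exp(intercept), slope

# Procedure 2: non-linear estimation on the raw data
# (starting values chosen near plausible magnitudes).
(a_nl, b_nl), _ = curve_fit(lambda x, a, b: a * x ** b, L, W, p0=(0.01, 3.0))

print(f"log-log OLS:  a = {a_log:.4f}, b = {b_log:.3f}")
print(f"non-linear:   a = {a_nl:.4f}, b = {b_nl:.3f}")
```

The two fits weight the data differently (log-transformation equalizes the multiplicative noise, raw-data least squares emphasizes the heaviest fish), which is one source of the inconsistencies the abstract reports.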

  16. Estimation of stature from index and ring finger length in a North Indian adolescent population.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Asha, Ningthoukhongjam

    2012-07-01

    The identification of commingled mutilated remains is a challenge to forensic experts, and hence there is a need for studies on estimation of stature from various body parts in different population groups. Such studies can help in narrowing down the pool of possible victim matches in cases of identification from dismembered remains. Studies pertaining to stature estimation among adolescents are limited owing to the ongoing growth process and growth spurt during the adolescent period. In view of the limited literature on the estimation of stature in the adolescent group, the present preliminary research was taken up to report the correlation between index and ring finger length and stature in a North Indian adolescent population. Three anthropometric measurements, stature, index finger length (IFL) and ring finger length (RFL), were taken on the subjects included in the study. Mean stature, IFL and RFL were significantly larger in males than females. Statistically significant correlation was observed between stature, IFL and RFL in right and left hands. Pearson correlation (r) was higher among males than females. Among males and females correlation coefficient was higher for the IFL than the RFL. The present research derives the linear regression models and multiplication factors for estimating stature from IFL and RFL and concludes that the living stature can be predicted from the IFL and RFL with a reasonable accuracy in the adolescent population of North India. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  17. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...

  18. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard

    1991-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and accuracy of the RDD technique are compared to those of the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, only additions; therefore, the technique is very fast - in some cases up to 100 times faster than the FFT technique. Another important advantage is that if the RDD technique is implemented correctly, the correlation function estimates are unbiased. Comparison with exact solutions for the correlation functions shows that the RDD auto-correlation estimates...
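A minimal sketch of the level-triggered Random Decrement signature described above. The AR(2) process standing in for a noise-driven SDOF system, the trigger level, and the trigger band are illustrative assumptions; the signature should be proportional to the auto-correlation function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated modal response: an AR(2) process standing in for a lightly
# damped SDOF system driven by band-limited white noise (illustrative)
n = 200_000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.94 * x[t - 1] - 0.97 * x[t - 2] + rng.standard_normal()
x = x[1000:] / x[1000:].std()        # drop transient, normalize to unit variance

# Random Decrement signature: average all segments starting where x passes
# through a trigger level -- additions only, which is what makes RDD fast
level, band, max_lag = 1.0, 0.05, 100
starts = np.where(np.abs(x[: len(x) - max_lag] - level) < band)[0]
rdd = np.mean([x[i : i + max_lag] for i in starts], axis=0)

# Reference: the sample auto-correlation function; for this trigger
# condition, rdd(tau) ~ level * rho(tau)
m = len(x) - max_lag
acf = np.array([np.mean(x[:m] * x[k : k + m]) for k in range(max_lag)])
```

Because each triggered segment is simply summed into the running average, the cost is one addition per trigger per lag, which is the source of the speed advantage over FFT-based correlation estimation.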

  19. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effects of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to the class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators achieved far better accuracy than the ordinary least squares technique.
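The eigensystem diagnosis and the principal components regression remedy can be sketched as follows on synthetic collinear data; the noise levels and the component-rejection threshold are illustrative assumptions, not taken from the flight-test examples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Nearly collinear regressors (x2 is x1 plus small noise)
n = 300
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + 0.1 * rng.standard_normal(n)

# Eigensystem analysis: a tiny singular value flags collinearity
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print("condition number:", s[0] / s[-1])        # large => collinear data

# Principal components regression: drop components with tiny singular values
keep = s > 1e-1 * s[0]
theta_pcr = Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T @ y

# Ordinary least squares for comparison (high variance under collinearity)
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("PCR estimate:", theta_pcr)
print("OLS estimate:", theta_ols)
```

Dropping the near-null component introduces a small bias but removes the huge variance along the collinear direction, which is exactly the trade-off the abstract describes.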

  20. [Estimation of the body length from the hand bones in adult subjects].

    Science.gov (United States)

    Zviagin, V N; Zamiatina, A O

    2008-01-01

    A method for estimation of the body length from the hand bones in adult subjects is reported for the first time. Carpal, metacarpal, and phalangeal bone lengths in 108 skeletons of Caucasoid subjects (stored in the collections of the Department of Anthropology, M.V. Lomonosov Moscow State University, and the Museum of Anthropology, Saint Petersburg State University) were measured to the nearest 0.1 mm by the method of R. Martin. The SPSS programs were used to calculate multiple regression equations allowing for the determination of the body length from the lengths of the carpal bones (to within +/- 46.1 mm), metacarpal bones I-V (to within +/- 56.7 to 48.6 mm), their combinations (to within +/- 49.1 to 47.9 mm), and the longitudinal size of rays I-V (to within +/- 50.8 to 44.4 mm). The precision of the estimation was as high as +/- 3.5 mm provided all the hand bones were available for measurement. It is concluded that the results of verification of this method may be applied in the practice of forensic medicine.

  1. An autocorrelation technique for measuring sub-picosecond bunch length using coherent transition radiation

    International Nuclear Information System (INIS)

    Barry, W.

    1991-01-01

    A new technique for determining sub-picosecond bunch length using infrared transition radiation and interferometry is proposed. The technique makes use of an infrared Michelson interferometer for measuring the autocorrelation of transition radiation emitted from a thin conducting foil placed in the beam path. The theory of coherent radiation from a charged particle beam passing through a thin conducting foil is presented. Subsequently, the analysis of this radiation through Michelson interferometry is shown to provide the autocorrelation of the longitudinal bunch profile. An example relevant to the CEBAF front end test is discussed. (author)

  2. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

    Full Text Available A novel estimator for the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths directly, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator of the mean length. For this estimator, an approximation of its variance is derived. The accuracy of the approximations is evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on nanocrystalline cellulose.

  3. Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient.

    Science.gov (United States)

    Dongaonkar, R M; Laine, G A; Stewart, R H; Quick, C M

    2011-06-01

    Microvascular permeability to water is characterized by the microvascular filtration coefficient (K(f)). Conventional gravimetric techniques to estimate K(f) rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates and neither account for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate K(f) estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce K(f) from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to K(f) and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of K(f) in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique.

  4. Estimation of Sex From Index and Ring Finger Lengths in An Indigenous Population of Eastern India

    Science.gov (United States)

    Sen, Jaydip; Ghosh, Ahana; Mondal, Nitish; Krishan, Kewal

    2015-01-01

    Introduction Forensic anthropology involves the identification of human remains for medico-legal purposes. Estimation of sex is an essential element of medico-legal investigations when identification of unknown dismembered remains is involved. Aim The present study was conducted to estimate sex from the index and ring finger lengths of adult individuals belonging to an indigenous population of eastern India. Materials and Methods A total of 500 unrelated adult individuals (18-60 years) from the Rajbanshi population (males: 250, females: 250) took part in the study. Of these, 400 participants (males: 200, females: 200) were randomly used to develop sex estimation models using Binary Logistic Regression Analysis (BLR). A separate group of 200 adults (18-60 years) from the Karbi tribal population (males: 100, females: 100) was included to validate the results obtained on the Rajbanshi population. The univariate and bivariate models derived on the study group (n=400) were tested on a hold-out sample of Rajbanshi participants (n=100) and on the other test population of Karbi participants (n=200). Results The results indicate that Index Finger Length (IFL) and Ring Finger Length (RFL) of both hands were significantly longer in males than in females. The ring finger was longer than the index finger in both sexes. The study successfully highlights the existence of sex differences in IFL and RFL (p<0.05). No sex differences were, however, observed for the index-to-ring finger ratio. The predictive accuracy of IFL and RFL in sex estimation ranged between 70-75% (in the hold-out sample from the Rajbanshi population) and 60-66% (in the test sample from the Karbi population). A Receiver Operating Characteristic (ROC) analysis was performed to test the predictive accuracy after predicting the probability of IFL and RFL in sex estimation. The predicted probabilities using ROC analysis were observed to be higher on the left side and in multivariate analysis. Conclusion The
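Binary logistic regression of sex on a finger-length measurement, as used in the study, can be sketched as follows. The finger-length distributions are synthetic assumptions chosen only so that the classifier lands near the reported 60-75% accuracy range, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic index finger lengths (cm): males modestly longer, with overlap
n = 200
ifl = np.concatenate([rng.normal(7.4, 0.4, n),     # males
                      rng.normal(7.0, 0.4, n)])    # females
sex = np.concatenate([np.ones(n), np.zeros(n)])    # 1 = male, 0 = female

# Standardize the predictor, then fit binary logistic regression by
# plain gradient ascent on the log-likelihood
z = (ifl - ifl.mean()) / ifl.std()
X = np.column_stack([np.ones_like(z), z])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (sex - p) / len(sex)

accuracy = np.mean(((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == sex)
print(f"classification accuracy: {accuracy:.2f}")
```

With this much overlap between the sexes, accuracy tops out well below 100% no matter how the model is fitted, which is consistent with the modest 60-75% figures the abstract reports.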

  5. Parallel phase-shifting digital holography using spectral estimation technique.

    Science.gov (United States)

    Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu

    2014-09-20

    We propose parallel phase-shifting digital holography using a spectral estimation technique, which enables the instantaneous acquisition of spectral information and three-dimensional (3D) information of a moving object. In this technique, an interference fringe image that contains six holograms with two phase shifts for three laser lines, such as red, green, and blue, is recorded by a space-division multiplexing method with single-shot exposure. The 3D monochrome images of these three laser lines are numerically reconstructed by a computer and used to estimate the spectral reflectance distribution of the object using a spectral estimation technique. Preliminary experiments demonstrate the validity of the proposed technique.

  6. Dosimetry techniques applied to thermoluminescent age estimation

    International Nuclear Information System (INIS)

    Erramli, H.

    1986-12-01

    The reliability and ease of field application of the measuring techniques of natural radioactivity dosimetry are studied. The natural radioactivity dose in minerals is composed of the internal dose deposited by alpha and beta radiation issued from the sample itself and the external dose deposited by gamma and cosmic radiation issued from the surroundings of the sample. Two techniques for external dosimetry are examined in detail: TL dosimetry and field gamma dosimetry. Calibration and experimental conditions are presented. A new integrated dosimetric method for internal and external dose measurement is proposed: the TL dosimeter is placed in the soil in exactly the same conditions as the sample, for a time long enough for the total dose to be evaluated [fr]

  7. Live Weight Estimation by Chest Girth, Body Length and Body Volume Formula in Minahasa Local Horse

    Directory of Open Access Journals (Sweden)

    B. J. Takaendengan

    2012-08-01

    Full Text Available A study was conducted in the regency of Minahasa to estimate horse live weight from chest girth, body length and a body volume formula (the cylinder volume formula, with chest girth and body length as its dimensions), focused particularly on Minahasa local horses. Data on animal live weight (LW), body length (BL), chest girth (CG) and body volume were collected from 221 stallions kept by traditional household farmers. Animal body volume was calculated using the cylinder volume formula with CG and BL as its components. Regression analysis was carried out for LW against all the linear body measurements. The data were classified on the basis of age, which had a significant (P<0.05) effect on the measurements. Animal live weight was predicted by simple regression models with the animal live weight as the dependent variable (Y) and a single body measurement, either body length, chest girth, or body volume, as the independent variable (X). The correlations between all pairs of measurements were highly significant (P<0.01) for all age groups. Regression analysis showed that live weight could be predicted accurately from body volume (R2 = 0.92) and chest girth (R2 = 0.90). The simple regression model recommended for predicting horse live weight from body volume, for age groups ranging from 3 to ≥10 years old, was: Live weight (kg) = 5.044 + 1.87088 body volume (liters). The analyses of chest girth, body length and the body volume formula provided quantitative measures of body size and shape that are desirable, as they enable genetic parameters for these traits to be estimated and included in breeding programs.
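The recommended model above can be sketched directly: body volume from the cylinder formula (chest girth as the circumference, body length as the cylinder height), then the study's regression. The example horse's measurements below are hypothetical.

```python
import math

def cylinder_volume_liters(chest_girth_cm: float, body_length_cm: float) -> float:
    """Body volume from the cylinder formula: chest girth is the circumference,
    body length is the cylinder height. Result converted from cm^3 to liters."""
    radius = chest_girth_cm / (2 * math.pi)
    return math.pi * radius**2 * body_length_cm / 1000.0

def live_weight_kg(chest_girth_cm: float, body_length_cm: float) -> float:
    """Recommended model from the study: LW (kg) = 5.044 + 1.87088 * volume (liters)."""
    return 5.044 + 1.87088 * cylinder_volume_liters(chest_girth_cm, body_length_cm)

# Hypothetical horse: 150 cm chest girth, 130 cm body length
v = cylinder_volume_liters(150, 130)
w = live_weight_kg(150, 130)
print(f"volume = {v:.1f} L, estimated live weight = {w:.1f} kg")
```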

  8. Modeling relaxation length and density of acacia mangium wood using gamma - ray attenuation technique

    International Nuclear Information System (INIS)

    Tamer A Tabet; Fauziah Abdul Aziz

    2009-01-01

    Wood density measurement is related to several factors that influence wood quality. In this paper, the density, relaxation length and half-thickness value of Acacia mangium wood of ages 3, 5, 7, 10, 11, 13 and 15 years were determined using gamma radiation from a 137 Cs source. Results show that the Acacia mangium tree of age 3 years has the highest relaxation length, 83.33 cm, and the lowest density, 0.43 gcm -3 , while the tree of age 15 years has the lowest relaxation length, 28.56 cm, and the highest density, 0.76 gcm -3 . Results also show that the 3 year-old Acacia mangium wood has the highest half-thickness value, 57.75 cm, and the 15 year-old tree has the lowest half-thickness value, 19.85 cm. Two mathematical models have been developed for predicting the variation of density with relaxation length and half-thickness value for trees of different ages. A good agreement (greater than 85% in most cases) was observed between the measured and predicted values. A very good linear correlation was found between measured density and the age of the tree (R2 = 0.824), and between estimated density and Acacia mangium tree age (R2 = 0.952). (Author)
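The relaxation length and half-thickness quoted above are linked through the exponential attenuation law I = I0 exp(-x/lambda); a minimal sketch (the intensity values in the round-trip check are hypothetical):

```python
import math

def relaxation_length_cm(I0: float, I: float, thickness_cm: float) -> float:
    """Relaxation length lambda from the attenuation law I = I0 * exp(-x / lambda)."""
    return thickness_cm / math.log(I0 / I)

def half_thickness_cm(relaxation_length: float) -> float:
    """Thickness that halves the beam intensity: x_1/2 = lambda * ln 2."""
    return relaxation_length * math.log(2)

# The paper's 3-year-old wood: relaxation length 83.33 cm
print(half_thickness_cm(83.33))   # ~57.76 cm, matching the reported 57.75 cm
```

Consistently, the 15-year-old wood's 28.56 cm relaxation length maps to a half-thickness near the reported 19.85 cm, so the two reported quantities are just the same attenuation measurement in different units.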

  9. Fractal-Based Lightning Channel Length Estimation from Convex-Hull Flash Areas for DC3 Lightning Mapping Array Data

    Science.gov (United States)

    Bruning, Eric C.; Thomas, Ronald J.; Krehbiel, Paul R.; Rison, William; Carey, Larry D.; Koshak, William; Peterson, Harold; MacGorman, Donald R.

    2013-01-01

    We will use VHF Lightning Mapping Array data to estimate NOx per flash and per unit channel length, including the vertical distribution of channel length. What is the best way to find channel length from VHF sources? This paper presents the rationale for the fractal method, which is closely related to the box-covering method.
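The box-covering ingredient can be sketched as follows: cover the mapped VHF source points with cubes of side eps and take N(eps) * eps as a scale-dependent length. For a curve-like channel this overestimates arc length by a direction-dependent factor between 1 and sqrt(3); the paper's fractal method additionally exploits how N(eps) scales with eps. The synthetic helix below is a stand-in for a mapped channel, not real LMA data.

```python
import numpy as np

def box_covering_length(points: np.ndarray, eps: float) -> float:
    """Approximate channel length as (number of occupied cubes of side eps) * eps.
    For a fractal channel the box count N(eps) scales as eps**(-d) with d > 1."""
    boxes = {tuple(idx) for idx in np.floor(points / eps).astype(int)}
    return len(boxes) * eps

# Synthetic "channel": a helix of known arc length, densely sampled
# like VHF sources (coordinates in km)
t = np.linspace(0, 4 * np.pi, 20000)
pts = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
true_length = 4 * np.pi * np.hypot(1, 0.1)   # analytic arc length, ~12.63 km

est = box_covering_length(pts, eps=0.01)
print(f"true {true_length:.2f} km, box-covering estimate {est:.2f} km")
```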

  10. Sexual Dimorphism and Estimation of Height from Body Length Anthropometric Parameters among the Hausa Ethnic Group of Nigeria

    Directory of Open Access Journals (Sweden)

    Jaafar Aliyu

    2018-01-01

    Full Text Available The study was carried out to investigate sexual dimorphism in body length and other anthropometric parameters, and to generate formulae for height estimation using anthropometric measurements of length parameters among the Hausa ethnic group of Kaduna State, Nigeria. A cross-sectional study was conducted on a total of 500 subjects, mainly secondary school students aged 16-27 years; anthropometric measurements were obtained using standard protocols. Significant sexual dimorphism was observed in all parameters except body mass index, with males having significantly (P < 0.05) higher mean values for all parameters except biaxillary distance. Height showed the strongest positive correlation with demispan length, followed by knee height, thigh length, sitting height, hand length, foot length, humeral length, forearm length and weight, respectively. Weak positive correlations were found between height and neck length as well as biaxillary length. The demispan length showed the strongest correlation coefficient and the lowest standard error of estimate, indicating stronger estimation ability than the other parameters. Combining two parameters gave better estimates with lower standard errors of estimate, and combining three parameters improved the estimates further, with correspondingly better correlation coefficients. Male Hausa tend to have larger body proportions than females. Height showed the strongest positive correlation with demispan length. Body length anthropometry proved useful for stature estimation among the Hausa ethnic group of Kaduna State, Nigeria.

  11. Estimation of Stature from Percutaneous Tibia Length of Indigenes of Bekwara Ethnic Group of Cross River State, Nigeria

    Directory of Open Access Journals (Sweden)

    Ugochukwu Godfrey Esomonu

    2016-01-01

    Full Text Available Estimating stature by developing linear regression equations which incorporate the features of fragmented body parts or human skeletal remains has been employed by many forensic anthropologists to establish the identity of victims of mass disasters, although all such formulae are ethnicity, age, and gender specific. The study is aimed at using the percutaneous tibia length (PCTL) to derive a specific regression equation formula which could be used to estimate the stature of adult indigenes of the Bekwara ethnic group in Cross River State. A total of 600 subjects within the age range of 21–45 years were recruited randomly for this research (300 males and 300 females). Observed height and PCTL were measured using standard anthropometric techniques. Stature was estimated from PCTL using simple regression analysis. On analysis of the data, the mean PCTL for males was found to be 43.60 ± 2.31 cm while that of females was 42.55 ± 2.83 cm. The observed height was 165.80 ± 6.88 cm and 156.70 ± 6.06 cm for males and females, respectively. Statistical analysis showed that the male values of the measured parameters were significantly higher than the corresponding female values. The linear regression equations derived for the estimation of height from the PCTL were found to be 5.289 (PCTL) + (−64.78) for males and 4.230 (PCTL) + (−23.28) for females. It was concluded that stature can be estimated from the tibia length of a mutilated leg. Thus, the data of this study are recommended in anthropological studies for stature estimation among the ethnic group under study.
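Reading the study's regression equations as Height = 5.289 * PCTL - 64.78 (males) and Height = 4.230 * PCTL - 23.28 (females), a quick sanity check reproduces the reported mean statures from the reported mean tibia lengths:

```python
def stature_male_cm(pctl_cm: float) -> float:
    """Male regression model from the study: height = 5.289 * PCTL - 64.78."""
    return 5.289 * pctl_cm - 64.78

def stature_female_cm(pctl_cm: float) -> float:
    """Female regression model from the study: height = 4.230 * PCTL - 23.28."""
    return 4.230 * pctl_cm - 23.28

# Plugging in the reported sample means recovers the reported mean heights
print(stature_male_cm(43.60))    # ~165.8 cm (reported male mean height 165.80 cm)
print(stature_female_cm(42.55))  # ~156.7 cm (reported female mean height 156.70 cm)
```

That the regression lines pass through the sample means is a built-in property of least squares, so this check confirms the equations were read correctly rather than validating the model.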

  12. Zero-crossing tracking technique for noninvasive ultrasonic temperature estimation.

    Science.gov (United States)

    Ju, Kuen-Cheng; Liu, Hao-Li

    2010-11-01

    The purpose of this study was to investigate the feasibility of a zero-crossing tracking (ZCT) technique for temperature estimation using ultrasound images. The backscattered ultrasound radio frequency (RF) echo from a heated region experiences time shifts, which have been identified as causing a gross effect on sound speed changes and thermal expansion. The ZCT technique tracks the shifts in the zero-crossing instants between preheated and postheated A-lines to estimate the echo shifts caused by local temperature changes. Compared to the conventional cross-correlation (CCR) technique, ZCT does not require intensive computational loadings for correlation operations; hence, the computational efficiency could be improved. Phantom experiments were performed to compare the results of temperature estimation by using the ZCT and CCR techniques. The imaging probe was a commercial linear array, and a high-intensity focused ultrasound transducer was used as a heating source. The acquired RF echo data were processed using the ZCT and CCR techniques. The estimation results of both techniques were similar. However, the ZCT technique yielded up to 7-fold better computational efficiency than the CCR technique. The ZCT technique has the ability to monitor temperature changes with superior processing speed. This method could be an alternative signal-processing technique for ultrasonic temperature estimation.
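The core of the ZCT idea, locating zero-crossing instants by linear interpolation in the pre- and post-heating A-lines and averaging their shifts, can be sketched as follows. The sampling rate, center frequency, and echo shift are illustrative assumptions, and a pure sinusoid stands in for real backscattered RF data.

```python
import numpy as np

def zero_crossings(signal: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Zero-crossing instants of a sampled waveform, refined by linear interpolation."""
    s = np.sign(signal)
    idx = np.where(s[:-1] * s[1:] < 0)[0]                 # sign change between samples
    frac = signal[idx] / (signal[idx] - signal[idx + 1])  # interpolation fraction
    return t[idx] + frac * (t[idx + 1] - t[idx])

# Pre- and post-heating A-lines modeled as a narrowband echo with a known shift
fs = 50e6                 # 50 MHz sampling (assumed)
f0 = 5e6                  # 5 MHz center frequency (assumed)
shift = 23e-9             # 23 ns echo shift to recover
t = np.arange(0, 4e-6, 1 / fs)
pre = np.sin(2 * np.pi * f0 * t)
post = np.sin(2 * np.pi * f0 * (t - shift))

zc_pre = zero_crossings(pre, t)
zc_post = zero_crossings(post, t)

# Pair each pre-heating crossing with its nearest post-heating crossing
j = np.clip(np.searchsorted(zc_post, zc_pre), 1, len(zc_post) - 1)
nearest = np.where(np.abs(zc_post[j] - zc_pre) < np.abs(zc_post[j - 1] - zc_pre),
                   zc_post[j], zc_post[j - 1])
est_shift = float(np.mean(nearest - zc_pre))
print(f"estimated echo shift: {est_shift * 1e9:.2f} ns (true 23 ns)")
```

Unlike cross-correlation, each crossing costs only a sign test and one division, which is where the reported speedup comes from.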

  13. Estimation of fracture toughness and critical crack length of zircaloy pressure tube from ring tension test

    International Nuclear Information System (INIS)

    Chatterjee, S.; Anantharaman, S.; Balakrishnan, K.S.; Sriharsha, H.K.

    1999-09-01

    Transverse fracture toughness data for zircaloy pressure tubes are needed to assure leak-before-break behaviour during their in-reactor residence. These data are conventionally computed from burst tests and/or tests using compact tension specimens. A study was undertaken to derive the fracture toughness of zircaloy in the temperature range ambient to 300 deg C from the transverse tensile properties. The fracture toughness properties so derived were used to estimate the critical crack lengths of zircaloy pressure tubes in the above temperature range. (author)

  14. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods.

    Science.gov (United States)

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J

    2012-09-06

    Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and choice of using singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.
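The qPCR quantification step the paper scrutinizes can be illustrated with an efficiency-corrected relative T/S ratio (a Pfaffl-style calculation; this is a generic sketch, not the authors' exact pipeline, and the Ct values are hypothetical):

```python
def relative_telomere_length(ct_tel_sample, ct_scg_sample,
                             ct_tel_ref, ct_scg_ref,
                             e_tel=2.0, e_scg=2.0):
    """Efficiency-corrected T/S ratio relative to a calibrator sample.

    T/S = E_tel**(Ct_tel_ref - Ct_tel_sample) / E_scg**(Ct_scg_ref - Ct_scg_sample),
    where E is the inferred amplification efficiency (2.0 = perfect doubling
    per cycle) and scg is the single-copy gene used for normalization.
    """
    t = e_tel ** (ct_tel_ref - ct_tel_sample)
    s = e_scg ** (ct_scg_ref - ct_scg_sample)
    return t / s

# A sample whose telomere amplicon comes up one cycle earlier than the
# calibrator, with identical single-copy-gene Ct, has twice the T/S ratio
print(relative_telomere_length(14.0, 20.0, 15.0, 20.0))                          # 2.0
print(relative_telomere_length(14.0, 20.0, 15.0, 20.0, e_tel=1.9, e_scg=1.9))   # 1.9
```

The second call shows why the inferred amplification efficiency matters: the same one-cycle Ct difference maps to a different ratio when E deviates from perfect doubling, which is one mechanism behind the inter-assay differences the authors report.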

  16. Image Analytical Approach for Needle-Shaped Crystal Counting and Length Estimation

    DEFF Research Database (Denmark)

    Wu, Jian X.; Kucheryavskiy, Sergey V.; Jensen, Linda G.

    2015-01-01

    Estimation of nucleation and crystal growth rates from microscopic information is of critical importance. This can be an especially challenging task if needle growth of crystals is observed. To address this challenge, an image analytical method for counting needle-shaped crystals and estimating their length is presented. Since the current algorithm has a number of parameters that need to be optimized, a combination of simulation of needle crystal growth and Design of Experiments was applied to identify the optimal parameter settings for the algorithm. The algorithm was validated for its accuracy in different scenarios of simulated needle crystallization, and subsequently applied to study the influence of an additive on antisolvent crystallization. The developed algorithm is robust for quantifying heavily intersecting needle crystals in optical microscopy images, and has the potential...

  18. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad-data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique is utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem at the filtering step using jDE, a self-adaptive differential evolution with a prediction-based population re-initialisation technique. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of a transmission line, on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation. - Highlights: • Estimates the states of the power system in a dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
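Brown's double exponential smoothing, used above at the prediction step, can be sketched as follows; the smoothing constant and test series are illustrative, not the paper's power-system states.

```python
def brown_forecast(series, alpha=0.5):
    """One-step-ahead prediction with Brown's double exponential smoothing.

    s1 and s2 are the singly and doubly smoothed series; the level and trend
    estimates are a = 2*s1 - s2 and b = alpha/(1 - alpha) * (s1 - s2), and the
    one-step forecast is a + b.
    """
    s1 = s2 = series[0]
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend

# A noiseless linear state trajectory is tracked (asymptotically) exactly
data = [2.0 * k + 1.0 for k in range(50)]
print(brown_forecast(data))   # close to the true next value, 101.0
```

For a state evolving with a steady trend, the smoothed level and trend converge to the truth, which is why the method serves as a cheap predictor ahead of the filtering step.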

  19. Estimates of bottom roughness length and bottom shear stress in South San Francisco Bay, California

    Science.gov (United States)

    Cheng, R.T.; Ling, C.-H.; Gartner, J.W.; Wang, P.-F.

    1999-01-01

    A field investigation of the hydrodynamics and the resuspension and transport of particulate matter in a bottom boundary layer was carried out in South San Francisco Bay (South Bay), California, during March-April 1995. Using broadband acoustic Doppler current profilers, detailed measurements of the turbulent mean velocity distribution within 1.5 m of the bed were obtained. A global method of data analysis was used for estimating the bottom roughness length zo and bottom shear stress (or friction velocity u*). Field data were examined by dividing the time series of velocity profiles into 24-hour periods and independently analyzing the flooding and ebbing periods. The global method of solution gives consistent values of the bottom roughness length zo and bottom shear stress (or friction velocity u*) in South Bay. Estimated mean values of zo and u* for flooding and ebbing cycles are different; the differences are shown to be caused by tidal current flood-ebb inequality rather than by the flooding or ebbing of the tidal currents per se. The bed shear stress correlates well with a reference velocity; the slope of the correlation defines a drag coefficient. Forty-three days of field data in South Bay show two regimes of zo (and drag coefficient) as a function of the reference velocity. When the mean velocity is >25-30 cm s-1, ln zo (and thus the drag coefficient) is inversely proportional to the reference velocity. The reduction of roughness length is hypothesized to result from sediment erosion by intensifying tidal currents, which smooths the bed. When the mean velocity is <25-30 cm s-1, the correlation between zo and the reference velocity is less clear; a plausible explanation of the scattered zo values under this condition is sediment deposition. Measured sediment data were inadequate to support this hypothesis, but it warrants further field investigation.
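The profile-fitting step behind such estimates is commonly a least-squares fit of the law of the wall, u(z) = (u*/kappa) ln(z/z0). A minimal sketch on a synthetic noiseless profile follows; the abstract's global method fits many profiles jointly, which this single-profile version does not reproduce.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def fit_log_law(z_m: np.ndarray, u_ms: np.ndarray):
    """Fit u(z) = (u*/kappa) * ln(z/z0) by least squares of u against ln(z).
    Returns (u_star, z0): friction velocity and bottom roughness length."""
    slope, intercept = np.polyfit(np.log(z_m), u_ms, 1)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    return u_star, z0

# Synthetic ADCP-like profile in the bottom 1.5 m with known u* and z0
z = np.linspace(0.1, 1.5, 15)                # heights above bed, m
u_star_true, z0_true = 0.02, 0.001           # m/s and m (illustrative values)
u = (u_star_true / KAPPA) * np.log(z / z0_true)

u_star, z0 = fit_log_law(z, u)
print(f"u* = {u_star:.4f} m/s, z0 = {z0 * 1000:.2f} mm")
```

Because z0 enters through the intercept of the semi-log fit, small measurement noise produces large scatter in ln z0, which is one reason the abstract's low-velocity regime shows poorly correlated roughness estimates.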

  20. An Empirical Approach for Estimating Stress-Coupling Lengths for Marine-Terminating Glaciers

    Directory of Open Access Journals (Sweden)

    Ellyn Mary Enderlin

    2016-12-01

Full Text Available Variability in the dynamic behavior of marine-terminating glaciers is poorly understood, despite an increase in the abundance and resolution of observations. When paired with ice thicknesses, surface velocities can be used to quantify the dynamic redistribution of stresses in response to environmental perturbations through computation of the glacier force balance. However, because the force balance is not purely local, force balance calculations must be performed at the spatial scale over which stresses are transferred within glacier ice, or the stress-coupling length (SCL). Here we present a new empirical method to estimate the SCL for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  1. Comparative study on direct and indirect bracket bonding techniques regarding time length and bracket detachment

    Directory of Open Access Journals (Sweden)

    Jefferson Vinicius Bozelli

    2013-12-01

Full Text Available OBJECTIVE: The aim of this study was to assess the time spent on the direct bracket bonding (DBB) and indirect bracket bonding (IBB) techniques. The duration of the laboratory step (IBB) and of the clinical steps (DBB and IBB), as well as the prevalence of loose brackets after a 24-week follow-up, were evaluated. METHODS: Seventeen patients (7 men and 10 women) with a mean age of 21 years, requiring orthodontic treatment, were selected for this study. A total of 304 brackets were used (151 DBB and 153 IBB). The same bracket type and bonding material were used in both groups. Data were submitted to statistical analysis by the Wilcoxon non-parametric test at a 5% level of significance. RESULTS: Considering the total time, the IBB technique was more time-consuming than the DBB (p < 0.001). However, considering only the clinical phase, the IBB took less time than the DBB (p < 0.001). There was no significant difference (p = 0.910) between the time spent on laboratory positioning of the brackets plus the clinical session for IBB and the clinical procedure for DBB. Additionally, no difference was found in the prevalence of loose brackets between the groups. CONCLUSION: The IBB can be suggested as a valid clinical procedure, since the clinical session was faster and the total time spent on laboratory positioning of the brackets plus the clinical procedure was similar to that of the DBB. In addition, both approaches resulted in a similar frequency of loose brackets.
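The paired, non-parametric comparison used above can be sketched with SciPy's Wilcoxon signed-rank test; the minute values below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# paired per-patient bonding times in minutes (hypothetical illustration)
dbb = np.array([40, 42, 38, 45, 41, 39, 44, 43, 40, 42])  # direct bonding
ibb = np.array([55, 58, 52, 60, 54, 51, 57, 59, 53, 56])  # indirect bonding

# paired non-parametric test: each patient contributes one signed difference
stat, p = wilcoxon(dbb, ibb)
significant = p < 0.05
```

With every difference pointing the same way, the signed-rank statistic is 0 and the null of equal times is rejected at the 5% level.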

  2. Aerodynamic roughness length estimation from very high-resolution imaging LIDAR observations over the Heihe basin in China

    Directory of Open Access Journals (Sweden)

    J. Colin

    2010-12-01

Full Text Available Roughness length of land surfaces is an essential variable for the parameterisation of momentum and heat exchanges. The growing interest in estimating surface turbulent fluxes from passive remote sensing has led to an increasing development of models and the common use of simple semi-empirical formulations to estimate surface roughness. Over complex land cover, these approaches would benefit from the combined use of passive remote sensing and land-surface structure measurements from Light Detection And Ranging (LIDAR) techniques. Following early studies based on LIDAR profile data, this paper explores the use of imaging LIDAR measurements for estimating the aerodynamic roughness length over a heterogeneous landscape of the Heihe river basin, a typical inland river basin in northwest China. The point cloud obtained from multiple flight passes over an irrigated farmland area was used to separate the land surface topography and the vegetation canopy into a Digital Elevation Model (DEM) and a Digital Surface Model (DSM), respectively. These two models were then incorporated in two approaches: (i) a strictly geometrical approach, based on the calculation of the plan surface density and the frontal surface density, to derive a geometrical surface roughness; (ii) a more aerodynamic approach, in which both the DEM and DSM are introduced into a Computational Fluid Dynamics (CFD) model. The inversion of the resulting 3-D wind field leads to a fine representation of the aerodynamic surface roughness. Examples of the use of these approaches are presented for various wind directions, together with a cross-comparison of results on heterogeneous land cover and complex roughness element structures.
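One classical instance of the geometrical (morphometric) approach is Lettau's rule, z0 ≈ 0.5·h·λf, where h is the mean roughness-element height and λf the frontal-area density; this is a well-known simple formulation, not necessarily the exact one used in the paper, and the numbers below are illustrative:

```python
def lettau_z0(mean_height, frontal_area, lot_area):
    """Lettau (1969) morphometric estimate of aerodynamic roughness length:
    z0 = 0.5 * h * lambda_f, with lambda_f = frontal area / lot (plan) area."""
    return 0.5 * mean_height * (frontal_area / lot_area)

# a DSM-derived stand of roughness elements (illustrative values):
# mean canopy height 2 m, 150 m^2 of frontal area over a 100 m x 100 m plot
z0 = lettau_z0(mean_height=2.0, frontal_area=150.0, lot_area=10000.0)
```

Here λf = 0.015, giving z0 = 0.015 m; both h and λf are exactly the quantities a DEM/DSM pair from imaging LIDAR provides.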

  3. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

Full Text Available The use of maximum length sequences (m-sequences) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial, the selection of the polynomial order can be problematic in practice. Usually, the m-sequence is repetitively delivered in a looped fashion. Ensemble averaging is carried out as the first step, followed by cross-correlation analysis to deconvolve the linear/nonlinear responses. Based on the classical noise reduction property of the additive noise model, theoretical equations are derived in the present study for the noise attenuation ratios (NARs) after the averaging and correlation processes. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order-7 and order-9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is determined by the total length of valid data as well as by the stimulation rate. The present study offers a guideline for m-sequence selection, which can be used to estimate the required recording time and signal-to-noise ratio when designing m-sequence experiments.
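The core recovery step — circular cross-correlation of the looped recording with the m-sequence — can be sketched as follows. The LFSR taps implement a primitive degree-7 trinomial, and the two-peak impulse response is an assumption for the synthetic check:

```python
import numpy as np

def m_sequence(order, taps):
    """+-1 maximum length sequence from a Fibonacci LFSR (all-ones seed);
    `taps` are the stage numbers fed back (chosen from a primitive polynomial)."""
    state = [1] * order
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq, dtype=float)

def deconvolve(recorded, m):
    """Circular cross-correlation with the m-sequence (via FFT): because the
    m-sequence autocorrelation is nearly a delta, this recovers the response."""
    n = len(m)
    return np.real(np.fft.ifft(np.fft.fft(recorded) * np.conj(np.fft.fft(m)))) / n

m = m_sequence(order=7, taps=(7, 6))          # period 2^7 - 1 = 127

# synthetic looped recording: m-sequence circularly convolved with a response
h = np.zeros(127)
h[3], h[10] = 1.0, 0.5                        # assumed two-peak impulse response
recorded = np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(h)))
recovered = deconvolve(recorded, m)
```

The recovered trace reproduces the two peaks up to the small -1/N offset inherent in the m-sequence autocorrelation, which is exactly why longer sequences trade recording time against this residual.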

  4. Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

    Science.gov (United States)

    McCabe, Patrick; Korb, Oliver; Cole, Jason

    2014-05-27

    We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions.
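A minimal KDE sketch in the spirit of the abstract, applied to a hypothetical bimodal bond-length sample (the two modes and their widths are assumptions, not data from the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# hypothetical bimodal bond-length sample (Angstrom): a shorter, tighter mode
# and a longer, broader mode, values chosen only for illustration
lengths = np.concatenate([rng.normal(1.33, 0.01, 500),
                          rng.normal(1.54, 0.02, 500)])

kde = gaussian_kde(lengths)            # bandwidth from Scott's rule by default
grid = np.linspace(1.2, 1.7, 501)
pdf = kde(grid)                        # smooth, differentiable density estimate

# local maxima of the smooth PDF recover both modes, unlike a coarse histogram
interior = (pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:])
modes = grid[1:-1][interior]
```

Because the KDE is a sum of Gaussians it is everywhere differentiable, which is the property that makes it usable inside gradient-based structure optimization.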

  5. Decision support for hospital bed management using adaptable individual length of stay estimations and shared resources.

    Science.gov (United States)

    Schmidt, Robert; Geisler, Sandra; Spreckelsen, Cord

    2013-01-07

Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients' condition, the necessity of the treatment, and the patients' preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and thus do not take into account the uncertainty of the patient's recovery. Furthermore, the effect of aggregated bed capacities has not been investigated in this context. Computer-supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities), has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one baseline approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio. The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP; and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these
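The expected-free-capacity computation with Bernoulli occupation states can be sketched via the Poisson-binomial distribution of the number of occupied beds; the per-patient probabilities below are illustrative assumptions:

```python
def occupancy_pmf(stay_probs):
    """PMF of the number of occupied beds when patient i is still present
    with probability p_i (Poisson-binomial, by dynamic programming)."""
    pmf = [1.0]
    for p in stay_probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1.0 - p)      # patient discharged
            nxt[k + 1] += q * p          # patient stays
        pmf = nxt
    return pmf

def expected_free_beds(capacity, stay_probs):
    """Expected free capacity of a ward given individual stay probabilities."""
    pmf = occupancy_pmf(stay_probs)
    expected_occupied = sum(k * q for k, q in enumerate(pmf))
    return capacity - expected_occupied

# ward of 6 beds, 4 current patients with assumed individual stay probabilities
free = expected_free_beds(6, [0.9, 0.8, 0.5, 0.2])
```

The full PMF (not just its mean) is what allows an admission planner to price the risk of overbooking a ward, which is the role the approximated probability densities play in the cost function above.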

  6. Growth estimation of mangrove cockle Anadara tuberculosa (Mollusca: Bivalvia: application and evaluation of length-based methods

    Directory of Open Access Journals (Sweden)

    Luis A Flores

    2011-03-01

Full Text Available Growth is one of the key processes in the dynamics of exploited resources, since it provides part of the information required for structured population models. Growth of the mangrove cockle, Anadara tuberculosa, was estimated through length-based methods (ELEFAN I and NSLCA) using diverse shell length intervals (SLI). The variability of the L∞, k and phi prime (Φ`) estimates and the effect of each sample were quantified by jackknife techniques. Results showed the same L∞ estimates from ELEFAN I and NSLCA for each SLI used, and all L∞ were within the expected range. On the contrary, k estimates differed between methods. Jackknife estimation uncovered the tendency of ELEFAN I to overestimate k with increases in SLI, and allowed the identification of differences in uncertainty (PE and CV) between both methods. The average values of Φ` derived from NSLCA with a 1.5 mm interval and from length-age sources were similar and corresponded to ranges reported by other authors. Estimates of L∞, k and Φ` from NSLCA with a 1.5 mm interval were 85.97 mm, 0.124/year and 2.953 with jackknife, and 86.36 mm, 0.110/year and 2.914 without jackknife, respectively. Based on the observed evidence and the biology of the species, NSLCA with jackknife and an SLI of 1.5 mm is suggested as an ad hoc approach to estimate the growth parameters of the mangrove cockle. Rev. Biol. Trop. 59 (1): 159-170. Epub 2011 March 01.

  7. Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas

    Science.gov (United States)

    Covián, E.; Puente, V.; Casero, M.

    2017-10-01

The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevation at a given planimetric position within a triangle mesh, 2D and 3D lengths (including perimeters of enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first on a small example with a limited number of measurement points, and then on two real-life measurements. In addition, the proposed methods have been incorporated into commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality into commercial software developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
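The variance-covariance propagation for the simplest measurand, a 2D length, can be sketched directly: the gradient of the length with respect to the two endpoints propagates the point covariances to a length variance. The coordinates and 1 cm standard deviations below are assumptions for illustration:

```python
import numpy as np

def length_2d(p1, p2, cov1, cov2):
    """2-D distance between two surveyed points and its standard uncertainty,
    by first-order propagation of independent point covariances:
    var(L) = g' C1 g + g' C2 g, with g the gradient of L w.r.t. one endpoint."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    L = float(np.hypot(d[0], d[1]))
    g = d / L                                  # dL/dp2 (dL/dp1 = -g, same quadratic form)
    var = g @ np.asarray(cov1, float) @ g + g @ np.asarray(cov2, float) @ g
    return L, float(np.sqrt(var))

# two points, each with an assumed 1 cm standard deviation per coordinate
cov = np.diag([0.01 ** 2, 0.01 ** 2])
L, sigma_L = length_2d((0.0, 0.0), (3.0, 4.0), cov, cov)
```

For isotropic point errors the length variance is simply the sum of the two point variances, so sigma_L = 0.01·√2 m here; a perimeter is then handled by summing segment contributions with the appropriate covariances between shared vertices.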

  8. Estimating age from recapture data: integrating incremental growth measures with ancillary data to infer age-at-length

    Science.gov (United States)

    Eaton, Mitchell J.; Link, William A.

    2011-01-01

    Estimating the age of individuals in wild populations can be of fundamental importance for answering ecological questions, modeling population demographics, and managing exploited or threatened species. Significant effort has been devoted to determining age through the use of growth annuli, secondary physical characteristics related to age, and growth models. Many species, however, either do not exhibit physical characteristics useful for independent age validation or are too rare to justify sacrificing a large number of individuals to establish the relationship between size and age. Length-at-age models are well represented in the fisheries and other wildlife management literature. Many of these models overlook variation in growth rates of individuals and consider growth parameters as population parameters. More recent models have taken advantage of hierarchical structuring of parameters and Bayesian inference methods to allow for variation among individuals as functions of environmental covariates or individual-specific random effects. Here, we describe hierarchical models in which growth curves vary as individual-specific stochastic processes, and we show how these models can be fit using capture–recapture data for animals of unknown age along with data for animals of known age. We combine these independent data sources in a Bayesian analysis, distinguishing natural variation (among and within individuals) from measurement error. We illustrate using data for African dwarf crocodiles, comparing von Bertalanffy and logistic growth models. The analysis provides the means of predicting crocodile age, given a single measurement of head length. The von Bertalanffy was much better supported than the logistic growth model and predicted that dwarf crocodiles grow from 19.4 cm total length at birth to 32.9 cm in the first year and 45.3 cm by the end of their second year. Based on the minimum size of females observed with hatchlings, reproductive maturity was estimated
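A deterministic sketch of the von Bertalanffy curve using the three lengths reported in the abstract (19.4 cm at birth, 32.9 cm at 1 yr, 45.3 cm at 2 yr). This closed-form three-point fit is a simplification for illustration only; the paper's actual method is a hierarchical Bayesian fit with individual random effects:

```python
import math

def vb_from_three_ages(L0, L1, L2):
    """von Bertalanffy parameters from lengths at three equally spaced ages
    (t = 0, 1, 2): the residuals Linf - L_t form a geometric sequence, so
    (Linf - L1)^2 = (Linf - L0)(Linf - L2) can be solved for Linf."""
    Linf = (L0 * L2 - L1 ** 2) / (L0 + L2 - 2.0 * L1)
    k = -math.log((Linf - L1) / (Linf - L0))
    return Linf, k

def age_from_length(L, L0, Linf, k):
    """Invert L(t) = Linf - (Linf - L0) * exp(-k t) to predict age from length."""
    return -math.log((Linf - L) / (Linf - L0)) / k

# reported dwarf-crocodile total lengths at ages 0, 1 and 2 years
Linf, k = vb_from_three_ages(19.4, 32.9, 45.3)
age = age_from_length(40.0, 19.4, Linf, k)   # predicted age for a 40 cm animal
```

By construction the inverted curve returns ages 1 and 2 for the fitted lengths; the Bayesian treatment in the paper additionally separates individual growth variation from measurement error, which this sketch cannot do.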

  9. Is length an appropriate estimator to characterize pulmonary alveolar capillaries? A critical evaluation in the human lung

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Weibel, Ewald R.; Hahn, Ute

    2010-01-01

    Stereological estimations of total capillary length have been used to characterize changes in the alveolar capillary network (ACN) during developmental processes or pathophysiological conditions. Here, we analyzed whether length estimations are appropriate to describe the 3D nature of the ACN. Semi...... resulted in a mean of 2,746 km (SD: 722 km). Because of the geometry of the ACN both approaches carry an unpredictable bias. The bias incurred by the design-based approach is proportional to the ratio between radius and length of the capillary segments in the ACN, the number of branching points...... and the winding of the capillaries. The model-based approach is biased because of the real noncylindrical shape of capillaries and the network structure. In conclusion, the estimation of the total length of capillaries in the ACN cannot be recommended as the geometry of the ACN does not fulfill the requirements...

  10. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  11. Cubic spline approximation techniques for parameter estimation in distributed systems

    Science.gov (United States)

    Banks, H. T.; Crowley, J. M.; Kunisch, K.

    1983-01-01

    Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.
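The flavor of cubic-spline approximation in an estimation setting can be illustrated with a least-squares fit of a cubic B-spline to noisy observations of an unknown function. This is only a loose analogy to the semigroup framework of the paper; the target function, knot layout and noise level are assumptions:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# noisy observations of an unknown spatially varying function (synthetic)
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)

# cubic B-spline basis: interior knots plus boundary knots of multiplicity k+1
knots = np.concatenate([[0.0] * 4, np.linspace(0.1, 0.9, 9), [1.0] * 4])
spline = make_lsq_spline(x, y, knots, k=3)   # least-squares coefficient fit

max_err = np.max(np.abs(spline(x) - np.sin(2 * np.pi * x)))
```

The finite-dimensional coefficient vector stands in for the unknown distributed parameter; convergence results of the kind the paper establishes concern refining this basis.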

  12. Smoothing techniques for decision-directed MIMO OFDM channel estimation

    Science.gov (United States)

    Beinschob, P.; Zölzer, U.

    2011-07-01

With the purpose of supplying the demand for faster and more reliable communication, multiple-input multiple-output (MIMO) systems in conjunction with Orthogonal Frequency Division Multiplexing (OFDM) are the subject of extensive research. Successful decoding requires an accurate channel estimate at the receiver, which is gained either by evaluating reference symbols, which require dedicated resources in the transmit signal, or by decision-directed approaches. The latter offer a convenient way to maximize bandwidth efficiency, but they suffer from error propagation due to the dependency between the decoding of the current data symbol and the calculation of the next channel estimate. In our contribution we consider linear smoothing techniques to mitigate error propagation by introducing backward dependencies in the decision-based channel estimation. Designed as a post-processing step, this technique can lower frame repeat requests if the data is insensitive to latency. The problem of the high memory requirements of FIR smoothing in the context of MIMO-OFDM is addressed with a recursive approach that requires minimal resources with virtually no performance loss. Channel estimate normalized mean square error and bit error rate (BER) performance evaluations are presented. For reference, a median filtering technique is presented that operates on the MIMO time-frequency grids of channel coefficients to reduce the peak-like outliers produced by wrong decisions due to unsuccessful decoding. Performance in terms of bit error rate is compared to the proposed smoothing techniques.
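The reference median-filtering idea — suppressing peak-like outliers on the time-frequency grid of channel coefficients — can be sketched directly; the grid size and the single decision-error spike are assumptions for illustration:

```python
import numpy as np

def median_filter_2d(grid, size=3):
    """Median filter over a time-frequency grid of channel-coefficient
    magnitudes; border cells use the available (truncated) neighborhood."""
    r = size // 2
    out = np.empty(grid.shape, dtype=float)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            win = grid[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = np.median(win)
    return out

# smooth channel-magnitude grid with one peak-like outlier from a wrong decision
H = np.ones((8, 16))
H[4, 7] = 25.0
H_filtered = median_filter_2d(H)
```

A single outlier never reaches the median of its neighborhood, so the spike is removed while the smooth background is preserved; smoothly varying real channels are similarly left almost untouched, which is why the median filter serves as a reasonable baseline for the proposed linear smoothers.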

  13. Identification and characterization of some aromatic rice mutants using amplified fragment length polymorphism (AFLP) technique

    International Nuclear Information System (INIS)

    Fahmy, E.M.; Sobieh, S. E. S.; Ayaad, M. H.; El-Gohary, A. A.; Rownak, A.

    2012-12-01

Accurate identification of genotypes is considered one of the most important mechanisms used in the recording or protection of plant varieties. The investigation was conducted at the experimental farm belonging to the Egyptian Atomic Energy Authority, Inshas. The aim was to evaluate grain quality characteristics and molecular genetic variation, using the Amplified Fragment Length Polymorphism (AFLP) technique, among six rice genotypes: the Egyptian Jasmine aromatic rice cultivar and five aromatic rice mutants in the M3 mutagenic generation. Two mutants (Egy22 and Egy24) were selected in the M2 generation from Sakha 102 populations irradiated with 200 and 400 Gy of gamma rays, respectively, and three mutants (Egy32, Egy33 and Egy34) were selected in the M2 generation from Sakha 103 populations irradiated with 200, 300 and 400 Gy of gamma rays, respectively. The results showed that strong aroma was obtained for mutant Egy22 as compared with the Egyptian Jasmine cultivar (moderate aroma). Seven primer combinations were used across the six rice genotypes at the molecular level with the AFLP marker. The sizes of the AFLP fragments ranged from 51 to 494 bp. The total number of amplified bands was 997, among them 919 polymorphic bands, representing 92.2%. The highest similarity index (89%) was observed between Egyptian Jasmine and Egy32, followed by 82% between Egyptian Jasmine and Egy34. On the other hand, the lowest similarity index (48%) was between Egyptian Jasmine and Egy24. Among the six rice genotypes, Egy24 produced the highest number of AFLP markers, giving 49 unique markers (23 positive and 26 negative); Egy22 showed 23 unique markers (27 positive and 6 negative), while Egy33 was characterized by 17 unique markers (12 positive and 5 negative). Finally, Egyptian Jasmine was discriminated by the lowest number of markers, 10 (6 positive and 4 negative). The study further confirmed that the AFLP technique was able to differentiate rice genotypes by a higher number

  14. Estimating Length of Stay by Patient Type in the Neonatal Intensive Care Unit.

    Science.gov (United States)

    Lee, Henry C; Bennett, Mihoko V; Schulman, Joseph; Gould, Jeffrey B; Profit, Jochen

    2016-07-01

Objective: To develop length of stay prediction models for neonatal intensive care unit patients. Study Design: We used data from 2008 to 2010 to construct length of stay models for neonates admitted within 1 day of age to neonatal intensive care units and surviving to discharge home. Results: Our sample included 23,551 patients. Median length of stay was 79 days when birth weight was g. Risk factors for longer length of stay varied by weight. Units with shorter length of stay for one weight group had shorter lengths of stay for other groups. Conclusion: Risk models for comparative assessments of length of stay need to appropriately account for weight, particularly considering the cutoff of 1,500 g. Refining prediction may benefit counseling of families and help health care systems allocate resources efficiently.

  15. Age determination by back length for African savanna elephants: extending age assessment techniques for aerial-based surveys.

    Science.gov (United States)

    Trimble, Morgan J; van Aarde, Rudi J; Ferreira, Sam M; Nørgaard, Camilla F; Fourie, Johan; Lee, Phyllis C; Moss, Cynthia J

    2011-01-01

    Determining the age of individuals in a population can lead to a better understanding of population dynamics through age structure analysis and estimation of age-specific fecundity and survival rates. Shoulder height has been used to accurately assign age to free-ranging African savanna elephants. However, back length may provide an analog measurable in aerial-based surveys. We assessed the relationship between back length and age for known-age elephants in Amboseli National Park, Kenya, and Addo Elephant National Park, South Africa. We also compared age- and sex-specific back lengths between these populations and compared adult female back lengths across 11 widely dispersed populations in five African countries. Sex-specific Von Bertalanffy growth curves provided a good fit to the back length data of known-age individuals. Based on back length, accurate ages could be assigned relatively precisely for females up to 23 years of age and males up to 17. The female back length curve allowed more precise age assignment to older females than the curve for shoulder height does, probably because of divergence between the respective growth curves. However, this did not appear to be the case for males, but the sample of known-age males was limited to ≤27 years. Age- and sex-specific back lengths were similar in Amboseli National Park and Addo Elephant National Park. Furthermore, while adult female back lengths in the three Zambian populations were generally shorter than in other populations, back lengths in the remaining eight populations did not differ significantly, in support of claims that growth patterns of African savanna elephants are similar over wide geographic regions. Thus, the growth curves presented here should allow researchers to use aerial-based surveys to assign ages to elephants with greater precision than previously possible and, therefore, to estimate population variables.

  16. Age Determination by Back Length for African Savanna Elephants: Extending Age Assessment Techniques for Aerial-Based Surveys

    Science.gov (United States)

    Trimble, Morgan J.; van Aarde, Rudi J.; Ferreira, Sam M.; Nørgaard, Camilla F.; Fourie, Johan; Lee, Phyllis C.; Moss, Cynthia J.

    2011-01-01

    Determining the age of individuals in a population can lead to a better understanding of population dynamics through age structure analysis and estimation of age-specific fecundity and survival rates. Shoulder height has been used to accurately assign age to free-ranging African savanna elephants. However, back length may provide an analog measurable in aerial-based surveys. We assessed the relationship between back length and age for known-age elephants in Amboseli National Park, Kenya, and Addo Elephant National Park, South Africa. We also compared age- and sex-specific back lengths between these populations and compared adult female back lengths across 11 widely dispersed populations in five African countries. Sex-specific Von Bertalanffy growth curves provided a good fit to the back length data of known-age individuals. Based on back length, accurate ages could be assigned relatively precisely for females up to 23 years of age and males up to 17. The female back length curve allowed more precise age assignment to older females than the curve for shoulder height does, probably because of divergence between the respective growth curves. However, this did not appear to be the case for males, but the sample of known-age males was limited to ≤27 years. Age- and sex-specific back lengths were similar in Amboseli National Park and Addo Elephant National Park. Furthermore, while adult female back lengths in the three Zambian populations were generally shorter than in other populations, back lengths in the remaining eight populations did not differ significantly, in support of claims that growth patterns of African savanna elephants are similar over wide geographic regions. Thus, the growth curves presented here should allow researchers to use aerial-based surveys to assign ages to elephants with greater precision than previously possible and, therefore, to estimate population variables. PMID:22028925

  17. Age determination by back length for African savanna elephants: extending age assessment techniques for aerial-based surveys.

    Directory of Open Access Journals (Sweden)

    Morgan J Trimble

    Full Text Available Determining the age of individuals in a population can lead to a better understanding of population dynamics through age structure analysis and estimation of age-specific fecundity and survival rates. Shoulder height has been used to accurately assign age to free-ranging African savanna elephants. However, back length may provide an analog measurable in aerial-based surveys. We assessed the relationship between back length and age for known-age elephants in Amboseli National Park, Kenya, and Addo Elephant National Park, South Africa. We also compared age- and sex-specific back lengths between these populations and compared adult female back lengths across 11 widely dispersed populations in five African countries. Sex-specific Von Bertalanffy growth curves provided a good fit to the back length data of known-age individuals. Based on back length, accurate ages could be assigned relatively precisely for females up to 23 years of age and males up to 17. The female back length curve allowed more precise age assignment to older females than the curve for shoulder height does, probably because of divergence between the respective growth curves. However, this did not appear to be the case for males, but the sample of known-age males was limited to ≤27 years. Age- and sex-specific back lengths were similar in Amboseli National Park and Addo Elephant National Park. Furthermore, while adult female back lengths in the three Zambian populations were generally shorter than in other populations, back lengths in the remaining eight populations did not differ significantly, in support of claims that growth patterns of African savanna elephants are similar over wide geographic regions. Thus, the growth curves presented here should allow researchers to use aerial-based surveys to assign ages to elephants with greater precision than previously possible and, therefore, to estimate population variables.

  18. Evaluation of mfcc estimation techniques for music similarity

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Christensen, Mads Græsbøll; Murthi, Manohar

    2006-01-01

Spectral envelope parameters in the form of mel-frequency cepstral coefficients are often used for capturing timbral information of music signals in connection with genre classification applications. In this paper, we evaluate mel-frequency cepstral coefficient (MFCC) estimation techniques, namely...... the classical FFT and linear prediction based implementations and an implementation based on the more recent MVDR spectral estimator. The performance of these methods is evaluated in genre classification using a probabilistic classifier based on Gaussian Mixture models. MFCCs based on fixed order, signal

  19. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions... The ...-passive behaviour of the proposed method comes from combining the non-intrusive behaviour of the passive methods with the better accuracy of the active methods. The simulation results reveal the good accuracy of the proposed method.

  20. A novel technique applying spectral estimation to Johnson noise thermometry

    Energy Technology Data Exchange (ETDEWEB)

    Ezell, N Dianne Bull [ORNL; Britton Jr, Charles L [ORNL; Roberts, Michael [ORNL; Holcomb, David Eugene [ORNL; Ericson, Milton Nance [ORNL; Djouadi, Seddik M [ORNL; Wood, Richard Thomas [ORNL

    2017-01-01

    Johnson noise thermometry (JNT) is one of many important measurements used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is strongly dependent on the electromagnetic environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate, drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed in this document. Spectral estimation is a key component in the signal processing algorithm utilized for EMI removal and temperature calculation. Applying these techniques requires the simple addition of the electronics and signal processing to existing resistive thermometers.

  1. Automatic string generation for estimating in vivo length changes of the medial patellofemoral ligament during knee flexion.

    Science.gov (United States)

    Graf, Matthias; Diether, Salomon; Vlachopoulos, Lazaros; Fucentese, Sandro; Fürnstahl, Philipp

    2014-06-01

    Modeling ligaments as three-dimensional strings is a popular method for in vivo estimation of ligament length. The purpose of this study was to develop an algorithm for automated generation of non-penetrating strings between insertion points and to evaluate its feasibility for estimating length changes of the medial patellofemoral ligament during normal knee flexion. Three-dimensional knee models were generated from computed tomography (CT) scans of 10 healthy subjects. The knee joint under weight-bearing was acquired in four flexion positions (0°-120°). The path between insertion points was computed in each position to quantify string length and isometry. The average string length was maximal in 0° of flexion (64.5 ± 3.9 mm between femoral and proximal patellar point; 62.8 ± 4.0 mm between femoral and distal patellar point). It was minimal in 30° (60.0 ± 2.6 mm) for the proximal patellar string and in 120° (58.7 ± 4.3 mm) for the distal patellar string. The insertion points were considered to be isometric in 4 of the 10 subjects. The proposed algorithm appears to be feasible for estimating string lengths between insertion points in an automatic fashion. The length measurements based on CT images acquired under physiological loading conditions may give further insights into knee kinematics.

  2. Application of cokriging techniques for the estimation of hail size

    Science.gov (United States)

    Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier

    2018-01-01

    There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above-mentioned techniques in an attempt to minimise the limitations and exploit the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes, with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason it constitutes an optimal test bench.

  3. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Full Text Available Using the fixed-point theorem, sovereign default models are solved by numerical value-function iteration and calibration methods, whose computational constraints greatly limit the models' quantitative performance and forgo their country-specific quantitative projection ability. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given the structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. Thus, with this technique we offer an alternative solution method for dynamic general equilibrium models of sovereign default to improve upon their quantitative inference ability.

  4. Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques

    Science.gov (United States)

    Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.

    2016-12-01

    Estimations of seeding, harvest and phenological growth stage of crops are important sources of information for monitoring crop progress and crop yield forecasting. Growth stage has traditionally been estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day-length information to estimate accumulated heat and photoperiod, with thresholds used to determine when these stages are most likely. These approaches, however, are crop- and hybrid-dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time-series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. Time-series NDVI provides information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values, or smoothing these values and using thresholds to detect changes in slope that are indicative of rapidly increasing or decreasing 'greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data, which leads to errors in estimating local features in the time-series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine both meteorological approaches and remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
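
    The threshold-on-slope idea for NDVI time series can be sketched as follows; the smoothing window, threshold value and synthetic NDVI sequence are illustrative assumptions, not values from the research described:

```python
def smooth(series, window=3):
    """Simple moving-average smoothing of an NDVI time series,
    a stand-in for the more elaborate function fitting described."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def detect_green_up(ndvi, slope_threshold=0.05):
    """Return the first index where the smoothed NDVI slope exceeds
    the threshold, a crude proxy for the start of rapid green-up."""
    s = smooth(ndvi)
    for i in range(1, len(s)):
        if s[i] - s[i - 1] > slope_threshold:
            return i
    return None

# Synthetic season: bare soil, green-up, peak, senescence.
ndvi = [0.15, 0.15, 0.16, 0.35, 0.55, 0.70, 0.72, 0.60, 0.40, 0.25]
start = detect_green_up(ndvi)
```

    Cloud-contaminated observations would first need gap filling or compositing; this is exactly where the meteorological information can constrain the remote sensing estimate.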

  5. Problems associated with pooling mark-recapture data prior to estimating stopover length for migratory passerines

    Science.gov (United States)

    Sara R. Morris; Erica M. Turner; David A. Liebner; Amanda M. Larracuente; H. David Sheets

    2005-01-01

    One measure of the importance of a stopover site is the length of time that migrants spend at an area; however, measuring the time birds spend at a stopover site has proven difficult. Most banding studies have presented only minimum length of stopover, based on the difference between initial capture and final recapture of birds that are captured more than once. Cormack-...
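
    The minimum-stopover measure criticised in the abstract (final recapture minus initial capture) is simple to compute, which is part of its appeal despite its downward bias; a sketch:

```python
from datetime import date

def minimum_stopover_days(capture_dates):
    """Minimum stopover length for one bird: days between first
    capture and last recapture. Underestimates true stopover, since
    birds are present before first capture and after last recapture."""
    if len(capture_dates) < 2:
        return None  # never recaptured: no stopover estimate possible
    return (max(capture_dates) - min(capture_dates)).days

# Hypothetical capture history for a single banded bird.
bird = [date(2005, 5, 3), date(2005, 5, 6), date(2005, 5, 9)]
days = minimum_stopover_days(bird)
```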

  6. Telomerecat: A ploidy-agnostic method for estimating telomere length from whole genome sequencing data

    NARCIS (Netherlands)

    Farmery, James H. R.; Smith, Mike L.; Lynch, Andy G.; Huissoon, Aarnoud; Furnell, Abigail; Mead, Adam; Levine, Adam P.; Manzur, Adnan; Thrasher, Adrian; Greenhalgh, Alan; Parker, Alasdair; Sanchis-Juan, Alba; Richter, Alex; Gardham, Alice; Lawrie, Allan; Sohal, Aman; Creaser-Myers, Amanda; Frary, Amy; Greinacher, Andreas; Themistocleous, Andreas; Peacock, Andrew J.; Marshall, Andrew; Mumford, Andrew; Rice, Andrew; Webster, Andrew; Brady, Angie; Koziell, Ania; Manson, Ania; Chandra, Anita; Hensiek, Anke; Veld, Anna Huis In't; Maw, Anna; Kelly, Anne M.; Moore, Anthony; Vonk Noordegraaf, Anton; Attwood, Antony; Herwadkar, Archana; Ghofrani, Ardi; Houweling, Arjan C.; Girerd, Barbara; Furie, Bruce; Treacy, Carmen M.; Millar, Carolyn M.; Sewell, Carrock; Roughley, Catherine; Titterton, Catherine; Williamson, Catherine; Hadinnapola, Charaka; Deshpande, Charu; Toh, Cheng-Hock; Bacchelli, Chiara; Patch, Chris; Geet, Chris Van; Babbs, Christian; Bryson, Christine; Penkett, Christopher J.; Rhodes, Christopher J.; Watt, Christopher; Bethune, Claire; Booth, Claire; Lentaigne, Claire; McJannet, Coleen; Church, Colin; French, Courtney; Samarghitean, Crina; Halmagyi, Csaba; Gale, Daniel; Greene, Daniel; Hart, Daniel; Allsup, David; Bennett, David; Edgar, David; Kiely, David G.; Gosal, David; Perry, David J.; Keeling, David; Montani, David; Shipley, Debbie; Whitehorn, Deborah; Fletcher, Debra; Krishnakumar, Deepa; Grozeva, Detelina; Kumararatne, Dinakantha; Thompson, Dorothy; Josifova, Dragana; Maher, Eamonn; Wong, Edwin K. S.; Murphy, Elaine; Dewhurst, Eleanor; Louka, Eleni; Rosser, Elisabeth; Chalmers, Elizabeth; Colby, Elizabeth; Drewe, Elizabeth; McDermott, Elizabeth; Thomas, Ellen; Staples, Emily; Clement, Emma; Matthews, Emma; Wakeling, Emma; Oksenhendler, Eric; Turro, Ernest; Reid, Evan; Wassmer, Evangeline; Raymond, F. 
Lucy; Hu, Fengyuan; Kennedy, Fiona; Soubrier, Florent; Flinter, Frances; Kovacs, Gabor; Polwarth, Gary; Ambegaonkar, Gautum; Arno, Gavin; Hudson, Gavin; Woods, Geoff; Coghlan, Gerry; Hayman, Grant; Arumugakani, Gururaj; Schotte, Gwen; Cook, H. Terry; Alachkar, Hana; Lango Allen, Hana; Lango-Allen, Hana; Stark, Hannah; Stauss, Hans; Schulze, Harald; Boggard, Harm J.; Baxendale, Helen; Dolling, Helen; Firth, Helen; Gall, Henning; Watson, Henry; Longhurst, Hilary; Markus, Hugh S.; Watkins, Hugh; Simeoni, Ilenia; Emmerson, Ingrid; Roberts, Irene; Quinti, Isabella; Wanjiku, Ivy; Gibbs, J. Simon R.; Thaventhiran, James; Whitworth, James; Hurst, Jane; Collins, Janine; Suntharalingam, Jay; Payne, Jeanette; Thachil, Jecko; Martin, Jennifer M.; Martin, Jennifer; Carmichael, Jenny; Maimaris, Jesmeen; Paterson, Joan; Pepke-Zaba, Joanna; Heemskerk, Johan W. M.; Gebhart, Johanna; Davis, John; Pasi, John; Bradley, John R.; Wharton, John; Stephens, Jonathan; Rankin, Julia; Anderson, Julie; Vogt, Julie; von Ziegenweldt, Julie; Rehnstrom, Karola; Megy, Karyn; Talks, Kate; Peerlinck, Kathelijne; Yates, Katherine; Freson, Kathleen; Stirrups, Kathleen; Gomez, Keith; Smith, Kenneth G. 
C.; Carss, Keren; Rue-Albrecht, Kevin; Gilmour, Kimberley; Masati, Larahmie; Scelsi, Laura; Southgate, Laura; Ranganathan, Lavanya; Ginsberg, Lionel; Devlin, Lisa; Willcocks, Lisa; Ormondroyd, Liz; Lorenzo, Lorena; Harper, Lorraine; Allen, Louise; Daugherty, Louise; Chitre, Manali; Kurian, Manju; Humbert, Marc; Tischkowitz, Marc; Bitner-Glindzicz, Maria; Erwood, Marie; Scully, Marie; Veltman, Marijke; Caulfield, Mark; Layton, Mark; McCarthy, Mark; Ponsford, Mark; Toshner, Mark; Bleda, Marta; Wilkins, Martin; Mathias, Mary; Reilly, Mary; Afzal, Maryam; Brown, Matthew; Rondina, Matthew; Stubbs, Matthew; Haimel, Matthias; Lees, Melissa; Laffan, Michael A.; Browning, Michael; Gattens, Michael; Richards, Michael; Michaelides, Michel; Lambert, Michele P.; Makris, Mike; de Vries, Minka; Mahdi-Rogers, Mohamed; Saleem, Moin; Thomas, Moira; Holder, Muriel; Eyries, Mélanie; Clements-Brod, Naomi; Canham, Natalie; Dormand, Natalie; Zuydam, Natalie Van; Kingston, Nathalie; Ghali, Neeti; Cooper, Nichola; Morrell, Nicholas W.; Yeatman, Nigel; Roy, Noémi; Shamardina, Olga; Alavijeh, Omid S.; Gresele, Paolo; Nurden, Paquita; Chinnery, Patrick; Deegan, Patrick; Yong, Patrick; Man, Patrick Yu Wai; Corris, Paul A.; Calleja, Paul; Gissen, Paul; Bolton-Maggs, Paula; Rayner-Matthews, Paula; Ghataorhe, Pavandeep K.; Gordins, Pavel; Stein, Penelope; Collins, Peter; Dixon, Peter; Kelleher, Peter; Ancliff, Phil; Yu, Ping; Tait, R. 
Campbell; Linger, Rachel; Doffinger, Rainer; Machado, Rajiv; Kazmi, Rashid; Sargur, Ravishankar; Favier, Remi; Tan, Rhea; Liesner, Ri; Antrobus, Richard; Sandford, Richard; Scott, Richard; Trembath, Richard; Horvath, Rita; Hadden, Rob; MackenzieRoss, Rob V.; Henderson, Robert; MacLaren, Robert; James, Roger; Ghurye, Rohit; DaCosta, Rosa; Hague, Rosie; Mapeta, Rutendo; Armstrong, Ruth; Noorani, Sadia; Murng, Sai; Santra, Saikat; Tuna, Salih; Johnson, Sally; Chong, Sam; Lear, Sara; Walker, Sara; Goddard, Sarah; Mangles, Sarah; Westbury, Sarah; Mehta, Sarju; Hackett, Scott; Nejentsev, Sergey; Moledina, Shahin; Bibi, Shahnaz; Meehan, Sharon; Othman, Shokri; Revel-Vilk, Shoshana; Holden, Simon; McGowan, Simon; Staines, Simon; Savic, Sinisa; Burns, Siobhan; Grigoriadou, Sofia; Papadia, Sofia; Ashford, Sofie; Schulman, Sol; Ali, Sonia; Park, Soo-Mi; Davies, Sophie; Stock, Sophie; Ali, Souad; Deevi, Sri V. V.; Gräf, Stefan; Ghio, Stefano; Wort, Stephen J.; Jolles, Stephen; Austin, Steve; Welch, Steve; Meacham, Stuart; Rankin, Stuart; Walker, Suellen; Seneviratne, Suranjith; Holder, Susan; Sivapalaratnam, Suthesh; Richardson, Sylvia; Kuijpers, Taco; Bariana, Tadbir K.; Bakchoul, Tamam; Everington, Tamara; Renton, Tara; Young, Tim; Aitman, Timothy; Warner, Timothy Q.; Vale, Tom; Hammerton, Tracey; Pollock, Val; Matser, Vera; Cookson, Victoria; Clowes, Virginia; Qasim, Waseem; Wei, Wei; Erber, Wendy N.; Ouwehand, Willem H.; Astle, William; Egner, William; Turek, Wojciech; Henskens, Yvonne; Tan, Yvonne

    2018-01-01

    Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously...

  7. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
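
    A minimal sketch of RSS-based source localization under a log-distance path-loss model (the paper's cognitive-relay setting and CRLB analysis are more involved; the anchor layout, reference power, path-loss exponent and grid search below are illustrative assumptions):

```python
import math

def rss_from_distance(d, p0=-40.0, n=3.0):
    """Log-distance path-loss model: RSS (dBm) at distance d metres,
    with p0 the RSS at 1 m and n the path-loss exponent (assumed)."""
    return p0 - 10.0 * n * math.log10(d)

def locate(anchors, observed, step=0.5):
    """Grid-search least-squares source location from RSS readings
    at known receiver positions, a crude stand-in for ML estimation."""
    best, best_err = None, float("inf")
    grid = [i * step for i in range(0, 41)]  # search a 0..20 m square
    for x in grid:
        for y in grid:
            err = 0.0
            for (ax, ay), rss in zip(anchors, observed):
                d = max(math.hypot(x - ax, y - ay), 1e-6)
                err += (rss - rss_from_distance(d)) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0)]
true_pos = (5.0, 7.0)
observed = [rss_from_distance(math.hypot(true_pos[0] - ax, true_pos[1] - ay))
            for ax, ay in anchors]
est = locate(anchors, observed)
```

    With noise-free readings the true position is recovered exactly; shadowing noise on the RSS values is what the CRLB in the paper bounds.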

  8. HLA-DPB1 typing with polymerase chain reaction and restriction fragment length polymorphism technique in Danes

    DEFF Research Database (Denmark)

    Hviid, T V; Madsen, H O; Morling, N

    1992-01-01

    We have used the polymerase chain reaction (PCR) in combination with the restriction fragment length polymorphism (RFLP) technique for HLA-DPB1 typing. After PCR amplification of the polymorphic second exon of the HLA-DPB1 locus, the PCR product was digested with seven allele-specific restriction...

  9. Anatomic ACL reconstruction produces greater graft length change during knee range-of-motion than transtibial technique.

    Science.gov (United States)

    Lubowitz, James H

    2014-05-01

    Because the distance between the knee ACL femoral and tibial footprint centrums changes during knee range-of-motion, surgeons must understand the effect of ACL socket position on graft length in order to avoid graft rupture, which may occur when tensioning and fixation are performed at the incorrect knee flexion angle. The purpose of this study is to evaluate change in intra-articular length of a reconstructed ACL during knee range-of-motion, comparing anatomic versus transtibial techniques. After power analysis, seven matched pairs of cadaveric knees were tested. The ACL was debrided, and femoral and tibial footprint centrums for anatomic versus transtibial techniques were identified and marked. A suture anchor was placed at the femoral centrum and a custom, cannulated suture-centring device at the tibial centrum, and excursion of the suture, representing length change of an ACL graft during knee range-of-motion, was measured in millimeters and recorded using a digital transducer. Mean increase in length as the knee was ranged 120°–0° (full extension) was 4.5 mm (±2.0 mm) for the transtibial versus 6.7 mm (±0.9 mm) for the anatomic ACL technique. A significant difference in length change occurs during knee range-of-motion both within groups and between the two groups. Change in length of the ACL intra-articular distance during knee range-of-motion is greater for the anatomic socket position compared to the transtibial position. Surgeons performing anatomic single-bundle ACL reconstruction may tension and fix grafts with the knee in full extension to minimize the risk of graft stretch, rupture, or knee capture in full extension. This technique may also result in knee anterior–posterior laxity in knee flexion.

  10. A method for estimating the fibre length in fibre-PLA composites.

    Science.gov (United States)

    Chinga-Carrasco, G; Solheim, O; Lenes, M; Larsen, A

    2013-04-01

    Wood pulp fibres are an important component of environmentally sound and renewable fibre-reinforced composite materials. The high aspect ratio of pulp fibres is an essential property with respect to the mechanical properties a given composite material can achieve. The length of pulp fibres is affected by composite processing operations, which emphasizes the importance of assessing the pulp fibre length and how it may be affected by a given process for manufacturing composites. In this work a new method for measuring the length distribution of fibres and fibre fragments has been developed. The method is based on: (i) dissolving the composites, (ii) preparing the fibres for image acquisition and (iii) image analysis of the resulting fibre structures. The image analysis part is relatively simple to implement and is based on images acquired with a desktop scanner and a new ImageJ plugin. The quantification of fibre length demonstrated the fibre-shortening effect of an extrusion process and subsequent injection moulding. Fibres with original lengths of >1 mm were shortened to fibre fragments with lengths of <200 μm. The shortening seems to be affected by the number of times the fibres have passed through the extruder, the amount of chain extender and the fraction of fibres in the polymer matrix. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  11. Load-estimation techniques for unsteady incompressible flows

    Science.gov (United States)

    Rival, David E.; Oudheusden, Bas van

    2017-03-01

    In a large variety of fluid-dynamic problems, it is often impossible to directly measure the instantaneous aerodynamic or hydrodynamic forces on a moving body. Examples include studies of propulsion in nature, either with mechanical models or living animals, wings, and blades subjected to significant surface contamination, such as icing, sting blockage effects, etc. In these circumstances, load estimation from flow-field data provides an attractive alternative method, while at the same time providing insight into the relationship between unsteady loadings and their associated vortex-wake dynamics. Historically, classical control-volume techniques based on time-averaged measurements have been used to extract the mean forces. With the advent of high-speed imaging, and the rapid progress in time-resolved volumetric measurements, such as Tomo-PIV and 4D-PTV, it is becoming feasible to estimate the instantaneous forces on bodies of complex geometry and/or motion. For effective application under these conditions, a number of challenges still exist, including the near-body treatment of the acceleration field as well as the estimation of pressure on the outer surfaces of the control volume. Additional limitations in temporal and spatial resolutions, and their associated impact on the feasibility of the various approaches, are also discussed. Finally, as an outlook towards the development of future methodologies, the potential application of Lagrangian techniques is explored.

  12. Estimation technique on thermal properties data of reactor materials

    International Nuclear Information System (INIS)

    Imai, Hidetaka; Baba, Tetsuya; Matsumoto, Tsuyoshi; Kishimoto, Isao; Taketoshi, Naoyuki; Arai, Teruo

    1998-01-01

    This study aims at rapid measurement of thermal properties (thermal conductivity, thermal diffusivity, specific heat capacity, and emissivity) with the highest precision, up to ultra-high temperatures, for characterizing high-temperature materials expected in future reactor engineering, such as plasma-facing materials of nuclear fusion reactors. It was conducted under several sub-themes, including highly precise measurement and characterization of thermal properties and techniques for estimating their data. Precise measurement of the specific heat capacity of meso-phase graphite was carried out; between 1000 °C and 3000 °C a difference of about 5% was observed. As a result, it was found that highly precise estimation of thermal property data requires taking the value of the specific heat capacity into account. (G.K.)

  13. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year
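
    As an illustration of fitting a learning curve to accident-rate data, the sketch below uses ordinary least squares in log-log space on synthetic data; this is a simplified stand-in for the maximum likelihood and method-of-moments estimators the abstract describes, and real accident counts would be integers:

```python
import math

def fit_power_law_learning(exposure, accidents):
    """Fit rate(T) = a * T**(-b), where T is cumulative operating
    years, to per-period accident rates by least squares in
    log-log space (a simple stand-in for the MLE fits described)."""
    xs, ys = [], []
    cum = 0.0
    for t, n in zip(exposure, accidents):
        cum += t
        if n > 0:
            xs.append(math.log(cum))        # log cumulative exposure
            ys.append(math.log(n / t))      # log rate in this period
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return math.exp(my - slope * mx), -slope  # (a, b)

# Synthetic data generated exactly from a = 4, b = 1 (hypothetical).
exposure = [100.0] * 4                       # reactor-years per period
cum_years = [100.0, 200.0, 300.0, 400.0]
accidents = [4.0 / t * 100.0 for t in cum_years]
a_hat, b_hat = fit_power_law_learning(exposure, accidents)
```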

  15. Transport-Constrained Extensions of Collision and Track Length Estimators for Solutions of Radiative Transport Problems.

    Science.gov (United States)

    Kong, Rong; Spanier, Jerome

    2013-06-01

    In this paper we develop novel extensions of collision and track length estimators for the complete space-angle solutions of radiative transport problems. We derive the relevant equations, prove that our new estimators are unbiased, and compare their performance with that of more conventional estimators. Such comparisons, based on numerical solutions of simple one-dimensional slab problems, indicate the potential superiority of the new estimators for a wide variety of more general transport problems.
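
    The baseline estimators that the paper extends can be illustrated in the simplest possible setting, a purely absorbing one-dimensional slab with normal incidence; this toy sketch (cross-section, thickness and history count are arbitrary assumptions) is not the paper's transport-constrained extension, but both scores are unbiased for the volume-integrated flux:

```python
import math, random

def estimate_flux(sigma_t, length, n_particles, seed=1):
    """Monoenergetic, purely absorbing slab, normal incidence.
    Track-length estimator: score the path travelled inside the slab.
    Collision estimator: score 1/sigma_t per collision in the slab."""
    rng = random.Random(seed)
    track, collision = 0.0, 0.0
    for _ in range(n_particles):
        # Sample the distance to the next collision.
        x = -math.log(1.0 - rng.random()) / sigma_t
        if x < length:
            track += x
            collision += 1.0 / sigma_t
        else:
            track += length  # crosses the slab without colliding
    return track / n_particles, collision / n_particles

flux_track, flux_coll = estimate_flux(1.0, 2.0, 200000)
exact = (1.0 - math.exp(-1.0 * 2.0)) / 1.0  # analytic integral of e^(-x)
```

    For this slab the exact value is (1 - e^(-ΣL))/Σ ≈ 0.8647; the track-length estimator typically has lower variance because every history contributes a score, while the collision estimator scores only on histories that collide.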

  16. Use of the Rietveld technique for estimating cation distributions

    International Nuclear Information System (INIS)

    Nord, A.G.

    1984-01-01

    The use of the Rietveld full-profile refinement technique to estimate cation distributions is exemplified by a neutron powder diffraction study of the farringtonite-type solid solution γ-(Zn0.70Fe0.30)3(PO4)2, with five- and six-coordinated cation sites. A review of similar studies of phases with the farringtonite, sarcopside, Ni2P4O12 or olivine structure is given. The accuracy is discussed in terms of K_D distribution coefficients and metal-oxygen distances. Some investigations of olivines based on X-ray single-crystal data are reviewed for comparison. (Auth.)

  17. Cost Estimation Techniques for C3I System Software.

    Science.gov (United States)

    1984-07-01


  18. Sound Power Estimation by Laser Doppler Vibration Measurement Techniques

    Directory of Open Access Journals (Sweden)

    G.M. Revel

    1998-01-01

    Full Text Available The aim of this paper is to propose simple and quick methods for the determination of the sound power emitted by a vibrating surface, using non-contact vibration measurement techniques. In order to calculate the acoustic power by vibration data processing, two different approaches are presented. The first is based on the method proposed in the Standard ISO/TR 7849, while the second is based on the superposition theorem. A laser-Doppler scanning vibrometer has been employed for vibration measurements. Laser techniques open up new possibilities in this field because of their high spatial resolution and their non-intrusiveness. The technique has been applied here to estimate the acoustic power emitted by a loudspeaker diaphragm. Results have been compared with those from a commercial Boundary Element Method (BEM) software package and experimentally validated by acoustic intensity measurements. Predicted and experimental results are in agreement (differences lower than 1 dB), showing that the proposed techniques can be employed as rapid solutions for many practical and industrial applications. Uncertainty sources are addressed and their effect is discussed.
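
    The ISO/TR 7849 route from surface vibration to sound power can be sketched as W = ρ c S σ ⟨v²⟩, with ⟨v²⟩ the surface-averaged mean-square normal velocity. The air properties, unit radiation efficiency σ and the small velocity grid below are illustrative assumptions, not measurements from the paper:

```python
import math

def sound_power_db(v_rms_points, area, rho=1.21, c=343.0, sigma=1.0):
    """ISO/TR 7849-style estimate: W = rho*c*S*sigma*<v^2>, where
    <v^2> is the surface-averaged mean-square normal velocity, S the
    radiating area and sigma the (assumed) radiation efficiency."""
    v2_mean = sum(v * v for v in v_rms_points) / len(v_rms_points)
    watts = rho * c * area * sigma * v2_mean
    return 10.0 * math.log10(watts / 1e-12)  # sound power level re 1 pW

# Hypothetical scanning-vibrometer grid of rms velocities (m/s).
grid = [1e-3, 2e-3, 1.5e-3, 0.5e-3]
level = sound_power_db(grid, area=0.05)
```

    The high spatial resolution of a scanning vibrometer matters here because ⟨v²⟩ is an average over the grid: sparse sampling biases the estimate on surfaces with strong modal patterns.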

  19. Modified thresholding technique of MMSPCA for extracting respiratory activity from short length PPG signal.

    Science.gov (United States)

    Motin, Mohammod Abdul; Karmakar, Chandan Kumar; Palaniswami, Marimuthu

    2017-07-01

    In this paper, we propose an automatic threshold selection for modified multiscale principal component analysis (MMSPCA) for reliable extraction of respiratory activity (RA) from short-length photoplethysmographic (PPG) signals. MMSPCA was applied to the PPG signal with varying data lengths, from 30 seconds to 60 seconds, to extract the respiratory activity. To examine the performance, we used 100 epochs of simultaneously recorded PPG and respiratory signals extracted from the MIMIC database (PhysioNet ATM data bank). The respiratory signal was used as the ground truth, and several performance metrics such as magnitude squared coherence (MSC), correlation coefficient (CC) and normalized root mean square error (NRMSE) were used to compare the performance of the MMSPCA-based PPG-derived RA. At a data length of 30 seconds, MSC, CC and NRMSE for the proposed thresholding were 0.65, 0.62 and -0.82 dB respectively, whereas they were 0.68, 0.47 and 0.25 dB respectively for the existing thresholding. These results illustrate that the proposed threshold selection performs better than the existing threshold selection for short-length data.

  20. Improved Battery State Estimation Using Novel Sensing Techniques

    Science.gov (United States)

    Abdul Samad, Nassim

    Lithium-ion batteries have been considered a great complement or substitute for gasoline engines due to their high energy and power density capabilities among other advantages. However, these types of energy storage devices are still yet not widespread, mainly because of their relatively high cost and safety issues, especially at elevated temperatures. This thesis extends existing methods of estimating critical battery states using model-based techniques augmented by real-time measurements from novel temperature and force sensors. Typically, temperature sensors are located near the edge of the battery, and away from the hottest core cell regions, which leads to slower response times and increased errors in the prediction of core temperatures. New sensor technology allows for flexible sensor placement at the cell surface between cells in a pack. This raises questions about the optimal locations of these sensors for best observability and temperature estimation. Using a validated model, which is developed and verified using experiments in laboratory fixtures that replicate vehicle pack conditions, it is shown that optimal sensor placement can lead to better and faster temperature estimation. Another equally important state is the state of health or the capacity fading of the cell. This thesis introduces a novel method of using force measurements for capacity fade estimation. Monitoring capacity is important for defining the range of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Current capacity estimation techniques require a full discharge to monitor capacity. The proposed method can complement or replace current methods because it only requires a shallow discharge, which is especially useful in EVs and PHEVs. Using the accurate state estimation accomplished earlier, a method for downsizing a battery pack is shown to effectively reduce the number of cells in a pack without compromising safety. The influence on the battery performance (e

  1. Estimation of Length and Order of Polynomial-based Filter Implemented in the Form of Farrow Structure

    Directory of Open Access Journals (Sweden)

    S. Vukotic

    2016-08-01

    Full Text Available Digital polynomial-based interpolation filters implemented using the Farrow structure are used in Digital Signal Processing (DSP) to calculate the signal between its discrete samples. The two basic design parameters for these filters are the number of polynomial segments defining the finite length of the impulse response, and the order of the polynomial in each segment. The complexity of the implementation structure and the frequency-domain performance depend on these two parameters. This contribution presents estimation formulae for the length and polynomial order of polynomial-based filters for various types of requirements, including stopband attenuation, transition-band width, passband deviation, and passband/stopband weighting.
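
    A minimal example of the Farrow idea, here using a cubic Lagrange segment polynomial (one common choice of polynomial-based filter; the paper treats the general design problem): the per-segment coefficients are fixed functions of the samples, and the fractional delay mu enters only through Horner evaluation:

```python
def farrow_cubic(x, n, mu):
    """Interpolate between samples x[n] and x[n+1] at fractional
    delay mu in [0, 1), using a cubic Lagrange polynomial through
    x[n-1..n+2] evaluated in Farrow (Horner) form."""
    a, b, c, d = x[n - 1], x[n], x[n + 1], x[n + 2]
    c0 = b
    c1 = -a / 3.0 - b / 2.0 + c - d / 6.0
    c2 = a / 2.0 - b + c / 2.0
    c3 = -a / 6.0 + b / 2.0 - c / 2.0 + d / 6.0
    return ((c3 * mu + c2) * mu + c1) * mu + c0

# A cubic interpolator reproduces any polynomial up to degree 3
# exactly; a straight line is the simplest check.
line = [0.0, 1.0, 2.0, 3.0, 4.0]
mid = farrow_cubic(line, 1, 0.5)
```

    In a hardware Farrow structure the four coefficient branches are fixed FIR filters and only mu changes at runtime, which is what makes the structure attractive for arbitrary sample-rate conversion.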

  2. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    Full Text Available The accurate estimation of deposits adhering to insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method of insulator contaminations (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we proposed a novel evaluation framework of IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the decision ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top 3 decisive factors in estimating insulator contaminations. On that basis, different classification algorithms such as Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performances of the different methods. SVM yielded the best overall accuracy among the three algorithms, and an overall accuracy of more than 70% was achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contaminations.
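    As an illustration of the classification-plus-cross-validation step, here is a self-contained sketch of one of the compared algorithms, a Mahalanobis minimum-distance classifier, evaluated with 10-fold cross-validation on synthetic stand-in data (the pooled covariance and all numbers are our assumptions, not the study's features):

```python
import numpy as np

def mahalanobis_fit(X, y):
    """Class means plus the inverse of a pooled covariance matrix."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return classes, means, cov_inv

def mahalanobis_predict(model, X):
    """Assign each sample to the class with minimum Mahalanobis distance."""
    classes, means, cov_inv = model
    dists = np.stack([
        np.einsum('ij,jk,ik->i', X - means[c], cov_inv, X - means[c])
        for c in classes
    ])
    return classes[np.argmin(dists, axis=0)]

def cross_val_accuracy(X, y, k=10):
    """Plain k-fold cross-validation over contiguous folds."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        model = mahalanobis_fit(X[train], y[train])
        accs.append(np.mean(mahalanobis_predict(model, X[f]) == y[f]))
    return float(np.mean(accs))

# Synthetic stand-ins for 3 features (e.g. AOD, source strength, precipitation).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, size=(100, 3)), rng.normal(2.0, size=(100, 3))])
y = np.repeat([0, 1], 100)
print(cross_val_accuracy(X, y))  # well-separated classes give high accuracy
```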

  3. Quantitative estimation of Holocene surface salinity variation in the Black Sea using dinoflagellate cyst process length

    DEFF Research Database (Denmark)

    Mertens, Kenneth Neil; Bradley, Lee R.; Takano, Yoshihito

    2012-01-01

    this calibration to make a regional reconstruction of paleosalinity in the Black Sea, calculated by averaging out process length variation observed at four core sites from the Black Sea with high sedimentation rates and dated by multiple mollusk shell ages. Results show a very gradual change of salinity from ∼14...

  4. Increasing the accuracy and precision of relative telomere length estimates by RT qPCR

    NARCIS (Netherlands)

    Eastwood, Justin R.; Mulder, Ellis; Verhulst, Simon; Peters, Anne

    As attrition of telomeres, DNA caps that protect chromosome integrity, is accelerated by various forms of stress, telomere length (TL) has been proposed as an indicator of lifetime accumulated stress. In ecological studies, it has been used to provide insights into ageing, life history trade-offs,

  5. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    Science.gov (United States)

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  6. Estimation of alpine skier posture using machine learning techniques.

    Science.gov (United States)

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-10-13

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing.

  7. Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.

    Science.gov (United States)

    Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza

    2018-03-02

    Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of the dam displacements than the one based on pendulum observations.

  8. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Bojan Nemec

    2014-10-01

    Full Text Available High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier’s neck. A key issue is how to estimate other more relevant parameters of the skier’s body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier’s body with an inverted-pendulum model that oversimplified the skier’s body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier’s body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing.

  9. Using support vector machines in the multivariate state estimation technique

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K.C.

    1999-01-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications

  10. ESTIMATION OF BURSTS LENGTH AND DESIGN OF A FIBER DELAY LINE BASED OBS ROUTER

    Directory of Open Access Journals (Sweden)

    RICHA AWASTHI

    2017-03-01

    Full Text Available The demand for higher bandwidth is increasing day by day, and this ever-growing demand cannot be catered to with current electronic technology. Thus, new communication technologies such as optical communication need to be used. In this context, OBS (optical burst switching) is considered a next-generation data transfer technology. In OBS, information is transmitted in the form of optical bursts of variable length. However, contention among bursts is a major problem in OBS systems, and for contention resolution deflection routing is mostly preferred. However, deflection routing increases delay. In this paper, it is shown that the arrival of very large bursts is a rare event, and that for moderate burst lengths the buffering of contending bursts can provide a very effective solution. In the case of arrival of large bursts, deflection can still be used.

  11. Blood Capillary Length Estimation from Three-Dimensional Microscopic Data by Image Analysis and Stereology

    Czech Academy of Sciences Publication Activity Database

    Kubínová, Lucie; Mao, X. W.; Janáček, Jiří

    2013-01-01

    Roč. 19, č. 4 (2013), s. 898-906 ISSN 1431-9276 R&D Projects: GA MŠk(CZ) ME09010; GA MŠk(CZ) LH13028; GA ČR(CZ) GAP108/11/0794 Institutional research plan: CEZ:AV0Z5011922 Institutional support: RVO:67985823 Keywords : capillaries * confocal microscopy * image analysis * length * rat brain * stereology Subject RIV: EA - Cell Biology Impact factor: 1.757, year: 2013

  12. Carrier Estimation Using Classic Spectral Estimation Techniques for the Proposed Demand Assignment Multiple Access Service

    Science.gov (United States)

    Scaife, Bradley James

    1999-01-01

    In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary-phase shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectral match filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we will describe an all-digital hardware design based around a Motorola DSP56303 and high-speed A/D which implements this technique in real-time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
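    The averaged-DFT-plus-peak-detection step described above can be sketched in a few lines. The signal model and parameters are our illustration, not the thesis's (the spectral matched-filtering stage is omitted):

```python
import numpy as np

def estimate_carrier(x, fs, seg_len=256):
    """Averaged-DFT estimate: mean magnitude spectrum over segments, then peak pick."""
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    spec = np.abs(np.fft.rfft(segs, axis=1)).mean(axis=0)  # averaged DFT
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs[np.argmax(spec)]

# BPSK-like test signal: low-rate random signs on a 1 kHz carrier plus noise,
# i.e. a small processing gain, as the abstract assumes.
fs, f_c, sps = 8000.0, 1000.0, 64          # sample rate, carrier, samples/bit
rng = np.random.default_rng(1)
bits = np.repeat(np.sign(rng.standard_normal(64)), sps)
t = np.arange(bits.size) / fs
x = bits * np.cos(2 * np.pi * f_c * t) + 0.1 * rng.standard_normal(bits.size)
print(estimate_carrier(x, fs))
```

Averaging the magnitude spectra of several short segments suppresses the noise floor, so the peak stays detectable even with modest modulation sidelobes.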

  13. A method for estimating age of medieval sub-adults from infancy to adulthood based on long bone length

    DEFF Research Database (Denmark)

    Primeau, Charlotte; Friis, Laila Saidane; Sejrsen, Birgitte

    2016-01-01

    OBJECTIVES: To develop a series of regression equations for estimating age from length of long bones for archaeological sub-adults when aging from dental development cannot be performed. Further, to compare derived ages when using these regression equations, and two other methods. MATERIAL AND METHODS: A total of 183 skeletal sub-adults from the Danish medieval period were aged from radiographic images. Linear regression formulae were then produced for individual bones. Age was then estimated from the femur length using three different methods: equations developed in this study, data based...... as later than the medieval period, although this would require further testing. The quadratic equations are suggested to yield more accurate ages than simple linear regression equations. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.

  14. Length and volume of morphologically normal kidneys in Korean Children: Ultrasound measurement and estimation using body size

    International Nuclear Information System (INIS)

    Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung; Kim, Ji Eun

    2013-01-01

    To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate the renal sizes using the anthropometric measurements. We examined 794 Korean children under 18 years of age, including a total of 394 boys and 400 girls without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 x L x D x W (cm³). Anthropometric indices including height (cm), weight (kg) and body mass index (kg/m²) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate the renal length and the volume with those anthropometric indices that were mostly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R², 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R², 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R², 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 x Height (± 1.135) and = 2.374 + 0.047 x Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 x Weight (± 15.920) and = 7.303 + 1.532 x Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established from Korean children and simple equations between them have been developed for use in clinical practice.
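    The reported regression equations translate directly into code; the small helper below (function names are ours) applies the right-kidney equations from the abstract:

```python
def right_renal_length_cm(height_cm):
    """Estimated right renal length from height; 95% range is ± 1.135 cm."""
    return 2.383 + 0.045 * height_cm

def right_renal_volume_cm3(weight_kg):
    """Estimated right renal volume from weight; 95% range is ± 15.920 cm³."""
    return 7.941 + 1.246 * weight_kg

print(right_renal_length_cm(120))  # ≈ 7.78 cm for a 120 cm child
print(right_renal_volume_cm3(25))  # ≈ 39.1 cm³ for a 25 kg child
```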

  15. Different methods to estimate the Einstein-Markov coherence length in turbulence

    Science.gov (United States)

    Stresing, R.; Kleinhans, D.; Friedrich, R.; Peinke, J.

    2011-04-01

    We study the Markov property of experimental velocity data of different homogeneous isotropic turbulent flows. In particular, we examine the stochastic “cascade” process of nested velocity increments ξ(r):=u(x+r)-u(x) as a function of scale r for different nesting structures. It was found in previous work that, for a certain nesting structure, the stochastic process of ξ(r) has the Markov property for step sizes larger than the so-called Einstein-Markov coherence length lEM, which is of the order of magnitude of the Taylor microscale λ [Phys. Lett. A 359, 335 (2006)]. We now show that, if a reasonable definition of the effective step size of the process is applied, this result holds independently of the nesting structure. Furthermore, we analyze the stochastic process of the velocity u as a function of the spatial position x. Although this process does not have the exact Markov property, a characteristic length scale lu(x)≈lEM can be identified on the basis of a statistical test for the Markov property. Using a method based on the matrix of transition probabilities, we examine the significance of the non-Markovian character of the velocity u(x) for the statistical properties of turbulence.

  16. Handover Management for VoWLAN Based on Estimation of AP Queue Length and Frame Retries

    Science.gov (United States)

    Niswar, Muhammad; Kashihara, Shigeru; Tsukamoto, Kazuya; Kadobayashi, Youki; Yamaguchi, Suguru

    Switching a communication path from one Access Point (AP) to another in inter-domain WLANs is a critical challenge for delay-sensitive applications such as Voice over IP (VoIP), because communication quality during handover (HO) is likely to deteriorate. To maintain VoIP quality during HO, we need to solve many problems. In particular, in bi-directional communication such as VoIP, an AP becomes a bottleneck as the number of VoIP calls increases. As a result, packets queued in the AP buffer may experience a large queuing delay or packet losses due to an increase in queue length or buffer overflow, thereby degrading VoIP quality for Mobile Nodes (MNs). To avoid this degradation, MNs need to appropriately and autonomously execute HO in response to changes in the wireless network condition, i.e., the deterioration of wireless link quality and the congestion state at the AP. In this paper, we propose an HO decision strategy considering frame retries, AP queue length, and transmission rate at an MN for maintaining VoIP quality during HO. Through simulation experiments, we then show that our proposed method can maintain VoIP quality during HO by properly detecting the wireless network condition.
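    As a toy illustration of such a decision strategy, the predicate below combines the three indicators named in the abstract. All names and threshold values are entirely illustrative, not the paper's:

```python
def should_handover(frame_retries, ap_queue_len, tx_rate_mbps,
                    retry_limit=3, queue_limit=50, min_rate=2.0):
    """Trigger HO when the link degrades (retries, low rate) or the AP is congested.

    Thresholds are hypothetical placeholders for tuning, not values from the paper.
    """
    link_bad = frame_retries > retry_limit or tx_rate_mbps < min_rate
    ap_congested = ap_queue_len > queue_limit
    return link_bad or ap_congested

print(should_handover(5, 10, 11.0))  # → True: too many frame retries
print(should_handover(1, 10, 11.0))  # → False: link and AP both healthy
```

The point of combining indicators is that frame retries alone cannot distinguish a fading link from a congested AP, which is why the abstract's strategy monitors both.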

  17. Amide Proton Transfer Imaging of Diffuse Gliomas: Effect of Saturation Pulse Length in Parallel Transmission-Based Technique.

    Science.gov (United States)

    Togao, Osamu; Hiwatashi, Akio; Keupp, Jochen; Yamashita, Koji; Kikuchi, Kazufumi; Yoshiura, Takashi; Yoneyama, Masami; Kruiskamp, Marijn J; Sagiyama, Koji; Takahashi, Masaya; Honda, Hiroshi

    2016-01-01

    In this study, we evaluated the dependence of saturation pulse length on APT imaging of diffuse gliomas using a parallel transmission-based technique. Twenty-two patients with diffuse gliomas (9 low-grade gliomas, LGGs, and 13 high-grade gliomas, HGGs) were included in the study. APT imaging was conducted at 3T with a 2-channel parallel transmission scheme using three different saturation pulse lengths (0.5 s, 1.0 s, 2.0 s). The 2D fast spin-echo sequence was used for imaging. The Z-spectrum was obtained at 25 frequency offsets from -6 to +6 ppm (step 0.5 ppm). A point-by-point B0 correction was performed with a B0 map. Magnetization transfer ratio (MTRasym) and ΔMTRasym (contrast between tumor and normal white matter) at 3.5 ppm were compared among different saturation lengths. A significant increase in MTRasym (3.5 ppm) of HGG was found when the length of saturation pulse became longer (3.09 ± 0.54% at 0.5 s, 3.83 ± 0.67% at 1 s, 4.12 ± 0.97% at 2 s), but MTRasym (3.5 ppm) was not different among the saturation lengths in LGG. ΔMTRasym (3.5 ppm) increased with the length of saturation pulse in both LGG (0.48 ± 0.56% at 0.5 s, 1.28 ± 0.56% at 1 s, 1.88 ± 0.56% at 2 s) and HGG (1.72 ± 0.54% at 0.5 s, 2.90 ± 0.49% at 1 s, 3.83 ± 0.88% at 2 s). In both LGG and HGG, APT-weighted contrast was enhanced with the use of longer saturation pulses.

  18. Estimating cubic volume of small diameter tree-length logs from ponderosa and lodgepole pine.

    Science.gov (United States)

    Marlin E. Plank; James M. Cahill

    1984-01-01

    A sample of 351 ponderosa pine (Pinus ponderosa Dougl. ex Laws.) and 509 lodgepole pine (Pinus contorta Dougl. ex Loud.) logs was used to evaluate the performance of three commonly used formulas for estimating cubic volume. Smalian's formula, Bruce's formula, and Huber's formula were tested to determine which...
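    For reference, the three formulas compute volume from cross-sectional areas at the ends and midpoint of the log. The sketch below states them in code; note that the 0.25/0.75 weighting shown for Bruce's formula is the commonly cited butt-log form and is our assumption about the variant tested:

```python
import math

def cross_section_area(diameter_m):
    """Cross-sectional area (m²) of a circular section of given diameter (m)."""
    return math.pi * (diameter_m / 2.0) ** 2

def smalian(d_large, d_small, length):
    """Smalian: mean of the two end areas times length."""
    return (cross_section_area(d_large) + cross_section_area(d_small)) / 2 * length

def huber(d_mid, length):
    """Huber: midpoint cross-sectional area times length."""
    return cross_section_area(d_mid) * length

def bruce(d_large, d_small, length):
    """Bruce (assumed butt-log form): small end weighted 3:1 over the large end."""
    return (0.25 * cross_section_area(d_large)
            + 0.75 * cross_section_area(d_small)) * length

# For a perfect cylinder all three formulas agree:
print(smalian(0.3, 0.3, 10.0), huber(0.3, 10.0), bruce(0.3, 0.3, 10.0))  # all ≈ 0.7069 m³
```

The formulas diverge on tapered logs, which is exactly the behavior the study evaluates against measured volumes.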

  19. Influence of syllable train length and performance end effects on estimation of phonation threshold pressure.

    Science.gov (United States)

    Faver, Katherine Y; Plexico, Laura W; Sandage, Mary J

    2012-01-01

    The purpose of this study was to determine whether the number of syllables collected and performance end effects had a significant effect on phonation threshold pressure (PTP) estimates. Ten adult females with normal voices produced five- and seven-syllable trains of /pi/ at low, modal, and high pitches. The results were analyzed using repeated-measures analysis of variance to determine whether a difference existed in PTP when a five-syllable train was collected versus when a seven-syllable train was collected and whether the typically discarded first and last syllables within a train differed from the middle syllables. The results indicated that there was no significant difference in estimated PTP values when calculated from a five-syllable versus a seven-syllable train or between the first, middle, and last syllables within a train. Based on these findings, it appears that a five-syllable train provides adequate information from which to estimate PTP values. Furthermore, these findings also suggest that within the five-syllable train, any three adjacent syllables could be used to estimate PTP. These findings are significant in developing a clinically standardized, effective, and efficient method for collecting PTP. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  20. Probabilistic divergence time estimation without branch lengths: dating the origins of dinosaurs, avian flight and crown birds.

    Science.gov (United States)

    Lloyd, G T; Bapst, D W; Friedman, M; Davis, K E

    2016-11-01

    Branch lengths (measured in character changes) are an essential requirement of clock-based divergence estimation, regardless of whether the fossil calibrations used represent nodes or tips. However, a separate set of divergence time approaches is typically used to date palaeontological trees, which may lack such branch lengths. Among these methods, sophisticated probabilistic approaches have recently emerged, in contrast with simpler algorithms relying on minimum node ages. Here, using a novel phylogenetic hypothesis for Mesozoic dinosaurs, we apply two such approaches to estimate divergence times for: (i) Dinosauria, (ii) Avialae (the earliest birds) and (iii) Neornithes (crown birds). We find: (i) the plausibility of a Permian origin for dinosaurs to be dependent on whether Nyasasaurus is the oldest dinosaur, (ii) a Middle to Late Jurassic origin of avian flight regardless of whether Archaeopteryx or Aurornis is considered the first bird and (iii) a Late Cretaceous origin for Neornithes that is broadly congruent with other node- and tip-dating estimates. Demonstrating the feasibility of probabilistic time-scaling further opens up divergence estimation to the rich histories of extinct biodiversity in the fossil record, even in the absence of detailed character data. © 2016 The Authors.

  1. Experimental estimation of the efficacy of the FLOTAC basic technique.

    Science.gov (United States)

    Kochanowski, Maciej; Karamon, Jacek; Dąbrowska, Joanna; Cencek, Tomasz

    2014-10-01

    The FLOTAC technique is a quantitative coproscopic method for the diagnosis of parasitic infection that is based on the centrifugation of a fecal sample to levitate helminth eggs with a flotation solution in a proprietary apparatus. Determination of the efficacy of the FLOTAC method and multiplication factors for calculation of the number of Toxocara, Trichuris, and Ascaris eggs in 1 g of feces on the basis of the number of detected eggs is presented. An investigation was conducted using feces samples enriched with a known number of parasite eggs: 3, 15, 50, or 100 parasite eggs of 3 nematode genera (Toxocara, Trichuris, and Ascaris) per 1 g (EPG) of feces. In addition, 80 samples of dog feces were prepared consisting of 20 repetitions for each level of contamination. The samples were analyzed using the FLOTAC basic technique. The limit of detection was calculated as the lowest level of egg content at which at least 50% of repetitions were positive. Multiplication factors for estimating the true number of parasite eggs in the samples were derived from regression coefficients that illustrated the linear relationship between the number of detected eggs and the number of eggs added to the sample. The percentages of recovered eggs for 1 chamber and for the whole apparatus ranged from 11.67 to 21.90% and from 21.33 to 40.10%, respectively, depending on dose enrichment and genus of parasite. The limit of detection calculated for the whole FLOTAC device was 3 EPG and was 15 EPG for 1 chamber for each of the 3 parasite genera. The limit of quantification calculated for whole FLOTAC was 15 EPG for each of 3 kinds of eggs. For 1 chamber, the limit of quantification was 15 EPG for Ascaris and Toxocara eggs and 50 EPG for Trichuris eggs. Multiplication factors for calculation of the number of eggs in 1 g of feces calculated for whole FLOTAC were 3 (for Toxocara and Ascaris eggs) and 4 (for Trichuris eggs). 
Experimentally calculated parameters of the method differ significantly
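    The way such multiplication factors fall out of the recovery regression can be sketched as follows; the recovery numbers here are made up for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical recovery experiment: eggs spiked into 1 g of feces (EPG)
# versus eggs actually detected. The multiplication factor is the
# reciprocal of the regression slope (detected ≈ slope × spiked).
spiked   = np.array([3, 15, 50, 100], dtype=float)   # EPG added
detected = np.array([1,  5, 17,  33], dtype=float)   # EPG recovered (made up)

slope = np.polyfit(spiked, detected, 1)[0]   # linear fit, slope ≈ recovery rate
factor = 1.0 / slope
print(round(factor))  # → 3: multiply detected EPG by ~3 to estimate true EPG
```

This mirrors how the study derives factors of 3-4 from regression coefficients linking detected to spiked egg counts.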

  2. The distribution of blow fly (Diptera: Calliphoridae) larval lengths and its implications for estimating post mortem intervals.

    Science.gov (United States)

    Moffatt, Colin; Heaton, Viv; De Haan, Dorine

    2016-01-01

    The length or stage of development of blow fly (Diptera: Calliphoridae) larvae may be used to estimate a minimum postmortem interval, often by targeting the largest individuals of a species in the belief that they will be the oldest. However, natural variation in rate of development, and therefore length, implies that the size of the largest larva, as well as the number of larvae longer than any stated length, will be greater for larger cohorts. Length data from the blow flies Protophormia terraenovae and Lucilia sericata were collected from one field-based and two laboratory-based experiments. The field cohorts contained considerably more individuals than have been used for reference data collection in the literature. Cohorts were shown to have an approximately normal distribution. Summary statistics were derived from the collected data allowing the quantification of errors in development time which arise when different sized cohorts are compared through their largest larvae. These errors may be considerable and can lead to overestimation of postmortem intervals when making comparisons with reference data collected from smaller cohorts. This source of error has hitherto been overlooked in forensic entomology.
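    The cohort-size effect described above is easy to demonstrate numerically. In this sketch (distribution parameters are hypothetical, not the paper's data), larval lengths are drawn from a normal distribution and the mean length of the largest larva is estimated for different cohort sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_len, sd_len = 14.0, 1.2  # mm; hypothetical same-age cohort

def mean_max_length(cohort_size, trials=2000):
    """Mean length of the longest larva, averaged over many simulated cohorts."""
    samples = rng.normal(mean_len, sd_len, size=(trials, cohort_size))
    return samples.max(axis=1).mean()

for n in (10, 100, 1000):
    print(n, round(mean_max_length(n), 2))
```

The largest larva in a 1000-strong cohort is, on average, well over one standard deviation longer than in a 10-strong cohort, so treating "largest" as "oldest" across different cohort sizes inflates age, and hence postmortem interval, estimates.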

  3. Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean

    Science.gov (United States)

    Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.

    2018-02-01

    The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.

  4. Dosimetry with semiconductor diodes in the application to the full-length irradiation technique of electrons

    International Nuclear Information System (INIS)

    Madrid G, O. A.; Rivera M, T.

    2012-10-01

    The use of charged particles such as electrons for the treatment of tumor lesions over the total skin surface is not very frequent; mycosis fungoides and cutaneous lymphomas are relatively scarce compared with other neoplasms. For the existing cases, however, a non-conventional technique should be considered as a treatment alternative that can achieve effective control. In this work, the variables of greatest influence are studied with an ionization chamber and semiconductor diodes in order to determine the quality of an electron beam. (Author)

  5. Empirical evaluation of humpback whale telomere length estimates : Quality control and factors causing variability in the singleplex and multiplex qPCR methods

    NARCIS (Netherlands)

    Olsen, Morten Tange; Berube, Martine; Robbins, Jooke; Palsboll, Per J.

    2012-01-01

    Background: Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent

  6. Leg and Femoral Neck Length Evaluation Using an Anterior Capsule Preservation Technique in Primary Direct Anterior Approach Total Hip Arthroplasty

    Directory of Open Access Journals (Sweden)

    Stephen J Nelson

    2017-04-01

    Background: Achieving correct leg and femoral neck lengths remains a challenge during total hip arthroplasty (THA). Several methods for intraoperative evaluation and restoration of leg length have been proposed, and each has inaccuracies and shortcomings. Both the supine positioning of the patient on the operating table during direct anterior approach (DAA) THA and the preservation of the anterior capsule tissue are simple, readily available, and cost-effective strategies that lend themselves well as potential solutions to this problem. Technique: The joint replacement is performed through a longitudinal incision (capsulotomy) of the anterior hip joint capsule and release of the capsular insertion from the femoral intertrochanteric line. As trial components of the prosthesis are placed, the position of the released distal capsule in relation to its original insertion line is an excellent guide to leg length gained, lost, or left unchanged. Methods: The radiographs of 80 consecutive primary THAs were reviewed which utilized anterior capsule preservation and direct capsular measurement as a means of assessing change in leg/femoral neck length. Preoperatively, the operative legs were 2.81 +/- 8.5 mm (SD) shorter than the nonoperative leg (range: 17.7 mm longer to 34.1 mm shorter). Postoperatively, the operative legs were 1.05 +/- 5.64 mm (SD) longer than the nonoperative leg (range: 14.9 mm longer to 13.7 mm shorter). Conclusion: The preservation and re-assessment of the native anterior hip capsule in relation to its point of release on the femur is a simple and effective means of determining leg/femoral neck length during DAA THA.

  7. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    1992-01-01

    responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  9. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, S.; Jensen, Jakob Laigaard

    1993-01-01

    responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  10. Genomic Fingerprinting of the Vaccine Strain of Clostridium Tetani by Restriction Fragment Length Polymorphism Technique

    Directory of Open Access Journals (Sweden)

    Naser Harzandi

    2013-05-01

    Background: Clostridium tetani, or Nicolaier's bacillus, is an obligate anaerobic, Gram-positive bacterium, motile, with a terminal or subterminal spore. The chromosome of C. tetani contains 2,799,250 bp with a G+C content of 28.6%. The aim of this study was the identification and genomic fingerprinting of the vaccine strain of C. tetani. Materials and Methods: The vaccine strain of C. tetani was provided by the Razi Vaccine and Serum Research Institute. The seeds were inoculated into Columbia blood agar and grown for 72 h, then transferred to thioglycolate broth medium for a further 36 h of culturing. The cultures were incubated at 35ºC under anaerobic conditions. DNA extraction was performed with the phenol/chloroform method. After extraction, the consistency of the DNA was assayed. Next, the DNA of the vaccine strain was digested using the PvuII enzyme and incubated at 37ºC overnight. The digested DNA was electrophoresed in a 1% agarose gel for a short time. Then, the gel was studied with a Gel Doc system and transferred to a Hybond N+ membrane using standard DNA blotting techniques. Results: The genome of the vaccine strain of C. tetani was fingerprinted by the RFLP technique. Our preliminary results showed that no divergence exists in the vaccine strain used for the production of tetanus toxoid during the period 1990-2011. Conclusion: The observations suggest a lack of significant changes in the RFLP genomic fingerprinting profile of the vaccine strain. Therefore, this strain has not lost its efficiency in tetanus vaccine production. RFLP analysis is worthwhile in investigating the nature of the vaccine strain of C. tetani.

  11. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

    The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As the cost estimates are required at early stages of a project, consideration was given to the fact that the input data for the required regression model could be easily extracted from sketches or the scope definition of the project. 11 regression models are developed to estimate the total cost of a road construction project in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r2 for the developed models ranges from 0.92 to 0.98, which indicates that the predicted values from the forecast models fit the real-life data well. The values of the mean absolute percentage error (MAPE) of the developed regression models range from 13% to 31%; the results compare favorably with past research, which has shown that the estimate accuracy in the early stages of a project is between ±25% and ±50%.
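    As a rough illustration of this workflow, the sketch below fits a multiple regression cost model and computes the study's two accuracy measures, r2 and MAPE, on synthetic road-project data; the cost structure, units, and noise level are assumptions chosen only to exercise the method, not the paper's data.

```python
import numpy as np

# Hedged sketch: the study's 131 West Bank observations are not public here,
# so the cost structure, units and noise level below are assumptions.
rng = np.random.default_rng(0)
n = 131
length_km = rng.uniform(1, 20, n)               # road length
width_m = rng.uniform(6, 15, n)                 # road width
cost = 50_000 * length_km * width_m * rng.normal(1.0, 0.15, n)  # observed cost

# multiple regression on length, width and their interaction
X = np.column_stack([np.ones(n), length_km, width_m, length_km * width_m])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta

# the two accuracy measures reported in the study
mape = np.mean(np.abs((cost - pred) / cost)) * 100
r2 = 1 - np.sum((cost - pred) ** 2) / np.sum((cost - cost.mean()) ** 2)
print(f"MAPE = {mape:.1f}%, r2 = {r2:.3f}")
```

With input variables this simple (length, width), the estimate is available as soon as the project scope is sketched, which is the point of the early-stage models described above.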

  12. A method for estimating age of medieval sub-adults from infancy to adulthood based on long bone length.

    Science.gov (United States)

    Primeau, Charlotte; Friis, Laila; Sejrsen, Birgitte; Lynnerup, Niels

    2016-01-01

    To develop a series of regression equations for estimating age from the length of long bones for archaeological sub-adults when aging from dental development cannot be performed, and to compare the ages derived from these regression equations with those from two other methods. A total of 183 skeletal sub-adults from the Danish medieval period were aged from radiographic images. Linear regression formulae were then produced for individual bones. Age was then estimated from femur length using three different methods: the equations developed in this study, data based on a modern population (Maresh: Human growth and development (1970) pp 155-200), and, lastly, archaeological data with known ages (Rissech et al.: Forensic Sci Int 180 (2008) 1-9). As the growth of long bones is known to be non-linear, it was tested whether the regression model could be improved by applying a quadratic model. Comparison between estimated ages revealed that the modern data result in lower estimated ages when compared with the Danish regression equations. The estimated ages using the Danish regression equations and the regression equations developed by Rissech et al. were very similar, if not identical. This indicates that growth in the two archaeological populations is not that dissimilar, and suggests that the regression equations developed in this study may potentially be applied to archaeological material outside Denmark, as well as to periods later than the medieval, although this would require further testing. The quadratic equations are suggested to yield more accurate ages than simple linear regression equations. © 2015 Wiley Periodicals, Inc.
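    The linear-versus-quadratic comparison can be sketched as follows; the femur lengths and ages below are synthetic values drawn from a plausible but assumed growth curve, not the 183 medieval individuals used in the study.

```python
import numpy as np

# Hedged sketch: compare linear vs. quadratic regression of age on femur
# length. The data are synthetic (a mildly non-linear growth curve plus
# noise), standing in for the study's radiographic measurements.
rng = np.random.default_rng(1)
age = rng.uniform(0.5, 17, 183)                                 # years
femur = 90 + 28 * age - 0.55 * age**2 + rng.normal(0, 8, 183)   # mm

lin = np.polyfit(femur, age, 1)     # age = a*femur + b
quad = np.polyfit(femur, age, 2)    # age = a*femur^2 + b*femur + c

def rmse(coeffs):
    # root-mean-square error of the fitted ages against the known ages
    return np.sqrt(np.mean((np.polyval(coeffs, femur) - age) ** 2))

print(f"linear RMSE = {rmse(lin):.2f} y, quadratic RMSE = {rmse(quad):.2f} y")
```

Because the quadratic model contains the linear one as a special case, its in-sample error can never be worse, which is consistent with the abstract's suggestion that quadratic equations yield more accurate ages.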

  13. Project cost estimation techniques used by most emerging building ...

    African Journals Online (AJOL)

    Keywords: Cost estimation, estimation methods, emerging contractors, tender. Dr Solly Matshonisa .... historical cost data (data from cost accounting records and/ ..... emerging contractors in tendering. Table 13: Use of project risk management versus responsibility: expected. Internal document analysis. Checklist analysis.

  14. Review Article: Project cost estimation techniques used by most ...

    African Journals Online (AJOL)

    The research tool used was a questionnaire, which investigated biographical and company information, proposal management and estimation, programming and scheduling, estimating strategies, understanding of basic cost concepts, project risk management, pre-tender internal price evaluation, and tender submission.

  15. submitter Estimation of stepping motor current from long distances through cable-length-adaptive piecewise affine virtual sensor

    CERN Document Server

    Oliveri, Alberto; Masi, Alessandro; Storace, Marco

    2015-01-01

    In this paper a piecewise affine virtual sensor is used for the estimation of the motor-side current of hybrid stepper motors, which actuate the LHC (Large Hadron Collider) collimators at CERN. The estimation is performed starting from measurements of the current in the driver, which is connected to the motor by a long cable (up to 720 m); the measured current is therefore affected by noise and ringing phenomena. The proposed method does not require a model of the cable, since it is based only on measured data and can be used with cables of different lengths. A circuit architecture suitable for FPGA implementation has been designed, and the effects of fixed-point representation of the data are analyzed.
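    The core idea of a piecewise affine (PWA) virtual sensor, fitting one affine map per region of the input space from measured data alone, can be illustrated in one dimension; the nonlinear map, region count, and noise level below are assumptions, not the CERN implementation.

```python
import numpy as np

# Hedged sketch: approximate an unknown driver-current -> motor-current map
# with a piecewise affine function, one least-squares affine fit per interval.
rng = np.random.default_rng(7)
x = rng.uniform(0.0, 2.0, 500)                            # driver-side current (A)
y = np.tanh(2 * x) + 0.02 * rng.standard_normal(x.size)   # stand-in nonlinear map

edges = np.linspace(0.0, 2.0, 9)                          # 8 affine regions
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (x >= lo) & (x <= hi)
    coeffs.append(np.polyfit(x[sel], y[sel], 1))          # slope, intercept per region

def pwa(v):
    # locate the region containing v, then evaluate its affine map
    i = min(int(np.searchsorted(edges, v, side="right")) - 1, len(coeffs) - 1)
    return np.polyval(coeffs[i], v)

err = np.max([abs(pwa(v) - np.tanh(2 * v)) for v in np.linspace(0, 2, 201)])
print(f"max abs error of the PWA approximation: {err:.3f}")
```

Evaluating a PWA model is just a region lookup plus a multiply-add, which is why the architecture maps well onto an FPGA with fixed-point arithmetic.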

  16. ESTIMATING A DOSE-RESPONSE RELATIONSHIP BETWEEN LENGTH OF STAY AND FUTURE RECIDIVISM IN SERIOUS JUVENILE OFFENDERS*

    Science.gov (United States)

    Loughran, Thomas A.; Mulvey, Edward P.; Schubert, Carol A.; Fagan, Jeffrey; Piquero, Alex R.; Losoya, Sandra H.

    2009-01-01

    The effect of sanctions on subsequent criminal activity is of central theoretical importance in criminology. A key question for juvenile justice policy is the degree to which serious juvenile offenders respond to sanctions and/or treatment administered by the juvenile court. The policy question germane to this debate is finding the level of confinement within the juvenile justice system that maximizes the public safety and therapeutic benefits of institutional confinement. Unfortunately, research on this issue has been limited with regard to serious juvenile offenders. We use longitudinal data from a large sample of serious juvenile offenders from two large cities to 1) estimate a causal treatment effect of institutional placement, as opposed to probation, on future rate of rearrest and 2) investigate the existence of a marginal effect (i.e., benefit) for longer length of stay once the institutional placement decision had been made. We accomplish the latter by determining a dose-response relationship between the length of stay and future rates of rearrest and self-reported offending. The results suggest that an overall null effect of placement exists on future rates of rearrest or self-reported offending for serious juvenile offenders. We also find that, for the group placed out of the community, it is apparent that little or no marginal benefit exists for longer lengths of stay. Theoretical, empirical, and policy issues are outlined. PMID:20052309

  17. Automatic landslide length and width estimation based on the geometric processing of the bounding box and the geomorphometric analysis of DEMs

    Science.gov (United States)

    Niculiţǎ, Mihai

    2016-08-01

    The morphology of landslides is influenced by the slide/flow of the material downslope. Usually, the distance of the movement of the material is greater than the width of the displaced material (especially for flows, but also for the majority of slides); the resulting landslides have a greater length than width. In some specific geomorphologic environments (monoclinic regions, with cuesta-type landforms) or for some types of landslides (translational slides, bank failures, complex landslides), the distance of movement of the displaced material can be smaller than its width; thus the landslides have a smaller length than width. When working with landslide inventories containing both types of landslides presented above, the analysis of the length and width of the landslides computed using usual geographic information system techniques (like bounding boxes) can be flawed. To overcome this flaw, I present an algorithm which uses both the geometry of the landslide polygon minimum oriented bounding box and a digital elevation model of the landslide topography for identifying the long vs. wide landslides. I tested the proposed algorithm for a landslide inventory which covers 131.1 km2 of the Moldavian Plateau, eastern Romania. This inventory contains 1327 landslides, of which 518 were manually classified as long and 809 as wide. In a first step, the elevation difference along the length and width of the minimum oriented bounding box is used to separate long landslides from wide landslides (long landslides having the greatest elevation difference along the length of the bounding box). In a second step, the long landslides are checked as to whether their length is greater than the length of flow downslope (estimated with a flow-routing algorithm), in which case the landslide is classified as wide. By using this approach, the area under the Receiver Operating Characteristic curve value for the classification of the long vs. wide
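    The first classification step can be sketched as follows; the DEM sampler, box geometry, and slope values are illustrative stand-ins, not the Moldavian Plateau data.

```python
import numpy as np

# Hedged sketch of the first step above: given the four corners of a
# landslide's minimum oriented bounding box, compare the elevation drop along
# the box's long axis with the drop along its short axis. `dem` is a stand-in
# for sampling a real raster.
def dem(x, y):
    return 200.0 - 0.1 * x          # synthetic west-to-east dipping slope (m)

def classify_long_vs_wide(corners):
    """corners: the 4 bounding-box corners (x, y) in ring order."""
    c = np.asarray(corners, float)
    mids = [(c[i] + c[(i + 1) % 4]) / 2 for i in range(4)]  # edge midpoints
    # opposite edge pairs are (0, 2) and (1, 3); the distance between their
    # midpoints is the box dimension along the perpendicular axis
    dim02 = np.linalg.norm(mids[0] - mids[2])
    dim13 = np.linalg.norm(mids[1] - mids[3])
    drop02 = abs(dem(*mids[0]) - dem(*mids[2]))
    drop13 = abs(dem(*mids[1]) - dem(*mids[3]))
    # long landslide: greatest elevation difference along the longer box axis
    if dim02 >= dim13:
        return "long" if drop02 >= drop13 else "wide"
    return "long" if drop13 >= drop02 else "wide"

# a 100 m x 20 m box elongated down the west-east slope classifies as "long"
print(classify_long_vs_wide([(0, 0), (100, 0), (100, 20), (0, 20)]))
```

The same box rotated so its long axis runs along the contour lines (north-south on this synthetic slope) would classify as "wide", matching the intuition in the abstract.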

  18. Ventricular cycle length characteristics estimative of prolonged RR interval during atrial fibrillation.

    Science.gov (United States)

    Ciaccio, Edward J; Biviano, Angelo B; Gambhir, Alok; Einstein, Andrew J; Garan, Hasan

    2014-03-01

    When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13-T2, and (2) Tm-T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. ©2013, The Authors. Journal compilation ©2013 Wiley Periodicals, Inc.
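    The two scatterplot parameters described above can be computed directly from an RR series; the sketch below uses a synthetic series, and the indexing convention (T1 the latest completed interval, T2..T13 preceding it) is an assumption based on the abstract rather than the PhysioNet recordings.

```python
import numpy as np

# Hedged sketch of the two prediction parameters from the abstract.
def scatter_params(rr):
    """rr: RR intervals in seconds, oldest first; needs at least 13 values."""
    t = np.concatenate(([np.nan], rr[::-1]))   # 1-based: t[1] = T1, t[2] = T2, ...
    # parameter 1: RR variability, the average second derivative estimated
    # from the 10 previous pairs of RR differences (T13 .. T2)
    rrv = np.mean([t[i] - 2 * t[i + 1] + t[i + 2] for i in range(2, 12)])
    # parameter 2: Tm - T1, where Tm is the mean of T13 .. T2
    tm = np.mean(t[2:14])
    return rrv, tm - t[1]

rng = np.random.default_rng(2)
rr = rng.normal(0.6, 0.05, 40)                 # irregular AF-like intervals
rrv, dm = scatter_params(rr)

# a next interval T0 counts as "prolonged" if it exceeds T1 by at least 250 ms
t0 = rr[-1] + 0.30
print(f"RRv = {rrv:+.4f} s, Tm-T1 = {dm:+.4f} s, "
      f"prolonged = {t0 - rr[-1] >= 0.250}")
```

In the study, per-patient ranges of these two parameters were tuned on a first hour of data and then used to predict prolonged intervals in a second hour.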

  19. Tip Vortex Index (TVI) Technique for Inboard Propeller Noise Estimation

    OpenAIRE

    Sezen, Savaş; Dogrul, Ali; Bal, Şakir

    2018-01-01

    Cavitating marine propellers are one of the most dominant noise sources in marine vessels. The aim of this study is to examine the cavitating propeller noise induced by tip vortices for twin-screw passenger vessels. To determine the noise level inboard, the tip vortex index (TVI) technique has been used. This technique is an approximate method based on numerical and experimental data. In this study, it is aimed to predict the underwater noise of a marine propeller by applying the TVI technique for ...

  20. Software risk estimation and management techniques at JPL

    Science.gov (United States)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software model via probabilistic-based estimates, and how cost risk is currently being explored through a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.

  1. Project cost estimation techniques used by most emerging building ...

    African Journals Online (AJOL)

    management, pre-tender internal price evaluation, and tender submission. Findings of this research revealed that South ... contractors are the main target of the cidb support. Despite the preferential procurement ... the initial and ultimate pricing stages as the term 'estimator' refers to all persons involved with the estimating ...

  2. Nonlinear Filtering Techniques Comparison for Battery State Estimation

    Directory of Open Access Journals (Sweden)

    Aspasia Papazoglou

    2014-09-01

    The performance of estimation algorithms is vital for the correct functioning of batteries in electric vehicles, as poor estimates will inevitably jeopardize operations that rely on un-measurable quantities, such as State of Charge and State of Health. This paper compares the performance of three nonlinear estimation algorithms applied to a lithium-ion cell model: the Extended Kalman Filter, the Unscented Kalman Filter and the Particle Filter. The effectiveness of these algorithms is measured by their ability to produce accurate estimates against their computational complexity in terms of the number of operations and execution time required. The trade-offs between the estimators' performance and their computational complexity are analyzed.
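    The predict/update cycle that the EKF, UKF, and Particle Filter all generalize can be shown with a minimal linear Kalman filter for state of charge; the cell capacity, noise levels, and measurement model below are assumptions, not the paper's lithium-ion model.

```python
import numpy as np

# Hedged sketch (not the paper's models): a scalar Kalman filter tracking
# state of charge (SoC) by combining coulomb counting (prediction) with a
# noisy SoC measurement proxy (update).
rng = np.random.default_rng(3)
dt, capacity = 1.0, 3600.0             # s, A*s (1 Ah cell, assumed)
n = 200
current = np.full(n, 1.0)              # constant 1 A discharge

true_soc = 1.0 - np.cumsum(current) * dt / capacity
meas = true_soc + rng.normal(0, 0.02, n)    # noisy SoC observations

x, p = 0.8, 1.0                        # deliberately wrong initial estimate
q, r = 1e-7, 0.02**2                   # process and measurement noise variances
est = []
for k in range(n):
    x -= current[k] * dt / capacity    # predict: coulomb counting
    p += q
    g = p / (p + r)                    # Kalman gain
    x += g * (meas[k] - x)             # update with the measurement
    p *= (1 - g)
    est.append(x)

rmse = np.sqrt(np.mean((np.array(est) - true_soc) ** 2))
print(f"SoC RMSE = {rmse:.4f}")
```

The nonlinear filters compared in the paper replace the scalar predict/update equations with linearized (EKF), sigma-point (UKF), or sampled (PF) versions of the same cycle, trading accuracy against operation count.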

  3. Estimation of axial curvature of anterior sclera: correlation between axial length and anterior scleral curvature as affected by angle kappa.

    Science.gov (United States)

    Lee, Sang-Mok; Choi, Hyuk Jin; Choi, Heejin; Kim, Mee Kum; Wee, Won Ryang

    2016-10-07

    BACKGROUND: Though the development and fitting of scleral contact lenses are expanding steadily, there is no simple method to provide scleral metrics for scleral contact lens fitting yet. The aim of this study was to establish formulae for estimation of the axial radius of curvature (ARC) of the anterior sclera using ocular biometric parameters that can be easily obtained with conventional devices. A semi-automated stitching method and a computational analysis tool for calculating the ARC were developed using the ImageJ and MATLAB software. The ARC of all ocular surface points was analyzed from the composite horizontal cross-sectional images of the right eyes of 24 volunteers; these measurements were obtained using anterior segment optical coherence tomography for a previous study (AS-OCT; Visante). Ocular biometric parameters were obtained from the same volunteers with slit-scanning topography and partial coherence interferometry. Correlation analysis was performed between the ARC at 8 mm from the axis line (ARC[8]) and other ocular parameters (including age). With the ARC obtained at several nasal and temporal points (7.0, 7.5, 8.0, 8.5, and 9.0 mm from the axis line), univariate and multivariate linear regression analyses were performed to develop a model for estimating the ARC from ocular biometric parameters. Axial length, spherical equivalent, and angle kappa showed correlations with temporal ARC[8] (tARC[8]; Pearson's r = 0.653, -0.579, and -0.341; P = 0.001, 0.015, and 0.015, respectively). White-to-white corneal diameter (WTW) and anterior chamber depth (ACD) showed correlations with nasal ARC[8] (nARC[8]; Pearson's r = -0.492 and -0.461; P = 0.015 and 0.023, respectively). The formulae for estimating scleral curvatures (tARC, nARC, and average ARC) were developed as a function of axial length, ACD, WTW, and distance from the axis line, with good determination power (72-80%; SPSS ver. 22.0). Angle kappa showed strong

  4. In-vivo Validation of Fast Spectral Velocity Estimation Techniques

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2010-01-01

    Spectrograms in medical ultrasound are usually estimated with Welch's method (WM). WM is dependent on an observation window (OW) of up to 256 emissions per estimate to achieve sufficient spectral resolution and contrast. Two adaptive filterbank methods have been suggested to reduce the OW: Blood spectral Power Capon (BPC) and the Blood Amplitude and Phase EStimation method (BAPES). Ten volunteers were scanned over the carotid artery. From each data set, 28 spectrograms were produced by combining four approaches (WM with a Hanning window (W.HAN), WM with a boxcar window (W.BOX), BPC and BAPES...
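    The baseline Welch estimator referred to above, averaging windowed periodograms over an observation window, can be sketched with plain numpy; the sampling rate, tone frequency, and noise level are illustrative assumptions, not the in-vivo carotid data.

```python
import numpy as np

# Hedged sketch of Welch's method: average windowed periodograms of
# overlapping segments (the "observation window" in the abstract).
def welch_psd(x, fs, nperseg=256):
    win = np.hanning(nperseg)
    step = nperseg // 2                        # 50% overlap
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    psd /= fs * np.sum(win ** 2)               # standard Welch scaling
    f = np.fft.rfftfreq(nperseg, 1 / fs)
    return f, psd

rng = np.random.default_rng(4)
fs = 10_000.0                                  # assumed pulse-repetition rate, Hz
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * 1200 * t) + 0.5 * rng.standard_normal(t.size)

f, pxx = welch_psd(x, fs)
print(f"spectral peak near {f[np.argmax(pxx)]:.0f} Hz")
```

The adaptive BPC and BAPES estimators compared in the study aim to achieve similar spectral resolution from far fewer emissions than the 256 this averaging needs.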

  5. MAGNETIC QUENCHING OF TURBULENT DIFFUSIVITY: RECONCILING MIXING-LENGTH THEORY ESTIMATES WITH KINEMATIC DYNAMO MODELS OF THE SOLAR CYCLE

    International Nuclear Information System (INIS)

    Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu

    2011-01-01

    The turbulent magnetic diffusivity in the solar convection zone is one of the most poorly constrained ingredients of mean-field dynamo models. This lack of constraint has previously led to controversy regarding the most appropriate set of parameters, as different assumptions on the value of turbulent diffusivity lead to radically different solar cycle predictions. Typically, the dynamo community uses double-step diffusivity profiles characterized by low values of diffusivity in the bulk of the convection zone. However, these low diffusivity values are not consistent with theoretical estimates based on mixing-length theory, which suggest much higher values for turbulent diffusivity. To make matters worse, kinematic dynamo simulations cannot yield sustainable magnetic cycles using these theoretical estimates. In this work, we show that magnetic cycles become viable if we combine the theoretically estimated diffusivity profile with magnetic quenching of the diffusivity. Furthermore, we find that the main features of this solution can be reproduced by a dynamo simulation using a prescribed (kinematic) diffusivity profile that is based on the spatiotemporal geometric average of the dynamically quenched diffusivity. This bridges the gap between dynamically quenched and kinematic dynamo models, supporting their usage as viable tools for understanding the solar magnetic cycle.

  6. Intercomparison of methods for the estimation of displacement height and roughness length from single-level eddy covariance data

    Science.gov (United States)

    Graf, Alexander; van de Boer, Anneke; Schüttemeyer, Dirk; Moene, Arnold; Vereecken, Harry

    2013-04-01

    The displacement height d and roughness length z0 are parameters of the logarithmic wind profile and as such are characteristics of the surface that are required in a multitude of meteorological modeling applications. Classically, both parameters are estimated from multi-level measurements of wind speed over terrain sufficiently homogeneous to avoid footprint-induced differences between the levels. As a rule of thumb, d of a dense, uniform crop or forest canopy is 2/3 to 3/4 of the canopy height h, and z0 is about 10% of the canopy height in the absence of any d. However, the uncertainty of this rule of thumb becomes larger if the surface of interest is not "dense and uniform", in which case a site-specific determination is required again. By means of the eddy covariance method, alternative possibilities to determine z0 and d have become available. Various authors report robust results if either several levels of sonic anemometer measurements, or one such level combined with a classic wind profile, are used to introduce direct knowledge of the friction velocity into the estimation procedure. At the same time, however, the eddy covariance method to measure various fluxes has superseded the profile method, leaving many current stations without a wind speed profile with enough levels sufficiently far above the canopy to enable the classic estimation of z0 and d. From single-level eddy covariance measurements at one point in time, only one parameter can be estimated, usually z0, while d is assumed to be known. Even so, results tend to scatter considerably. However, it has been pointed out that the use of multiple points in time providing different stability conditions can enable the estimation of both parameters, if they are assumed constant over the time period regarded. These methods either rely on flux-variance similarity (Weaver 1990 and others following) or on the integrated universal function for momentum (Martano 2000 and others following). In both cases
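    The single-level idea can be made concrete for the neutral case: with the friction velocity known from eddy covariance, the log law u(z) = (u*/κ) ln((z - d)/z0) can be inverted for z0 at one measurement height. All numbers below are made-up illustrations, not the paper's data.

```python
import numpy as np

# Hedged illustration: invert the neutral logarithmic wind profile for the
# roughness length z0, given an assumed displacement height d and a friction
# velocity u* measured by eddy covariance.
KAPPA = 0.4               # von Karman constant
z, d = 10.0, 1.4          # measurement height and assumed displacement (m)
u_star, u = 0.42, 3.1     # friction velocity and mean wind speed (m/s)

# u = (u*/kappa) * ln((z - d)/z0)  ->  z0 = (z - d) / exp(kappa*u/u*)
z0 = (z - d) / np.exp(KAPPA * u / u_star)
print(f"z0 = {z0:.3f} m")

# consistency check: plugging z0 back reproduces the measured wind speed
u_back = (u_star / KAPPA) * np.log((z - d) / z0)
```

This inversion yields one parameter per instant, which is why the multi-stability methods cited above are needed to recover both z0 and d from a single level.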

  7. Cardiac-Specific Conversion Factors to Estimate Radiation Effective Dose From Dose-Length Product in Computed Tomography.

    Science.gov (United States)

    Trattner, Sigal; Halliburton, Sandra; Thompson, Carla M; Xu, Yanping; Chelliah, Anjali; Jambawalikar, Sachin R; Peng, Boyu; Peters, M Robert; Jacobs, Jill E; Ghesani, Munir; Jang, James J; Al-Khalidi, Hussein; Einstein, Andrew J

    2018-01-01

    This study sought to determine updated conversion factors (k-factors) that would enable accurate estimation of radiation effective dose (ED) for coronary computed tomography angiography (CTA) and calcium scoring performed on 12 contemporary scanner models and current clinical cardiac protocols, and to compare these methods to the standard chest k-factor of 0.014 mSv·mGy⁻¹·cm⁻¹. Accurate estimation of ED from cardiac CT scans is essential to meaningfully compare the benefits and risks of different cardiac imaging strategies and to optimize test and protocol selection. Presently, ED from cardiac CT is generally estimated by multiplying a scanner-reported parameter, the dose-length product, by a k-factor which was determined for noncardiac chest CT, using single-slice scanners and a superseded definition of ED. Metal-oxide-semiconductor field-effect transistor radiation detectors were positioned in organs of anthropomorphic phantoms, which were scanned using all cardiac protocols, 120 clinical protocols in total, on 12 CT scanners representing the spectrum of scanners from 5 manufacturers (GE, Hitachi, Philips, Siemens, Toshiba). Organ doses were determined for each protocol, and ED was calculated as defined in International Commission on Radiological Protection Publication 103. Effective doses and scanner-reported dose-length products were used to determine k-factors for each scanner model and protocol. k-Factors averaged 0.026 mSv·mGy⁻¹·cm⁻¹ (95% confidence interval: 0.0258 to 0.0266) and ranged between 0.020 and 0.035 mSv·mGy⁻¹·cm⁻¹. The standard chest k-factor underestimates ED by an average of 46%, ranging from 30% to 60%, depending on scanner, mode, and tube potential. Factors were higher for prospective axial versus retrospective helical scan modes, calcium scoring versus coronary CTA, and higher (100 to 120 kV) versus lower (80 kV) tube potential, and varied among scanner models (range of average k-factors: 0.0229 to 0.0277 mSv·mGy⁻¹·cm⁻¹). Cardiac k
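    The conversion itself is a single multiplication, ED = k × DLP, so the effect of the updated k-factor is easy to work through; the DLP value below is an illustrative assumption, while the two k-factors are the ones quoted in the abstract.

```python
# Worked example of the conversion above: effective dose is estimated as
# ED = k * DLP. A coronary CTA with DLP = 300 mGy*cm (assumed value) is
# converted with the standard chest k-factor and the study's cardiac average.
dlp = 300.0                      # mGy*cm, scanner-reported dose-length product
k_chest = 0.014                  # mSv/(mGy*cm), conventional chest factor
k_cardiac = 0.026                # mSv/(mGy*cm), average cardiac factor found

ed_chest = k_chest * dlp         # 4.2 mSv
ed_cardiac = k_cardiac * dlp     # 7.8 mSv
underestimate = 1 - ed_chest / ed_cardiac
print(f"{ed_chest:.1f} mSv vs {ed_cardiac:.1f} mSv "
      f"({underestimate:.0%} underestimate)")
```

The 46% underestimate recovered here matches the average reported in the abstract for the standard chest factor.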

  8. Effects of time-series length and gauge network density on rainfall climatology estimates in Latin America

    Science.gov (United States)

    Maeda, E.; Arevalo, J.; Carmona-Moreno, C.

    2012-04-01

    Despite recent advances in the development of satellite sensors for monitoring precipitation at high spatial and temporal resolutions, the assessment of rainfall climatology still relies strongly on ground-station measurements. The Global Historical Climatology Network (GHCN) is one of the most popular station databases available to the international community. Nevertheless, the spatial distribution of these stations is not always homogeneous, and the record length varies largely for each station. This study aimed to evaluate how the number of years recorded at the GHCN stations and the density of the network affect the uncertainties of annual rainfall climatology estimates in Latin America. The method applied was divided into two phases. In the first phase, Monte Carlo simulations were performed to evaluate how the number of samples and the characteristics of the rainfall regime affect estimates of annual average rainfall. The simulations were performed using gamma distributions with pre-defined parameters, which generated synthetic annual precipitation records. The average and dispersion of the synthetic records were then estimated through the L-moments approach and compared with the original probability distribution that was used to produce the samples. The number of records (n) used in the simulation varied from 10 to 150, reproducing the range of the number of years typically found in meteorological stations. A power function of the form RMSE = f(n) = c·n^a, where the coefficients were defined as a function of the rainfall statistical dispersion, was applied to fit the errors. In the second phase of the assessment, the results of the simulations were extrapolated to real records obtained by the GHCN over Latin America, creating estimates of errors associated with the number of records and rainfall characteristics at each station. To generate a spatially explicit representation of the uncertainties, the errors in each station were interpolated using the inverse distance
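    The first phase, simulating gamma-distributed annual records and fitting RMSE = c·n^a, can be sketched as follows; the gamma parameters and trial counts are illustrative assumptions, and the sample mean stands in for the paper's L-moments estimator.

```python
import numpy as np

# Hedged sketch of the Monte Carlo phase: draw synthetic annual-rainfall
# records of length n from a gamma distribution, measure the RMSE of the
# estimated mean against the true mean, then fit the power law RMSE = c*n^a.
rng = np.random.default_rng(5)
shape, scale = 4.0, 300.0            # assumed gamma parameters (mean 1200 mm)
true_mean = shape * scale

ns = np.array([10, 20, 40, 80, 150])        # record lengths, as in the study
rmse = np.array([
    np.sqrt(np.mean([(rng.gamma(shape, scale, n).mean() - true_mean) ** 2
                     for _ in range(2000)]))
    for n in ns
])

# fit the power law in log-log space: log RMSE = log c + a * log n
a, logc = np.polyfit(np.log(ns), np.log(rmse), 1)
print(f"RMSE ~ {np.exp(logc):.0f} * n^{a:.2f}")
```

For an i.i.d. mean the theoretical exponent is a = -0.5 (standard error ∝ n^-1/2), so the fitted exponent should land near that value.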

  9. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    Jul 4, 2017 ... Abstract. Bearing-only passive target tracking is a well-known underwater defence issue dealt with in the recent past using conventional nonlinear estimators like the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). It is nowadays being treated with derivatives of the EKF, UKF and a highly ...

  10. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find out the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real life fishery problem has been considered to obtain.

  11. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    ... at the same time require much lesser number of computations than that of the PF, showing that these filters can serve as an optimal estimator. A testimony of the aforementioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation in MATLAB R2009a for a typical war time scenario ...

  12. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    Jul 4, 2017 ... named as Pre-Merge UKF and the other Post-Merge UKF, differ in the way the feedback to the individual UKFs is applied. These novel methods have an advantage of less root mean square estimation error in position and velocity compared with the EKF and UKF and at the same time require much lesser ...

  13. A Novel DOA Estimation Algorithm Using Array Rotation Technique

    Directory of Open Access Journals (Sweden)

    Xiaoyu Lan

    2014-03-01

    Full Text Available The performance of the traditional direction of arrival (DOA) estimation algorithm based on a uniform circular array (UCA) is constrained by the array aperture. Furthermore, the array requires more antenna elements than targets, which increases the size and weight of the device and causes higher energy loss. In order to solve these issues, a novel low-energy algorithm utilizing array baseline rotation for multiple-target estimation is proposed. By rotating two elements and setting a fixed time delay, an even number of elements is selected to form a virtual UCA. Then, the received signal data are sampled at multiple positions, which greatly improves the utilization of the array elements. 2D-DOA estimation of the rotation array is accomplished via the multiple signal classification (MUSIC) algorithm. Finally, the Cramer-Rao bound (CRB) is derived, and simulation results verify the effectiveness of the proposed algorithm in terms of resolution and estimation accuracy. Moreover, because the number of array elements is significantly reduced, the antenna array system is much simpler and less complex than a traditional array.
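The MUSIC spectrum step that the rotation scheme above relies on can be sketched on a plain uniform linear array (the paper's rotated virtual UCA geometry is more involved; array layout, source angles and noise level here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

M, d, K = 8, 0.5, 2                     # elements, spacing (wavelengths), sources
true_doas = np.deg2rad([-20.0, 35.0])
N = 200                                 # snapshots

def steering(theta):
    """M x len(theta) matrix of ULA steering vectors."""
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise                       # received data

R = X @ X.conj().T / N                  # sample covariance
eigvecs = np.linalg.eigh(R)[1]          # eigenvalues ascending
En = eigvecs[:, :M - K]                 # noise subspace

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.1))
P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)

# the two largest local maxima of the pseudo-spectrum are the DOA estimates
pk = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(np.rad2deg(grid[pk[np.argsort(P[pk])[-2:]]]))
print(est)   # close to [-20, 35]
```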

  14. A comparison of spatial rainfall estimation techniques: A case study ...

    African Journals Online (AJOL)

    Two geostatistical interpolation techniques (kriging and cokriging) were evaluated against inverse distance weighted (IDW) and global polynomial interpolation (GPI). Of the four spatial interpolators, kriging and cokriging produced results with the least root mean square error (RMSE). A digital elevation model (DEM) was ...
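Of the four interpolators compared, IDW is the simplest, and leave-one-out cross-validation is the usual way to obtain the RMSE used for the comparison. A minimal sketch on a synthetic rainfall field (the field and station locations are made up; 19 stations matches the count used in the companion study):

```python
import numpy as np

rng = np.random.default_rng(1)

def idw(x, y, sx, sy, sz, power=2.0):
    """Inverse distance weighted estimate at (x, y) from stations (sx, sy, sz)."""
    dist = np.hypot(sx - x, sy - y)
    if dist.min() < 1e-9:               # query point coincides with a station
        return sz[dist.argmin()]
    w = dist ** -power
    return np.sum(w * sz) / np.sum(w)

def rain(x, y):                          # synthetic "true" rainfall surface (mm)
    return 100 + 40 * np.sin(x / 3.0) + 25 * np.cos(y / 4.0)

sx, sy = rng.uniform(0, 10, 19), rng.uniform(0, 10, 19)
sz = rain(sx, sy)

# leave-one-out cross-validation RMSE
errors = [idw(sx[i], sy[i], np.delete(sx, i), np.delete(sy, i), np.delete(sz, i))
          - sz[i] for i in range(19)]
rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"IDW leave-one-out RMSE: {rmse:.1f} mm")
```

Kriging would replace the fixed 1/d² weights with weights derived from a fitted variogram, which is why it usually achieves a lower RMSE.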

  15. A comparison of spatial rainfall estimation techniques: a case study ...

    African Journals Online (AJOL)

    Many hydrological models for watershed management and planning require rainfall as an input in a continuous format. This study analyzed four different rainfall interpolation techniques in Nyando river basin, Kenya. Interpolation was done for a period of 30 days using 19 rainfall stations. Two geostatistical interpolation ...

  16. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...

  17. How to Appropriately Calculate Effective Dose for CT Using Either Size-Specific Dose Estimates or Dose-Length Product.

    Science.gov (United States)

    Brady, Samuel L; Mirro, Amy E; Moore, Bria M; Kaufman, Robert A

    2015-05-01

    The purpose of this study is to show how to calculate effective dose in CT using size-specific dose estimates and to correct the current method using dose-length product (DLP). Data were analyzed from 352 chest and 241 abdominopelvic CT images. Size-specific dose estimate was used as a surrogate for organ dose in the chest and abdominopelvic regions. Organ doses were averaged by patient weight-based populations and were used to calculate effective dose by the International Commission on Radiological Protection (ICRP) report 103 method using tissue-weighting factors (EICRP). In addition, effective dose was calculated using population-averaged CT examination DLP for the chest and abdominopelvic region using published k-coefficients (EDLP = k × DLP). EDLP differed from EICRP by an average of 21% (1.4 vs 1.1) in the chest and 42% (2.4 vs 3.4) in the abdominopelvic region. The differences occurred because the published k-coefficients did not account for pitch factors other than unity, were derived using a 32-cm diameter CT dose index (CTDI) phantom for CT examinations of the pediatric body, and used ICRP 60 tissue-weighting factors. Once corrected for pitch factor, the appropriate size of CTDI phantom, and ICRP 103 tissue-weighting factors, EDLP agreed with EICRP to better than 7% (1.4 vs 1.3) and 4% (2.4 vs 2.5) for the chest and abdominopelvic regions, respectively. Current use of DLP to calculate effective dose was shown to be deficient because of the outdated means by which the k-coefficients were derived. This study shows a means to calculate EICRP using a patient size-specific dose estimate and how to appropriately correct EDLP.
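The percent differences quoted in the abstract follow directly from the reported population-averaged doses (in mSv), taken relative to the DLP-based estimate E = k × DLP:

```python
def pct_diff(e_dlp, e_icrp):
    """Percent difference relative to the DLP-based effective dose estimate."""
    return abs(e_dlp - e_icrp) / e_dlp * 100.0

# Values (mSv) as reported in the abstract:
chest_before = pct_diff(1.4, 1.1)   # ≈ 21% (uncorrected k-coefficients)
abdo_before  = pct_diff(2.4, 3.4)   # ≈ 42%
chest_after  = pct_diff(1.4, 1.3)   # ≈ 7%  (pitch / phantom / ICRP 103 corrected)
abdo_after   = pct_diff(2.4, 2.5)   # ≈ 4%
print(chest_before, abdo_before, chest_after, abdo_after)
```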

  18. A simple technique for estimating EUVE sky survey exposure times

    Science.gov (United States)

    Carlisle, G. L.

    1986-01-01

    A simple way to estimate accumulated exposure time over the celestial sphere for a scanning telescope in earth orbit is described. Primary constraints on observation time, such as earth blockage, solar occultation, and passage through the South Atlantic Anomaly, are modeled using relatively straightforward, mainly closed-form geometrical solutions. The resulting algorithm is implemented on a desktop microcomputer. Though not rigorously precise, the algorithm is sufficient for conducting preliminary mission design studies for the Extreme Ultraviolet Explorer (EUVE).

  19. Effects of cane length and diameter and judgment type on the constant error ratio for estimated height in blindfolded, visually impaired, and sighted participants.

    Science.gov (United States)

    Huang, Kuo-Chen; Leung, Cherng-Yee; Wang, Hsiu-Feng

    2010-04-01

    The purpose of this study was to assess the ability of blindfolded, visually impaired, and sighted individuals to estimate object height as a function of cane length, cane diameter, and judgment type. 48 undergraduate students (ages 20 to 23 years) were recruited to participate in the study. Participants were divided into low-vision, severely myopic, and normal-vision groups. Five stimulus heights were explored with three cane lengths, varying cane diameters, and judgment types. The participants were asked to estimate the stimulus height with or without reference to a standard block. Results showed that the constant error ratio for estimated height improved with decreasing cane length and comparative judgment. The findings were unclear regarding the effect of cane length on haptic perception of height. Implications were discussed for designing environments, such as stair heights, chairs, the magnitude of apertures, etc., for visually impaired individuals.

  20. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and the spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative estimate of the length distribution and of the proportion of juveniles in the fishing set. To that end, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust length and juvenile-proportion estimates. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, for either on-board or inland biometric measurements.
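The sample-size question can be illustrated with a resampling sketch: how the sampling error of the juvenile-proportion estimate shrinks as more fish per fishing set are measured. The length distribution and the 12 cm juvenile cut used here are illustrative assumptions, not the observer data:

```python
import numpy as np

rng = np.random.default_rng(7)

lengths = rng.normal(13.0, 2.0, size=5000)   # one simulated fishing set (cm)
juvenile_cut = 12.0                          # hypothetical juvenile threshold

def se_juvenile(n, n_rep=1000):
    """Std. error of the juvenile-proportion estimate when n fish are measured."""
    props = [np.mean(rng.choice(lengths, size=n, replace=False) < juvenile_cut)
             for _ in range(n_rep)]
    return float(np.std(props))

# the smallest n whose error falls below a target precision is the optimum
for n in (30, 60, 120, 240):
    print(n, round(se_juvenile(n), 3))
```

The error decays roughly as 1/√n, so an optimum sample size is the point where further measuring effort buys little additional precision.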

  1. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
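The idea of recovering an unknown forcing input with a Kalman filter can be sketched in its simplest form: a unit mass driven by an unknown constant force, with the force carried as an augmented random-walk state. This augmented-state variant stands in for the paper's KF-plus-recursive-least-squares input estimator; the dynamics, noise levels and constant force are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

dt, f_true = 0.01, 2.0
# state x = [position, velocity, force]; force modeled as a random walk
A = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])          # only position is measured

x = np.zeros(3)                          # filter state
P = np.eye(3)
Q = np.diag([1e-8, 1e-8, 1e-4])          # process noise (force allowed to drift)
Rm = np.array([[1e-4]])                  # measurement noise variance

truth = np.zeros(2)                      # true [position, velocity]
for _ in range(2000):
    # simulate the truth and a noisy position measurement
    truth = np.array([truth[0] + truth[1] * dt + 0.5 * f_true * dt**2,
                      truth[1] + f_true * dt])
    z = truth[0] + rng.normal(0.0, 1e-2)

    # Kalman predict / update
    x = A @ x
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(3) - K @ H) @ P

print(f"estimated force: {x[2]:.2f} (true {f_true})")
```

For the rotor problem the unknown input is a synchronous (sinusoidal) unbalance force rather than a constant, and the state-space model is the SEREP-reduced rotor model, but the estimation mechanics are the same.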

  2. New Solid Phases for Estimation of Hormones by Radioimmunoassay Technique

    International Nuclear Information System (INIS)

    Sheha, R.R.; Ayoub, H.S.M.; Shafik, M.

    2013-01-01

    The efforts in this study were initiated to develop and validate new solid phases for estimation of hormones by radioimmunoassay (RIA). The study argued the successful application of different hydroxy apatites (HAP) as new solid phases for estimation of Alpha fetoprotein (AFP), Thyroid Stimulating hormone (TSH) and Luteinizing hormone (LH) in human serum. Hydroxy apatites have different alkali earth elements were successfully prepared by a well-controlled co-precipitation method with stoichiometric ratio value 1.67. The synthesized barium and calcium hydroxy apatites were characterized using XRD and Ftir and data clarified the preparation of pure structures of both BaHAP and CaHAP with no evidence on presence of other additional phases. The prepared solid phases were applied in various radioimmunoassay systems for separation of bound and free antigens of AFP, TSH and LH hormones. The preparation of radiolabeled tracer for these antigens was carried out using chloramine-T as oxidizing agent. The influence of different parameters on the activation and coupling of the used apatite particles with the polyclonal antibodies was systematically investigated and the optimum conditions were determined. The assay was reproducible, specific and sensitive enough for regular estimation of the studied hormones. The intra-and inter-assay variation were satisfactory and also the recovery and dilution tests indicated an accurate calibration. The reliability of these apatite particles had been validated by comparing the results that obtained by using commercial kits. The results finally authenticates that hydroxyapatite particles would have a great potential to address the emerging challenge of accurate quantitation in laboratory medical application

  3. Comparative Study of Online Open Circuit Voltage Estimation Techniques for State of Charge Estimation of Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Hicham Chaoui

    2017-04-01

    Full Text Available Online estimation techniques are extensively used to determine the parameters of various uncertain dynamic systems. In this paper, online estimation of the open-circuit voltage (OCV of lithium-ion batteries is proposed by two different adaptive filtering methods (i.e., recursive least square, RLS, and least mean square, LMS, along with an adaptive observer. The proposed techniques use the battery’s terminal voltage and current to estimate the OCV, which is correlated to the state of charge (SOC. Experimental results highlight the effectiveness of the proposed methods in online estimation at different charge/discharge conditions and temperatures. The comparative study illustrates the advantages and limitations of each online estimation method.
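The RLS variant can be sketched with the simplest battery model, v = OCV − R·i (internal-resistance model), estimating [OCV, R] from terminal voltage and current. The battery parameters, noise level and forgetting factor below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

ocv_true, r_true = 3.7, 0.05            # volts, ohms (illustrative)
lam = 0.99                              # forgetting factor

theta = np.zeros(2)                     # estimate of [OCV, R]
P = 1e3 * np.eye(2)                     # inverse information matrix

for _ in range(500):
    i = rng.uniform(-2.0, 2.0)                          # measured current (A)
    v = ocv_true - r_true * i + rng.normal(0.0, 1e-3)   # measured voltage (V)
    phi = np.array([1.0, -i])                           # regressor: v = phi @ theta

    # standard RLS update with exponential forgetting
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (v - phi @ theta)
    P = (P - np.outer(k, phi) @ P) / lam

print(f"OCV ≈ {theta[0]:.3f} V, R ≈ {theta[1]:.3f} Ω")
```

The estimated OCV is then mapped to SOC through the battery's OCV–SOC curve; the LMS variant replaces the gain computation with a fixed step size, trading convergence speed for lower complexity.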

  4. Development of flow injection analysis technique for uranium estimation

    International Nuclear Information System (INIS)

    Paranjape, A.H.; Pandit, S.S.; Shinde, S.S.; Ramanujam, A.; Dhumwad, R.K.

    1991-01-01

    Flow injection analysis is increasingly used as a process control analytical technique in many industries. It involves injection of the sample at a constant rate into a steady flowing stream of reagent and passing this mixture through a suitable detector. This paper describes the development of such a system for the analysis of uranium (VI) and (IV) and its gross gamma activity. It is amenable for on-line or automated off-line monitoring of uranium and its activity in process streams. The sample injection port is suitable for automated injection of radioactive samples. The performance of the system has been tested for the colorimetric response of U(VI) samples at 410 nm in the range of 35 to 360mg/ml in nitric acid medium using Metrohm 662 Photometer and a recorder as detector assembly. The precision of the method is found to be better than +/- 0.5%. This technique with certain modifications is used for the analysis of U(VI) in the range 0.1-3mg/ailq. by alcoholic thiocynate procedure within +/- 1.5% precision. Similarly the precision for the determination of U(IV) in the range 15-120 mg at 650 nm is found to be better than 5%. With NaI well-type detector in the flow line, the gross gamma counting of the solution under flow is found to be within a precision of +/- 5%. (author). 4 refs., 2 figs., 1 tab

  5. Comparison of techniques for estimating herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Smit, H.J.; Taweel, H.Z.; Tas, B.M.; Tamminga, S.; Elgersma, A.

    2005-01-01

    For estimating herbage intake during grazing, the traditional sward cutting technique was compared in grazing experiments in 2002 and 2003 with the recently developed n-alkanes technique and with the net energy method. The first method estimates herbage intake by the difference between the herbage

  6. Latent Variable Regression: A Technique for Estimating Interaction and Quadratic Coefficients.

    Science.gov (United States)

    Ping, Robert A., Jr.

    1996-01-01

    A technique is proposed to estimate regression coefficients for interaction and quadratic latent variables that combines regression analysis with the measurement model portion of structural equation analysis. The proposed technique will provide coefficient estimates for regression models involving existing measures or new measures for which a…

  7. Estimation of soil hydraulic properties with microwave techniques

    Science.gov (United States)

    Oneill, P. E.; Gurney, R. J.; Camillo, P. J.

    1985-01-01

    Useful quantitative information about soil properties may be obtained by calibrating energy and moisture balance models with remotely sensed data. A soil physics model solves heat and moisture flux equations in the soil profile and is driven by the surface energy balance. Model generated surface temperature and soil moisture and temperature profiles are then used in a microwave emission model to predict the soil brightness temperature. The model hydraulic parameters are varied until the predicted temperatures agree with the remotely sensed values. This method is used to estimate values for saturated hydraulic conductivity, saturated matrix potential, and a soil texture parameter. The conductivity agreed well with a value measured with an infiltration ring and the other parameters agreed with values in the literature.

  8. The efficacy of two modified proprioceptive neuromuscular facilitation stretching techniques in subjects with reduced hamstring muscle length.

    Science.gov (United States)

    Youdas, James W; Haeflinger, Kristin M; Kreun, Melissa K; Holloway, Andrew M; Kramer, Christine M; Hollman, John H

    2010-05-01

    Difference scores in knee extension angle and electromyographic (EMG) activity were quantified before and after modified proprioceptive neuromuscular facilitation (PNF) hold-relax (HR) and hold-relax-antagonist contraction (HR-AC) stretching procedures in 35 healthy individuals with reduced hamstring muscle length bilaterally (knee extension angle <160 degrees). Participants were randomly assigned each PNF procedure to opposite lower extremities. Knee extension values were measured by using a goniometer. EMG data were collected for 10 seconds before and immediately after each PNF stretching technique and normalized to maximum voluntary isometric contraction (% MVIC). A significant time by stretch-type interaction was detected (F(1,34) = 21.1; p < 0.001). Angles of knee extension for HR and HR-AC were not different prior to stretching (p = 0.45). Poststretch knee extension angle was greater in the HR-AC condition than the HR condition (p < 0.007). The proportion of subjects who exceeded the minimal detectable change (MDC(95)) with the HR-AC stretch (97%) did not differ (p = 0.07) from the proportion who exceeded the MDC(95) with the HR stretch (80%). Because EMG activation increased (p < 0.013) after the HR-AC procedure, it is doubtful a relationship exists between range of motion improvement after stretching and inhibition of the hamstrings. On average the 10-second modified HR procedure produced an 11 degrees gain in knee extension angle within a single stretch session.

  9. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Science.gov (United States)

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.

  10. Background estimation techniques in searches for heavy resonances at CMS

    CERN Document Server

    Benato, Lisa

    2017-01-01

    Many Beyond Standard Model theories foresee the existence of heavy resonances (above 1 TeV) decaying into final states that include a highly energetic, boosted jet and charged leptons or neutrinos. In these very peculiar conditions, Monte Carlo predictions are not reliable enough to reproduce accurately the expected Standard Model background. A data-Monte Carlo hybrid approach (the alpha method) has been successfully adopted since Run 1 in searches for heavy Higgs bosons performed by the CMS Collaboration. By taking advantage of data in signal-free control regions, determined by exploiting the boosted-jet substructure, predictions are extracted in the signal region. The alpha method and jet substructure techniques are described in detail, along with some recent results obtained with 2016 Run 2 data collected by the CMS detector.

  11. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10³, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
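The variance-reduction idea behind the TLE can be shown in one dimension: an analog tally scores energy only where a photon is absorbed, whereas the TLE scores E·μ_en·(track length) in every voxel a track crosses, so every photon contributes to every voxel it traverses. The attenuation coefficients below are illustrative, not real material data:

```python
import numpy as np

rng = np.random.default_rng(11)

mu, mu_en, E = 0.5, 0.3, 1.0        # attenuation / energy-absorption (1/cm), energy
nvox, dx = 10, 1.0                  # voxelized slab
edges = np.arange(nvox + 1) * dx

n = 20000
depth = rng.exponential(1.0 / mu, size=n)    # absorption depth of each photon

# TLE (kerma approximation): E * mu_en * track length in each crossed voxel
seg = np.clip(depth[:, None], edges[:-1], edges[1:]) - edges[:-1]
tle = E * mu_en * seg.sum(axis=0)

# analogous MC: locally absorbed energy E * mu_en / mu, scored only at the
# absorption voxel (same expectation as the TLE tally)
inside = depth < nvox * dx
analog = np.bincount((depth[inside] / dx).astype(int), minlength=nvox) * E * mu_en / mu

print(np.round(tle[:3]), np.round(analog[:3]))   # same profile, TLE far smoother
```

Both tallies converge to the same depth-dose profile; the TLE's per-voxel variance is much lower because its per-photon score is a bounded track length rather than a rare all-or-nothing deposit, which is the source of the 10 to 10³ efficiency gains quoted above.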

  12. Selection criteria for scoring amplified fragment length polymorphisms (AFLPs) positively affect the reliability of population genetic parameter estimates.

    Science.gov (United States)

    Herrmann, Doris; Poncet, Bénédicte N; Manel, Stéphanie; Rioux, Delphine; Gielly, Ludovic; Taberlet, Pierre; Gugerli, Felix

    2010-04-01

    A reliable data set is a fundamental prerequisite for consistent results and conclusions in population genetic studies. However, marker scoring of genetic fingerprints such as amplified fragment length polymorphisms (AFLPs) is a highly subjective procedure, inducing inconsistencies owing to personal or laboratory-specific criteria. We applied two alternative marker selection algorithms, the newly developed script scanAFLP and the recently published AFLPScore, to a large AFLP genome scan to test how population genetic parameters and error rates were affected. These results were confronted with replicated random selections of marker subsets. We show that the newly developed marker selection criteria reduced the mismatch error rate and had a notable influence on estimates of genetic diversity and differentiation. Both effects are likely to influence biological inference. For example, genetic diversity (HS) was 29% lower while genetic differentiation (FST) was 8% higher when applying scanAFLP compared with AFLPScore. Likewise, random selections of markers resulted in substantial deviations of population genetic parameters compared with the data sets including specific selection criteria. These randomly selected marker sets showed surprisingly low variance among replicates. We conclude that stringent marker selection and phenotype calling reduces noise in the data set while retaining patterns of population genetic structure.

  13. Rumen microbial growth estimation using in vitro radiophosphorous incorporation technique

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Ives Claudio da Silva; Machado, Mariana de Carvalho; Cabral Filho, Sergio Lucio Salomon; Gobbo, Sarita Priscila; Vitti, Dorinha Miriam Silber Schmidt; Abdalla, Adibe Luiz [Centro de Energia Nuclear na Agricultura (CENA), Piracicaba, SP (Brazil)

    2002-07-01

    Rumen microorganisms are able to transform the low biological value nitrogen of feedstuffs into high quality protein. To determine how much microbial protein this process forms, radiomarkers can be used. Radiophosphorus has been used to mark microbial protein, as the element P is present in all rumen microorganisms (as phospholipids) and the P:N ratio of rumen biomass is quite constant. The aim of this work was to estimate microbial synthesis from feedstuffs commonly used in ruminant nutrition in Brazil. The tested feeds were fresh alfalfa, raw sugarcane bagasse, rice hulls, rice meal, soybean meal, wheat meal, Tifton hay, leucaena, dehydrated citrus pulp, wet brewers' grains and cottonseed meal. {sup 32}P-labelled phosphate solution was used as the marker for microbial protein. Results reflected the diversity of the feeds through the distinct quantities of nitrogen incorporated into microbial mass. Feeds of low nutrient availability (sugarcane bagasse and rice hulls) gave the lowest values of incorporated nitrogen. Nitrogen incorporation showed a positive relationship (r=0.56; P=0.06) with the rate of degradation and a negative relationship (r=-0.59; P<0.05) with the fiber content of feeds. The results highlight that more easily fermentable feeds (higher rates of degradation) and/or those with lower fiber contents promote more efficient microbial growth and better performance for the host animal. (author)

  14. Measurement of the mass of the top quark using the transverse decay length and lepton transverse momentum techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christian

    2014-05-02

    A measurement of the mass of the top quark using the transverse momentum of the lepton and the decay length of the B-hadron has been presented. The result is m{sub Top}=(170.4±1.1{sub stat.}±2.3{sub syst.}) GeV. This is compatible with previous measurements of the mass of the top quark by either the ATLAS collaboration or other experiments. The total uncertainty on the result of this analysis, Δ{sup total}m{sub Top}=2.6 GeV, is larger than that of other measurements. However, with a jet energy scale uncertainty of only Δ{sup Jes}m{sub Top}=0.3 GeV, it has one of the smallest uncertainties caused by this source. In a combination of results this will help reduce the total uncertainty on the mass of the top quark. The value of 0.42 for the strength of final state radiation indicates that the simulation underestimates the strength of final state radiation. Work is currently ongoing to publish the results found in this thesis in the context of an official ATLAS publication. Additionally, the uncertainties can be compared with those one would obtain by using only one of the two variables. If one considers only the transverse decay length, a statistical error of Δm{sub Top}{sup stat.}=1.7 GeV and a systematic uncertainty of Δm{sub Top}{sup syst.}=7.8 GeV are obtained, dominated by the uncertainty on initial and final state radiation. The statistical uncertainty obtained by using the transverse momentum of the lepton, Δm{sub Top}{sup stat.}=1.4 GeV, is a bit lower than that obtained from the transverse decay length alone, but still larger than that of the presented measurement. The corresponding systematic uncertainty is Δm{sub Top}{sup syst.}=2.7 GeV. Combining the two variables is therefore worthwhile compared with using the transverse momentum of the lepton alone. The dominant uncertainties on the measurement are caused by imperfect knowledge of the simulation parameters, especially the choice of Monte-Carlo generator. Other large

  15. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Directory of Open Access Journals (Sweden)

    Jorge A Banda

    Full Text Available To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
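Why epoch length matters can be shown directly: the same second-by-second counts, re-integrated to different epochs, classify very different amounts of MVPA when activity comes in short bursts. The count pattern and the cut-point below are illustrative, not any published calibration:

```python
import numpy as np

rng = np.random.default_rng(2)

# one hour of 1-s counts: 10-s vigorous bursts inside every 60-s cycle
burst = np.r_[np.full(10, 50.0), np.full(50, 5.0)]
counts_1s = rng.poisson(np.tile(burst, 60))
cutpoint = 2296.0                       # hypothetical MVPA cut-point (counts/min)

def mvpa_minutes(counts, epoch_s):
    """Minutes classified as MVPA after re-integrating to epoch_s-second epochs."""
    per_epoch = counts.reshape(-1, epoch_s).sum(axis=1)
    per_min = per_epoch * (60.0 / epoch_s)      # rescale to counts/min
    return np.sum(per_min >= cutpoint) * epoch_s / 60.0

for epoch in (1, 15, 60):
    print(epoch, mvpa_minutes(counts_1s, epoch))
```

With 1-s epochs the bursts exceed the rescaled cut-point and several MVPA minutes are recorded; with 60-s epochs the bursts are averaged away and none are, which is exactly the non-comparability the study warns about.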

  16. Step-Detection and Adaptive Step-Length Estimation for Pedestrian Dead-Reckoning at Various Walking Speeds Using a Smartphone.

    Science.gov (United States)

    Ho, Ngoc-Huynh; Truong, Phuc Huu; Jeong, Gu-Min

    2016-09-02

    We propose a walking distance estimation method based on an adaptive step-length estimator at various walking speeds using a smartphone. First, we apply a fast Fourier transform (FFT)-based smoother on the acceleration data collected by the smartphone to remove the interference signals. Then, we analyze these data using a set of step-detection rules in order to detect walking steps. Using an adaptive estimator, which is based on a model of average step speed, we accurately obtain the walking step length. To evaluate the accuracy of the proposed method, we examine the distance estimation for four different distances and three speed levels. The experimental results show that the proposed method significantly outperforms conventional estimation methods in terms of accuracy.
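The detect-then-estimate pipeline above can be sketched with peak-based step detection on smoothed acceleration magnitude, using the well-known Weinberg step-length model as a stand-in for the paper's adaptive speed-based estimator (the signal, the moving-average smoother in place of the FFT-based one, and the model constant K are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

fs = 50.0                                    # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
# synthetic acceleration magnitude: gravity + ~2 steps/s gait + sensor noise
acc = 9.8 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0.0, 0.3, t.size)

# low-pass smoothing (moving average standing in for the FFT-based smoother)
smooth = np.convolve(acc, np.ones(5) / 5.0, mode="same")

# step detection: local maxima above a threshold, at least 0.3 s apart
thresh = smooth.mean() + 0.5 * smooth.std()
peaks = [i for i in range(1, len(smooth) - 1)
         if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
         and smooth[i] > thresh]
steps, last = [], -1e9
for i in peaks:
    if (i - last) / fs > 0.3:
        steps.append(i)
        last = i

# Weinberg model: step length ≈ K * (a_max - a_min)^(1/4) per step interval
K = 0.45
length = sum(K * (smooth[a:b].max() - smooth[a:b].min()) ** 0.25
             for a, b in zip(steps[:-1], steps[1:]))
print(f"{len(steps)} steps detected, ≈{length:.1f} m walked")
```

The paper's adaptive estimator replaces the fixed constant K with a value driven by the estimated step frequency, which is what keeps the distance accurate across walking speeds.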

  17. Weight-for-length/height growth curves for children and adolescents in China in comparison with body mass index in prevalence estimates of malnutrition.

    Science.gov (United States)

    Zong, Xinnan; Li, Hui; Zhang, Yaqin; Wu, Huahong

    2017-05-01

    It is important to update weight-for-length/height growth curves in China and re-examine their performance in screening malnutrition. To develop weight-for-length/height growth curves for Chinese children and adolescents. A total of 94 302 children aged 0-19 years with complete sex, age, weight and length/height data were obtained from two cross-sectional large-scale national surveys in China. Weight-for-length/height growth curves were constructed using the LMS method, before and after the average spermarcheal/menarcheal ages, respectively. Screening performance in prevalence estimates of wasting, overweight and obesity was compared between weight-for-height and body mass index (BMI) criteria based on a test population of 21 416 children aged 3-18. Smoothed weight-for-length percentile and Z-score curves for lengths of 46-110 cm in both sexes, and weight-for-height curves for heights of 70-180 cm in boys and 70-170 cm in girls, were established. The weight-for-height and BMI-for-age criteria were strongly correlated in screening wasting, overweight and obesity in each age-sex group. There was no striking difference in prevalence estimates of wasting, overweight and obesity between the two indicators except for obesity prevalence at ages 6-11. This set of smoothed weight-for-length/height growth curves may be useful in assessing nutritional status from infancy to post-pubertal adolescence.
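The LMS method mentioned above summarizes each length/height bin by a skewness (L), median (M) and coefficient-of-variation (S) triple, from which a measurement's z-score follows by a standard formula. The L, M, S values below are hypothetical, standing in for one bin of the published reference; only the formula is the point here:

```python
import math

def lms_z(x, L, M, S):
    """Z-score under the LMS method: ((x/M)**L - 1)/(L*S), or ln(x/M)/S if L = 0."""
    if abs(L) > 1e-12:
        return ((x / M) ** L - 1.0) / (L * S)
    return math.log(x / M) / S

# e.g. an 11.5 kg child at a length bin with median M = 10.2 kg, L = -0.3, S = 0.11
z = lms_z(11.5, L=-0.3, M=10.2, S=0.11)
print(f"Z = {z:.2f}")    # about +1.07, i.e. just above the 85th percentile
```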

  18. ARTIFICIAL INTELLIGENCE TECHNIQUES FOR ESTIMATING THE EFFORT IN SOFTWARE DEVELOPMENT PROJECTS

    Directory of Open Access Journals (Sweden)

    Ferreira, G., Gálvez, D.,

    2015-06-01

    Full Text Available Among the most popular algorithmic cost and effort estimation models are COCOMO, SLIM and Function Points. However, since the 1990s, models based on Artificial Intelligence techniques, mainly Machine Learning techniques, have been used to improve the accuracy of the estimates. These models rest on two fundamental elements: the use of data collected in previous projects in which estimates were performed, and the application of various knowledge extraction techniques, with the aim of making estimates more efficiently, more effectively and, if possible, with greater precision. The aim of this paper is to present an analysis of some of these techniques and how they are being applied in estimating the effort of software projects.
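For context, basic COCOMO — one of the algorithmic models named above — reduces to the person-month formula E = a * KLOC^b, with the published coefficients for the three classic project modes:

```python
# Basic COCOMO effort model, as a tiny baseline for the ML-based
# approaches discussed above. Coefficients are the published values
# for the organic, semi-detached and embedded modes.
COCOMO_COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    """Effort in person-months: E = a * KLOC^b."""
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b

print(round(cocomo_effort(32, "organic"), 1))
```

A 32 KLOC organic project comes out at roughly 91 person-months; the ML techniques surveyed in the paper aim to beat such fixed-form models by learning from historical project data.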

  19. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power spectral Capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The Blood Amplitude and Phase Estimation technique (BAPES) is based on finding a set of matched filters (one for each velocity component of interest) and filtering the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady-flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...

  20. A Mathematical Technique for Estimating True Temperature Profiles of Data Obtained from Long Time Constant Thermocouples

    National Research Council Canada - National Science Library

    Young, Graeme

    1998-01-01

    A mathematical modeling technique is described for estimating true temperature profiles of data obtained from long time constant thermocouples, which were used in fuel fire tests designed to determine...

  1. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the control and synchronization, respectively, of chaotic systems. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation and the desired error dynamics are inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables a very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.

  2. Development and comparison of techniques for estimating design basis flood flows for nuclear power plants

    International Nuclear Information System (INIS)

    1980-05-01

    Estimation of the design basis flood for Nuclear Power Plants can be carried out using either deterministic or stochastic techniques. Stochastic techniques, while widely used for the solution of a variety of hydrological and other problems, have not been used to date (1980) in connection with the estimation of the design basis flood for NPP siting. This study compares the two techniques against one specific river site (Galt on the Grand River, Ontario). The study concludes that both techniques lead to comparable results, but that stochastic techniques have the advantage of extracting maximum information from available data and presenting the results (flood flow) as a continuous function of probability, together with estimates of confidence limits. (author)

  3. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    NARCIS (Netherlands)

    Zweerink, A.; Allaart, C.P.; Kuijer, J.P.A.; Wu, L.; Beek, A.M.; Ven, P.M. van de; Meine, M.; Croisille, P.; Clarysse, P.; Rossum, A.C. van; Nijveldt, R.

    2017-01-01

    OBJECTIVES: Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive

  4. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota

    Science.gov (United States)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    Studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.
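The Statistical Reporting Service regression approach evaluated above adjusts a ground-sampled mean by the regression on Landsat-derived hectarage, which is available for every segment. A toy sketch with invented numbers, not the South Dakota data:

```python
# Toy version of the survey regression estimator: ground-observed crop
# hectarage (y) on a sample of segments is adjusted using
# Landsat-classified hectarage (x), known for all N segments.
import numpy as np

rng = np.random.default_rng(9)
N = 400                                   # segments in the county
x_all = rng.uniform(50, 150, N)           # Landsat-classified crop ha
y_all = 0.9 * x_all + rng.normal(0, 5, N) # "true" ground hectarage

sample = rng.choice(N, 40, replace=False) # ground data on 40 segments only
x_s, y_s = x_all[sample], y_all[sample]

b = np.polyfit(x_s, y_s, 1)[0]            # regression slope from the sample
y_reg = y_s.mean() + b * (x_all.mean() - x_s.mean())   # regression estimator

county_total = y_reg * N                  # estimated county crop hectarage
print(round(y_reg, 1), round(county_total, 0))
```

The adjustment term b*(X̄ - x̄) corrects the sample mean for how unrepresentative the sampled segments happen to be in the auxiliary (Landsat) variable.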

  5. Soil loss estimation using GIS and Remote sensing techniques: A case of Koga watershed, Northwestern Ethiopia

    Directory of Open Access Journals (Sweden)

    Habtamu Sewnet Gelagay

    2016-06-01

    Full Text Available Soil loss by runoff is a severe and continuous ecological problem in the Koga watershed. Deforestation, improper cultivation and uncontrolled grazing have resulted in accelerated soil erosion. Information on soil loss is essential to support agricultural productivity and natural resource management. Thus, this study aimed to estimate and map the mean annual soil loss using GIS and Remote Sensing techniques. The soil loss was estimated using the Revised Universal Soil Loss Equation (RUSLE) model. A topographic map of 1:50,000 scale, an Aster Digital Elevation Model (DEM) of 20 m spatial resolution, a digital soil map of 1:250,000 scale, thirteen years of rainfall records from four stations, and Landsat imagery (TM) with a spatial resolution of 30 m were used to derive the RUSLE soil loss variables. The RUSLE parameters were analyzed and integrated using the raster calculator in the geo-processing tools of the ArcGIS 10.1 environment to estimate and map the annual soil loss of the study area. The result revealed that the annual soil loss of the watershed ranges from none in the lower and middle parts of the watershed to 265 t ha−1 year−1 on the steeper slopes, with a mean annual soil loss of 47 t ha−1 year−1. The total annual soil loss in the watershed was 255,283 t; of this, 181,801 t (71%) comes from about 6,691 ha (24%) of land. Most of these erosion-affected areas are spatially situated in the upper, steepest part (inlet) of the watershed. These are areas where Nitosols and Alisols with a higher soil erodibility (K value of 0.25) are dominant. Hence, slope gradient and length, followed by soil erodibility, were found to be the main factors of soil erosion. Thus, sustainable soil and water conservation practices should be adopted in the steepest upper part of the study area, respecting and recognizing watershed logic, people and watershed potentials.
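The RUSLE overlay described above is a cell-by-cell product of five factor rasters, A = R * K * LS * C * P. A minimal numpy sketch of that raster-calculator step, with a toy 2x2 grid rather than the Koga data:

```python
# Minimal raster sketch of the RUSLE overlay: A = R * K * LS * C * P,
# evaluated cell by cell as in a GIS raster calculator. Grid values
# are invented for illustration.
import numpy as np

R  = np.array([[900.0, 950.0], [980.0, 1000.0]])  # rainfall erosivity
K  = np.array([[0.15, 0.25], [0.25, 0.30]])       # soil erodibility
LS = np.array([[0.5, 1.2], [2.0, 4.5]])           # slope length/steepness
C  = np.array([[0.10, 0.10], [0.15, 0.20]])       # cover management
P  = np.array([[1.0, 0.8], [0.8, 1.0]])           # support practice

A = R * K * LS * C * P          # soil loss, t ha^-1 yr^-1 per cell
mean_loss = A.mean()
print(np.round(A, 1), round(mean_loss, 1))
```

Note how the steep-slope, erodible cell (bottom right) dominates the total, mirroring the study's finding that the steep upper watershed contributes most of the loss.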

  6. Two stage DOA and Fundamental Frequency Estimation based on Subspace Techniques

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; Christensen, Mads Græsbøll; So, Hing-Cheung

    2012-01-01

    In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signal is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel...

  7. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    Science.gov (United States)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

    Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures and automated alternatives to traditional inventory techniques and those which currently employ Landsat imagery.
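Band Ratio Thresholding, as named above, amounts to ratioing two bands and thresholding the result into a crop/non-crop mask. A hedged sketch with invented reflectances and an assumed threshold, not the California calibration:

```python
# Hypothetical Band Ratio Thresholding sketch: ratio two Landsat bands,
# threshold into a small-grains mask, and estimate area from the pixel
# count. Band values and the threshold are invented.
import numpy as np

nir = np.array([[0.45, 0.50], [0.20, 0.48]])  # near-infrared reflectance
red = np.array([[0.10, 0.12], [0.18, 0.11]])  # red reflectance

ratio = nir / red
mask = ratio > 3.0                  # assumed small-grains threshold
PIXEL_AREA_HA = 0.09                # one 30 m Landsat pixel = 0.09 ha
area_ha = mask.sum() * PIXEL_AREA_HA
print(int(mask.sum()), round(area_ha, 2))
```

In practice the threshold would be tuned per scene, which is why the abstract calls the procedure semi-automated.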

  8. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

    Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM is dependent on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase Estimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can ... used to estimate the spectrograms: WM with a Hanning window (WMhw), WM with a boxcar window (WMbw), BPC and BAPES. For each approach the window length was varied: 128, 64, 32, 16, 8, 4 and 2 emissions/estimate. Thus, from the same data set of each volunteer 28 spectrograms were produced. The artery...

  9. Optimization of spatial frequency domain imaging technique for estimating optical properties of food and biological materials

    Science.gov (United States)

    Spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquis...

  10. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    Science.gov (United States)

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These
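The bootstrap-ensemble-plus-CI idea above can be illustrated in miniature. Plain linear regression stands in for the DBN-DNN estimators, and the data are synthetic, so this is a sketch of the statistical machinery only:

```python
# Simplified numpy analogue: train many regressors on bootstrap
# resamples, average their predictions (bagging), and take percentiles
# of the bootstrap replicates as a confidence interval.
import numpy as np

rng = np.random.default_rng(42)
# Toy oscillometric-style data: one feature -> "systolic BP"
x = rng.uniform(0, 10, 100)
y = 100.0 + 2.0 * x + rng.normal(0, 3, 100)

def fit_predict(xs, ys, x_new):
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope * x_new + intercept

B = 200
x_new = 5.0
preds = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(x), len(x))     # bootstrap resample
    preds[b] = fit_predict(x[idx], y[idx], x_new)

estimate = preds.mean()                        # bagged point estimate
ci_low, ci_high = np.percentile(preds, [2.5, 97.5])
print(round(estimate, 1), round(ci_low, 1), round(ci_high, 1))
```

Averaging the replicates gives the bagged estimate; the percentile spread of the same replicates quantifies its uncertainty, mirroring the paper's use of the bootstrap for both estimation and CIs.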

  11. Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mohayai, Tanaz Angelina [IIT, Chicago]; Snopok, Pavel [IIT, Chicago]; Neuffer, David [Fermilab]; Rogers, Chris [Rutherford]

    2017-10-12

    The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
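The non-parametric idea above can be illustrated with a naive Gaussian kernel density estimate on a toy 2D "phase space": a cooled beam (smaller spread) shows a higher peak density. Beam parameters and bandwidth are invented:

```python
# Toy kernel density estimate of a 2D phase-space distribution,
# comparing a "hot" beam with a "cooled" (smaller-emittance) beam.
import numpy as np

def gaussian_kde(points, query, bandwidth=0.3):
    """Naive Gaussian KDE in 2D; points and query are (N, 2) arrays."""
    diffs = query[:, None, :] - points[None, :, :]
    sq = np.sum(diffs**2, axis=-1) / (2 * bandwidth**2)
    norm = 2 * np.pi * bandwidth**2 * len(points)
    return np.exp(-sq).sum(axis=1) / norm

rng = np.random.default_rng(1)
hot    = rng.normal(0, 1.0, size=(2000, 2))   # before cooling
cooled = rng.normal(0, 0.5, size=(2000, 2))   # after cooling: smaller spread

origin = np.zeros((1, 2))
density_hot = gaussian_kde(hot, origin)[0]
density_cooled = gaussian_kde(cooled, origin)[0]
print(round(density_hot, 3), round(density_cooled, 3))
```

The increase in core phase-space density after "cooling" is exactly the quantity such density estimators make measurable without binning the sparse muon sample.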

  12. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.

    1982-12-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall-runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall-runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall-runoff model leads to estimates of probable maximum flood flows which are very close to flows having a 10^6-10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made, and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.
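The stochastic side of the comparison above typically means fitting an extreme-value distribution to annual maximum flows and reading off low-probability quantiles. A sketch using a method-of-moments Gumbel (EV1) fit on synthetic flows — the distribution choice and data are assumptions, not the paper's:

```python
# Fit a Gumbel (EV1) distribution to synthetic annual maximum flows by
# the method of moments, then evaluate design floods at long return
# periods, including the ~10^6-year range mentioned above.
import numpy as np

rng = np.random.default_rng(7)
annual_max = rng.gumbel(loc=500.0, scale=120.0, size=60)  # m^3/s, synthetic

EULER_GAMMA = 0.5772156649
scale = np.std(annual_max, ddof=1) * np.sqrt(6) / np.pi
loc = np.mean(annual_max) - EULER_GAMMA * scale

def design_flood(return_period_years):
    """Gumbel quantile for exceedance probability 1/T."""
    p_non_exceed = 1.0 - 1.0 / return_period_years
    return loc - scale * np.log(-np.log(p_non_exceed))

q100 = design_flood(100)       # 100-year flood
q1e6 = design_flood(1e6)       # ~10^6-year flood
print(round(q100, 0), round(q1e6, 0))
```

Extrapolating six orders of magnitude beyond a 60-year record is exactly why the paper stresses confidence limits on such estimates.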

  13. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM based wireless communication receiver. Frequency domain pilot aided channel estimation techniques are either least squares (LS based or minimum mean square error (MMSE based. LS based techniques are computationally less complex. Unlike MMSE ones, they do not require a priori knowledge of channel statistics (KCS. However, the mean square error (MSE performance of the channel estimator incorporating MMSE based techniques is better compared to that obtained with the incorporation of LS based techniques. To enhance the MSE performance using LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied on the LS estimated channel impulse response (CIR. The advantage of denoising threshold based LS techniques is that, they do not require KCS but still render near optimal MMSE performance similar to MMSE based techniques. In this paper, a detailed survey on various existing denoising strategies, with a comparative discussion of these strategies is presented.
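The LS-plus-denoising strategy surveyed above can be sketched in numpy: estimate the channel at pilot subcarriers by least squares, transform to the impulse response, zero taps below a noise-dependent threshold, and return to the frequency domain. The channel, pilots, noise level and threshold rule are all illustrative assumptions:

```python
# Minimal sketch of LS channel estimation with threshold-based
# denoising of the channel impulse response (CIR).
import numpy as np

rng = np.random.default_rng(3)
N = 64                                    # subcarriers (all used as pilots here)
h = np.array([0.8, 0.0, 0.5j, 0.0, 0.2])  # sparse CIR, 5 taps
H = np.fft.fft(h, N)                      # true channel frequency response

pilots = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))  # QPSK pilots
noise = rng.normal(0, 0.05, N) + 1j * rng.normal(0, 0.05, N)
Y = H * pilots + noise

H_ls = Y / pilots                         # least-squares estimate (no KCS needed)
cir = np.fft.ifft(H_ls)                   # back to impulse response

# Denoising: keep only taps above an assumed noise-dependent threshold
threshold = 3 * np.median(np.abs(cir))
cir_denoised = np.where(np.abs(cir) > threshold, cir, 0)
H_denoised = np.fft.fft(cir_denoised)

mse_ls = float(np.mean(np.abs(H_ls - H) ** 2))
mse_dn = float(np.mean(np.abs(H_denoised - H) ** 2))
print(round(mse_ls, 5), round(mse_dn, 5))
```

Zeroing the noise-only taps removes most of the estimation error without any channel statistics, which is the near-MMSE-without-KCS behaviour the survey highlights.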

  14. Optimizing Penile Length in Patients Undergoing Partial Penectomy for Penile Cancer: Novel Application of the Ventral Phalloplasty Oncoplastic Technique

    Directory of Open Access Journals (Sweden)

    Jared J. Wallen

    2014-10-01

    Full Text Available The ventral phalloplasty (VP) has been well described in modern day penile prosthesis surgery. The main objectives of this maneuver are to increase perceived length and patient satisfaction and to counteract the natural 1-2 cm average loss in length when performing implantation of an inflatable penile prosthesis. Similarly, this video represents a new adaptation for partial penectomy patients. One can only hope that the addition of the VP for partial penectomy patients with good erectile function will increase their quality of life. The patient in this video is a 56-year-old male who presented with a 4.0x3.5x1.0 cm, pathologic stage T2 squamous cell carcinoma of the glans penis. After partial penectomy with VP and inguinal lymph node dissection, the pathological specimen revealed negative margins, with 3/5 right superficial nodes and 1/5 left superficial nodes positive for malignancy. The patient has been recommended post-operative systemic chemotherapy (with external beam radiotherapy) based on the multiple node positivity and presence of extranodal extension. The patient's pre-operative penile length was 9.5 cm, and after partial penectomy with VP, penile length was 7 cm.

  15. A TRMM-Calibrated infrared technique for rainfall estimation: application on rain events over eastern Mediterranean

    Directory of Open Access Journals (Sweden)

    H. Feidas

    2006-01-01

    Full Text Available The aim is to evaluate the use of a satellite infrared (IR) technique for estimating rainfall over the eastern Mediterranean. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the eastern Mediterranean for four rain events during the six-month period of October 2004 to March 2005. Estimates from this technique are verified against a rain gauge network for different time scales. Results show that PR observations can be used to improve IR-based techniques significantly under the conditions of a regional-scale area by selecting adequate calibration areas and periods. They reveal, however, the limitations of infrared remote sensing techniques, originally developed for tropical areas, when applied to precipitation retrievals in mid-latitudes.

  16. Third molar development: evaluation of nine tooth development registration techniques for age estimations.

    Science.gov (United States)

    Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy

    2013-03-01

    Multiple third molar development registration techniques exist. Therefore the aim of this study was to detect which third molar development registration technique was most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as response and the third molar registration as predictor were developed for each registration technique separately. The MO technique disclosed the highest R^2 (F 51%, M 45%) and lowest root mean squared error (F 3.42 years; M 3.67 years) values, but differences with the other techniques were small in magnitude. The number of stages used in the explored staging techniques slightly influenced the age predictions. © 2013 American Academy of Forensic Sciences.

  17. Estimation of Length-Weight Relationship and Proximate Composition of Catfish (Clarias gariepinus Burchell, 1822 from Two Different Culture Facilities

    Directory of Open Access Journals (Sweden)

    Olaniyi Alaba Olopade

    2015-06-01

    Full Text Available This study was carried out to determine and compare the proximate composition and length-weight relationship of C. gariepinus from two culture systems (earthen and concrete ponds). The fish samples were collected from three fish farms with the same culture conditions in different areas of Obio-Akpor Local Government Area of Rivers State, Nigeria. Results on the length-weight relationship revealed that C. gariepinus reared in concrete tanks had a total length of 15.50-49.00 cm with a mean of 32.71 cm and a weight of 150-625 g, while the total length of C. gariepinus reared in earthen ponds ranged from 19.90-58.0 cm with a mean of 39.8 cm and a weight of 195-825 g. The t-test showed that both the total length and the weight in the earthen ponds were significantly greater than in the concrete tanks. The parameters of proximate composition analysed from the fish flesh were moisture, protein, lipid, carbohydrate, ash and fiber. Protein content was significantly higher in the earthen ponds than in the concrete tanks. Ash content varied from 1.5±1.66% to 7.4±0.67% in the concrete tanks and was significantly higher than in the earthen ponds, where it ranged from 3.1±0.94% to 4.5±2.11%. Lipid was significantly higher in the earthen ponds than in the concrete tanks. Overall, the two culture systems had a significant influence on the length-weight relationship and nutritional value of C. gariepinus: fish reared in earthen ponds had heavier bodies and higher nutritive values than those reared in concrete tanks.
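The standard length-weight relationship behind analyses like the one above is W = a * L^b, fitted on a log-log scale. A sketch on synthetic fish, not the Nigerian farm data:

```python
# Fit the length-weight relationship W = a * L^b by linear regression
# of log(W) on log(L). Sample fish are synthetic, with an assumed
# isometric exponent b = 3.
import numpy as np

rng = np.random.default_rng(5)
length_cm = rng.uniform(15, 58, 80)                    # total length
true_a, true_b = 0.008, 3.0
weight_g = true_a * length_cm ** true_b * rng.lognormal(0, 0.05, 80)

b, log_a = np.polyfit(np.log(length_cm), np.log(weight_g), 1)
a = np.exp(log_a)
print(round(a, 4), round(b, 2))
```

An estimated b near 3 indicates isometric growth; b noticeably above or below 3 indicates positive or negative allometry, which is how such fits are read in fisheries work.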

  18. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  19. 'Length' at Length

    Indian Academy of Sciences (India)

    Admin

    He was interested to know how 'large' is the set of numbers x for which the series is convergent. Here large refers to its length. But his set is not in the class ♢. Here is another problem discussed by Borel. Consider .... have an infinite collection of pairs of new shoes and want to choose one shoe from each pair. We have an ...

  20. Estimating the hemodynamic influence of variable main body-to-iliac limb length ratios in aortic endografts.

    Science.gov (United States)

    Georgakarakos, Efstratios; Xenakis, Antonios; Georgiadis, George S

    2018-02-01

    We conducted a computational study to assess the hemodynamic impact of variant main body-to-iliac limb length (L1/L2) ratios on certain hemodynamic parameters acting on the endograft (EG) in either the normal bifurcated (Bif) or the cross-limb (Cx) fashion. A custom bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5 in the Bif and Cx reconstructed EG models. The compliance of the graft was modeled using a Fluid-Structure Interaction method. Important hemodynamic parameters such as the pressure drop along the EG, wall shear stress (WSS) and helicity were calculated. The greatest pressure decrease across the EG was calculated in the peak systolic phase. With increasing L1/L2, the pressure drop was found to increase for the Cx configuration and decrease for the Bif. The greatest helicity (4.1 m/s^2) was seen at peak systole of the Cx model with a ratio of 1.5, whereas the greatest value for the Bif (2 m/s^2) was met at peak systole with the shortest L1/L2 ratio (0.3). Similarly, the maximum WSS value was highest (2.74 Pa) at peak systole for the 1.5 L1/L2 ratio of the Cx configuration, while the maximum WSS value equaled 2 Pa for all length ratios of the Bif modification (with the WSS found for L1/L2 = 0.3 being marginally higher). There was greater discrepancy in the WSS values across L1/L2 ratios for the Cx configuration than for the Bif. Different L1/L2 ratios are shown to have an impact on the pressure distribution along the entire EG, while the length ratio predisposing to the highest helicity or WSS values is also determined by the iliac limb pattern of the EG.
Since current custom-made EG solutions can reproduce variability in main-body/iliac limbs length ratios, further computational as well as clinical research is warranted to delineate and predict the hemodynamic and clinical effect of variable

  1. A comparison of population air pollution exposure estimation techniques with personal exposure estimates in a pregnant cohort.

    Science.gov (United States)

    Hannam, Kimberly; McNamee, Roseanne; De Vocht, Frank; Baker, Philip; Sibley, Colin; Agius, Raymond

    2013-08-01

    There is increasing evidence of the harmful effects on mother and fetus of maternal exposure to air pollutants. Most studies use large retrospective birth outcome datasets and make a best estimate of personal exposure (PE) during pregnancy periods. We compared estimates of personal NOx and NO2 exposure of pregnant women in the North West of England with exposure estimates derived using different modelling techniques. A cohort of 85 pregnant women was recruited from Manchester and Blackpool. Participants completed a time-activity log and questionnaire at 13-22 weeks gestation and were provided with personal Ogawa samplers to measure their NOx/NO2 exposure. PE was compared to monthly averages, the nearest stationary monitor to the participant's home, a weighted average of the closest monitors to the home and work locations, proximity to major roads, as well as to background modelled concentrations (DEFRA), inverse distance weighting (IDW), ordinary kriging (OK), and a land use regression model with and without temporal adjustment. PE was most strongly correlated with monthly adjusted DEFRA (NO2: r = 0.61; NOx: r = 0.60), OK and IDW (NO2: r = 0.60; NOx: r = 0.62) concentrations. Correlations were stronger in Blackpool than in Manchester. Where there is evidence of high temporal variability in exposure, methods of exposure estimation that focus solely on spatial variation should also be adjusted temporally, with the expected improvement in estimation growing with increasing temporal variability.
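Inverse distance weighting, one of the interpolation methods compared above, estimates the concentration at an unmonitored point as a distance-weighted average of nearby monitors. A small sketch with invented coordinates and concentrations:

```python
# Inverse distance weighting (IDW): estimate NO2 at a home location
# from nearby monitors, weighting each by 1/d^power.
import numpy as np

monitors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # km
no2 = np.array([30.0, 42.0, 36.0])                          # ug/m3

def idw(query, sites, values, power=2.0):
    """IDW interpolation; returns the exact value at a monitor location."""
    d = np.linalg.norm(sites - query, axis=1)
    if np.any(d < 1e-12):
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

home = np.array([1.0, 1.0])
print(round(idw(home, monitors, no2), 1))
```

Unlike kriging, IDW needs no variogram model, but it is purely spatial — hence the paper's point that such estimates should also be adjusted temporally.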

  2. Techniques for estimating health care costs with censored data: an overview for the health services researcher.

    Science.gov (United States)

    Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D

    2012-01-01

    The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). The study was designed so that mean health care costs over 1080 days of follow-up were calculated using naïve estimators such as full-sample and uncensored case estimators. Reweighted estimators - specifically, the inverse probability weighted estimator - were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. The authors recommend against the use of full-sample or uncensored case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach.
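The reweighting idea above can be shown numerically: with right-censored costs, the naive full-sample mean is biased low, and weighting each uncensored subject's cost by the inverse probability of remaining uncensored removes the bias. For brevity this sketch plugs in the true censoring survivor function rather than estimating it (in practice it would be estimated, e.g. by Kaplan-Meier), and all data are synthetic:

```python
# Naive vs inverse-probability-weighted (IPW) mean cost under
# independent right censoring, on simulated data.
import numpy as np

rng = np.random.default_rng(11)
n = 20000
death = rng.uniform(0, 1000, n)              # days until death
censor = rng.uniform(0, 2000, n)             # independent censoring time
follow_up = np.minimum(death, censor)
observed = death <= censor                   # True if cost fully observed
cost = 50.0 * follow_up                      # cost accrues at $50/day

true_mean = 50.0 * 500.0                     # E[cost] = $25,000 by construction

naive = cost.mean()                          # "full sample" estimator, biased low

# IPW: K(t) = P(censor > t) = 1 - t/2000 for uniform censoring on [0, 2000]
k = 1.0 - follow_up / 2000.0
ipw = np.mean(np.where(observed, cost / k, 0.0))
print(round(naive, 0), round(ipw, 0), true_mean)
```

The naive mean undershoots $25,000 because censored subjects contribute only their partial costs, while the IPW estimator recovers the true mean by up-weighting the complete cases that survived censoring.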

  3. Estimating length of stay in publicly-funded residential and nursing care homes: a retrospective analysis using linked administrative data sets

    Directory of Open Access Journals (Sweden)

    Steventon Adam

    2012-10-01

    Full Text Available Abstract Background Information about how long people stay in care homes is needed to plan services, as length of stay is a determinant of future demand for care. As length of stay is proportional to cost, estimates are also needed to inform analysis of the long-term cost effectiveness of interventions aimed at preventing admissions to care homes. But estimates are rarely available due to the cost of repeatedly surveying individuals. Methods We used administrative data from three local authorities in England to estimate the length of publicly-funded care home stays beginning in 2005 and 2006. Stays were classified into nursing home, permanent residential and temporary residential. We aggregated successive placements in different care home providers and, by linking to health data, across periods in hospital. Results The largest group of stays (38.9%) were those intended to be temporary, such as for rehabilitation, and typically lasted 4 weeks. For people admitted to permanent residential care, the median length of stay was 17.9 months. Women stayed longer than men, while stays were shorter if preceded by other forms of social care. There was significant variation in length of stay between the three local authorities. The typical person admitted to a permanent residential care home will cost a local authority over £38,000, less payments due from individuals under the means test. Conclusions These figures are not apparent from existing data sets. The large cost of care home placements suggests significant scope for preventive approaches. The administrative data revealed complexity in patterns of service use, which should be further explored as it may challenge the assumptions that are often made.

  4. A concise account of techniques available for shipboard sea state estimation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2017-01-01

    This article gives a review of techniques applied to make sea state estimation on the basis of measured responses on a ship. The general concept of the procedures is similar to that of a classical wave buoy, which exploits a linear assumption between waves and the associated motions.

  5. Water temperature forecasting and estimation using fourier series and communication theory techniques

    International Nuclear Information System (INIS)

    Long, L.L.

    1976-01-01

    Fourier series and statistical communication theory techniques are utilized in the estimation of river water temperature increases caused by external thermal inputs. An example estimate assuming a constant thermal input is demonstrated. A regression fit of the Fourier series approximation of temperature is then used to forecast daily average water temperatures. Also, a 60-day prediction of daily average water temperature is made with the aid of the Fourier regression fit by using significant Fourier components
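The Fourier regression approach described above can be sketched in a few lines of Python: fit a truncated Fourier series to daily temperatures by least squares, then evaluate it beyond the fitted range for a 60-day forecast. The harmonic count, period, and synthetic data below are illustrative assumptions, not the report's values:

```python
import numpy as np

def fit_fourier(days, temps, n_harmonics=2, period=365.0):
    """Least-squares fit of a truncated Fourier series to daily temperatures."""
    days = np.asarray(days, dtype=float)
    cols = [np.ones_like(days)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * days / period
        cols.append(np.cos(w))
        cols.append(np.sin(w))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(temps, dtype=float), rcond=None)
    return coef

def predict_fourier(coef, days, period=365.0):
    """Evaluate the fitted series at (possibly future) days."""
    days = np.asarray(days, dtype=float)
    n_harmonics = (len(coef) - 1) // 2
    out = np.full_like(days, coef[0], dtype=float)
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * days / period
        out += coef[2 * k - 1] * np.cos(w) + coef[2 * k] * np.sin(w)
    return out

# Recover a known annual cycle: T(d) = 15 + 10*cos(2*pi*d/365)
d = np.arange(365)
t = 15 + 10 * np.cos(2 * np.pi * d / 365)
coef = fit_fourier(d, t, n_harmonics=1)
forecast = predict_fourier(coef, np.arange(365, 425))  # 60-day forecast
```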

  6. A satellite infrared technique to estimate tropical convective and stratiform rainfall

    Science.gov (United States)

    Adler, Robert F.; Negri, Andrew J.

    1988-01-01

    A new method of estimating both convective and stratiform precipitation from satellite infrared data is described. This technique defines convective cores and assigns rain rate and rain area to these features based on the infrared brightness temperature and the cloud model approach of Adler and Mack (1984). The method was tested for four south Florida cases during the second Florida Area Cumulus Experiment, and the results are presented and compared with three other satellite rain estimation schemes.

  7. Cost Engineering Techniques and Their Applicability for Cost Estimation of Organic Rankine Cycle Systems

    Directory of Open Access Journals (Sweden)

    Sanne Lemmens

    2016-06-01

    Full Text Available The potential of organic Rankine cycle (ORC) systems is acknowledged by both considerable research and development efforts and an increasing number of applications. Most research aims at improving ORC systems through technical performance optimization of various cycle architectures and working fluids. The assessment and optimization of technical feasibility is at the core of ORC development. Nonetheless, economic feasibility is often decisive when it comes down to considering practical installations, and therefore an increasing number of publications include an estimate of the costs of the designed ORC system. Various methods are used to estimate ORC costs but the resulting values are rarely discussed with respect to accuracy and validity. The aim of this paper is to provide insight into the methods used to estimate these costs and open the discussion about the interpretation of these results. A review of cost engineering practices shows there has been a long tradition of industrial cost estimation. Several techniques have been developed, but the expected accuracy range of the best techniques used in research varies between 10% and 30%. The quality of the estimates could be improved by establishing up-to-date correlations for the ORC industry in particular. Secondly, the rapidly growing ORC cost literature is briefly reviewed. A graph summarizing the estimated ORC investment costs displays a pattern of decreasing costs for increasing power output. Knowledge on the actual costs of real ORC modules and projects remains scarce. Finally, the investment costs of a known heat recovery ORC system are discussed and the methodologies and accuracies of several approaches are demonstrated using this case as benchmark. The best results are obtained with factorial estimation techniques such as the module costing technique, but the accuracies may diverge by up to +30%. Development of correlations and multiplication factors for ORC technology in particular is

  8. Estimation of stature and length of limb segments in children and adolescents from whole-body dual-energy X-ray absorptiometry scans

    International Nuclear Information System (INIS)

    Abrahamyan, Davit O.; Gazarian, Aram; Braillon, Pierre M.

    2008-01-01

    Anthropometric standards vary among different populations, and renewal of these reference values is necessary. The aim was to produce formulae for the assessment of limb segment lengths. Whole-body dual-energy X-ray absorptiometry scans of 413 Caucasian children and adolescents (170 boys, 243 girls) aged from 6 to 18 years were retrospectively analysed. Body height and the lengths of four long bones (humerus, radius, femur and tibia) were measured. The validity (concurrent validity) and reproducibility (intraobserver reliability) of the measurement technique were tested. High linear correlations (r > 0.9) were found between the mentioned five longitudinal measures. Corresponding linear regression equations for the most important relationships were derived. The tests of validity and reproducibility revealed a good degree of precision of the applied technique. The reference formulae obtained from the analysis of whole-body DEXA scans will be useful for anthropologists, and forensic and nutrition specialists, as well as for prosthetists and paediatric orthopaedic surgeons. (orig.)

  9. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    Energy Technology Data Exchange (ETDEWEB)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin [VU University Medical Center, Department of Cardiology, and Institute for Cardiovascular Research (ICaR-VU), Amsterdam (Netherlands); Kuijer, Joost P.A. [VU University Medical Center, Department of Physics and Medical Technology, Amsterdam (Netherlands); Ven, Peter M. van de [VU University Medical Center, Department of Epidemiology and Biostatistics, Amsterdam (Netherlands); Meine, Mathias [University Medical Center, Department of Cardiology, Utrecht (Netherlands); Croisille, Pierre; Clarysse, Patrick [Univ Lyon, UJM-Saint-Etienne, INSA, CNRS UMR 5520, INSERM U1206, CREATIS, Saint-Etienne (France)

    2017-12-15

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)
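The core of the SLICE measure, converting frame-to-frame segment length into strain, reduces to the standard Lagrangian strain formula. A minimal sketch (the segment lengths below are made-up numbers, not patient data):

```python
import numpy as np

def segment_strain(lengths):
    """Lagrangian strain of a myocardial segment from its length per cine frame.

    lengths: segment length (e.g. in mm) at each cardiac phase; the first
    frame (end-diastole) is taken as the reference length L0.
    strain(t) = (L(t) - L0) / L0, negative for circumferential shortening.
    """
    L = np.asarray(lengths, dtype=float)
    return (L - L[0]) / L[0]

# A segment shortening from 60 mm to 48 mm reaches -20% peak strain.
s = segment_strain([60.0, 55.0, 48.0, 52.0, 60.0])
print(s.min())  # -0.2
```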

  10. Switching EKF technique for rotor and stator resistance estimation in speed sensorless control of IMs

    Energy Technology Data Exchange (ETDEWEB)

    Barut, Murat; Bogosyan, Seta [University of Alaska Fairbanks, Department of Electrical and Computer Engineering, Fairbanks, AK 99775 (United States); Gokasan, Metin [Istanbul Technical University, Electrical and Electronic Engineering Faculty, 34390 Istanbul (Turkey)

    2007-12-15

    High performance speed sensorless control of induction motors (IMs) calls for estimation and control schemes that offer solutions to parameter uncertainties as well as to difficulties involved with accurate flux/velocity estimation at very low and zero speed. In this study, a new EKF based estimation algorithm is proposed for the solution of both problems and is applied in combination with speed sensorless direct vector control (DVC). The technique is based on the consecutive execution of two EKF algorithms, by switching from one algorithm to another at every n sampling periods. The number of sampling periods, n, is determined based on the desired system performance. The switching EKF approach, thus applied, provides an accurate estimation of an increased number of parameters than would be possible with a single EKF algorithm. The simultaneous and accurate estimation of rotor, R{sub r}{sup '} and stator, R{sub s} resistances, both in the transient and steady state, is an important challenge in speed sensorless IM control and reported studies achieving satisfactory results are few, if any. With the proposed technique in this study, the sensorless estimation of R{sub r}{sup '} and R{sub s} is achieved in transient and steady state and in both high and low speed operation while also estimating the unknown load torque, velocity, flux and current components. The performance demonstrated by the simulation results at zero speed, as well as at low and high speed operation is very promising when compared with individual EKF algorithms performing either R{sub r}{sup '} or R{sub s} estimation or with the few other approaches taken in past studies, which require either signal injection and/or a change of algorithms based on the speed range. The results also motivate utilization of the technique for multiple parameter estimation in a variety of control methods. (author)

  11. Estimating numbers of greater prairie-chickens using mark-resight techniques

    Science.gov (United States)

    Clifton, A.M.; Krementz, D.G.

    2006-01-01

    Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km² and 87 (CI 82-94) on 73.6 km². The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
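The IEJHE used in this study is more involved, but the basic mark-resight logic can be illustrated with Chapman's bias-corrected Lincoln-Petersen estimator (the counts below are invented for illustration, not the study's data):

```python
def chapman_estimate(marked, sighted, marked_resighted):
    """Chapman's bias-corrected Lincoln-Petersen mark-resight estimator.

    marked:           animals carrying marks (e.g. radio-tagged)
    sighted:          animals observed in a resighting survey
    marked_resighted: marked animals among those sighted
    """
    return (marked + 1) * (sighted + 1) / (marked_resighted + 1) - 1

# 50 marked birds, 60 sighted, 20 of the sightings carried marks:
print(round(chapman_estimate(50, 60, 20), 1))  # 147.1
```

The intuition: the fraction of marked birds among the sightings estimates the fraction of the whole population that is marked, so the population size scales the known number of marks accordingly.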

  12. Full-length, glycosylated NSP4 is localized to plasma membrane caveolae by a novel raft isolation technique.

    Science.gov (United States)

    Storey, Stephen M; Gibbons, Thomas F; Williams, Cecelia V; Parr, Rebecca D; Schroeder, Friedhelm; Ball, Judith M

    2007-06-01

    Rotavirus NSP4, initially characterized as an endoplasmic reticulum intracellular receptor, is a multifunctional viral enterotoxin that induces diarrhea in murine pups. There have been recent reports of the secretion of a cleaved NSP4 fragment (residues 112 to 175) and of the association of NSP4 with LC3-positive autophagosomes, raft membranes, and microtubules. To determine if NSP4 traffics to a specific subset of rafts at the plasma membrane, we isolated caveolae from plasma membrane-enriched material that yielded caveola membranes free of endoplasmic reticulum and nonraft plasma membrane markers. Analyses of the newly isolated caveolae from rotavirus-infected MDCK cells revealed full-length, high-mannose glycosylated NSP4. The lack of Golgi network-specific processing of the caveolar NSP4 glycans supports studies showing that NSP4 bypasses the Golgi apparatus. Confocal imaging showed the colocalization of NSP4 with caveolin-1 early and late in infection, elucidating the temporal and spatial NSP4-caveolin-1 association during infection. These data were extended with fluorescence resonance energy transfer analyses that confirmed the NSP4 and caveolin-1 interaction in that the specific fluorescently tagged antibodies were within 10 nm of each other during infection. Cells transfected with NSP4 showed patterns of staining and colocalization with caveolin-1 similar to those of infected cells. This study presents an endoplasmic reticulum contaminant-free caveola isolation protocol; describes the presence of full-length, endoglycosidase H-sensitive NSP4 in plasma membrane caveolae; provides confirmation of the NSP4-caveolin interaction in the presence and absence of other viral proteins; and provides a final plasma membrane destination for Golgi network-bypassing NSP4 transport.

  13. Error analysis of ultrasonic tissue doppler velocity estimation techniques for quantification of velocity and strain.

    Science.gov (United States)

    Bennett, Michael J; McLaughlin, Steve; Anderson, Tom; McDicken, W Norman

    2007-01-01

    Recent work in the field of Doppler tissue imaging has focused mainly on the quantification of results involving the use of techniques of strain and strain-rate imaging. These results are based on measuring a velocity gradient between two points, a known distance apart, in the region-of-interest. Although many recent publications have demonstrated the potential of this technique in clinical terms, the method still suffers from low repeatability. The work presented here demonstrates, through the use of a rotating phantom arrangement and a custom developed single element ultrasound system, that this is a consequence of the fundamental accuracy of the technique used to estimate the original velocities. Results are presented comparing the performance of the conventional Kasai autocorrelation velocity estimator with those obtained using time domain cross-correlation and the complex cross-correlation model based estimator. The results demonstrate that the complex cross-correlation model based technique is able to offer lower standard deviations of the velocity gradient estimations compared with the Kasai algorithm.
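The Kasai autocorrelation estimator compared in this record is standard and compact: the mean Doppler shift is the phase angle of the lag-one autocorrelation of the complex baseband signal. A sketch on a synthetic single-gate signal (the pulse and transducer parameters are illustrative):

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Kasai lag-one autocorrelation velocity estimator for pulsed Doppler.

    iq:  complex baseband samples, one per pulse, for a single range gate
    prf: pulse repetition frequency [Hz]; f0: transmit frequency [Hz]
    c:   speed of sound [m/s] (1540 m/s is the usual soft-tissue value)
    """
    iq = np.asarray(iq, dtype=complex)
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))        # lag-1 autocorrelation
    mean_doppler = np.angle(r1) * prf / (2.0 * np.pi)
    return c * mean_doppler / (2.0 * f0)          # Doppler equation

# Synthetic gate moving at 0.1 m/s: fd = 2*v*f0/c
prf, f0, v_true = 5e3, 5e6, 0.1
fd = 2 * v_true * f0 / 1540.0
n = np.arange(64)
iq = np.exp(2j * np.pi * fd * n / prf)
v_est = kasai_velocity(iq, prf, f0)
print(v_est)  # ≈ 0.1
```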

  14. Adaptive finite element techniques for the Maxwell equations using implicit a posteriori error estimates

    NARCIS (Netherlands)

    Harutyunyan, D.; Izsak, F.; van der Vegt, Jacobus J.W.; Bochev, Mikhail A.

    For the adaptive solution of the Maxwell equations on three-dimensional domains with N´ed´elec edge finite element methods, we consider an implicit a posteriori error estimation technique. On each element of the tessellation an equation for the error is formulated and solved with a properly chosen

  15. Comparison of Available Bandwidth Estimation Techniques in Packet-Switched Mobile Networks

    DEFF Research Database (Denmark)

    López Villa, Dimas; Ubeda Castellanos, Carlos; Teyeb, Oumer Mohammed

    2006-01-01

    of information regarding the available bandwidth in the transport network, as it could end up being the bottleneck rather than the air interface. This paper provides a comparative study of three well known available bandwidth estimation techniques, i.e. TOPP, SLoPS and pathChirp, taking into account...

  16. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    are too high to count manually, and stereology is now the technique of choice whenever estimates of three-dimensional quantities need to be extracted from measurements on two-dimensional sections. All stereological methods are in principle unbiased; however, they rely on proper knowledge about...
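Although the abstract above is truncated, the optical fractionator named in the title rests on a simple identity: the total cell number is the raw disector count divided by the product of the sampling fractions. A sketch with hypothetical fractions (not the study's sampling scheme):

```python
def optical_fractionator(q_counted, ssf, asf, hsf):
    """Optical fractionator estimate of total cell number.

    q_counted: cells counted in the optical disectors (sum of Q-)
    ssf: section sampling fraction (sections sampled / total sections)
    asf: area sampling fraction (counting-frame area / x-y step area)
    hsf: height sampling fraction (disector height / section thickness)
    """
    return q_counted / (ssf * asf * hsf)

# e.g. 150 cells counted with ssf = 1/10, asf = 1/25, hsf = 1/2:
print(optical_fractionator(150, 1 / 10, 1 / 25, 1 / 2))  # ≈ 75000
```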

  17. Development of a surface insolation estimation technique suitable for application of polar orbiting satellite data

    Science.gov (United States)

    Davis, P. A.; Penn, L. M. (Principal Investigator)

    1981-01-01

    A technique is developed for the estimation of total daily insolation on the basis of data derivable from operational polar-orbiting satellites. Although surface insolation and meteorological observations are used in the development, the algorithm is constrained in application by the infrequent daytime polar-orbiter coverage.

  18. Estimation of Anti-HIV Activity of HEPT Analogues Using MLR, ANN, and SVM Techniques

    Directory of Open Access Journals (Sweden)

    Basheerulla Shaik

    2013-01-01

    value than those of MLR and SVM techniques. Rm2 metrics and ridge regression analysis indicated that the proposed four-variable model with MATS5e, RDF080u, T(O⋯O), and MATS5m as correlating descriptors is the best for estimating the anti-HIV activity (log 1/C) of the present set of compounds.

  19. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A Non-linear adaptive decision based algorithm with robust motion estimation technique is proposed for removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search Motion Estimation technique uses Least Median Square as a cost function, which shows improved performance compared with other motion estimation techniques using existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
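The switching idea, detecting the corruption type per sample and picking an appropriate filter, can be illustrated with a much-simplified 1-D sketch. The thresholds and the impulse model (0/255 extremes) are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def switching_denoise(signal, var_threshold=100.0, window=5):
    """Much-simplified 1-D sketch of a switching decision-based filter.

    Samples at the extremes (0 or 255) are treated as impulse noise and
    replaced by the median of the uncorrupted samples in their window;
    otherwise, if the local variance of those samples is high (suggesting
    Gaussian noise), a mean filter is applied; clean samples pass through.
    """
    x = np.asarray(signal, dtype=float)
    out = x.copy()
    half = window // 2
    for i in range(len(x)):
        win = x[max(0, i - half):i + half + 1]
        good = win[(win != 0.0) & (win != 255.0)]
        if x[i] in (0.0, 255.0):                   # impulse detected
            out[i] = np.median(good if good.size else win)
        elif good.size and np.var(good) > var_threshold:
            out[i] = np.mean(good)                 # smooth Gaussian noise
    return out

sig = np.array([100, 102, 255, 101, 99, 0, 100, 103], dtype=float)
den = switching_denoise(sig)
print(den)  # impulses at indices 2 and 5 replaced by local medians
```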

  20. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    Science.gov (United States)

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain imperative in the identification process in spite of the advent and accomplishment of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive as well as revise the available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches: morphological, metric, molecular and radiographic methods for sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones, and hence such direct methods of sex estimation are considered more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods have been shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Estimation of Anti-HIV Activity of HEPT Analogues Using MLR, ANN, and SVM Techniques.

    Science.gov (United States)

    Shaik, Basheerulla; Zafar, Tabassum; Agrawal, Vijay K

    2013-01-01

    The present study deals with the estimation of the anti-HIV activity (log 1/C) of a large set of 107 HEPT analogues using molecular descriptors which are responsible for the anti-HIV activity. The study has been undertaken with three techniques: MLR, ANN, and SVM. The MLR model fits the train set with R(2) = 0.856, while ANN and SVM give R(2) = 0.850 and 0.874, respectively. The SVM model shows improvement in estimating the anti-HIV activity of the trained data, while on the test set ANN has a higher R(2) value than the MLR and SVM techniques. Rm(2) metrics and ridge regression analysis indicated that the proposed four-variable model with MATS5e, RDF080u, T(O⋯O), and MATS5m as correlating descriptors is the best for estimating the anti-HIV activity (log 1/C) of the present set of compounds.

  2. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    Science.gov (United States)

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.

  3. An evaluation of population index and estimation techniques for tadpoles in desert pools

    Science.gov (United States)

    Jung, Robin E.; Dayton, Gage H.; Williamson, Stephen J.; Sauer, John R.; Droege, Sam

    2002-01-01

    Using visual (VI) and dip net indices (DI) and double-observer (DOE), removal (RE), and neutral red dye capture-recapture (CRE) estimates, we counted, estimated, and censused Couch's spadefoot (Scaphiopus couchii) and canyon treefrog (Hyla arenicolor) tadpole populations in Big Bend National Park, Texas. Initial dye experiments helped us determine appropriate dye concentrations and exposure times to use in mesocosm and field trials. The mesocosm study revealed higher tadpole detection rates, more accurate population estimates, and lower coefficients of variation among pools compared to those from the field study. In both mesocosm and field studies, CRE was the best method for estimating tadpole populations, followed by DOE and RE. In the field, RE, DI, and VI often underestimated populations in pools with higher tadpole numbers. DI improved with increased sampling. Larger pools supported larger tadpole populations, and tadpole detection rates in general decreased with increasing pool volume and surface area. Hence, pool size influenced bias in tadpole sampling. Across all techniques, tadpole detection rates differed among pools, indicating that sampling bias was inherent and techniques did not consistently sample the same proportion of tadpoles in each pool. Estimating bias (i.e., calculating detection rates) therefore was essential in assessing tadpole abundance. Unlike VI and DOE, DI, RE, and CRE could be used in turbid waters in which tadpoles are not visible. The tadpole population estimates we used accommodated differences in detection probabilities in simple desert pool environments but may not work in more complex habitats.
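The removal estimator (RE) evaluated in this record has a simple closed form in the two-pass case (Seber-Le Cren): with equal effort on both passes, the population size follows from the decline in catch. A sketch with invented counts:

```python
def two_pass_removal(n1, n2):
    """Two-pass removal estimate of population size and capture probability.

    n1, n2: animals removed on the first and second passes with equal
    capture effort; the estimator requires n1 > n2.
    """
    if n1 <= n2:
        raise ValueError("removal estimator requires n1 > n2")
    n_hat = n1 * n1 / (n1 - n2)     # estimated population size
    p_hat = (n1 - n2) / n1          # estimated per-pass capture probability
    return n_hat, p_hat

# 60 tadpoles removed on pass one, 30 on pass two:
print(two_pass_removal(60, 30))  # (120.0, 0.5)
```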

  4. Satellite angular velocity estimation based on star images and optical flow techniques.

    Science.gov (United States)

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-09-25

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can be thus used to also deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than the other two components.
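The least-squares step of such a technique, recovering the angular velocity from the apparent motion of star directions, can be sketched as follows. The data are synthetic and noise-free, and the sign convention for the apparent star motion in the body frame is an assumption of this sketch:

```python
import numpy as np

def skew(u):
    """Skew-symmetric matrix such that skew(u) @ w == np.cross(u, w)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def estimate_omega(dirs, rates):
    """Least-squares angular velocity from star direction rates.

    dirs:  (n, 3) unit vectors to stars in the sensor frame
    rates: (n, 3) their measured time derivatives (e.g. from optical flow)
    For an inertially fixed star, du/dt = -omega x u = u x omega, so
    skew(u) @ omega = du/dt; stack all stars and solve in least squares.
    """
    A = np.vstack([skew(u) for u in dirs])
    b = np.asarray(rates, dtype=float).ravel()
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# Recover a known body rate from synthetic star direction rates.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(10, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
omega_true = np.array([0.01, -0.02, 0.005])   # rad/s
rates = np.cross(dirs, omega_true)            # u x omega = -omega x u
omega_est = estimate_omega(dirs, rates)
print(omega_est)  # ≈ [0.01, -0.02, 0.005]
```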

  5. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    Directory of Open Access Journals (Sweden)

    Giancarmine Fasano

    2013-09-01

    Full Text Available An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can be thus used to also deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than the other two components.

  6. A Knowledge-Based Step Length Estimation Method Based on Fuzzy Logic and Multi-Sensor Fusion Algorithms for a Pedestrian Dead Reckoning System

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lai

    2016-05-01

    Full Text Available The demand for pedestrian navigation has increased along with the rapid progress in mobile and wearable devices. This study develops an accurate and usable Step Length Estimation (SLE) method for a Pedestrian Dead Reckoning (PDR) system with features including a wide range of step lengths, a self-contained system, and real-time computing, based on the multi-sensor fusion and Fuzzy Logic (FL) algorithms. The wide-range SLE developed in this study was achieved by using a knowledge-based method to model the walking patterns of the user. The input variables of the FL are step strength and frequency, and the output is the estimated step length. Moreover, a waist-mounted sensor module has been developed using low-cost inertial sensors. Since low-cost sensors suffer from various errors, a calibration procedure has been utilized to improve accuracy. The proposed PDR scheme in this study demonstrates its ability to be implemented on waist-mounted devices in real time and is suitable for the indoor and outdoor environments considered in this study without the need for map information or any pre-installed infrastructure. The experiment results show that the maximum distance error was within 1.2% of 116.51 m in an indoor environment and was 1.78% of 385.2 m in an outdoor environment.
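The fuzzy-logic mapping from gait features to step length can be illustrated with a toy one-input controller. The membership functions, rule outputs, and step-length values below are invented for illustration and are not the paper's tuned values (the paper also uses step strength as a second input):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_step_length(step_freq):
    """Toy one-input fuzzy estimator: step frequency [Hz] -> step length [m].

    Two hypothetical rules:
      IF frequency is slow THEN step length is short (0.5 m)
      IF frequency is fast THEN step length is long  (0.8 m)
    Defuzzified as the membership-weighted average of the rule outputs.
    """
    mu_slow = tri(step_freq, 0.0, 1.0, 2.0)
    mu_fast = tri(step_freq, 1.0, 2.0, 3.0)
    total = mu_slow + mu_fast
    if total == 0.0:
        return 0.65                      # no rule fires: fall back to midpoint
    return (mu_slow * 0.5 + mu_fast * 0.8) / total

print(fuzzy_step_length(1.5))  # ≈ 0.65, halfway between the two rules
```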

  7. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    Science.gov (United States)

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities such as construction planning, prestressing operations, proportioning of new mixtures, and quality assurance. Regression techniques are the most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster-regression technique is applied to estimate the compressive strength of concrete, and a novel method is proposed for predicting it. The objective is to demonstrate that clustering combined with regression yields smaller prediction errors when estimating concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics; in the second stage, regression techniques are applied to these clusters (groups) to predict the compressive strength within each cluster. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy C-means clustering algorithm performs better than K-means.
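    The two-stage scheme — cluster the mix designs, then fit one regression per cluster — can be sketched as follows. This is a minimal NumPy illustration with a hard (K-means) first stage and linear least squares per cluster; it is not the authors' implementation, and the deterministic farthest-point initialisation is an assumption for reproducibility:

    ```python
    import numpy as np

    def kmeans(X, k=2, iters=20):
        """Minimal Lloyd's k-means with deterministic farthest-point init."""
        C = [X[0]]
        for _ in range(1, k):
            d = np.min([((X - c) ** 2).sum(-1) for c in C], axis=0)
            C.append(X[np.argmax(d)])
        C = np.array(C, dtype=float)
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    C[j] = X[labels == j].mean(axis=0)
        return labels, C

    def fit_cluster_regression(X, y, k=2):
        """Stage 1: cluster the samples; stage 2: fit one linear model
        (with intercept) per cluster."""
        labels, C = kmeans(X, k)
        models = {}
        for j in range(k):
            m = labels == j
            A = np.hstack([X[m], np.ones((m.sum(), 1))])
            models[j], *_ = np.linalg.lstsq(A, y[m], rcond=None)
        return C, models

    def predict_cluster_regression(X, C, models):
        """Route each sample to its nearest centroid's regression model."""
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        return np.array([np.append(x, 1.0) @ models[j] for x, j in zip(X, labels)])
    ```

    Because each local model only has to fit one regime of the data, the per-cluster fits can be much tighter than a single global regression.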

  8. A Comparison On Efficiency Of Sampling Techniques For Sediment Estimation In Rivers

    Science.gov (United States)

    Arabkhedri, M.; Lai, F. S.; Akma, I. Noor; Roslan, M. K. Mohamad

    2008-01-01

    In this study, we compared the efficiency of adaptive cluster sampling against SALT (a variant of probability-proportional-to-size sampling) and simple random sampling for estimating the suspended sediment yield of the Gorgan-Rood River, Iran. About 300 sample sets, grouped by size and replicate, were extracted from five years of daily suspended sediment records using the three sampling methods under study: adaptive, SALT, and calendar-based. Total loads were then estimated with the appropriate sampling estimators and compared among themselves and with the observed total load. The results suggest that the average estimates of all three sampling techniques are satisfactorily accurate; however, adaptive cluster sampling and SALT were more precise than simple random sampling, and both reached optimum accuracy and precision at an average of roughly weekly sampling. These results support the application of adaptive cluster sampling in river monitoring programs.
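    The workhorse behind probability-proportional-to-size schemes such as SALT is the textbook Hansen–Hurwitz estimator of a population total. A sketch under the assumption of with-replacement PPS sampling (the actual SALT estimator has additional bookkeeping not shown here):

    ```python
    import numpy as np

    def hansen_hurwitz_total(sample_y, sample_p):
        """Unbiased estimator of a population total from a with-replacement
        PPS sample: T_hat = (1/n) * sum(y_i / p_i), where p_i is the
        per-draw selection probability of unit i."""
        sample_y = np.asarray(sample_y, dtype=float)
        sample_p = np.asarray(sample_p, dtype=float)
        return float((sample_y / sample_p).sum() / len(sample_y))
    ```

    If selection probabilities are exactly proportional to the loads (the ideal case), every draw returns the true total, which is why PPS designs are so precise when the size measure correlates with sediment load.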

  9. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    Science.gov (United States)

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs; therefore, a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. The aims were to develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported into a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80; for intra-observer agreement, kappa ranged from 0.80 to 0.89. However, two-stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years, respectively. The performance to

  10. Estimation of a Moving Heat Source due to a Micromilling Process Using the Modified TFBGF Technique

    Directory of Open Access Journals (Sweden)

    Sidney Ribeiro

    2018-01-01

    Full Text Available Moving heat sources arise in numerous engineering problems, such as welding and machining processes, heat treatment, and biological heating. In all these cases, identifying the heat input is an important factor in optimizing the process. The aim of this study is to investigate the heat flux delivered to a workpiece during a micromilling process. Temperature measurements were obtained with a thermocouple at an accessible region of the workpiece surface while micromilling a small channel. The analytical solution is calculated from a 3D transient heat conduction model with a moving heat source, called the direct problem. The moving heat source is estimated with the Transfer Function Based on Green's Function (TFBGF) method, which rests on Green's functions and the equivalence between thermal and dynamic systems; the technique is simple, iteration-free, and extremely fast. From temperatures at accessible regions, the heat flux can be estimated by an inverse procedure based on the Fast Fourier Transform. A micromilling test on 6365 aluminium alloy was carried out and the heat delivered to the workpiece was estimated. Estimating the heat input without recourse to an optimization technique is the great advantage of the proposed method.
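    The core of a transfer-function inverse procedure is that, for a linear time-invariant conduction model, the measured temperature is the convolution of the heat input with the system's impulse response (its Green's function at the sensor location), so the flux can be recovered by FFT division. A heavily simplified 1-D sketch of this idea — not the paper's 3D moving-source formulation — with a small Tikhonov term to tame noise:

    ```python
    import numpy as np

    def estimate_heat_flux(T, h, eps=1e-8):
        """Recover the heat-input signal q from a measured temperature rise T,
        assuming T = h * q (circular convolution) with impulse response h.
        The spectral division is Tikhonov-regularised by eps."""
        H = np.fft.rfft(h)
        Tf = np.fft.rfft(T)
        Q = Tf * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularised deconvolution
        return np.fft.irfft(Q, n=len(T))
    ```

    Because the inversion is a single pair of FFTs, there is no iterative optimization loop, which mirrors the speed advantage claimed for the TFBGF approach.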

  11. Upright MRI measurement of mechanical axis and frontal plane alignment as a new technique: a comparative study with weight bearing full length radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Liodakis, Emmanouil; Kenawey, Mohamed; Doxastaki, Iosifina; Krettek, Christian; Haasper, Carl; Hankemeier, Stefan [Medical School Hannover, Department of Trauma Surgery, Hannover (Germany)

    2011-07-15

    The purpose of this prospective study was to investigate the practicality, accuracy, and reliability of upright MR imaging as a new radiation-free technique for measuring the mechanical axis. We used upright MRI in 15 consecutive patients (30 limbs, 44.7 ± 20.6 years old) to measure mechanical axis deviation (MAD), hip-knee-ankle (HKA) angle, leg length, and the remaining frontal plane alignment angles according to Paley (mLPFA, mLDFA, mMPTA, mLDTA, JLCA). The measurements were compared to weight bearing full length radiographs, which are considered the standard of reference for planning corrective surgery. FDA-approved medical planning software (MediCAD) was used for the measurements. Intra- and inter-observer reproducibility, expressed as mean absolute differences, was also calculated for both methods. The correlation coefficient between angles determined with upright MRI and weight bearing full length radiographs was high for the mLPFA, mLDFA, mMPTA, mLDTA, and HKA angles (r > 0.70). Mean interobserver and intraobserver agreements for upright MRI were also very high (r > 0.89). Leg length and MAD were significantly underestimated by MRI (-3.2 ± 2.2 cm, p < 0.001 and -6.2 ± 4.4 mm, p = 0.006, respectively). With the exception of the underestimation of leg length and MAD, upright MR imaging measurements of the frontal plane angles are precise and produce reliable, reproducible results. (orig.)

  12. Comparison of techniques for the estimation of daily global irradiation and a new technique for the estimation of hourly global irradiation

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-03-01

    Global irradiation and sunshine duration data recorded at Trieste (CNR, Istituto Talassografico di Trieste) during the 11-year period 1972-1982 are analyzed using the classical Angstrom equation H = H₀(a + bS/S₀) and the equation H′ = H₀(a + bS/S₀′), which incorporates the effects of (i) multiple reflections and (ii) the sunshine recorder chart not burning at small solar elevations. The regression constants and correlation coefficients are calculated using each yearly data set separately. Correlation coefficients of 0.89 or more are obtained for all 11 years. Substantial unsystematic scatter is obtained in the values of a as well as b for different years; the use of the equation H′ = H₀(a + bS/S₀′) is found neither to decrease this scatter nor to give better correlation coefficients. Hourly global irradiation data are also analyzed. The 11-year mean values of the hourly/daily ratio are plotted against solar time for each of the 12 months of the year, and the normal distribution curve is found to fit the data closely. The mean of the normal distribution is taken at solar noon, and the σ values are obtained for each month by matching the experimental and theoretical values at solar noon. The σ values so obtained bear an excellent linear correlation (r = 0.996) with S₀, viz. σ = 0.461 + 0.192·S₀. This provides a simple and elegant technique for estimating hourly irradiation from daily values and may be of universal applicability; it enables the estimation of global irradiation for any smaller interval of time as well.
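    The hourly-from-daily technique amounts to spreading the daily total over a normal curve centred on solar noon with σ = 0.461 + 0.192·S₀. A small sketch of that decomposition (the hourly grid and normalisation convention are assumptions for illustration):

    ```python
    import numpy as np

    def hourly_fractions(day_length_S0, hours=None):
        """Hourly/daily irradiation ratios modelled as a normal curve centred
        on solar noon, with sigma = 0.461 + 0.192 * S0 (S0 = day length in
        hours), normalised so the fractions sum to one."""
        sigma = 0.461 + 0.192 * day_length_S0
        if hours is None:
            hours = np.arange(-12, 13)   # solar time relative to noon
        r = np.exp(-hours ** 2 / (2.0 * sigma ** 2))
        return r / r.sum()

    def hourly_irradiation(daily_H, day_length_S0):
        """Distribute a daily global irradiation total over the hours of the day."""
        return daily_H * hourly_fractions(day_length_S0)
    ```

    Longer days give a larger σ, so the same daily total is spread more evenly across the daylight hours, matching the seasonal behaviour the fitted curves describe.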

  13. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

    Full Text Available Cost estimation is the most important preliminary process in any construction project; accordingly, construction cost estimation has the lion's share of the research effort in construction management. In this paper, we analyse proposals for construction cost estimation from the last 10 years. To implement this survey, we propose and apply a methodology that consists of two parts. The first part concerns data collection, for which we have chosen specialist journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals, for which four questions have been set: Which intelligent technique is used? How were the data collected? How are the results validated? And which construction cost estimation factors have been used? This survey produces two main contributions. The first is the identification of the research gap in this area, which has not been fully covered by previous proposals. The second is the proposal and highlighting of future directions for forthcoming work, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the methodology itself, proposed here as a standard benchmark for construction cost estimation proposals, to be one of the contributions of this paper.

  14. Modified Open Suprapectoral EndoButton Tension Slide Tenodesis Technique of Long Head of Biceps with Restored Tendon Tension–Length

    Science.gov (United States)

    Prabhu, Jagadish; Faqi, Mohammed Khalid; Awad, Rashad Khamis; Alkhalifa, Fahad

    2017-01-01

    Background: The vast majority of biceps tendon ruptures occur at the proximal insertion and almost always involve the long head. There are several options for long head of biceps (LHB) tenodesis, each with its own advantages and disadvantages. We believe that the suprapectoral LHB tenodesis described in this article enables restoration of the anatomic length-tension relation in a technically reproducible manner, when the guidelines set forth in this article are followed, and restores biceps contour and function adequately with a low risk of complications. Method: We present the case of a young man who sustained a sudden jerk of his flexed right elbow while water skiing, resulting in complete rupture of the proximal end of the long head of biceps tendon. We describe a modified surgical technique of open suprapectoral LHB tenodesis using an EndoButton tension slide technique, reproducing an anatomic length-tension relationship. Results: By the end of one year, the patient had regained symmetrical muscle bulk, shape, and contour of the biceps compared to the other side. There were no signs of dislodgement or loosening of the EndoButton on follow-up radiographs. He regained full muscle power in the biceps without any of the complications associated with this technique, such as humeral fracture, infection, or nerve injury. Conclusion: This technique is a safe, reproducible, cost-effective, time-saving, and effective method that uses a small drill hole, conserving bone, minimizing trauma to the tendon, and decreasing postoperative complications. It requires no special instrumentation and is especially suitable for centers where arthroscopy facilities or training are not available. PMID:28567157

  15. Recursive estimation techniques for detection of small objects in infrared image data

    Science.gov (United States)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. The background noise is estimated using a weighted autocorrelation-matrix update, and the detection statistic is calculated recursively. A weighting factor gives the algorithm finite memory, allowing it to track nonstationary noise characteristics. The detection statistic is formed with a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise, and the probability of detection is described. Results on one- and two-dimensional infrared images are presented.
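    The weighted autocorrelation update and colored-noise matched filter can be sketched in a few lines. The class below is an illustrative reading of the abstract, not the authors' code; the forgetting factor `lam` is the "weighting factor" that gives the estimate finite memory:

    ```python
    import numpy as np

    class RecursiveDetector:
        """Matched filter for coloured noise with a forgetting-factor
        autocorrelation update R <- lam*R + x x^T, so the detector can
        track nonstationary backgrounds."""
        def __init__(self, template, lam=0.99, delta=1e-3):
            self.s = np.asarray(template, dtype=float)
            self.lam = lam
            self.R = delta * np.eye(len(self.s))   # regularised initial estimate

        def update(self, x):
            x = np.asarray(x, dtype=float)
            self.R = self.lam * self.R + np.outer(x, x)
            # detection statistic: template whitened by the current noise estimate
            return float(self.s @ np.linalg.solve(self.R, x))
    ```

    As `lam` decreases, old samples are forgotten faster, trading variance in the noise estimate for faster adaptation to a changing background.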

  16. Length of Distal Resection Margin after Partial Mesorectal Excision for Upper Rectal Cancer Estimated by Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Bondeven, Peter; Hagemann-Madsen, Rikke Hjarnø; Bro, Lise

    BACKGROUND: Rectal cancer requires surgery for cure. Partial mesorectal excision (PME) is suggested for tumours in the upper rectum and implies transection of the mesorectum perpendicular to the bowel a minimum of 5 cm below the tumour. Reports have shown distal mesorectal tumour spread of up to 5 cm from the primary tumour; therefore, guidelines for cancer of the upper rectum recommend PME with a distal resection margin (DRM) of at least 5 cm, or total mesorectal excision (TME). PME carries the hazard of removing less than 5 cm, leaving microscopic tumour cells that have spread in the mesorectum. Studies at our department have shown inadequate DRM in 75% of patients, as estimated by post-operative MRI of the pelvis and by measurements of the histopathological specimen. Correspondingly, a higher rate of local recurrence has been observed in patients surgically treated with PME for rectal cancer compared to TME...

  17. Techniques for estimating health care costs with censored data: an overview for the health services researcher

    Directory of Open Access Journals (Sweden)

    Wijeysundera HC

    2012-06-01

    Full Text Available Harindra C Wijeysundera, Xuesong Wang, George Tomlinson, Dennis T Ko, Murray D Krahn (Division of Cardiology, Schulich Heart Centre and Department of Medicine, Sunnybrook Health Sciences Centre; Toronto Health Economics and Technology Assessment (THETA) Collaborative; Institute of Health Policy, Management and Evaluation; Institute for Clinical Evaluative Sciences; Leslie Dan Faculty of Pharmacy, University of Toronto; Toronto, Ontario, Canada). Objective: The aim of this study was to review statistical techniques for estimating mean population cost from health care cost data that are right censored because complete follow-up until death cannot be achieved. The target audience is health service researchers without an advanced statistical background. Methods: Data were sourced from longitudinal heart failure costs from Ontario, Canada, with administrative databases used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). Mean health care costs over 1080 days of follow-up were calculated using naïve estimators, such as the full-sample and uncensored-case estimators; reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results: Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator underestimated mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion: The authors recommend against the use of full-sample estimators when censoring is present.
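    The simple inverse-probability-weighted estimator divides each completely observed cost by the Kaplan–Meier probability of remaining uncensored at that subject's completion time, restoring the weight of costs that censoring removed. An illustrative sketch (this is the textbook Bang–Tsiatis form, simplified; it is not the authors' analysis code):

    ```python
    import numpy as np

    def km_censoring_survival(times, uncensored, t):
        """Kaplan-Meier estimate of the censoring survival K(t) = P(C > t);
        here a 'censoring event' is a subject lost to follow-up."""
        times = np.asarray(times, dtype=float)
        event = ~np.asarray(uncensored, dtype=bool)   # censoring is the event
        K = 1.0
        for u in np.sort(np.unique(times[event])):
            if u > t:
                break
            at_risk = np.sum(times >= u)
            d = np.sum(event & (times == u))
            K *= 1.0 - d / at_risk
        return K

    def ipw_mean_cost(costs, times, uncensored):
        """Simple weighted estimator of mean cost: complete observations are
        reweighted by the inverse probability of being uncensored."""
        costs = np.asarray(costs, dtype=float)
        unc = np.asarray(uncensored, dtype=bool)
        w = np.array([1.0 / km_censoring_survival(times, uncensored, t)
                      for t in times])
        return float(np.sum(unc * costs * w) / len(costs))
    ```

    With no censoring, every weight is one and the estimator reduces to the plain sample mean; as censoring grows, the surviving complete cases are up-weighted to stand in for the censored ones.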

  18. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    Energy Technology Data Exchange (ETDEWEB)

    Zarzalejo, L.F.; Ramirez, L.; Polo, J. [DER-CIEMAT, Madrid (Spain). Renewable Energy Dept.

    2005-07-01

    Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. The artificial intelligence models showed generally better behaviour. (author)

  19. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  20. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  1. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    International Nuclear Information System (INIS)

    Zarzalejo, Luis F.; Ramirez, Lourdes; Polo, Jesus

    2005-01-01

    Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. The artificial intelligence models showed generally better behaviour.

  2. Car sharing demand estimation and urban transport demand modelling using stated preference techniques

    OpenAIRE

    Catalano, Mario; Lo Casto, Barbara; Migliore, Marco

    2008-01-01

    The research deals with the use of the stated preference (SP) technique and transport demand modelling to analyse travel mode choice behaviour for commuting urban trips in Palermo, Italy. The principal aim of the study was the calibration of a demand model to forecast the modal split of urban transport demand, allowing for the possibility of using innovative transport systems like car sharing and car pooling. In order to estimate the demand model parameters, a specific survey was carried ...

  3. A comparison of low-cost monocular vision techniques for pothole distance estimation

    CSIR Research Space (South Africa)

    Nienaber, S

    2015-12-01

    Full Text Available ... in such images [1]. As none existed, the authors have compiled and made publicly available a set of annotated images with potholes taken from a driver's vantage point [2]. Detecting distant potholes is considerably more challenging than detecting nearby ones. ... it is possible to determine the distance of actors and their limbs relative to other actors [4]. Depth estimation approaches are generally either active or passive. Active techniques generally analyse reflections of sound or light waves emitted...

  4. Hybrid run length limited code and pre-emphasis technique to reduce wander and jitter on on-off keying nonreturn-to-zero visible light communication systems

    Science.gov (United States)

    Lin, Tong; Huang, Zhitong; Ji, Yuefeng

    2016-11-01

    On bandwidth-limited visible light communication (VLC) transmission systems, direct current (DC) component loss, DC-unbalance of the code, and severe high-frequency attenuation cause baseline wander (BLW) and data-dependent jitter (DDJ), which deteriorate signal quality and result in a higher bit error rate (BER). We present a scheme based on hybrid run-length-limited codes and pre-emphasis techniques to decrease the intersymbol interference caused by BLW and DDJ. We experimentally demonstrate, utilizing 1-binary-digit-into-2-binary-digits (1B2B) codes and postcursor pre-emphasis techniques, that the impacts of BLW and DDJ on on-off keying nonreturn-to-zero VLC systems are alleviated, and a 130 Mb/s data transmission rate with a BER below 10^-4 can be achieved.
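    A 1B2B code maps each data bit to two channel bits so that every symbol is DC-balanced and runs stay short, which is what suppresses baseline wander. The paper does not spell out its mapping here, so the sketch below uses the simplest Manchester-style 1B2B table (0 → 01, 1 → 10) purely as an illustration:

    ```python
    def encode_1b2b(bits):
        """Manchester-style 1B2B line code: 0 -> 01, 1 -> 10. Every output
        pair carries one 1 and one 0 (no DC component), and runs of identical
        channel bits are limited to length 2."""
        table = {0: (0, 1), 1: (1, 0)}
        out = []
        for b in bits:
            out.extend(table[b])
        return out

    def max_run_length(seq):
        """Longest run of identical symbols in a non-empty sequence."""
        best = run = 1
        for a, b in zip(seq, seq[1:]):
            run = run + 1 if a == b else 1
            best = max(best, run)
        return best
    ```

    The price of the guaranteed run-length bound is a halved data rate (two channel bits per data bit), which is why such codes are paired with pre-emphasis rather than used alone.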

  5. First- and zero-sound velocity and Fermi liquid parameter F2s in liquid 3He determined by a path length modulation technique

    International Nuclear Information System (INIS)

    Hamot, P.J.; Lee, Y.; Sprague, D.T.

    1995-01-01

    We have measured the velocity of first and zero sound in liquid ³He at 12.6 MHz over the pressure range 0.6 to 14.5 bar using a path length modulation technique that we have recently developed. From these measurements, the pressure-dependent value of the Fermi liquid parameter F2s was calculated and found to be larger at low pressure than previously reported. These new values of F2s indicate that transverse zero sound is a propagating mode at all pressures, and they are important for the interpretation of the frequencies of order-parameter collective modes in the superfluid phases. The new acoustic technique permits measurements in regimes of very high attenuation, with a sensitivity in phase velocity of about 10 ppm achieved by a feedback arrangement. The sound velocity is thus measured continuously throughout the highly attenuating crossover (ωτ ∼ 1) regime, even at the lowest pressures.

  6. A review of techniques for the estimation of magnitude and timing of exhumation in offshore basins

    Science.gov (United States)

    Corcoran, D. V.; Doré, A. G.

    2005-10-01

    Exhumation, the removal of overburden resulting from the vertical displacement of rocks from maximum burial depth, occurs at both regional and local scales in offshore sedimentary basins and has important implications for the prospectivity of petroliferous basins. In these basins, issues to be addressed by the petroleum geologist include, the timing of thermal 'switch-off' of source rock units, the compactional and diagenetic constraints imposed by the maximum burial depth of reservoirs (prior to uplift), the physical and mechanical characteristics of cap-rocks during and post-exhumation, the structural evolution of traps and the hydrocarbon emplacement history. Central to addressing these issues is the geoscientist's ability to identify exhumation events, estimate their magnitude and deduce their timing. A variety of individual techniques is available to assess the exhumation of sedimentary successions, but generic categorisation indicates that 'point' measurements of rock displacement, in the offshore arena, are made with respect to four frames of reference — tectonic, thermal, compactional or stratigraphic. These techniques are critically reviewed in the context of some of the exhumed offshore sedimentary basins peripheral to the Irish landmass. This review confirms that large uncertainty is associated with estimates from individual techniques but that the integration of seismic interpretation and regional stratigraphic data provides valuable constraints on estimates from the more indirect tectonic, thermal and compactional methods.

  7. The application of total vertical projections for the unbiased estimation of the length of blood vessels and other structures by magnetic resonance imaging.

    Science.gov (United States)

    Roberts, N; Howard, C V; Cruz-Orive, L M; Edwards, R H

    1991-01-01

    A new stereological method has recently been developed to estimate the total length of a bounded curve in 3D from a sample of projections about a vertical axis. Unlike other methods based on serial section reconstructions, the new method is unbiased (i.e., it has zero systematic error). A basic requirement, not difficult to fulfill in many cases, is that the masking of one structure by another is not appreciable. The application of the new method to real curvilinear structures using a clinical magnetic resonance (MR) imager is illustrated. The first structure measured was a twisted water-filled glass tube of known length. The accuracy of the method was assessed: With six vertical projections, the tube length was measured to within 2% of the true value. The second example was a living bonsai tree, and the third was a clinical application of MR angiography. The possibility of applying the method to other scientific disciplines, for example, the monitoring of plant root growth, is discussed.

  8. Estimation of gross primary production of the Amazon-Cerrado transitional forest by remote sensing techniques

    Directory of Open Access Journals (Sweden)

    Maísa Caldas Souza

    2014-03-01

    Full Text Available The gross primary production (GPP) of ecosystems is an important variable in the study of global climate change. GPP has generally been estimated by micrometeorological techniques; however, these have a high cost of installation and maintenance, making orbital sensor data an option worth evaluating. Thus, the objective of this study was to evaluate the potential of the MODIS (Moderate Resolution Imaging Spectroradiometer) MOD17A2 product and the Vegetation Photosynthesis Model (VPM) to predict the GPP of the Amazon-Cerrado transitional forest. The GPP predicted by MOD17A2 (GPP MODIS) and by the VPM (GPP VPM) was validated against the GPP estimated by eddy covariance (GPP EC). GPP MODIS, GPP VPM, and GPP EC show similar seasonality, with higher values in the wet season and lower values in the dry season. However, the VPM performed better than MOD17A2 in estimating GPP, because it uses local climatic data to predict light use efficiency, whereas MOD17A2 uses a global circulation model and a lookup table for each vegetation type.
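    Both products are light-use-efficiency models of the general form GPP = ε × scalars × FPAR × PAR; the VPM's advantage is that its temperature and water scalars are driven by local data. A schematic sketch in that spirit — the parameter values and the exact scalar forms below are illustrative defaults, not the calibrated VPM configuration used in the study:

    ```python
    def lue_gpp(par, fpar, t_air, lswi, lswi_max, eps0=0.42,
                t_min=0.0, t_opt=25.0, t_max=45.0):
        """Light-use-efficiency GPP in the spirit of the VPM:
        GPP = eps0 * Tscalar * Wscalar * FPAR * PAR.
        eps0 and the temperature limits are illustrative, not site values."""
        # temperature limitation (TEM-style quadratic, 1.0 at t_opt)
        t_scalar = ((t_air - t_min) * (t_air - t_max)) / (
            (t_air - t_min) * (t_air - t_max) - (t_air - t_opt) ** 2)
        t_scalar = max(0.0, t_scalar)
        # water limitation from the Land Surface Water Index (LSWI)
        w_scalar = (1.0 + lswi) / (1.0 + lswi_max)
        return eps0 * t_scalar * w_scalar * fpar * par
    ```

    Feeding `t_air` and `lswi` from local measurements rather than a global circulation model is exactly the design choice the abstract credits for the VPM's better performance.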

  9. [Research Progress of Vitreous Humor Detection Technique on Estimation of Postmortem Interval].

    Science.gov (United States)

    Duan, W C; Lan, L M; Guo, Y D; Zha, L; Yan, J; Ding, Y J; Cai, J F

    2018-02-01

    Estimation of the postmortem interval (PMI) plays a crucial role in forensic investigation and identification work. Because of its unique anatomical location, vitreous humor is considered a candidate medium for estimating PMI; this has aroused interest among scholars, and a body of research has been carried out. Detection techniques for vitreous humor are constantly being developed and improved and have gradually been applied in forensic science, and the study of PMI estimation using vitreous humor is advancing rapidly. This paper reviews the various techniques and instruments applied to vitreous humor analysis, such as ion-selective electrodes, capillary ion analysis, spectroscopy, chromatography, nano-sensing technology, automatic biochemical analysers, and flow cytometers, as well as related progress on PMI estimation in recent years. Some open problems are also analysed, in order to provide research directions and to promote more accurate and efficient PMI estimation by vitreous humor analysis. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  10. Comparison of process estimation techniques for on-line calibration monitoring

    International Nuclear Information System (INIS)

    Shumaker, B. D.; Hashemian, H. M.; Morton, G. W.

    2006-01-01

    The goal of on-line calibration monitoring is to reduce the number of unnecessary calibrations performed each refueling cycle on pressure, level, and flow transmitters in nuclear power plants. The effort requires a baseline for determining calibration drift, and thereby the need for a calibration. There are two ways to establish the baseline: averaging and modeling. Averaging techniques have proven highly successful in applications with a large number of redundant transmitters, but for systems with little or no redundancy they are not always reliable; for non-redundant transmitters, more sophisticated process estimation techniques are needed to augment or replace them. This paper explores three well-known process estimation techniques, namely Independent Component Analysis (ICA), Auto-Associative Neural Networks (AANN), and Auto-Associative Kernel Regression (AAKR). Using experience and data from an operating nuclear plant, the paper presents an evaluation of the effectiveness of these methods in detecting transmitter drift under actual plant conditions. (authors)
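    Of the three, AAKR is the simplest to state: the "corrected" observation is a kernel-weighted average of fault-free historical vectors, so a drifting channel is pulled back toward historically consistent values. A minimal sketch (Gaussian kernel and Euclidean distance are the common textbook choices; bandwidth selection is omitted):

    ```python
    import numpy as np

    def aakr_predict(memory, query, bandwidth=1.0):
        """Auto-Associative Kernel Regression: return the Gaussian-kernel
        weighted average of the memory (fault-free training) vectors that
        best matches the query observation."""
        memory = np.asarray(memory, dtype=float)
        query = np.asarray(query, dtype=float)
        d2 = ((memory - query) ** 2).sum(axis=1)          # squared distances
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian weights
        w /= w.sum()
        return w @ memory                                  # corrected estimate
    ```

    The drift statistic is then simply the residual between the query and the AAKR estimate, channel by channel; a persistently biased residual flags a transmitter that needs calibration.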

  11. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required size and must be selected using an appropriate probability sampling technique, as there are many hidden biases that can adversely affect the outcome of the study. Important factors to consider when estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy). The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results can be generalized to the target population.
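    The factors listed above combine in the standard (Cochran) sample-size formula for a proportion, n = z²·p·(1−p)/d², optionally corrected for a finite population. A small sketch:

    ```python
    import math

    def sample_size_proportion(p, margin, confidence_z=1.96, population=None):
        """Cochran's sample-size formula for estimating a proportion:
        n = z^2 * p * (1 - p) / d^2, with an optional finite-population
        correction when the study population size is known."""
        n = confidence_z ** 2 * p * (1.0 - p) / margin ** 2
        if population is not None:
            n = n / (1.0 + (n - 1.0) / population)   # finite population correction
        return math.ceil(n)
    ```

    Halving the margin of error quadruples the required sample size, which is the quantitative content of "the more the precision required, the greater the sample size".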

  12. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, the Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative
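    The second VRT described above, following only a fraction of the optical photons while reweighting the survivors so the signal mean is preserved, can be illustrated with a toy detector model. This is a hedged sketch with an assumed detection probability, not the authors' fastdetect2/hybridmantis code:

```python
import random

def detected_signal_analog(n_photons, p_detect, rng):
    """Analog transport: follow every optical photon, each with weight 1."""
    return sum(1.0 for _ in range(n_photons) if rng.random() < p_detect)

def detected_signal_vrt(n_photons, p_detect, follow_fraction, rng):
    """Follow only `follow_fraction` of the photons; each survivor carries
    weight 1/follow_fraction so the expected signal is unchanged."""
    weight = 1.0 / follow_fraction
    signal = 0.0
    for _ in range(n_photons):
        if rng.random() >= follow_fraction:
            continue            # photon discarded (Russian roulette)
        if rng.random() < p_detect:
            signal += weight    # detected, with compensating weight
    return signal

rng = random.Random(0)
analog = detected_signal_analog(100_000, 0.3, rng)
vrt = detected_signal_vrt(100_000, 0.3, 0.1, rng)
print(analog, vrt)  # both estimate ~30,000 units of detected-photon weight
```

The VRT run transports roughly a tenth of the photons, so it is faster, but its estimate has a larger variance; this is exactly the efficiency trade-off the abstract quantifies through the variance of the Swank estimate.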

  13. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique

    International Nuclear Information System (INIS)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Levin, N W; Leonard, E F

    2008-01-01

    Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate normal hydration states (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and the ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis, so that at DW little further change is observed, and (2) normalized resistivity is in the range observed in healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DW(Clin)), the target post-dialysis weight was gradually decreased in the course of several treatments until the two dry weight criteria outlined above were met (DW(cBIS)). Post-dialysis weight was reduced from 78.3 ± 28 to 77.1 ± 27 kg, with a corresponding increase in normalized resistivity (in units of 10⁻² Ω m³ kg⁻¹); the residual change in the resistance curve at DW(cBIS) was 0.3 ± 0.2%. The results indicate that cBIS, utilizing a dynamic technique continuously during dialysis, is an accurate and precise approach to specific end points for the estimation of body hydration status. Since no current techniques have been developed to detect DW as precisely, it is suggested as a standard to be evaluated clinically

  14. Comparison of single-spot technique and RGB imaging for erythema index estimation.

    Science.gov (United States)

    Saknite, I; Zavorins, A; Jakovels, D; Spigulis, J; Kisis, J

    2016-03-01

    A commercially available point measurement device, the Mexameter(®), and an experimental RGB imaging prototype device were used for erythema index estimation in 50 rosacea patients by analysing the level of skin redness on the forehead, both cheeks and both sides of the nose. Results are compared with Clinician's Erythema Assessment (CEA) values given by two dermatologists. The Mexameter uses 568 nm and 660 nm LEDs and a photodetector for estimation of the erythema index, while the prototype device acquired RGB images under 460 nm, 530 nm and 665 nm LED illumination. Several erythema index estimation algorithms were compared to determine which one gives the best contrast between increased erythema and normal skin. The erythema index estimates and CEA values correlated much better for the RGB imaging data than for those obtained by the conventional Mexameter technique that is widely used by dermatologists and in clinical trials. As a result, we propose an erythema index estimation approach that represents increased erythema with higher accuracy than other available methods.

  15. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander

    2016-10-25

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do this by maximizing the joint log-likelihood function, which is a non-convex and non-linear problem. To overcome the cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within the ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
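    The log-likelihood evaluation at the heart of each optimization step can be sketched densely at toy scale (no ℋ-matrix compression; an exponential covariance function is assumed purely for illustration). The Cholesky factor gives both log det C and the quadratic form y^T C^{-1} y:

```python
import math

def exp_cov(xs, variance, length):
    """Exponential covariance matrix C_ij = variance * exp(-|x_i - x_j| / length)."""
    return [[variance * math.exp(-abs(a - b) / length) for b in xs] for a in xs]

def cholesky(C):
    """Dense Cholesky factorization C = L L^T (lower-triangular L)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]
    return L

def neg_log_likelihood(xs, ys, variance, length):
    """-log p(y | variance, length) for a zero-mean Gaussian process."""
    n = len(xs)
    L = cholesky(exp_cov(xs, variance, length))
    # forward-substitute L z = y, so that y^T C^-1 y = z^T z
    z = []
    for i in range(n):
        z.append((ys[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))  # log det C
    return 0.5 * (logdet + sum(v * v for v in z) + n * math.log(2 * math.pi))

xs = [0.0, 0.5, 1.0, 2.0]
ys = [0.3, 0.4, 0.1, -0.2]
print(neg_log_likelihood(xs, ys, variance=1.0, length=1.0))
```

An optimizer would minimize this function over (variance, length); the paper's contribution is making each such evaluation log-linear instead of cubic by keeping C, its determinant and its Cholesky factor in ℋ-matrix format.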

  16. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections

    International Nuclear Information System (INIS)

    Zhang, You; Yin, Fang-Fang; Ren, Lei; Segars, W. Paul

    2013-01-01

    Purpose: To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Methods: Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from the planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on the data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from the planning 4D-CT to the onboard volume, including changes of respiration amplitude, lesion size and lesion average position, and phase shift between the lesion and body respiratory cycle. The lesions were contoured in both the estimated and “ground-truth” onboard 4D-CBCT for comparison. 3D volume percentage difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. Results: For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in the prior images and “ground-truth” onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). Using an orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion

  17. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique.

    Science.gov (United States)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Leonard, E F; Levin, N W

    2008-06-01

    Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate normal hydration states (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and the ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis, so that at DW little further change is observed, and (2) normalized resistivity is in the range observed in healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DW(Clin)), the target post-dialysis weight was gradually decreased in the course of several treatments until the two dry weight criteria outlined above were met (DW(cBIS)). Post-dialysis weight was reduced from 78.3 +/- 28 to 77.1 +/- 27 kg. The results indicate that cBIS, applied continuously during dialysis, is an accurate and precise approach to specific end points for the estimation of body hydration status. Since no current techniques have been developed to detect DW as precisely, it is suggested as a standard to be evaluated clinically.

  18. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    Science.gov (United States)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-01

    The present study was conducted to investigate the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were examined using freshly prepared stomachs ligatured at the tips of a burette, where the maximum amount of distilled water held by the stomach was measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes were fed to satiation and imaged at different times since feeding. All the experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, so the gastric emptying time of the scat could be estimated. The qualitative X-radiography observations of gastric motility showed that a fish of 200 g fed to a maximum satiation meal (circa 11 g) completely emptied its stomach within 30-36 hrs. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of the spotted scat in captivity.
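    The allometric model quoted above can be evaluated directly. The units are assumptions here (W in grams, volume in millilitres), since the abstract does not state them explicitly:

```python
def stomach_volume(weight):
    """Allometric stomach-volume model quoted in the abstract:
    V = 0.0000089 * W**2.93 (units assumed: W in grams, V in millilitres)."""
    return 0.0000089 * weight ** 2.93

# Estimated stomach volume of a 200 g fish under these assumed units:
print(round(stomach_volume(200.0), 1))  # about 49.1
```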

  19. Performance Comparison of Adaptive Estimation Techniques for Power System Small-Signal Stability Assessment

    Directory of Open Access Journals (Sweden)

    E. A. Feilat

    2010-12-01

    Full Text Available This paper demonstrates the assessment of the small-signal stability of a single-machine infinite-bus power system under widely varying loading conditions using the concept of synchronizing and damping torque coefficients. The coefficients are calculated from the time responses of the rotor angle, speed, and torque of the synchronous generator. Three adaptive computation algorithms, namely Kalman filtering, Adaline, and recursive least squares, have been compared for estimating the synchronizing and damping torque coefficients. The steady-state performance of the three adaptive techniques is compared with that of the conventional static least squares technique through computer simulations at different loading conditions. The algorithms are compared to each other in terms of speed of convergence and accuracy. Recursive least squares estimation offers several advantages, including a significant reduction in computing time and computational complexity. The tendency of an unsupplemented static exciter to degrade the system damping for medium and heavy loading is verified. Consequently, a power system stabilizer whose parameters are adjusted to compensate for variations in the system loading is designed using the phase compensation method. The effectiveness of the stabilizer in enhancing the dynamic stability over a wide range of operating conditions is verified through the calculation of the synchronizing and damping torque coefficients using the recursive least squares technique.
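    A minimal sketch of recursive least squares for the torque decomposition behind the synchronizing/damping coefficients, i.e. fitting ΔTe = Ks·Δδ + Kd·Δω from time-series samples. The synthetic data and coefficient values below are illustrative assumptions, not the paper's machine model:

```python
import random

def rls_fit(samples, lam=1.0, delta=1000.0):
    """Recursive least squares for y = ks * x1 + kd * x2.
    samples: iterable of (x1, x2, y); lam: forgetting factor."""
    theta = [0.0, 0.0]                      # [ks, kd] estimates
    P = [[delta, 0.0], [0.0, delta]]        # inverse-correlation matrix
    for x1, x2, y in samples:
        x = (x1, x2)
        Px = [P[0][0] * x[0] + P[0][1] * x[1],
              P[1][0] * x[0] + P[1][1] * x[1]]
        denom = lam + x[0] * Px[0] + x[1] * Px[1]
        k = [Px[0] / denom, Px[1] / denom]  # gain vector
        err = y - (theta[0] * x[0] + theta[1] * x[1])
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        P = [[(P[0][0] - k[0] * Px[0]) / lam, (P[0][1] - k[0] * Px[1]) / lam],
             [(P[1][0] - k[1] * Px[0]) / lam, (P[1][1] - k[1] * Px[1]) / lam]]
    return theta

# Synthetic rotor-angle/speed deviations with assumed coefficients ks=1.2, kd=8.0
rng = random.Random(1)
data = []
for _ in range(500):
    d_delta, d_omega = rng.uniform(-1, 1), rng.uniform(-0.1, 0.1)
    torque = 1.2 * d_delta + 8.0 * d_omega + rng.gauss(0, 0.01)
    data.append((d_delta, d_omega, torque))
ks, kd = rls_fit(data)
print(ks, kd)  # close to the assumed 1.2 and 8.0
```

A forgetting factor lam slightly below 1 would let the estimator track slowly varying coefficients, which is the property the paper exploits under changing loading conditions.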

  20. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    Science.gov (United States)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently been possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separation. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separations up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the current density evaluation.
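    The curlometer idea can be sketched for the linear-field case it assumes: fit a linearly varying field to four-point measurements, take the curl, and divide by the vacuum permeability. The spacecraft positions and the test field below are synthetic assumptions, not mission data:

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability (T m / A)

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def curlometer(positions, fields):
    """Current density from four-point magnetic measurements, assuming the
    field varies linearly in space. positions, fields: four 3-vectors each."""
    r0, b0 = positions[0], fields[0]
    dR = [[p[i] - r0[i] for i in range(3)] for p in positions[1:]]
    # Estimate the gradient of each field component: dB_c = dR . grad(B_c)
    grad = [solve3(dR, [f[c] - b0[c] for f in fields[1:]]) for c in range(3)]
    curl = [grad[2][1] - grad[1][2],
            grad[0][2] - grad[2][0],
            grad[1][0] - grad[0][1]]
    return [c / MU0 for c in curl]  # J = (curl B) / mu0

# Linear test field B = (0, a*x, 0) with a = 1e-9 T/m  =>  J = (0, 0, a/mu0)
a = 1e-9
pos = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 100.0, 0.0), (0.0, 0.0, 100.0)]
flds = [(0.0, a * x, 0.0) for x, y, z in pos]
j = curlometer(pos, flds)
print(j)  # z-component ~7.96e-4 A/m^2, x and y components zero
```

On a truly linear field the estimate is exact; the paper's point is that in a turbulent field this linearity assumption fails once the tetrahedron size approaches the scale of the field fluctuations.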

  1. Efficiency of wear and decalcification technique for estimating the age of estuarine dolphin Sotalia guianensis.

    Science.gov (United States)

    Sydney, Nicolle V; Monteiro-Filho, Emygdio L A

    2011-03-01

    Most techniques used for estimating the age of Sotalia guianensis (van Beneden, 1864) (Cetacea; Delphinidae) are very expensive and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was to test a more affordable and much simpler method, involving manual wear of the teeth followed by decalcification and observation under a stereomicroscope. This technique has been employed successfully with larger species of Odontoceti. Twenty-six specimens were selected, and one tooth of each specimen was worn and demineralized for reading of the growth layers. Growth layers were evident in all specimens; however, in 4 of the 26 teeth, not all the layers could be clearly observed. In these teeth, there was a significant decrease in growth layer group thickness, thus hindering the layer count. The juxtaposition of layers hindered the reading of larger numbers of layers by the wear and decalcification technique, and analysis of more than 17 layers in a single tooth proved inconclusive. The method applied here proved to be efficient in estimating the age of Sotalia guianensis individuals younger than 18 years. This method could simplify the study of the age structure of the overall population, and allows the use of the more expensive methodologies to be confined to more specific studies of older specimens. It also enables the classification of the calf, young and adult classes, which is important for general population studies.

  2. Signal Processing of Ground Penetrating Radar Using Spectral Estimation Techniques to Estimate the Position of Buried Targets

    Directory of Open Access Journals (Sweden)

    Shanker Man Shrestha

    2003-11-01

    Full Text Available Super-resolution is very important for the signal processing of GPR (ground-penetrating radar) to resolve closely buried targets. However, it is not easy to achieve high resolution, as GPR signals are very weak and enveloped by noise. The MUSIC (multiple signal classification) algorithm, which is well known for its super-resolution capability, has been implemented for signal and image processing of GPR. In addition, a conventional spectral estimation technique, the FFT (fast Fourier transform), has also been implemented for high-precision received signal levels. In this paper, we propose the CPM (combined processing method), which combines the time-domain response of the MUSIC algorithm with the conventional IFFT (inverse fast Fourier transform) to obtain super-resolution and a high-precision signal level. In order to support the proposal, a detailed simulation was performed analyzing the SNR (signal-to-noise ratio). Moreover, a field experiment at a research field and a laboratory experiment at the University of Electro-Communications, Tokyo, were also performed for thorough investigation and supported the proposed method. All the simulation and experimental results are presented.

  3. Use of the forced-oscillation technique to estimate spirometry values

    Directory of Open Access Journals (Sweden)

    Yamamoto S

    2017-10-01

    Full Text Available Shoichiro Yamamoto,1 Seigo Miyoshi,1 Hitoshi Katayama,1 Mikio Okazaki,2 Hisayuki Shigematsu,2 Yoshifumi Sano,2 Minoru Matsubara,3 Naohiko Hamaguchi,1 Takafumi Okura,1 Jitsuo Higaki1 1Department of Cardiology, Pulmonology, Hypertension, and Nephrology, 2Department of Cardiovascular and Thoracic Surgery, Ehime University Graduate School of Medicine, Toon, 3Department of Internal Medicine, Sumitomo Besshi Hospital, Niihama, Japan Purpose: Spirometry is sometimes difficult to perform in elderly patients and in those with severe respiratory distress. The forced-oscillation technique (FOT) is a simple and noninvasive method of measuring respiratory impedance. The aim of this study was to determine if FOT data reflect spirometric indices. Patients and methods: Patients underwent both FOT and spirometry procedures prior to inclusion in development (n=1,089) and validation (n=552) studies. Multivariate linear regression analysis was performed to identify FOT parameters predictive of vital capacity (VC), forced VC (FVC), and forced expiratory volume in 1 second (FEV1). A regression equation was used to calculate estimated VC, FVC, and FEV1. We then determined whether the estimated data reflected the spirometric indices. Agreement between actual and estimated spirometry data was assessed by Bland–Altman analysis. Results: Significant correlations were observed between actual and estimated VC, FVC, and FEV1 values (all r>0.8 and P<0.001). These results were deemed robust by a separate validation study (all r>0.8 and P<0.001). Bias between the actual and estimated data for VC, FVC, and FEV1 in the development study was 0.007 L (95% limits of agreement [LOA] 0.907 and -0.893 L), -0.064 L (95% LOA 0.843 and -0.971 L), and -0.039 L (95% LOA 0.735 and -0.814 L), respectively. On the other hand, bias between the actual and estimated data for VC, FVC, and FEV1 in the validation study was -0.201 L (95% LOA 0.62 and -1.022 L), -0.262 L (95% LOA 0.582 and -1.106 L), and
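    The Bland–Altman bias and 95% limits of agreement used in this study can be computed directly from paired measurements. The vital-capacity values below are toy data for illustration, not the study's measurements:

```python
import statistics

def bland_altman(actual, estimated):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [e - a for a, e in zip(actual, estimated)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias + 1.96 * sd, bias - 1.96 * sd  # bias, upper LOA, lower LOA

# Toy vital-capacity data (litres), illustrative only
actual    = [3.2, 2.8, 4.1, 3.6, 2.9, 3.3]
estimated = [3.3, 2.7, 4.3, 3.5, 3.0, 3.2]
bias, upper, lower = bland_altman(actual, estimated)
print(bias, upper, lower)
```

A small bias with narrow limits of agreement is what would justify using the FOT-derived estimates in place of spirometry.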

  4. Estimation of endogenous faecal calcium in buffalo (BOS bubalis) by isotope dilution technique

    International Nuclear Information System (INIS)

    Singh, S.; Sareen, V.K.; Marwaha, S.R.; Sekhon, B.; Bhatia, I.S.

    1973-01-01

    Detailed investigations on the isotope-dilution technique for the estimation of endogenous faecal calcium were conducted with buffalo calves fed on a growing ration. The ration consisted of wheat straw, green lucerne and a concentrate mix. The endogenous faecal calcium was 3.71 g/day, which is 17.8 percent of the total faecal calcium. The apparent and true digestibilities of Ca were calculated as 51 and 60 percent, respectively. Endogenous faecal calcium can be estimated in buffalo calves by giving a single subcutaneous injection of 45Ca and collecting blood samples on the 12th and 21st days only, together with a representative sample from the faeces collected from the 13th through the 22nd day after the injection. (author)
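    The reported figures can be cross-checked with the standard digestibility definitions. The intake below is a back-calculation for illustration, not a value reported in the abstract:

```python
# Consistency check of the reported figures using the standard definitions:
#   apparent digestibility = (intake - faecal) / intake
#   true digestibility     = (intake - (faecal - endogenous)) / intake
# so true - apparent = endogenous / intake.
endogenous = 3.71                 # g/day, endogenous faecal Ca (reported)
apparent_dig, true_dig = 0.51, 0.60  # reported digestibilities

intake = endogenous / (true_dig - apparent_dig)  # implied Ca intake
faecal = (1 - apparent_dig) * intake             # implied total faecal Ca
print(round(intake, 1), round(faecal, 1), round(endogenous / faecal, 3))
# -> 41.2 20.2 0.184  (endogenous fraction close to the reported 17.8%)
```

The implied endogenous fraction of about 18% agrees with the 17.8 percent stated in the abstract, so the reported numbers are internally consistent.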

  5. Signal Subspace Smoothing Technique for Time Delay Estimation Using MUSIC Algorithm.

    Science.gov (United States)

    Sun, Meng; Wang, Yide; Le Bastard, Cédric; Pan, Jingjing; Ding, Yuehua

    2017-12-10

    In civil engineering, Time Delay Estimation (TDE) is one of the most important tasks for evaluating the structure and quality of media. In this paper, the MUSIC algorithm is applied to estimate the time delay. In practice, the backscattered echoes are highly correlated (even coherent). In order to apply the MUSIC algorithm, an adaptation of signal subspace smoothing is proposed to decorrelate the echoes. Unlike the conventional sub-band averaging techniques, we propose to directly use the signal subspace, which takes full advantage of the signal subspace and reduces the influence of noise. Moreover, the proposed method is adapted to deal with any radar pulse shape. The proposed method is tested on both numerical and experimental data, and both results show its effectiveness.
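    The core MUSIC idea for time delay estimation can be sketched for the simplest single-echo case: estimate the signal eigenvector of the frequency-domain sample covariance (here by power iteration), then scan a delay grid for the steering vector most orthogonal to the noise subspace. The frequencies, delay, and noise levels are synthetic assumptions; this sketch omits the paper's signal subspace smoothing, which handles multiple coherent echoes:

```python
import cmath, random

def steering(freqs, tau):
    return [cmath.exp(-2j * cmath.pi * f * tau) for f in freqs]

def music_delay(snapshots, freqs, taus):
    """Single-echo MUSIC: dominant (signal) eigenvector of the sample
    covariance via power iteration, then a pseudospectrum scan over taus."""
    m, n = len(freqs), len(snapshots)
    # sample covariance R = sum_x x x^H / n
    R = [[sum(x[i] * x[j].conjugate() for x in snapshots) / n
          for j in range(m)] for i in range(m)]
    v = [complex(1.0, 0.0)] * m
    for _ in range(50):                       # power iteration
        w = [sum(R[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(abs(c) ** 2 for c in w) ** 0.5
        v = [c / norm for c in w]
    best_tau, best_p = None, -1.0
    for tau in taus:
        a = steering(freqs, tau)
        proj = sum(vi.conjugate() * ai for vi, ai in zip(v, a))  # v^H a
        denom = sum(abs(ai) ** 2 for ai in a) - abs(proj) ** 2   # a^H (I - vv^H) a
        p = 1.0 / max(denom, 1e-12)           # pseudospectrum value
        if p > best_p:
            best_tau, best_p = tau, p
    return best_tau

rng = random.Random(2)
freqs = [1.0e9 + k * 0.05e9 for k in range(16)]   # 16 frequency bins (Hz)
true_tau = 3.0e-9                                  # 3 ns echo delay
snaps = []
for _ in range(40):
    amp = rng.gauss(1.0, 0.2)
    snaps.append([amp * s + complex(rng.gauss(0, 0.05), rng.gauss(0, 0.05))
                  for s in steering(freqs, true_tau)])
grid = [k * 0.05e-9 for k in range(200)]           # scan 0 to 10 ns
est = music_delay(snaps, freqs, grid)
print(est)  # ~3.0e-9
```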

  6. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large-scale state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.

  7. Accurate Bond Lengths to Hydrogen Atoms from Single-Crystal X-ray Diffraction by Including Estimated Hydrogen ADPs and Comparison to Neutron and QM/MM Benchmarks.

    Science.gov (United States)

    Dittrich, Birger; Lübben, Jens; Mebs, Stefan; Wagner, Armin; Luger, Peter; Flaig, Ralf

    2017-04-03

    Amino acid structures are an ideal test set for method-development studies in crystallography. High-resolution X-ray diffraction data for eight previously studied genetically encoded amino acids are provided, complemented by a non-standard amino acid. Structures were re-investigated to study a widely applicable treatment that permits accurate X-H bond lengths to hydrogen atoms to be obtained: this treatment combines refinement of positional hydrogen-atom parameters using aspherical scattering factors with constrained "TLS+INV" estimated hydrogen anisotropic displacement parameters (H-ADPs). Tabulated invariom scattering factors allow rapid modeling without further computations, and unconstrained Hirshfeld atom refinement provides a computationally demanding alternative when database entries are missing. Both should incorporate estimated H-ADPs, as free refinement frequently leads to over-parameterization and non-positive definite H-ADPs irrespective of the aspherical scattering model used. Using estimated H-ADPs, both methods yield accurate and precise X-H distances in best quantitative agreement with neutron diffraction data (available for five of the test-set molecules). This work thus solves the last remaining problem to obtain such results more frequently. Density functional QM/MM computations are able to play the role of an alternative benchmark to neutron diffraction. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    Science.gov (United States)

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method under typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV for individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
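    For orientation, a baseline delay-based CV estimate (not the authors' image-processing algorithm) divides the inter-electrode distance by the propagation delay found via cross-correlation between two channels. The sampling rate, electrode distance, and synthetic MUAP shape below are assumptions:

```python
import math

def xcorr_delay(sig_a, sig_b, max_lag):
    """Lag (in samples) that maximizes the cross-correlation of two signals."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(sig_a[i] * sig_b[i + lag]
                  for i in range(len(sig_a))
                  if 0 <= i + lag < len(sig_b))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

fs = 2048.0                      # sampling rate (Hz), assumed
dist = 0.01                      # inter-electrode distance (m), assumed
true_cv = 4.0                    # m/s -> delay dist/cv ~ 2.5 ms ~ 5 samples
shift = round(dist / true_cv * fs)
muap = [math.exp(-((i - 40) / 6.0) ** 2) for i in range(128)]  # synthetic MUAP
ch1 = muap
ch2 = [0.0] * shift + muap[:len(muap) - shift]   # delayed copy at electrode 2
lag = xcorr_delay(ch1, ch2, 20)
cv = dist / (lag / fs)
print(lag, cv)  # lag of 5 samples -> CV ~4.1 m/s (quantized by the sample grid)
```

The quantization of the delay to whole samples is one of the limitations that motivate sub-sample methods such as MLE and the image-processing approach of this paper.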

  9. Application of genetic algorithm (GA) technique on demand estimation of fossil fuels in Turkey

    International Nuclear Information System (INIS)

    Canyurt, Olcay Ersel; Ozturk, Harun Kemal

    2008-01-01

    The main objective is to investigate Turkey's fossil fuel demand, projections and supplies by using the structure of the Turkish industry and economic conditions. This study develops scenarios to analyze fossil fuel consumption and makes future projections based on a genetic algorithm (GA). The models, developed in nonlinear form, are applied to the coal, oil and natural gas demand of Turkey. Genetic algorithm demand estimation models (GA-DEM) are developed to estimate future coal, oil and natural gas demand values based on population, gross national product, and import and export figures. It may be concluded that the proposed models can be used as alternative solutions and estimation techniques for the future fossil fuel utilization values of any country. In the study, the coal, oil and natural gas consumption of Turkey are projected. Turkish fossil fuel demand has increased dramatically: coal, oil and natural gas consumption values are estimated to increase by factors of almost 2.82, 1.73 and 4.83, respectively, between 2000 and 2020. In the figures, GA-DEM results are compared with World Energy Council Turkish National Committee (WECTNC) projections. The observed results indicate that WECTNC overestimates the fossil fuel consumptions. (author)
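    The GA-based estimation idea can be sketched with a toy real-coded genetic algorithm fitting a hypothetical nonlinear demand model y = a·x^b by least squares. The model form, data, and GA settings are illustrative assumptions, not the paper's GA-DEM specification:

```python
import random

def fit_ga(xs, ys, generations=200, pop_size=40, rng=None):
    """Minimal real-coded GA fitting y = a * x**b by minimizing squared error."""
    rng = rng or random.Random(0)
    def sse(ind):
        a, b = ind
        return sum((a * x ** b - y) ** 2 for x, y in zip(xs, ys))
    pop = [(rng.uniform(0, 5), rng.uniform(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            w = rng.random()                    # blend crossover
            child = tuple(w * u + (1 - w) * v for u, v in zip(p1, p2))
            child = tuple(g + rng.gauss(0, 0.05) for g in child)  # mutation
            children.append(child)
        pop = parents + children                # elitism: parents survive
    best = min(pop, key=sse)
    return best, sse(best)

# Synthetic "demand" data generated from a=2.0, b=1.5 (illustrative only)
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0 * x ** 1.5 for x in xs]
best, err = fit_ga(xs, ys)
print(best, err)  # parameters near (2.0, 1.5), small residual error
```

In the paper's setting the chromosome would instead encode the coefficients of a nonlinear model in population, GNP, import and export figures, with the same evolve-and-select loop.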

  10. Validity of Three-Dimensional Photonic Scanning Technique for Estimating Percent Body Fat.

    Science.gov (United States)

    Shitara, K; Kanehisa, H; Fukunaga, T; Yanai, T; Kawakami, Y

    2013-01-01

    Three-dimensional photonic scanning (3DPS) was recently developed to measure the dimensions of the human body surface. The purpose of this study was to explore the validity of body volume measured by 3DPS for estimating percent body fat (%fat). Design, setting, participants, and measurement: Body volumes were determined by 3DPS in 52 women and corrected for residual lung volume. The %fat was estimated from body density and compared with the corresponding reference value determined by dual-energy x-ray absorptiometry (DXA). No significant difference was found between the mean %fat values obtained by 3DPS (22.2 ± 7.6%) and DXA (23.5 ± 4.9%). The root mean square error of %fat between 3DPS and the reference technique was 6.0%. For each body segment, there was a significant positive correlation between the 3DPS and DXA values, although the corresponding value for the head was slightly larger for 3DPS than for DXA. Residual lung volume was negatively correlated with the estimated error in %fat. The body volume determined with 3DPS is potentially useful for estimating %fat. A possible strategy for enhancing the measurement accuracy of %fat might be to refine the protocol for preparing the subject's hair prior to scanning and to improve the accuracy of the residual lung volume measurement.

  11. Modified Multilook Cross Correlation technique for Doppler centroid estimation in SAR image signal processing

    Science.gov (United States)

    Bee Cheng, Sew

    Synthetic Aperture Radar (SAR) is one of the most widely used remote sensing sensors; it produces high-resolution images by using advanced signal processing techniques. SAR is able to operate in all weather conditions and to cover wide areas. To produce a high-quality image, accurate parameters such as the Doppler centroid are required for precise SAR signal processing. In the azimuth matched filtering of SAR signal processing, the Doppler centroid is an important azimuth parameter that helps to focus the image pixels. The Doppler centroid has often been overlooked during SAR signal processing, because its estimation involves complicated calculation and increases the computational load. As a result, researchers have often applied only an approximate Doppler value, which is not precise and causes a defocusing effect in the generated SAR image. In this study, several conventional Doppler centroid estimation algorithms are reviewed and implemented in a Matlab software program to extract the Doppler parameter from received SAR data, namely the Spectrum Fit Algorithm, Wavelength Diversity Algorithm (WDA), Multilook Cross Correlation Algorithm (MLCC), and Multilook Beat Frequency Algorithm (MLBF). Two sets of SAR data are employed to evaluate the performance of each estimator, i.e. simulated point-target data and RADARSAT-1 Vancouver scene raw data. These experiments give a sense of the accuracy of the estimated results together with the computational time consumed. The point target is simulated to generate ideal-case SAR data with pre-defined SAR system parameters.
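    The MLCC/MLBF family builds on the average cross-correlation coefficient (ACCC) of the azimuth signal; its phase gives the fractional-PRF part of the Doppler centroid. A hedged sketch of that basic step on a synthetic azimuth signal (the PRF, centroid, and noise level are assumptions, and Doppler ambiguity resolution is omitted):

```python
import cmath, random

def accc_doppler_centroid(samples, prf):
    """Fractional-PRF Doppler centroid from the phase of the average
    cross-correlation between adjacent azimuth samples (ACCC)."""
    acc = sum(b * a.conjugate() for a, b in zip(samples, samples[1:]))
    return prf * cmath.phase(acc) / (2.0 * cmath.pi)

# Synthetic azimuth signal with a known centroid (ambiguity-free case)
prf, f_dc = 1700.0, 230.0
rng = random.Random(3)
sig = [cmath.exp(2j * cmath.pi * f_dc * n / prf)
       + complex(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for n in range(2048)]
est_fdc = accc_doppler_centroid(sig, prf)
print(est_fdc)  # ~230 Hz
```

Since the phase is only known modulo 2π, this estimator resolves the centroid modulo the PRF; MLCC and MLBF add two-look processing precisely to recover the integer ambiguity.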

  12. Estimating gypsum requirement under no-till based on machine learning technique

    Directory of Open Access Journals (Sweden)

    Alaine Margarete Guimarães

    Full Text Available Chemical stratification, including of pH, occurs under no-till systems: higher levels form from the soil surface towards the deeper layers. Subsoil acidity is a yield-limiting factor. Gypsum has been suggested when subsoil acidity limits crop root growth, i.e., when the calcium (Ca) level is low and/or the aluminum (Al) level is toxic in the subsoil layers. However, there are doubts about the most efficient methods to estimate the gypsum requirement. This study was carried out to develop numerical models to estimate the gypsum requirement of soils under a no-till system using machine learning techniques. Computational analyses of the dataset were made applying the M5'Rules algorithm, based on regression models. The dataset comprised soil chemical properties collected over eight years after application from no-till experiments in Southern Brazil that received gypsum rates on the soil surface. The results showed that the numerical models generated by the M5'Rules rule-induction algorithm were useful for estimating gypsum requirements under no-till. The models showed that Ca saturation in the effective cation exchange capacity (ECEC) was a more important attribute than Al saturation for estimating the gypsum requirement of no-till soils.
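    M5'Rules learns rules whose consequents are linear models. A heavily simplified stand-in — one hand-chosen split on Ca saturation with a linear model on each side, fitted to made-up data — illustrates the shape of such a model; the real algorithm chooses splits and prunes models automatically.

```python
import numpy as np

def fit_rule_model(ca_sat, gypsum_req, threshold):
    """Toy M5-style rule pair: one split on Ca saturation (% of ECEC),
    with a linear model on each side of the split."""
    lo, hi = ca_sat < threshold, ca_sat >= threshold
    m_lo = np.polyfit(ca_sat[lo], gypsum_req[lo], 1)
    m_hi = np.polyfit(ca_sat[hi], gypsum_req[hi], 1)
    def predict(x):
        x = np.asarray(x, dtype=float)
        return np.where(x < threshold, np.polyval(m_lo, x), np.polyval(m_hi, x))
    return predict

# hypothetical calibration: requirement falls steeply below 60% Ca saturation
ca = np.array([30.0, 40.0, 50.0, 55.0, 60.0, 65.0, 70.0, 80.0])
req = np.array([8.0, 6.0, 4.0, 3.0, 1.0, 0.8, 0.5, 0.2])   # t/ha, made up
predict = fit_rule_model(ca, req, threshold=60.0)
```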

  13. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    Science.gov (United States)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that the fourth-order polynomial interpolation provides the best fit to option prices, having the lowest error.
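    The LOOCV selection criterion can be sketched generically: refit the interpolant with each point held out and score the held-out error. The strike/implied-volatility points below are hypothetical, and plain polynomial fitting stands in for the paper's interpolation schemes.

```python
import numpy as np

def loocv_error(x, y, degree):
    """Leave-one-out cross-validated RMSE of a polynomial fit."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coef = np.polyfit(x[mask], y[mask], degree)
        errs.append(y[i] - np.polyval(coef, x[i]))
    return float(np.sqrt(np.mean(np.square(errs))))

# hypothetical volatility-smile points
strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0, 115.0, 120.0])
iv = np.array([0.32, 0.27, 0.24, 0.23, 0.24, 0.26, 0.29])
err2 = loocv_error(strikes, iv, 2)   # second-order polynomial
err4 = loocv_error(strikes, iv, 4)   # fourth-order polynomial
best = 4 if err4 < err2 else 2       # pick the interpolant with lowest LOOCV error
```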

  14. A semi-analytical method to estimate the effective slip length of spreading spherical-cap shaped droplets using Cox theory

    Science.gov (United States)

    Wörner, M.; Cai, X.; Alla, H.; Yue, P.

    2018-03-01

    The Cox–Voinov law on dynamic spreading relates the difference between the cubic values of the apparent contact angle (θ) and the equilibrium contact angle to the instantaneous contact line speed (U). Comparing spreading results with this hydrodynamic wetting theory requires accurate data of θ and U during the entire process. We consider the case when gravitational forces are negligible, so that the shape of the spreading drop can be closely approximated by a spherical cap. Using geometrical dependencies, we transform the general Cox law into a semi-analytical relation for the temporal evolution of the spreading radius. Evaluating this relation numerically shows that the spreading curve becomes independent of the gas viscosity when the latter is less than about 1% of the drop viscosity. Since inertia may invalidate the underlying assumptions in the initial stage of spreading, a quantitative criterion for the time after which the spherical-cap assumption is reasonable is derived using phase-field simulations of the spreading of partially wetting droplets. The developed theory allows us to compare experimental/computational spreading curves for spherical-cap shaped droplets with Cox theory without the need for instantaneous data of θ and U. Furthermore, the fitting of Cox theory enables us to estimate the effective slip length. This is potentially useful for establishing relationships between slip length and parameters in numerical methods for moving contact lines.
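    The Cox–Voinov relation the method builds on can be inverted for the contact-line speed in one line. In the fitting procedure described, the ratio of macroscopic length to slip length is the adjustable quantity, so varying `slip_ratio` to match a measured spreading curve is what yields the effective slip length. The fluid numbers below are illustrative, not from the paper.

```python
import numpy as np

def contact_line_speed(theta_app, theta_eq, sigma, mu, slip_ratio):
    """Cox-Voinov: theta^3 - theta_eq^3 = 9 * Ca * ln(L/lambda), Ca = mu*U/sigma.
    Returns the contact-line speed U (m/s); angles are in radians and
    slip_ratio = L / lambda (macroscopic length over slip length)."""
    ca = (theta_app**3 - theta_eq**3) / (9.0 * np.log(slip_ratio))
    return ca * sigma / mu

# illustrative water-like drop: 60 deg apparent vs 30 deg equilibrium angle
U = contact_line_speed(np.radians(60.0), np.radians(30.0),
                       sigma=0.072, mu=1.0e-3, slip_ratio=1.0e4)
```

    A smaller slip ratio (larger slip length) lowers the viscous bending term, so the same angle difference implies a faster contact line.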

  15. Biodosimetry estimation using the ratio of the longest:shortest length in the premature chromosome condensation (PCC) method applying autocapture and automatic image analysis.

    Science.gov (United States)

    González, Jorge E; Romero, Ivonne; Gregoire, Eric; Martin, Cécile; Lamadrid, Ana I; Voisin, Philippe; Barquinero, Joan-Francesc; García, Omar

    2014-09-01

    The combination of automatic image acquisition and automatic image analysis of premature chromosome condensation (PCC) spreads was tested as a rapid biodosimetry protocol. Human peripheral lymphocytes were irradiated with 60Co gamma rays at single doses of between 1 and 20 Gy, stimulated with phytohaemagglutinin and incubated for 48 h, division-blocked with Colcemid, and PCC-induced by Calyculin A. Images of chromosome spreads were captured and analysed automatically by combining the Metafer 4 and CellProfiler platforms. Automatic measurement of chromosome lengths allows calculation of the length ratio (LR) of the longest to the shortest piece, which can be used for dose estimation since this ratio is correlated with ionizing radiation dose. The LR of the longest and the shortest chromosome pieces showed the best goodness-of-fit to a linear model in the dose interval tested. The application of automatic analysis increases the potential use of the PCC method for triage in the event of mass radiation casualties. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
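    Since the length ratio is linear in dose over the tested interval, dose estimation reduces to inverting a linear calibration. The calibration points below are made up for illustration; the published dose-response values would be substituted in practice.

```python
import numpy as np

# hypothetical calibration: length ratio (LR) of the longest to shortest
# PCC piece measured at known doses (linear in this range per the study)
doses = np.array([1.0, 5.0, 10.0, 15.0, 20.0])   # Gy
lr = np.array([2.1, 4.0, 6.4, 8.8, 11.1])        # made-up LR values

slope, intercept = np.polyfit(doses, lr, 1)      # LR = slope * D + intercept

def estimate_dose(length_ratio):
    """Invert the linear calibration to triage an unknown exposure."""
    return (length_ratio - intercept) / slope
```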

  16. The evolutionary rates of HCV estimated with subtype 1a and 1b sequences over the ORF length and in different genomic regions.

    Directory of Open Access Journals (Sweden)

    Manqiong Yuan

    Full Text Available Considerable progress has been made in HCV evolutionary analysis since the software BEAST was released. However, prior information, especially the prior evolutionary rate, which plays a critical role in BEAST analysis, is always difficult to ascertain due to various uncertainties. Providing a proper prior HCV evolutionary rate is thus of great importance. 176 full-length sequences of HCV subtype 1a and 144 of 1b were assembled, taking into consideration the balance of sampling dates and even dispersion in phylogenetic trees. According to the HCV genomic organization and biological functions, each dataset was partitioned into nine genomic regions and two routinely amplified regions. A uniform prior rate was applied to the BEAST analysis for each region and also for the entire ORF. All the obtained posterior rates for 1a are of a magnitude of 10⁻³ substitutions/site/year and follow a bell-shaped distribution. Significantly lower rates were estimated for 1b, and some of the rate distribution curves were truncated on one side, particularly under the exponential model. This indicates that some of the rates for subtype 1b are less accurate, so they were adjusted by including more sequences to improve the temporal structure. Among the various HCV subtypes and genomic regions, the evolutionary patterns are dissimilar. Therefore, an applied estimation of the HCV epidemic history requires proper selection of the rate priors, which should match the actual dataset in subtype, genomic region and even length. By referencing the findings here, future evolutionary analyses of HCV subtype 1a and 1b datasets may become more accurate and hence prove useful for tracing their patterns.

  17. Estimation of the Latitude, the Gnomon's Length and Position About Sinbeop-Jipyeong-Ilgu in the Late of Joseon Dynasty

    Science.gov (United States)

    Mihn, Byeong-Hee; Lee, Yong Sam; Kim, Sang Hyuk; Choi, Won-Ho; Ham, Seon Young

    2017-06-01

    In this study, the characteristics of a horizontal sundial from the Joseon Dynasty were investigated. Korea’s Treasure No. 840 (T840) is a Western-style horizontal sundial where hour-lines and solar-term-lines are engraved. The inscription of this sundial indicates that the latitude (altitude of the north celestial pole) is 37° 39´, but the gnomon is lost. In the present study, the latitude of the sundial and the length of the gnomon were estimated based only on the hour-lines and solar-term-lines of the horizontal sundial. When statistically calculated from the convergent point obtained by extending the hour-lines, the latitude of this sundial was 37° 15´ ± 26´, which showed a 24´ difference from the record of the inscription. When it was also assumed that a convergent point is changeable, the estimation of the sundial’s latitude was found to be sensitive to the variation of this point. This study found that T840 used a vertical gnomon, that is, perpendicular to the horizontal plane, rather than an inclined triangular gnomon, and a horn-shaped mark like a vertical gnomon is cut on its surface. The length of the gnomon engraved on the artifact was 43.1 mm, and in the present study was statistically calculated as 43.7 ± 0.7 mm. In addition, the position of the gnomon according to the original inscription and our calculation showed an error of 0.3 mm.

  18. Estimation of the Latitude, the Gnomon’s Length and Position About Sinbeop-Jipyeong-Ilgu in the Late of Joseon Dynasty

    Directory of Open Access Journals (Sweden)

    Byeong-Hee Mihn

    2017-06-01

    Full Text Available In this study, the characteristics of a horizontal sundial from the Joseon Dynasty were investigated. Korea’s Treasure No. 840 (T840) is a Western-style horizontal sundial where hour-lines and solar-term-lines are engraved. The inscription of this sundial indicates that the latitude (altitude of the north celestial pole) is 37° 39´, but the gnomon is lost. In the present study, the latitude of the sundial and the length of the gnomon were estimated based only on the hour-lines and solar-term-lines of the horizontal sundial. When statistically calculated from the convergent point obtained by extending the hour-lines, the latitude of this sundial was 37° 15´ ± 26´, which showed a 24´ difference from the record of the inscription. When it was also assumed that a convergent point is changeable, the estimation of the sundial’s latitude was found to be sensitive to the variation of this point. This study found that T840 used a vertical gnomon, that is, perpendicular to the horizontal plane, rather than an inclined triangular gnomon, and a horn-shaped mark like a vertical gnomon is cut on its surface. The length of the gnomon engraved on the artifact was 43.1 mm, and in the present study was statistically calculated as 43.7 ± 0.7 mm. In addition, the position of the gnomon according to the original inscription and our calculation showed an error of 0.3 mm.
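    The hour-line geometry of a horizontal sundial obeys tan θ_h = sin φ · tan H, where H is the hour angle (15° per hour from noon), so the latitude can be recovered from the engraved hour-line angles by least squares. This is a simplified version of the convergent-point analysis in the paper; the check below regenerates ideal angles at the inscribed latitude 37° 39´ (= 37.65°).

```python
import numpy as np

def latitude_from_hour_lines(hours_from_noon, line_angles_deg):
    """Least-squares latitude estimate from horizontal-sundial hour lines:
    tan(theta_h) = sin(phi) * tan(15 deg * h)."""
    H = np.radians(15.0 * np.asarray(hours_from_noon, dtype=float))
    t_theta = np.tan(np.radians(line_angles_deg))
    t_H = np.tan(H)
    sin_phi = np.sum(t_theta * t_H) / np.sum(t_H * t_H)
    return np.degrees(np.arcsin(sin_phi))

# synthetic check: hour lines generated at latitude 37.65 deg
hours = np.array([1, 2, 3, 4, 5])
angles = np.degrees(np.arctan(np.sin(np.radians(37.65))
                              * np.tan(np.radians(15.0 * hours))))
phi = latitude_from_hour_lines(hours, angles)
```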

  19. Field Application of Cable Tension Estimation Technique Using the h-SI Method

    Directory of Open Access Journals (Sweden)

    Myung-Hyun Noh

    2015-01-01

    Full Text Available This paper investigates the field applicability of a new system identification technique for estimating the tensile force of cables in long-span bridges. The newly proposed h-SI method, combining a sensitivity-updating algorithm with an advanced hybrid micro-genetic algorithm, not only avoids the trap of local minima at the initial searching stage but also finds the optimal solution with better numerical efficiency than existing methods. First, this paper overviews the procedure of tension estimation through a theoretical formulation. Secondly, the validity of the proposed technique is numerically examined using a set of dynamic data obtained from benchmark numerical samples considering the effects of sag extensibility and bending stiffness of a sag-cable system. Finally, the feasibility of the proposed method is investigated through actual field data extracted from the cable-stayed Seohae Bridge. The test results show that the existing methods require precise initial data in advance, whereas the proposed method is not affected by such initial information. In particular, the proposed method can improve accuracy and the convergence rate toward final values. Consequently, the proposed method can be more effective than existing methods in characterizing the tensile force variation of cable structures.
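    For context, the flat taut-string formula gives the zeroth-order tension estimate that sag-extensibility and bending-stiffness corrections (and identification methods such as h-SI) refine; the cable properties below are illustrative.

```python
def cable_tension_taut_string(mass_per_length, span, freq_n, n):
    """Flat taut-string estimate T = 4 * m * L^2 * (f_n / n)^2, valid when
    sag extensibility and bending stiffness are negligible. Often used as
    the initial guess a refined identification starts from."""
    return 4.0 * mass_per_length * span**2 * (freq_n / n) ** 2

# illustrative stay cable: 80 kg/m, 120 m span, first-mode frequency 1.1 Hz
T = cable_tension_taut_string(80.0, 120.0, 1.1, 1)   # tension in N
```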

  20. TECHNIQUE AND DEVICE FOR THE EXPERIMENTAL ESTIMATION OF THE ACOUSTIC IMPEDANCE OF VISCOELASTIC MEDIUM

    Directory of Open Access Journals (Sweden)

    O. V. Murav’eva

    2017-01-01

    Full Text Available Measuring the characteristics of process fluids allows us to evaluate their quality; measuring those of biological tissues allows us to differentiate healthy tissues from tissues with pathologies. One of the complex acoustic parameters is the impedance, which allows one to fully evaluate the characteristics of viscoelastic media. Most impedance measurement methods require the use of two or more reference media and the availability of calibrated acoustic transducers. The aim of this work was to introduce a method and a device for the experimental evaluation of the longitudinal and shear impedance of viscoelastic media, based on measuring the parameters of the amplitude-frequency characteristics and calculating the elements of the equivalent electric circuit of a piezoelectric element vibrating in the test medium. The suggested method allows measuring longitudinal and shear impedances and determining the velocities of longitudinal and transverse ultrasonic waves and the values of the elastic moduli of viscoelastic media, including those in various aggregate states. The technique is fairly simple to implement and can be reproduced using simple laboratory equipment. The obtained values of the acoustic impedances of the investigated media are in satisfactory agreement with reference data. In contrast to known methods for determining the acoustic impedance, the developed technique allows estimating with sufficient accuracy the shear impedance of viscoelastic media, which is difficult to measure at megahertz frequencies, determines the shear modulus of the material and characterizes its resistance to shear deformation. The results of
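    For a lossless isotropic idealization, the impedances and moduli the method targets follow directly from density and wave speeds; the numbers below are illustrative soft-tissue-like values, not measurements from the study.

```python
def acoustic_properties(density, c_long, c_shear):
    """Characteristic impedances and elastic moduli of an isotropic elastic
    medium from measured wave speeds (lossless idealization; true viscoelastic
    impedances are complex-valued)."""
    z_long = density * c_long                     # longitudinal impedance, Pa*s/m
    z_shear = density * c_shear                   # shear impedance
    shear_modulus = density * c_shear**2          # G = rho * cT^2
    longitudinal_modulus = density * c_long**2    # M = K + 4G/3 = rho * cL^2
    return z_long, z_shear, shear_modulus, longitudinal_modulus

# illustrative soft-tissue-like medium: 1050 kg/m^3, cL = 1540 m/s, cT = 3 m/s
zl, zs, g, m = acoustic_properties(1050.0, 1540.0, 3.0)
```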

  1. Estimating the settling velocity of bioclastic sediment using common grain-size analysis techniques

    Science.gov (United States)

    Cuttler, Michael V. W.; Lowe, Ryan J.; Falter, James L.; Buscombe, Daniel D.

    2017-01-01

    Most techniques for estimating settling velocities of natural particles have been developed for siliciclastic sediments. Therefore, to understand how these techniques apply to bioclastic environments, measured settling velocities of bioclastic sedimentary deposits sampled from a nearshore fringing reef in Western Australia were compared with settling velocities calculated using results from several common grain-size analysis techniques (sieve, laser diffraction and image analysis) and established models. The effects of sediment density and shape were also examined using a range of density values and three different models of settling velocity. Sediment density was found to have a significant effect on calculated settling velocity, causing a range in normalized root-mean-square error of up to 28%, depending upon settling velocity model and grain-size method. Accounting for particle shape reduced errors in predicted settling velocity by 3% to 6% and removed any velocity-dependent bias, which is particularly important for the fastest settling fractions. When shape was accounted for and measured density was used, normalized root-mean-square errors were 4%, 10% and 18% for laser diffraction, sieve and image analysis, respectively. The results of this study show that established models of settling velocity that account for particle shape can be used to estimate settling velocity of irregularly shaped, sand-sized bioclastic sediments from sieve, laser diffraction, or image analysis-derived measures of grain size with a limited amount of error. Collectively, these findings will allow for grain-size data measured with different methods to be accurately converted to settling velocity for comparison. This will facilitate greater understanding of the hydraulic properties of bioclastic sediment which can help to increase our general knowledge of sediment dynamics in these environments.
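    One established shape-aware settling model of the kind evaluated here is the Ferguson and Church (2004) equation, which reduces to Stokes' law for fine grains and a constant-drag law for coarse ones; the constants c1 and c2 encode particle shape.

```python
def settling_velocity(d, rho_s=2650.0, rho_f=1000.0, nu=1.0e-6,
                      g=9.81, c1=18.0, c2=1.0):
    """Ferguson & Church (2004): w = R*g*d^2 / (c1*nu + sqrt(0.75*c2*R*g*d^3)).
    c2 ~ 0.4 for smooth spheres, ~1.0 for natural sand, ~1.2 for very angular
    grains; bioclastic particles call for measured density and shape values."""
    R = (rho_s - rho_f) / rho_f   # submerged specific gravity
    return R * g * d**2 / (c1 * nu + (0.75 * c2 * R * g * d**3) ** 0.5)

w_fine = settling_velocity(100e-6)   # 100 micron grain, m/s
w_coarse = settling_velocity(1e-3)   # 1 mm grain, m/s
```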

  2. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    Science.gov (United States)

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
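    The RBN logic can be sketched as: predict the expected score from demographics plus estimated premorbid IQ, then express the observed score as a z-score of the residual. All coefficients, predictors, and the RMSE below are hypothetical.

```python
import numpy as np

def rbn_z_score(observed, predictors, coef, intercept, rmse):
    """Regression-based norm: expected score is a linear function of the
    predictor variables; performance is the standardized residual."""
    expected = intercept + float(np.dot(coef, predictors))
    return (observed - expected) / rmse

# hypothetical model: predictors = [age, years_education, estimated_IQ]
coef = np.array([-0.15, 0.40, 0.20])   # made-up regression weights
z = rbn_z_score(observed=52.0,
                predictors=np.array([65.0, 12.0, 105.0]),
                coef=coef, intercept=25.0, rmse=6.0)
```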

  3. Identification of the Rayleigh surface waves for estimation of viscoelasticity using the surface wave elastography technique.

    Science.gov (United States)

    Zhang, Xiaoming

    2016-11-01

    The purpose of this Letter to the Editor is to demonstrate an effective method for estimating viscoelasticity based on measurements of the Rayleigh surface wave speed. It is important to identify the surface wave mode for measuring surface wave speed. A concept of start frequency of surface waves is proposed. The surface wave speeds above the start frequency should be used to estimate the viscoelasticity of tissue. The motivation was to develop a noninvasive surface wave elastography (SWE) technique for assessing skin disease by measuring skin viscoelastic properties. Using an optical based SWE system, the author generated a local harmonic vibration on the surface of phantom using an electromechanical shaker and measured the resulting surface waves on the phantom using an optical vibrometer system. The surface wave speed was measured using a phase gradient method. It was shown that different standing wave modes were generated below the start frequency because of wave reflection. However, the pure symmetric surface waves were generated from the excitation above the start frequency. Using the wave speed dispersion above the start frequency, the viscoelasticity of the phantom can be correctly estimated.
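    The phase gradient method reduces to fitting phase against distance and converting the slope to a speed; the frequency, positions, and speed in the synthetic check below are made-up values.

```python
import numpy as np

def surface_wave_speed(freq_hz, positions_m, phases_rad):
    """Phase-gradient estimate: fit phase vs. distance and use
    c = -omega / (dphi/dr); phase lags with distance for an outgoing wave."""
    slope = np.polyfit(positions_m, phases_rad, 1)[0]   # dphi/dr in rad/m
    return -2.0 * np.pi * freq_hz / slope

# synthetic check: a 200 Hz surface wave travelling at 4 m/s
f, c_true = 200.0, 4.0
r = np.linspace(0.01, 0.03, 5)              # measurement positions, m
phi = -2.0 * np.pi * f / c_true * r         # ideal measured phases
c_est = surface_wave_speed(f, r, phi)
```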

  4. Utilization of nuclear techniques to estimate water erosion in tobacco plantations in Cuba

    International Nuclear Information System (INIS)

    Gil, Reinaldo H.; Peralta, José L.; Carrazana, Jorge; Fleitas, Gema; Aguilar, Yulaidis; Rivero, Mario; Morejón, Yilian M.; Oliveira, Jorge

    2015-01-01

    Soil erosion is a relevant factor in land degradation, causing several negative impacts at different levels on the environment, agriculture, etc. Tobacco plantations in the western part of the country have been negatively affected by water erosion due to natural and human factors. For the implementation of a strategy for sustainable land management, a key element is to quantify soil losses in order to establish policies for soil conservation. Nuclear techniques have advantages over traditional methods of assessing soil erosion and have been applied in different agricultural settings worldwide. Tobacco cultivation in Pinar del Río takes place on soils with high erosion levels, so it is important to apply techniques that support the quantification of soil erosion rates. This work shows the use of the 137Cs technique to characterize the soil erosion status of two sectors of a farm with tobacco plantations located in the south-western plain of Pinar del Río province. The sampling strategy included the evaluation of selected transects in the slope direction of the studied site. The soil samples were collected so as to incorporate the whole 137Cs profile. Different conversion models were applied, and the Mass Balance Model II provided the most representative results, estimating soil erosion rates from −18.28 to 8.15 t ha⁻¹ yr⁻¹. (author)
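    For orientation, the simplest of the 137Cs conversion models — the proportional model described by Walling and He — can be sketched as follows; the Mass Balance Model II actually used in the study adds time-dependent corrections, and the input values below are illustrative.

```python
def proportional_model_erosion(inventory_ref, inventory_site, plough_depth_m,
                               bulk_density_kg_m3, years_since_1963):
    """Proportional model: soil loss is assumed proportional to the fractional
    reduction of the 137Cs inventory relative to an undisturbed reference site.
    Returns t ha^-1 yr^-1; a negative value indicates deposition."""
    x = (inventory_ref - inventory_site) / inventory_ref     # fractional loss
    mass_depth = 10.0 * plough_depth_m * bulk_density_kg_m3  # plough layer, t/ha
    return mass_depth * x / years_since_1963

# illustrative: 30% inventory loss, 0.25 m plough layer, 1300 kg/m^3, 50 years
rate = proportional_model_erosion(1000.0, 700.0, 0.25, 1300.0, 50.0)
```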

  5. Estimation of Apple Volume and Its Shape Indentation Using Image Processing Technique and Neural Network

    Directory of Open Access Journals (Sweden)

    M Jafarlou

    2014-04-01

    Full Text Available Physical properties of agricultural products such as volume are the most important parameters influencing grading and packaging systems. They should be measured accurately, as they are considered in any good system design. Image processing and neural network techniques are both non-destructive and useful methods which have recently been used for this purpose. In this study, images of apples were captured from a constant distance and processed in MATLAB, and the edges of the apple images were extracted. The interior area of the apple image was divided into thin trapezoidal elements perpendicular to the longitudinal axis. The total volume of the apple was estimated by summing the incremental volumes of these elements revolved around the apple's longitudinal axis. A picture of a half-cut apple was also captured in order to obtain the volume of the apple shape's indentation, which was subtracted from the previously estimated total volume. The real volume of the apples was measured using the water displacement method, and the relation between the real volume and the estimated volume was obtained. The t-test and Bland-Altman analysis indicated that the difference between the real and estimated volumes was not significant (p > 0.05); the mean difference was 1.52 cm³ and the accuracy of measurement was 92%. Utilizing a neural network with input variables of dimensions and mass increased the accuracy up to 97% and decreased the difference between the mean volumes to 0.7 cm³.
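    The element-summation volume estimate is the disc method for a solid of revolution; a minimal sketch, validated against a sphere where the true volume is known:

```python
import numpy as np

def volume_of_revolution(radii, dh):
    """Disc method: revolve the edge profile about the longitudinal axis,
    V = pi * sum(r_i^2) * dh, mirroring the thin-element summation."""
    radii = np.asarray(radii, dtype=float)
    return float(np.pi * np.sum(radii**2) * dh)

# sanity check against a sphere of radius 3 cm (V = 4/3 * pi * r^3)
h = np.linspace(-3.0, 3.0, 6001)                  # slice positions along the axis
r = np.sqrt(np.maximum(0.0, 9.0 - h**2))          # edge profile radius per slice
v = volume_of_revolution(r, h[1] - h[0])          # ~113.1 cm^3
```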

  6. Apple fruit diameter and length estimation by using the thermal and sunshine hours approach and its application to the digital orchard management information system.

    Directory of Open Access Journals (Sweden)

    Ming Li

    Full Text Available In apple cultivation, simulation models may be used to monitor fruit size during the growth and development process, to predict production levels and to optimize fruit quality. Here, Fuji apples cultivated in spindle-type systems were used as the model crop. Apple size was measured during the growing period at intervals of about 20 days after full bloom, with three weather stations collecting orchard temperature and solar radiation data at different sites. A 2-year dataset (2011 and 2012) of apple fruit size measurements was integrated into the model according to the weather station deployment sites, together with the top two most important environmental factors, thermal time and sunshine hours. Apple fruit diameter and length were simulated using physiological development time (PDT), an indicator that combines important environmental factors such as temperature and photoperiod, as the driving variable. Compared to the model based on calendar development time (CDT), an indicator counting the days that elapse after full bloom, we confirmed that the PDT model improved the estimation accuracy to within 0.2 cm for fruit diameter and 0.1 cm for fruit length in an independent year (2013) using a similar data collection method. The PDT model was implemented in a web-based management information system for a digital orchard, which has been applied in Shandong Province, China since 2013. This system may be used to compute the dynamic curve of apple fruit size based on data obtained from a nearby weather station, and may provide important decision support for farmers, via the website and a short message service, to optimize crop production and, hence, economic benefit.
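    The thermal-and-sunshine-hours idea can be sketched as a daily development increment: growing degree-days scaled by a photoperiod factor. The published PDT model's exact response functions differ; `t_base`, `opt_day_length`, and the weather records below are illustrative assumptions.

```python
def physiological_development_time(t_max, t_min, sunshine_hours,
                                   t_base=10.0, opt_day_length=12.0):
    """Toy daily PDT increment: thermal time (growing degree-days above a
    base temperature) scaled by a saturating photoperiod factor."""
    gdd = max(0.0, (t_max + t_min) / 2.0 - t_base)
    photo = min(1.0, sunshine_hours / opt_day_length)
    return gdd * photo

# accumulate over a made-up week of (t_max, t_min, sunshine_hours) records
weather = [(28, 16, 10), (30, 18, 12), (25, 14, 6), (27, 15, 11),
           (29, 17, 9), (26, 15, 7), (31, 19, 13)]
pdt = sum(physiological_development_time(tx, tn, sh) for tx, tn, sh in weather)
```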

  7. Spectral element filtering techniques for large eddy simulation with dynamic estimation

    CERN Document Server

    Blackburn, H M

    2003-01-01

    Spectral element methods have previously been successfully applied to direct numerical simulation of turbulent flows with moderate geometrical complexity and low to moderate Reynolds numbers. A natural extension of application is to large eddy simulation of turbulent flows, although there has been little published work in this area. One of the obstacles to such application is the ability to deal successfully with turbulence modelling in the presence of solid walls in arbitrary locations. An appropriate tool with which to tackle the problem is dynamic estimation of turbulence model parameters, but while this has been successfully applied to simulation of turbulent wall-bounded flows, typically in the context of spectral and finite volume methods, there have been no published applications with spectral element methods. Here, we describe approaches based on element-level spectral filtering, couple these with the dynamic procedure, and apply the techniques to large eddy simulation of a prototype wall-bounded turb...
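    Element-level spectral filtering of the kind described can be sketched in one dimension: expand the nodal values in Legendre modes, damp mode k by a sharp exponential transfer function, and evaluate back. The alpha and s values here are conventional filter choices, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def modal_filter(values, x, p, alpha=36.0, s=16):
    """Low-pass modal filter on one spectral element mapped to [-1, 1]:
    damp Legendre mode k by exp(-alpha * (k/p)^s). A high filter order s
    leaves well-resolved (low-k) scales essentially untouched."""
    coeffs = legendre.legfit(x, values, p)     # nodal -> modal
    k = np.arange(p + 1)
    coeffs = coeffs * np.exp(-alpha * (k / p) ** s)
    return legendre.legval(x, coeffs)          # modal -> nodal

# a smooth field on 17 nodes is essentially unchanged by the filter
x = np.linspace(-1.0, 1.0, 17)
u = np.sin(np.pi * x)
u_f = modal_filter(u, x, p=16)
```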

  8. Estimation of trace elements in some anti-diabetic medicinal plants using PIXE technique

    International Nuclear Information System (INIS)

    Naga Raju, G.J.; Sarita, P.; Ramana Murty, G.A.V.; Ravi Kumar, M.; Seetharami Reddy, B.; John Charles, M.; Lakshminarayana, S.; Seshi Reddy, T.; Reddy, S. Bhuloka; Vijayan, V.

    2006-01-01

    Trace elemental analysis was carried out in various parts of some anti-diabetic medicinal plants using PIXE technique. A 3 MeV proton beam was used to excite the samples. The elements Cl, K, Ca, Ti, Cr, Mn, Fe, Ni, Cu, Zn, Br, Rb and Sr were identified and their concentrations were estimated. The results of the present study provide justification for the usage of these medicinal plants in the treatment of diabetes mellitus (DM) since they are found to contain appreciable amounts of the elements K, Ca, Cr, Mn, Cu, and Zn, which are responsible for potentiating insulin action. Our results show that the analyzed medicinal plants can be considered as potential sources for providing a reasonable amount of the required elements other than diet to the patients of DM. Moreover, these results can be used to set new standards for prescribing the dosage of the herbal drugs prepared from these plant materials

  9. 40Ar-39Ar method for age estimation: principles, technique and application in orogenic regions

    International Nuclear Information System (INIS)

    Dalmejer, R.

    1984-01-01

    A recently developed variant of the K-Ar method for age estimation, the 40Ar/39Ar method, is described. This method does not require direct analysis of potassium; its content is calculated as a function of 39Ar, which is formed from 39K under neutron activation. Errors resulting from interactions of potassium and calcium nuclei with neutrons are considered. Attention is paid to the step-heating technique used in the 40Ar-39Ar method and to obtaining the age spectrum. The applicability of the isochron diagram is discussed for the case of excess argon present in a sample. Examples of the application of the 40Ar-39Ar method for dating events in orogenic regions are presented.
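    The core of the method is the age equation with the irradiation parameter J, determined from a co-irradiated standard of known age; the J value and isotope ratio below are illustrative.

```python
import math

LAMBDA_K40 = 5.543e-10   # total 40K decay constant, 1/yr (Steiger & Jaeger, 1977)

def ar_ar_age(ar40_star_over_ar39, J):
    """40Ar-39Ar age equation: t = (1/lambda) * ln(1 + J * (40Ar*/39Ar)),
    where 40Ar* is radiogenic argon and J is the irradiation parameter."""
    return math.log(1.0 + J * ar40_star_over_ar39) / LAMBDA_K40

# illustrative: J = 0.01 and a measured 40Ar*/39Ar ratio of 20
age_ma = ar_ar_age(20.0, 0.01) / 1e6   # age in Ma
```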

  10. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.

  11. A pilot study of a simple screening technique for estimation of salivary flow.

    Science.gov (United States)

    Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru

    2009-09-01

    The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for detecting decreased salivary flow. A novel assay system was designed, comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper, plus a coloring reagent, based on the iodine-starch color reaction and the theory of paper chromatography. We investigated the relationship between resting whole salivary flow rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803; P < .01). For all subjects complaining of decreased salivary flow (n = 9), whose salivary flow rates fell below the cutoff value of 100 microL/min, 3 colored spots appeared on the paper, whereas for healthy subjects there was at most 1 colored spot. This novel assay system might be effective for estimating salivary flow not only in healthy people but also in bedridden and disabled elderly people.

  12. Aliasing Signal Separation of Superimposed Abrasive Debris Based on Degenerate Unmixing Estimation Technique.

    Science.gov (United States)

    Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei

    2018-03-15

    Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear between the friction pairs of components. Accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, potentially enabling a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious overlapping of abrasive debris signatures in the pipe, this paper focuses on separating the superimposed debris signals of an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating the debris signals and computing the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system's wear trend and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris signals and give an accurate count of the debris detected by the RMF abrasive sensor.
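    The degenerate unmixing estimation technique (DUET) separates more sources than sensors by clustering per-bin amplitude/delay ratios between two mixture channels and masking the time-frequency plane. A heavily simplified amplitude-only sketch for two spectrally disjoint synthetic signals (real DUET clusters joint amplitude-delay estimates from an STFT):

```python
import numpy as np

def duet_separate(x1, x2, ratio_threshold=1.0):
    """Simplified DUET-style unmixing of two sources from two mixtures:
    the per-bin magnitude ratio |X2|/|X1| clusters around each source's
    mixing coefficient, and binary masks assign each bin to one source."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    ratio = np.abs(X2) / (np.abs(X1) + 1e-12)
    mask_a = ratio < ratio_threshold            # bins dominated by source A
    sa = np.fft.irfft(np.where(mask_a, X1, 0.0), n=len(x1))
    sb = np.fft.irfft(np.where(~mask_a, X1, 0.0), n=len(x1))
    return sa, sb

# two spectrally disjoint synthetic signatures, mixed with different gains
t = np.arange(1024) / 1024.0
s1, s2 = np.sin(2 * np.pi * 30 * t), np.sin(2 * np.pi * 200 * t)
x1 = s1 + s2               # sensor channel 1
x2 = 0.3 * s1 + 2.0 * s2   # sensor channel 2
rec1, rec2 = duet_separate(x1, x2)
```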

  13. The Use of Coupled Code Technique for Best Estimate Safety Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2006-01-01

    Issues connected with the thermal-hydraulics and neutronics of nuclear plants still challenge the design, safety and operation of Light Water Reactors (LWR). The lack of full understanding of the complex mechanisms governing the interaction between these issues imposed the adoption of conservative safety limits. Those safety margins put restrictions on the optimal exploitation of the plants and consequently reduce their economic profit. In the light of the sustained development in computer technology, code capabilities have been enlarged substantially. Consequently, advanced safety evaluations and design optimizations that were not possible a few years ago can now be performed. In fact, during the last decades Best Estimate (BE) neutronic and thermal-hydraulic calculations were carried out along rather parallel paths, with only few interactions between them. Nowadays, it has become possible to switch to a new generation of computational tools, namely the coupled code technique. The application of such methods is mandatory for the analysis of accident conditions with strong coupling between the core neutronics and the primary circuit thermal-hydraulics, especially when asymmetrical processes take place in the core, leading to locally space-dependent power generation. Through the current study, a demonstration of the maturity level achieved in the calculation of 3-D core performance during complex accident scenarios in NPPs is emphasized. Typical applications are outlined and discussed, showing the main features and limitations of this technique. (author)

  14. Estimation of Shie Glacier Surface Movement Using Offset Tracking Technique with Cosmo-Skymed Images

    Science.gov (United States)

    Wang, Q.; Zhou, W.; Fan, J.; Yuan, W.; Li, H.; Sousa, J. J.; Guo, Z.

    2017-09-01

    Movement is one of the most important characteristics of glaciers and can cause serious natural disasters. For this reason, monitoring these massive blocks is a crucial task. Synthetic Aperture Radar (SAR) can operate all day in any weather conditions, and the images acquired by SAR contain intensity and phase information, which are irreplaceable advantages in monitoring the surface movement of glaciers. Moreover, a variety of techniques based on the information in SAR images, such as DInSAR and offset tracking, can be applied to measure the movement. Sangwang lake, a glacial lake in the Himalayas, is in great potential danger of outburst. Shie glacier is situated upstream of the Sangwang lake. Hence, it is significant to monitor Shie glacier surface movement to assess the risk of outburst. In this paper, 6 high resolution COSMO-SkyMed images spanning from August to December 2016 are processed with the offset tracking technique to estimate the surface movement of the Shie glacier. The maximum velocity of the glacier surface movement is 51 cm/d, observed at the end of the glacier tongue, and the velocity is correlated with the change of elevation. Moreover, the glacier surface movement in summer is faster than in winter, and the velocity decreases as the local temperature decreases. Based on the above conclusions, the glacier may break off at the end of the tongue in the near future. The movement results extracted in this paper also illustrate the advantages of high resolution SAR images in monitoring the surface movement of small glaciers.
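
    Offset tracking at its core finds, for each image patch, the shift that best aligns the two acquisitions. A minimal phase-correlation version on synthetic data is sketched below; the paper's COSMO-SkyMed processing chain (patch oversampling, windowing, outlier filtering) is not reproduced here.

```python
import numpy as np

# Offset tracking sketch via phase correlation: recover the integer
# pixel shift between a reference (master) patch and a displaced
# (slave) patch. Data are synthetic, not SAR imagery.
rng = np.random.default_rng(0)
master = rng.random((64, 64))            # reference intensity patch
true_shift = (3, 5)                      # rows, cols the surface "moved"
slave = np.roll(master, true_shift, axis=(0, 1))

# Normalized cross-power spectrum; its inverse FFT peaks at the offset
F1, F2 = np.fft.fft2(master), np.fft.fft2(slave)
cross = F2 * np.conj(F1)
cross /= np.abs(cross) + 1e-12
corr = np.fft.ifft2(cross).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
# Peaks past the midpoint wrap around to negative offsets
offset = tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

    Dividing the recovered pixel offset by the acquisition time separation and multiplying by the pixel spacing yields the surface velocity reported in such studies.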

  15. Estimation the Amount of Oil Palm Trees Production Using Remote Sensing Technique

    Science.gov (United States)

    Fitrianto, A. C.; Tokimatsu, K.; Sufwandika, M.

    2017-12-01

    Currently, fossil fuels are used as the main source of power supply to generate energy, including electricity. Depletion of fossil fuel reserves has been causing an increasing price of crude petroleum and a growing demand for alternative energy that is renewable and environment-friendly, such as that derived from vegetable oils like palm oil, rapeseed and soybean. Indonesia is known as a big palm oil producer; its palm oil sector is the largest agricultural industry, with the total harvested oil palm area estimated to have grown to 8.9 million ha in 2015. On the other hand, the lack of information about the age of oil palm trees, their changes and their spatial distribution is a main problem for energy planning. This research was conducted to estimate the fresh fruit bunch (FFB) production of oil palm and its distribution using remote sensing techniques. The Cimulang oil palm plantation was chosen as the study area. First, the age of the oil palm trees was estimated from their canopy density derived from Landsat 8 OLI analysis and classified into five classes. From this result, we correlated oil palm age with average FFB production per six months: seed (0-3 years, 0 kg), young (4-8 years, 68.77 kg), teen (9-14 years, 109.08 kg), and mature (14-25 years, 73.91 kg). The satellite image analysis shows that the Cimulang plantation area consists of teen oil palm trees covering around 81.5% of the area, followed by mature oil palm trees with 18.5%, corresponding to 100 hectares, and that the total FFB production every six months is around 7,974,787.24 kg.
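
    The aggregation step can be sketched as a back-of-the-envelope calculation. Only the per-six-month yields per age class follow the abstract; the class areas and the 136 trees/ha planting density below are assumed figures for illustration, so the total will not match the study's reported number exactly.

```python
# Back-of-the-envelope FFB estimate from age-class areas (a sketch; the
# class areas and the 136 trees/ha density are assumptions, not values
# reported in the study).
yield_per_tree_kg = {        # average FFB per tree per six months (abstract)
    "seed":   0.0,           # 0-3 years
    "young":  68.77,         # 4-8 years
    "teen":   109.08,        # 9-14 years
    "mature": 73.91,         # 14-25 years
}
area_ha = {"teen": 440.0, "mature": 100.0}   # assumed from the 81.5%/18.5% split
trees_per_ha = 136                           # assumed planting density

total_kg = sum(area_ha[c] * trees_per_ha * yield_per_tree_kg[c]
               for c in area_ha)
```

    With these assumptions the total lands in the same order of magnitude as the reported 7.97 million kg, which is the point of the sketch: class areas times tree density times per-tree yield.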

  16. Fundamental length and relativistic length

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1988-01-01

    It is noted that the introduction of a fundamental length contradicts the conventional representations concerning the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the following ''elongation formula'' permits one to solve this problem

  17. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  18. Flame Length

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — Flame length was modeled using FlamMap, an interagency fire behavior mapping and analysis program that computes potential fire behavior characteristics. The tool...

  19. An analysis on DNA fingerprints of thirty papaya cultivars (Carica papaya L.), grown in Thailand with the use of amplified fragment length polymorphisms technique.

    Science.gov (United States)

    Ratchadaporn, Janthasri; Sureeporn, Katengam; Khumcha, U

    2007-09-15

    The experiment was carried out at the Department of Horticulture, Ubon Ratchathani University, Ubon Ratchathani province, Northeast Thailand, from June 2002 to May 2003, and aimed to identify DNA fingerprints of thirty papaya cultivars using the Amplified Fragment Length Polymorphism (AFLP) technique. Papaya cultivars were collected from six different research centers in Thailand. Papaya plants of each cultivar were grown under field conditions for up to four months, then leaves number 2 and 3 of each cultivar (counted from the top) were chosen for DNA extraction, and the samples were used for AFLP analysis. Out of the 64 random primer combinations used, 55 pairs yielded DNA bands, but only 12 pairs were randomly chosen for the final analysis of the experiment. The results showed that the AFLP markers gave Polymorphic Information Content (PIC) values in three ranges: 235 markers lay in a PIC range of 0.003-0.05, 47 in a range of 0.15-0.20, and 12 in a range of 0.35-0.40. The dendrogram cluster analysis revealed that the thirty papaya cultivars were classified into six groups: (1) Kaeg Dum and Malador, (2) Kaeg Nuan, (3) Pakchong and Solo, (4) Taiwan, (5) Co Coa Hai Nan and (6) Sitong. Nevertheless, in spite of the six papaya groups, all papaya cultivars were genetically related to each other, and no significant diversity among the cultivars was found.
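
    PIC summarizes how informative a marker is. One widely used simplified formulation is PIC = 1 - sum(p_i^2) over allele frequencies; for a dominant biallelic marker such as an AFLP band this reduces to 2p(1-p), capped at 0.5, consistent with the reported upper range of 0.35-0.40. The formula choice below is our assumption; the abstract does not state which estimator was used.

```python
# Polymorphic Information Content sketch using the simplified
# expected-heterozygosity formulation PIC = 1 - sum(p_i^2).
def pic(freqs):
    assert abs(sum(freqs) - 1.0) < 1e-9, "allele frequencies must sum to 1"
    return 1.0 - sum(p * p for p in freqs)

pic_half = pic([0.5, 0.5])    # maximally informative biallelic marker -> 0.5
pic_rare = pic([0.95, 0.05])  # nearly monomorphic band -> low information
```

    Markers in the study's 0.003-0.05 range behave like the nearly monomorphic case: almost all cultivars share the same band, so the marker discriminates poorly.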

  20. A semester-long project for teaching basic techniques in molecular biology such as restriction fragment length polymorphism analysis to undergraduate and graduate students.

    Science.gov (United States)

    DiBartolomeis, Susan M

    2011-01-01

    Several reports on science education suggest that students at all levels learn better if they are immersed in a project that is long term, yielding results that require analysis and interpretation. I describe a 12-wk laboratory project suitable for upper-level undergraduates and first-year graduate students, in which the students molecularly locate and map a gene from Drosophila melanogaster called dusky and one of dusky's mutant alleles. The mapping strategy uses restriction fragment length polymorphism analysis; hence, students perform most of the basic techniques of molecular biology (DNA isolation, restriction enzyme digestion and mapping, plasmid vector subcloning, agarose and polyacrylamide gel electrophoresis, DNA labeling, and Southern hybridization) toward the single goal of characterizing dusky and the mutant allele dusky(73). Students work as individuals, pairs, or in groups of up to four students. Some exercises require multitasking and collaboration between groups. Finally, results from everyone in the class are required for the final analysis. Results of pre- and postquizzes and surveys indicate that student knowledge of appropriate topics and skills increased significantly, students felt more confident in the laboratory, and students found the laboratory project interesting and challenging. Former students report that the lab was useful in their careers.

  2. Right ventricular volume estimation with cine MRI; A comparative study between Simpson's rule and a new modified area-length method

    Energy Technology Data Exchange (ETDEWEB)

    Sawachika, Takashi (Yamaguchi Univ., Ube (Japan). School of Medicine)

    1993-04-01

    To quantitate right ventricular (RV) volumes easily using cine MRI, we developed a new method called the 'modified area-length (MOAL) method'. To validate this method, we compared it to the conventional Simpson's rule. A Magnetom H15 (Siemens) was used, and 6 normal volunteers and 21 patients with various RV sizes were imaged with an ECG-triggered gradient echo method (FISP, TR 50 ms, TE 12 ms, slice thickness 9 mm). For Simpson's rule, transverse images of 12 sequential views covering the whole heart were acquired. For the MOAL method, two orthogonal views were imaged: a sagittal view including the RV outflow tract, and a coronal view defined from the sagittal image to cover the whole RV. From these images the RV areas (As, Ac) and the longest distance between the RV apex and the pulmonary valve (Lmax) were determined. By correlating RV volumes measured by Simpson's rule to As*Ac/Lmax, the RV volume could be estimated as follows: V = 0.85*As*Ac/Lmax + 4.55. Thus the MOAL method demonstrated excellent accuracy in quantitating RV volume, and the acquisition time was reduced to one fifth of that of Simpson's rule. This should be a highly promising method for routine clinical application. (author).
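
    The reported regression can be wrapped directly as a helper. The units below (cm^2 for the areas, cm for Lmax, mL for the volume) are assumed for illustration, as the abstract does not state them explicitly.

```python
# The reported MOAL regression wrapped as a function
# (units assumed: cm^2, cm, mL -- not stated in the abstract).
def moal_rv_volume(area_sagittal, area_coronal, l_max):
    """RV volume from the modified area-length fit V = 0.85*As*Ac/Lmax + 4.55."""
    return 0.85 * area_sagittal * area_coronal / l_max + 4.55

v = moal_rv_volume(40.0, 35.0, 8.0)   # illustrative measurements
```

    The appeal of the formula is operational: two planar areas and one length, from two acquisitions instead of twelve, feed a single linear fit against the Simpson's-rule reference.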

  3. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    Science.gov (United States)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is
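
    The blade element momentum method underlying the propulsion model can be sketched for a single blade element as below. This minimal version uses thin-airfoil lift and a constant drag coefficient, and omits the tip-loss and Reynolds-number corrections the dissertation adds; every geometry and flight-condition number is an assumed placeholder.

```python
import math

# Minimal single-element BEM iteration for a small propeller (a sketch of
# the general method only; all numbers below are assumed, not from the work).
rho = 1.225                   # air density, kg/m^3
V = 15.0                      # freestream velocity, m/s
omega = 838.0                 # rotation rate, rad/s (~8000 rpm)
B, r = 2, 0.089               # blade count, element radius (m)
chord = 0.02                  # element chord, m
theta = math.radians(17.6)    # local pitch angle

a, ap = 0.0, 0.0              # axial and swirl induction factors
for _ in range(200):
    v_ax = V * (1.0 + a)              # propeller convention: inflow speeds up
    v_tan = omega * r * (1.0 - ap)
    phi = math.atan2(v_ax, v_tan)
    alpha = theta - phi
    cl = 2.0 * math.pi * alpha        # thin-airfoil lift slope
    cd = 0.01                         # constant profile drag (assumed)
    W2 = v_ax**2 + v_tan**2
    # Blade-element thrust and torque per unit span
    dT = 0.5 * rho * W2 * B * chord * (cl * math.cos(phi) - cd * math.sin(phi))
    dQ = 0.5 * rho * W2 * B * chord * (cl * math.sin(phi) + cd * math.cos(phi)) * r
    # Momentum-theory balance solved for the induction factors
    k = dT / (4.0 * math.pi * r * rho * V**2)          # (1+a)*a = k
    a_new = 0.5 * (-1.0 + math.sqrt(1.0 + 4.0 * k))
    ap_new = dQ / (4.0 * math.pi * r**3 * rho * V * (1.0 + a_new) * omega)
    if abs(a_new - a) < 1e-10 and abs(ap_new - ap) < 1e-10:
        break
    a += 0.3 * (a_new - a)            # relaxation for stable convergence
    ap += 0.3 * (ap_new - ap)

thrust_per_span = dT                  # N/m at convergence
```

    Integrating such elements along the blade and across operating points gives the power-available map that is then tied to the measurable electrical power.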

  4. Estimating biophysical properties of eucalyptus plantations using optical remote sensing techniques

    Science.gov (United States)

    Soares, Joao V.; Xavier, Alexandre C.; de Almeida, Auro C.; da Costa Freitas, Corina

    1998-12-01

    The feasibility of the inversion of optical remote sensing products to measure critical biophysical properties of Eucalyptus forests at regional scales is investigated here. The biophysical variables used were leaf area index (LAI), diameter at breast height (DBH), height and age of Eucalyptus stands pertaining to a combination of different genetic materials (E. urophylla x E. grandis hybrids), propagating systems (seeds or cuttings) and management systems (planting and coppicing). The field sampling was done daily during 3 months, from April to June 1997, and covered 130 stands of minimum size 9 hectares, within a Eucalyptus farming area of about 800 km2, centered at 19 degrees South, 42 degrees West, Brazil. The stands ranged from 12 to 84 months old. The measurements of LAI were done using two pairs of LAI-2000 (LICOR) instruments under conditions of diffuse light. The Normalized Difference Vegetation Index (NDVI) and the Soil Adjusted Vegetation Index (SAVI) were derived from a LANDSAT-TM image acquired on June 5, 1997. Furthermore, a mixture model technique was applied to derive three new parameters: fraction of green vegetation (FGV), fraction of shadow (FSH), and fraction of soil (FS). Regression analyses were done between the biophysical variables and the remote sensing products. Linear correlations with coefficients of determination (R2) as high as 0.8 were found between LAI and FGV and between LAI and SAVI, for all genetic materials. In general, SAVI was shown to give better estimates of LAI than NDVI, which is explained by the openings in the canopy as the Eucalyptus grow older. The correlations with the other biophysical variables (height and DBH) were also significant, although the R2 ranged from 0.4 to 0.6. The correlation between FGV and SAVI was higher than 90%, such that they can be used to estimate Eucalyptus biophysical parameters with the same statistical significance.
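
    Both indices have standard definitions that are one-liners over the red and near-infrared bands. The sketch below uses illustrative reflectance values, and L = 0.5 is the customary soil-adjustment factor, not a value quoted from the paper.

```python
import numpy as np

# Standard NDVI and SAVI definitions over red and NIR reflectance.
def ndvi(red, nir):
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

def savi(red, nir, L=0.5):
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Dense eucalyptus canopy pixel (illustrative reflectances)
ndvi_canopy = ndvi(0.04, 0.45)
savi_canopy = savi(0.04, 0.45)
```

    The L term damps the index where soil shows through the canopy, which is consistent with the paper's finding that SAVI tracks LAI better than NDVI as canopy openings appear in older stands.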

  5. Comparison of two non-linear prediction techniques for estimation of some intact rock parameters

    Science.gov (United States)

    Yagiz, Saffet; Sezer, Ebru; Gokceoglu, Candan

    2010-05-01

    Traditionally, regression techniques have been used to predict some rock properties from physical and index parameters. For this purpose, numerous models and empirical equations have been proposed in the literature to predict the uniaxial compressive strength (UCS) and the elasticity modulus (E) of intact rocks. Two powerful modeling techniques for this purpose are non-linear multivariable regression (NLMR) and artificial neural networks (ANN). The aims of the study are to develop models to predict the UCS and E of rocks using these predictive tools; to investigate whether the two-cycle or the four-cycle slake durability index, used as an input parameter, demonstrates better characterization capacity for carbonate rocks; and to introduce two new performance ranking approaches, via a performance index and a degree of consistency, to select the best predictor among developed models whose complexity and rank cannot be resolved by the simple ranking approach introduced previously in the literature. For these purposes, seven types of carbonate rocks were collected from quarries in southwestern Turkey, and their properties, including the uniaxial compressive strength, the Schmidt hammer rebound, effective porosity, dry unit weight, P-wave velocity, the modulus of elasticity, and both the two- and four-cycle slake durability indices, were determined to establish a dataset for constructing the models. As a result of this study, it is found that the four-cycle slake durability index exhibits more characterization capacity for carbonate rock in the models in comparison with the two-cycle index. Also, the ANN models having two outputs (UCS and E) exhibit more accurate estimation capacity than the NLMR models. In addition, the newly introduced performance ranking index and degree of consistency may be accepted as useful indicators for obtaining the performance ranking of complex models. Consequently
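
    The non-linear multivariable regression side of such comparisons can be sketched with a power-law model linearized by logarithms and fitted by least squares. Everything below is synthetic; the paper's actual model form, dataset and fitted coefficients are not reproduced.

```python
import numpy as np

# NLMR sketch: fit UCS = c * Vp^b1 * n^b2 by taking logs,
# log(UCS) = log(c) + b1*log(Vp) + b2*log(n), and solving least squares.
rng = np.random.default_rng(1)
vp = rng.uniform(2.0, 6.0, 40)          # P-wave velocity, km/s (synthetic)
por = rng.uniform(1.0, 15.0, 40)        # effective porosity, % (synthetic)
ucs_true = 8.0 * vp**1.5 * por**-0.4    # assumed "true" power law, MPa

A = np.column_stack([np.ones_like(vp), np.log(vp), np.log(por)])
coef, *_ = np.linalg.lstsq(A, np.log(ucs_true), rcond=None)
c_hat, b1_hat, b2_hat = np.exp(coef[0]), coef[1], coef[2]
```

    On noiseless synthetic data the coefficients are recovered exactly; on real carbonate-rock data the residual scatter is what the paper's performance index and degree of consistency are designed to rank.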

  6. ESTIMATION OF SHIE GLACIER SURFACE MOVEMENT USING OFFSET TRACKING TECHNIQUE WITH COSMO-SKYMED IMAGES

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-09-01

    Full Text Available Movement is one of the most important characteristics of glaciers and can cause serious natural disasters. For this reason, monitoring these massive blocks is a crucial task. Synthetic Aperture Radar (SAR can operate all day in any weather conditions, and the images acquired by SAR contain intensity and phase information, which are irreplaceable advantages in monitoring the surface movement of glaciers. Moreover, a variety of techniques based on the information in SAR images, such as DInSAR and offset tracking, can be applied to measure the movement. Sangwang lake, a glacial lake in the Himalayas, is in great potential danger of outburst. Shie glacier is situated upstream of the Sangwang lake. Hence, it is significant to monitor Shie glacier surface movement to assess the risk of outburst. In this paper, 6 high resolution COSMO-SkyMed images spanning from August to December 2016 are processed with the offset tracking technique to estimate the surface movement of the Shie glacier. The maximum velocity of the glacier surface movement is 51 cm/d, observed at the end of the glacier tongue, and the velocity is correlated with the change of elevation. Moreover, the glacier surface movement in summer is faster than in winter, and the velocity decreases as the local temperature decreases. Based on the above conclusions, the glacier may break off at the end of the tongue in the near future. The movement results extracted in this paper also illustrate the advantages of high resolution SAR images in monitoring the surface movement of small glaciers.

  7. Estimation of radiation exposure of different dose saving techniques in 128-slice computed tomography coronary angiography.

    Science.gov (United States)

    Ketelsen, Dominik; Fenchel, Michael; Buchgeister, Markus; Thomas, Christoph; Boehringer, Nadine; Tsiflikas, Ilias; Kaempf, Michael; Syha, Roland; Claussen, Claus D; Heuschmid, Martin

    2012-02-01

    To estimate the effective dose of cardiac CT with different dose saving strategies at varying heart rates. For dose measurements, an Alderson-Rando phantom equipped with thermoluminescent dosimeters was used. The effective dose was calculated according to ICRP 103. Exposure was performed on a 128-slice single-source scanner providing a rotation time of 0.30 s and standard protocols with 120 kV and 160 mAs/rot. Protocols were evaluated without ECG-pulsing, with two different ECG-pulsing techniques, and with automated exposure control, at simulated heart rates of 60 and 100 beats per minute. Depending on the dose saving technique and the heart rate, the effective whole-body dose of a cardiac scan ranged from 2.8 to 9.5 mSv and from 4.3 to 16.0 mSv for males and females, respectively. The radiation-sensitive breast tissue in the primary scan range results in a female dose increased by 66.7 ± 6.0%. Prospective triggering has the greatest dose saving potential, reducing the effective dose to 27.8% of that of a comparable scan protocol with retrospective ECG-triggering and no ECG-pulsing. Furthermore, the heart rate influences the radiation exposure, which increases significantly at lower heart rates. Because of this broad variability in the radiation exposure of cardiac CT, the radiologist and the CT technician should be aware of the different dose reduction strategies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  8. Estimation of Sub Hourly Glacier Albedo Values Using Artificial Intelligence Techniques

    Science.gov (United States)

    Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier

    2013-04-01

    Glaciers are the most important fresh water reservoirs, storing about 67% of total fresh water. Unfortunately, they are retreating and some small glaciers have already disappeared. Thus, snow and glacier melt (SGM) estimation plays an important role in water resources management. Whether SGM is estimated by a complete energy balance or a simplified method, albedo is an important input in most of the methods. However, it is a variable quantity depending on the ground surface and local conditions. The present research presents a new approach for estimating sub-hourly albedo values using different artificial intelligence techniques, such as artificial neural networks and decision trees, along with measured and easy-to-obtain data. The models were developed using measured data from the Zongo-Ore station located on the Bolivian tropical glacier Zongo (68°10' W, 16°15' S). This station automatically records several meteorological parameters every 30 minutes, such as incoming short wave radiation, outgoing short wave radiation, temperature and relative humidity. The ANN model used was the multi-layer perceptron, while the decision tree used was the M5 model. Both models were trained using the WEKA software and validated using the cross-validation method. After analysing the model performances, it was concluded that the decision tree models have a better performance. The model with the best performance was then validated with measured data from the Ecuadorian tropical glacier Antizana (78°09'W, 0°28'S). The model predicts the sub-hourly albedo with an overall mean absolute error (MAE) of 0.103. The highest errors occur for measured albedo values higher than 0.9. Considering that this is an extreme value coincident with low measured values of incoming short wave radiation, it is reasonable to assume that such values include errors due to censored data. Assuming a maximum albedo of 0.9 improved the accuracy of the model, reducing the MAE to less than 0.1. Considering that the
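
    The target quantity itself comes straight from the station's paired radiometers, and the 0.9 cap discussed above is a one-liner. The numbers below are illustrative, not Zongo-Ore records, and the "predicted" values stand in for hypothetical model output.

```python
import numpy as np

# Albedo from paired shortwave radiometers, with the 0.9 cap applied to
# suspect low-radiation readings, and the MAE metric used in the study.
sw_in  = np.array([600.0, 450.0, 80.0, 30.0])   # incoming shortwave, W/m^2
sw_out = np.array([300.0, 360.0, 76.0, 29.0])   # outgoing shortwave, W/m^2

albedo = sw_out / sw_in
albedo_capped = np.minimum(albedo, 0.9)          # censor suspect readings

predicted = np.array([0.48, 0.78, 0.85, 0.88])   # hypothetical model output
mae_raw    = np.mean(np.abs(albedo - predicted))
mae_capped = np.mean(np.abs(albedo_capped - predicted))
```

    The low-radiation rows (last two columns) are exactly where raw ratios drift above physically plausible values, so capping them is what reduces the MAE, mirroring the abstract's observation.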

  9. Estimation of Actual Evapotranspiration Using an Agro-Hydrological Model and Remote Sensing Techniques

    Directory of Open Access Journals (Sweden)

    mostafa yaghoobzadeh

    2017-02-01

    Full Text Available Introduction: Accurate estimation of evapotranspiration (ET) plays an important role in the quantification of the water balance at watershed, plain and regional scales. Moreover, it is important for managing water resources, such as water allocation, irrigation management, and evaluating the effects of land use change on water yields. Different methods are available for ET estimation, including Bowen ratio energy balance systems, eddy correlation systems and weighing lysimeters. Water balance techniques offer powerful alternatives for measuring ET and other surface energy fluxes. In spite of the elegance, high accuracy and theoretical attractions of these techniques for measuring ET, their practical use over large areas may be limited. They can be very expensive for practical applications at regional scales under heterogeneous terrains composed of different agro-ecosystems. The use of satellite measurements is an appropriate approach to overcome the aforementioned limitations. The feasibility of using remotely sensed crop parameters in combination with agro-hydrological models has been investigated in recent studies. The aim of the present study was to determine evapotranspiration by two methods, remote sensing and the soil, water, atmosphere, and plant (SWAP) model, for wheat fields located in the Neishabour plain. The output of SWAP has been validated by means of soil water content measurements. Furthermore, the actual evapotranspiration estimated by SWAP has been considered as the "reference" in the comparison with the SEBAL energy balance model. Materials and Methods: The Surface Energy Balance Algorithm for Land (SEBAL) was used to estimate actual ET fluxes from MODIS satellite images. SEBAL is a one-layer energy balance model that estimates latent heat flux and other energy balance components without information on soil, crop, and management practices. The near surface energy balance equation can be approximated as Rn = G + H + λET, where Rn is net radiation (W/m2), G
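
    The residual form of that balance can be sketched directly. The flux values below are illustrative, not taken from the Neishabour scenes, and the 2.45 MJ/kg latent heat of vaporization is a standard approximate constant rather than a figure from the paper.

```python
# Latent heat flux as the residual of the surface energy balance used by
# SEBAL (lambda*ET = Rn - G - H), converted to an equivalent evaporation rate.
LAMBDA = 2.45e6           # latent heat of vaporization, J/kg (~20 degC)

def latent_heat_flux(rn, g, h):
    """lambda*ET in W/m^2 from net radiation, soil heat flux, sensible heat flux."""
    return rn - g - h

def et_mm_per_hour(lam_et):
    """Convert a latent heat flux (W/m^2) to evaporation in mm/h."""
    return lam_et * 3600.0 / LAMBDA   # 1 kg water per m^2 equals 1 mm depth

lam_et = latent_heat_flux(rn=500.0, g=50.0, h=150.0)
et_rate = et_mm_per_hour(lam_et)
```

    SEBAL's work lies in estimating Rn, G and H per pixel from the imagery; once those are in hand, ET falls out of this residual as shown.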

  10. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis, the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET has been applied to model turbine tower vibration to good effect, providing an understanding of the tower vibration dynamic characteristics and the main factors influencing them. The developed tower vibration model comprises two different parts: a sub-model used below rated wind speed, and another for above rated wind speed. Supervisory control and data acquisition (SCADA system data from a single wind turbine, collected from March to April 2006, is used in the modeling. Model validation has been subsequently undertaken and is presented. This research has demonstrated the effectiveness of the NSET approach to tower vibration, in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry, a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
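
    The core of NSET is a memory matrix of healthy observation vectors and a nonlinear similarity operator used in place of the inner product; the model's output is a reconstruction of the current state from the memorized ones, and the residual is what gets thresholded. The sketch below uses a common similarity kernel and random stand-ins for SCADA variables; it is a generic NSET illustration, not the paper's trained model.

```python
import numpy as np

# Minimal NSET sketch: memory matrix D of healthy states, kernel-based
# "product", reconstruction x_est = D @ w with w = (D'(x)D)^-1 (D'(x)x).
rng = np.random.default_rng(2)
D = rng.random((3, 20))        # 3 variables x 20 memorized healthy states

def sim(u, v):
    """Nonlinear similarity operator replacing the inner product."""
    return 1.0 / (1.0 + np.linalg.norm(u - v))

def nset_estimate(D, x, ridge=1e-10):
    m = D.shape[1]
    G = np.array([[sim(D[:, i], D[:, j]) for j in range(m)] for i in range(m)])
    a = np.array([sim(D[:, i], x) for i in range(m)])
    w = np.linalg.solve(G + ridge * np.eye(m), a)   # small ridge for stability
    return D @ w

x_healthy = D[:, 5]                                 # a memorized state
x_faulty = x_healthy + np.array([0.0, 0.0, 2.0])    # e.g. abnormal vibration

res_healthy = np.linalg.norm(x_healthy - nset_estimate(D, x_healthy))
res_faulty = np.linalg.norm(x_faulty - nset_estimate(D, x_faulty))
```

    A memorized state reconstructs almost exactly (the weight vector collapses to a unit vector), while a state outside the healthy envelope leaves a much larger residual, which is the signal used for fault detection such as the blade angle asymmetry case above.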

  11. Comparison of volatility function technique for risk-neutral densities estimation

    Science.gov (United States)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique using an interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth-order polynomial is more appropriate than using a smoothing spline, in that the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of the future developments of the underlying asset.
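
    The fourth-order-polynomial route can be sketched end to end with the standard Breeden-Litzenberger relation q(K) = e^(rT) * d2C/dK2: fit the polynomial to the implied-volatility smile, reprice calls on a dense strike grid, and differentiate twice. The smile below is synthetic, not DJIA data, and the simple finite differencing omits the smoothing refinements a production extraction would use.

```python
import math
import numpy as np

# Risk-neutral density via a fourth-order polynomial volatility function
# and the Breeden-Litzenberger relation (synthetic smile, not DJIA data).
S0, r, T = 100.0, 0.01, 30.0 / 365.0

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# "Observed" implied vols on a coarse strike grid (synthetic smile)
K_obs = np.linspace(70.0, 130.0, 13)
iv_obs = 0.2 + 0.1 * ((K_obs - 100.0) / 100.0) ** 2

# Fourth-order polynomial fit of the smile
poly = np.polynomial.Polynomial.fit(K_obs, iv_obs, 4)

# Reprice on a dense grid and differentiate twice numerically
K = np.linspace(70.0, 130.0, 601)
C = np.array([bs_call(S0, k, r, T, poly(k)) for k in K])
dK = K[1] - K[0]
density = math.exp(r * T) * np.gradient(np.gradient(C, dK), dK)

Kd = K * density
mass = float(np.sum(0.5 * (density[1:] + density[:-1])) * dK)   # ~1
mean = float(np.sum(0.5 * (Kd[1:] + Kd[:-1])) * dK)             # ~forward
```

    The first moment landing near the forward price is the sanity check the paper exploits: its gap to the realized underlying price at maturity is the forecast-accuracy input.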

  12. Artificial Intelligence Techniques for the Estimation of Direct Methanol Fuel Cell Performance

    Science.gov (United States)

    Hasiloglu, Abdulsamet; Aras, Ömür; Bayramoglu, Mahmut

    2016-04-01

    Artificial neural networks and neuro-fuzzy inference systems are well-known artificial intelligence techniques used for black-box modelling of complex systems. In this study, feed-forward artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are used for modelling the performance of a direct methanol fuel cell (DMFC). Current density (I), fuel cell temperature (T), methanol concentration (C), liquid flow-rate (q) and air flow-rate (Q) are selected as input variables to predict the cell voltage. Polarization curves are obtained for 35 different operating conditions according to a statistically designed experimental plan. In the modelling study, various subsets of input variables and various types of membership function are considered. A feed-forward architecture with one hidden layer is used in the ANN modelling. The optimum performance is obtained with the input set (I, T, C, q) using twelve hidden neurons and a sigmoidal activation function. On the other hand, a first-order Sugeno inference system is applied in the ANFIS modelling, and the optimum performance is obtained with the input set (I, T, C, q) using sixteen fuzzy rules and triangular membership functions. The test results show that the ANN model estimates the polarization curve of the DMFC more accurately than the ANFIS model.
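
    The selected ANN architecture (4 inputs I, T, C, q; 12 sigmoidal hidden neurons; one linear output for cell voltage) can be sketched as a forward pass. The weights below are random placeholders, since the trained values are not published in the abstract, and inputs are assumed to be pre-normalized.

```python
import numpy as np

# Forward pass of the reported architecture: 4 inputs -> 12 sigmoidal
# hidden neurons -> 1 linear output. Weights are random placeholders.
rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(12, 4)), rng.normal(size=12)   # input -> hidden
W2, b2 = rng.normal(size=(1, 12)), rng.normal(size=1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cell_voltage(x):
    """x = (current density, temperature, concentration, liquid flow-rate)."""
    h = sigmoid(W1 @ x + b1)
    return float((W2 @ h + b2)[0])

v = cell_voltage(np.array([0.2, 0.7, 0.5, 0.3]))   # assumed normalized inputs
```

    Training would fit W1, b1, W2, b2 to the 35 measured polarization curves; the sketch only fixes the shape of the map from operating conditions to voltage.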

  13. Heart Failure: Diagnosis, Severity Estimation and Prediction of Adverse Events Through Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Evanthia E. Tripoliti

    2017-01-01

    Full Text Available Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3–5% of hospital admissions are linked with heart failure incidents. Heart failure is the leading cause of admission encountered by healthcare professionals in their clinical practice. The costs are very high, reaching up to 2% of total health costs in developed countries. Building an effective disease management strategy requires analysis of large amounts of data, early detection of the disease, assessment of its severity and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of the patients and reduce the associated medical costs. Toward this direction, machine learning techniques have been employed. The aim of this paper is to present the state of the art of the machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence of heart failure, estimating its subtype, assessing its severity and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality, are presented. To the authors' knowledge, it is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, is presented.

  14. Comparing Small Area Techniques for Estimating Poverty Measures: the Case Study of Austria and Spain

    Directory of Open Access Journals (Sweden)

    Federico Crescenzi

    2016-06-01

    Full Text Available The Europe 2020 Strategy has formulated key policy objectives, or so-called "headline targets", which the European Union as a whole and Member States individually are committed to achieving by 2020. One of the five headline targets is directly related to key quality-of-life aspects, namely social inclusion; within these targets, the European Union Statistics on Income and Living Conditions (EU-SILC) headline indicators at-risk-of-poverty or social exclusion and its components will be included in the budgeting of structural funds, one of the main instruments through which policy targets are attained. For this purpose, the Directorate-General for Regional Policy of the European Commission is aiming to use sub-national/regional level data (NUTS 2). Starting from this, the focus of the present paper is on the "regional dimension" of well-being. We propose a methodology based on the Empirical Best Linear Unbiased Predictor (EBLUP) with an extension to the spatial dimension (SEBLUP); moreover, we compare this small area technique with the cumulation method. The application is conducted on EU-SILC data from Austria and Spain. Results show that, in general, estimates computed with the cumulation method have smaller standard errors than those computed with EBLUP or SEBLUP. The gain of pooling SILC data over three years is, therefore, relevant, and may lead researchers to prefer this method.

  15. Estimation of Crop Coefficient of Corn (Kccorn) under Climate Change Scenarios Using Data Mining Technique

    Directory of Open Access Journals (Sweden)

    Kampanad Bhaktikul

    2012-01-01

    Full Text Available The main objectives of this study are to determine the crop coefficient of corn (Kccorn) using a data mining technique under climate change scenarios, and to develop guidelines for future water management based on those scenarios. Variables including date, maximum temperature, minimum temperature, precipitation, humidity, wind speed, and solar radiation from seven meteorological stations during 1991 to 2000 were used. The Cross-Industry Standard Process for Data Mining (CRISP-DM) was applied for data collection and analysis. The procedure comprises investigation of input data, model set-up using Artificial Neural Networks (ANNs), model evaluation, and finally estimation of the Kccorn. Three climate change scenarios of carbon dioxide (CO2) concentration level were set: 360 ppm, 540 ppm, and 720 ppm. The results indicated that the best input layer - hidden layer - output layer configuration was 7-13-1 nodes. The correlation coefficient of the model was 0.99. The predicted Kccorn revealed that the evapotranspiration (ETcorn) pattern will change significantly with CO2 concentration level. From the model predictions, ETcorn will decrease by 3.34% when CO2 increases from 360 ppm to 540 ppm. For a doubling of the CO2 concentration from 360 ppm to 720 ppm, ETcorn will increase by 16.13%. Guidelines for future water management to cope with climate change are suggested.

  16. Estimation of coronary wave intensity analysis using noninvasive techniques and its application to exercise physiology.

    Science.gov (United States)

    Broyd, Christopher J; Nijjer, Sukhjinder; Sen, Sayan; Petraco, Ricardo; Jones, Siana; Al-Lamee, Rasha; Foin, Nicolas; Al-Bustami, Mahmud; Sethi, Amarjit; Kaprielian, Raffi; Ramrakha, Punit; Khan, Masood; Malik, Iqbal S; Francis, Darrel P; Parker, Kim; Hughes, Alun D; Mikhail, Ghada W; Mayet, Jamil; Davies, Justin E

    2016-03-01

    Wave intensity analysis (WIA) has found particular applicability in the coronary circulation where it can quantify traveling waves that accelerate and decelerate blood flow. The most important wave for the regulation of flow is the backward-traveling decompression wave (BDW). Coronary WIA has hitherto always been calculated from invasive measures of pressure and flow. However, recently it has become feasible to obtain estimates of these waveforms noninvasively. In this study we set out to assess the agreement between invasive and noninvasive coronary WIA at rest and measure the effect of exercise. Twenty-two patients (mean age 60) with unobstructed coronaries underwent invasive WIA in the left anterior descending artery (LAD). Immediately afterwards, noninvasive LAD flow and pressure were recorded and WIA calculated from pulsed-wave Doppler coronary flow velocity and central blood pressure waveforms measured using a cuff-based technique. Nine of these patients underwent noninvasive coronary WIA assessment during exercise. A pattern of six waves were observed in both modalities. The BDW was similar between invasive and noninvasive measures [peak: 14.9 ± 7.8 vs. -13.8 ± 7.1 × 10^4 W·m^-2·s^-2, concordance correlation coefficient (CCC): 0.73, P Exercise increased the BDW: at maximum exercise peak BDW was -47.0 ± 29.5 × 10^4 W·m^-2·s^-2 (P Physiological Society.

  17. Fundamental length

    International Nuclear Information System (INIS)

    Pradhan, T.

    1975-01-01

    The concept of a fundamental length was first put forward by Heisenberg on purely dimensional grounds. From a study of the observed masses of the elementary particles known at that time, it was surmised that this length should be of the order of magnitude of 10^-13 cm. It was Heisenberg's belief that the introduction of such a fundamental length would eliminate the divergence difficulties from relativistic quantum field theory by cutting off the high-energy regions of the 'proper fields'. Since the divergence difficulties arise primarily from an infinite number of degrees of freedom, one simple remedy would be the introduction of a principle that limits these degrees of freedom by removing the effectiveness of waves with a frequency exceeding a certain limit, without destroying the relativistic invariance of the theory. The principle can be stated as follows: it is in principle impossible to devise an experiment of any kind that permits a distinction between the positions of two particles at rest when the distance between them is below a certain limit. A more elegant way of introducing a fundamental length into quantum theory is through commutation relations between two position operators. In quantum field theory, such as quantum electrodynamics, it can be introduced through the commutation relation between two interpolating photon fields (vector potentials). (K.B.)

  18. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Barut, Murat, E-mail: muratbarut27@yahoo.co [Nigde University, Department of Electrical and Electronics Engineering, 51245 Nigde (Turkey)

    2010-10-15

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless high-efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimates possible with a conventional single EKF algorithm. To this end, the introduced technique utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models, based on stator resistance and rotor resistance estimation respectively. This differs from earlier approaches, which require two separate EKF algorithms operating in a switching or braided manner, and gives the proposed scheme a clear advantage over previous EKF schemes. The proposed technique performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque (including the viscous friction term) together with the rotor and stator resistances. It is combined with the speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high and zero speed ranges, assuming that the measured stator phase currents and voltages are available. Even under these variations, the performance of the speed-sensorless direct vector control system built on the novel EKF based estimation technique is observed to be quite good.
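
The predict/update cycle underlying any such EKF scheme can be sketched generically. The code below is not the bi-input induction-motor model of the paper; it is a minimal single-step EKF in which the user supplies the nonlinear functions and their Jacobians. Augmenting the state vector with a slowly varying parameter (here a scalar "resistance" tracked through noisy measurements) illustrates how the same cycle performs joint state and parameter estimation.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    f, h: state-transition and measurement functions; F, H: their
    Jacobians evaluated at the current estimate; Q, R: process and
    measurement noise covariances."""
    # Predict
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R                # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_new)) - K @ Hk) @ P_pred
    return x_new, P_new
```

For a nearly constant parameter, a small process noise Q keeps the filter responsive to slow drift while averaging out measurement noise.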

  19. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emission. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Monitoring and controlling the quality of air in the urban environment is therefore very important for public health. However, air quality prediction is a highly complex and non-linear process, and several attributes usually have to be considered. Artificial intelligent (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an adaptive neuro-fuzzy inference system (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem in Jeddah city due to ozone levels. The ozone concentration level was used to predict the Air Quality (AQ) under the prevailing atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled from factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. An ANFIS model was developed to observe the ozone concentration level, and its performance was assessed with test data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed with fuzzy quality charts using quality specifications and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone levels and a reliable approach to produce genuine outcomes.

  20. Hyperspectral remote sensing techniques for grass nutrient estimations in savannah ecosystems

    CSIR Research Space (South Africa)

    Ramoelo, Abel

    2010-03-01

    Full Text Available at various scales such as local, regional and global scale. Traditional field techniques to measure grass nutrient concentration have been reported to be laborious and time-consuming. Remote sensing techniques provide an opportunity to map grass nutrient...

  1. Uranium solution mining cost estimating technique: means for rapid comparative analysis of deposits

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    Twelve graphs provide a technique for determining relative cost ranges for uranium solution mining projects. The use of the technique can provide a consistent framework for rapid comparative analysis of various properties of mining situations. The technique is also useful to determine the sensitivities of cost figures to incremental changes in mining factors or deposit characteristics.

  2. TU-H-207A-09: An Automated Technique for Estimating Patient-Specific Regional Imparted Energy and Dose From TCM CT Exams Across 13 Protocols

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, J; Tian, X; Segars, P [Duke University, Durham, NC (United States); Boone, J [UC Davis Medical Center, Sacramento, CA (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms were used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, DLP of the exam, and knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.

  3. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    Science.gov (United States)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method for estimating multiple DOAs from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also assumed temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed precisely for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Most prominently, the new AF-based method succeeds in accurately estimating the DOAs from short data snapshots, and even from a single snapshot, outperforming the state-of-the-art techniques by far in both DOA estimation accuracy and computational cost.
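
The single-snapshot case of the annihilating-filter idea can be sketched directly: a noiseless ULA snapshot is a sum of complex exponentials, the order-K annihilating filter lies in the null space of a small Hankel system built from the snapshot, and the filter's roots carry the spatial frequencies of the sources. This is a minimal Prony-style numpy sketch of the principle, not the paper's estimator; half-wavelength spacing and a noiseless snapshot are assumptions.

```python
import numpy as np

def doa_annihilating_filter(y, K, d_over_lambda=0.5):
    """Estimate K DOAs (degrees) from a single ULA snapshot y via the
    annihilating filter: find h with sum_j h[j] * y[m + j] = 0 for all m,
    then read the spatial frequencies off the roots of h."""
    M = len(y)
    # Hankel system whose null space contains the annihilating filter
    A = np.column_stack([y[i:M - K + i] for i in range(K + 1)])
    # Filter = right singular vector of the smallest singular value
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()
    roots = np.roots(h[::-1])                    # roots exp(j*2*pi*f_k)
    freqs = np.angle(roots) / (2 * np.pi)        # spatial frequencies
    return np.degrees(np.arcsin(freqs / d_over_lambda))
```

With noise, the same SVD step acts as a least-squares fit of the filter, which is one reason the approach remains usable at moderate SNR.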

  4. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, M.; Manos, G. C.

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to ...... of freedom system loaded by white noise, estimating the coefficient of restitution as explained, and comparing the estimates with the value used in the simulations. Several estimates for the coefficient of restitution are considered, and reasonable results are achieved....

  5. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, Milton; Manos, George C.

    1994-01-01

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to ...... of freedom system loaded by white noise, estimating the coefficient of restitution as explained, and comparing the estimates with the value used in the simulations. Several estimates for the coefficient of restitution are considered, and reasonable results are achieved....
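
The random decrement step common to both of the records above can be sketched simply: collect every response segment that starts at a chosen trigger condition (here a level up-crossing) and average them; under broadband random loading the average approximates a free decay, from which an average damping parameter such as the coefficient of restitution can subsequently be fitted. A minimal numpy sketch follows; the trigger choice and segment length are assumptions, and the decay-fitting step for the rocking system is not shown.

```python
import numpy as np

def random_decrement(x, trigger_level, seg_len):
    """Random decrement signature: average of all response segments
    beginning where the (zero-mean) signal up-crosses trigger_level."""
    idx = np.where((x[:-1] < trigger_level) & (x[1:] >= trigger_level))[0] + 1
    idx = idx[idx + seg_len <= len(x)]           # drop incomplete segments
    if idx.size == 0:
        raise ValueError("no trigger points found")
    return np.mean([x[i:i + seg_len] for i in idx], axis=0)
```

Applied to the simulated response of a lightly damped oscillator driven by white noise, the signature starts at the trigger level and decays like the free response.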

  6. A Broadband Microwave Radiometer Technique at X-band for Rain and Drop Size Distribution Estimation

    Science.gov (United States)

    Meneghini, R.

    2005-01-01

    Radiometric brightness temperatures below about 12 GHz provide accurate estimates of path attenuation through precipitation and cloud water. Multiple brightness temperature measurements at X-band frequencies can be used to estimate rainfall rate and parameters of the drop size distribution once correction for cloud water attenuation is made. Employing a stratiform storm model, calculations of the brightness temperatures at 9.5, 10 and 12 GHz are used to simulate estimates of path-averaged median mass diameter, number concentration and rainfall rate. The results indicate that reasonably accurate estimates of rainfall rate and information on the drop size distribution can be derived over ocean under low to moderate wind speed conditions.

  7. Application of Ambient Analysis Techniques for the Estimation of Electromechanical Oscillations from Measured PMU Data in Four Different Power Systems

    DEFF Research Database (Denmark)

    Vanfretti, Luigi; Dosiek, Luke; Pierre, John W.

    2011-01-01

    for electromechanical mode estimation in different power systems. We apply these techniques to phasor measurement unit (PMU) data from stored archives of several hours originating from the US Eastern Interconnection (EI), the Western Electricity Coordinating Council (WECC), the Nordic Power System, and time...

  8. Estimating rumen degradability of forages from semi-natural grasslands, using nylon bag and gas production techniques

    NARCIS (Netherlands)

    Bruinenberg, M.H.; Gelder, van A.H.; Gonzalez Perez, P.; Hindle, V.A.; Cone, J.W.

    2004-01-01

    To determine the ruminal digestion of forages from extensively managed semi-natural grasslands, degradation characteristics and kinetics of silages of three different forages in the rumen of lactating dairy cows were estimated in vitro using the gas production technique (GPT), and in situ using the

  9. Evaluation of Computational Techniques for Parameter Estimation and Uncertainty Analysis of Comprehensive Watershed Models

    Science.gov (United States)

    Yen, H.; Arabi, M.; Records, R.

    2012-12-01

    The structural complexity of comprehensive watershed models continues to increase in order to incorporate inputs at finer spatial and temporal resolutions and simulate a larger number of hydrologic and water quality responses. Hence, computational methods for parameter estimation and uncertainty analysis of complex models have gained increasing popularity. This study aims to evaluate the performance and applicability of a range of algorithms, from computationally frugal approaches to formal implementations of Bayesian statistics using Markov Chain Monte Carlo (MCMC) techniques. The evaluation procedure hinges on the appraisal of (i) the quality of the final parameter solution in terms of the minimum value of the objective function corresponding to weighted errors; (ii) the algorithmic efficiency in reaching the final solution; (iii) the marginal posterior distributions of model parameters; (iv) the overall identifiability of the model structure; and (v) the effectiveness in drawing samples that can be classified as behavior-giving solutions. The proposed procedure recognizes an important and often neglected issue in watershed modeling: solutions with minimum objective function values may not necessarily reflect the behavior of the system. The general behavior of a system is often characterized by the analysts according to the goals of the study using various error statistics such as percent bias or the Nash-Sutcliffe efficiency coefficient. Two case studies are carried out to examine the efficiency and effectiveness of four Bayesian approaches including Metropolis-Hastings sampling (MHA), Gibbs sampling (GSA), uniform covering by probabilistic rejection (UCPR), and differential evolution adaptive Metropolis (DREAM); a greedy optimization algorithm dubbed dynamically dimensioned search (DDS); and shuffle complex evolution (SCE-UA), a widely implemented evolutionary heuristic optimization algorithm.
The Soil and Water Assessment Tool (SWAT) is used to simulate hydrologic and
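
The simplest of the Bayesian samplers compared above, random-walk Metropolis-Hastings, can be sketched generically: given only the unnormalized log posterior of the model parameters, it proposes perturbed parameter sets and accepts or rejects them by the posterior ratio. DREAM and the other schemes refine the proposal mechanism, but share this accept/reject core. A minimal numpy sketch (the Gaussian proposal and step size are assumptions):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples, step, seed=0):
    """Random-walk Metropolis-Hastings: draws parameter samples from a
    posterior given only its unnormalized log density."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```

In a watershed application, `log_post` would wrap a model run (e.g. weighted errors against observations) plus the parameter priors; here it is checked on a known density.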

  10. AFSC/RACE/GAP/Palsson: Gulf of Alaska and Aleutian Islands Biennial Bottom Trawl Survey estimates of catch per unit effort, biomass, population at length, and associated tables

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GOA/AI Bottom Trawl Estimate database contains abundance estimates for the Alaska Biennial Bottom Trawl Surveys conducted in the Gulf of Alaska and the Aleutian...

  11. A Comparison of Regression Techniques for Estimation of Above-Ground Winter Wheat Biomass Using Near-Surface Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jibo Yue

    2018-01-01

    Full Text Available Above-ground biomass (AGB) provides a vital link between solar energy consumption and yield, so its correct estimation is crucial to accurately monitor crop growth and predict yield. In this work, we estimate AGB by using 54 vegetation indexes (e.g., Normalized Difference Vegetation Index, Soil-Adjusted Vegetation Index) and eight statistical regression techniques: artificial neural network (ANN), multivariable linear regression (MLR), decision-tree regression (DT), boosted binary regression tree (BBRT), partial least squares regression (PLSR), random forest regression (RF), support vector machine regression (SVM), and principal component regression (PCR), which are used to analyze hyperspectral data acquired by using a field spectrophotometer. The vegetation indexes (VIs) determined from the spectra were first used to train the regression techniques for modeling and validation to select the best VI input, and then summed with white Gaussian noise to study how remote sensing errors affect the regression techniques. Next, the VIs were divided into groups of different sizes by using various sampling methods for modeling and validation to test the stability of the techniques. Finally, the AGB was estimated by using leave-one-out cross validation with these techniques. The results of the study demonstrate that, of the eight techniques investigated, PLSR and MLR perform best in terms of stability and are most suitable when high-accuracy and stable estimates are required from relatively few samples. In addition, RF is extremely robust against noise and is best suited to deal with repeated observations involving remote-sensing data (i.e., data affected by atmosphere, clouds, observation times, and/or sensor noise). Finally, the leave-one-out cross-validation method indicates that PLSR provides the highest accuracy (R2 = 0.89, RMSE = 1.20 t/ha, MAE = 0.90 t/ha, NRMSE = 0.07, CV(RMSE) = 0.18); thus, PLSR is best suited for works requiring high
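
The leave-one-out cross validation used for the final accuracy comparison can be sketched with one of the eight techniques, multivariable linear regression (MLR); PLSR would follow the same loop with a different fitting step. Each sample's AGB is predicted by a model fitted to all remaining samples, and R2 and RMSE are computed on the held-out predictions. A minimal numpy illustration on synthetic data, not the paper's spectra:

```python
import numpy as np

def loocv_r2_rmse(X, y):
    """Leave-one-out cross-validation of a multivariable linear
    regression model: each sample is predicted by a model fitted to
    all remaining samples."""
    n = len(y)
    preds = np.empty(n)
    Xd = np.column_stack([np.ones(n), X])    # add intercept column
    for i in range(n):
        mask = np.arange(n) != i             # leave sample i out
        beta, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
        preds[i] = Xd[i] @ beta
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    r2 = 1.0 - np.sum((preds - y) ** 2) / np.sum((y - y.mean()) ** 2)
    return r2, rmse
```

Because every fold's test point is unseen by its model, the reported R2 and RMSE are honest out-of-sample figures, which is why the paper uses this scheme for the final ranking.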

  12. Porosity and hydraulic conductivity estimation of the basaltic aquifer in Southern Syria by using nuclear and electrical well logging techniques

    Science.gov (United States)

    Asfahani, Jamal

    2017-08-01

    An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. This method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by this technique appears reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained from the pumping test carried out at the Kodana well. The proposed well logging methodology seems promising and could be practiced in basaltic environments for the estimation of the hydraulic conductivity parameter. However, more detailed research is still required to make this technique fully reliable in basaltic environments.

  13. Lake Metabolism: Comparison of Lake Metabolic Rates Estimated from a Diel CO2- and the Common Diel O2-Technique

    Science.gov (United States)

    Peeters, Frank; Atamanchuk, Dariia; Tengberg, Anders; Encinas-Fernández, Jorge; Hofmann, Hilmar

    2016-01-01

    Lake metabolism is a key factor for the understanding of turnover of energy and of organic and inorganic matter in lake ecosystems. Long-term time series on metabolic rates are commonly estimated from diel changes in dissolved oxygen. Here we present long-term data on metabolic rates based on diel changes in total dissolved inorganic carbon (DIC) utilizing an open-water diel CO2-technique. Metabolic rates estimated with this technique and the traditional diel O2-technique agree well in alkaline Lake Illmensee (pH of ~8.5), although the diel changes in molar CO2 concentrations are much smaller than those of the molar O2 concentrations. The open-water diel CO2- and diel O2-techniques provide independent measures of lake metabolic rates that differ in their sensitivity to transport processes. Hence, the combination of both techniques can help to constrain uncertainties arising from assumptions on vertical fluxes due to gas exchange and turbulent diffusion. This is particularly important for estimates of lake respiration rates because these are much more sensitive to assumptions on gradients in vertical fluxes of O2 or DIC than estimates of lake gross primary production. Our data suggest that it can be advantageous to estimate respiration rates assuming negligible gradients in vertical fluxes rather than including gas exchange with the atmosphere but neglecting vertical mixing in the water column. During two months in summer the average lake net production was close to zero suggesting at most slightly autotrophic conditions. However, the lake emitted O2 and CO2 during the entire time period suggesting that O2 and CO2 emissions from lakes can be decoupled from the metabolism in the near surface layer. PMID:28002477
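
The open-water diel logic shared by the O2- and CO2-techniques can be sketched in its simplest form: the night-time rate of change gives respiration, and the daytime rate of change plus respiration, integrated over daylight, gives gross primary production. The numpy sketch below deliberately neglects gas exchange and vertical mixing, the very fluxes whose treatment the abstract discusses, so it is an illustration of the accounting only; time is in hours and concentrations in arbitrary molar units.

```python
import numpy as np

def diel_o2_metabolism(t_hours, o2, daylight):
    """Estimate ecosystem respiration (ER) and daily gross primary
    production (GPP) from a diel O2 record, assuming negligible gas
    exchange and vertical mixing. daylight marks daytime samples."""
    dt = np.diff(t_hours)
    rate = np.diff(o2) / dt                    # net O2 rate per interval
    day = daylight[:-1]                        # classify by left endpoint
    er = -rate[~day].mean()                    # respiration (positive)
    gpp = ((rate[day] + er) * dt[day]).sum()   # GPP = (NEP_day + ER) dt
    return er, gpp
```

On a synthetic diel curve with constant respiration and a sinusoidal photosynthesis pulse, the estimator recovers both quantities.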

  14. Lake Metabolism: Comparison of Lake Metabolic Rates Estimated from a Diel CO2- and the Common Diel O2-Technique.

    Directory of Open Access Journals (Sweden)

    Frank Peeters

    Full Text Available Lake metabolism is a key factor for the understanding of turnover of energy and of organic and inorganic matter in lake ecosystems. Long-term time series on metabolic rates are commonly estimated from diel changes in dissolved oxygen. Here we present long-term data on metabolic rates based on diel changes in total dissolved inorganic carbon (DIC), utilizing an open-water diel CO2-technique. Metabolic rates estimated with this technique and the traditional diel O2-technique agree well in alkaline Lake Illmensee (pH of ~8.5), although the diel changes in molar CO2 concentrations are much smaller than those of the molar O2 concentrations. The open-water diel CO2- and diel O2-techniques provide independent measures of lake metabolic rates that differ in their sensitivity to transport processes. Hence, the combination of both techniques can help to constrain uncertainties arising from assumptions on vertical fluxes due to gas exchange and turbulent diffusion. This is particularly important for estimates of lake respiration rates because these are much more sensitive to assumptions on gradients in vertical fluxes of O2 or DIC than estimates of lake gross primary production. Our data suggest that it can be advantageous to estimate respiration rates assuming negligible gradients in vertical fluxes rather than including gas exchange with the atmosphere but neglecting vertical mixing in the water column. During two months in summer the average lake net production was close to zero suggesting at most slightly autotrophic conditions. However, the lake emitted O2 and CO2 during the entire time period suggesting that O2 and CO2 emissions from lakes can be decoupled from the metabolism in the near surface layer.

  15. Estimating Global Seafloor Total Organic Carbon Using a Machine Learning Technique and Its Relevance to Methane Hydrates

    Science.gov (United States)

    Lee, T. R.; Wood, W. T.; Dale, J.

    2017-12-01

    Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
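
The KNN interpolation at the core of this approach can be sketched in a few lines: each unmeasured grid cell is predicted as the mean TOC of its k most similar training points in predictor space. A minimal numpy sketch; uniform neighbor weighting and Euclidean distance are assumptions, and the study's predictor selection, ten-fold validation, and uncertainty estimation are not shown.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """k-nearest-neighbors regression: each query point (e.g. a grid
    cell with no TOC measurement) is predicted as the mean of the k
    training points nearest to it in predictor space."""
    preds = np.empty(len(X_query))
    for i, q in enumerate(X_query):
        dist = np.linalg.norm(X_train - q, axis=1)   # Euclidean distance
        nearest = np.argsort(dist)[:k]
        preds[i] = y_train[nearest].mean()
    return preds
```

Predictors (bathymetry, distance from coast, etc.) would normally be standardized first so no single variable dominates the distance; the check below uses already comparable synthetic predictors.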

  16. Determination of the length and position of the lower oesophageal sphincter (LOS) by correlation of external measurements with combined radiographic and manometric estimations in the cat

    International Nuclear Information System (INIS)

    Hashim, M.A.; Waterman, A.E.

    1992-01-01

    Fifty DSH cats were studied radiographically and a highly significant linear correlation was found between the length of the oesophagus measured to the diaphragmatic line on the radiographs and the externally measured distance from the lower jaw incisor teeth to the anterior border of the head of the 10th rib. A subsequent manometric study utilizing this correlation in 40 cats suggests that the functional lower oesophageal sphincter (LOS) is situated almost at the level of the diaphragm in the cat. Significant differences were found between the length of the LOS in cats anaesthetized with ketamine compared to alphaxalone-alphadolone or xylazine-ketamine-atropine. The mean length of the LOS was 1.42 +/- 0.3 cm. The findings of this study indicate that external measurements can be used to position catheters for accurate oesophageal manometry in the cat

  17. Estimating forest attribute parameters for small areas using nearest neighbors techniques

    Science.gov (United States)

    Ronald E. McRoberts

    2012-01-01

    Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...

  18. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Science.gov (United States)

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...

  19. Application of optimal estimation techniques to FFTF decay heat removal analysis

    International Nuclear Information System (INIS)

    Nutt, W.T.; Additon, S.L.; Parziale, E.A.

    1979-01-01

    The verification and adjustment of plant models for decay heat removal analysis using a mix of engineering judgment and formal techniques from control theory are discussed. The formal techniques facilitate dealing with typical test data which are noisy, redundant and do not measure all of the plant model state variables directly. Two pretest examples are presented. 5 refs

  20. Validity of Using Two Numerical Analysis Techniques To Estimate Item and Ability Parameters via MMLE: Gauss-Hermite Quadrature Formula and Mislevy's Histogram Solution.

    Science.gov (United States)

    Seong, Tae-Je

    The similarity of item and ability parameter estimations was investigated using two numerical analysis techniques via marginal maximum likelihood estimation (MMLE) with a large simulated data set (n=1,000 examinees) and changing the number of quadrature points. MMLE estimation uses a numerical analysis technique to integrate examinees' abilities…

  1. Accuracy study of time delay estimation techniques in laser pulse ranger

    Science.gov (United States)

    Yang, Jinliang; Wang, Xingshu; Gao, Yang

    2013-12-01

    Time-of-flight measurement using laser pulses is an alternative method in laser range finding and laser scanning: the echo pulses originating from backscattering of the emitted laser pulse on targets are detected by an optical receiver, and the distance of the target is obtained by measuring the round-trip time. Time-of-arrival estimation may be based on schemes such as the constant-fraction discriminator (CFD) in analog electronics. In contrast, when sampled signals are available, time delay estimation may be based on schemes such as the direct cross-correlation function (CCF) and the average square difference function (ASDF) in digital electronics; the constant-fraction discriminator can also be used in digital electronics. All three methods are analyzed and compared with each other. It is shown that estimators based on CCF and ASDF are more precise than the conventional CFD-based estimator.
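The two sampled-signal estimators compared in this abstract (CCF and ASDF) can be sketched on a synthetic echo. The Gaussian pulse shape and the 15-sample delay are illustrative assumptions.

```python
import numpy as np

def delay_ccf(x, y):
    """Estimate d (in samples), with y[n] ~= x[n-d], from the CCF peak."""
    c = np.correlate(x, y, mode="full")
    lags = np.arange(-(len(y) - 1), len(x))
    return -lags[np.argmax(c)]

def delay_asdf(x, y, max_lag):
    """Estimate the delay as the lag minimising the average square
    difference over the overlapping samples."""
    n = len(x)
    best_val, best_d = np.inf, 0
    for d in range(-max_lag, max_lag + 1):
        if d >= 0:
            diff = x[: n - d] - y[d:n]
        else:
            diff = x[-d:] - y[: n + d]
        val = np.mean(diff ** 2)
        if val < best_val:
            best_val, best_d = val, d
    return best_d

# synthetic echo: a Gaussian pulse delayed by 15 samples
t = np.arange(200)
x = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
y = np.exp(-0.5 * ((t - 95) / 5.0) ** 2)
# both estimators recover the 15-sample delay on this noise-free echo
```

With noise added, sub-sample precision would additionally require interpolation around the correlation peak, which is beyond this sketch.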

  2. Estimation of VOC emissions from produced-water treatment ponds in Uintah Basin oil and gas field using modeling techniques

    Science.gov (United States)

    Tran, H.; Mansfield, M. L.; Lyman, S. N.; O'Neil, T.; Jones, C. P.

    2015-12-01

    Emissions from produced-water treatment ponds are poorly characterized sources in oil and gas emission inventories that play a critical role in studying elevated winter ozone events in the Uintah Basin, Utah, U.S. Information gaps include un-quantified amounts and compositions of gases emitted from these facilities. The emitted gases are often known as volatile organic compounds (VOCs) which, beside nitrogen oxides (NOX), are major precursors for ozone formation in the near-surface layer. Field measurement campaigns using the flux-chamber technique have been performed to measure VOC emissions from a limited number of produced water ponds in the Uintah Basin of eastern Utah. Although the flux chamber provides accurate measurements at the point of sampling, it covers just a limited area of the ponds and is prone to altering environmental conditions (e.g., temperature, pressure). This fact raises the need to validate flux chamber measurements. In this study, we apply an inverse-dispersion modeling technique with evacuated canister sampling to validate the flux-chamber measurements. This modeling technique applies an initial and arbitrary emission rate to estimate pollutant concentrations at pre-defined receptors, and adjusts the emission rate until the estimated pollutant concentrations approximates measured concentrations at the receptors. The derived emission rates are then compared with flux-chamber measurements and differences are analyzed. Additionally, we investigate the applicability of the WATER9 wastewater emission model for the estimation of VOC emissions from produced-water ponds in the Uintah Basin. WATER9 estimates the emission of each gas based on properties of the gas, its concentration in the waste water, and the characteristics of the influent and treatment units. Results of VOC emission estimations using inverse-dispersion and WATER9 modeling techniques will be reported.
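Because receptor concentrations predicted by a dispersion model scale linearly with the source emission rate, the iterative adjustment described in this abstract reduces, in the simplest single-source case, to a one-step least-squares rescaling. The sketch below assumes a single source and arbitrary illustrative numbers.

```python
import numpy as np

def invert_emission(q_init, c_modeled, c_measured):
    """One-step inverse-dispersion estimate: receptor concentrations are
    linear in the source rate Q, so rescale the initial guess by the
    least-squares ratio of measured to modelled concentrations."""
    cm = np.asarray(c_modeled, float)
    co = np.asarray(c_measured, float)
    return q_init * np.dot(cm, co) / np.dot(cm, cm)

# initial guess of 1.0 kg/h predicts [2, 4, 6] ppb at three receptors;
# canister measurements show twice those levels -> Q ~ 2.0 kg/h
q_est = invert_emission(1.0, [2.0, 4.0, 6.0], [4.0, 8.0, 12.0])
```

In practice the dispersion model itself (stability class, wind field) dominates the uncertainty, so the adjustment is usually iterated across meteorological realizations rather than applied once.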

  3. Comparison of methane emission estimates from multiple measurement techniques at natural gas production pads

    Directory of Open Access Journals (Sweden)

    Clay Samuel Bell

    2017-12-01

    Full Text Available This study presents the results of a campaign that estimated methane emissions at 268 gas production facilities in the Fayetteville shale gas play using onsite measurements (261 facilities) and two downwind methods – the dual tracer flux ratio method (Tracer Facility Estimate – TFE; 17 facilities) and the EPA Other Test Method 33a (OTM33A Facility Estimate – OFE; 50 facilities). A study onsite estimate (SOE) for each facility was developed by combining direct measurements and simulation of unmeasured emission sources, using operator activity data and emission data from literature. The SOE spans 0–403 kg/h, and simulated methane emissions from liquid unloadings account for 88% of total emissions estimated by the SOE, with 76% (95% CI [51%–92%]) contributed by liquid unloading at two facilities. TFE and SOE show overlapping 95% CI between individual estimates at 15 of 16 (94%) facilities where the measurements were paired, while OFE and SOE show overlapping 95% CI between individual estimates at 28 of 43 (65%) facilities. However, variance-weighted least-squares (VWLS) regressions performed on sets of paired estimates indicate statistically significant differences between methods. The SOE represents a lower bound of emissions at facilities where onsite direct measurements of continuously emitting sources are the primary contributor to the SOE, a sub-selection of facilities which minimizes expected inter-method differences for intermittent pneumatic controllers and the impact of episodically-emitting unloadings. At 9 such facilities, VWLS indicates that TFE estimates systematically higher emissions than SOE (TFE-to-SOE ratio = 1.6, 95% CI [1.2 to 2.1]). At 20 such facilities, VWLS indicates that OFE estimates systematically lower emissions than SOE (OFE-to-SOE ratio of 0.41 [0.26 to 0.90]). Given that SOE at these facilities is a lower limit on emissions, these results indicate that OFE is likely a less accurate method than SOE or TFE for this type
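A variance-weighted least-squares regression of the kind used for the paired method comparisons can be sketched as a weighted fit through the origin. The inverse-variance weighting and the through-origin form are assumptions for illustration; the study's exact regression specification may differ, and the numbers are made up.

```python
import numpy as np

def vwls_slope(x, y, var_y):
    """Variance-weighted least-squares slope of y on x through the
    origin, with weights equal to the inverse variance of each y."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w = 1.0 / np.asarray(var_y, float)
    slope = np.sum(w * x * y) / np.sum(w * x * x)
    se = np.sqrt(1.0 / np.sum(w * x * x))  # standard error of the slope
    return slope, se

# paired facility estimates (e.g. SOE on x, TFE on y; illustrative)
slope, se = vwls_slope([1.0, 2.0, 3.0], [1.6, 3.2, 4.8], [0.1, 0.2, 0.3])
```

A slope confidence interval excluding 1 is what the abstract reports as a statistically significant inter-method difference.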

  4. A Semester-Long Project for Teaching Basic Techniques in Molecular Biology Such as Restriction Fragment Length Polymorphism Analysis to Undergraduate and Graduate Students

    OpenAIRE

    DiBartolomeis, Susan M.

    2011-01-01

    Several reports on science education suggest that students at all levels learn better if they are immersed in a project that is long term, yielding results that require analysis and interpretation. I describe a 12-wk laboratory project suitable for upper-level undergraduates and first-year graduate students, in which the students molecularly locate and map a gene from Drosophila melanogaster called dusky and one of dusky's mutant alleles. The mapping strategy uses restriction fragment length ...

  5. Applying a particle filtering technique for canola crop growth stage estimation in Canada

    Science.gov (United States)

    Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi

    2017-10-01

    Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.
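A bootstrap particle filter of the kind used as the estimator in this framework can be sketched for a scalar growth-stage state. The linear growth model and the direct noisy observation of the stage are stand-in assumptions; the paper's state model and SAR-derived observation operator are far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500,
                    process_sd=0.05, obs_sd=0.2):
    """Bootstrap particle filter for a scalar crop growth-stage
    indicator: predict with a monotone growth model, weight particles
    by the observation likelihood, then resample."""
    particles = rng.normal(0.0, 0.1, n_particles)
    estimates = []
    for z in observations:
        # predict: stage advances by a small increment plus noise
        particles = particles + 0.1 + rng.normal(0.0, process_sd, n_particles)
        # update: Gaussian likelihood of the observation given each particle
        w = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)
        w /= w.sum()
        # resample (multinomial) and record the posterior mean
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

true_stage = 0.1 * np.arange(1, 21)            # linear growth, 20 steps
obs = true_stage + rng.normal(0.0, 0.2, 20)    # noisy 'SAR' observations
est = particle_filter(obs)
```

The filtered track follows the underlying growth trajectory much more smoothly than the raw observations, which is the practical benefit of the dynamic estimation over per-date inversion.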

  6. Comparison of the in situ, in vitro and Enzymatic (Cellulase) Techniques for Digestibility Estimation of Forages in Sheep

    OpenAIRE

    Torres G., Giovanna; Arbaiza F., Teresa; Carcelén C., Fernando; Lucas A., Orlando

    2012-01-01

    The objective of the study was to compare the efficiency of the in situ, in vitro and enzymatic (cellulase) techniques in estimating the digestibility of forage with different nutritional quality in sheep. Samples of three qualities of forage were collected: high (rye grass of 2-4 weeks), medium (rye grass of 8 weeks and alfalfa hay), and low (oat straw). The samples were dried, ground and passed through a 1 mm sieve for the in vitro and cellulase techniques and a 3 mm sieve for the in situ tech...

  7. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate...... the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean...... techniques in the prognostic evaluation of primary brain tumors....

  8. Experimental Nondestructive Test for Estimation of Buckling Load on Unstiffened Cylindrical Shells Using Vibration Correlation Technique

    Directory of Open Access Journals (Sweden)

    Kaspars Kalnins

    2015-01-01

    Full Text Available Nondestructive methods to calculate the buckling load of imperfection-sensitive thin-walled structures, such as large-scale aerospace structures, are among the most important techniques for the evaluation of new structures and validation of numerical models. The vibration correlation technique (VCT) allows determining the buckling load for several types of structures without reaching the instability point, but this technique is still under development for thin-walled plates and shells. This paper presents and discusses an experimental verification of a novel approach using the vibration correlation technique for the prediction of realistic buckling loads of unstiffened cylindrical shells loaded under axial compression. Four different test structures were manufactured and loaded up to buckling: two composite laminated cylindrical shells and two stainless steel cylinders. In order to characterize a relationship with the applied load, the first natural frequency of vibration and the mode shape are measured during testing using a 3D laser scanner. The proposed vibration correlation technique allows one to predict the experimental buckling load with a very good approximation without actually reaching the instability point. Additional experimental tests and numerical models are currently under development to further validate the proposed approach for composite and metallic conical structures.
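The classical form of the vibration correlation technique can be sketched as follows: for column-like behaviour the square of the first natural frequency falls roughly linearly with axial load, (f/f0)^2 ≈ 1 - P/Pcr, so a linear fit of f^2 against P extrapolated to f^2 = 0 estimates the buckling load without loading to instability. Shell VCT, as in the paper, uses a modified second-order relation; the data below are illustrative, not from the tests.

```python
import numpy as np

# measured load steps and first natural frequencies (illustrative)
loads = np.array([0.0, 10.0, 20.0, 30.0, 40.0])    # kN
freqs = np.array([100.0, 94.9, 89.4, 83.7, 77.5])  # Hz

# linear fit of f^2 vs P; extrapolate to f^2 = 0 for the buckling load
slope, intercept = np.polyfit(loads, freqs ** 2, 1)
p_cr = -intercept / slope
```

Here the extrapolated `p_cr` is close to 100 kN even though the highest test load was 40 kN, which is the point of the nondestructive approach.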

  9. A virtually blind spectrum efficient channel estimation technique for mimo-ofdm system

    International Nuclear Information System (INIS)

    Ullah, M.O.

    2015-01-01

    Multiple-Input Multiple-Output antennas in conjunction with Orthogonal Frequency-Division Multiplexing is a dominant air interface for 4G and 5G cellular communication systems. Additionally, MIMO- OFDM based air interface is the foundation for latest wireless Local Area Networks, wireless Personal Area Networks, and digital multimedia broadcasting. Whether it is a single antenna or a multi-antenna OFDM system, accurate channel estimation is required for coherent reception. Training-based channel estimation methods require multiple pilot symbols and therefore waste a significant portion of channel bandwidth. This paper describes a virtually blind spectrum efficient channel estimation scheme for MIMO-OFDM systems which operates well below the Nyquist criterion. (author)

  10. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  11. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Diksha [Division of Imaging, Diagnostics and Software Reliability, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993 (United States); Sze, Christina; Bhandari, Harish; Nagarkar, Vivek [Radiation Monitoring Devices Inc., Watertown, MA (United States); Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics and Software Reliability, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993 (United States)

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets, and for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was ±5% with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  12. Estimating primary productivity of tropical oil palm in Malaysia using remote sensing technique and ancillary data

    Science.gov (United States)

    Kanniah, K. D.; Tan, K. P.; Cracknell, A. P.

    2014-10-01

    The amount of carbon sequestration by vegetation can be estimated using vegetation productivity. At present, there is a knowledge gap in oil palm net primary productivity (NPP) at a regional scale. Therefore, in this study NPP of oil palm trees in Peninsular Malaysia was estimated using a remote sensing based light use efficiency (LUE) model with inputs from local meteorological data, upscaled leaf area index/fractional photosynthetically active radiation (LAI/fPAR) derived using UK-DMC 2 satellite data and a constant maximum LUE value from the literature. NPP values estimated from the model were then compared and validated with NPP estimated using allometric equations developed by Corley and Tinker (2003), Henson (2003) and Syahrinudin (2005) with diameter at breast height, age and the height of the oil palm trees collected from three estates in Peninsular Malaysia. Results of this study show that oil palm NPP derived using a light use efficiency model increases with respect to the age of oil palm trees, and it stabilises after ten years. The mean value of oil palm NPP at 118 plots as derived using the LUE model is 968.72 g C m-2 year-1 and this is 188% - 273% higher than the NPP derived from the allometric equations. The estimated oil palm NPP of young oil palm trees is lower compared to mature oil palm trees (age of oil palm trees as estimated using the allometric equations. It was found in this study that LUE models could not capture NPP variation of oil palm trees if LAI/fPAR is used. On the other hand, tree height and DBH are found to be important variables that can capture changes in oil palm NPP as a function of age.
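The light-use-efficiency calculation behind such a model can be sketched as GPP = eps_max x fPAR x PAR, with NPP taken as a fixed fraction of GPP. The constant eps_max, the PAR flux, the fPAR value and the 50% respiration fraction below are all illustrative assumptions, not the study's inputs.

```python
# minimal light-use-efficiency (LUE) sketch, annual totals
eps_max = 1.0    # g C per MJ APAR, assumed constant maximum LUE
par = 8.0        # MJ m^-2 day^-1, incident photosynthetically active radiation
fpar = 0.85      # fraction of PAR absorbed (from an LAI/fPAR product)
days = 365

gpp = eps_max * fpar * par * days   # gross primary productivity, g C m^-2 yr^-1
npp = 0.5 * gpp                     # assume autotrophic respiration ~50% of GPP
```

Because fPAR saturates in closed canopies, a model of this form responds weakly to stand age, which is consistent with the abstract's finding that LAI/fPAR alone could not capture the age dependence of oil palm NPP.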

  13. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
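The Taylor linearization idea is easiest to see on the simplest nonlinear statistic the abstract mentions, a ratio of means. Under simple random sampling the linearized values z_i = (y_i - R x_i) / mean(x) behave like an ordinary sample mean, so their variance estimator serves as the variance of R. The sketch below is this textbook case, not the generalized influence-function method used for the Laeken indicators.

```python
import numpy as np

def ratio_variance_linearized(y, x):
    """Taylor-linearized variance of the ratio R = mean(y)/mean(x)
    under simple random sampling: form the linearized (influence)
    values z_i = (y_i - R*x_i)/mean(x), then use the usual variance
    of a sample mean of z."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    n = len(y)
    r = y.mean() / x.mean()
    z = (y - r * x) / x.mean()
    return r, z.var(ddof=1) / n

# synthetic data with a true ratio of 3 (illustrative)
rng = np.random.default_rng(2)
x = rng.uniform(1.0, 2.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.1, 200)
r, var_r = ratio_variance_linearized(y, x)
```

For complex designs the same z values are fed into the design-based variance estimator instead of the simple-random-sampling formula.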

  14. In utero exposure to diethylstilboestrol or 4-n-nonylphenol in rats: Number of Sertoli cells, diameter and length of seminiferous, tubules estimated by stereological methods

    DEFF Research Database (Denmark)

    Dalgaard, Majken; Pilegaard, Kirsten; Ladefoged, Ole

    2002-01-01

    The effects on testis weight and histopathology were studied in 11-day-old male Wistar rats after prenatal exposure to peanut oil (control), diethylstilboestrol 30 μg/kg b.wt./day, or 4-n-nonylphenol 75 mg/kg b.wt./day from gestational day 11 to 18. Additionally, the diameter and length of the seminiferous tubules were estimated by stereological methods...

  15. Technique Feature Analysis or Involvement Load Hypothesis: Estimating Their Predictive Power in Vocabulary Learning.

    Science.gov (United States)

    Gohar, Manoochehr Jafari; Rahmanian, Mahboubeh; Soleimani, Hassan

    2018-02-05

    Vocabulary learning has always been a great concern and has attracted the attention of many researchers. Among the vocabulary learning hypotheses, involvement load hypothesis and technique feature analysis have been proposed which attempt to bring some concepts like noticing, motivation, and generation into focus. In the current study, 90 high proficiency EFL students were assigned into three vocabulary tasks of sentence making, composition, and reading comprehension in order to examine the power of involvement load hypothesis and technique feature analysis frameworks in predicting vocabulary learning. It was unraveled that involvement load hypothesis cannot be a good predictor, and technique feature analysis was a good predictor in pretest to posttest score change and not in during-task activity. The implications of the results will be discussed in the light of preparing vocabulary tasks.

  16. Accurate Bond Lengths to Hydrogen Atoms from Single-Crystal X-ray Diffraction by Including Estimated Hydrogen ADPs and Comparison to Neutron and QM/MM Benchmarks

    OpenAIRE

    Dittrich, Birger; Lübben, Jens; Mebs, Stefan; Wagner, Armin; Luger, Peter; Flaig, Ralf

    2017-01-01

    Abstract Amino acid structures are an ideal test set for method-development studies in crystallography. High-resolution X-ray diffraction data for eight previously studied genetically encoding amino acids are provided, complemented by a non-standard amino acid. Structures were re-investigated to study a widely applicable treatment that permits accurate X-H bond lengths to hydrogen atoms to be obtained: this treatment combines refinement of positional hydrogen-atom parameters with aspherical s...

  17. Using Quantitative Data Analysis Techniques for Bankruptcy Risk Estimation for Corporations

    Directory of Open Access Journals (Sweden)

    Ştefan Daniel ARMEANU

    2012-01-01

    Full Text Available Diversification of methods and techniques for quantification and management of risk has led to the development of many mathematical models, a large part of which focused on measuring bankruptcy risk for businesses. In financial analysis there are many indicators which can be used to assess the risk of bankruptcy of enterprises, but to make an assessment the number of indicators must first be reduced, which can be achieved through principal component, cluster and discriminant analysis techniques. In this context, the article aims to build a scoring function used to identify bankrupt companies, using a sample of companies listed on the Bucharest Stock Exchange.

  18. Estimating fermentation characteristics and nutritive value of ensiled and dried pomegranate seeds for ruminants using in vitro gas production technique

    OpenAIRE

    A. Ahmadzadeh; R. Salamatdoustnobar; N. Maheri-Sis; M. Taher-Maddah

    2012-01-01

    The purpose of this study was to determine the chemical composition and to estimate the fermentation characteristics and nutritive value of ensiled and dried pomegranate seeds using the in vitro gas production technique. Samples were collected, mixed, processed (ensiled and dried) and incubated in vitro with rumen liquor taken from three fistulated Iranian native (Taleshi) steers at 2, 4, 6, 8, 12, 16, 24, 36, 48, 72 and 96 h. The results showed that ensiling led to a significant increase in gas pro...
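Gas production profiles from incubations at time points like those listed are commonly fitted with the exponential model p(t) = a + b(1 - exp(-c t)) attributed to Ørskov and McDonald; whether this study used that exact model is an assumption here. The sketch fits it by a grid search on c, solving a and b by least squares at each candidate, on synthetic data.

```python
import numpy as np

# incubation times from the abstract (hours) and a synthetic profile
# generated with a = 5, b = 45, c = 0.05 (illustrative parameters)
t = np.array([2, 4, 6, 8, 12, 16, 24, 36, 48, 72, 96], float)
p = 5.0 + 45.0 * (1.0 - np.exp(-0.05 * t))

# fit p(t) = a + b*(1 - exp(-c*t)): grid on c, linear solve for (a, b)
best_sse, best_params = np.inf, None
for c in np.linspace(0.01, 0.2, 400):
    X = np.column_stack([np.ones_like(t), 1.0 - np.exp(-c * t)])
    coef, *_ = np.linalg.lstsq(X, p, rcond=None)
    sse = np.sum((X @ coef - p) ** 2)
    if sse < best_sse:
        best_sse, best_params = sse, (coef[0], coef[1], c)
a_fit, b_fit, c_fit = best_params
```

On real data the fitted a + b (potential gas production) and c (fractional rate) are the quantities used to rank feeds.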

  19. A MATLAB program for estimation of unsaturated hydraulic soil parameters using an infiltrometer technique

    DEFF Research Database (Denmark)

    Mollerup, Mikkel; Hansen, Søren; Petersen, Carsten

    2008-01-01

    We combined an inverse routine for assessing the hydraulic soil parameters of the Campbell/Mualem model with the power series solution developed by Philip for describing one-dimensional vertical infiltration into a homogeneous soil. We based the estimation routine on a proposed measurement procedu...

  20. An indirect technique for estimating reliability of analog and mixed-signal systems during operational life

    NARCIS (Netherlands)

    Khan, M.A.; Kerkhoff, Hans G.

    2013-01-01

    Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an

  1. Evaluation of Shipboard Wave Estimation Techniques through Model-scale Experiments

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Galeazzi, Roberto; H. Brodtkorb, Astrid

    2016-01-01

    The paper continues a study on the wave buoy analogy that uses shipboard measurements to estimate sea states. In the present study, the wave buoy analogy is formulated directly in the time domain and relies only partly on wave-vessel response amplitude operators (RAOs), which is in contrast to al...

  2. Changing Urbania: Estimating Changes in Urban Land and Urban Population Using Refined Areal Interpolation Techniques

    Science.gov (United States)

    Zoraghein, H.; Leyk, S.; Balk, D.

    2017-12-01

    The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. The U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
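The simplest of the interpolation methods named above, areal weighting (AW), distributes a source zone's population to target zones in proportion to overlap area, assuming uniform density within the source zone. The zone names and numbers below are illustrative.

```python
def areal_weighting(source_pop, source_area, intersection_areas):
    """Areal-weighting (AW) interpolation: share a source zone's
    population out to target zones in proportion to the area of
    overlap, assuming uniform density within the source zone."""
    density = source_pop / source_area
    return {target: density * a for target, a in intersection_areas.items()}

# a 1990 source tract of 10 km^2 with 5000 people overlapping two
# 2010 target tracts by 6 and 4 km^2 respectively
shares = areal_weighting(5000.0, 10.0, {"tractA": 6.0, "tractB": 4.0})
```

Target Density Weighting and the spatial refinements discussed in the abstract replace the uniform-density assumption with ancillary information (target-zone densities, land cover, settlement layers) but keep this same redistribution structure.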

  3. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best
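Method (1), prior matching moments, can be sketched directly: match the mean and variance of the observed failure fractions p_i = k_i / n_i to those of a Beta(alpha, beta) prior and solve for the two parameters. The failure counts and demand counts below are illustrative, not the report's simulated data.

```python
import numpy as np

def beta_prior_matching_moments(failures, demands):
    """Prior matching-moments estimate of Beta(alpha, beta) from
    failure-on-demand data: equate the sample mean m and variance v of
    the observed failure fractions to the Beta mean and variance,
    giving alpha = m*t and beta = (1-m)*t with t = m(1-m)/v - 1."""
    p = np.asarray(failures, float) / np.asarray(demands, float)
    m, v = p.mean(), p.var(ddof=1)
    t = m * (1.0 - m) / v - 1.0
    return m * t, (1.0 - m) * t

# five components: observed failures out of demands (illustrative)
alpha, beta = beta_prior_matching_moments([1, 0, 2, 1, 0],
                                          [50, 40, 60, 55, 45])
```

Note this simple version ignores the binomial sampling noise in each p_i, which is exactly the shortcoming the weighted and marginal-based methods (3) to (5) address.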

  4. Early-Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques.

    Science.gov (United States)

    Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc

    2016-09-08

    Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes.

  5. Phase Estimation Techniques for Active Optics Systems Used in Real-Time Wavefront Reconstruction.

    Science.gov (United States)

    1980-12-01

    Wavefront estimation from shearing interferometry measurements is considered in detail. Two analyses...

  6. A triple isotope technique for estimation of fat and vitamin B12 malabsorption in Crohn's disease

    International Nuclear Information System (INIS)

    Pedersen, N.T.; Rannem, T.

    1991-01-01

    A test for simultaneous estimation of vitamin B12 and fat absorption from stool samples was investigated in 25 patients with severe diarrhoea after operation for Crohn's disease. 51CrCl3 was ingested as a non-absorbable marker, 58Co-cyanocobalamin as the vitamin B12 tracer, and 14C-triolein as the lipid tracer. Faeces were collected separately for three days. Some stool-to-stool variation in the 58Co/51Cr and 14C/51Cr ratios was seen. When the 58Co-B12 and 14C-triolein excretion was estimated in samples of the two stools with the highest activities of 51Cr, the variations of the estimates were less than ±10% and ±15% of the doses ingested, respectively. 12 of the 25 patients were not able to collect faeces and urine quantitatively and separately. However, in all patients, faeces with sufficient radioactivity for simultaneous estimation of faecal 58Co-B12 and 14C-triolein excretion from stool samples were obtained.

  7. Evaluation of damping estimates in the presence of closely spaced modes using operational modal analysis techniques

    DEFF Research Database (Denmark)

    Bajric, Anela; Brincker, Rune; Thöns, Sebastian

    2015-01-01

    The evaluation is based on identification using the random response from white-noise loading of a three-degree-of-freedom (3DOF) system, numerically established from specified modal parameters for a range of natural frequencies. The numerical model provides comparisons of the effectiveness of damping estimation...

  8. Estimation of Compressive Strength of High Strength Concrete Using Non-Destructive Technique and Concrete Core Strength

    Directory of Open Access Journals (Sweden)

    Minkwan Ju

    2017-12-01

    Estimating the compressive strength of high strength concrete (HSC) is an essential investigation for the maintenance of nuclear power plant (NPP) structures. This study evaluates the compressive strength of HSC using two approaches: non-destructive tests and concrete core strength. For the non-destructive tests, samples of HSC were mixed to specified design strengths of 40, 60 and 100 MPa. Based on a dual regression relation between ultrasonic pulse velocity (UPV) and rebound hammer (RH) measurements, an estimation expression is developed. In comparison to previously published estimation equations, the equation proposed in this study shows the highest accuracy and the lowest root mean square error (RMSE). For the estimation of compressive strength using concrete core specimens, three different core diameters were examined: 30, 50, and 100 mm. Based on 61 measured compressive strengths of core specimens, a simple strength correction factor is investigated. The compressive strength of a concrete core specimen decreases as the core diameter is reduced; this relation is associated with internal damage to the concrete cores and the proportion of coarse aggregate within the core diameter resulting from the extraction process. The strength estimation expressions formulated using the non-destructive technique and the core strength correction factor can be updated with further test results and utilized for the maintenance of NPPs.
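    The dual regression idea, fitting compressive strength jointly to UPV and RH readings, can be sketched with an ordinary least-squares fit. The calibration numbers below are hypothetical stand-ins, not the paper's data, and the linear-in-both-variables form is an assumption for illustration.

```python
import numpy as np

# Hypothetical (UPV, RH, strength) calibration data; the paper's actual
# regression form and coefficients are not given here, so this only sketches
# fitting a dual (two-variable) linear regression by least squares.
upv = np.array([4.2, 4.5, 4.8, 5.0, 5.1])   # ultrasonic pulse velocity, km/s
rh  = np.array([38., 44., 50., 55., 58.])   # rebound number
fc  = np.array([42., 58., 74., 92., 101.])  # measured strength, MPa

X = np.column_stack([np.ones_like(upv), upv, rh])   # intercept + 2 predictors
coef, *_ = np.linalg.lstsq(X, fc, rcond=None)       # [b0, b_upv, b_rh]
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - fc) ** 2)))    # fit quality in MPa
```

    Comparing such an RMSE across candidate regression forms is essentially how the abstract's "highest accuracy, lowest RMSE" claim would be evaluated.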

  9. A spatial compression technique for head-related transfer function interpolation and complexity estimation

    DEFF Research Database (Denmark)

    Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John

    2015-01-01

    A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...

  10. Estimation of fracture parameters in foam core materials using thermal techniques

    DEFF Research Database (Denmark)

    Dulieu-Barton, J. M.; Berggreen, Christian; Boyenval Langlois, C.

    2010-01-01

    The paper presents some initial work on establishing the stress state at a crack tip in PVC foam material using a non-contact infra-red technique known as thermoelastic stress analysis (TSA). A parametric study of the factors that may affect the thermoelastic response of the foam material...

  11. Estimating bridge stiffness using a forced-vibration technique for timber bridge health monitoring

    Science.gov (United States)

    James P. Wacker; Xiping Wang; Brian Brashaw; Robert J. Ross

    2006-01-01

    This paper describes an effort to refine a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the frequency response of several simple-span, sawn timber beam (with plank deck) bridges located in St. Louis County, Minnesota. Static load deflections were also measured to...

  12. Propensity Score Estimation with Data Mining Techniques: Alternatives to Logistic Regression

    Science.gov (United States)

    Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M.

    2013-01-01

    Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…

  13. A novel technique for estimation of skew in binary text document ...

    Indian Academy of Sciences (India)

    The method uses the boundary growing approach to extract the lowermost and uppermost coordinates of pixels of characters of text lines present in the document, which can be subjected to linear regression analysis (LRA) to determine the skew angle of a skewed document. Further, the proposed technique works fine for ...

  14. Phase velocity estimation technique based on adaptive beamforming for ultrasonic guided waves propagating along cortical long bones

    Science.gov (United States)

    Okumura, Shigeaki; Nguyen, Vu-Hieu; Taki, Hirofumi; Haïat, Guillaume; Naili, Salah; Sato, Toru

    2017-07-01

    The axial transmission technique, which is used to estimate the phase velocity of an ultrasonic guided wave propagating along cortical bone, is a promising tool for bone quality assessment. Lamb waves are ultrasonic guided waves that consist of multiple modes. The number of existing modes and the signal-to-noise ratio required for phase velocity estimation depend on the frequency of the signal. Hence, we employ an adaptive beamforming technique with spatial averaging to control the signal-to-noise ratio and resolution by situating subarrays within the full array. Because determining the optimal size for spatial averaging is difficult, we propose a new algorithm that does not require a specific averaging size, together with a new false-phase-velocity rejection technique. Using a 2.0-mm-thick copper plate, the proposed method accurately estimates phase velocity with fitting errors of 0.26% and 1.3%, as shown by simulation and experimental results, respectively. The measurement frequency ranges are more than twice as wide as those of the conventional method.

  15. Ground Receiving Station Reference Pair Selection Technique for a Minimum Configuration 3D Emitter Position Estimation Multilateration System

    Directory of Open Access Journals (Sweden)

    Abdulmalik Shehu Yaro

    2017-01-01

    Multilateration estimates aircraft position using Time Difference Of Arrival (TDOA) measurements with a lateration algorithm. The Position Estimation (PE) accuracy of the lateration algorithm depends on several factors: the TDOA estimation error, the lateration algorithm approach, the number of deployed Ground Receiving Stations (GRSs) and the selection of the GRS reference used for the PE process. Using the minimum number of GRSs for 3D emitter PE, a technique based on the condition number calculation is proposed to select the suitable GRS reference pair and thereby improve the PE accuracy of the lateration algorithm. Validation of the proposed technique was performed with the GRSs in square and triangular configurations. For the selected emitter positions, the results show that the proposed technique can be used to select the suitable GRS reference pair for the PE process; a unity condition number is achieved for the GRS pair most suitable for PE. Monte Carlo simulation, in comparison with a fixed-GRS-reference-pair lateration algorithm, shows a reduction in PE error of at least 70% for both the square and triangular configurations.
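    The scoring idea can be sketched as follows: for each candidate reference station, form a geometry matrix from the baseline vectors to the remaining stations and score it by its condition number (closer to 1 means better conditioned). The station layout and this particular matrix construction are hypothetical simplifications of the paper's lateration setup.

```python
import numpy as np

# Toy reference-selection sketch: score each candidate reference GRS by the
# condition number of the baseline-vector matrix it induces, then pick the
# best-conditioned one. Station coordinates are hypothetical.

grs = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [0., 1., 0.],
                [1., 1., 1.]])          # 4 ground receiving stations (3D)

def condition_number(ref):
    others = [i for i in range(len(grs)) if i != ref]
    A = grs[others] - grs[ref]          # rows: baselines to the reference
    return float(np.linalg.cond(A))

scores = {ref: condition_number(ref) for ref in range(len(grs))}
best_ref = min(scores, key=scores.get)  # best-conditioned reference station
```

    In the paper the scored matrix comes from the lateration equations for a reference *pair*, but the selection principle, minimizing the condition number toward unity, is the same.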

  16. Validation of a mathematical model for Bell 427 Helicopter using parameter estimation techniques and flight test data

    Science.gov (United States)

    Crisan, Emil Gabriel

    Certification requirements, optimization and minimum project costs, design of flight control laws and the implementation of flight simulators are among the principal applications of system identification in the aeronautical industry. This document examines the practical application of parameter estimation techniques to the problem of estimating helicopter stability and control derivatives from flight test data provided by Bell Helicopter Textron Canada. The purpose of this work is twofold: a time-domain application of the Output Error method using the Gauss-Newton algorithm and a frequency-domain identification method to obtain the aerodynamic and control derivatives of a helicopter. The adopted model for this study is a fully coupled, 6 degree-of-freedom (DoF) state space model. The technique used for rotorcraft identification in the time domain was the Maximum Likelihood Estimation method, embodied in a modified version of NASA's Maximum Likelihood Estimator program (MMLE3) obtained from the National Research Council (NRC). The frequency-domain system identification procedure is incorporated in a comprehensive package of user-oriented programs referred to as CIFER®. The coupled, 6 DoF model does not include the high-frequency main rotor modes (flapping, lead-lag, twisting), yet it is capable of modeling rotorcraft dynamics fairly accurately, as shown by the model verification. The identification results demonstrate that MMLE3 is a powerful and effective tool for extracting reliable helicopter models from flight test data. The results obtained with the frequency-domain approach demonstrated that CIFER® could achieve good results even on limited data.

  17. A performance analysis of echographic ultrasonic techniques for non-invasive temperature estimation in hyperthermia range using phantoms with scatterers.

    Science.gov (United States)

    Bazán, I; Vazquez, M; Ramos, A; Vera, A; Leija, L

    2009-03-01

    Optimization of efficiency in hyperthermia requires a precise and non-invasive estimation of the internal distribution of temperature. Although there are several research trends in ultrasonic temperature estimation, efficient equipment for use in clinical practice is not yet available. The main objective of this work was to investigate the limitations and potential improvements of previously reported signal processing options, in order to identify the research efforts needed to facilitate their future clinical use as thermal estimators. This document presents a critical analysis of the potential performance of previous ultrasonic research trends for temperature estimation inside materials, using different processing techniques proposed in the frequency, time and phase domains. The analysis was carried out in phantoms with scatterers, assessing their specific applicability, linearity and limitations in the hyperthermia range. Three complementary evaluation indexes (technique robustness, Matlab processing time and temperature resolution), with specific application protocols, were defined and employed for a comparative quantification of the behavior of the techniques. The average increment per °C and mm was identified for each technique (3 kHz/°C in the frequency analysis and 0.02 rad/°C in the phase domain, while increments of only 1.6 ns/°C were found in the time domain). Their linearity with temperature rise was measured using linear and quadratic regressions, which were correlated with the obtained data. New improvements in time- and frequency-domain signal processing are proposed in order to reveal the potential thermal and spatial resolutions of these techniques, and the resulting improved estimation results are shown for simulated and measured A-scan registers. As an example of these processing novelties, an excellent potential resolution of 0.12 °C in the hyperthermia range, with near-to-linear frequency dependence, could be achieved.

  18. Capacity Estimation and Near-Capacity Achieving Techniques for Digitally Modulated Communication Systems

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov

    This thesis studies potential improvements that can be made to the current data rates of digital communication systems. The physical layer of the system will be investigated in band-limited scenarios, where high spectral efficiency is necessary in order to meet the ever-growing data rate demand. Several issues are tackled, with both theoretical and more practical aspects. The theoretical part is mainly concerned with estimating the constellation constrained capacity (CCC) of channels with discrete input, which is an inherent property of digital communication systems. The channels under investigation include linear interference channels of high dimensionality (such as multiple-input multiple-output), and the non-linear optical fiber channel, which has been gathering more and more attention from the information theory community in recent years. In both cases novel CCC estimates and lower...
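    For a concrete sense of what a CCC estimate involves, a Monte-Carlo estimator for the simplest possible case, BPSK over a real AWGN channel, can be sketched as below. The thesis addresses far richer channels; the SNR here is an arbitrary choice and the estimator is the standard simulation-based mutual-information formula for a uniform discrete input.

```python
import numpy as np

# Monte-Carlo CCC sketch for BPSK on a real AWGN channel:
#   I = log2(M) - E[ log2 sum_x' exp(((y-x)^2 - (y-x')^2) / (2 sigma^2)) ]
rng = np.random.default_rng(2)
const = np.array([-1.0, 1.0])      # unit-energy BPSK constellation (M = 2)
sigma2 = 0.5                       # noise variance (Es/N0 = 3 dB, arbitrary)

n = 20000
x = rng.choice(const, size=n)                      # transmitted symbols
y = x + rng.normal(0.0, np.sqrt(sigma2), size=n)   # noisy channel output

d = (y[:, None] - const[None, :]) ** 2             # distances to all symbols
num = (y - x) ** 2                                 # distance to the sent one
ccc = float(1.0 - np.mean(
    np.log2(np.exp((num[:, None] - d) / (2 * sigma2)).sum(axis=1))))
```

    The same expectation, evaluated over a high-dimensional or non-linear channel law, is what makes CCC estimation hard in the cases the thesis studies.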

  19. Approaching bathymetry estimation from high resolution multispectral satellite images using a neuro-fuzzy technique

    Science.gov (United States)

    Corucci, Linda; Masini, Andrea; Cococcioni, Marco

    2011-01-01

    This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using ground-truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.

  20. Weight estimates and packaging techniques for the microwave radiometer spacecraft. [shuttle compatible design

    Science.gov (United States)

    Jensen, J. K.; Wright, R. L.

    1981-01-01

    Estimates of total spacecraft weight and packaging options were made for three conceptual designs of a microwave radiometer spacecraft. Erectable structures were found to be slightly lighter than deployable structures but could be packaged in one-tenth the volume. The tension rim concept, an unconventional design approach, was found to be the lightest and transportable to orbit in the least number of shuttle flights.

  1. Automatic Estimation of Live Coffee Leaf Infection based on Image Processing Techniques

    OpenAIRE

    Hitimana, Eric; Gwun, Oubong

    2014-01-01

    Image segmentation is among the most challenging issues in computer vision applications, and one of the main difficulties for crop management in agriculture is the lack of appropriate methods for detecting leaf damage for pest treatment. In this paper we propose an automatic method for leaf damage detection and severity estimation of coffee leaves without defoliation. After enhancing the contrast of the original image using ...
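    The severity measure implied by the abstract, the fraction of leaf pixels classified as damaged, can be sketched once segmentation masks are available. The masks below are synthetic stand-ins for the segmented coffee-leaf image.

```python
import numpy as np

# Severity sketch: percentage of leaf area covered by damage, computed from
# two binary masks (leaf vs background, damaged vs healthy). Synthetic masks.
leaf = np.zeros((10, 10), dtype=bool)
leaf[1:9, 1:9] = True                       # 64 leaf pixels

damage = np.zeros_like(leaf)
damage[3:6, 3:7] = True                     # 12 damaged pixels, inside leaf

severity = float((damage & leaf).sum() / leaf.sum()) * 100  # percent
```

    In the actual pipeline both masks would come from the segmentation and enhancement steps the abstract describes; the ratio itself is the easy part.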

  2. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The apparently simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is in fact made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. The study considered the first clinical knife-edge slit camera design, used with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated in a comparative study conducted by applying Poisson noise 1000 times to each profile. In 67% of cases the learning model gave lower prediction errors than the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
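    The core of the learning model, PCA compression of the profiles followed by a linear fit from scores to shift, can be sketched on synthetic data. The Gaussian "profiles" below are invented stand-ins for simulated prompt-gamma depth profiles; only the 99.95% variance cut mirrors the abstract.

```python
import numpy as np

# Sketch: reduce simulated profiles with PCA (via SVD), then fit a linear
# model from PCA scores to the known BP shifts. All data are synthetic.
rng = np.random.default_rng(0)
shifts = rng.uniform(-5, 5, size=200)               # mm, training labels
depth = np.linspace(0, 1, 50)
profiles = np.array([np.exp(-((depth - 0.5 - 0.02 * s) ** 2) / 0.01)
                     for s in shifts])              # toy depth profiles

X = profiles - profiles.mean(axis=0)                # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
ratio = np.cumsum(S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(ratio, 0.9995)) + 1         # keep 99.95% of variance
scores = X @ Vt[:k].T

A = np.column_stack([scores, np.ones(len(shifts))]) # linear model on scores
w, *_ = np.linalg.lstsq(A, shifts, rcond=None)
mae = float(np.mean(np.abs(A @ w - shifts)))        # training error, mm
```

    The paper additionally stress-tests such a model against Poisson-noise perturbations of the profiles, which is where its robustness advantage over direct profile matching shows up.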

  3. Bias in little owl population estimates using playback techniques during surveys

    Directory of Open Access Journals (Sweden)

    Zuberogoitia, I.

    2011-12-01

    To test the efficiency of playback methods to survey little owl (Athene noctua) populations we carried out two studies: (1) we recorded the replies of radio-tagged little owls to calls in a small area; (2) we recorded call broadcasts to estimate the effectiveness of the method in detecting the presence of the little owl. In the first study, we detected an average of 8.12 owls in the 30′ survey period, a number that is close to the real population; we also detected significant little owl movements from the initial location (before the playback) to subsequent locations during the survey period. However, we only detected an average of 2.25 and 5.37 little owls in the first 5′ and 10′, respectively, of the survey time. In the second study, we detected 137 little owl territories in 105 positive sample units. The occupation rate was 0.35, the estimated occupancy was 0.393, and the probability of detection was 0.439. The estimated cumulative probability of detection suggests that a minimum of four sampling times would be needed in an extensive survey to detect 95% of the areas occupied by little owls.

  4. Non-Invasive Blood Pressure Estimation from ECG Using Machine Learning Techniques.

    Science.gov (United States)

    Simjanoska, Monika; Gjoreski, Martin; Gams, Matjaž; Madevska Bogdanova, Ana

    2018-04-11

    Blood pressure (BP) measurements have been used widely in clinical and private environments. Recently, the use of ECG monitors has proliferated; however, they are not enabled with BP estimation. We have developed a method for BP estimation using only electrocardiogram (ECG) signals. Raw ECG data are filtered and segmented, and, following this, a complexity analysis is performed for feature extraction. Then, a machine-learning method is applied, combining a stacking-based classification module and a regression module for building systolic BP (SBP), diastolic BP (DBP), and mean arterial pressure (MAP) predictive models. In addition, the method allows a probability distribution-based calibration to adapt the models to a particular user. Using ECG recordings from 51 different subjects, 3129 30-s ECG segments are constructed, and seven features are extracted. Using a train-validation-test evaluation, the method achieves a mean absolute error (MAE) of 8.64 mmHg for SBP, 18.20 mmHg for DBP, and 13.52 mmHg for the MAP prediction. When models are calibrated, the MAE decreases to 7.72 mmHg for SBP, 9.45 mmHg for DBP and 8.13 mmHg for MAP. The experimental results indicate that, when a probability distribution-based calibration is used, the proposed method can achieve results close to those of a certified medical device for BP estimation.

  5. An innovative technique for estimating water saturation from capillary pressure in clastic reservoirs

    Science.gov (United States)

    Adeoti, Lukumon; Ayolabi, Elijah Adebowale; James, Logan

    2017-11-01

    A major drawback of old resistivity tools is their poor vertical resolution and the resulting poor estimation of hydrocarbon when water saturation (Sw) is derived with the historical resistivity method. In this study, we provide an alternative method, called the saturation height function, to estimate hydrocarbon in some clastic reservoirs in the Niger Delta. The saturation height function was derived from pseudo capillary pressure curves generated using modern wells with complete log data. Our method was based on the determination of rock type from a log-derived porosity-permeability relationship, supported by volume of shale for its classification into different zones. Leverett J-functions were derived for each rock type. Our results show good correlation between Sw from the resistivity-based method and Sw from pseudo capillary pressure curves in wells with modern log data. The resistivity-based model overestimates Sw in some wells, while Sw from the pseudo capillary pressure curves validates and predicts more accurate Sw. In addition, Sw from pseudo capillary pressure curves replaces that of the resistivity-based model in a well where the resistivity equipment failed. The plot of hydrocarbon pore volume (HCPV) from the J-function against HCPV from Archie shows that wells with high HCPV have high sand qualities and vice versa. This was further used to predict the geometry of stratigraphic units. The model presented here addresses the gap in the estimation of Sw and is applicable to reservoirs of similar rock type in other frontier basins worldwide.
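    The Leverett J-function at the heart of the saturation height approach has a standard field-unit form, which can be sketched directly; the permeability, porosity, capillary pressure, and interfacial-tension values below are hypothetical, not the study's.

```python
import math

# Leverett J-function in field units: Pc in psi, permeability k in mD,
# interfacial tension sigma in dyn/cm, contact angle theta in degrees.
# J normalizes capillary pressure so curves from one rock type collapse.
def leverett_j(pc_psi, k_md, phi, sigma=26.0, theta_deg=30.0):
    return (0.21645 * pc_psi / (sigma * math.cos(math.radians(theta_deg)))
            * math.sqrt(k_md / phi))

j = leverett_j(pc_psi=5.0, k_md=150.0, phi=0.22)   # hypothetical rock sample
```

    Fitting J versus Sw per rock type, then inverting it with height-dependent Pc, is what turns capillary pressure data into the saturation height function the abstract describes.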

  6. Genetic divergence of rubber tree estimated by multivariate techniques and microsatellite markers

    Directory of Open Access Journals (Sweden)

    Lígia Regina Lima Gouvêa

    2010-01-01

    The genetic diversity of 60 Hevea genotypes, consisting of Asiatic, Amazonian, African and IAC clones pertaining to the genetic breeding program of the Agronomic Institute (IAC), Brazil, was estimated. Analyses were based on phenotypic multivariate parameters and microsatellites. Five agronomic descriptors were employed in multivariate procedures, such as Standard Euclidian Distance, Tocher clustering and principal component analysis. Genetic variability among the genotypes was estimated with 68 selected polymorphic SSRs, by way of the Modified Rogers Genetic Distance and UPGMA clustering. Structure software, in a Bayesian approach, was used to discriminate among groups. Genetic diversity was estimated through Nei's statistics. The genotypes were clustered into 12 groups according to the Tocher method, while the molecular analysis identified six groups. In the phenotypic and microsatellite analyses, the Amazonian and IAC genotypes were distributed in several groups, whereas the Asiatic ones fell into only a few. Observed heterozygosity ranged from 0.05 to 0.96. Both high total diversity (HT' = 0.58) and high gene differentiation (Gst' = 0.61) were observed, indicating high genetic variation among the 60 genotypes, which may be useful for breeding programs. The analyzed agronomic parameters and SSR markers were effective in assessing genetic diversity among Hevea genotypes, besides proving useful for characterizing genetic variability.
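    The Modified Rogers distance used for the SSR data can be sketched from allele frequencies at each locus; the two toy genotypes below (two loci, two alleles each) are hypothetical, and for a diploid individual the per-allele frequencies are simply 0, 0.5 or 1.

```python
import math

# Modified Rogers distance sketch: sqrt of the summed squared allele-frequency
# differences over all loci, scaled by 2L (L = number of loci). Toy data.
geno_x = [{"a1": 1.0, "a2": 0.0},   # locus 1: homozygous a1
          {"b1": 0.5, "b2": 0.5}]   # locus 2: heterozygous
geno_y = [{"a1": 0.5, "a2": 0.5},
          {"b1": 0.0, "b2": 1.0}]

def modified_rogers(x, y):
    L = len(x)
    ss = sum((lx[a] - ly[a]) ** 2
             for lx, ly in zip(x, y) for a in lx)
    return math.sqrt(ss / (2 * L))

d = modified_rogers(geno_x, geno_y)   # 0 = identical, 1 = maximally distant
```

    A matrix of such pairwise distances over the 60 genotypes is what feeds the UPGMA clustering the abstract mentions.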

  7. A differential absorption technique to estimate atmospheric total water vapor amounts

    Science.gov (United States)

    Frouin, Robert; Middleton, Elizabeth

    1990-01-01

    Vertically integrated water-vapor amounts can be remotely determined by measuring the solar radiance reflected by the earth's surface with satellite or aircraft-based instruments. The technique is based on the method of Fowle (1912, 1913) and utilizes the 0.940-micron water-vapor band to retrieve total-water-vapor data that are independent of surface reflectance properties and other atmospheric constituents. A channel combination is proposed to provide more accurate results, the SE-590 spectrometer is used to verify the data, and the effect of atmospheric photon backscattering is examined. The spectrometer and radiosonde data confirm the accuracy of using a narrow and a wide channel centered on the same wavelength to determine water vapor amounts. The technique is suitable for cloudless conditions and can contribute to atmospheric corrections of land-surface parameters.
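    The two-channel ratio idea can be sketched numerically. The ratio of the narrow (absorbing) channel to the wide channel cancels surface reflectance; mapping that ratio to precipitable water then requires a band-model law, and both the square-root transmittance law and the coefficient below are assumptions for illustration, not the paper's calibration.

```python
import math

# Differential-absorption sketch: the narrow/wide radiance ratio approximates
# the water-vapour band transmittance t, independent of surface reflectance.
# An assumed strong-line band model t = exp(-a * sqrt(W)) is then inverted
# for precipitable water W; the coefficient a = 0.65 is purely illustrative.
def precipitable_water(r_narrow, r_wide, a=0.65):
    t = r_narrow / r_wide            # two-way water-vapour transmittance
    return (math.log(t) / -a) ** 2   # invert assumed t = exp(-a * sqrt(W))

w = precipitable_water(0.32, 0.45)   # hypothetical channel radiances
```

    In practice the coefficient (and the functional form) would be fitted against radiosonde soundings, as the abstract's validation suggests.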

  8. Data Mining Techniques to Estimate Plutonium, Initial Enrichment, Burnup, and Cooling Time in Spent Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Trellue, Holly Renee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fugate, Michael Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tobin, Stephen Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    The Next Generation Safeguards Initiative (NGSI), Office of Nonproliferation and Arms Control (NPAC), National Nuclear Security Administration (NNSA) of the U.S. Department of Energy (DOE) has sponsored a multi-laboratory, university, international partner collaboration to (1) detect replaced or missing pins from spent fuel assemblies (SFA) to confirm item integrity and deter diversion, (2) determine plutonium mass and related plutonium and uranium fissile mass parameters in SFAs, and (3) verify initial enrichment (IE), burnup (BU), and cooling time (CT) of facility declaration for SFAs. A wide variety of nondestructive assay (NDA) techniques were researched to achieve these goals [Veal, 2010 and Humphrey, 2012]. In addition, the project includes two related activities with facility-specific benefits: (1) determination of heat content and (2) determination of reactivity (multiplication). In this research, a subset of 11 integrated NDA techniques was researched using data mining solutions at Los Alamos National Laboratory (LANL) for their ability to achieve the above goals.

  9. Skill Assessment of An Hybrid Technique To Estimate Quantitative Precipitation Forecast For Galicia (nw Spain)

    Science.gov (United States)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precip- itation forecast (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of the NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful to achieve a good QPF. In this work, a new technique of QPF for Galicia (NW of Spain) is presented. This region has a percentage of rainy days per year greater than 50% with quantities that may cause floods, with human and economical damages. The technique is composed of a NWP model (ARPS) and a statistical downscaling process based on an automated classification scheme of at- mospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S. 2000 "An Overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp:306- 337. [2] de Bruijn, E.I.F and T. Brandsma "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp:148-161. [3] Ribalaygua, J. and Boren R. "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular". Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología. Madrid. 53 pp.

  10. Determining multiple length scales in rocks

    Science.gov (United States)

    Song, Yi-Qiao; Ryu, Seungoh; Sen, Pabitra N.

    2000-07-01

    Carbonate reservoirs in the Middle East are believed to contain about half of the world's oil. The processes of sedimentation and diagenesis produce in carbonate rocks microporous grains and a wide range of pore sizes, resulting in a complex spatial distribution of pores and pore connectivity. This heterogeneity makes it difficult to determine by conventional techniques the characteristic pore-length scales, which control fluid transport properties. Here we present a bulk-measurement technique that is non-destructive and capable of extracting multiple length scales from carbonate rocks. The technique uses nuclear magnetic resonance to exploit the spatially varying magnetic field inside the pore space itself, a 'fingerprint' of the pore structure. We found three primary length scales (1-100 µm) in the Middle-East carbonate rocks and determined that the pores are well connected and spatially mixed. Such information is critical for reliably estimating the amount of capillary-bound water in the rock, which is important for efficient oil production. This method might also be used to complement other techniques for the study of shaly sand reservoirs and compartmentalization in cells and tissues.

  11. A new technique with high reproducibility to estimate renal oxygenation using BOLD-MRI in chronic kidney disease.

    Science.gov (United States)

    Piskunowicz, Maciej; Hofmann, Lucie; Zuercher, Emilie; Bassi, Isabelle; Milani, Bastien; Stuber, Matthias; Narkiewicz, Krzysztof; Vogt, Bruno; Burnier, Michel; Pruijm, Menno

    2015-04-01

    To assess the inter-observer variability of renal blood oxygenation level-dependent MRI (BOLD-MRI) using a new method of analysis, called the concentric objects (CO) technique, in comparison with the classical ROI (region of interest)-based technique. MR imaging (3T) was performed before and after furosemide in 10 chronic kidney disease (CKD) patients (mean eGFR 43±24 ml/min/1.73 m²) and 10 healthy volunteers (eGFR 101±28 ml/min/1.73 m²), and R2* maps were determined on four coronal slices. In the CO-technique, R2* values were based on a semi-automatic procedure that divided each kidney into six equal layers, whereas in the ROI-technique, all circles (ROIs) were placed manually in the cortex and medulla. The mean R2* values as assessed by two independent investigators were compared. With the CO-technique, inter-observer variability was 0.7%-1.9% across all layers in non-CKD, versus 1.6%-3.8% in CKD. With the ROI-technique, median variability for cortical and medullary R2* values was 3.6% and 6.8% in non-CKD, versus 4.7% and 12.5% in CKD; similar results were observed after furosemide. The CO-technique offers a new, investigator-independent, highly reproducible alternative to the ROI-based technique for estimating renal tissue oxygenation in CKD. Copyright © 2015 Elsevier Inc. All rights reserved.
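
    The layer-splitting step of the CO-technique can be illustrated with a distance transform: a binary kidney mask is divided into six layers of equal depth from the organ boundary, and mean R2* is computed per layer. This is a minimal sketch, not the authors' implementation; the function name and the use of `scipy.ndimage` are assumptions.

```python
import numpy as np
from scipy import ndimage

def concentric_layer_means(mask, r2star_map, n_layers=6):
    """Divide a binary organ mask into n_layers concentric layers of
    equal depth (Euclidean distance to the organ boundary) and return
    the mean R2* of each layer, outermost first."""
    depth = ndimage.distance_transform_edt(mask)   # 0 outside, >0 inside
    edges = np.linspace(0.0, depth.max(), n_layers + 1)
    means = []
    for i in range(n_layers):
        layer = mask & (depth > edges[i]) & (depth <= edges[i + 1])
        means.append(float(r2star_map[layer].mean()))
    return means
```

    Applied to an R2* map, this yields a cortico-medullary profile without manual ROI placement, which is the source of the reported inter-observer gain.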

  12. Non-invasive estimate of blood glucose and blood pressure from a photoplethysmograph by means of machine learning techniques.

    Science.gov (United States)

    Monte-Moreno, Enric

    2011-10-01

    This work presents a system for the simultaneous non-invasive estimation of the blood glucose level (BGL) and the systolic (SBP) and diastolic (DBP) blood pressure, using a photoplethysmograph (PPG) and machine learning techniques. The method is independent of the person whose values are being measured and does not need calibration over time or subjects. The architecture of the system consists of a photoplethysmograph sensor, an activity detection module, a signal processing module that extracts features from the PPG waveform, and a machine learning algorithm that estimates the SBP, DBP and BGL values. The idea that underlies the system is that there is a functional relationship between the shape of the PPG waveform and the blood pressure and glucose levels. As described in this paper, we tested this method on 410 individuals without performing any personalized calibration. The results were computed after cross-validation. The machine learning techniques tested were: ridge linear regression, a multilayer perceptron neural network, support vector machines and random forests. The best results were obtained with the random forest technique. The resulting coefficients of determination for reference vs. prediction were R²(SBP) = 0.91, R²(DBP) = 0.89 and R²(BGL) = 0.90. For the glucose estimation, the distribution of points on a Clarke error grid placed 87.7% of points in zone A, 10.3% in zone B, and 1.9% in zone D. Blood pressure values complied with the grade B protocol of the British Hypertension Society. An effective system for the estimation of blood glucose and blood pressure from a photoplethysmograph is presented. The main advantages of the system are that, for clinical use, it complies with the grade B protocol of the British Hypertension Society for blood pressure and that it failed to detect hypoglycemia or hyperglycemia in only 1.9% of cases. Copyright © 2011 Elsevier B.V. All rights reserved.
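
    The regression stage can be sketched with scikit-learn's random forest, the technique the authors found best. Everything below is illustrative only: the synthetic features and target stand in for real PPG waveform features and reference cuff/glucometer readings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Stand-ins for per-beat PPG waveform features (e.g. pulse width,
# rise time, notch position); real features come from the PPG module.
X = rng.uniform(size=(400, 4))
# Synthetic SBP-like target: a smooth function of the features.
y = 100 + 30 * X[:, 0] - 15 * X[:, 1] ** 2 + 5 * X[:, 2] * X[:, 3]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)
# In-sample fit only; the paper's R^2 values are cross-validated.
r2_in_sample = model.score(X, y)
```

    In practice one model is trained per target (SBP, DBP, BGL), and the reported person-independence comes from never training and testing on the same subject.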

  13. New Technique for TOC Estimation Based on Thermal Core Logging in Low-Permeable Formations (Bazhen fm.)

    Science.gov (United States)

    Popov, Evgeny; Popov, Yury; Spasennykh, Mikhail; Kozlova, Elena; Chekhonin, Evgeny; Zagranovskaya, Dzhuliya; Belenkaya, Irina; Alekseev, Aleksey

    2016-04-01

    A practical method of identifying organic-rich intervals within low-permeable dispersive rocks, based on thermal conductivity measurements along the core, is presented. Non-destructive, non-contact thermal core logging was performed with the optical scanning technique on 4685 full-size core samples from 7 wells drilled in four low-permeable zones of the Bazhen formation (B.fm.) in Western Siberia (Russia). The method employs continuous simultaneous measurements of rock thermal conductivity, volumetric heat capacity, thermal anisotropy coefficient and thermal heterogeneity factor along the cores, allowing high vertical resolution (up to 1-2 mm). B.fm. rock matrix thermal conductivity was observed to be essentially stable within the range of 2.5-2.7 W/(m·K). However, stable matrix thermal conductivity along with a high thermal anisotropy coefficient is characteristic of B.fm. sediments due to the low rock porosity values. It is shown experimentally that the measured thermal parameters relate linearly to organic richness rather than to deviations in the porosity coefficient. Thus, a new technique was developed that transforms the thermal conductivity profiles into continuous profiles of total organic carbon (TOC) along the core. Comparison of the TOC values estimated from thermal conductivity with experimental pyrolytic TOC estimations of 665 samples from the cores, obtained with the Rock-Eval and HAWK instruments, demonstrated the high efficiency of the new technique for separating organic-rich intervals. The data obtained with the new technique are essential for assessing the source-rock hydrocarbon generation potential, for basin and petroleum system modeling, and for the estimation of hydrocarbon reserves. The method allows the TOC richness to be accurately assessed using thermal well logs. The research work was done with the financial support of the Russian Ministry of Education and Science (unique identification number RFMEFI58114X0008).
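
    The core of the technique, a linear transformation of the thermal-conductivity profile into a TOC profile calibrated on pyrolysis samples, can be sketched as a least-squares calibration (the authors' exact transformation may differ):

```python
import numpy as np

def calibrate(lam_at_samples, toc_pyrolysis):
    """Least-squares fit of TOC = a + b * lambda on the depths where
    pyrolytic (e.g. Rock-Eval/HAWK) TOC is available."""
    b, a = np.polyfit(lam_at_samples, toc_pyrolysis, 1)
    return a, b

def toc_profile(lam_log, a, b):
    """Apply the calibration to the continuous conductivity log."""
    return a + b * np.asarray(lam_log, dtype=float)
```

    With the reported inverse relation between conductivity and organic richness, the slope b comes out negative on real data.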

  14. Estimating surface soil moisture from SMAP observations using a Neural Network technique.

    Science.gov (United States)

    Kolassa, J; Reichle, R H; Liu, Q; Alemohammad, S H; Gentine, P; Aida, K; Asanuma, J; Bircher, S; Caldwell, T; Colliander, A; Cosh, M; Collins, C Holifield; Jackson, T J; Martínez-Fernández, J; McNairn, H; Pacheco, A; Thibeault, M; Walker, J P

    2018-01-01

    A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to March 2017 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observing System Model version 5 (GEOS-5) land modeling system, and Moderate Resolution Imaging Spectroradiometer-based vegetation water content. The NN was trained on GEOS-5 soil moisture target data, making the NN estimates consistent with the GEOS-5 climatology, such that they may ultimately be assimilated into this model without further bias correction. Evaluated against in situ soil moisture measurements, the average unbiased root mean square error (ubRMSE), correlation and anomaly correlation of the NN retrievals were 0.037 m³ m⁻³, 0.70 and 0.66, respectively, against SMAP core validation site measurements and 0.026 m³ m⁻³, 0.58 and 0.48, respectively, against International Soil Moisture Network (ISMN) measurements. At the core validation sites, the NN retrievals have a significantly higher skill than the GEOS-5 model estimates and a slightly lower correlation skill than the SMAP Level-2 Passive (L2P) product. The feasibility of the NN method was reflected by a lower ubRMSE compared to the L2P retrievals as well as a higher skill when ancillary parameters in physically-based retrievals were uncertain. Against ISMN measurements, the skill of the two retrieval products was more comparable. A triple collocation analysis against Advanced Microwave Scanning Radiometer 2 (AMSR2) and Advanced Scatterometer (ASCAT) soil moisture retrievals showed that the NN and L2P retrieval errors have a similar spatial distribution, but the NN retrieval errors are generally lower in densely vegetated regions and transition zones.
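
    The skill metrics quoted above are standard in soil-moisture validation and compact enough to state precisely; a minimal sketch of the definitions (not the SMAP processing code):

```python
import numpy as np

def ubrmse(est, ref):
    """Unbiased RMSE: RMSE after removing the mean bias."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    bias = est.mean() - ref.mean()
    return float(np.sqrt(np.mean((est - ref - bias) ** 2)))

def anomaly_correlation(est, ref, clim_est, clim_ref):
    """Correlation of anomalies about a (e.g. seasonal) climatology."""
    a = np.asarray(est, float) - np.asarray(clim_est, float)
    b = np.asarray(ref, float) - np.asarray(clim_ref, float)
    return float(np.corrcoef(a, b)[0, 1])
```

    A retrieval with a constant offset from the in situ data scores ubRMSE = 0, which is why ubRMSE is reported alongside correlation rather than plain RMSE.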

  15. Estimating leaf area index in Southeast Alaska: a comparison of two techniques.

    Directory of Open Access Journals (Sweden)

    Carolyn A Eckrich

    The relationship between canopy structure and light transmission to the forest floor is of particular interest for studying the effects of succession, timber harvest, and silviculture prescriptions on understory plants and trees. Indirect measurements of leaf area index (LAI) estimated using gap fraction analysis with linear and hemispheric sensors have been commonly used to assess radiation interception by the canopy, although the two methods often yield inconsistent results. We compared simultaneously obtained measurements of LAI from a linear ceptometer and digital hemispheric photography in 21 forest stands on Prince of Wales Island, Alaska. We assessed the relationship between these estimates and allometric LAI based on tree diameter at breast height (LAIDBH). LAI values measured at 79 stations in thinned, un-thinned control, old-growth and clearcut stands were highly correlated between the linear sensor (AccuPAR) and hemispheric photography, but the latter was more negatively biased compared to LAIDBH. In contrast, AccuPAR values were more similar to LAIDBH in all stands with basal area less than 30 m² ha⁻¹. Values produced by integrating hemispheric photographs over the zenith angles 0-75° (Ring 5) were highly correlated with those integrated over the zenith angles 0-60° (Ring 4), although the discrepancies between the two measures were significant. On average, the AccuPAR estimates were 53% higher than those derived from Ring 5, with most of the differences in closed-canopy stands (unthinned controls and old-growth) and less so in clearcuts. Following typical patterns of canopy closure, AccuPAR LAI values were higher in dense control stands than in old-growth, whereas the opposite was derived from Ring 5 analyses. Based on our results we advocate the preferential use of linear sensors where canopy openness is low, canopies are tall, and leaf distributions are clumped and angles are variable, as is common in the conifer forests of coastal
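
    Both instruments ultimately invert canopy gap fraction for LAI. A one-angle Beer-Lambert inversion, a simplification of what the AccuPAR and hemispheric-photo software do (the extinction coefficient k is an assumed input):

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Effective LAI from gap fraction P via P = exp(-k * LAI),
    i.e. LAI = -ln(P) / k. Clumped foliage violates the random-leaf
    assumption behind this inversion, one source of the biases
    reported between instruments."""
    return -np.log(np.asarray(gap_fraction, dtype=float)) / k
```
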

  16. On-board and Ground Visual Pose Estimation Techniques for UAV Control

    OpenAIRE

    Martínez Luna, Carol Viviana; Mondragon Bernal, Ivan Fernando; Olivares Méndez, Miguel Ángel; Campoy Cervera, Pascual

    2011-01-01

    In this paper, two techniques for controlling UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first one is based on the detection and tracking of planar structures from an on-board camera, while the second one is based on the detection and 3D reconstruction of the position of the UAV using an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (precision in the es...

  17. Estimating Horizontal Displacement between DEMs by Means of Particle Image Velocimetry Techniques

    Directory of Open Access Journals (Sweden)

    Juan F. Reinoso

    2015-12-01

    To date, digital terrain model (DTM) accuracy has been studied almost exclusively by computing its height variable. However, the largely ignored horizontal component bears a great influence on the positional accuracy of certain linear features, e.g., hydrological features. In an effort to fill this gap, we propose a means of measurement different from the geomatic approach, drawn from fluid mechanics (water and air flows) and aerodynamics. The particle image velocimetry (PIV) algorithm is proposed as an estimator of horizontal differences between digital elevation models (DEMs) in grid format. After applying a scale factor to the displacement estimated by the PIV algorithm, the mean error predicted is around one-seventh of the cell size of the DEM with the greatest spatial resolution, and around one-nineteenth of the cell size of the DEM with the least spatial resolution. Our methodology allows all kinds of DTMs to be compared once they are transformed into DEM format, while also allowing comparison of data from diverse capture methods, i.e., LiDAR versus photogrammetric data sources.
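
    The displacement estimator at the heart of PIV is the peak of a windowed cross-correlation. A minimal FFT-based sketch for two DEM windows (integer-cell shifts only; PIV implementations add sub-pixel peak interpolation, which this sketch omits):

```python
import numpy as np

def piv_shift(win_a, win_b):
    """Integer-cell shift of win_b relative to win_a, from the peak of
    their circular cross-correlation computed via FFT."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

    After scaling the per-window offsets by the grid cell size, the result is a horizontal displacement field between the two DEMs, as described above.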

  18. Development of Predictive Techniques for Estimating Liquid Water-Hydrate Equilibrium of Water-Hydrocarbon System

    Directory of Open Access Journals (Sweden)

    Amir H. Mohammadi

    2009-01-01

    In this communication, we review recent studies by these authors on modeling the liquid water-hydrate equilibrium. With the aim of estimating the solubility of a pure hydrocarbon hydrate former in pure water in equilibrium with gas hydrates, a thermodynamic model is introduced based on the equality of water fugacity in the liquid water and hydrate phases. The solid solution theory of van der Waals-Platteeuw is employed for calculating the fugacity of water in the hydrate phase. The Henry's law approach and the activity coefficient method are used to calculate the fugacities of the hydrocarbon hydrate former and water in the liquid water phase, respectively. The results of this model are successfully compared with selected experimental data from the literature. A mathematical model based on a feed-forward artificial neural network algorithm is then introduced to estimate the solubility of a pure hydrocarbon hydrate former in pure water in equilibrium with gas hydrates. Independent experimental data (not employed in the training and testing steps) are used to successfully examine the reliability of this algorithm.

  19. Estimation techniques and simulation platforms for 77 GHz FMCW ACC radars

    Science.gov (United States)

    Bazzi, A.; Kärnfelt, C.; Péden, A.; Chonavel, T.; Galaup, P.; Bodereau, F.

    2012-01-01

    This paper presents two radar simulation platforms that have been developed and evaluated. One is based on the Advanced Design System (ADS) and the other on Matlab. Both platforms model a homodyne front-end 77 GHz radar based on commercially available monolithic microwave integrated circuits (MMICs). Known linear modulation formats such as the frequency-modulated continuous wave (FMCW) and three-segment FMCW have been studied, and a new variant, the dual FMCW, is proposed for easier association between beat frequencies while maintaining an excellent distance estimation of the targets. In the signal processing domain, new algorithms are proposed for the three-segment FMCW and for the dual FMCW. Both of these algorithms offer the choice of using either complex or real data; the former allows faster signal processing, whereas the latter enables a simplified front-end architecture. The estimation performance of the modulation formats has been evaluated using the Cramer-Rao and Barankin bounds. It is found that the dual FMCW modulation format is slightly better than the other two formats tested in this work. A threshold effect is found at a signal-to-noise ratio (SNR) of 12 dB, which means that, to be able to detect a target, the SNR should be above this value. In real hardware, the SNR detection limit should be set to at least about 15 dB.
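
    For a linear FMCW sweep, the target range follows directly from the beat frequency; a sketch with illustrative ACC-like parameters (not the exact values of the platforms above). Doppler is ignored here; the multi-segment formats exist precisely to separate range and Doppler contributions.

```python
def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s, c=3e8):
    """Range from beat frequency for a linear FMCW sweep:
    R = c * f_b * T / (2 * B)."""
    return c * f_beat_hz * sweep_time_s / (2 * sweep_bw_hz)
```

    For example, a 200 MHz sweep over 1 ms places a 100 m target at a beat frequency of roughly 133 kHz.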

  20. Estimates of the non-market value of sea turtles in Tobago using stated preference techniques.

    Science.gov (United States)

    Cazabon-Mannette, Michelle; Schuhmann, Peter W; Hailey, Adrian; Horrocks, Julia

    2017-05-01

    Economic benefits are derived from sea turtle tourism all over the world. Sea turtles also add value to underwater recreation and convey non-use values. This study examines the non-market value of sea turtles in Tobago. We use a choice experiment to estimate the value of sea turtle encounters to recreational SCUBA divers and the contingent valuation method to estimate the value of sea turtles to international tourists. Results indicate that turtle encounters were the most important dive attribute among those examined. Divers are willing to pay over US$62 per two-tank dive for the first turtle encounter. The mean willingness to pay (WTP) for turtle conservation among international visitors to Tobago was US$31.13, which reflects a significant non-use value associated with actions targeted at keeping sea turtles from going extinct. These results illustrate the significant non-use and non-consumptive use value of sea turtles, and highlight the importance of sea turtle conservation efforts in Tobago and throughout the Caribbean region. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Observationally-constrained estimates of aerosol optical depths (AODs) over East Asia via data assimilation techniques

    Science.gov (United States)

    Lee, K.; Lee, S.; Song, C. H.

    2015-12-01

    Aerosols not only have a direct effect on climate by scattering and absorbing the incident solar radiation, but also indirectly perturb the radiation budget by influencing the microphysics and dynamics of clouds. Aerosols also have a significant adverse impact on human health. Given the importance of aerosols in climate, considerable research efforts have been made to quantify the amount of aerosols in the form of the aerosol optical depth (AOD). AOD is provided by ground-based aerosol networks such as the Aerosol Robotic NETwork (AERONET), and is derived from satellite measurements. However, these observational datasets have limited areal and temporal coverage. To compensate for the data gaps, there have been several studies that provide AOD without data gaps by assimilating observational data and model outputs. In this study, AODs over East Asia simulated with the Community Multi-scale Air Quality (CMAQ) model and derived from the Geostationary Ocean Color Imager (GOCI) observations are interpolated via different data assimilation (DA) techniques, namely Cressman's method, Optimal Interpolation (OI), and Kriging, for the period of the Distributed Regional Aerosol Gridded Observation Networks (DRAGON) campaign (March-May 2012). Here, the interpolated results from the three DA techniques are validated intensively against AERONET AODs to determine the optimal DA method for providing the most reliable AODs over East Asia.
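
    Of the three DA schemes, Cressman's successive-correction method is the simplest to state. A single-pass sketch over scattered AOD increments (illustrative only, not the study's configuration; real use iterates with shrinking radii):

```python
import numpy as np

def cressman_update(grid_xy, background, obs_xy, obs_increment, radius):
    """One Cressman pass: nudge the background at each grid point by a
    distance-weighted mean of nearby observation increments
    (observation minus background at the observation location)."""
    analysis = np.array(background, dtype=float)
    obs_xy = np.asarray(obs_xy, dtype=float)
    obs_increment = np.asarray(obs_increment, dtype=float)
    r2 = radius ** 2
    for i, g in enumerate(np.asarray(grid_xy, dtype=float)):
        d2 = np.sum((obs_xy - g) ** 2, axis=1)
        w = np.where(d2 < r2, (r2 - d2) / (r2 + d2), 0.0)
        if w.sum() > 0.0:
            analysis[i] += np.sum(w * obs_increment) / w.sum()
    return analysis
```

    OI and Kriging replace these ad hoc weights with weights derived from error covariances, which is why they are usually preferred when those statistics are available.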

  2. Estimating cardiac substructures exposure from diverse radiotherapy techniques in treating left-sided breast cancer.

    Science.gov (United States)

    Zhang, Li; Mei, Xin; Chen, Xingxing; Hu, Weigang; Hu, Silong; Zhang, Yingjian; Shao, Zhimin; Guo, Xiaomao; Tuan, Jeffrey; Yu, Xiaoli

    2015-05-01

    The study compares the physical and biologically effective doses (BED) received by the heart and cardiac substructures using three-dimensional conformal RT (3D-CRT), intensity-modulated radiotherapy (IMRT), and simple IMRT (s-IMRT) in postoperative radiotherapy for patients with left-sided breast cancer. From October 2008 to February 2009, 14 patients with histologically confirmed left-sided breast cancer were enrolled and underwent contrast-enhanced computed tomography (CT) simulation and 18F-FDG positron emission tomography-CT to outline the left cardiac ventricle (LV) and other substructures. The linear-quadratic model was used to convert the physical doses received by critical points of the inner heart to BED. The maximal dose, minimum dose, dose received by 99% of the volume (D99) and dose received by 95% of the volume (D95) in target areas were significantly better using IMRT and s-IMRT when compared with 3D-CRT (P < 0.05). Compared with the 3D-CRT technique, IMRT and s-IMRT had superior target dose coverage and dose uniformity. IMRT significantly reduced the maximal RT dose to the heart and LV. IMRT and s-IMRT techniques did not reduce the volume of the heart and LV receiving high doses.
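
    The linear-quadratic conversion used for the cardiac point doses is compact enough to state exactly. The alpha/beta value is an input the study would have chosen; 3 Gy below is a common assumed choice for late effects, not a number from the abstract.

```python
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction_gy
    return n_fractions * d * (1 + d / alpha_beta_gy)
```

    For example, 25 fractions of 2 Gy with alpha/beta = 3 Gy give a BED of about 83.3 Gy.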

  3. Estimation of sea level variations with GPS/GLONASS-reflectometry technique

    Science.gov (United States)

    Padokhin, A. M.; Kurbatov, G. A.; Andreeva, E. S.; Nesterov, I. A.; Nazarenko, M. O.; Berbeneva, N. A.; Karlysheva, A. V.

    2017-11-01

    In the present paper we study GNSS-reflectometry methods for the estimation of sea level variations using a single GNSS receiver, based on the multipath propagation effects caused by the reflection of navigational signals from the sea surface. Such multipath propagation results in the appearance of an interference pattern in the signal-to-noise ratio (SNR) of GNSS signals at small satellite elevation angles, whose parameters are determined by the wavelength of the navigational signal and the height of the antenna phase center above the reflecting sea surface. In the current work we used GPS and GLONASS signals and measurements at the two working frequencies of both systems to study sea level variations, which almost doubles the number of observations compared to a GPS-only tide gauge. For the UNAVCO station sc02 and the collocated Friday Harbor NOAA tide gauge we show good agreement between GNSS-reflectometry and traditional mareograph sea level data.
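
    The retrieval rests on one relation: detrended SNR oscillates versus sin(elevation) with frequency 2h/λ, so the dominant spectral frequency gives the antenna height h above the reflecting surface. A brute-force periodogram sketch (real processing typically uses a Lomb-Scargle periodogram on unevenly sampled arcs):

```python
import numpy as np

def reflector_height(sin_elev, snr_detrended, wavelength_m):
    """Height of the antenna phase center above the reflecting surface
    from the dominant frequency f of SNR vs sin(elevation): h = lam*f/2."""
    freqs = np.linspace(1.0, 200.0, 4000)  # trial freqs, cycles per unit sin(e)
    power = [abs(np.sum(snr_detrended * np.exp(-2j * np.pi * f * sin_elev)))
             for f in freqs]
    return wavelength_m * freqs[int(np.argmax(power))] / 2.0
```

    Tracking h over successive satellite arcs, on both frequencies of GPS and GLONASS, yields the sea-level time series compared against the tide gauge.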

  4. Implementation and Test of an Online Embedded Grid Impedance Estimation Technique for PV Inverters

    DEFF Research Database (Denmark)

    Asiminoaei, Lucian; Teodorescu, Remus; Blaabjerg, Frede

    2005-01-01

    New and stronger power quality requirements are issued due to the increased amount of photovoltaic (PV) installations. In this paper different methods are used for continuous grid monitoring in PV inverters. By injecting a non-characteristic harmonic current and measuring the grid voltage response, it is possible to evaluate the grid impedance directly by the PV inverter, providing a fast and low-cost implementation. This principle theoretically provides an accurate result for the grid impedance, but when using it in the context of PV integration, different implementation issues strongly affect the quality of the results. This paper also presents a new impedance estimation method, including typical implementation problems encountered, and the solutions adopted for online grid impedance measurement. Practical tests on an existing PV inverter validate the chosen solution.
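
    The injection principle reduces to taking the ratio of the voltage and current phasors at the injected non-characteristic frequency. A sketch on sampled waveforms (single rectangular FFT window with the injection on an exact bin; the implementation issues the paper discusses, such as leakage and background distortion, are ignored here):

```python
import numpy as np

def grid_impedance(v_samples, i_samples, fs_hz, f_inj_hz):
    """Z(f_inj) = V(f_inj) / I(f_inj) from one FFT window."""
    n = len(v_samples)
    k = int(round(f_inj_hz * n / fs_hz))   # FFT bin of the injection
    return np.fft.fft(v_samples)[k] / np.fft.fft(i_samples)[k]
```

    The returned value is complex, so both the resistive and inductive parts of the grid impedance are recovered at once.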

  5. A parametric model and estimation techniques for the inharmonicity and tuning of the piano.

    Science.gov (United States)

    Rigaud, François; David, Bertrand; Daudet, Laurent

    2013-05-01

    Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. It is proposed in this paper to jointly model the inharmonicity and tuning of pianos on the whole compass. While using a small number of parameters, these models are able to reflect both the specificities of instrument design and the tuner's practice. An estimation algorithm is derived that can run not only on a set of isolated note recordings, but also on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some of the tuner's choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.
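
    The stiff-string model underlying such analyses predicts partials at f_n = n·f0·sqrt(1 + B·n²), with B the inharmonicity coefficient; octave stretching follows because the 2nd partial lies sharp of 2·f0. A direct sketch (the example B value is illustrative, not taken from the paper):

```python
import numpy as np

def partial_frequencies(f0_hz, B, n_partials=8):
    """Stiff-string partials: f_n = n * f0 * sqrt(1 + B * n**2)."""
    n = np.arange(1, n_partials + 1)
    return n * f0_hz * np.sqrt(1.0 + B * n ** 2)
```

    Estimating (f0, B) per note from recorded partials, and then regularizing both across the compass, is the joint modeling idea described above.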

  6. Comparative methane estimation from cattle based on total CO2 production using different techniques

    Directory of Open Access Journals (Sweden)

    Md N. Haque

    2017-06-01

    The objective of this study was to compare the precision of CH4 estimates using CO2 calculated from heat production (HP) by the CO2 method (CO2T) and CO2 measured in the respiration chamber (CO2R). The CO2R and CO2T study was conducted as a 3 × 3 Latin square design in which 3 Dexter heifers were allocated to metabolic cages for 3 periods. Each period consisted of 2 weeks of adaptation followed by 1 week of measurement with the CO2R and CO2T. The average body weight of the heifers was 226 ± 11 kg (mean ± SD). They were fed a total mixed ration, twice daily, with 1 of 3 supplements: wheat (W), molasses (M), or molasses mixed with sodium bicarbonate (Mbic). The dry matter intake (DMI; kg/day) was significantly greater (P < 0.001) in the metabolic cage compared with that in the respiration chamber. The daily CH4 (L/day) emission was strongly correlated (r = 0.78) between CO2T and CO2R. The daily CH4 (L/kg DMI) emission by the CO2T was of the same magnitude as by the CO2R. The measured CO2 (L/day) production in the respiration chamber was not different (P = 0.39) from the CO2 production calculated using the CO2T. These results indicate a reasonable accuracy and precision of CH4 estimation by the CO2T compared with the CO2R.

  7. Use of adsorption and gas chromatographic techniques in estimating biodegradation of indigenous crude oils

    International Nuclear Information System (INIS)

    Kokub, D.; Allahi, A.; Shafeeq, M.; Khalid, Z.M.; Malik, K.A.; Hussain, A.

    1993-01-01

    Indigenous crude oils could be degraded and emulsified to varying degrees by locally isolated bacteria. Degradation and emulsification were found to depend upon the chemical composition of the crude oils. Tando Alum and Khashkheli crude oils were emulsified in 27 and 33 days of incubation, respectively, while Joyamair crude oil did not emulsify, mainly due to the high viscosity of this oil. Using the adsorption chromatographic technique, oil from control (uninoculated) and biodegraded flasks was fractionated into deasphaltened oil, comprising the saturate, aromatic and NSO (nitrogen-, sulphur- and oxygen-containing) hydrocarbon fractions, and soluble asphaltenes. Saturate fractions from the control and degraded oil were further analysed by gas-liquid chromatography. From these analyses, it was observed that the saturate fraction was preferentially utilized, and the crude oils having greater contents of the saturate fraction were better emulsified than those low in this fraction. Utilization of the various fractions of crude oils was in the order saturate > aromatic > NSO. (author)

  8. The Technique for the Numerical Tolerances Estimations in the Construction of Compensated Accelerating Structures

    CERN Document Server

    Paramonov, V V

    2004-01-01

    The requirements on cell manufacturing precision and tuning in the construction of multi-cell accelerating structures come from the required accelerating field uniformity, which is based on beam dynamics demands. The standard deviation of the field distribution depends on the deviations of the accelerating- and coupling-mode frequencies, the stop-band width and the coupling coefficient. These deviations can be determined from the 3D field distributions for the accelerating and coupling modes and the displacements of the cell surface. With modern software this can be done separately for every specified part of the cell surface. Finally, the cell surface displacements are defined from the deviations of the cell dimensions. This technique makes it possible both to identify qualitatively the critical regions and to optimize quantitatively the definition of the tolerances.

  9. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and New Zealand intensive care adult patient data-base, 2008–2009

    Science.gov (United States)

    2012-01-01

    Background For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Methods Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008–2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of, firstly, log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and, secondly, unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using the Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R², concordance correlation coefficient [CCC], mean absolute error [MAE]) were established for each estimator. Results The data set consisted of 111663 patients from 131 ICUs, with mean (SD) age 60.6 (18.8) years; 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4 (5.1) days (median 1.8, range 0.17-60) and demonstrated marked kurtosis and right skew (29.4 and 4.4, respectively). BIC showed considerable spread, from a maximum of 509801 (OLS, raw scale) to a minimum of 210286 (LMM). R² ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for
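
    One concrete reason log-scale OLS needs care: naively exponentiating log-scale predictions underestimates mean LOS (Jensen's inequality). Duan's smearing factor corrects this; an intercept-only sketch on synthetic right-skewed data (not the study's risk-adjusted models):

```python
import numpy as np

rng = np.random.default_rng(1)
log_los = rng.normal(0.6, 0.9, size=5000)   # synthetic log length-of-stay
los = np.exp(log_los)

naive = np.exp(log_los.mean())              # back-transformed log-scale mean
smearing = np.exp(log_los - log_los.mean()).mean()  # Duan's smearing factor
corrected = naive * smearing                # recovers the sample mean here
```

    With covariates, the same factor, the mean of the exponentiated residuals, multiplies each subject's back-transformed prediction.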

  10. Excitation-Dependent Carrier lifetime and Diffusion Length in Bulk CdTe Determined by Time-Resolved Optical Pump-Probe Techniques.

    Energy Technology Data Exchange (ETDEWEB)

    Kuciauskas, Darius [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Scajev, Patrik [Vilnius University; Miasojedovas, Saulius [Vilnius University; Mekys, Algirdas [Vilnius University; Lynn, Kelvin G. [Washington State University; Swain, Santosh K. [Washington State University; Jarasiunas, Kestutis [Vilnius University

    2018-01-11

    We applied time-resolved pump-probe spectroscopy based on free-carrier absorption and light diffraction on a transient grating for direct measurements of the carrier lifetime and diffusion coefficient D in high-resistivity single-crystal CdTe (codoped with In and Er). The bulk carrier lifetime τ decreased from 670 ± 50 ns to 60 ± 10 ns with an increase of the excess carrier density N from 10^16 to 5 x 10^18 cm^-3, due to the excitation-dependent radiative recombination rate. In this N range, the carrier diffusion length dropped from 14 µm to 6 µm due to the lifetime decrease. Modeling of in-depth (axial) and in-plane (lateral) carrier diffusion provided the value of the surface recombination velocity S = 6 x 10^5 cm/s for the untreated surface. At even higher excitations, in the 10^19-3 x 10^20 cm^-3 density range, an increase of D from 5 to 20 cm^2/s due to carrier degeneracy was observed.
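
    The drop in diffusion length follows from L = sqrt(D·τ). A unit-keeping sketch consistent with the reported magnitudes (the example D value below is an assumed illustrative input, back-solved from L and τ, not a number quoted in the paper):

```python
import math

def diffusion_length_um(d_cm2_per_s, tau_ns):
    """Carrier diffusion length L = sqrt(D * tau), returned in microns."""
    l_cm = math.sqrt(d_cm2_per_s * tau_ns * 1e-9)
    return l_cm * 1e4
```

    For instance, D ≈ 2.9 cm²/s with τ = 670 ns gives L ≈ 14 µm, the order of the quoted low-excitation value.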

  11. Comparison of internal radiation doses estimated by MIRD and voxel techniques for a "family" of phantoms

    International Nuclear Information System (INIS)

    Smith, T.

    2000-01-01

    The aim of this study was to use a new system of realistic voxel phantoms, based on computed tomography scanning of humans, to assess its ability to specify the internal dosimetry of selected human examples in comparison with the well-established MIRD system of mathematical anthropomorphic phantoms. Differences in specific absorbed fractions between the two systems were inferred by using organ dose estimates as the end point for comparison. A ''family'' of voxel phantoms, comprising an 8-week-old baby, a 7-year-old child and a 38-year-old adult, was used and a close match to these was made by interpolating between organ doses estimated for pairs of the series of six MIRD phantoms. Using both systems, doses were calculated for up to 22 organs for four radiopharmaceuticals with widely differing biodistribution and emission characteristics (technetium-99m pertechnetate, administered without thyroid blocking; iodine-123 iodide; indium-111 antimyosin; oxygen-15 water). Organ dose estimates under the MIRD system were derived using the software MIRDOSE 3, which incorporates specific absorbed fraction (SAF) values for the MIRD phantom series. The voxel system uses software based on the same dose calculation formula in conjunction with SAF values determined by Monte Carlo analysis at the GSF of the three voxel phantoms. Effective doses were also compared. Substantial differences in organ weights were observed between the two systems, 18% differing by more than a factor of 2. Out of a total of 238 organ dose comparisons, 5% differed by more than a factor of 2 between the systems; these included some doses to walls of the GI tract, a significant result in relation to their high tissue weighting factors. Some of the largest differences in dose were associated with organs of lower significance in terms of radiosensitivity (e.g. thymus). 
In this small series, voxel organ doses tended to exceed MIRD values, on average, and a 10% difference was significant when all 238 organ doses
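
Both phantom systems rest on the same MIRD dose-calculation formula: the mean absorbed dose to a target organ is a sum, over source organs, of cumulated activity times an S value (which folds in the specific absorbed fractions). A minimal sketch, with purely illustrative numbers that come from neither phantom system:

```python
# MIRD schema: D(target) = sum over sources of A_tilde(source) * S(target <- source)
def organ_dose(a_tilde, s_values, target):
    """Mean absorbed dose (mGy) to `target`: cumulated activity in each
    source organ (MBq*s) times its S value (mGy per MBq*s), summed."""
    return sum(a_tilde[src] * s_values[(target, src)] for src in a_tilde)

# Purely illustrative numbers.
a_tilde = {"thyroid": 5.0e3, "stomach": 2.0e4}
s_values = {("thyroid", "thyroid"): 1.0e-4,   # self-dose term dominates
            ("thyroid", "stomach"): 2.0e-7}   # small cross-dose term
dose = organ_dose(a_tilde, s_values, "thyroid")
```

The comparison in the study amounts to swapping the SAF-derived S values (MIRDOSE 3 versus the GSF Monte Carlo voxel set) while the formula stays fixed.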

  12. Techniques and software tools for estimating ultrasonic signal-to-noise ratios

    Science.gov (United States)

    Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.

    2016-02-01

    At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding a graphical user interface (GUI), turning it into a user-friendly tool for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple.
A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average
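
The quantity these tools estimate, S/N, is in essence the peak defect echo amplitude measured against the RMS level of the backscattered grain noise. A minimal sketch of that ratio (the waveform samples below are made up; the real engine derives both quantities from the models named above):

```python
import math

def signal_to_noise(defect_signal, noise_trace):
    """Peak-to-RMS S/N: peak rectified defect echo amplitude divided by the
    root-mean-square level of the backscattered grain noise."""
    peak = max(abs(v) for v in defect_signal)
    rms = math.sqrt(sum(v * v for v in noise_trace) / len(noise_trace))
    return peak / rms

# Hypothetical waveform samples (arbitrary units).
defect = [0.0, 0.2, 0.9, -0.4, 0.1]
noise = [0.05, -0.1, 0.1, -0.05, 0.1, -0.1]
ratio = signal_to_noise(defect, noise)
```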

  13. Optimal Design for Reactivity Ratio Estimation: A Comparison of Techniques for AMPS/Acrylamide and AMPS/Acrylic Acid Copolymerizations

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2015-11-01

    Water-soluble polymers of acrylamide (AAm) and acrylic acid (AAc) have significant potential in enhanced oil recovery, as well as in other specialty applications. To improve the shear strength of the polymer, a third comonomer, 2-acrylamido-2-methylpropane sulfonic acid (AMPS), can be added to the pre-polymerization mixture. Copolymerization kinetics of AAm/AAc are well studied, but little is known about the other comonomer pairs (AMPS/AAm and AMPS/AAc). Hence, reactivity ratios for AMPS/AAm and AMPS/AAc copolymerization must be established first. A key aspect in the estimation of reliable reactivity ratios is design of experiments, which minimizes the number of experiments and provides increased information content (resulting in more precise parameter estimates). However, design of experiments is hardly ever used during copolymerization parameter estimation schemes. In the current work, copolymerization experiments for both AMPS/AAm and AMPS/AAc are designed using two optimal techniques (Tidwell-Mortimer and the error-in-variables-model (EVM)). From these optimally designed experiments, accurate reactivity ratio estimates are determined for AMPS/AAm (rAMPS = 0.18, rAAm = 0.85) and AMPS/AAc (rAMPS = 0.19, rAAc = 0.86).
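
Reactivity ratios like those reported feed the terminal-model (Mayo-Lewis) equation for instantaneous copolymer composition. A short sketch using the reported AMPS/AAm estimates, with the feed fraction chosen arbitrarily:

```python
def mayo_lewis(f1, r1, r2):
    """Instantaneous copolymer composition F1 from monomer feed fraction f1
    under the terminal model (Mayo-Lewis equation):
    F1 = (r1*f1^2 + f1*f2) / (r1*f1^2 + 2*f1*f2 + r2*f2^2)."""
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2
    return num / den

# Reported estimates for AMPS (monomer 1) / AAm (monomer 2).
F1 = mayo_lewis(0.5, r1=0.18, r2=0.85)
```

With both ratios below unity, the copolymer composition drifts toward an azeotrope rather than toward long homopolymer sequences.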

  14. Estimation of both optical and nonoptical surface water quality parameters using Landsat 8 OLI imagery and statistical techniques

    Science.gov (United States)

    Sharaf El Din, Essam; Zhang, Yun

    2017-10-01

    Traditional surface water quality assessment is costly, labor intensive, and time consuming; however, remote sensing has the potential to assess surface water quality because of its spatiotemporal consistency. Therefore, estimating concentrations of surface water quality parameters (SWQPs) from satellite imagery is essential. Remote sensing estimation of nonoptical SWQPs, such as chemical oxygen demand (COD), biochemical oxygen demand (BOD), and dissolved oxygen (DO), has not yet been performed because they are less likely to affect signals measured by satellite sensors. However, concentrations of nonoptical variables may be correlated with optical variables, such as turbidity and total suspended sediments, which do affect the reflected radiation. In this context, an indirect relationship between satellite multispectral data and COD, BOD, and DO can be assumed. Therefore, this research attempts to develop an approach integrating Landsat 8 band ratios and stepwise regression to estimate concentrations of both optical and nonoptical SWQPs. Compared with previous studies, a significant correlation between Landsat 8 surface reflectance and concentrations of SWQPs was achieved, with a coefficient of determination (R²) > 0.85. These findings demonstrate the possibility of using our technique to develop models that estimate concentrations of SWQPs and generate spatiotemporal maps of SWQPs from Landsat 8 imagery.
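
A stripped-down version of such a model, using hypothetical band-ratio predictors and ordinary least squares in place of the paper's full stepwise procedure:

```python
import numpy as np

# Hypothetical surface-reflectance band ratios (e.g. B3/B5 and B4/B2) and a
# measured water-quality parameter; all values are illustrative only.
b3_b5 = np.array([1.2, 1.5, 1.1, 1.8, 1.4])
b4_b2 = np.array([0.9, 1.1, 0.8, 1.3, 1.0])
y = np.array([10.7, 12.8, 9.9, 14.9, 12.0])    # e.g. turbidity (NTU)

# Ordinary least squares on the ratios; a stepwise procedure would add or
# drop ratio predictors according to their statistical significance.
X = np.column_stack([np.ones_like(y), b3_b5, b4_b2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```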

  15. Different techniques of excess 210Pb for sedimentation rate estimation in the Sarawak and Sabah coastal waters

    International Nuclear Information System (INIS)

    Zal Uyun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abdul Rahim Mohamed

    2010-01-01

    Sediment core samples were collected at eight stations in the Sarawak and Sabah coastal waters using a gravity box corer to estimate sedimentation rates based on the activity of excess 210 Pb. The sedimentation rates derived from four mathematical models (CIC, Shukla-CIC, CRS and ADE) were generally in good agreement, with similar or comparable values at all stations. However, statistical analysis using an independent-sample t-test indicated that the Shukla-CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in the study area. (author)
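
Of the models named, CIC (constant initial concentration) is the simplest to sketch: excess 210Pb activity decays exponentially with burial age, so the sedimentation rate follows from the surface and at-depth activities. The activity numbers below are illustrative, not the study's:

```python
import math

PB210_HALF_LIFE_YR = 22.3
DECAY_CONST = math.log(2) / PB210_HALF_LIFE_YR   # lambda, 1/yr

def cic_sedimentation_rate(depth_cm, surface_activity, activity_at_depth):
    """Constant Initial Concentration (CIC) model: excess 210Pb decays
    exponentially with burial age, so S = lambda * z / ln(A0 / Az), cm/yr."""
    return DECAY_CONST * depth_cm / math.log(surface_activity / activity_at_depth)

# Illustrative: activity falls from 120 to 30 Bq/kg over 20 cm of core.
rate = cic_sedimentation_rate(20.0, 120.0, 30.0)
```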

  16. Comparisons and Uncertainty in Fat and Adipose Tissue Estimation Techniques: The Northern Elephant Seal as a Case Study.

    Directory of Open Access Journals (Sweden)

    Lisa K Schwarz

    Fat mass and body condition are important metrics in bioenergetics and physiological studies. They can also link foraging success with demographic rates, making them key components of models that predict population-level outcomes of environmental change. Therefore, it is important to incorporate uncertainty in physiological indicators if results will lead to species management decisions. Maternal fat mass in elephant seals (Mirounga spp.) can predict reproductive rate and pup survival, but no one has quantified or identified the sources of uncertainty for the two fat mass estimation techniques (labeled-water and truncated cones). The current cones method can provide estimates of proportion adipose tissue in adult females and proportion fat of juveniles in northern elephant seals (M. angustirostris) comparable to labeled-water methods, but it does not work for all cases or species. We reviewed components and assumptions of the technique via measurements of seven early-molt and seven late-molt adult females. We show that seals are elliptical on land, rather than the assumed circular shape, and skin may account for a high proportion of what is often defined as blubber. Also, blubber extends past the neck-to-pelvis region, and comparisons of new and old ultrasound instrumentation indicate previous measurements of sculp thickness may be biased low. Accounting for such differences, and incorporating new measurements of blubber density and proportion of fat in blubber, we propose a modified cones method that can isolate blubber from non-blubber adipose tissue and separate fat into skin, blubber, and core compartments. Lastly, we found that adipose tissue and fat estimates using tritiated water may be biased high during the early molt. Both the tritiated water and modified cones methods had high, but reducible, uncertainty. The improved cones method for estimating body condition allows for more accurate quantification of the various tissue masses and may
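
The truncated-cones geometry being modified here can be sketched for the elliptical case: if both semi-axes of the cross-section taper linearly along a body segment, the segment volume integrates to a closed form. The dimensions below are illustrative, not measured values:

```python
import math

def elliptical_frustum_volume(a1, b1, a2, b2, h):
    """Volume of a truncated elliptical cone of length h with end semi-axes
    (a1, b1) and (a2, b2), assuming both semi-axes taper linearly:
    V = pi * h * (2*a1*b1 + 2*a2*b2 + a1*b2 + a2*b1) / 6."""
    return math.pi * h * (2.0 * a1 * b1 + 2.0 * a2 * b2 + a1 * b2 + a2 * b1) / 6.0

# Illustrative semi-axes and segment length in metres. With a == b at each
# end the formula reduces to the circular frustum pi*h/3*(r1^2 + r1*r2 + r2^2)
# assumed by the original cones method.
segment_volume = elliptical_frustum_volume(0.50, 0.30, 0.40, 0.25, 0.20)
```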

  17. Estimation of erosion rates on the slope land in Nganjuk district using 137Cs technique

    International Nuclear Information System (INIS)

    Barokah Aliyanta; Rahmadi Suprapto

    2009-01-01

    Erosion investigation was conducted using the natural radionuclide 137 Cs on slope land in Nganjuk district, East Java. The investigated area covers the Sawahan, Ngetos and Loceret sub-districts, with an area of more than 11,000 ha. Soil samples were collected and grouped based on soil type, location, land use, topography and drainage maps. Soil samples were taken from each group in sloping transects. Meanwhile, the reference samples were taken from four locations: 2 locations in protected forest, 1 location in a terraced garden and 1 location on a hill slope well covered by grass. The results show that the average reference inventory is 281 Bq/m². This value is used to calculate the annual erosion rate for the period from 1963 to 2006. Estimated erosion rates range from 2 to more than 100 ton/ha/yr, and the SDR varies from 17% to 100% along the T1 to T28 transects. (author)
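
Converting a 137Cs inventory deficit into an erosion rate requires a conversion model; the sketch below uses the simple proportional model, with bulk density and plough depth as assumed values rather than the study's:

```python
def proportional_model(ref_inventory, site_inventory, bulk_density_kg_m3,
                       plough_depth_m, years_elapsed):
    """Proportional model: annual soil loss Y (t/ha/yr) from the percentage
    reduction X of 137Cs inventory relative to the reference:
    Y = 10 * B * d * X / (100 * T), with T years since the 1963 fallout peak."""
    x_percent = 100.0 * (ref_inventory - site_inventory) / ref_inventory
    return 10.0 * bulk_density_kg_m3 * plough_depth_m * x_percent / (100.0 * years_elapsed)

# Reference inventory from the study (281 Bq/m2); the site inventory, bulk
# density and plough depth are illustrative. 1963-2006 gives T = 43 years.
soil_loss = proportional_model(281.0, 200.0, 1300.0, 0.25, 43.0)
```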

  18. Communications-imposed pilot workload - A comparison of sixteen estimation techniques

    Science.gov (United States)

    Casali, J. G.; Wierwille, W. W.

    1984-01-01

    Sixteen potential metrics of mental workload were investigated in regard to their relative sensitivity to communications load and their differential intrusion on primary task performance. A moving-base flight simulator was used to present three cross-country flights to each of 30 subject pilots, each flight varying only in the difficulty of the inherent communications requirements. With the exception of the rating scale measures, which were obtained immediately post-flight, all measures were taken over a seven-minute segment of the flight task. The results indicated that both the Modified Cooper-Harper and the workload Multi-descriptor rating scales were reliably sensitive to changes in communications load. The secondary task measure of time estimation and the physiological measure of pupil diameter also proved sensitive. As expected, those primary task measures which were direct measures of communicative performance were also sensitive to load, while aircraft control primary task measures were not, attesting to the task-specificity of such measures. Finally, the intrusion analysis revealed no differential interference between workload measures.

  19. A positional estimation technique for an autonomous land vehicle in an unstructured environment

    Science.gov (United States)

    Talluri, Raj; Aggarwal, J. K.

    1990-01-01

    This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information, and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
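
The heart of the method, scoring candidate DEM locations by how well their predicted horizon matches the horizon extracted from the image, can be caricatured in a few lines. Here 1D elevation-angle horizons and an exhaustive search stand in for the paper's perspective geometry and heuristic pruning:

```python
import math

def horizon_angles(profiles, eye_height):
    """Predicted horizon: for each azimuth, the maximum elevation angle along
    the terrain profile (elevations sampled at unit range steps)."""
    return [max(math.atan2(elev - eye_height, dist)
                for dist, elev in enumerate(profile, start=1))
            for profile in profiles]

def best_candidate(candidates, observed, eye_height=1.0):
    """Index of the candidate viewpoint whose predicted horizon best matches
    the observed horizon line (least sum of squared angle differences).
    An exhaustive stand-in for the paper's pruned heuristic search."""
    def score(profiles):
        pred = horizon_angles(profiles, eye_height)
        return sum((p - o) ** 2 for p, o in zip(pred, observed))
    return min(range(len(candidates)), key=lambda i: score(candidates[i]))

# Two hypothetical viewpoints, each with terrain profiles along two azimuths.
cand_a = [[2.0, 3.0, 5.0], [1.0, 1.5, 2.0]]
cand_b = [[0.5, 0.6, 0.7], [0.2, 0.3, 0.4]]
observed = horizon_angles(cand_a, 1.0)   # as if imaged from viewpoint A
where = best_candidate([cand_a, cand_b], observed)
```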

  20. Soil Erosion Estimation Using Remote Sensing Techniques in Wadi Yalamlam Basin, Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Jarbou A. Bahrawi

    2016-01-01

    Soil erosion is one of the major environmental problems in terms of soil degradation in Saudi Arabia. Soil erosion leads to significant on- and off-site impacts such as a significant decrease in the productive capacity of the land and sedimentation. The quantity of soil erosion depends mainly on vegetation cover, topography, soil type, and climate. This research studies the quantification of soil erosion under different levels of data availability in Wadi Yalamlam. Remote Sensing (RS) and Geographic Information Systems (GIS) techniques have been implemented for the assessment of the data, applying the Revised Universal Soil Loss Equation (RUSLE) for the calculation of the risk of erosion. Thirty-four soil samples were randomly selected for the calculation of the erodibility factor, based on calculating the K-factor values derived from soil property surfaces after interpolating soil sampling points. The soil erosion risk map was reclassified into five erosion risk classes, and 19.3% of Wadi Yalamlam is under very severe risk (37,740 ha). GIS and RS proved to be powerful instruments for mapping soil erosion risk, providing sufficient tools for the analytical part of this research. The mapping results confirmed the role of RUSLE as a decision support tool.
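
RUSLE itself is a product of five factors, A = R · K · LS · C · P; a per-cell sketch with illustrative factor values (not taken from the study):

```python
def rusle_soil_loss(r, k, ls, c, p):
    """RUSLE: A = R * K * LS * C * P, average annual soil loss per unit area
    from rainfall erosivity R, soil erodibility K, slope length-steepness LS,
    cover-management C and support-practice P factors."""
    return r * k * ls * c * p

# Illustrative factor values for a single grid cell.
cell_loss = rusle_soil_loss(r=350.0, k=0.3, ls=1.8, c=0.25, p=1.0)
```

In the GIS workflow the same multiplication is applied cell by cell to the interpolated factor rasters, then the result is reclassified into risk classes.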

  1. A Comprehensive Review on Water Quality Parameters Estimation Using Remote Sensing Techniques.

    Science.gov (United States)

    Gholizadeh, Mohammad Haji; Melesse, Assefa M; Reddi, Lakshmi

    2016-08-16

    Remotely sensed data can reinforce the abilities of water resources researchers and decision makers to monitor waterbodies more effectively. Remote sensing techniques have been widely used to measure the qualitative parameters of waterbodies (i.e., suspended sediments, colored dissolved organic matter (CDOM), chlorophyll-a, and pollutants). A large number of different sensors on board various satellites and other platforms, such as airplanes, are currently used to measure the amount of radiation at different wavelengths reflected from the water's surface. In this review paper, various properties (spectral, spatial and temporal, etc.) of the more commonly employed spaceborne and airborne sensors are tabulated to be used as a sensor selection guide. Furthermore, this paper investigates the commonly used approaches and sensors employed in evaluating and quantifying the eleven water quality parameters. The parameters include: chlorophyll-a (chl-a), colored dissolved organic matters (CDOM), Secchi disk depth (SDD), turbidity, total suspended sediments (TSS), water temperature (WT), total phosphorus (TP), sea surface salinity (SSS), dissolved oxygen (DO), biochemical oxygen demand (BOD) and chemical oxygen demand (COD).

  2. Estimation of fracture aperture using simulation technique; Simulation wo mochiita fracture kaiko haba no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, T. [Geological Survey of Japan, Tsukuba (Japan); Abe, M. [Tohoku University, Sendai (Japan). Faculty of Engineering

    1996-10-01

    Characteristics of the amplitude variation around fractures were investigated using a simulation technique, for cases in which the fracture aperture was changed. Four models were used. Model-1 was a fracture model having a horizontal fracture at Z=0. For model-2, the fracture was replaced by a group of small fractures. Model-3 had a borehole diameter extended at Z=0 in the shape of a wedge. Model-4 had a low velocity layer at Z=0. The maximum amplitudes were compared across depths and models. For model-1, the amplitude became larger at the depth of the fracture, and smaller above the fracture. For model-2, when the cross width D increased to 4 cm, the amplitude approached that of model-1. For model-3, with its extended borehole diameter, almost no change of amplitude was observed above and below the fracture when the extension of the borehole diameter ranged between 1 cm and 2 cm. However, when the extension of the borehole diameter was 4 cm, the amplitude became smaller above the extended part of the borehole. 3 refs., 4 figs., 1 tab.

  3. A Comprehensive Review on Water Quality Parameters Estimation Using Remote Sensing Techniques

    Science.gov (United States)

    Gholizadeh, Mohammad Haji; Melesse, Assefa M.; Reddi, Lakshmi

    2016-01-01

    Remotely sensed data can reinforce the abilities of water resources researchers and decision makers to monitor waterbodies more effectively. Remote sensing techniques have been widely used to measure the qualitative parameters of waterbodies (i.e., suspended sediments, colored dissolved organic matter (CDOM), chlorophyll-a, and pollutants). A large number of different sensors on board various satellites and other platforms, such as airplanes, are currently used to measure the amount of radiation at different wavelengths reflected from the water’s surface. In this review paper, various properties (spectral, spatial and temporal, etc.) of the more commonly employed spaceborne and airborne sensors are tabulated to be used as a sensor selection guide. Furthermore, this paper investigates the commonly used approaches and sensors employed in evaluating and quantifying the eleven water quality parameters. The parameters include: chlorophyll-a (chl-a), colored dissolved organic matters (CDOM), Secchi disk depth (SDD), turbidity, total suspended sediments (TSS), water temperature (WT), total phosphorus (TP), sea surface salinity (SSS), dissolved oxygen (DO), biochemical oxygen demand (BOD) and chemical oxygen demand (COD). PMID:27537896

  4. Software development for estimating the conversion factor (k-factor) at suitable scan areas, relating the dose length product to the effective dose

    International Nuclear Information System (INIS)

    Kobayashi, Masanao; Asada, Yasuki; Suzuki, Syouichi; Kato, Ryouichi; Matsubara, Kosuke; Koshida, Kichiro; Matsunaga, Yuta; Kawaguchi, Ai; Haba, Tomonobu; Toyama, Hiroshi

    2017-01-01

    We developed k-factor-creator software (kFC) that provides the k-factor for a CT examination over an arbitrary scan area. It derives the k-factor from the effective dose and dose-length product computed with the Imaging Performance Assessment of CT scanners (ImPACT) and CT-EXPO packages. To assess its reliability, we compared the kFC-evaluated k-factors with those of International Commission on Radiological Protection (ICRP) Publication 102. To confirm its utility, the effective dose from coronary computed tomographic angiography (CCTA) was evaluated in a phantom study and in k-factor studies. In the CCTA, the effective doses were 5.28 mSv in the phantom study, 2.57 mSv (51%) with the ICRP k-factor, and 5.26 mSv (1%) with the kFC k-factor. Effective doses can be determined from the kFC-evaluated k-factors for suitable scan areas. Therefore, we speculate that the flexible k-factor is useful in clinical practice, because CT examinations are performed over various scan regions. (authors)
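
The k-factor's job is to convert a scanner-reported dose-length product into an effective dose, E ≈ k × DLP, which is why a k matched to the actual scan area matters. A one-line sketch with hypothetical values:

```python
def effective_dose_msv(k_factor, dlp):
    """Effective dose estimate E = k * DLP, with k in mSv/(mGy*cm) and the
    dose-length product DLP in mGy*cm."""
    return k_factor * dlp

# Hypothetical values, not the study's: k = 0.026 mSv/(mGy*cm), DLP = 200 mGy*cm.
e = effective_dose_msv(0.026, 200.0)
```

The halved ICRP estimate in the abstract illustrates the point: applying a k tabulated for a different scan region to the same DLP changes E proportionally.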

  5. Improved seismic risk estimation for Bucharest, based on multiple hazard scenarios, analytical methods and new techniques

    Science.gov (United States)

    Toma-Danila, Dragos; Florinela Manea, Elena; Ortanza Cioflan, Carmen

    2014-05-01

    Bucharest, the capital of Romania (1,678,000 inhabitants in 2011), is one of the big cities in Europe most exposed to seismic damage. The major earthquakes affecting the city have their origin in the Vrancea region. The Vrancea intermediate-depth source generates, statistically, 2-3 shocks with moment magnitude >7.0 per century. Although the focal distance is greater than 170 km, the historical records (from the 1838, 1894, 1908, 1940 and 1977 events) reveal severe effects in the Bucharest area, e.g. intensities of IX (MSK) for the 1940 event. During the 1977 earthquake, 1,420 people were killed and 33 large buildings collapsed. The present-day building stock is vulnerable due both to construction (material, age) and to soil conditions (high amplification generated within the weakly consolidated Quaternary deposits, whose thickness varies between 250 and 500 m across the city). Of 2,563 old buildings evaluated by experts, 373 are likely to experience severe damage or collapse in the next major earthquake. The total number of residential buildings in 2011 was 113,900. In order to guide mitigation measures, different studies have tried to estimate the seismic risk of Bucharest in terms of building, population or economic damage probability. Unfortunately, most of them were based on incomplete sets of data, whether regarding the hazard or the building stock in detail. However, during the DACEA Project, the National Institute for Earth Physics, together with the Technical University of Civil Engineering Bucharest and the NORSAR Institute, managed to compile a database of buildings in southern Romania (according to the 1999 census), with 48 associated capacity and fragility curves. Until now, the developed real-time estimation system had not been implemented for Bucharest. This paper presents more than an adaptation of this system to Bucharest; first, we analyze the previous seismic risk studies from a SWOT perspective.
This reveals that most of the studies don't use

  6. A preliminary study on sedimentation rate in Tasek Bera Lake estimated using Pb-210 dating technique

    International Nuclear Information System (INIS)

    Wan Zakaria Wan Muhamad Tahir; Johari Abdul Latif; Juhari Mohd Yusof; Kamaruzaman Mamat; Gharibreza, M.R.

    2010-01-01

    Tasek Bera is the largest natural lake system (60 ha) in Malaysia, located in southwest Pahang. The lake is a complex dendritic system consisting of extensive peat-swamp forests. The catchment was originally lowland dipterocarp forest, but over the past four decades this has largely been replaced with oil palm and rubber plantations developed by the Federal Land Development Authority (FELDA). Despite its environmental importance, Tasek Bera is seriously affected by erosion, sedimentation and morphological changes. Knowledge of accurate sedimentation rates and their causes is of utmost importance for appropriate management of lakes and future planning. In the present study, the environmental 210 Pb (natural) dating technique was applied to determine the sedimentation rate and pattern as well as the chronology of sediment deposits in Tasek Bera Lake. Three undisturbed core samples from different locations, at the main entry and exit points of the river mouth and in open water within the lake, were collected during a field sampling campaign in October 2009 and analyzed for 210 Pb using the gamma spectrometry method. Undisturbed sediments are classified as organic soils to peat with clayey texture, composed of 93% clay, 5% silt, and 2% very fine sand. Comparatively higher sedimentation rates were observed at the entry (0.06-1.58 cm/yr) and exit (0.05-1.55 cm/yr) points of the main river mouth than in the lake's open water (0.02-0.74 cm/yr). Reasons for the different patterns of sedimentation rate in this lake are discussed and conclusions drawn in this paper. (author)

  7. Estimating rumen microbial protein supply for indigenous ruminants using nuclear and purine excretion techniques in Indonesia

    International Nuclear Information System (INIS)

    Soejono, M.; Yusiati, L.M.; Budhi, S.P.S.; Widyobroto, B.P.; Bachrudin, Z.

    1999-01-01

    The microbial protein supply to ruminants can be estimated from the amount of purine derivatives (PD) excreted in the urine. Four experiments were conducted to evaluate the PD excretion method for Bali and Ongole cattle. In the first experiment, six male, two-year-old Bali cattle (Bos sondaicus) and six Ongole cattle (Bos indicus) of similar sex and age were used to quantify the endogenous contribution to total PD excretion in the urine. In the second experiment, four cattle from each breed were used to examine the response of PD excretion to feed intake. 14 C-uric acid was injected as a single dose to define the partitioning ratio of renal:non-renal losses of plasma PD. The third experiment examined the ratio of purine N:total N in a mixed rumen microbial population. The fourth experiment measured the enzyme activities of blood, liver and intestinal tissues concerned with PD metabolism. The first experiment showed that endogenous PD excretion was 145 ± 42.0 and 132 ± 20.0 μmol/kg W^0.75/d for Bali and Ongole cattle, respectively. The second experiment indicated that the proportion of plasma PD excreted in the urine of Bali and Ongole cattle was 0.78 and 0.77, respectively. Hence, the prediction of purine absorbed based on PD excretion can be stated as Y = 0.78 X + 0.145 W^0.75 and Y = 0.77 X + 0.132 W^0.75 for Bali and Ongole cattle, respectively. The third experiment showed that there were no differences in the ratio of purine N:total N in mixed rumen microbes of Bali and Ongole cattle (17% vs 18%). The last experiment showed that the intestinal xanthine oxidase activity of Bali cattle was lower than that of Ongole cattle (0.001 vs 0.015 μmol uric acid produced/min/g tissue) but that xanthine oxidase activity in the blood and liver of Bali cattle was higher than that of Ongole cattle (3.48 vs 1.34 μmol/min/L plasma and 0.191 vs 0.131 μmol/min/g liver tissue). Thus, there was no difference in PD excretion between these two breeds
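
The fitted relations invert directly: given urinary PD excretion and body weight, the purines absorbed follow from Y = 0.78 X + 0.145 W^0.75. A sketch, with hypothetical input values:

```python
def purines_absorbed(pd_excreted, weight_kg, slope=0.78, endogenous=0.145):
    """Invert Y = slope * X + endogenous * W**0.75 to estimate absorbed
    purines X (mmol/d) from urinary PD excretion Y (mmol/d) and body weight.
    Defaults are the Bali cattle estimates; for Ongole use 0.77 and 0.132."""
    return (pd_excreted - endogenous * weight_kg ** 0.75) / slope

# Hypothetical: 30 mmol PD/d measured for a 300 kg Bali animal.
x = purines_absorbed(30.0, 300.0)
```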

  8. The importance of record length in estimating the magnitude of climatic changes: an example using 175 years of lake ice-out dates in New England

    Science.gov (United States)

    Hodgkins, Glenn A.

    2013-01-01

    Many studies have shown that lake ice-out (break-up) dates in the Northern Hemisphere are useful indicators of late winter/early spring climate change. Trends in lake ice-out dates in New England, USA, were analyzed for 25, 50, 75, 100, 125, 150, and 175 year periods ending in 2008. More than 100 years of ice-out data were available for 19 of the 28 lakes in this study. The magnitude of trends over time depends on the length of the period considered. For the recent 25-year period, there was a mix of earlier and later ice-out dates. Lake ice-outs during the last 50 years became earlier by 1.8 days/decade (median change for all lakes with adequate data). This is a much higher rate than for longer historical periods; ice-outs became earlier by 0.6 days/decade during the last 75 years, 0.4 days/decade during the last 100 years, and 0.6 days/decade during the last 125 years. The significance of trends was assessed under the assumption of serial independence of historical ice-out dates and under the assumption of short- and long-term persistence. Hypolimnion dissolved oxygen (DO) levels are an important factor in lake eutrophication and coldwater fish survival. Based on historical data available at three lakes, 32 to 46% of the interannual variability of late summer hypolimnion DO levels was related to ice-out dates; earlier ice-outs were associated with lower DO levels.
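
A per-lake trend in days/decade can be estimated robustly as the median of all pairwise slopes (the Theil-Sen estimator). The abstract does not state which estimator the study used, so this is only one reasonable choice, shown on hypothetical data:

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(years, ice_out_doy):
    """Theil-Sen trend: median of slopes over all pairs of observations,
    in days per year; robust to occasional outlier ice-out dates."""
    slopes = [(d2 - d1) / (y2 - y1)
              for (y1, d1), (y2, d2) in combinations(zip(years, ice_out_doy), 2)]
    return median(slopes)

# Hypothetical record: ice-out day-of-year creeping earlier over 50 years.
years = [1960, 1970, 1980, 1990, 2000, 2010]
doy = [110, 109, 107, 106, 103, 101]
trend_per_decade = 10 * theil_sen_slope(years, doy)   # negative = earlier
```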

  9. Coastline Change Surround Sekampung River Estuary Estimated by Geographic Information System Technique

    Directory of Open Access Journals (Sweden)

    Fahri

    2011-05-01

    Around a large river estuary, the coastline is dynamic and changes over time, through natural processes and/or processes accelerated by human activities. The coastline around the Sekampung river estuary, located in the Rawa Sragi area, is one of the most dynamic coastlines in southern Lampung Province; it changed significantly from 1959 to 1987 (first as a natural process, then as a process accelerated by human activities after the government of Indonesia applied a swamp drainage system to the Rawa Sragi area). It is likely that the coastline also changed significantly in the period from 1987 to 2009, with the increasing intensity of human activities on the surrounding Rawa Sragi land. The objective of this research was to analyze the coastline change around the Sekampung river estuary in two periods: (1) the change of the 1959 – 1987 coastlines; and (2) the change of the 1987 – 2009 coastlines. The method of this research was a GIS technique, implemented in three main steps: (1) a first analysis conducted in the laboratory, including raster data source analysis and registration, coastline digitization, and overlaying and analysis of the coastline data; (2) field observation (ground check) to observe and verify the existing coastline on the ground; and (3) a final analysis conducted after the ground check to improve and verify the first coastline analysis results. The results indicated that the coastline change in the period 1959 to 1987 increased the coastal land by as much as 717.19 hectares, but decreased it by as much as 308.51 hectares; the coastline change in the period 1987 to 2009 increased the coastal land by as much as 162.504 hectares, but decreased it by as much as 492.734 hectares. The 1959 – 1987 coastline change was a coast land increasing period, but the 1987 – 2009 coastline change was a coast land
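
The reported gains and losses combine into net land change by simple differencing:

```python
def net_land_change(gain_ha, loss_ha):
    """Net coastal land change in hectares; positive = net accretion."""
    return gain_ha - loss_ha

net_1959_1987 = net_land_change(717.19, 308.51)     # reported first period
net_1987_2009 = net_land_change(162.504, 492.734)   # reported second period
```

The sign flip between the two periods (net gain of about 408.68 ha, then net loss of about 330.23 ha) is the study's headline contrast.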

  10. Exploration of deep S-wave velocity structure using microtremor array technique to estimate long-period ground motion

    International Nuclear Information System (INIS)

    Sato, Hiroaki; Higashi, Sadanori; Sato, Kiyotaka

    2007-01-01

    In this study, microtremor array measurements were conducted at 9 sites in the Niigata plain to explore the deep S-wave velocity structure for estimation of long-period earthquake ground motion. The 1D S-wave velocity profiles in the Niigata plain are characterized by 5 layers with S-wave velocities of 0.4, 0.8, 1.5, 2.1 and 3.0 km/s, respectively. The basement layer is deeper in the Niigata port area, on the Japan Sea side of the Niigata plain: about 4.8 km around Seirou town and about 4.1 km around Niigata city. These features of the basement depth in the Niigata plain are consistent with previous surveys. To verify the profiles derived from the microtremor array exploration, we estimated the group velocities of Love waves for four propagation paths of long-period earthquake ground motion during the Niigata-ken tyuetsu earthquake by the multiple filter technique, and compared them with the theoretical ones calculated from the derived profiles. As a result, it was confirmed that the group velocities from the derived profiles were in good agreement with those from long-period earthquake ground motion records during the Niigata-ken tyuetsu earthquake. Furthermore, we applied the estimation method of design basis earthquake input for seismically isolated nuclear power facilities, using the normal mode solution, to estimate long-period earthquake ground motion during the Niigata-ken tyuetsu earthquake. As a result, it was demonstrated that the applicability of the above method to the estimation of long-period earthquake ground motion was improved by using the derived 1D S-wave velocity profile. (author)

  11. [Estimation of a nationwide statistics of hernia operation applying data mining technique to the National Health Insurance Database].

    Science.gov (United States)

    Kang, Sunghong; Seon, Seok Kyung; Yang, Yeong-Ja; Lee, Aekyung; Bae, Jong-Myon

    2006-09-01

    The aim of this study was to develop a methodology for estimating nationwide statistics of hernia operations using the claims database of the Korea Health Insurance Cooperation (KHIC). According to the insurance claim procedures, the claims database was divided into the electronic data interchange database (EDI_DB) and the sheet database (Paper_DB). Although the EDI_DB has operation and management codes showing the facts and kinds of operations, the Paper_DB does not. Using the hernia-matched management code in the EDI_DB, the cases of hernia surgery were extracted. To draw the potential cases from the Paper_DB, which lacks the code, a predictive model was developed using the data mining process known as SEMMA. The claim sheets of cases whose predicted probability of an operation exceeded a threshold, determined from the ROC curve, were identified in order to obtain the positive predictive value as an index of the usefulness of the predictive model. Of the claims in 2004, 14,386 cases had hernia-related management codes in the EDI system. For model fitting with the data mining technique, logistic regression was chosen over the neural network and decision tree methods. From the Paper_DB, 1,019 cases were extracted as potential cases. Direct review of the sheets of the extracted cases showed that the positive predictive value was 95.3%. The results suggest that applying the data mining technique to the KHIC claims database for estimating nationwide surgical statistics would be useful in terms of feasibility and cost-effectiveness.
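
The threshold-from-ROC step described above can be illustrated with a generic sketch: choose the score cut-off that maximizes Youden's J (TPR - FPR) on the ROC curve, then report the positive predictive value among flagged cases. This is a reconstruction of the general idea, not the SEMMA/KHIC pipeline itself; the data in the usage example are synthetic.

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the score threshold maximizing TPR - FPR (Youden's J),
    a common way to choose an operating point on the ROC curve."""
    order = np.argsort(scores)[::-1]
    s, y = scores[order], labels[order]
    pos, neg = y.sum(), len(y) - y.sum()
    tpr = np.cumsum(y) / pos          # true positive rate at each cut
    fpr = np.cumsum(1 - y) / neg      # false positive rate at each cut
    best = np.argmax(tpr - fpr)
    return s[best]

def positive_predictive_value(scores, labels, threshold):
    """PPV among the cases flagged as positive at the given threshold."""
    flagged = scores >= threshold
    return labels[flagged].mean()
```

With well-separated score distributions for operated and non-operated cases, the chosen threshold yields a high PPV, which is the usefulness index reported in the abstract.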

  12. Application of PSO (particle swarm optimization) and GA (genetic algorithm) techniques on demand estimation of oil in Iran

    Energy Technology Data Exchange (ETDEWEB)

    Assareh, E.; Behrang, M.A. [Department of Mechanical Engineering, Young Researchers Club, Islamic Azad University, Dezful Branch (Iran, Islamic Republic of); Assari, M.R. [Department of Mechanical Engineering, Engineering Faculty, Jundi Shapour University, Dezful (Iran, Islamic Republic of); Ghanbarzadeh, A. [Department of Mechanical Engineering, Engineering Faculty, Shahid Chamran University, Ahvaz (Iran, Islamic Republic of)

    2010-12-15

    This paper presents the application of PSO (Particle Swarm Optimization) and GA (Genetic Algorithm) techniques to estimate oil demand in Iran, based on socio-economic indicators. The models are developed in two forms (exponential and linear) and applied to forecast oil demand in Iran. PSO-DEM and GA-DEM (PSO and GA demand estimation models) are developed to estimate future oil demand values based on population, GDP (gross domestic product), import and export data. Oil consumption in Iran from 1981 to 2005 is considered as the case of this study. The available data are used partly for finding the optimal or near-optimal values of the weighting parameters (1981-1999), and partly for testing the models (2000-2005). For the best results of GA, the average relative errors on testing data were 2.83% and 1.72% for GA-DEM{sub exponential} and GA-DEM{sub linear}, respectively. The corresponding values for PSO were 1.40% and 1.36% for PSO-DEM{sub exponential} and PSO-DEM{sub linear}, respectively. Oil demand in Iran is forecasted up to the year 2030. (author)
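
A minimal PSO of the kind used for such demand models can be sketched as follows: each particle is a candidate weight vector of a linear demand model, and the fitness to minimize is the sum of squared errors against observed demand. The inertia and acceleration constants are common textbook values, not those of the paper, and the data in the test are synthetic.

```python
import numpy as np

def pso_fit(X, y, n_particles=30, iters=300, seed=0):
    """Minimal PSO fitting the weights of a linear model y ~ X @ w,
    with sum of squared errors as the fitness to minimize."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(-1, 1, (n_particles, dim))   # particle positions = weights
    vel = np.zeros((n_particles, dim))

    def sse(w):
        r = X @ w - y
        return r @ r

    pbest = pos.copy()
    pbest_f = np.array([sse(w) for w in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w_in, c1, c2 = 0.72, 1.49, 1.49   # common constriction-type constants
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([sse(w) for w in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest
```

On a synthetic "demand" series that is exactly linear in normalized population and GDP, the swarm converges to weights that reproduce the series closely, mirroring the low relative errors reported for PSO-DEM{sub linear}.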

  13. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360 deg, 180 deg, and 90 deg sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated from the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360 deg, 180 deg, and 90 deg cases, respectively.

  14. Comparison of techniques for estimating PAH bioavailability: Uptake in Eisenia fetida, passive samplers and leaching using various solvents and additives

    Energy Technology Data Exchange (ETDEWEB)

    Bergknut, Magnus [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden)]. E-mail: magnus.bergknut@chem.umu.se; Sehlin, Emma [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Lundstedt, Staffan [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Andersson, Patrik L. [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Haglund, Peter [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Tysklind, Mats [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden)

    2007-01-15

    The aim of this study was to evaluate different techniques for assessing the availability of polycyclic aromatic hydrocarbons (PAHs) in soil. This was done by comparing the amounts (total and relative) taken up by the earthworm Eisenia fetida with the amounts extracted by solid-phase microextraction (SPME), semi-permeable membrane devices (SPMDs), leaching with various solvent mixtures, leaching using additives, and sequential leaching. Bioconcentration factors of PAHs in the earthworms predicted from equilibrium partitioning theory correlated poorly with observed values. This was most notable for PAHs with high concentrations in the studied soil. Evaluation by principal component analysis (PCA) showed distinct differences between the evaluated techniques and, generally, larger proportions of carcinogenic PAHs (4-6 fused rings) in the earthworms. These results suggest that it may be difficult to develop a chemical method capable of mimicking biological uptake, and thus of estimating the bioavailability of PAHs. - The total and relative amounts of PAHs extracted by abiotic techniques for assessing the bioavailability of PAHs were found to differ from the amounts taken up by Eisenia fetida.

  15. Primary length standard adjustment

    Science.gov (United States)

    Ševčík, Robert; Guttenová, Jana

    2007-04-01

    This paper deals with problems and techniques connected with adjusting the primary length standard, which involves disassembling the device and successively reassembling the laser using a secondary laser with a collimated beam and the laws of diffraction. During reassembly the device was enhanced by replacing the thermal-grease cooling of the cold finger with a copper socket cooler. The improved external cooling system cools the molecular iodine in the cell more effectively, which allows better pressure stability of the iodine vapor and easier readjustment of the system.

  16. Techniques for Estimating Emissions Factors from Forest Burning: ARCTAS and SEAC4RS Airborne Measurements Indicate which Fires Produce Ozone

    Science.gov (United States)

    Chatfield, Robert B.; Andreae, Meinrat O.

    2016-01-01

    Previous studies of emission factors from biomass burning are prone to large errors because they ignore the interplay of mixing and varying pre-fire background CO2 levels. Such complications severely affected our studies of 446 forest fire plume samples measured in the Western US by the science teams of NASA's SEAC4RS and ARCTAS airborne missions. Consequently we propose a Mixed Effects Regression Emission Technique (MERET) to check techniques like the Normalized Emission Ratio Method (NERM), where the use of sequential observations cannot disentangle emissions and mixing. We also evaluate a simpler "consensus" technique. All techniques relate emissions to fuel burned using C(burn) = delta-C(tot) added to the fire plume, where C(tot) approximately equals CO2 + CO. Mixed-effects regression can estimate pre-fire background values of C(tot) (indexed by observation j) simultaneously with emission factors indexed by individual species i, (delta-x(sub i)/C(sub burn))(sub i,j). MERET and "consensus" require more than emissions indicators. Our studies excluded samples where exogenous CO or CH4 might have been fed into a fire plume, mimicking emission. We sought to let the data on 13 gases and particulate properties suggest clusters of variables and plume types, using non-negative matrix factorization (NMF). While samples were mixtures, the NMF unmixing suggested purer burn types. Particulate properties (b(sub scat), b(sub abs), SSA, AAE) and gas-phase emissions were interrelated. Finally, we sought a simple categorization useful for modeling ozone production in plumes. Two kinds of fires produced high ozone: those with large fuel nitrogen, as evidenced by remnant CH3CN in the plumes, and those from very intense large burns. Fire types with optimal ratios of delta-NOy/delta-HCHO associate with the highest additional ozone per unit C(burn); perhaps these plumes exhibit limited NOx binding to reactive organics.
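
The core emission-ratio idea can be illustrated in a simplified single-species form: with the backgrounds assumed known (MERET's actual contribution is estimating them jointly by mixed-effects regression), the emission factor is the slope of the excess species concentration against C(burn). The function name and the synthetic data below are illustrative, not from the missions.

```python
import numpy as np

def emission_ratio(x, co2, co, background_x, background_ctot):
    """Simplified, single-species version of the emission-ratio idea:
    ER = delta-x / C_burn with C_burn = delta-C_tot and C_tot ~= CO2 + CO.
    Backgrounds are assumed known here; MERET estimates them jointly."""
    c_burn = (co2 + co) - background_ctot   # fuel-carbon excess per sample
    dx = x - background_x                   # species excess per sample
    # least-squares slope through the origin: ER = sum(dx*cb) / sum(cb^2)
    return np.sum(dx * c_burn) / np.sum(c_burn ** 2)
```

Regressing across many plume samples, rather than ratioing single pairs as in NERM, averages down the sample-to-sample mixing noise.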

  17. Risk estimation in association with diagnostic techniques in the nuclear medicine service of the Camaguey Ciego de Avila Territory

    International Nuclear Information System (INIS)

    Barrerras, C.A.; Brigido, F.O.; Naranjo, L.A.; Lasserra, S.O.; Hernandez Garcia, J.

    1999-01-01

    The nuclear medicine service at the Maria Curie Oncological Hospital, Camaguey, has over three decades of experience in using radiopharmaceutical imaging agents for diagnosis. Although the clinical risk associated with these techniques is negligible, it is necessary to evaluate the effective dose delivered to the patient due to the introduction of radioactive substances into the body. The study of the dose administered to the patient provides useful data for evaluating the detriment associated with this medical practice, for its subsequent optimization and, consequently, for minimizing the stochastic effects on the patient. The aim of our paper is to study the collective effective dose delivered by the nuclear medicine service to the Camaguey and Ciego de Avila population from 1995 to 1998, and the relative contribution of the different diagnostic examinations to the total annual collective effective dose. The studies were conducted on the basis of statistics from nuclear medicine examinations given to a population of 1102353 inhabitants since 1995. The results show that the nuclear medicine techniques of neck examinations with 1168.8 man Sv (1.11 Sv/expl), thyroid explorations with 119.6 man Sv (55.5 mSv/expl) and iodine uptake with 113.7 man Sv (14.0 mSv/expl) are the main contributors to the total annual collective effective dose of 1419.5 man Sv. The risk estimated in association with the diagnostic techniques of the nuclear medicine service studied is globally low (total detriment: 103.6 as a result of 16232 explorations), similar to other published data.

  18. Benthic O-2 uptake of two cold-water coral communities estimated with the non-invasive eddy correlation technique

    DEFF Research Database (Denmark)

    Rovelli, Lorenzo; Attard, Karl M.; Bryant, Lee D.

    2015-01-01

    Benthic O2 uptake rates ranged between 5 and 46 mmol m(-2) d(-1), mainly depending on the ambient flow characteristics. The average uptake rate estimated from the ~24 h long deployments amounted to 27.8 +/- 2.3 mmol m(-2) d(-1) at Mingulay and 24.8 +/- 2.6 mmol m(-2) d(-1) at Stjernsund (mean +/- SE). These rates are 4 to 5 times higher than the global mean for soft sediment communities at comparable depths. The measurements document the importance of CWC communities for local and regional carbon cycling and demonstrate that the EC technique is a valuable tool for assessing rates of benthic O2 uptake in such complex ...
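
The eddy correlation (EC) principle behind these flux estimates is a covariance: the flux is the Reynolds-averaged product of vertical velocity fluctuations w' and concentration fluctuations C' over fixed averaging windows. A minimal sketch with synthetic, arbitrarily scaled series follows; the window length and sampling rate are illustrative, not the instrument settings of the study.

```python
import numpy as np

def eddy_flux(w, c, fs, window_s=600):
    """Eddy correlation flux sketch: mean of the product of fluctuations
    w' and C' about block means (Reynolds averaging per window).
    w: vertical velocity series, c: O2 concentration series, fs: Hz."""
    n_win = int(window_s * fs)
    fluxes = []
    for start in range(0, len(w) - n_win + 1, n_win):
        ws = w[start:start + n_win]
        cs = c[start:start + n_win]
        fluxes.append(np.mean((ws - ws.mean()) * (cs - cs.mean())))
    return np.array(fluxes)   # one flux estimate per averaging window
```

A negative w'C' covariance corresponds to O2 transport toward the bed, i.e. benthic uptake.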

  19. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing TechniqueIncluding Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    The DeltaQ test is a method of estimating the air leakage from forced air duct systems. Developed primarily for residential and small commercial applications, it uses the changes in blower door test results due to forced air system operation. Previous studies established the principles behind DeltaQ testing, but raised issues of precision of the test, particularly for leaky homes on windy days. Details of the measurement technique are available in an ASTM Standard (ASTM E1554-2007). In order to ease adoption of the test method, this study answers questions regarding the uncertainty due to changing weather during the test (particularly changes in wind speed) and the applicability to low-leakage systems. The first question arises because the building envelope air flows and pressures used in the DeltaQ test are influenced by weather-induced pressures. Variability in wind-induced pressures rather than temperature-difference-induced pressures dominates this effect because the wind pressures change rapidly over the time period of a test. The second question needs to be answered so that DeltaQ testing can be used in programs requiring or giving credit for tight ducts (e.g., California's Building Energy Code (CEC 2005)). DeltaQ modeling biases have previously been investigated in laboratory studies where there were no weather-induced changes in envelope flows and pressures. Laboratory work by Andrews (2002) and Walker et al. (2004) found biases of about 0.5% of forced air system blower flow and individual test uncertainty of about 2% of forced air system blower flow. The laboratory tests were repeated by Walker and Dickerhoff (2006 and 2008) using a new ramping technique that continuously varied envelope pressures and air flows rather than taking data at pre-selected pressure stations (as used in ASTM E1554-2003 and other previous studies). The biases and individual test uncertainties for ramping were found to be very close (less than 0.5% of air handler flow) to those

  20. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion from videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
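
The Kalman-filter stage of such a stabilization pipeline can be sketched for a single translation component: a constant-velocity state model smooths the jittery inter-frame motion estimates while following the intentional camera path. The noise parameters below are illustrative defaults, not the paper's tuned model.

```python
import numpy as np

def kalman_smooth(measurements, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over a 1D camera translation:
    state [position, velocity]; q scales process noise, r is the
    measurement noise variance of the raw motion estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                       # state transition
    H = np.array([[1.0, 0.0]])                                  # observe position only
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])                       # process noise
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new raw motion measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

The filtered trajectory tracks the smooth underlying pan while suppressing frame-to-frame jitter, which is the effect exploited before the adjacent-frame compensation step.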

  1. Validation of myocardial blood flow estimation with nitrogen-13 ammonia PET by the argon inert gas technique in humans

    International Nuclear Information System (INIS)

    Kotzerke, J.; Glatting, G.; Neumaier, B.; Reske, S.N.; Hoff, J. van den; Hoeher, M.; Woehrle, J.

    2001-01-01

    We simultaneously determined global myocardial blood flow (MBF) by the argon inert gas technique and by nitrogen-13 ammonia positron emission tomography (PET) to validate PET-derived MBF values in humans. A total of 19 patients were investigated at rest (n=19) and during adenosine-induced hyperaemia (n=16). Regional coronary artery stenoses were ruled out by angiography. The argon inert gas method uses the difference between arterial and coronary sinus argon concentrations during inhalation of a mixture of 75% argon and 25% oxygen to estimate global MBF. It can be considered as valid as the microspheres technique, which, however, cannot be applied in humans. Dynamic PET was performed after injection of 0.8±0.2 GBq of 13N-ammonia, and MBF was calculated applying a two-tissue compartment model. MBF values derived from the argon method at rest and during the hyperaemic state were 1.03±0.24 ml min(-1) g(-1) and 2.64±1.02 ml min(-1) g(-1), respectively. MBF values derived from ammonia PET at rest and during hyperaemia were 0.95±0.23 ml min(-1) g(-1) and 2.44±0.81 ml min(-1) g(-1), respectively. The correlation between the two methods was close (y=0.92x+0.14, r=0.96), validating the MBF values derived from 13N-ammonia PET. (orig.)

  2. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study.

    Science.gov (United States)

    De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W

    2017-12-01

    Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). To develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided in consensus about the stages. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer Learning as a type of Deep Learning Convolutional Neural Network approach outperformed all other tested approaches. 
Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was

  3. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    Science.gov (United States)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances and deposit in sensitive regions, potentially harming local ecosystems. However, quantifying ammonia emissions from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when the wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data with modeled emissions from more sophisticated backward Lagrangian stochastic (bLS) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations over a 225-m path. This particular laser is utilized in agricultural settings and, in combination with a bLS model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  4. CEBAF Upgrade Bunch Length Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, Mahmoud [Old Dominion Univ., Norfolk, VA (United States)

    2016-05-01

    Many accelerators use short electron bunches and measuring the bunch length is important for efficient operations. CEBAF needs a suitable bunch length because bunches that are too long will result in beam interruption to the halls due to excessive energy spread and beam loss. In this work, bunch length is measured by invasive and non-invasive techniques at different beam energies. Two new measurement techniques have been commissioned; a harmonic cavity showed good results compared to expectations from simulation, and a real time interferometer is commissioned and first checkouts were performed. Three other techniques were used for measurements and comparison purposes without modifying the old procedures. Two of them can be used when the beam is not compressed longitudinally while the other one, the synchrotron light monitor, can be used with compressed or uncompressed beam.

  5. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    Two trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for the measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are straightforward when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant, but the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers, including 1 hospital in Bamako, Mali, indicated some practical problems, and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control, for example, could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live-born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted by month and year of occurrence. The interviews took only slightly longer than the basic survey.
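
Computationally, the preceding birth technique reduces to a very small index: among mothers interviewed at a subsequent delivery, the proportion whose previous child has died approximates mortality from birth to roughly the mean birth interval (often taken as about age two). The sketch below is a toy illustration; converting the proportion to a life-table measure such as q(2) requires population-specific assumptions that it does not model.

```python
def preceding_birth_index(previous_children_alive):
    """Preceding-birth technique sketch: proportion of previous children
    reported dead among mothers asked at a subsequent delivery."""
    n = len(previous_children_alive)
    dead = sum(1 for alive in previous_children_alive if not alive)
    return dead / n

# Example: 10 of 100 previous children reported dead -> index 0.10
index = preceding_birth_index([True] * 90 + [False] * 10)
```

The appeal of the method, as the abstract notes, is exactly this simplicity: one retrospective question per mother, collected at routine contacts such as delivery or immunization visits.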

  6. Parametric, bootstrap, and jackknife variance estimators for the k-Nearest Neighbors technique with illustrations using forest inventory and satellite image data

    Science.gov (United States)

    Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo. Chirici

    2011-01-01

    Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...
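
A bootstrap variance estimator for a k-NN mean, of the kind compared in this work, can be sketched as follows: resample the sample units with replacement, re-run the k-NN prediction for the target set each time, and take the variance of the resulting means. The feature space, k, and the number of replicates below are illustrative; the paper's forest inventory and satellite image data are replaced by synthetic values.

```python
import numpy as np

def knn_predict(X_train, y_train, X_target, k=3):
    """Plain k-NN regression: mean response of the k nearest sample units
    (Euclidean distance in feature space, e.g. spectral bands)."""
    preds = []
    for x in X_target:
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
        nn = np.argsort(d)[:k]
        preds.append(y_train[nn].mean())
    return np.array(preds)

def bootstrap_variance_of_mean(X_train, y_train, X_target, k=3, B=200, seed=0):
    """Bootstrap variance estimator for the k-NN estimate of the target
    mean: resample sample units with replacement, re-predict, and take
    the variance of the B replicate means."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    means = []
    for _ in range(B):
        idx = rng.integers(0, n, n)   # bootstrap resample of sample units
        means.append(knn_predict(X_train[idx], y_train[idx], X_target, k).mean())
    return np.var(means, ddof=1)
```

Parametric and jackknife estimators target the same quantity by different routes; the bootstrap version above is typically the simplest to implement but the most expensive to run.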

  7. Evaluation of the SF6 tracer technique for estimating methane emission rates with reference to dairy cows using a mechanistic model

    NARCIS (Netherlands)

    Berends, H.; Gerrits, W.J.J.; France, J.; Ellis, J.L.; Zijderveld, van S.M.; Dijkstra, J.

    2014-01-01

    A dynamic, mechanistic model of the sulfur hexafluoride (SF6) tracer technique, used for estimating methane (CH4) emission rates from ruminants, was constructed to evaluate the accuracy of the technique. The model consists of six state variables and six zero-pools representing the quantities of SF6

  8. Performance of the Angstrom-Prescott Model (A-P) and SVM and ANN techniques to estimate daily global solar irradiation in Botucatu/SP/Brazil

    Science.gov (United States)

    da Silva, Maurício Bruno Prado; Francisco Escobedo, João; Juliana Rossi, Taiza; dos Santos, Cícero Manoel; da Silva, Sílvia Helena Modenese Gorla

    2017-07-01

    This study describes a comparative study of different methods for estimating daily global solar irradiation (H): the Angstrom-Prescott (A-P) model and two Machine Learning (ML) techniques - Support Vector Machine (SVM) and Artificial Neural Network (ANN). The H database was measured from 1996 to 2011 in Botucatu/SP/Brazil. Different combinations of input variables were adopted. The MBE, RMSE, Willmott's d, r and r2 statistical indicators obtained in the validation of the A-P, SVM and ANN models showed that the SVM technique performs better in estimating H than the A-P and ANN models, and that the A-P model performs better than the ANN.
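
The A-P model in question is a linear regression of the clearness index on relative sunshine duration, H/H0 = a + b·(n/N), where H0 is extraterrestrial irradiation, n is measured sunshine hours and N is the day length. A minimal calibration sketch follows; the coefficients a = 0.25, b = 0.50 used in the test are the classic textbook defaults, not the Botucatu values.

```python
import numpy as np

def fit_angstrom_prescott(H, H0, n, N):
    """Fit the Angstrom-Prescott model H/H0 = a + b*(n/N) by ordinary
    least squares, returning the calibration coefficients (a, b)."""
    x = n / N                       # relative sunshine duration
    y = H / H0                      # clearness index
    b, a = np.polyfit(x, y, 1)      # polyfit returns [slope, intercept]
    return a, b

def predict_H(a, b, H0, n, N):
    """Estimate daily global irradiation from the fitted coefficients."""
    return H0 * (a + b * (n / N))
```

The ML models in the study play the same role as this regression but admit extra input variables and nonlinear mappings, which is where SVM gained its edge.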

  9. Fractional baud-length coding

    Directory of Open Access Journals (Sweden)

    J. Vierinen

    2011-06-01

    We present a novel approach for modulating radar transmissions in order to improve target range and Doppler estimation accuracy. This is achieved by using non-uniform baud lengths. With this method it is possible to increase sub-baud range-resolution of phase coded radar measurements while maintaining a narrow transmission bandwidth. We first derive target backscatter amplitude estimation error covariance matrix for arbitrary targets when estimating backscatter in amplitude domain. We define target optimality and discuss different search strategies that can be used to find well performing transmission envelopes. We give several simulated examples of the method showing that fractional baud-length coding results in smaller estimation errors than conventional uniform baud length transmission codes when estimating the target backscatter amplitude at sub-baud range resolution. We also demonstrate the method in practice by analyzing the range resolved power of a low-altitude meteor trail echo that was measured using a fractional baud-length experiment with the EISCAT UHF system.

  10. Estimation of water quality parameters applying satellite data fusion and mining techniques in the lake Albufera de Valencia (Spain)

    Science.gov (United States)

    Doña, Carolina; Chang, Ni-Bin; Vannah, Benjamin W.; Sánchez, Juan Manuel; Delegido, Jesús; Camacho, Antonio; Caselles, Vicente

    2014-05-01

    Linked to the enforcement of the European Water Framework Directive (2000) (WFD), which establishes that all countries of the European Union have to avoid deterioration of, improve and restore the status of their water bodies and maintain their good ecological status, several remote sensing studies have been carried out to monitor and understand trends in water quality variables. Lake Albufera de Valencia (Spain) is a hypereutrophic system that can present chlorophyll a concentrations over 200 mg·m-3 and transparency (Secchi disk) values below 20 cm, so its water quality needs to be restored and improved. The principal aim of our work was to develop algorithms to estimate water quality parameters such as chlorophyll a concentration and water transparency, which are indicative of the eutrophication and ecological status, using remote sensing data. Remote sensing data from Terra/MODIS, Landsat 5-TM and Landsat 7-ETM+ images were used to carry out this study. Landsat images are useful for analyzing the spatial variability of water quality variables, as well as for monitoring small to medium size water bodies, due to their 30-m spatial resolution. But the poor temporal resolution of Landsat, with a 16-day revisit time, is an issue. In this work we tried to fill this data gap by applying fusion techniques between Landsat and MODIS images. Although the spatial resolution of MODIS is coarser (250/500-m), one image per day is available. Thus, synthetic Landsat images were created using data fusion for dates with no acquisitions. Good correlation values were obtained when comparing original and synthetic Landsat images. Genetic programming was used to develop models for predicting water quality. Using the reflectance bands of the synthetic Landsat images as inputs to the model, values of R2 = 0.94 and RMSE = 8 mg·m-3 were obtained when comparing modeled and observed values of chlorophyll a, and values of R2 = 0.91 and RMSE = 4 cm for the transparency (Secchi disk). Finally, concentration

  11. Multi-technique combination of space geodesy observations: Impact of the Jason-2 satellite on the GPS satellite orbits estimation

    Science.gov (United States)

    Zoulida, Myriam; Pollet, Arnaud; Coulot, David; Perosanz, Félix; Loyer, Sylvain; Biancale, Richard; Rebischung, Paul

    2016-10-01

    In order to improve the Precise Orbit Determination (POD) of the GPS constellation and the Jason-2 Low Earth Orbiter (LEO), we carry out a simultaneous estimation of GPS satellite orbits along with Jason-2 orbits, using the GINS software. Along with GPS station observations, we use Jason-2 GPS, SLR and DORIS observations over a data span of 6 months (28/05/2011-03/12/2011). We use the Geophysical Data Records-D (GDR-D) orbit estimation standards for the Jason-2 satellite. A GPS-only solution is computed as well, in which only the GPS station observations are used. It appears that adding the LEO GPS observations results in an increase of about 0.7% in the percentage of fixed ambiguities with respect to the GPS-only solution. The resulting GPS orbits from both solutions are of equivalent quality, agreeing with each other at about 7 mm in Root Mean Square (RMS). Comparisons of the resulting GPS orbits to the International GNSS Service (IGS) final orbits show the same level of agreement for both the GPS-only orbits, at 1.38 cm RMS, and the GPS + Jason-2 orbits, at 1.33 cm RMS. We also compare the resulting Jason-2 orbits with the 3-technique Segment Sol multi-missions d'ALTimétrie, d'orbitographie et de localisation précise (SSALTO) POD products. The orbits show good agreement, with a global RMS of orbit differences of 2.02 cm and an RMS of 0.98 cm on the radial component.

  12. Feasibility study on estimation of rice weevil quantity in rice stock using near-infrared spectroscopy technique

    Directory of Open Access Journals (Sweden)

    Puttinun Jarruwat

    2014-07-01

    Full Text Available Thai rice is favored by large numbers of consumers on all continents because of its excellent taste, fragrant aroma and fine texture. Among all Thai rice varieties, Thai Hommali rice is the most preferred. Classification of rice as premium quality requires that almost all grain kernels of the rice be perfectly whole, with only a small quantity of foreign particles. Of all the foreign particles found in rice, rice weevils wreak the severest havoc on the quality and quantity of rice, transforming premium grade rice into low grade rice. It is widely known that rice millers adopt the "overdose" fumigation practice to control the birth and propagation of rice weevils, a practice which inevitably gives rise to pesticide residues on rice that end up in the bodies of consumers. However, if the population concentration of rice weevils could be approximated, the right amounts of chemicals for fumigation could be applied, and no overdose would be required. The objective of this study is thus to estimate the quantity of rice weevils in both milled rice and brown rice of the Thai Hommali rice variety using the near-infrared spectroscopy (NIRS) technique. A Fourier transform near-infrared (FT-NIR) spectrometer was used in this research, covering the near-infrared wavelength range of 780–2500 nm. A total of 20 levels of rice weevil infestation, in increments of 10 from 10 to 200 mature rice weevils, were applied to 1680 rice samples. The spectral data and quantity of weevils were analyzed by partial least squares regression (PLSR) to establish the model for prediction. The results show that the model is able to estimate the quantity of weevils in milled Hommali rice and brown Hommali rice with high $R_{\rm val}^{2}$ of 0.96 and 0.90, high RPD of 6.07 and 3.26 and small bias of 2.93 and 2.94, respectively.
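    The calibration step described above can be sketched with a minimal, numpy-only PLS1 (NIPALS) regression on synthetic spectra; the data, the number of latent variables and the train/validation split below are illustrative assumptions, not the study's measurements:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) for a single response; returns the regression
    vector and the centering means."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        qk = yc @ t / tt                 # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # standard PLS1 regression vector
    return B, x_mean, y_mean

rng = np.random.default_rng(0)
counts = rng.integers(10, 201, 400).astype(float)            # "weevils" per sample
spectra = (rng.normal(0.5, 0.05, (400, 200))                 # baseline absorbance
           + np.outer(counts, np.linspace(1e-3, 4e-3, 200))  # count-dependent signal
           + rng.normal(0, 0.02, (400, 200)))                # instrument noise

B, xm, ym = pls1_fit(spectra[:300], counts[:300], n_components=5)
pred = ym + (spectra[300:] - xm) @ B

resid = counts[300:] - pred
r2 = 1 - resid @ resid / np.sum((counts[300:] - counts[300:].mean()) ** 2)
rpd = counts[300:].std(ddof=1) / resid.std(ddof=1)
print(f"R2 = {r2:.2f}, RPD = {rpd:.2f}, bias = {resid.mean():.2f}")
```

    In practice the number of latent variables would be chosen by cross-validation, and the statistics (R2, RPD, bias) computed on an independent validation set, as in the paper.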

  13. Exploiting Measurement Uncertainty Estimation in Evaluation of GOES-R ABI Image Navigation Accuracy Using Image Registration Techniques

    Science.gov (United States)

    Haas, Evan; DeLuccia, Frank

    2016-01-01

    In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
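    The registration step described above, translating one image against another to the correlation maximum, can be illustrated with an integer-pixel, FFT-based sketch (numpy only; the actual evaluation upsamples the sub-images to reach subpixel accuracy, and the scene here is synthetic):

```python
import numpy as np

def register_translation(ref, img):
    """Integer-pixel shift (dy, dx) such that img ~ np.roll(ref, (dy, dx)),
    found at the peak of the FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = ref.shape
    dy = iy if iy <= ny // 2 else iy - ny   # map wrap-around index to signed shift
    dx = ix if ix <= nx // 2 else ix - nx
    return dy, dx

rng = np.random.default_rng(1)
scene = rng.normal(size=(128, 128))             # stand-in for a high-contrast earth scene
shifted = np.roll(scene, (3, -5), axis=(0, 1))  # simulate a known navigation error
print(register_translation(scene, shifted))     # recovers the (3, -5) pixel shift
```

    Repeating this over many localized sub-images and averaging the recovered shifts mirrors the paper's strategy for a statistically reliable overall navigation error.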

  14. Estimating photometric redshifts for X-ray sources in the X-ATLAS field using machine-learning techniques

    Science.gov (United States)

    Mountrichas, G.; Corral, A.; Masoura, V. A.; Georgantopoulos, I.; Ruiz, A.; Georgakakis, A.; Carrera, F. J.; Fotopoulou, S.

    2017-12-01

    We present photometric redshifts for 1031 X-ray sources in the X-ATLAS field using the machine-learning technique TPZ. X-ATLAS covers 7.1 deg2 observed with XMM-Newton within the Science Demonstration Phase of the H-ATLAS field, making it one of the largest contiguous areas of the sky with both XMM-Newton and Herschel coverage. All of the sources have available SDSS photometry, while 810 additionally have mid-IR and/or near-IR photometry. A spectroscopic sample of 5157 sources primarily in the XMM/XXL field, but also from several X-ray surveys and the SDSS DR13 redshift catalogue, was used to train the algorithm. Our analysis reveals that the algorithm performs best when the sources are split, based on their optical morphology, into point-like and extended sources. Optical photometry alone is not enough to estimate accurate photometric redshifts, but the results greatly improve when at least mid-IR photometry is added in the training process. In particular, our measurements show that the estimated photometric redshifts for the X-ray sources of the training sample have a normalized absolute median deviation, nmad ≈ 0.06, and a percentage of outliers, η = 10-14%, depending upon whether the sources are extended or point like. Our final catalogue contains photometric redshifts for 933 out of the 1031 X-ray sources with a median redshift of 0.9. The table of the photometric redshifts is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A39
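    The quality metrics quoted above (nmad and the outlier fraction η) follow a standard photo-z convention; a small sketch with synthetic redshifts, where the 0.15 outlier cut is the conventional choice rather than a value taken from the paper:

```python
import numpy as np

def photoz_stats(z_spec, z_phot, outlier_cut=0.15):
    """Normalized median absolute deviation and outlier fraction, as commonly
    defined in photometric-redshift studies."""
    dz = (z_phot - z_spec) / (1 + z_spec)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    eta = np.mean(np.abs(dz) > outlier_cut)
    return nmad, eta

rng = np.random.default_rng(2)
z_spec = rng.uniform(0.1, 2.0, 5000)
z_phot = z_spec + 0.05 * (1 + z_spec) * rng.standard_normal(5000)  # 5% scatter
nmad, eta = photoz_stats(z_spec, z_phot)
print(f"nmad = {nmad:.3f}, outliers = {100 * eta:.1f}%")
```

    For Gaussian scatter the 1.4826 factor makes nmad a robust estimate of the standard deviation of dz, which is why it is preferred when a few catastrophic outliers are present.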

  15. Computer simulation of three-dimensional heavy ion beam trajectory imaging techniques used for magnetic field estimation

    Science.gov (United States)

    Ling, C.; Connor, K. A.; Demers, D. R.; Radke, R. J.; Schoch, P. M.

    2007-11-01

    A magnetic field mapping technique via heavy ion beam trajectory imaging is being developed on the Madison Symmetric Torus reversed field pinch. This paper describes the computational tools created to model camera images of the light emitted from a simulated ion beam, reconstruct a three-dimensional trajectory, and estimate the accuracy of the reconstruction. First, a computer model is used to create images of the torus interior from any candidate camera location. It is used to explore the visual field of the camera and thus to guide camera parameters and placement. Second, it is shown that a three-dimensional ion beam trajectory can be recovered from a pair of perspectively projected trajectory images. The reconstruction considers effects due to finite beam size, nonuniform beam current density, and image background noise. Third, it is demonstrated that the trajectory reconstructed from camera images can help compute magnetic field profiles, and might be used as an additional constraint to an equilibrium reconstruction code, such as MSTFit.

  16. Estimating fermentation characteristics and nutritive value of ensiled and dried pomegranate seeds for ruminants using in vitro gas production technique.

    Science.gov (United States)

    Taher-Maddah, M; Maheri-Sis, N; Salamatdoustnobar, R; Ahmadzadeh, A

    2012-01-01

    The purpose of this study was to determine the chemical composition and to estimate the fermentation characteristics and nutritive value of ensiled and dried pomegranate seeds using the in vitro gas production technique. Samples were collected, mixed, processed (ensiled and dried) and incubated in vitro with rumen liquor taken from three fistulated Iranian native (Taleshi) steers for 2, 4, 6, 8, 12, 16, 24, 36, 48, 72 and 96 h. The results showed that ensiling led to a significant increase in gas production of pomegranate seeds at all incubation times. The gas volumes at 24 h of incubation were 25.76 and 17.91 ml/200 mg DM for ensiled and dried pomegranate seeds, respectively. The gas production rate constant (c) was also significantly higher for the ensiled group than the dried one (0.0930 vs. 0.0643 h-1). The organic matter digestibility (OMD), metabolizable energy (ME), net energy for lactation (NEL) and short chain fatty acids (SCFA) of ensiled pomegranate seeds were significantly higher than those of the dried samples (43.15%, 6.37 MJ/kg DM, 4.43 MJ/kg DM and 0.5553 mmol for ensiled samples vs. 34.62%, 5.10 MJ/kg DM, 3.56 MJ/kg DM and 0.3680 mmol for dried samples, respectively). It can be concluded that ensiling increases the nutritive value of pomegranate seeds.
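    Gas-production kinetics of this kind are conventionally described by the exponential model p(t) = b(1 − e^(−ct)), the classic Ørskov–McDonald form, whose rate constant c corresponds to the quantity reported above. A brief scipy sketch fitting that model to synthetic incubation data (the values are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def gas_model(t, b, c):
    """Exponential gas-production model p(t) = b*(1 - exp(-c*t));
    b = asymptotic gas volume, c = fractional rate constant (per hour)."""
    return b * (1 - np.exp(-c * t))

# Incubation times (h) as in the study, with synthetic "ensiled" gas volumes
t = np.array([2, 4, 6, 8, 12, 16, 24, 36, 48, 72, 96], dtype=float)
rng = np.random.default_rng(3)
obs = gas_model(t, 40.0, 0.093) + rng.normal(0, 0.5, t.size)  # small measurement noise

(b_hat, c_hat), _ = curve_fit(gas_model, t, obs, p0=(30.0, 0.05))
print(f"b = {b_hat:.1f} ml/200 mg DM, c = {c_hat:.4f} per h")
```

    With real incubation series the fitted b and c feed directly into the regression equations used to derive OMD, ME, NEL and SCFA.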

  17. Evaluating the use of electrical resistivity imaging technique for improving CH4 and CO2 emission rate estimations in landfills

    International Nuclear Information System (INIS)

    Georgaki, I.; Soupios, P.; Sakkas, N.; Ververidis, F.; Trantas, E.; Vallianatos, F.; Manios, T.

    2008-01-01

    In order to improve the estimation of surface gas emissions in landfills, we evaluated a combination of geophysical and greenhouse gas measurement methodologies. Based on fifteen 2D electrical resistivity tomographies (ERTs), longitudinal cross-section images of the buried waste layers were developed, identifying the location and cross-sectional extent of organic waste (OW), organic waste saturated in leachates (SOW), and low-organic and non-organic waste. CH4 and CO2 emission measurements were then conducted using the static chamber technique at 5 surface points along two tomographies: (a) across a high-emitting area, ERT no. 2, where different amounts of relatively fresh OW and SOW were detected, and (b) across the oldest (at least eight years) cell in the landfill, ERT no. 6, with significant amounts of OW. The highest recorded emission rates were strongly affected by the thickness of the OW and SOW fractions underneath each gas sampling point. The main reason for lower than expected values was the age of the layered buried waste. Lower than predicted emissions were also attributed to soil condition, as was the case at sampling points with surface ponding, i.e. surface accumulation of leachate (or precipitated water)

  18. Effect of gadolinium on hepatic fat quantification using multi-echo reconstruction technique with T2* correction and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ge, Mingmei; Wu, Bing; Liu, Zhiqin; Song, Hai; Meng, Xiangfeng; Wu, Xinhuai [The Military General Hospital of Beijing PLA, Department of Radiology, Beijing (China); Zhang, Jing [The 309th Hospital of Chinese People' s Liberation Army, Department of Radiology, Beijing (China)

    2016-06-15

    Our aim was to determine whether hepatic fat quantification is affected by the administration of gadolinium when using a multi-echo reconstruction technique with T2* correction and estimation. Forty-eight patients underwent the investigational sequence for hepatic fat quantification at 3.0T MRI once before and twice after administration of gadopentetate dimeglumine (0.1 mmol/kg). A one-way repeated-measures analysis of variance with pairwise comparisons was conducted to evaluate the systematic bias of fat fraction (FF) and R2* measurements between the three acquisitions. Bland-Altman plots were used to assess the agreement between pre- and post-contrast FF measurements in the liver. A P value <0.05 indicated a statistically significant difference. FF measurements of liver, spleen and spine revealed no significant systematic bias between the three measurements (P > 0.05 for all). Good agreement (95 % confidence interval) of FF measurements was demonstrated between pre-contrast and post-contrast1 (-0.49 %, 0.52 %) and post-contrast2 (-0.83 %, 0.77 %). R2* increased in liver and spleen (P = 0.039, P = 0.01) after administration of gadolinium. Despite the increased post-contrast R2* in liver and spleen, the investigational sequence still obtained stable fat quantification. It could therefore be applied post-contrast to substantially increase the efficiency of the MR examination and also to provide a backup for the occasional failure of FF measurements pre-contrast. (orig.)
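    The Bland-Altman agreement analysis used above can be sketched as follows (synthetic fat-fraction pairs, not the study's data; the bias ± 1.96·SD limits are the standard definition):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(a) - np.asarray(b)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)
    return bias, bias - half, bias + half

rng = np.random.default_rng(6)
ff_pre = rng.uniform(2, 25, 48)                # illustrative fat fractions, %
ff_post = ff_pre + rng.normal(0.0, 0.25, 48)   # post-contrast remeasurement
bias, lo, hi = bland_altman_limits(ff_pre, ff_post)
print(f"bias = {bias:.2f} %, limits of agreement = ({lo:.2f} %, {hi:.2f} %)")
```

    The reported intervals such as (-0.49 %, 0.52 %) are exactly these limits of agreement; narrow limits around a near-zero bias indicate that pre- and post-contrast FF can be used interchangeably.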

  19. A Novel Differential Time-of-Arrival Estimation Technique for Impact Localization on Carbon Fiber Laminate Sheets

    Directory of Open Access Journals (Sweden)

    Eugenio Marino Merlo

    2017-10-01

    Full Text Available Composite material structures are commonly used in many industrial sectors (aerospace, automotive, transportation) and can operate in harsh environments where impacts with other parts or debris may cause critical safety and functionality issues. This work presents a method for improving the accuracy of impact position determination using acoustic source triangulation schemes based on the data collected by piezoelectric sensors attached to the structure. A novel approach is used to estimate the Differential Time-of-Arrival (DToA) between the impact response signals collected by a triplet of sensors, overcoming the limitations of classical methods that rely on amplitude thresholds calibrated for a specific sensor type. An experimental evaluation of the proposed technique was performed with specially made circular piezopolymer (PVDF) sensors designed for Structural Health Monitoring (SHM) applications, and compared with commercial piezoelectric SHM sensors of similar dimensions. Test impacts at low energies from 35 mJ to 600 mJ were generated in a laboratory by free-falling metal spheres on a 500 mm × 500 mm × 1.25 mm quasi-isotropic Carbon Fiber Reinforced Polymer (CFRP) laminate plate. From the analysis of many impact signals, the localization error was improved for all types of sensors; in particular, the circular PVDF sensor achieved an average error of 20.3 mm with a standard deviation of 8.9 mm.
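    A common threshold-free way to estimate a DToA between two sensor signals is cross-correlation; a minimal numpy sketch with a synthetic wave packet (this illustrates the general principle, not the authors' specific algorithm):

```python
import numpy as np

def dtoa(sig_a, sig_b, fs):
    """DToA (seconds) of sig_b relative to sig_a via full cross-correlation;
    a positive value means sig_b arrives later."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

fs = 1_000_000                      # 1 MHz sampling rate (an assumed value)
t = np.arange(2000) / fs
# Synthetic impact response: Gaussian-windowed 60 kHz tone burst
burst = np.exp(-((t - 3e-4) ** 2) / (2 * (3e-5) ** 2)) * np.sin(2 * np.pi * 60e3 * t)
sig_a = burst
sig_b = np.roll(burst, 57)          # same packet arriving 57 samples (57 us) later
print(dtoa(sig_a, sig_b, fs) * 1e6, "us")
```

    With three sensors, the two independent DToAs define hyperbolic loci whose intersection gives the impact position, which is the triangulation step the paper refers to.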

  20. Using remote sensing and GIS techniques to estimate discharge and recharge fluxes for the Death Valley regional groundwater flow system, USA

    Science.gov (United States)

    D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map, which provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. An empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of the estimated values.

  1. A new vision on the averaging technique for the estimation of non-stationary Brainstem Auditory-Evoked Potentials: application of a metaheuristic method.

    Science.gov (United States)

    Naït-Ali, Amine; Siarry, Patrick

    2006-06-01

    The aim of this paper is to highlight the use of the averaging technique in some biomedical applications, such as evoked potential (EP) extraction. We show that this technique, generally considered classical, can be very efficient if the dynamic model of the signal to be estimated is known a priori. Using an appropriate model and under some specific conditions, the estimation can be performed efficiently even at a very low signal-to-noise ratio (SNR), which occurs when handling Brainstem Auditory-Evoked Potentials.
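    The baseline behaviour of the averaging technique, noise reduction by √N for N coherently averaged sweeps, can be verified with a short numpy sketch (synthetic EP waveform; the paper's contribution goes beyond this plain average by exploiting a dynamic signal model):

```python
import numpy as np

rng = np.random.default_rng(4)
n_sweeps, n_samples = 400, 512
t = np.arange(n_samples)

# Fixed EP-like waveform buried far below the single-sweep noise floor (SNR << 1)
ep = 0.2 * np.exp(-((t - 150) ** 2) / (2 * 20.0 ** 2))
sweeps = ep + rng.standard_normal((n_sweeps, n_samples))   # additive noise, sigma = 1

avg = sweeps.mean(axis=0)                                  # coherent (synchronous) average

noise_single = np.std(sweeps[0] - ep)
noise_avg = np.std(avg - ep)
print(f"noise reduced by ~{noise_single / noise_avg:.1f}x (theory: sqrt(400) = 20)")
```

    The √N law holds because the evoked response is time-locked to the stimulus while the background EEG-like noise is not, so the signal adds coherently and the noise does not.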

  2. Biological Nitrogen Fixation and Microbial Biomass N in the Rhizosphere of Chickpea as Estimated by 15N Isotope Dilution Technique

    International Nuclear Information System (INIS)

    Galal, Y. G. M.; El-Ghandour, I. A.; Abdel Raouf, A. M. N.; Osman, M. E.

    2004-01-01

    A pot experiment was carried out with chickpea cultivated in virgin sandy soil and inoculated with Rhizobium (Rh), mycorrhiza (VAM) or a mixture of both. The objective of this work was to estimate the contributions of biological nitrogen fixation (BNF) and microbial biomass N (MBN) as affected by inoculation and by N and P fertilizer levels under chickpea plants. Nitrogen gained from air (Ndfa) was determined using the 15N isotope dilution technique, while MBN was determined by the fumigation-extraction method. Nitrogen and phosphorus fertilizers were applied at three levels: 0; 10 ppm N with 3.3 ppm P; and 20 ppm N with 6.6 ppm P, in the form of (15NH4)2SO4 and superphosphate, respectively. The effects of inoculation and chemical fertilizers on dry matter (DM), N and P uptake (shoot and grain), BNF and MBN were traced. The obtained data revealed that the highest DM and N uptake by chickpea shoots were recorded with the dual inoculation (Rh + VAM) at the moderate level of N and P fertilizers, while the highest DM, N and P uptake by grain were recorded with Rh alone at the same rate of fertilizers. It was clear that inoculation with Rh, either alone or in combination with VAM, supplied considerable amounts of N via the BNF process. In this respect, dual inoculation was superior to single inoculation. The percentage of N2 fixed ranged from 45% to 73% in shoots and from 27% to 69% in grain, according to inoculation and fertilization treatments. Fixed N utilized by shoots was positively affected by increasing the N fertilizer rate, while that derived by grain was not affected. The fluctuation in soil microbial biomass N did not allow us to determine exactly the impact of inoculation and/or fertilization levels. (Authors)
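    The 15N isotope dilution estimate of %Ndfa rests on a standard formula comparing the 15N enrichment of the fixing crop with that of a non-fixing reference plant grown on the same labelled soil; a one-function sketch with illustrative (not the study's) enrichment values:

```python
def pct_ndfa(ae_fixing, ae_reference):
    """%N derived from atmospheric fixation by 15N isotope dilution:
    the fixing crop's atom% 15N excess is diluted, relative to a
    non-fixing reference plant, by unlabelled N2 fixed from the air."""
    return (1 - ae_fixing / ae_reference) * 100

# Illustrative enrichments (atom% 15N excess), not the study's measurements
print(f"{pct_ndfa(0.12, 0.40):.0f} % Ndfa")
```

    The lower the enrichment of the legume relative to the reference, the larger the fraction of its N that came from fixation, which is how shoot and grain %Ndfa values such as 45-73% are obtained.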

  3. Estimation of chromium-51 ethylene diamine tetra-acetic acid plasma clearance: A comparative assessment of simplified techniques

    International Nuclear Information System (INIS)

    Picciotto, G.; Cacace, G.; Mosso, R.; De Filippi, P.G.; Cesana, P.; Ropolo, R.

    1992-01-01

    Chromium-51 ethylene diamine tetra-acetic acid (51Cr-EDTA) total plasma clearance was evaluated using a multi-sample method (i.e. 12 blood samples) as the reference, compared with several simplified methods requiring only one or a few blood samples. The following 5 methods were evaluated: the terminal slope-intercept method with 3 blood samples, the simplified method of Broechner-Mortensen and 3 single-sample methods (Constable, Christensen and Groth, Tauxe). Linear regression analysis was performed. The standard error of estimate, bias and imprecision of the different methods were evaluated. For 51Cr-EDTA total plasma clearance greater than 30 ml·min-1, the results closest to the reference were obtained by the Christensen and Groth method at a sampling time of 300 min (inaccuracy of 4.9%). For clearances between 10 and 30 ml·min-1, single-sample methods failed to give reliable results. The terminal slope-intercept and Broechner-Mortensen methods were better, with inaccuracies of 17.7% and 16.9%, respectively. Although sampling times at 180, 240 and 300 min are time-consuming for patients, 51Cr-EDTA total plasma clearance can be accurately calculated for values greater than 10 ml·min-1 using the Broechner-Mortensen method. In patients with clearance greater than 30 ml·min-1, single-sample techniques provide a good alternative to the multi-sample method; the choice of method depends on the degree of accuracy required. (orig.)
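    A sketch of the terminal slope-intercept clearance calculation with the Brøchner-Mortensen correction; the plasma samples below are synthetic, and the correction coefficients are the commonly cited adult values from the original 1972 paper (stated here as an assumption, not taken from this abstract):

```python
import numpy as np

def slope_intercept_clearance(dose, times_min, concs):
    """One-pool (terminal slope-intercept) clearance: fit C(t) = C0*exp(-lam*t)
    to late plasma samples, then Cl1 = dose * lam / C0 (ml/min when
    concentrations are per ml and times are in minutes)."""
    slope, intercept = np.polyfit(times_min, np.log(concs), 1)
    lam, c0 = -slope, np.exp(intercept)
    return dose * lam / c0

def brochner_mortensen(cl1):
    """Correct the one-pool clearance for the missed fast exponential
    (commonly cited adult coefficients)."""
    return 0.990778 * cl1 - 0.001218 * cl1 ** 2

# Illustrative mono-exponential samples at 120, 180 and 240 min (synthetic data)
times = np.array([120.0, 180.0, 240.0])
concs = 100.0 * np.exp(-0.008 * times)     # counts per ml
cl1 = slope_intercept_clearance(1e6, times, concs)
print(f"Cl1 = {cl1:.1f} ml/min -> corrected = {brochner_mortensen(cl1):.1f} ml/min")
```

    The correction matters because late samples alone overestimate clearance by ignoring the fast distribution phase, which is exactly the bias the Brøchner-Mortensen method removes.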

  4. Kondo length in bosonic lattices

    Science.gov (United States)

    Giuliano, Domenico; Sodano, Pasquale; Trombettoni, Andrea

    2017-09-01

    Motivated by the fact that the low-energy properties of the Kondo model can be effectively simulated in spin chains, we study the realization of the effect with bond impurities in ultracold bosonic lattices at half filling. After presenting a discussion of the effective theory and of the mapping of the bosonic chain onto a lattice spin Hamiltonian, we provide estimates for the Kondo length as a function of the parameters of the bosonic model. We point out that the Kondo length can be extracted from the integrated real-space correlation functions, which are experimentally accessible quantities in experiments with cold atoms.

  5. Overview of bunch length measurements

    International Nuclear Information System (INIS)

    Lumpkin, A. H.

    1999-01-01

    An overview of particle and photon beam bunch length measurements is presented in the context of free-electron laser (FEL) challenges. Particle-beam peak current is a critical factor in obtaining adequate FEL gain for both oscillators and self-amplified spontaneous emission (SASE) devices. Since measurement of charge is a standard measurement, the bunch length becomes the key issue for ultrashort bunches. Both time-domain and frequency-domain techniques are presented in the context of using electromagnetic radiation over eight orders of magnitude in wavelength. In addition, the measurement of microbunching in a micropulse is addressed

  6. A novel measurement technique to estimate the RF beat-linewidth of free-running heterodyning system using a photonic discriminator

    NARCIS (Netherlands)

    Khan, M.R.H.; Marpaung, D.A.I.; Burla, M.; Roeloffzen, C.G.H.

    2011-01-01

    We propose what is, to our knowledge, a novel technique to estimate the beat spectrum linewidth of a free-running heterodyning scheme using an optical discriminator. Utilizing a dense wavelength division multiplexing (DWDM) filter as an optical discriminator, the phase modulation (PM) to intensity modulation

  7. Evaluation of the oral 13C-bicarbonate tracer technique for the estimation of CO2 production and energy expenditure in dogs during rest and physical activity

    DEFF Research Database (Denmark)

    Larsson, Caroline; Junghans, Peter; Tauson, Anne-Helene

    2010-01-01

    essential to determine the energy expenditure (EE) in a reliable and feasible way. In the present experiment, the non-invasive oral ¹³C-bicarbonate tracer technique (o¹³CT), i.e. collection of breath samples after oral administration of NaH¹³CO3, was used for the estimation of CO2 production and EE in dogs...

  8. Assessment of Demirjian's 8-teeth technique of age estimation and Indian-specific formulas in an East Indian population: A cross-sectional study.

    Science.gov (United States)

    Rath, Hemamalini; Rath, Rachna; Mahapatra, Sandeep; Debta, Tribikram

    2017-01-01

    The age of an individual can be assessed by a plethora of widely available tooth-based techniques, among which radiological methods prevail. Demirjian's technique of age assessment, based on tooth development stages, has been extensively investigated in different populations of the world. The present study assesses the applicability of Demirjian's modified 8-teeth technique for age estimation in an East Indian (Odisha) population, utilizing Acharya's Indian-specific cubic functions. One hundred and six pretreatment orthodontic radiographs of patients aged 7-23 years, with representation from both genders, were assessed for the eight left mandibular teeth and scored as per Demirjian's 9-stage criteria for tooth development. Age was calculated on the basis of Acharya's Indian formula. Statistical analysis was performed to compare the estimated and actual age. All data were analyzed using SPSS 20.0 (SPSS Inc., Chicago, Illinois, USA) and the MS Excel package. The results revealed that the mean absolute error (MAE) in age estimation for the entire sample was 1.3 years, with 50% of the cases having an error rate within ±1 year. The MAE in males and females aged 7-16 years was 1.8 and 1.5 years, respectively; likewise, the MAE in males and females aged 16.1-23 years was 1.1 and 1.3 years, respectively. The low error rate in estimating age justifies the application of this modified technique and Acharya's Indian formulas in the present East Indian population.

  9. Estimates of Free-tropospheric NO2 Abundance from the Aura Ozone Monitoring Instrument (OMI) Using Cloud Slicing Technique

    Science.gov (United States)

    Choi, S.; Joiner, J.; Krotkov, N. A.; Choi, Y.; Duncan, B. N.; Celarier, E. A.; Bucsela, E. J.; Vasilkov, A. P.; Strahan, S. E.; Veefkind, J. P.; Cohen, R. C.; Weinheimer, A. J.; Pickering, K. E.

    2013-12-01

    Total column measurements of NO2 from space-based sensors are of interest to the atmospheric chemistry and air quality communities; the relatively short lifetime of near-surface NO2 produces satellite-observed hot spots near pollution sources, including power plants and urban areas. However, estimates of NO2 concentrations in the free troposphere, where lifetimes are longer and the radiative impact through ozone formation is larger, are severely lacking. Such information is critical for evaluating the chemistry-climate and air quality models used to predict the evolution of tropospheric ozone and its impact on climate and air quality. Here, we retrieve free-tropospheric NO2 volume mixing ratio (VMR) using the cloud slicing technique. We use cloud optical centroid pressures (OCPs) as well as collocated above-cloud vertical NO2 columns (defined as the NO2 column from the top of the atmosphere to the cloud OCP) from the Ozone Monitoring Instrument (OMI). The above-cloud NO2 vertical columns used in our study are retrieved independently of a priori NO2 profile information. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud optical centroid pressure is proportional to the NO2 VMR for a given pressure (altitude) range. We retrieve NO2 volume mixing ratios and compare them with in-situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is good when proper data screening is applied. In addition, OMI cloud slicing reports a high NO2 VMR where the aircraft reported lightning NOx during the Deep Convection Clouds and Chemistry (DC3) campaign in 2012. We also provide a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in the free troposphere commonly appears near polluted urban locations, where NO2 produced in the boundary layer may be transported vertically out of the
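    The core cloud-slicing regression can be sketched in a few lines: the slope of the above-cloud column versus cloud pressure, divided by the air column per unit pressure, gives the mean VMR. All numbers below are synthetic illustrations under a constant-VMR assumption, not OMI retrievals:

```python
import numpy as np

G = 9.80665          # m/s^2
M_AIR = 4.81e-26     # mean mass of an air molecule, kg (~28.97 g/mol / N_A)
# Air molecules per cm^2 per hPa of overlying pressure:
# (1/(m*g)) [molec m^-2 Pa^-1] * 100 [Pa/hPa] * 1e-4 [m^2/cm^2]
AIR_COL_PER_HPA = 100 * 1e-4 / (M_AIR * G)   # ~2.12e22 molec cm^-2 hPa^-1

true_vmr = 50e-12    # a "true" free-tropospheric VMR of 50 pptv
rng = np.random.default_rng(5)
ocp = rng.uniform(400, 800, 200)             # cloud optical centroid pressures, hPa
# Above-cloud column = VMR * (air column above the cloud), plus retrieval noise
col = true_vmr * AIR_COL_PER_HPA * ocp + rng.normal(0, 2e13, 200)

slope = np.polyfit(ocp, col, 1)[0]           # molec cm^-2 hPa^-1
vmr = slope / AIR_COL_PER_HPA
print(f"retrieved VMR = {vmr / 1e-12:.0f} pptv")
```

    The hydrostatic relation N_air(above p) = p/(m_air·g) is what converts the regression slope into a mixing ratio; real retrievals additionally screen for cloud homogeneity and restrict the pressure range.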

  10. WREP: A wavelet-based technique for extracting the red edge position from reflectance spectra for estimating leaf and canopy chlorophyll contents of cereal crops

    Science.gov (United States)

    Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing

    2017-07-01

    Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization, and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of the continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian fitting. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC compared with the traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC, and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over its full range. However, a common model could be built with the REPs extracted at Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2).
This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content between growth seasons
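    The zero-crossing idea behind WREP can be sketched with a single-scale Mexican-hat wavelet transform of a synthetic vegetation-like spectrum; this is an illustrative reimplementation under stated assumptions (wavelet choice, scale, interpolation), not the authors' code:

```python
import numpy as np

def wrep(wavelengths, reflectance, scale=5.0):
    """Red edge position as the zero crossing of a single-scale continuous
    wavelet transform (Mexican hat) within 680-760 nm."""
    # Mexican hat (proportional to the 2nd derivative of a Gaussian),
    # sampled on the 1-nm band grid
    x = np.arange(-5 * scale, 5 * scale + 1)
    psi = (1 - (x / scale) ** 2) * np.exp(-0.5 * (x / scale) ** 2)
    w = np.convolve(reflectance, psi, mode="same")
    # sign change of the wavelet coefficients inside the red edge window
    idx = np.where((wavelengths[:-1] >= 680) & (wavelengths[1:] <= 760)
                   & (np.sign(w[:-1]) != np.sign(w[1:])))[0]
    i = idx[0]
    # linear interpolation between the two bracketing bands
    frac = w[i] / (w[i] - w[i + 1])
    return wavelengths[i] + frac * (wavelengths[i + 1] - wavelengths[i])

# Synthetic vegetation-like spectrum: low red reflectance rising sigmoidally
# to a NIR plateau, with the inflexion (true REP) placed near 715 nm
wl = np.arange(400.0, 1000.0)
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 715) / 10))
print(f"REP = {wrep(wl, refl):.1f} nm")
```

    Because the Mexican hat is proportional to a smoothed second derivative, its zero crossing marks the inflexion of the red edge, and the scale parameter plays the role of the decomposition scale discussed in the abstract.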

  11. Techniques for estimating the quantity and quality of storm runoff from urban watersheds of Jefferson County, Kentucky

    Science.gov (United States)

    Evaldi, R.D.; Moore, B.L.

    1994-01-01

    Linear regression models are presented for estimating storm-runoff volumes and mean concentrations and loads of selected constituents in storm runoff from urban watersheds of Jefferson County, Kentucky. Constituents modeled include dissolved oxygen, biochemical and chemical oxygen demand, total and suspended solids, volatile residue, nitrogen, phosphorus and phosphate, calcium, magnesium, barium, copper, iron, lead, and zinc. Model estimates are a function of drainage area, percentage of impervious area, climatological data, and land use. The estimation models are based on runoff volumes and on concentrations and loads of constituents in runoff measured at 6 stormwater outfalls and 25 streams in Jefferson County.

  12. Radiographic assessment of endodontic working length

    OpenAIRE

    Osama S Alothmani; Lara T Friedlander; Nicholas P Chandler

    2013-01-01

    The use of radiographs for working length determination is usual practice in endodontics. Exposing radiographs following the principles of the paralleling technique allows more accurate length determination compared to the bisecting-angle method. However, it has been reported that up to 28.5% of cases can have the file tip extending beyond the confines of the root canals despite an acceptable radiographic appearance. The accuracy of radiographic working length determination could be affected ...

  13. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study

    OpenAIRE

    De Tobel, Jannick; Radesh, Purnima; Vandermeulen, Dirk; Thevissen, Patrick

    2017-01-01

    Background: Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars an automated m...

  14. Computational efficiency study of a semi-analytical technique for the angular integral estimation found in transfer matrix generation

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    1984-01-01

    The computational efficiency of a semi-analytic technique recently proposed for the evaluation of certain angular integrals encountered in the generation of the isotropic and linearly anisotropic components of elastic and discrete inelastic transfer matrices is studied. It is concluded from a comparison with results obtained with the use of numerical quadratures that the technique has certain computational advantages that recommend its implementation. (Author) [pt
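    The semi-analytic technique itself is not reproduced here, but the numerical-quadrature baseline it is compared against can be illustrated on a representative angular integral (the integrand is invented for illustration):

```python
import numpy as np

# Gauss-Legendre quadrature of an angular integral over mu in [-1, 1]:
#   I = integral of mu^2 * exp(mu) dmu, with analytic value e - 5/e
mu, w = np.polynomial.legendre.leggauss(8)   # 8 nodes and weights
numeric = np.sum(w * mu**2 * np.exp(mu))
analytic = np.e - 5.0 / np.e
print(abs(numeric - analytic) < 1e-10)  # True
```

    An 8-point rule integrates this smooth integrand essentially to machine precision; the trade-off discussed in the abstract is the cost of many such evaluations when filling a transfer matrix.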

  15. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna

    2011-01-01

    The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable to, and often smaller than, the voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT with an accuracy (3.6 mm on average) comparable to or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy.
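    The mutual-information similarity metric that drives this kind of intensity-based registration can be sketched from a joint intensity histogram (a minimal version; real registration packages add interpolation, normalization and optimization on top):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information I(A;B) estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)            # marginal of A
    py = pxy.sum(axis=0, keepdims=True)            # marginal of B
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = 1.0 - fixed                  # intensity-remapped copy: still informative
unrelated = rng.random((64, 64))
print(mutual_information(fixed, moving) > mutual_information(fixed, unrelated))  # True
```

    Unlike a correlation coefficient, mutual information rewards any consistent intensity mapping between the two modalities, which is why it is a standard choice for registering kVCT to MVCT.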

  16. Comparison of groundwater recharge estimation techniques in an alluvial aquifer system with an intermittent/ephemeral stream (Queensland, Australia)

    Science.gov (United States)

    King, Adam C.; Raiber, Matthias; Cox, Malcolm E.; Cendón, Dioni I.

    2017-09-01

    This study demonstrates the importance of the conceptual hydrogeological model for the estimation of groundwater recharge rates in an alluvial system interconnected with an ephemeral or intermittent stream in south-east Queensland, Australia. The losing/gaining condition of these streams is typically subject to temporal and spatial variability, and knowledge of these hydrological processes is critical for the interpretation of recharge estimates. Recharge rate estimates of 76-182 mm/year were determined using the water budget method. The water budget method provides useful broad approximations of recharge and discharge fluxes. The chloride mass balance (CMB) method and the tritium method were used on 17 and 13 sites respectively, yielding recharge rates of 1-43 mm/year (CMB) and 4-553 mm/year (tritium method). However, the conceptual hydrogeological model confirms that the results from the CMB method at some sites are not applicable in this setting because of overland flow and channel leakage. The tritium method was appropriate here and could be applied to other alluvial systems, provided that channel leakage and diffuse infiltration of rainfall can be accurately estimated. The water-table fluctuation (WTF) method was also applied to data from 16 bores; recharge estimates ranged from 0 to 721 mm/year. The WTF method was not suitable where bank storage processes occurred.
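    Two of the recharge estimators compared above reduce, in their simplest forms, to one-line mass balances (the input values below are illustrative, not the study's data, and the CMB form shown omits the runoff and channel-leakage terms that the authors found important in this setting):

```python
def recharge_cmb(precip_mm_yr, cl_precip_mg_l, cl_groundwater_mg_l):
    """Chloride mass balance: R = P * Cl_p / Cl_gw (mm/year)."""
    return precip_mm_yr * cl_precip_mg_l / cl_groundwater_mg_l

def recharge_wtf(specific_yield, head_rise_m):
    """Water-table fluctuation: R = Sy * dh per recharge event (metres)."""
    return specific_yield * head_rise_m

print(recharge_cmb(800.0, 3.0, 60.0))      # 40.0 mm/year
print(recharge_wtf(0.08, 0.5) * 1000.0)    # 40.0 mm for this event
```

    The abstract's point is that neither formula is trustworthy without the conceptual model: overland flow and channel leakage add chloride sinks and head rises that these simple balances attribute to diffuse recharge.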

  17. Directional velocity estimation using a spatio-temporal encoding technique based on frequency division for synthetic transmit aperture ultrasound

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2006-01-01

    This paper investigates the possibility of flow estimation using spatio-temporal encoding of the transmissions in synthetic transmit aperture imaging (STA). The spatial encoding is based on a frequency division approach. In STA, a major disadvantage is that only a single transmitter (denoting...... increase the transmitted energy, the waveforms are designed as linear frequency modulated signals. Therefore, the full excitation amplitude can be used during most of the transmission. The method has been evaluated for blood velocity estimation for several different velocities and incident angles...... in the flow direction, directional data were extracted and correlated. Hereby, the velocity of the blood was estimated. The pulse repetition frequency was 16 kHz. Three different setups were investigated with flow angles of 45, 60, and 75 degrees with respect to the acoustic axis. Four different velocities...
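    The directional estimation step can be illustrated with a toy cross-correlation (the sample spacing and signals below are assumptions; only the pulse repetition frequency is taken from the abstract):

```python
import numpy as np

# Two "directional lines" sampled along the flow direction, one pulse
# repetition period apart; the lag of the cross-correlation peak gives the
# displacement per period, hence the velocity along the flow direction.
prf = 16e3          # pulse repetition frequency (Hz), as in the abstract
dx = 50e-6          # spatial sample spacing along the line (m), assumed

rng = np.random.default_rng(1)
line1 = rng.standard_normal(256)
shift = 3                            # true displacement in samples
line2 = np.roll(line1, shift)        # the same speckle pattern, displaced

lags = np.arange(-10, 11)
xc = [np.dot(line1, np.roll(line2, -l)) for l in lags]
best = lags[int(np.argmax(xc))]
velocity = best * dx * prf
print(best, velocity)  # 3 samples -> 2.4 m/s
```

    Because the data are beamformed along the flow direction first, the correlation directly yields the velocity magnitude, avoiding the angle ambiguity of axial-only Doppler estimators.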

  18. Solar Irradiance Measurements Using Smart Devices: A Cost-Effective Technique for Estimation of Solar Irradiance for Sustainable Energy Systems

    Directory of Open Access Journals (Sweden)

    Hussein Al-Taani

    2018-02-01

    Solar irradiance measurement is a key component in estimating solar irradiation, which is necessary and essential to design sustainable energy systems such as photovoltaic (PV) systems. The measurement is typically done with sophisticated devices designed for this purpose. In this paper we propose a smartphone-aided setup to estimate the solar irradiance at a given location. The setup is accessible, easy to use and cost-effective. The method we propose does not have the accuracy of a high-precision irradiance meter, but has the advantage of being readily accessible on any smartphone. It could serve as a quick tool to estimate irradiance in the preliminary stages of PV system design. Furthermore, it could act as a cost-effective educational tool in sustainable energy courses where understanding solar radiation variations is an important aspect.
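    The core conversion behind such a setup fits in one line: a phone's ambient-light sensor reports illuminance in lux, which can be turned into a rough irradiance estimate by dividing by the luminous efficacy of daylight. The ~120 lm/W factor below is a common assumption, not the paper's calibration; the true factor varies with the sky spectrum and the device.

```python
# Rough lux-to-irradiance conversion (assumed daylight efficacy).
LUMINOUS_EFFICACY_LM_PER_W = 120.0

def irradiance_w_m2(illuminance_lux):
    return illuminance_lux / LUMINOUS_EFFICACY_LM_PER_W

print(irradiance_w_m2(60000.0))  # bright sun, ~500 W/m2
```

    For anything beyond a preliminary estimate, the factor should be calibrated against a reference pyranometer for the specific phone model.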

  19. Physical basis and potential estimation techniques for soil erosion parameters in the Precipitation-Runoff Modeling System (PRMS)

    Science.gov (United States)

    Carey, W.P.; Simon, Andrew

    1984-01-01

    Simulation of upland-soil erosion by the Precipitation-Runoff Modeling System currently requires the user to estimate two rainfall detachment parameters and three hydraulic detachment parameters. One rainfall detachment parameter can be estimated from rainfall simulator tests. A reformulation of the rainfall detachment equation allows the second parameter to be computed directly. The three hydraulic detachment parameters consist of one exponent and two coefficients. The initial value of the exponent is generally set equal to 1.5. The two coefficients are functions of the soil's resistance to erosion, and one of the two also accounts for sediment delivery processes not simulated in the model. Initial estimates of these parameters can be derived from other modeling studies or from published empirical relations. (USGS)

  20. Estimating non-marginal willingness to pay for railway noise abatement: application of the two-step hedonic regression technique

    OpenAIRE

    Swärdh, Jan-Erik; Andersson, Henrik; Jonsson, Lina; Ögren, Mikael

    2012-01-01

    In this study we estimate the demand for peace and quiet, and thus also the willingness to pay for railway noise abatement, based on both steps of the hedonic model regression on property prices. The estimated demand relationship suggests welfare gains for a 1 dB reduction of railway noise of USD 162 per individual per year at a baseline noise level of 71 dB, and USD 86 at a baseline noise level of 61 dB. Below a noise level of 49.1 dB, individuals have no willingness to pay ...
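    The first step of the two-step approach can be sketched with synthetic data: regress log property price on noise to get the hedonic price function, then differentiate it to recover the implicit (marginal) price of quiet at each noise level. The functional form and numbers below are invented; in the second step, which is omitted here, those implicit prices would be regressed on household characteristics to trace out the demand curve.

```python
import numpy as np

# Synthetic first-step data: quieter properties sell for more.
rng = np.random.default_rng(2)
noise = rng.uniform(45.0, 75.0, 200)               # railway noise, dB
log_price = 13.0 - 0.0004 * (noise - 45.0)**2 + rng.normal(0.0, 0.05, 200)

# First step: quadratic hedonic regression log P = b0 + b1*n + b2*n^2
X = np.column_stack([np.ones_like(noise), noise, noise**2])
b, *_ = np.linalg.lstsq(X, log_price, rcond=None)

def implicit_price(n):
    """Relative price gain from a 1 dB noise reduction at level n."""
    return -(b[1] + 2.0 * b[2] * n)

print(implicit_price(71.0) > implicit_price(61.0))  # True: steeper at noisier sites
```

    The pattern matches the abstract's finding that the value of a 1 dB reduction is larger at higher baseline noise levels.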