Multi-objective mixture-based iterated density estimation evolutionary algorithms
Thierens, D.; Bosman, P.A.N.
2001-01-01
We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.
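The iterated density-estimation loop behind IDEA-style algorithms can be sketched in a few lines. This is a deliberately simplified single-objective, single-component version (truncation selection plus a factorized Gaussian model), not the authors' multi-objective mixture algorithm; the fitness function and all parameters are illustrative.

```python
import numpy as np

def simple_idea(fitness, dim=2, pop_size=100, generations=50, seed=0):
    """Minimal IDEA-style loop: select the best half, fit a factorized
    (independent per-dimension) Gaussian to the survivors, and resample."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.apply_along_axis(fitness, 1, pop)
        survivors = pop[np.argsort(scores)[: pop_size // 2]]  # truncation selection
        mu = survivors.mean(axis=0)
        sigma = survivors.std(axis=0) + 1e-12                 # avoid a degenerate model
        pop = rng.normal(mu, sigma, size=(pop_size, dim))     # sample the new generation
    scores = np.apply_along_axis(fitness, 1, pop)
    return pop[np.argmin(scores)]

best = simple_idea(lambda x: float(np.sum(x**2)))  # minimize the sphere function
```

A mixture-based variant would first cluster the survivors and fit one factorized model per cluster, which is what lets a MIDEA-style algorithm cover several regions of the search space at once.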
Directory of Open Access Journals (Sweden)
Wu Chi-Yeh
2010-01-01
Background MicroRNAs (miRNAs) are short non-coding RNA molecules which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as decision trees, whose constructed models are interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to that delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs from the models generated by the G2DE.
Density meter algorithm and system for estimating sampling/mixing uncertainty
International Nuclear Information System (INIS)
Shine, E.P.
1986-01-01
The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses
International Nuclear Information System (INIS)
Dessì, Alessia; Pani, Danilo; Raffo, Luigi
2014-01-01
Non-invasive fetal electrocardiography is still an open research issue. The recent publication of an annotated dataset on Physionet providing four-channel non-invasive abdominal ECG traces promoted an international challenge on the topic. Starting from that dataset, an algorithm for the identification of the fetal QRS complexes from a reduced number of electrodes and without any a priori information about the electrode positioning has been developed, entering into the top ten best-performing open-source algorithms presented at the challenge. In this paper, an improved version of that algorithm is presented and evaluated exploiting the same challenge metrics. It is mainly based on the subtraction of the maternal QRS complexes in every lead, obtained by synchronized averaging of morphologically similar complexes, the filtering of the maternal P and T waves, and the enhancement of the fetal QRS through independent component analysis (ICA) applied on the processed signals before a final fetal QRS detection stage. The RR time series of both the mother and the fetus are analyzed to enhance pseudoperiodicity with the aim of correcting wrong annotations. The algorithm has been designed and extensively evaluated on the open dataset A (N = 75), and finally evaluated on datasets B (N = 100) and C (N = 272) to obtain the mean scores over data not used during the algorithm development. Compared to the results achieved by the previous version of the algorithm, the current version would mark the 5th and 4th position in the final ranking related to events 1 and 2, reserved for the open-source challenge entries, taking into account both official and unofficial entrants. On dataset A, the algorithm achieves 0.982 median sensitivity and 0.976 median positive predictivity.
Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu
2017-08-01
Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
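The factorization step described above can be illustrated with a small synthetic example. The sketch below uses plain multiplicative-update NMF on a toy 8-channel "envelope" matrix with two disjoint activation patterns; the data, the channel count, and the choice of four top channels are all invented for illustration, far smaller than the 128-channel HD grid used in the paper.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Multiplicative-update NMF: V (channels x time) ~= W (channels x k) @ H (k x time)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-6
    H = rng.random((k, n)) + 1e-6
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    # Normalize basis columns so activation intensities are comparable
    norms = np.linalg.norm(W, axis=0) + 1e-12
    return W / norms, H * norms[:, None]

# Toy data: two disjoint activation patterns over 8 channels.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 300)
patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float).T   # channels x k
coeffs = np.vstack([np.abs(np.sin(2 * np.pi * t)),               # strong pattern
                    0.2 * np.abs(np.cos(2 * np.pi * t))])        # weak pattern
V = patterns @ coeffs + 0.01 * rng.random((8, 300))

W, H = nmf(V, k=2)
# Activation intensity = sum of absolute time-varying coefficients per pattern.
major = int(np.argmax(np.abs(H).sum(axis=1)))
# Channels with the highest weighting factors in the major pattern would feed
# the downstream polynomial force-estimation model.
top_channels = np.argsort(W[:, major])[::-1][:4]
```

On this toy data the four top-weighted channels of the major pattern coincide with one of the two ground-truth channel groups, which is the channel-selection effect the paper exploits.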
Comparison of density estimators. [Estimation of probability density functions]
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, and some simulations are reported. The object is to compare the performance of the various methods in small samples, to examine their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile. (RWR)
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
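The two window-varying strategies the abstract contrasts are easy to state in code. Below is a sketch of a fixed-bandwidth estimator and a sample-point (variable) estimator in which each observation gets its own bandwidth via a pilot estimate, in the spirit of Abramson's square-root law; the bandwidth constant and the arithmetic-mean pilot normalization are illustrative choices, not the paper's.

```python
import numpy as np

def kde_fixed(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate on the points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_samplepoint(x, data, h0):
    """Sample-point (variable) KDE: bandwidth h_i per observation, inversely
    proportional to the square root of a pilot density estimate, so kernels
    widen in sparse regions and narrow where data are dense."""
    pilot = kde_fixed(data, data, h0)
    h_i = h0 * np.sqrt(pilot.mean() / pilot)
    u = (x[:, None] - data[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * u**2) / (h_i[None, :] * np.sqrt(2 * np.pi))
    return k.mean(axis=1)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)
grid = np.linspace(-4, 4, 81)
f_hat = kde_samplepoint(grid, data, h0=0.4)
```

Each per-observation kernel still integrates to one, so the variable estimate remains a proper density while adapting its smoothing to the local sample density.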
Histogram Estimators of Bivariate Densities
National Research Council Canada - National Science Library
Husemann, Joyce A
1986-01-01
One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
Estimation and display of beam density profiles
Energy Technology Data Exchange (ETDEWEB)
Dasgupta, S; Mukhopadhyay, T; Roy, A; Mallik, C
1989-03-15
A setup in which wire-scanner-type beam-profile monitor data are collected on-line in a nuclear data-acquisition system has been used, and a simple algorithm for estimating and displaying the current density distribution in a particle beam is described.
Anisotropic Density Estimation in Global Illumination
DEFF Research Database (Denmark)
Schjøth, Lars
2009-01-01
Density estimation employed in multi-pass global illumination algorithms gives rise to a trade-off between bias and noise. The problem is most evident as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise...
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternating between these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
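The Gaussian/Cauchy alternation is the easiest of the three techniques to show in isolation. The sketch below is not the authors' code and its parameters are invented: it draws offspring around a niche seed from either distribution, and the heavy Cauchy tails occasionally produce long exploratory jumps that a Gaussian of the same scale almost never makes.

```python
import numpy as np

def sample_offspring(seed_sol, sigma, n, rng, use_cauchy):
    """Generate n offspring around a niche seed, using either a Gaussian
    (exploitation, light tails) or a Cauchy (exploration, heavy tails) step."""
    if use_cauchy:
        step = rng.standard_cauchy(size=(n, seed_sol.size))
    else:
        step = rng.standard_normal(size=(n, seed_sol.size))
    return seed_sol + sigma * step

rng = np.random.default_rng(0)
seed_sol = np.array([1.0, -2.0])
gauss_kids = sample_offspring(seed_sol, 0.1, 1000, rng, use_cauchy=False)
cauchy_kids = sample_offspring(seed_sol, 0.1, 1000, rng, use_cauchy=True)
# Cauchy offspring occasionally take very long jumps, escaping the local basin.
```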
Density estimation from local structure
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2009-11-01
A Gaussian Mixture Model (GMM) density function of the data is estimated from local structure, and its log-likelihood scores are compared to the scores of a GMM trained with the expectation-maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...
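For reference, the EM baseline mentioned above fits a GMM by alternating responsibility (E) and parameter (M) updates. A minimal one-dimensional version on synthetic two-mode data follows; the quantile initialization and all constants are illustrative, not taken from the paper.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """EM for a 1-D Gaussian mixture; returns weights, means, variances."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))   # spread the initial means
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var

def gmm_log_likelihood(x, w, mu, var):
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return float(np.log(dens.sum(axis=1)).sum())

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 400)])
w, mu, var = em_gmm_1d(x, k=2)
ll = gmm_log_likelihood(x, w, mu, var)
```

Comparing such log-likelihood scores across estimators is exactly the evaluation protocol the abstract describes.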
Optimization of Barron density estimates
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; van der Meulen, E. C.
2001-01-01
Vol. 47, No. 5 (2001), pp. 1867-1883. ISSN 0018-9448. R&D Projects: GA ČR GA102/99/1137. Grant - others: Copernicus (XE) 579. Institutional research plan: AV0Z1075907. Keywords: Barron estimator * chi-square criterion * density estimation. Subject RIV: BD - Theory of Information. Impact factor: 2.077, year: 2001
Nonparametric Collective Spectral Density Estimation and Clustering
Maadooliat, Mehdi; Sun, Ying; Chen, Tianbo
2017-04-12
In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at
A global algorithm for estimating Absolute Salinity
McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.
2012-12-01
The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
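The last step, exploiting an approximate relationship between the anomaly and silicate, can be illustrated with a toy regression. Everything below is synthetic: the proportionality constant, noise level, and units are invented, and the real TEOS-10 algorithm uses spatially varying relationships rather than a single global slope.

```python
import numpy as np

rng = np.random.default_rng(0)
silicate = rng.uniform(0.0, 170.0, 200)     # umol/kg, synthetic "observations"
true_ratio = 9.8e-5                         # g/kg per umol/kg, illustrative only
anomaly = true_ratio * silicate + rng.normal(0.0, 1e-3, 200)

# Least-squares slope through the origin: delta_SA ~= ratio * silicate
ratio_hat = float(silicate @ anomaly) / float(silicate @ silicate)

def estimate_anomaly(si):
    """Predict an Absolute Salinity Anomaly (g/kg) from a silicate value."""
    return ratio_hat * si
```

The appeal of this construction is the same as in the paper: silicate is mapped globally, so once the relationship is fitted from the sparse laboratory density measurements, the anomaly can be evaluated at any location.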
Linear scaling of density functional algorithms
International Nuclear Information System (INIS)
Stechel, E.B.; Feibelman, P.J.; Williams, A.R.
1993-01-01
An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. They discuss the conceptual issues involved, convergence properties, and scaling for their new algorithm.
Relative Pose Estimation Algorithm with Gyroscope Sensor
Directory of Open Access Journals (Sweden)
Shanshan Wei
2016-01-01
This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope sensor data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
Automated mammographic breast density estimation using a fully convolutional network.
Lee, Juhun; Nishikawa, Robert M
2018-03-01
The purpose of this study was to develop a fully automated algorithm for mammographic breast density estimation using deep learning. Our algorithm used a fully convolutional network, which is a deep learning framework for image segmentation, to segment both the breast and the dense fibroglandular areas on mammographic images. Using the segmented breast and dense areas, our algorithm computed the breast percent density (PD), which is the fraction of dense area in a breast. Our dataset included full-field digital screening mammograms of 604 women, which included 1208 mediolateral oblique (MLO) and 1208 craniocaudal (CC) views. We allocated 455, 58, and 91 of 604 women and their exams into training, testing, and validation datasets, respectively. We established ground truth for the breast and the dense fibroglandular areas via manual segmentation and segmentation using a simple thresholding based on BI-RADS density assessments by radiologists, respectively. Using the mammograms and ground truth, we fine-tuned a pretrained deep learning network to train the network to segment both the breast and the fibroglandular areas. Using the validation dataset, we evaluated the performance of the proposed algorithm against radiologists' BI-RADS density assessments. Specifically, we conducted a correlation analysis between a BI-RADS density assessment of a given breast and its corresponding PD estimate by the proposed algorithm. In addition, we evaluated our algorithm in terms of its ability to classify the BI-RADS density using PD estimates, and its ability to provide consistent PD estimates for the left and the right breast and the MLO and CC views of the same women. To show the effectiveness of our algorithm, we compared the performance of our algorithm against a state-of-the-art algorithm, the Laboratory for Individualized Breast Radiodensity Assessment (LIBRA). The PD estimated by our algorithm correlated well with BI-RADS density ratings by radiologists. Pearson's rho values of
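Once the two segmentation masks exist, the percent-density computation itself is a simple ratio. A sketch with a toy binary "mammogram" (the FCN segmentation is of course the hard part and is not reproduced here):

```python
import numpy as np

def percent_density(breast_mask, dense_mask):
    """Breast percent density (PD): the fraction of the segmented breast area
    occupied by the segmented dense (fibroglandular) area, as a percentage."""
    breast = breast_mask.astype(bool)
    dense = dense_mask.astype(bool) & breast   # dense tissue must lie inside the breast
    return 100.0 * dense.sum() / breast.sum()

# Toy 10x10 image: breast occupies the left half, dense tissue a 5x2 patch.
breast = np.zeros((10, 10), dtype=bool); breast[:, :5] = True
dense = np.zeros((10, 10), dtype=bool); dense[2:7, 1:3] = True
pd = percent_density(breast, dense)   # 10 dense pixels / 50 breast pixels = 20%
```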
A Developed ESPRIT Algorithm for DOA Estimation
Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed
2015-05-01
A novel algorithm for direction-of-arrival estimation (DOAE) for targets has been developed, aiming to increase the accuracy of the estimation process and decrease its computational cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified by Monte Carlo simulation, and the DOAE accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT method, enhancing estimator performance.
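For readers unfamiliar with the baseline, standard least-squares ESPRIT for one-dimensional frequency estimation fits in a short function; the multiresolution TS-ESPRIT refinement described above is not reproduced, and the test frequencies and sizes below are arbitrary.

```python
import numpy as np

def esprit_freqs(x, p, m=None):
    """Basic 1-D ESPRIT: estimate p sinusoid frequencies (cycles/sample)
    from samples x via the rotational invariance of the signal subspace."""
    n = len(x)
    m = m or n // 2
    # Covariance of a sliding-window (Hankel) data matrix
    H = np.array([x[i:i + m] for i in range(n - m + 1)]).T    # m x (n-m+1)
    R = H @ H.conj().T
    vals, vecs = np.linalg.eigh(R)
    Es = vecs[:, np.argsort(vals)[::-1][:p]]                  # signal subspace
    # Rotational invariance: Es without the first row ~= Es without the last row @ Phi
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    eig = np.linalg.eigvals(Phi)                              # eigenvalues = exp(j*2*pi*f)
    return np.sort(np.angle(eig) / (2 * np.pi))

n = 256
t = np.arange(n)
x = np.exp(2j * np.pi * 0.1 * t) + 0.5 * np.exp(2j * np.pi * 0.27 * t)
freqs = esprit_freqs(x, p=2)
```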
An efficient quantum algorithm for spectral estimation
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
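The classical (non-quantum) matrix pencil method that this work accelerates can be sketched directly: build two shifted Hankel matrices from the samples; the nonzero eigenvalues of the pencil recover each component's pole, whose angle gives the frequency and whose log-magnitude gives the damping factor. The signal parameters below are arbitrary.

```python
import numpy as np

def matrix_pencil(y, p, L=None):
    """Classical matrix pencil: recover the p poles z_k = exp(alpha_k + 2j*pi*f_k)
    of a sum of damped complex exponentials from uniform samples y."""
    n = len(y)
    L = L or n // 2
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])   # (n-L) x (L+1) Hankel
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Poles are the nonzero eigenvalues of pinv(Y1) @ Y2; truncate the
    # numerical rank so roundoff directions are not amplified.
    z = np.linalg.eigvals(np.linalg.pinv(Y1, rcond=1e-8) @ Y2)
    z = z[np.argsort(-np.abs(z))][:p]                      # keep the p dominant poles
    freqs = np.angle(z) / (2 * np.pi)
    damping = np.log(np.abs(z))
    order = np.argsort(freqs)
    return freqs[order], damping[order]

t = np.arange(200)
y = np.exp((-0.01 + 2j * np.pi * 0.12) * t) + np.exp((-0.02 + 2j * np.pi * 0.30) * t)
freqs, damping = matrix_pencil(y, p=2)
```

Unlike plain periodogram methods, the pencil recovers the damping factors alongside the frequencies, which is the quantity pair the quantum algorithm targets.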
Global Population Density Grid Time Series Estimates
National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...
Breast density estimation from high spectral and spatial resolution MRI
Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.
2016-01-01
Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists' breast imaging-reporting and data system (BI-RADS) density ratings. Correlation analysis yielded a correlation coefficient of 0.91 between the inter-user density estimations and an interclass correlation coefficient of 0.99 for the intra-user density estimations. A moderate correlation coefficient of 0.55 (p = 0.0076) was observed between HiSS-based breast density estimations and radiologists' BI-RADS ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and low intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring the effects of therapy. PMID:28042590
Nonparametric methods for volatility density estimation
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2009-01-01
Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on
ADN* Density log estimation Using Rockcell*
International Nuclear Information System (INIS)
Okuku, C.; Iloghalu, Emeka. M.; Omotayo, O.
2003-01-01
This work describes the possibility of estimating good density data in zones associated with sliding in a reservoir with the ADN* tool, with or without ADOS in the string, in cases where repeat sections were not done, possibly due to hole stability or directional concerns. This procedure has also been used to obtain better density data in corkscrew holes. Density data (ROBB) was recomputed using a neural network in RockCell* to estimate the density over zones of interest. RockCell* is a Schlumberger software package with neural network functionality that can be used to estimate missing logs using the combination of the responses of other log curves over intervals that are not affected by sliding. In this work, an interval was selected and within this interval twelve litho zones were defined using the unsupervised neural network. From this, a training set was selected based on intervals of very good log responses outside the sliding zones. This training set was used to train and run the neural network for a specific lithostratigraphic interval. The results matched the known good density curve. After this, an estimation of the density curve was done using the supervised neural network. The output from this estimation matched very closely in the good portions of the log, thus providing some density measurements in the sliding zone. This methodology provides a scientific solution to missing data during the process of formation evaluation.
Energy-balanced algorithm for RFID estimation
Zhao, Jumin; Wang, Fangyuan; Li, Dengao; Yan, Lijuan
2016-10-01
RFID has been widely used in various commercial applications, ranging from inventory control and supply chain management to object tracking. It is often necessary to estimate the number of RFID tags deployed in a large area periodically and automatically. Most prior works use passive tags and focus on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. But for an RFID reader to access tags in a large area, active tags are likely to be used due to their longer operational ranges. These tags use their own batteries as the energy supply, so conserving energy for active tags becomes critical. Some prior works have studied how to reduce the energy expenditure of an RFID reader when it reads tag IDs. In this paper, we study how to reduce the amount of energy consumed by active tags during the process of estimating the number of tags in a system, and how to approximately balance the energy consumed by each tag. We design an energy-balanced estimation algorithm that achieves these goals.
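As background, a common frame-based estimator infers the tag count from how many slots stay empty, since each slot stays empty with probability (1 - 1/f)^n. The sketch below is a generic illustration of that idea, not the paper's energy-balanced protocol; the frame size and trial count are arbitrary.

```python
import numpy as np

def estimate_tags(n_true, frame, trials, rng):
    """Empty-slot estimator sketch: each tag picks one slot of a frame uniformly
    at random; the count of empty slots reveals the population size, because
    E[empty] = frame * (1 - 1/frame)**n.  Averaging over trials reduces variance."""
    estimates = []
    for _ in range(trials):
        slots = rng.integers(0, frame, size=n_true)   # each tag's chosen slot
        empty = frame - len(np.unique(slots))
        empty = max(empty, 1)                         # guard the logarithm
        n_hat = np.log(empty / frame) / np.log(1 - 1 / frame)
        estimates.append(n_hat)
    return float(np.mean(estimates))

rng = np.random.default_rng(0)
n_hat = estimate_tags(n_true=5000, frame=4096, trials=50, rng=rng)
```

In an energy-aware design, the cost that matters is how many of those per-trial transmissions each individual tag makes, which is what the paper's algorithm balances.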
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. A probability density estimate is then determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function, which identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic for visualizing the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
Perceived Speech Quality Estimation Using DTW Algorithm
Directory of Open Access Journals (Sweden)
S. Arsenovski
2009-06-01
Full Text Available In this paper a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test speech with the received speech. Several tests have been made on a test speech sample from a single speaker, with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the transmission channel used, and their correlation has been observed.
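The core DTW comparison can be sketched in a few lines of Python (a generic textbook implementation, not the paper's system; a real speech-quality pipeline would compare feature vectors such as MFCCs rather than raw samples):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match along the warping path.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

ref = [0, 1, 2, 3, 2, 1, 0]           # "transmitted" test signal
warped = [0, 0, 1, 2, 3, 2, 1, 0]     # same shape, stretched in time
print(dtw_distance(ref, warped))       # 0.0: warping absorbs the time stretch
```

Because warping absorbs timing differences such as frame delays, the residual distance reflects genuine signal degradation rather than misalignment.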
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Comparison of Pilot Symbol Embedded Channel Estimation Algorithms
Directory of Open Access Journals (Sweden)
P. Kadlec
2009-12-01
Full Text Available In the paper, algorithms for pilot-symbol-embedded channel estimation are compared. Attention is turned to the Least Square (LS) channel estimation and the Sliding Correlator (SC) algorithm. Both algorithms are implemented in Matlab to estimate the Channel Impulse Response (CIR) of a channel exhibiting multi-path propagation. The algorithms are compared from the viewpoint of computational demands and of the influence of Additive White Gaussian Noise (AWGN) and of the embedded pilot symbol on the estimation error of the computed CIR.
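The LS estimator can be sketched with NumPy on a toy multipath channel (a generic illustration; the channel taps, pilot length and noise level below are arbitrary assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-tap channel impulse response and a known BPSK pilot sequence.
h_true = np.array([1.0, 0.5, 0.2])
pilot = rng.choice([-1.0, 1.0], size=64)

# Convolution matrix X so that the received signal is y = X @ h + noise.
L = len(h_true)
X = np.column_stack([np.concatenate([np.zeros(k), pilot[:len(pilot) - k]])
                     for k in range(L)])
y = X @ h_true + 0.01 * rng.standard_normal(len(pilot))   # AWGN

h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)              # LS estimate of the CIR
print(np.allclose(h_ls, h_true, atol=0.05))               # True
```

Raising the AWGN level in `y` shows directly how the noise degrades the CIR estimation error, the trade-off the paper studies.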
Computerized image analysis: estimation of breast density on mammograms
Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
2000-06-01
An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm × 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using the BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
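The final stage above reduces, once the threshold is chosen, to a simple percentage calculation; a toy sketch (hypothetical gray levels, with 0 standing for segmented-out background):

```python
def percent_dense(pixels, threshold):
    """Percentage of breast-region pixels at or above a gray-level threshold,
    mirroring the last stage of the pipeline described above (toy data)."""
    breast = [p for p in pixels if p > 0]     # 0 = background after segmentation
    if not breast:
        return 0.0
    dense = sum(1 for p in breast if p >= threshold)
    return 100.0 * dense / len(breast)

image = [0, 0, 30, 40, 120, 130, 200, 210]    # toy gray levels for one image
print(percent_dense(image, threshold=100))    # 4 of 6 breast pixels -> ~66.7
```

The hard part of the method is, of course, choosing `threshold` automatically from the histogram; the sketch only shows the bookkeeping that follows.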
Estimating snowpack density from Albedo measurement
James L. Smith; Howard G. Halverson
1979-01-01
Snow is a major source of water in Western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is need for a method of estimating the water content of a...
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing
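For readers unfamiliar with the baseline these algorithms generalise, here is a compact DBSCAN sketch for certain (non-uncertain) 2-D points; FDBSCAN and PDBSCAN replace the crisp eps-neighbourhood test below with probability computations:

```python
def dbscan(points, eps, min_pts):
    """Plain DBSCAN on 2-D points: grow clusters from core points whose
    eps-neighbourhood contains at least min_pts points; label noise as -1."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if (points[i][0] - q[0]) ** 2 + (points[i][1] - q[1]) ** 2 <= eps * eps]

    labels = {}
    cluster = 0
    for i in range(len(points)):
        if i in labels:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise, may later become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels.get(j) == -1:
                labels[j] = cluster        # re-claim a noise point as border
                continue
            if j in labels:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:         # j is itself a core point: expand
                queue.extend(nj)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=2)
print(labels[6] == -1 and labels[0] != labels[3])   # True: noise + two clusters
```

The uncertain-data variants above keep this overall structure but make the core-point and reachability tests probabilistic.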
Automatic bounding estimation in modified NLMS algorithm
International Nuclear Information System (INIS)
Shahtalebi, K.; Doost-Hoseini, A.M.
2002-01-01
The Modified Normalized Least Mean Square algorithm, a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which is unavailable in most applications. For a special but popular case of measurement noise, a simple algorithm has been proposed. Through several simulation examples, the performance of the algorithm is compared with the Modified Normalized Least Mean Square
International Nuclear Information System (INIS)
Brown, M.L.; Savage, D.J.
1986-04-01
The application of density measurement to heavy metal monitoring in the solvent phase is described, including practical experience gained during three fast reactor fuel reprocessing campaigns. An experimental algorithm relating heavy metal concentration and sample density was generated from laboratory-measured density data for uranyl nitrate dissolved in nitric-acid-loaded tri-butyl phosphate in odourless kerosene. Differences in odourless kerosene batch densities are mathematically interpolated, and the algorithm can be used to estimate heavy metal concentrations from the density to within ±1.5 g/l. An Anton Paar calculating digital densimeter with remote cell operation was used for all density measurements, but the algorithm will give similar accuracy with any density measuring device capable of a precision of better than 0.0005 g/cm³. For plant control purposes, the algorithm was simplified using a density referencing system, whereby the density of solvent not yet loaded with heavy metal is subtracted from the sample density. This simplified algorithm compares very favourably with empirical algorithms derived from numerical analysis of density data and chemically measured uranium and plutonium data obtained during fuel reprocessing campaigns, particularly when differences in the acidity of the solvent before and after loading with heavy metal are considered. This simplified algorithm has been successfully used for plant control of heavy metal loaded solvent during four fast reactor fuel reprocessing campaigns. (author)
Infrared thermography for wood density estimation
López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis
2018-03-01
Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of the surface temperature without making contact using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber to be determined by IRT. The cooling of various wood samples has been registered, and a statistical procedure that enables one to quantitatively estimate the density of timber has been designed. This procedure represents a new method to physically characterize this material.
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Implementation of several mathematical algorithms to breast tissue density classification
International Nuclear Information System (INIS)
Quintana, C.; Redondo, M.; Tirao, G.
2014-01-01
The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, where a dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and the performance of different mathematical algorithms designed to standardize the categorization of mammographic images, according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic properties calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The algorithms evaluation was performed on 100 cases of the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with the expert medical diagnostics, showing a good performance. The implemented algorithms revealed a high potential to classify breasts into tissue density categories. - Highlights: • Breast density classification can be obtained by suitable mathematical algorithms. • Mathematical processing helps radiologists to obtain the BI-RADS classification. • The entropy and joint entropy show high performance for density classification
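Mutual information, one of the comparison-with-reference measures listed, can be computed for toy gray-level data as follows (a generic stdlib sketch with hypothetical 1-D "images", not the paper's code):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (bits) between two equal-length gray-level pixel lists."""
    qa = [min(p * bins // 256, bins - 1) for p in img_a]   # quantize gray levels
    qb = [min(p * bins // 256, bins - 1) for p in img_b]
    n = len(qa)
    pa, pb, pab = Counter(qa), Counter(qb), Counter(zip(qa, qb))
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

flat = [128] * 64                  # stand-in for an ideal homogeneous image
dense = [200] * 32 + [60] * 32     # strongly bimodal gray-level pattern
print(mutual_information(dense, dense) > mutual_information(dense, flat))  # True
```

A homogeneous reference shares no information with a structured image, so the measure separates density patterns from the ideal image, which is the intuition behind using it as a categorization parameter.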
Multivariate density estimation theory, practice, and visualization
Scott, David W
2015-01-01
David W. Scott, PhD, is Noah Harding Professor in the Department of Statistics at Rice University. The author of over 100 published articles, papers, and book chapters, Dr. Scott is also Fellow of the American Statistical Association (ASA) and the Institute of Mathematical Statistics. He is recipient of the ASA Founder's Award and the Army Wilks Award. His research interests include computational statistics, data visualization, and density estimation. Dr. Scott is also Coeditor of Wiley Interdisciplinary Reviews: Computational Statistics and previous Editor of the Journal of Computational and
Algorithms for Brownian first-passage-time estimation
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
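A Monte Carlo stand-in illustrates the quantity being estimated (this is plain discrete-time random-walk sampling on a lattice, not the paper's discrete-space continuous-time algorithm):

```python
import random

random.seed(2)

def mfpt_random_walk(start, length, trials=20000):
    """Monte Carlo mean first-passage time for a symmetric random walk on
    {0, ..., length} with absorbing ends (exact answer: start * (length - start))."""
    total = 0
    for _ in range(trials):
        x, t = start, 0
        while 0 < x < length:
            x += random.choice((-1, 1))
            t += 1
        total += t
    return total / trials

est = mfpt_random_walk(start=3, length=10)
print(abs(est - 21) < 1.0)    # True: estimate is close to the exact MFPT of 21
```

The exact lattice answer here is independent of step-time details, which is the kind of lattice-spacing robustness the paper establishes for its continuous-time scheme on linear potentials.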
Towards the compression of parton densities through machine learning algorithms
Carrazza, Stefano
2016-01-01
One of the most fascinating challenges in the context of parton density functions (PDFs) is the determination of the best combined PDF uncertainty from individual PDF sets. Since 2014, multiple methodologies have been developed to achieve this goal. In these proceedings we first summarize the strategy adopted by the PDF4LHC15 recommendation and then discuss a new approach to Monte Carlo PDF compression based on clustering through machine learning algorithms.
Mammography density estimation with automated volumetric breast density measurement
International Nuclear Information System (INIS)
Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung
2014-01-01
To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.
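The kappa statistic used for the agreement analysis can be sketched as follows (Cohen's kappa on toy density grades, not the study's data):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for agreement between two raters:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)     # by chance
    return (po - pe) / (1 - pe)

rad =  ["D1", "D2", "D2", "D3", "D4", "D3", "D2", "D1"]   # hypothetical grades
auto = ["D1", "D2", "D3", "D3", "D4", "D4", "D2", "D2"]
print(round(cohen_kappa(rad, auto), 3))   # 0.489
```

Values around 0.2-0.4 are conventionally read as "fair" and 0.4-0.6 as "moderate", matching the interpretation of the k values reported above.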
EuroMInd-D: A Density Estimate of Monthly Gross Domestic Product for the Euro Area
DEFF Research Database (Denmark)
Proietti, Tommaso; Marczak, Martyna; Mazzi, Gianluigi
EuroMInd-D is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom–up approach, pooling the density estimates of eleven GDP components, by output and expenditure type. The components density estimates are obtained from a medium-size dynamic factor model...... of a set of coincident time series handling mixed frequencies of observation and ragged–edged data structures. They reflect both parameter and filtering uncertainty and are obtained by implementing a bootstrap algorithm for simulating from the distribution of the maximum likelihood estimators of the model...
Sequential bayes estimation algorithm with cubic splines on uniform meshes
International Nuclear Information System (INIS)
Hossfeld, F.; Mika, K.; Plesser-Walk, E.
1975-11-01
After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de]
MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming
Directory of Open Access Journals (Sweden)
Yuteng Xiao
2017-01-01
Full Text Available Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix, and it degrades dramatically when the covariance matrix is inaccurate. To solve this problem, after studying the beamforming array signal model and the MVDR beamforming algorithm, we improve the MVDR algorithm with estimated diagonal loading. An MVDR optimization model based on diagonal loading compensation is established, and the interval of the diagonal loading compensation value is deduced on the basis of matrix theory. The optimal diagonal loading value within the interval is then determined experimentally. The experimental results show that the algorithm, compared with existing algorithms, is practical and effective.
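A minimal NumPy sketch of MVDR with a fixed diagonal loading term shows the mechanism (the array geometry, interferer scenario and loading value are illustrative assumptions, not the paper's estimated value):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 8                                     # hypothetical ULA, half-wavelength spacing

def steer(theta):
    """Steering vector for arrival angle theta (radians from broadside)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

def mvdr_weights(R, a, loading=0.0):
    """MVDR weights with diagonal loading: w = (R + dI)^-1 a / (a^H (R + dI)^-1 a)."""
    Ri_a = np.linalg.solve(R + loading * np.eye(M), a)
    return Ri_a / (a.conj() @ Ri_a)

a_sig = steer(0.0)                        # look direction
a_int = steer(0.5)                        # interferer direction
# A sample covariance from only 20 snapshots is inaccurate; loading stabilises it.
S = (a_int[:, None] * (3 * rng.standard_normal(20))
     + 0.1 * (rng.standard_normal((M, 20)) + 1j * rng.standard_normal((M, 20))))
R_hat = S @ S.conj().T / 20

w = mvdr_weights(R_hat, a_sig, loading=0.1)
print(abs(w.conj() @ a_sig))              # ~1.0: distortionless constraint holds
print(abs(w.conj() @ a_int) < 0.1)        # True: interferer strongly attenuated
```

The paper's contribution is choosing the loading value from an estimated interval rather than fixing it by hand as done here.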
Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
Haidong Xu; Mingyan Jiang; Kun Xu
2015-01-01
The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.
A Motion Estimation Algorithm Using DTCWT and ARPS
Directory of Open Access Journals (Sweden)
Unan Y. Oktiawati
2013-09-01
Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT and the Adaptive Rood Pattern Search (ARPS block is presented. The proposed algorithm first transforms each video sequence with DTCWT. The frame n of the video sequence is used as a reference input and the frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that the PSNR can be improved for mobile devices without degrading visual quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system used in Section 6.
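Block matching, which ARPS accelerates, can be sketched with an exhaustive SAD search (a generic baseline in Python; ARPS itself adds the adaptive rood pattern and motion-vector prediction on top of this):

```python
def block_match(ref, cur, bx, by, bsize, search):
    """Exhaustive sum-of-absolute-differences (SAD) block search: find the
    displacement (dx, dy) of the block at (bx, by) in `cur` within `ref`."""
    h, w = len(ref), len(ref[0])
    best = (float("inf"), (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = by + dy, bx + dx
            if not (0 <= y0 and y0 + bsize <= h and 0 <= x0 and x0 + bsize <= w):
                continue                      # candidate block out of frame
            sad = sum(abs(cur[by + i][bx + j] - ref[y0 + i][x0 + j])
                      for i in range(bsize) for j in range(bsize))
            best = min(best, (sad, (dx, dy)))
    return best[1]

# Toy frames: a bright 2x2 block moves one pixel right between frames.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for i in (3, 4):
    for j in (2, 3):
        ref[i][j] = 255
    for j in (3, 4):
        cur[i][j] = 255
print(block_match(ref, cur, bx=3, by=3, bsize=2, search=2))   # (-1, 0)
```

ARPS reaches the same vector while probing only a handful of candidate positions, which is where the speed-up for mobile devices comes from.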
Algorithm of the managing systems state estimation
Directory of Open Access Journals (Sweden)
Skubilin M. D.
2010-02-01
Full Text Available The possibility of electronic estimation of the state of automatic and automated control systems is analyzed. An estimation of the current state (functional readiness) of the technical equipment and the human operator as an integrated system allows adequate measures to be taken promptly to prevent, or at least minimise, the consequences of the system's transition into an off-nominal state. The proposed method is fairly universal and can be recommended for handling abnormal situations in transport, mainly in aviation.
Concrete density estimation by rebound hammer method
International Nuclear Information System (INIS)
Ismail, Mohamad Pauzi bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin
2016-01-01
Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked when determining concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X-ray and gamma radiation are effectively absorbed by materials with high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite
Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties...... are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions....
Orientation estimation algorithm applied to high-spin projectiles
International Nuclear Information System (INIS)
Long, D F; Lin, J; Zhang, X M; Li, J
2014-01-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm. (paper)
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
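The Pilot-Guided estimator described above reduces to two sample moments; a stdlib-Python sketch on synthetic BPSK pilots (the amplitude and noise level are arbitrary test values, and the pilot stands in for the ASM):

```python
import random

random.seed(4)

# Known BPSK pilot sequence p, received y = A*p + n with AWGN.
A_true, sigma = 1.0, 0.5                 # arbitrary test values
pilot = [random.choice((-1.0, 1.0)) for _ in range(100000)]
rx = [A_true * p + random.gauss(0.0, sigma) for p in pilot]

# ML amplitude: mean inner product of received and known sequences.
A_hat = sum(y * p for y, p in zip(rx, pilot)) / len(rx)
# Noise variance: mean of the squared received sequence minus amplitude squared.
var_hat = sum(y * y for y in rx) / len(rx) - A_hat ** 2
ratio = A_hat / var_hat                  # the combining ratio

print(abs(A_hat - 1.0) < 0.01, abs(var_hat - 0.25) < 0.01)   # True True
```

Both moments are cheap running averages, which is why the method is simple; the latency cost comes from needing enough known symbols for the averages to settle.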
Error Estimation for the Linearized Auto-Localization Algorithm
Directory of Open Access Journals (Sweden)
Fernando Seco
2012-02-01
Full Text Available The Linearized Auto-Localization (LAL algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs, using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL, the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
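The linearized trilateration step that LAL builds on can be sketched with NumPy (a generic textbook linearization on synthetic beacons, not the LAL code itself):

```python
import numpy as np

def trilaterate(beacons, dists):
    """Linearized trilateration: subtracting the first range equation from the
    rest turns ||x - b_i||^2 = d_i^2 into the linear system 2(b_i - b_0) x = c_i."""
    b0, d0 = beacons[0], dists[0]
    A = 2.0 * (beacons[1:] - b0)
    c = (d0 ** 2 - dists[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(b0 ** 2))
    x, *_ = np.linalg.lstsq(A, c, rcond=None)
    return x

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
mobile = np.array([3.0, 4.0])                       # hypothetical mobile node
dists = np.linalg.norm(beacons - mobile, axis=1)    # noiseless range measurements
print(np.allclose(trilaterate(beacons, dists), mobile))   # True
```

With noisy ranges the solve no longer returns the exact position, and quantifying how the range errors propagate through this linear system is precisely what the paper's first-order Taylor analysis addresses.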
Estimation error algorithm at analysis of beta-spectra
International Nuclear Information System (INIS)
Bakovets, N.V.; Zhukovskij, A.I.; Zubarev, V.N.; Khadzhinov, E.M.
2005-01-01
This work describes an algorithm for estimating the errors that arise in operations on beta spectra, and compares theoretical and experimental errors in the processing of beta-channel data. (authors)
Using transformation algorithms to estimate (co)variance ...
African Journals Online (AJOL)
REML) procedures by a diagonalization approach is extended to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of ...
An Algorithm for Induction Motor Stator Flux Estimation
Directory of Open Access Journals (Sweden)
STOJIC, D. M.
2012-08-01
Full Text Available A new method for induction motor stator flux estimation, used in sensorless IM drive applications, is presented in this paper. The proposed algorithm advantageously solves problems associated with pure integration, commonly used for stator flux estimation. An observer-based structure is proposed, based on the stationary state of the stator flux vector, in order to eliminate the undesired DC offset component present in integrator-based stator flux estimates. A set of simulation runs shows that the proposed algorithm yields DC-offset-free stator flux estimates for both low and high stator frequency induction motor operation.
Iterative importance sampling algorithms for parameter estimation
Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.
2016-01-01
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
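A self-normalized importance-sampling estimator of a posterior mean can be sketched as follows; the toy densities are illustrative, and each sample is drawn independently, which is what gives the near-perfect parallel scaling noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_posterior_mean(log_post, proposal_sample, log_proposal, n=100_000):
    """Self-normalized importance-sampling estimate of a posterior mean.

    Samples come independently from the proposal (embarrassingly
    parallel), then are reweighted by posterior/proposal."""
    x = proposal_sample(n)
    logw = log_post(x) - log_proposal(x)
    w = np.exp(logw - logw.max())          # stabilize before exponentiating
    return np.sum(w * x) / np.sum(w)

# Toy posterior: N(2, 1); proposal: N(0, 3) -- both hypothetical
log_post = lambda x: -0.5 * (x - 2.0) ** 2
proposal_sample = lambda n: rng.normal(0.0, 3.0, n)
log_proposal = lambda x: -0.5 * (x / 3.0) ** 2
est = importance_posterior_mean(log_post, proposal_sample, log_proposal)
```

The estimate converges to the posterior mean (here 2.0) as the sample count grows, provided the proposal covers the posterior's support.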
Validation of Core Temperature Estimation Algorithm
2016-01-20
Based on an extended Kalman filter, which was developed using field data from 17 young male U.S. Army soldiers with core temperatures ranging from ... The record also preserves a fragment of the MATLAB implementation:
... CTstart, v)
%KFMODEL estimate core temperature from heart rate with Kalman filter
% This version supports both batch mode (operate on entire HR time...
CTstart = 37.1; % degrees Celsius
end
if nargin < 3
    v = 0;
end
%Extended Kalman Filter Parameters
a = 1; gamma = 0.022^2; b_0 = -7887.1; b_1
Application of genetic algorithms for parameter estimation in liquid chromatography
International Nuclear Information System (INIS)
Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes
2012-01-01
In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost. These methods are iterative processes that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated
Power system static state estimation using Kalman filter algorithm
Directory of Open Access Journals (Sweden)
Saikia Anupam
2016-01-01
Full Text Available State estimation of a power system is an important tool for the operation, analysis and forecasting of electric power systems. In this paper, a Kalman filter algorithm is presented for static estimation of power system state variables. The IEEE 14-bus system is employed to check the accuracy of this method. A Newton-Raphson load flow study is first carried out on the test system, and a set of data from the output of the load flow program is taken as measurement input. Measurement inputs are simulated by adding Gaussian noise of zero mean. The results of Kalman estimation are compared with the traditional Weighted Least Squares (WLS) method, and it is observed that the Kalman filter algorithm is numerically more efficient than the traditional WLS method. Estimation accuracy is also tested in the presence of parametric error in the system. In addition, the numerical stability of the Kalman filter algorithm is tested by considering the inclusion of zero-mean errors in the initial estimates.
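For a static (constant) state with no process noise, the Kalman measurement update reduces to a recursive least-squares average, which is easy to sketch; the scalar setup below is purely illustrative, not the IEEE 14-bus formulation.

```python
import numpy as np

def kalman_static(z, r, p0=1e6, x0=0.0):
    """Kalman filter for a static (constant) scalar state observed in noise.

    With no process noise the filter reduces to a recursive
    least-squares average of the measurements."""
    x, p = x0, p0
    for zk in z:
        k = p / (p + r)        # Kalman gain
        x = x + k * (zk - x)   # measurement update
        p = (1 - k) * p        # covariance update
    return x

rng = np.random.default_rng(1)
true_state = 1.05  # e.g. a bus voltage magnitude in p.u. (hypothetical)
z = true_state + rng.normal(0, 0.02, 500)   # zero-mean Gaussian noise
x_hat = kalman_static(z, r=0.02**2)
```

With 500 measurements of standard deviation 0.02, the estimate lands within about a milli-unit of the true state.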
On Improving Convergence Rates for Nonnegative Kernel Density Estimators
Terrell, George R.; Scott, David W.
1980-01-01
To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-\frac{4}{5}})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available In this study, parameter identification of a damped compound pendulum system is carried out using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve parameter identification of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses of the models. Finally, a comparative study is conducted between BA and a conventional estimation method (least squares). Based on the results obtained, the MSE produced by the Bat Algorithm outperforms that of the Least Squares (LS) method.
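The ARX estimation step can be sketched with the conventional least-squares baseline the study compares against; the first-order model, excitation and noise level below are illustrative stand-ins for the pendulum data.

```python
import numpy as np

def estimate_arx(u, y):
    """Least-squares estimate of a first-order ARX model
    y[k] = -a1*y[k-1] + b1*u[k-1] + e[k].
    The paper searches the same parameter vector with the Bat
    Algorithm; least squares is the conventional baseline."""
    phi = np.column_stack([-y[:-1], u[:-1]])   # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta                               # [a1, b1]

rng = np.random.default_rng(2)
u = rng.standard_normal(2000)                  # PRBS-like excitation
y = np.zeros(2000)
for k in range(1, 2000):
    y[k] = -0.7 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
a1, b1 = estimate_arx(u, y)
```

With persistent excitation and modest noise, least squares recovers the true parameters (0.7, 0.5) closely.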
Algorithms for estimating blood velocities using ultrasound
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2000-01-01
Ultrasound has been used intensively for the last 15 years for studying the hemodynamics of the human body. Systems for determining both the velocity distribution at one point of interest (spectral systems) and for displaying a map of velocity in real time have been constructed. A number of schemes...... have been developed for performing the estimation, and the various approaches are described. The current systems only display the velocity along the ultrasound beam direction and a velocity transverse to the beam is not detected. This is a major problem in these systems, since most blood vessels...... are parallel to the skin surface. Angling the transducer will often disturb the flow, and new techniques for finding transverse velocities are needed. The various approaches for determining transverse velocities will be explained. This includes techniques using two-dimensional correlation (speckle tracking...
Algorithms for non-linear M-estimation
DEFF Research Database (Denmark)
Madsen, Kaj; Edlund, O; Ekblom, H
1997-01-01
In non-linear regression, the least squares method is most often used. Since this estimator is highly sensitive to outliers in the data, alternatives have became increasingly popular during the last decades. We present algorithms for non-linear M-estimation. A trust region approach is used, where...
Estimating diurnal primate densities using distance sampling ...
African Journals Online (AJOL)
SARAH
2016-03-31
Mar 31, 2016 ... In the second session, we used 10 transect adjusted to transect (Grid 17 ... session transect was visited 20 times while at the second session transect ... probability, the density of the group and the group size of each species ...
A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2013-01-01
representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...
Applicability of genetic algorithms to parameter estimation of economic models
Directory of Open Access Journals (Sweden)
Marcel Ševela
2004-01-01
Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods, and simultaneously search for the parameters of the genetic algorithm that maximize the effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, and the lives of all generations of individuals are governed by a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because it correlates positively with the effectiveness of the genetic algorithm across the whole range under study, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, followed by the generation size; the sensitivity to the elitism rate is not as strong.
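A minimal real-coded genetic algorithm with the mutation and elitism rates reported above might look as follows; the operators are generic, not the paper's binary-chromosome implementation, and the objective is a toy function rather than a demand model.

```python
import numpy as np

rng = np.random.default_rng(3)

def ga_minimize(f, bounds, pop=60, gens=120, mut_rate=0.15, elite_frac=0.2):
    """Minimal real-coded genetic algorithm with elitism.

    mut_rate and elite_frac mirror the rates the paper found effective
    (15% mutation, 20% elitism); everything else is generic."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, pop)
    n_elite = int(elite_frac * pop)
    for _ in range(gens):
        order = np.argsort(f(x))
        elite = x[order[:n_elite]]                        # elitism
        parents = elite[rng.integers(0, n_elite, pop - n_elite)]
        children = parents + rng.normal(0, 0.1 * (hi - lo), pop - n_elite)
        mutate = rng.random(pop - n_elite) < mut_rate     # mutation
        children[mutate] += rng.normal(0, 0.5, mutate.sum())
        x = np.clip(np.concatenate([elite, children]), lo, hi)
    return x[np.argmin(f(x))]

best = ga_minimize(lambda x: (x - 3.0) ** 2 + 1.0, (0.0, 10.0))
```

Because elites are carried over unchanged, the best solution found never worsens between generations.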
Efficient AM Algorithms for Stochastic ML Estimation of DOA
Directory of Open Access Journals (Sweden)
Haihua Chen
2016-01-01
Full Text Available The estimation of the direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. Many algorithms have been proposed to solve this problem, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm, with two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it avoids the numerical instability that may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
Transmission dose estimation algorithm for in vivo dosimetry
International Nuclear Information System (INIS)
Yun, Hyong Geun; Shin, Kyo Chul; Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo
2002-01-01
Measurement of transmission dose is useful for in vivo dosimetry for QA purposes. The objective of this study is to develop an algorithm for estimating tumor dose using measured transmission dose for open radiation fields. Transmission dose was measured with various field sizes (FS), phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. The source-to-chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer-type ion chamber. Using the measured data and regression analysis, an algorithm was developed for estimating the expected reading of transmission dose. The accuracy of the algorithm was tested with a flat solid phantom in various settings. The algorithm consists of a quadratic function of log(A/P) (where A/P is the area-perimeter ratio) and a third-order (cubic) function of PCD. The algorithm could estimate dose with very high accuracy for open square fields, with errors within ±0.5%. For elongated radiation fields, the errors were limited to ±1.0%. The developed algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings
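The reported functional form — a quadratic in log(A/P) combined with a third-order polynomial in PCD — can be sketched as an ordinary least-squares fit. An additive combination of the two polynomials and all coefficient values below are assumptions for illustration, not the published fit.

```python
import numpy as np

def design_matrix(ap_ratio, pcd):
    """Regression basis: quadratic in log(A/P) plus a cubic in PCD,
    the functional form reported for the transmission-dose model.
    (An additive combination is assumed here for illustration.)"""
    L = np.log(ap_ratio)
    return np.column_stack([np.ones_like(L), L, L**2, pcd, pcd**2, pcd**3])

rng = np.random.default_rng(4)
ap = rng.uniform(1.0, 10.0, 200)     # area-perimeter ratio (cm), synthetic
pcd = rng.uniform(10.0, 50.0, 200)   # phantom-chamber distance (cm), synthetic
true_c = np.array([1.0, 0.2, -0.03, 0.01, -2e-4, 1e-6])  # hypothetical
reading = design_matrix(ap, pcd) @ true_c
coef, *_ = np.linalg.lstsq(design_matrix(ap, pcd), reading, rcond=None)
```

On noise-free synthetic readings the least-squares fit reproduces the readings essentially exactly.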
Transmission dose estimation algorithm for in vivo dosimetry
Energy Technology Data Exchange (ETDEWEB)
Yun, Hyong Geun; Shin, Kyo Chul [Dankook Univ., Seoul (Korea, Republic of); Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan [Seoul National Univ., Seoul (Korea, Republic of); Lee, Hyoung Koo [Catholic Univ., Seoul (Korea, Republic of)
2002-07-01
Measurement of transmission dose is useful for in vivo dosimetry for QA purposes. The objective of this study is to develop an algorithm for estimating tumor dose using measured transmission dose for open radiation fields. Transmission dose was measured with various field sizes (FS), phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. The source-to-chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer-type ion chamber. Using the measured data and regression analysis, an algorithm was developed for estimating the expected reading of transmission dose. The accuracy of the algorithm was tested with a flat solid phantom in various settings. The algorithm consists of a quadratic function of log(A/P) (where A/P is the area-perimeter ratio) and a third-order (cubic) function of PCD. The algorithm could estimate dose with very high accuracy for open square fields, with errors within ±0.5%. For elongated radiation fields, the errors were limited to ±1.0%. The developed algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings.
Nonparametric volatility density estimation for discrete time models
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2005-01-01
We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
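The encounter-rate idea can be sketched by simulating independent random walkers on a torus grid and counting co-occupancies; in the well-mixed limit the per-agent encounter rate approaches the density. This simulation is purely illustrative, not the paper's analysis of correlated collisions.

```python
import numpy as np

rng = np.random.default_rng(5)

def encounter_rate(n_agents, grid, steps=2000):
    """Average per-step rate at which a focal agent shares a cell with
    another, for random walkers on a torus grid. In the well-mixed
    limit this approaches the density n_agents/grid**2."""
    pos = rng.integers(0, grid, size=(n_agents, 2))
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
    encounters = 0
    for _ in range(steps):
        pos = (pos + moves[rng.integers(0, 4, n_agents)]) % grid
        cells = pos[:, 0] * grid + pos[:, 1]
        _, counts = np.unique(cells, return_counts=True)
        encounters += np.sum(counts * (counts - 1))  # ordered pairs per cell
    return encounters / (n_agents * steps)

rate = encounter_rate(100, 20)   # true density = 100/400 = 0.25
```

The measured rate tracks the density closely, which is the phenomenon the paper analyzes rigorously despite the collision dependencies.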
A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm
Directory of Open Access Journals (Sweden)
Sajidha Syed Azimuddin
2017-01-01
Full Text Available Open issues with respect to the K-means algorithm include identifying the number of clusters, initial seed selection, clustering tendency, handling empty clusters, and identifying outliers. In this paper we propose a novel and simple technique that considers both the density and the distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from the different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single-pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are among the actual concepts and not the mean of representative concepts, as is the case in many other algorithms. We have implemented the proposed algorithm and compared the results with the interval-based technique of Fouad Khan; our method outperforms the interval-based method. We have also compared our method with the original random K-means and K-means++ algorithms.
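A simplified sketch of density-plus-distance seeding: pick the densest point first, then the densest point sufficiently far from every chosen seed, so that seeds land in different clusters. The paper's single-pass procedure differs in detail, and the radius parameter here is an assumption for illustration.

```python
import numpy as np

def density_distance_seeds(X, k, radius):
    """Pick k initial seeds favouring high local density and mutual
    separation: the densest point first, then the densest point among
    those farther than `radius` from every chosen seed. Simplified
    sketch of density-plus-distance seeding."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = (d < radius).sum(axis=1)         # neighbours within radius
    seeds = [int(np.argmax(density))]
    while len(seeds) < k:
        far = d[:, seeds].min(axis=1) > radius
        cand = np.where(far, density, -1)      # exclude points near seeds
        seeds.append(int(np.argmax(cand)))
    return X[seeds]

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
seeds = density_distance_seeds(X, 2, radius=1.0)
```

On two well-separated blobs the two seeds come from different clusters, which is the property the paper's method guarantees for the K-means initialization.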
Information geometry of density matrices and state estimation
International Nuclear Information System (INIS)
Brody, Dorje C
2011-01-01
Given a pure state vector |x) and a density matrix ρ-hat, the function p(x|ρ-hat) = (x|ρ-hat|x) defines a probability density on the space of pure states parameterised by density matrices. The associated Fisher-Rao information measure is used to define a unitary invariant Riemannian metric on the space of density matrices. An alternative derivation of the metric, based on square-root density matrices and trace norms, is provided. This is applied to the problem of quantum-state estimation. In the simplest case of unitary parameter estimation, new higher-order corrections to the uncertainty relations, applicable to general mixed states, are derived. (fast track communication)
Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation
Directory of Open Access Journals (Sweden)
Namyong Kim
2016-06-01
Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and of the robustness properties against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized by the power of the input entropy, which is estimated recursively to reduce computational complexity. The proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and faster convergence speed than the original MEE algorithm in the equalization simulation. At the same convergence speed, its performance enhancement in steady-state MSE is above 3 dB.
Data-driven algorithm to estimate friction in automobile engine
DEFF Research Database (Denmark)
Stotsky, Alexander A.
2010-01-01
Algorithms based on the oscillations of the engine angular rotational speed under fuel cutoff and no-load were proposed for estimation of the engine friction torque. The recursive algorithm to restore the periodic signal is used to calculate the amplitude of the engine speed signal at fuel cutoff....... The values of the friction torque in the corresponding table entries are updated at acquiring new measurements of the friction moment. A new, data-driven algorithm for table adaptation on the basis of stepwise regression was developed and verified using the six-cylinder Volvo engine....
Pose estimation for augmented reality applications using genetic algorithm.
Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen
2005-12-01
This paper describes a genetic algorithm that tackles the pose-estimation problem in computer vision. Our genetic algorithm can find the rotation and translation of an object accurately when the three-dimensional structure of the object is given. In our implementation, each chromosome encodes both the pose and the indexes to the selected point features of the object. Instead of only searching for the pose as in the existing work, our algorithm at the same time searches for a set containing the most reliable feature points in the process. This mismatch filtering strategy successfully makes the algorithm more robust in the presence of point mismatches and outliers in the images. Our algorithm has been tested with both synthetic and real data with good results. The accuracy of the recovered pose is compared to existing algorithms. Our approach outperformed Lowe's method and two other genetic algorithms in the presence of point mismatches and outliers. In addition, it has been used to estimate the pose of a real object. It is shown that the proposed method is applicable to augmented reality applications.
A flexible fuzzy regression algorithm for forecasting oil consumption estimation
International Nuclear Information System (INIS)
Azadeh, A.; Khakestani, M.; Saberi, M.
2009-01-01
Oil consumption plays a vital role in the socio-economic development of most countries. This study presents a flexible fuzzy regression algorithm for forecasting oil consumption based on standard economic indicators: annual population, cost of crude oil imports, gross domestic product (GDP) and annual oil production in the last period. The proposed algorithm uses analysis of variance (ANOVA) to select either fuzzy regression or conventional regression for future demand estimation. The significance of the proposed algorithm is threefold. First, it is flexible and identifies the best model based on the results of ANOVA and the minimum mean absolute percentage error (MAPE), whereas previous studies chose the best-fitted fuzzy regression model based on MAPE or other relative error results. Second, the proposed model may identify conventional regression as the best model for future oil consumption forecasting because of its dynamic structure, whereas previous studies assumed that fuzzy regression always provides the best solutions and estimates. Third, it utilizes the most standard independent variables for the regression models. To show the applicability and superiority of the proposed flexible fuzzy regression algorithm, data for oil consumption in Canada, the United States, Japan and Australia from 1990 to 2005 are used. The results show that the flexible algorithm provides accurate solutions to the oil consumption estimation problem. The algorithm may be used by policy makers to accurately foresee the behavior of oil consumption in various regions.
Global stereo matching algorithm based on disparity range estimation
Li, Jing; Zhao, Hong; Gu, Feifei
2017-09-01
Global stereo matching algorithms achieve high accuracy in disparity map estimation, but the time consumed in the optimization process remains a burden, especially for image pairs with high resolution and large baseline settings. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed in this paper to estimate the disparity map of rectified stereo images. The projective geometry of a parallel binocular stereo vision system is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long baseline setting from the one estimated in the short setting. The drastically reduced disparity ranges at each pixel under a long baseline setting can then be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces the computational cost without loss of accuracy in stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results on the Middlebury stereo datasets demonstrate the validity and efficiency of the proposed algorithm.
Current Source Density Estimation for Single Neurons
Directory of Open Access Journals (Sweden)
Dorottya Cserpán
2014-03-01
Full Text Available Recent developments of multielectrode technology made it possible to measure the extracellular potential generated in the neural tissue with spatial precision on the order of tens of micrometers and on a submillisecond time scale. Combining such measurements with imaging of single neurons within the studied tissue opens up new experimental possibilities for estimating the distribution of current sources along a dendritic tree. In this work we show that if we are able to relate part of the recording of extracellular potential to a specific cell of known morphology, we can estimate the spatiotemporal distribution of transmembrane currents along it. We present here an extension of the kernel CSD method (Potworowski et al., 2012) applicable in such cases. We test it on several model neurons of progressively complicated morphologies, from ball-and-stick to realistic, up to the analysis of simulated neuron activity embedded in a substantial working network (Traub et al., 2005). We discuss the caveats and possibilities of this new approach.
Estimation of electricity demand of Iran using two heuristic algorithms
International Nuclear Information System (INIS)
Amjadi, M.H.; Nezamabadi-pour, H.; Farsangi, M.M.
2010-01-01
This paper deals with the estimation of the electricity demand of Iran based on economic indicators, using the Particle Swarm Optimization (PSO) algorithm. The estimation is based on Gross Domestic Product (GDP), population, number of customers and average electricity price, using two different estimation models: a linear model and a non-linear model. The proposed models are obtained from actual data for 21 years (1980-2000). The models are then used to estimate the electricity demand of the target years (2001-2006), and the results are compared with the actual demand during this period. Furthermore, to validate the results obtained by PSO, a genetic algorithm (GA) is applied to solve the problem. The results show that PSO is a useful optimization tool for solving the problem with the two developed models and can be used as an alternative solution to estimate future electricity demand.
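A minimal global-best PSO, applied to fitting a toy linear demand model, can be sketched as follows. The inertia and acceleration coefficients are textbook defaults, not the paper's tuning, and the indicators and data are synthetic stand-ins for the GDP/population series used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_minimize(f, dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)]
    return g

# Fit a linear demand model E = w0 + w1*GDP + w2*population (synthetic data)
gdp, pop = rng.uniform(1, 3, 30), rng.uniform(1, 2, 30)
demand = 0.5 + 1.2 * gdp + 0.8 * pop
sse = lambda p: np.sum((p[0] + p[1] * gdp + p[2] * pop - demand) ** 2)
w_hat = pso_minimize(sse, 3)
```

The swarm drives the sum of squared errors of the fitted model close to zero on this noise-free data.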
Kernel bandwidth estimation for non-parametric density estimation: a comparative study
CSIR Research Space (South Africa)
Van der Walt, CM
2013-12-01
Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old per 100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and
Flux estimation algorithms for electric drives: a comparative study
Koteich , Mohamad
2016-01-01
International audience; This paper reviews the stator flux estimation algorithms applied to the alternating current motor drives. The so-called voltage model estimation, which consists of integrating the back-electromotive force signal, is addressed. However, in practice, the pure integration is prone to drift problems due to noises, measurement error, stator resistance uncertainty and unknown initial conditions. This limitation becomes more restrictive at low speed operation. Several soluti...
A Fast DOA Estimation Algorithm Based on Polarization MUSIC
Directory of Open Access Journals (Sweden)
R. Guo
2015-04-01
Full Text Available A fast DOA estimation algorithm developed from MUSIC, which also benefits from processing the signals' polarization information, is presented. Besides performance enhancement in precision and resolution, the proposed algorithm can be applied to various forms of polarization-sensitive arrays, without specific requirements on the array's pattern. By exploiting the continuity of the spatial spectrum, a huge amount of computation incurred in calculating the 4-D spatial spectrum is averted. The performance and computational complexity of the proposed algorithm are analyzed and simulation results are presented. Compared with conventional MUSIC, the proposed algorithm has a considerable advantage in precision and resolution, with a low computational complexity proportional to that of a conventional 2-D MUSIC.
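The conventional (non-polarization) MUSIC baseline referred to above can be sketched for a uniform linear array; the array geometry, SNR and search grid below are illustrative.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Conventional 1-D MUSIC for a uniform linear array (element
    spacing d wavelengths); returns the spectrum's peak angle in degrees."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, : X.shape[0] - n_sources]        # noise subspace
    idx = np.arange(X.shape[0])
    best, best_p = 0.0, -np.inf
    for th in np.arange(-90, 90.25, 0.25):
        a = np.exp(2j * np.pi * d * idx * np.sin(np.radians(th)))
        p = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2   # MUSIC spectrum
        if p > best_p:
            best, best_p = th, p
    return best

rng = np.random.default_rng(8)
m, snap = 8, 200
theta = 20.0                                   # true DOA in degrees
a = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta)))
s = rng.standard_normal(snap) + 1j * rng.standard_normal(snap)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((m, snap))
                            + 1j * rng.standard_normal((m, snap)))
doa = music_doa(X, 1)
```

At this SNR the spectrum peak falls within the 0.25° search grid of the true direction.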
Geomagnetic matching navigation algorithm based on robust estimation
Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan
2017-08-01
The outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and the principle of its robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with a Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is then converted into the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.
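The robust-estimation idea — a weight function that downweights large residuals — can be sketched with a Huber-type iteratively reweighted mean. The weight function and tuning constant below are standard textbook choices, not necessarily those designed in the paper.

```python
import numpy as np

def robust_mean(z, c=1.345, iters=50):
    """Iteratively reweighted mean with a Huber weight function:
    residuals beyond c robust-scale units are downweighted, so
    outliers barely move the estimate."""
    mu = np.median(z)
    for _ in range(iters):
        r = z - mu
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weights
        mu = np.sum(w * z) / np.sum(w)
    return mu

# Five consistent field readings plus one gross outlier (values in nT)
z = np.array([49.8, 50.1, 50.0, 49.9, 50.2, 450.0])
mu = robust_mean(z)
```

The robust mean stays near 50 nT, while the ordinary mean is dragged above 100 nT by the single outlier.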
A Novel DOA Estimation Algorithm Using Array Rotation Technique
Directory of Open Access Journals (Sweden)
Xiaoyu Lan
2014-03-01
Full Text Available The performance of traditional direction of arrival (DOA) estimation algorithms based on a uniform circular array (UCA) is constrained by the array aperture. Furthermore, the array requires more antenna elements than targets, which increases the size and weight of the device and causes higher energy loss. To address these issues, a novel low-energy algorithm utilizing array baseline rotation for multiple target estimation is proposed. By rotating two elements and setting a fixed time delay, an even number of element positions is obtained to form a virtual UCA. The received signal data are then sampled at multiple positions, which greatly improves array element utilization. 2D-DOA estimation for the rotated array is accomplished via the multiple signal classification (MUSIC) algorithm. Finally, the Cramer-Rao bound (CRB) is derived, and simulation results verify the effectiveness of the proposed algorithm, with high resolution and estimation accuracy. Moreover, because of the significant reduction in the number of array elements, the antenna array system is much simpler and less complex than a traditional array.
An Application of Data Mining Algorithms for Shipbuilding Cost Estimation
Kaluzny, B.L.; Barbici, S.; Berg, G.; Chiomento, R.; Derpanis,D.; Jonsson, U.; Shaw, R.H.A.D.; Smit, M.C.; Ramaroson, F.
2011-01-01
This article presents a novel application of known data mining algorithms to the problem of estimating the cost of ship development and construction. The work is a product of North Atlantic Treaty Organization Research and Technology Organization Systems Analysis and Studies 076 Task Group “NATO
Directory of Open Access Journals (Sweden)
V. Jayaraj
2010-08-01
Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise, and mixed (impulse and Gaussian) noise from images and videos, with edge and fine-detail preservation. The algorithm includes detection of corrupted pixels and estimation of the values used to replace them. Its main advantage is that an appropriate filter is selected for replacing each corrupted pixel based on an estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine-detail preservation even at high mixed-noise density. Both spatial and temporal filtering are performed to remove noise within the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from a visual point of view and in terms of Peak Signal-to-Noise Ratio, Mean Square Error, and Image Enhancement Factor.
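A minimal sketch of the decision-based idea: flag only pixels at the extreme grey levels as likely impulses and replace just those with a local median, leaving uncorrupted pixels untouched. This is a strong simplification of the record's adaptive filter (no variance-based filter selection, no temporal step), run on hypothetical image data.

```python
import numpy as np

def decision_based_median(img, win=3):
    """Replace only suspected impulse pixels (values 0 or 255) with the
    median of their neighbourhood; other pixels are left untouched."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if img[i, j] in (0, 255):                 # impulse detection step
                patch = padded[i:i + win, j:j + win]
                out[i, j] = np.median(patch)
    return out

rng = np.random.default_rng(2)
clean = np.full((32, 32), 128, dtype=np.uint8)        # flat test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1                  # 10% salt-and-pepper
noisy[mask] = rng.choice([0, 255], size=mask.sum())
restored = decision_based_median(noisy)
print(np.abs(restored.astype(int) - clean.astype(int)).mean())
```

Because the filter only touches flagged pixels, clean regions pass through unchanged, which is what gives decision-based filters their detail preservation.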
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
International Nuclear Information System (INIS)
Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.
2013-01-01
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established.Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations.Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Head pose estimation algorithm based on deep learning
Cao, Yuanming; Liu, Yijun
2017-05-01
Head pose estimation has been widely used in artificial intelligence, pattern recognition, and intelligent human-computer interaction. A good head pose estimation algorithm should deal robustly with light, noise, identity, occlusion, and other factors, but improving the accuracy and robustness of pose estimation remains a major challenge in computer vision. A method based on deep learning for pose estimation is presented. With its strong learning ability, a deep network can extract high-level features from the input image through a series of non-linear operations and then classify the image using the extracted features. Such features differ markedly across poses while remaining robust to light, identity, occlusion, and other factors. The proposed head pose estimation method is evaluated on the CAS-PEAL data set. Experimental results show that this method effectively improves the accuracy of pose estimation.
Improved Variable Window Kernel Estimates of Probability Densities
Hall, Peter; Hu, Tien Chung; Marron, J. S.
1995-01-01
Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
On Improving Density Estimators which are not Bona Fide Functions
Gajek, Leslaw
1986-01-01
In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$ all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...
Density estimates of monarch butterflies overwintering in central Mexico
Directory of Open Access Journals (Sweden)
Wayne E. Thogmartin
2017-04-01
Full Text Available Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
Density estimates of monarch butterflies overwintering in central Mexico
Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena
2017-01-01
Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
Network Kernel Density Estimation for the Analysis of Facility POI Hotspots
Directory of Open Access Journals (Sweden)
YU Wenhao
2015-12-01
Full Text Available The distribution of urban facility POIs (points of interest) usually forms clusters (i.e. "hotspots") in urban geographic space. Most existing methods detect such hotspots with spatial density estimation based on Euclidean distance, ignoring the fact that the service function and interrelation of urban facilities operate along network path distance rather than Euclidean distance. With these methods, it is difficult to delimit the shape and size of a hotspot exactly and objectively. Therefore, this research adopts kernel density estimation based on network distance to compute the hotspot density and proposes a simple and efficient algorithm. The algorithm extends the 2D dilation operator to a 1D morphological operator, thus computing the density of each network unit. Evaluation experiments suggest that the algorithm is more efficient and scalable than existing algorithms. A case study on real POI data shows that the extracted hotspots can highlight the spatial characteristics of urban functions along traffic routes, providing valuable spatial knowledge and information services for applications in regional planning, navigation, and geographic information query.
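The network-distance idea can be sketched minimally: replace Euclidean distance in the kernel sum with shortest-path distance on a tiny toy road graph (the paper's 1D morphological operator is not reproduced). The graph, edge lengths, and event locations below are all hypothetical.

```python
import numpy as np

def shortest_paths(w):
    """Floyd-Warshall all-pairs shortest-path distances."""
    d = w.copy()
    n = d.shape[0]
    for k in range(n):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def network_kde(dist, events, h=1.0):
    """Kernel density at every node, summing Gaussian kernels over
    network (shortest-path) distance to each event node."""
    dens = np.zeros(dist.shape[0])
    for e in events:
        dens += np.exp(-0.5 * (dist[:, e] / h) ** 2)
    return dens / (h * np.sqrt(2 * np.pi))

INF = 1e9
# 5-node toy road network: adjacency matrix of edge lengths
w = np.full((5, 5), INF)
np.fill_diagonal(w, 0.0)
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (0, 4, 10.0)]
for i, j, length in edges:
    w[i, j] = w[j, i] = length
dist = shortest_paths(w)
dens = network_kde(dist, events=[1, 2])   # POIs located at nodes 1 and 2
print(np.round(dens, 3))
```

Nodes 1 and 2 carry the highest density, and node 4 (far away along the network even though the direct 0-4 edge exists) scores low.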
Estimating the Partition Function Zeros by Using the Wang-Landau Monte Carlo Algorithm
Energy Technology Data Exchange (ETDEWEB)
Kim, Seung-Yeon [Korea National University of Transportation, Chungju (Korea, Republic of)
2017-03-15
The concept of the partition function zeros is one of the most efficient methods for investigating the phase transitions and the critical phenomena in various physical systems. Estimating the partition function zeros requires information on the density of states Ω(E) as a function of the energy E. Currently, the Wang-Landau Monte Carlo algorithm is one of the best methods for calculating Ω(E). The partition function zeros in the complex temperature plane of the Ising model on an L × L square lattice (L = 10 ∼ 80) with a periodic boundary condition have been estimated by using the Wang-Landau Monte Carlo algorithm. The efficiency of the Wang-Landau Monte Carlo algorithm and the accuracies of the partition function zeros have been evaluated for three different, 5%, 10%, and 20%, flatness criteria for the histogram H(E).
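A minimal Wang-Landau sketch for a small 2D Ising lattice, illustrating the random walk in energy, the flatness criterion, and the halving of the modification factor ln f. The lattice size, flatness threshold, and stopping value are chosen for a quick demonstration, not the production settings of the study.

```python
import numpy as np

def wang_landau(L=4, flatness=0.8, lnf_final=1e-2, seed=3):
    """Wang-Landau estimate of ln Omega(E) for the 2D Ising model
    (L x L, periodic boundaries). Returns {E: ln g(E)} up to a constant."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    E = -int(np.sum(spins * np.roll(spins, 1, 0))
             + np.sum(spins * np.roll(spins, 1, 1)))
    lng, hist = {}, {}
    lnf = 1.0
    for _ in range(120):                       # safety cap on sweeps
        if lnf <= lnf_final:
            break
        for _ in range(10000):
            i, j = rng.integers(L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            Enew = E + 2 * spins[i, j] * nb    # local energy change
            # accept with probability g(E)/g(Enew)
            if lng.get(E, 0.0) - lng.get(Enew, 0.0) > np.log(rng.random()):
                spins[i, j] *= -1
                E = Enew
            lng[E] = lng.get(E, 0.0) + lnf
            hist[E] = hist.get(E, 0) + 1
        h = np.array(list(hist.values()))
        if h.min() > flatness * h.mean():      # histogram "flat enough"
            hist = {k: 0 for k in hist}
            lnf /= 2.0                         # refine modification factor
    return lng

lng = wang_landau()
print(len(lng))
```

The most degenerate level E = 0 ends up with the largest estimated ln g(E), as expected for the Ising density of states.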
Optimal Bandwidth Selection for Kernel Density Functionals Estimation
Directory of Open Access Journals (Sweden)
Su Chen
2015-01-01
Full Text Available The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to the density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that tests for equality of scale and location can be transformed into comparisons of these density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE by minimizing the mean square error (MSE) of the estimator. Two practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal-scale bandwidth selection (the "rule of thumb") and direct plug-in bandwidth selection. Simulation studies show that the proposed bandwidth selection methods are superior to existing density estimation bandwidth selectors for estimating density functionals.
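As a simpler analogue of the normal-scale selector mentioned above, here is the classical rule-of-thumb bandwidth for an ordinary Gaussian-kernel KDE (standard density estimation, not the density-functional setting of the paper). The data are synthetic.

```python
import numpy as np

def silverman_bandwidth(x):
    """Normal-scale ('rule of thumb') bandwidth for a Gaussian-kernel KDE:
    h = 0.9 * min(std, IQR/1.34) * n^(-1/5)."""
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    sigma = min(np.std(x, ddof=1), iqr / 1.34)
    return 0.9 * sigma * n ** (-0.2)

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(x)
grid = np.linspace(-4, 4, 161)
f = kde(x, grid, h)
print(round(h, 3), round(f[80], 3))    # f[80] is the estimate at x = 0
```

For standard normal data the estimate at 0 should be close to the true value 1/sqrt(2*pi) ≈ 0.399.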
Research reactor loading pattern optimization using estimation of distribution algorithms
Energy Technology Data Exchange (ETDEWEB)
Jiang, S. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Ziver, K. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); AMCG Group, RM Consultants, Abingdon (United Kingdom); Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Franklin, S. J.; Phillips, H. J. [Imperial College, Reactor Centre, Silwood Park, Buckhurst Road, Ascot, Berkshire, SL5 7TE (United Kingdom)
2006-07-01
A new evolutionary search based approach for solving the nuclear reactor loading pattern optimization problems is presented based on the Estimation of Distribution Algorithms. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K{sub eff}) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence together with some problem-dependent information based on the 'stand-alone' K{sub eff} with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
Research reactor loading pattern optimization using estimation of distribution algorithms
International Nuclear Information System (INIS)
Jiang, S.; Ziver, K.; Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H.; Franklin, S. J.; Phillips, H. J.
2006-01-01
A new evolutionary search based approach for solving the nuclear reactor loading pattern optimization problems is presented based on the Estimation of Distribution Algorithms. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence together with some problem-dependent information based on the 'stand-alone' K_eff with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
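The two records above describe an EDA applied to reactor loading patterns; the core EDA loop can be sketched with the simplest member of the family, UMDA, on a toy OneMax objective (the reactor model itself is not reproduced, and the elitism here is a plain best-so-far record, not the paper's guided strategy).

```python
import numpy as np

def umda(fitness, n_bits=30, pop=100, sel=30, gens=60, seed=5):
    """Univariate Marginal Distribution Algorithm: estimate independent
    bit probabilities from the selected individuals, then resample."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                 # initial bit probabilities
    best, best_fit = None, -np.inf
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int)
        f = np.array([fitness(x) for x in X])
        order = np.argsort(f)[::-1]
        if f[order[0]] > best_fit:           # keep the best ever seen
            best, best_fit = X[order[0]].copy(), f[order[0]]
        elite = X[order[:sel]]
        p = elite.mean(axis=0)               # estimate the distribution
        p = np.clip(p, 0.05, 0.95)           # keep probabilities off 0/1
    return best, best_fit

best, best_fit = umda(lambda x: x.sum())     # OneMax: count the ones
print(best_fit)
```

Replacing the OneMax objective with a K_eff evaluation and the bit string with a loading-pattern encoding gives the shape of the approach in the records.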
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this
Probability Density Estimation Using Neural Networks in Monte Carlo Calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo
2008-01-01
The Monte Carlo neutronics analysis requires the capability to estimate tally distributions such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as the estimation of a probability density function from an observation set. We apply the neural network based density estimation method to an observation and sampling weight set produced by the Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and an absorption rate's gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)
Parameter Estimation of Damped Compound Pendulum Using Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to identify the parameters of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using both the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean square error (MSE). Analysis showed an MSE of 0.0026 for LS and 3.6601×10-5 for DE. Based on the results obtained, DE has a lower MSE than the LS method.
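A minimal DE/rand/1/bin sketch fitting two parameters of a toy exponential-decay model by minimizing the MSE, mirroring the LS-versus-DE comparison in spirit; the model, bounds, and settings are hypothetical, not the pendulum system of the record.

```python
import numpy as np

def differential_evolution(loss, bounds, pop=30, F=0.7, CR=0.9,
                           gens=200, seed=6):
    """DE/rand/1/bin: mutate with a scaled difference vector, binomial
    crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = lo + rng.random((pop, dim)) * (hi - lo)
    f = np.array([loss(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([k for k in range(pop) if k != i], 3,
                                 replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)   # mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True                  # keep >= 1 gene
            trial = np.where(mask, v, X[i])                 # crossover
            ft = loss(trial)
            if ft <= f[i]:                                  # selection
                X[i], f[i] = trial, ft
    j = np.argmin(f)
    return X[j], f[j]

# toy exponential-decay model y = a*exp(-b*t), true (a, b) = (2.0, 0.5)
t = np.linspace(0, 10, 50)
y = 2.0 * np.exp(-0.5 * t)
mse = lambda p: np.mean((p[0] * np.exp(-p[1] * t) - y) ** 2)
p, err = differential_evolution(mse, [(0, 5), (0, 2)])
print(np.round(p, 3))
```

Swapping the MSE objective for the ARX residual of the record gives the same estimation loop.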
A Pulse Rate Estimation Algorithm Using PPG and Smartphone Camera.
Siddiqui, Sarah Ali; Zhang, Yuan; Feng, Zhiquan; Kos, Anton
2016-05-01
The ubiquitous use of and advancement in built-in smartphone sensors, together with developments in big-data processing, have been beneficial in several fields including healthcare. Among basic vitals monitoring, pulse rate monitoring is the most important healthcare necessity. A multimedia video stream acquired by the built-in smartphone camera can be used to estimate it. In this paper, an algorithm that uses only the smartphone camera as a sensor to estimate pulse rate from PhotoPlethysmoGraph (PPG) signals is proposed. The results obtained by the proposed algorithm are compared with the actual pulse rate, and the maximum error found is 3 beats per minute. The standard deviation of the percentage error and percentage accuracy is 0.68%, whereas the average percentage error and percentage accuracy are 1.98% and 98.02%, respectively.
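The core of such an approach is that the dominant spectral peak of the camera's intensity signal within the physiologic band gives the pulse rate. A sketch on a synthetic PPG trace (not the paper's pipeline; the frame rate, duration, and noise level are illustrative):

```python
import numpy as np

def pulse_rate_bpm(signal, fs):
    """Estimate pulse rate as the dominant spectral peak in the
    physiologic band 0.7-4 Hz (42-240 beats per minute)."""
    sig = signal - signal.mean()               # remove the DC component
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak

fs = 30.0                                      # typical camera frame rate
t = np.arange(0, 20, 1.0 / fs)                 # 20 s of video
rng = np.random.default_rng(7)
ppg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size)
print(round(pulse_rate_bpm(ppg, fs), 1))       # true rate is 72 bpm
```

In a real pipeline the input signal would be the frame-by-frame mean of one colour channel over the fingertip region rather than a synthetic sinusoid.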
Improved quantum backtracking algorithms using effective resistance estimates
Jarret, Michael; Wan, Kianna
2018-02-01
We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of O ˜(√{T Rmax }) for finding a single marked vertex and O ˜(k √{T Rmax }) for finding all k marked vertices, where T is an upper bound on the tree size and Rmax is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure in both the case of finding one and the case of finding multiple marked vertices in an arbitrary tree.
Adaptive algorithm for mobile user positioning based on environment estimation
Directory of Open Access Journals (Sweden)
Grujović Darko
2014-01-01
Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent, low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter, and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology independent and can be applied in UMTS (Universal Mobile Telecommunication System) and LTE (Long-Term Evolution) networks as well.
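A minimal sketch of the RSS idea: invert a log-distance path-loss model to obtain ranges to known base stations, then fix the position by linearized least squares. The path-loss exponent, reference power, and anchor layout are hypothetical, and the record's environment estimation step is not modeled.

```python
import numpy as np

def rss_to_distance(rss_dbm, p0=-40.0, n=3.0):
    """Log-distance path-loss model: RSS = P0 - 10 n log10(d)."""
    return 10 ** ((p0 - rss_dbm) / (10 * n))

def trilaterate(anchors, d):
    """Linearized least-squares fix: subtracting the last range equation
    cancels the quadratic term x^2 + y^2."""
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 60.0])
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = -40.0 - 30.0 * np.log10(d_true)     # noiseless RSS for clarity
d_est = rss_to_distance(rss)
pos = trilaterate(anchors, d_est)
print(np.round(pos, 1))
```

With noisy real RSS, the same least-squares fix applies; the environment estimation in the record would effectively adapt `p0` and `n` per environment.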
Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.
Smith, J E
2012-01-01
Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, yielding a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm(-2), ranging from 0.00 to 67.35 t·hm(-2). This implies that spectral mixture analysis provides great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
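The constrained linear spectral mixture analysis step can be sketched for a single pixel: least-squares unmixing with the sum-to-one constraint on the fractions, solved via the KKT system of the constrained problem. The endmember spectra below are synthetic, not MODIS data, and the non-negativity constraint of a fully constrained unmixing is omitted for brevity.

```python
import numpy as np

def unmix_sum_to_one(M, x):
    """Linear spectral unmixing with the sum-to-one constraint:
    min ||M f - x||^2 subject to sum(f) = 1, via the KKT system."""
    n = M.shape[1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = M.T @ M
    A[:n, n] = 1.0            # constraint gradient
    A[n, :n] = 1.0            # sum-to-one row
    b = np.concatenate([M.T @ x, [1.0]])
    return np.linalg.solve(A, b)[:n]

rng = np.random.default_rng(8)
# three synthetic endmember spectra (e.g. forest, crop, water), 6 bands
M = rng.random((6, 3))
f_true = np.array([0.6, 0.3, 0.1])            # true fractions in the pixel
x = M @ f_true + 0.001 * rng.normal(size=6)   # mixed pixel + slight noise
f = unmix_sum_to_one(M, x)
print(np.round(f, 2), round(float(f.sum()), 6))
```

The recovered fractions sum to one by construction and approximate the true mixture despite the noise.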
Gradient-based stochastic estimation of the density matrix
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)ij decay rapidly with distance rij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/2d), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
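For a small dense system, the decay being discussed can be shown directly by forming f(H) through exact eigendecomposition of a 1D tight-binding chain at finite temperature (the gradient-based probing estimator itself is not reproduced; the chain length and inverse temperature are illustrative).

```python
import numpy as np

def density_matrix(H, beta, mu=0.0):
    """Fermi function of H via exact eigendecomposition:
    f(H) = [1 + exp(beta (H - mu))]^(-1)."""
    w, V = np.linalg.eigh(H)
    occ = 1.0 / (1.0 + np.exp(beta * (w - mu)))
    return (V * occ) @ V.T

n = 60
H = np.zeros((n, n))
for i in range(n - 1):                 # 1D tight-binding chain, hopping -1
    H[i, i + 1] = H[i + 1, i] = -1.0
rho = density_matrix(H, beta=5.0)      # finite temperature, half filling
row = np.abs(rho[n // 2])              # |f(H)_{ij}| from the mid-chain site
print(round(float(row[n // 2]), 3),
      float(row[n // 2 + 5]) > float(row[n // 2 + 15]))
```

At half filling the diagonal element is 1/2 by particle-hole symmetry, and the off-diagonal magnitudes fall off with distance, consistent with the finite-temperature exponential decay described in the abstract.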
Françoise Benz
2004-01-01
ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on natural annealing processes or Evolutionary Computation, based on biological evolution processes. Geneti...
Françoise Benz
2004-01-01
ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on nat...
The PARAFAC-MUSIC Algorithm for DOA Estimation with Doppler Frequency in a MIMO Radar System
Directory of Open Access Journals (Sweden)
Nan Wang
2014-01-01
The PARAFAC-MUSIC algorithm is proposed in this paper to estimate the direction-of-arrival (DOA) of targets with Doppler frequency in a monostatic MIMO radar system. The PARAFAC (parallel factor) algorithm is first used to estimate the Doppler frequency; after compensation of the Doppler frequency, the MUSIC (multiple signal classification) algorithm is applied to estimate the DOA. Through these two steps, the DOAs of moving targets can be estimated successfully. Simulation results show that the proposed PARAFAC-MUSIC algorithm achieves higher DOA-estimation accuracy than either the PARAFAC algorithm or the MUSIC algorithm alone.
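As a toy illustration of the MUSIC step described above (the array geometry, source angles and noise level here are invented for the sketch, not taken from the paper), a minimal MUSIC pseudospectrum for a uniform linear array can be written in a few lines of Python:

```python
import numpy as np

def steering(theta_deg, m, d=0.5):
    # Steering vector of an m-element ULA, element spacing d in wavelengths
    k = np.arange(m)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def music_spectrum(R, n_src, scan_deg):
    # The smallest eigenvectors of the covariance span the noise subspace
    _, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : R.shape[0] - n_src]
    spec = []
    for th in scan_deg:
        a = steering(th, R.shape[0])
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.asarray(spec)

# Toy scene: 8-element ULA, two uncorrelated unit-power sources
m, doas = 8, [-20.0, 35.0]
A = np.column_stack([steering(t, m) for t in doas])
R = A @ A.conj().T + 0.01 * np.eye(m)     # ideal covariance + weak noise
scan = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(R, len(doas), scan)
est = np.sort(scan[np.argsort(spec)[-2:]])
print(est)                                # estimated DOAs in degrees
```

The two largest peaks of the pseudospectrum recover the simulated arrival angles; in the paper, the PARAFAC stage precedes this step to remove the Doppler terms.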
A new approach for estimating the density of liquids.
Sakagami, T; Fuchizaki, K; Ohara, K
2016-10-05
We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Directory of Open Access Journals (Sweden)
Samir Saoudi
2008-07-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
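The role of J(f) in the bandwidth choice can be sketched as follows, assuming a Gaussian kernel and a normal-reference approximation for J(f). This reduces the plug-in rule to Silverman's rule of thumb, a simplification for illustration, not the authors' faster analytical procedure:

```python
import numpy as np

def plugin_bandwidth(x):
    # AMISE-optimal bandwidth for a Gaussian kernel:
    #   h = [ R(K) / (n * J(f)) ]^(1/5),  R(K) = 1/(2*sqrt(pi)),
    # where J(f) = Int f''(t)^2 dt.  J(f) is approximated analytically
    # under a normal reference: J = 3 / (8*sqrt(pi)*sigma^5).
    n, sigma = len(x), np.std(x, ddof=1)
    J = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)
    return (1.0 / (2.0 * np.sqrt(np.pi) * n * J)) ** 0.2

def kde(x, grid, h):
    # Gaussian kernel density estimate on a grid of evaluation points
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
h = plugin_bandwidth(x)        # reduces to Silverman's (4/(3n))^(1/5) * sigma
grid = np.linspace(-4.0, 4.0, 201)
f_hat = kde(x, grid, h)
mass = (f_hat * (grid[1] - grid[0])).sum()   # should integrate to about 1
print(round(h, 3), round(mass, 3))
```

A sharper analytical approximation of J(f), as in the paper, changes only `plugin_bandwidth`; the estimator itself is untouched.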
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using the ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment on simulated and real data sets studies the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
Density Estimation in Several Populations With Uncertain Population Membership
Ma, Yanyuan
2011-09-01
We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.
Directory of Open Access Journals (Sweden)
Wenjing Zhao
2018-01-01
The SGK (sequential generalization of K-means) dictionary-learning denoising algorithm offers fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm that combines SGK dictionary learning with principal component analysis (PCA) based noise estimation. First, the noise standard deviation of the image is estimated using the PCA noise-estimation algorithm; it is then used in the SGK dictionary-learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with the other three dictionary-learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise-estimation algorithms; (3) compared with the original SGK algorithm, the proposed algorithm achieves higher PSNR and better denoising performance.
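A rough sketch of the PCA noise-estimation idea (not the exact algorithm paired with SGK in the paper): the smallest eigenvalues of the covariance of image patches are dominated by noise, so their typical size estimates the noise variance. The test image and the signal-rank cutoff below are invented for illustration:

```python
import numpy as np

def pca_noise_std(img, patch=7, signal_rank=8, stride=3):
    # Collect overlapping patches as row vectors
    h, w = img.shape
    rows = [img[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch, stride)
            for j in range(0, w - patch, stride)]
    X = np.asarray(rows, dtype=float)
    X -= X.mean(axis=0)
    # Eigenvalues of the patch covariance, ascending; the largest few
    # carry the image content, the rest are dominated by noise.
    evals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    noise_evals = evals[: patch * patch - signal_rank]
    return np.sqrt(np.median(noise_evals))

rng = np.random.default_rng(1)
clean = 100.0 * np.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
noisy = clean + rng.normal(0.0, 5.0, clean.shape)   # true sigma = 5
sigma_est = pca_noise_std(noisy)
print(round(sigma_est, 2))
```

The estimated standard deviation would then be handed to the dictionary-learning stage in place of a user-supplied value.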
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Directory of Open Access Journals (Sweden)
Justine S Alexander
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge, as they occur at low densities and their ranges are wide. This paper describes the use of non-invasive data-collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human-activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters ≤ 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.
MicroTrack: an algorithm for concurrent projectome and microstructure estimation.
Sherbondy, Anthony J; Rowe, Matthew C; Alexander, Daniel C
2010-01-01
This paper presents MicroTrack, an algorithm that combines global tractography and direct microstructure estimation using diffusion-weighted imaging data. Previous work recovers connectivity via tractography independently of estimating microstructure features, such as the axon diameter distribution and density. However, the two estimates have great potential to inform one another, given the common assumption that microstructural features remain consistent along fibers. Here we provide a preliminary examination of this hypothesis. We adapt a global tractography algorithm to associate an axon diameter with each putative pathway and optimize both the set of pathways and their microstructural parameters to find the best fit of this holistic white-matter model to the MRI data. We demonstrate in simulation that, with a multi-shell HARDI acquisition, this approach not only improves estimates of microstructural parameters over voxel-by-voxel estimation, but also provides a solution to long-standing problems in tractography. In particular, a simple experiment demonstrates the resolution of the well-known ambiguity between crossing and kissing fibers. The results strongly motivate further development of this kind of algorithm for brain connectivity mapping.
Model parameters estimation and sensitivity by genetic algorithms
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca
2003-01-01
In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive of the best solutions found at each generation and then to analyze the evolution of the archive statistics over successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and only later those with little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also provides a qualitative ranking of their importance in contributing to the model output.
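The GA machinery described above (parent selection, crossover, replacement, mutation, plus an archive of the best solution per generation) can be sketched on a toy two-parameter estimation problem. The model y = a*exp(-b*t) and all settings below are illustrative, not the reactor model of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 40)
y_obs = 3.0 * np.exp(-0.7 * t)          # synthetic "measurements": a=3, b=0.7

def fitness(pop):
    # Negative sum of squared residuals: higher is better
    pred = pop[:, :1] * np.exp(-pop[:, 1:2] * t[None, :])
    return -((pred - y_obs) ** 2).sum(axis=1)

pop = rng.uniform([0.0, 0.0], [10.0, 5.0], size=(60, 2))
archive = []                             # best individual per generation
for gen in range(120):
    f = fitness(pop)
    archive.append(pop[np.argmax(f)].copy())
    # Tournament parent selection
    i, j = rng.integers(0, 60, (2, 60))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # Uniform crossover between paired parents, then Gaussian mutation
    mask = rng.random((60, 2)) < 0.5
    children = np.where(mask, parents, parents[::-1])
    children += rng.normal(0.0, 0.05, children.shape)
    # Elitist replacement: keep the best 60 of parents + children
    both = np.vstack([pop, children])
    pop = both[np.argsort(fitness(both))[-60:]]

best = pop[np.argmax(fitness(pop))]
print(best.round(2))                     # aim: recover a = 3.0, b = 0.7
```

Inspecting how early each column of `archive` stabilizes is the kind of by-product analysis the paper uses to rank parameter importance.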
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...
Estimate of energy density on CYCLOPS spatial filter pinhole structure
International Nuclear Information System (INIS)
Guch, S. Jr.
1974-01-01
The inclusion of a spatial filter between the B and C stages in CYCLOPS to reduce the effects of small-scale beam self-focusing is discussed. An estimate is made of the energy density to which the pinhole will be subjected, and the survivability of various pinhole materials and designs is discussed
State of the Art in Photon-Density Estimation
DEFF Research Database (Denmark)
Hachisuka, Toshiya; Jarosz, Wojciech; Georgiev, Iliyan
2013-01-01
scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...
State of the Art in Photon Density Estimation
DEFF Research Database (Denmark)
Hachisuka, Toshiya; Jarosz, Wojciech; Bouchard, Guillaume
2012-01-01
scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...
Estimation of larval density of Liriomyza sativae Blanchard (Diptera ...
African Journals Online (AJOL)
This study was conducted to develop sequential sampling plans to estimate larval density of Liriomyza sativae Blanchard (Diptera: Agromyzidae) at three precision levels in cucumber greenhouse. The within- greenhouse spatial patterns of larvae were aggregated. The slopes and intercepts of both Iwao's patchiness ...
Estimating forest canopy bulk density using six indirect methods
Robert E. Keane; Elizabeth D. Reinhardt; Joe Scott; Kathy Gray; James Reardon
2005-01-01
Canopy bulk density (CBD) is an important crown characteristic needed to predict crown fire spread, yet it is difficult to measure in the field. Presented here is a comprehensive research effort to evaluate six indirect sampling techniques for estimating CBD. As reference data, detailed crown fuel biomass measurements were taken on each tree within fixed-area plots...
Estimating Soil Bulk Density and Total Nitrogen from Catchment ...
African Journals Online (AJOL)
Even though data on soil bulk density (BD) and total nitrogen (TN) are essential for planning modern farming techniques, their availability is limited for many applications in the developing world. This study is designed to estimate BD and TN from soil properties, land-use systems, soil types and landforms in the ...
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
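The gain from combining data sources can be illustrated with a back-of-envelope calculation. The paper's joint model is a full spatial capture-recapture analysis, but simple inverse-variance weighting of the two published single-source posteriors already lands close to the reported combined estimate of 8.5 ± 1.95:

```python
# Single-source posterior means and SDs quoted in the abstract
photo = (12.02, 3.02)    # photographic capture-recapture
fecal = (6.65, 2.37)     # fecal DNA capture-recapture

w_p, w_f = 1.0 / photo[1] ** 2, 1.0 / fecal[1] ** 2   # inverse-variance weights
mean = (w_p * photo[0] + w_f * fecal[0]) / (w_p + w_f)
sd = (w_p + w_f) ** -0.5
print(round(mean, 2), round(sd, 2))   # close to the reported 8.5 +/- 1.95
```

The agreement is only qualitative: the joint model shares spatial information between data types rather than averaging two finished estimates.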
Corruption clubs: empirical evidence from kernel density estimates
Herzfeld, T.; Weiss, Ch.
2007-01-01
A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to
A Balanced Approach to Adaptive Probability Density Estimation
Directory of Open Access Journals (Sweden)
Julio A. Kovacs
2017-04-01
Our development of a Fast (Mutual) Information Matching (FIM) method for molecular dynamics time-series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially for very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search, which gives good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density-estimation problem in statistics. We have applied it to molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical domains such as bioinformatics, signal processing and econometrics.
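BADE optimizes the smoothing with its own balancing criterion; a generic nearest-neighbour adaptive ("balloon") estimator conveys the underlying idea of point-wise bandwidths for uneven samples. All settings below are illustrative:

```python
import numpy as np

def balloon_kde(x, grid, k=30):
    # Adaptive ("balloon") estimator: at each evaluation point the Gaussian
    # bandwidth is the distance to the k-th nearest sample, so smoothing
    # widens automatically where the data are sparse.
    d = np.abs(grid[:, None] - x[None, :])
    h = np.sort(d, axis=1)[:, k - 1] + 1e-12
    u = d / h[:, None]
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
# A very uneven sample: one dense and one sparse Gaussian component
x = np.concatenate([rng.normal(0.0, 0.2, 900), rng.normal(4.0, 1.0, 100)])
grid = np.linspace(-2.0, 8.0, 401)
f = balloon_kde(x, grid)
dense_peak = f[np.argmin(np.abs(grid))]         # density near the dense mode
sparse_tail = f[np.argmin(np.abs(grid - 4.0))]  # density near the sparse mode
print(round(dense_peak, 2), round(sparse_tail, 2))
```

A fixed-bandwidth estimator would either oversmooth the narrow mode or undersmooth the sparse one; the point-wise bandwidth resolves both.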
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
Simplified large African carnivore density estimators from track indices
Directory of Open Access Journals (Sweden)
Christiaan W. Winterbach
2016-12-01
Background: The range, population size and trend of large carnivores are important parameters for assessing their status globally and for planning conservation strategies. Linear models can be used to assess the population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether linear regression through the origin is more appropriate than linear regression with intercept for modelling large African carnivore densities against track indices. Methods: We fitted both a simple linear regression with intercept and a simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the standard error of estimate, the mean squares residual and the Akaike Information Criterion to evaluate the models. Results: The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and all six models through the origin were significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All comparisons showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the standard error of estimate and the mean square residuals. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion: Our results show that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
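The model comparison above can be reproduced on synthetic data. The slope 3.26 is borrowed from the truncated formula in the abstract, while the track-index values and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
track = rng.uniform(0.5, 5.0, 30)                  # track index (x)
dens = 3.26 * track + rng.normal(0.0, 0.8, 30)     # relation through the origin

# Linear model with intercept: y = a*x + b
X = np.column_stack([track, np.ones_like(track)])
(a, b), *_ = np.linalg.lstsq(X, dens, rcond=None)

# Linear model through the origin: y = a0*x
(a0,), *_ = np.linalg.lstsq(track[:, None], dens, rcond=None)

print(round(a, 2), round(b, 2), round(a0, 2))
```

When the true relation passes through the origin, the fitted intercept b hovers around zero, mirroring the paper's finding that β = 0 cannot be rejected and the through-origin model is preferable.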
Alkan, Hilal; Balkaya, Çağlayan
2018-02-01
We present an efficient inversion tool for parameter estimation from horizontal loop electromagnetic (HLEM) data using the Differential Search Algorithm (DSA), a recently proposed swarm-intelligence-based metaheuristic. The depth, dip, and origin of a thin subsurface conductor causing the anomaly are the parameters estimated by the HLEM method, commonly known as Slingram. The applicability of the developed scheme was first tested on two synthetically generated anomalies, with and without noise content. Two control parameters affecting the algorithm's convergence were tuned for these anomalies, which include one and two conductive bodies, respectively. The tuned control parameters yielded better statistical results than the parameter couples widely used in DSA applications. Two field anomalies measured over a dipping graphitic shale in Northern Australia were then considered, and the algorithm provided depth estimates in good agreement with those of previous studies and with drilling information. Furthermore, the efficiency and reliability of the results were investigated via probability density functions. Considering the results obtained, we conclude that DSA, characterized by a simple algorithmic structure, is an efficient and promising metaheuristic for other relatively low-dimensional geophysical inverse problems. Finally, researchers familiar with the developed scheme, which is easy to use and flexible, can readily modify and extend it for their own optimization problems.
Evaluating lidar point densities for effective estimation of aboveground biomass
Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.
2016-01-01
The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point density range of 3DEP for national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those from even the lowest lidar point density of 0.5 point/m2; therefore, Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower-density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than from Landsat observations alone.
Estimating Traffic Accidents in Turkey Using Differential Evolution Algorithm
Akgüngör, Ali Payıdar; Korkmaz, Ersin
2017-06-01
Estimating traffic accidents plays a vital role in applying road-safety procedures. This study proposes Differential Evolution Algorithm (DEA) models to estimate the number of accidents in Turkey. In the model development, population (P) and the number of vehicles (N) are selected as model parameters. Three model forms (linear, exponential and semi-quadratic) are developed using DEA with data covering 2000 to 2014. The developed models are statistically compared to select the best-fit model. The results show that the linear model form is suitable for estimating the number of accidents: its statistics are better than those of the other forms in terms of the performance criteria, the Mean Absolute Percentage Error (MAPE) and the Root Mean Square Error (RMSE). To investigate the performance of the linear DE model for future estimations, a ten-year period from 2015 to 2024 is considered. The results obtained for future estimations reveal the suitability of the DE method for road-safety applications.
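A minimal DE/rand/1/bin implementation shows the mechanics (mutation, binomial crossover, greedy selection). The population and vehicle figures and the coefficients below are made up for illustration; they are not the Turkish accident data:

```python
import numpy as np

def de_fit(obj, bounds, pop_size=20, gens=200, F=0.8, CR=0.9, seed=5):
    # Classic DE/rand/1/bin: mutation, binomial crossover, greedy selection
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    cost = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(lo)) < CR
            trial = np.where(cross, mutant, pop[i])
            tc = obj(trial)
            if tc < cost[i]:                # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    return pop[np.argmin(cost)]

# Made-up yearly figures: population P and vehicles N (millions), and a
# noise-free linear accident model with coefficients 20 and 50.
P = np.array([30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
N = np.array([5.0, 12.0, 7.0, 15.0, 9.0, 20.0])
acc = 20.0 * P + 50.0 * N

sse = lambda th: ((th[0] * P + th[1] * N - acc) ** 2).sum()
best = de_fit(sse, [(0.0, 100.0), (0.0, 100.0)])
print(best.round(2))
```

Swapping `sse` for an exponential or semi-quadratic residual function reproduces the paper's comparison of model forms without touching the optimizer.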
Estimating the effect of urban density on fuel demand
Energy Technology Data Exchange (ETDEWEB)
Karathodorou, Niovi; Graham, Daniel J. [Imperial College London, London, SW7 2AZ (United Kingdom); Noland, Robert B. [Rutgers University, New Brunswick, NJ 08901 (United States)
2010-01-15
Much of the empirical literature on fuel demand presents estimates derived from national data which do not permit any explicit consideration of the spatial structure of the economy. Intuitively we would expect the degree of spatial concentration of activities to have a strong link with transport fuel consumption. The present paper addresses this theme by estimating a fuel demand model for urban areas to provide a direct estimate of the elasticity of demand with respect to urban density. Fuel demand per capita is decomposed into car stock per capita, fuel consumption per kilometre and annual distance driven per car per year. Urban density is found to affect fuel consumption, mostly through variations in the car stock and in the distances travelled, rather than through fuel consumption per kilometre. (author)
Using Genetic Algorithm to Estimate Hydraulic Parameters of Unconfined Aquifers
Directory of Open Access Journals (Sweden)
Asghar Asghari Moghaddam
2009-03-01
Nowadays, optimization techniques such as Genetic Algorithms (GA) have attracted wide attention among scientists for solving complicated engineering problems. In this article, pumping-test data are used to assess the efficiency of GA in estimating unconfined aquifer parameters, and a sensitivity analysis is carried out to propose an optimal arrangement of GA. For this purpose, hydraulic parameters of three sets of pumping-test data are calculated by GA and compared with the results of graphical methods. The results indicate that the GA technique is an efficient, reliable, and powerful method for estimating the hydraulic parameters of unconfined aquifers and, further, that in cases of deficient pumping-test data it performs better than graphical methods.
Coupling two iterative algorithms for density measurements by computerized tomography
International Nuclear Information System (INIS)
Silva, L.E.M.C.; Santos, C.A.C.; Borges, J.C.; Frenkel, A.D.B.; Rocha, G.M.
1986-01-01
This work studies the coupling of two iterative algorithms for density measurements by computerized tomography. Tomograms were obtained with an automated prototype, controlled by a microcomputer, designed and assembled in the Nuclear Instrumentation Laboratory at COPPE/UFRJ. Results show good performance of the tomographic system and demonstrate the validity of the adopted calculation method. (Author) [pt]
Simulating prescribed particle densities in the grand canonical ensemble using iterative algorithms.
Malasics, Attila; Gillespie, Dirk; Boda, Dezso
2008-03-28
We present two efficient iterative Monte Carlo algorithms in the grand canonical ensemble with which the chemical potentials corresponding to prescribed (targeted) partial densities can be determined. The first algorithm works by always using the targeted densities in the kT·log(ρᵢ) (ideal gas) terms and updating the excess chemical potentials from the previous iteration. The second algorithm extrapolates the chemical potentials in the next iteration from the results of the previous iteration using a first-order series expansion of the densities. The coefficients of the series, the derivatives of the densities with respect to the chemical potentials, are obtained from the simulations by fluctuation formulas. The convergence of this procedure is shown for the examples of a homogeneous Lennard-Jones mixture and a NaCl-CaCl₂ electrolyte mixture in the primitive model. The methods are quite robust under the conditions investigated. The first algorithm is less sensitive to initial conditions.
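The first algorithm can be sketched against a toy "simulation" with a known equation of state, mu = kT*ln(rho) + A*rho, standing in for a real grand canonical Monte Carlo run: keep the target density in the ideal-gas term and re-measure the excess chemical potential at each iteration. All constants below are invented:

```python
import numpy as np

KT = 1.0
A = 0.5    # toy excess term: mu_ex(rho) = A*rho

def simulate(mu):
    # Black-box "simulation": returns the density at chemical potential mu
    # for the toy equation of state mu = kT*ln(rho) + A*rho (by bisection).
    # A real GCMC run would return a sampled average density instead.
    lo, hi = 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if KT * np.log(mid) + A * mid < mu:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

target = 2.0
mu = KT * np.log(target)                 # ideal-gas starting guess
for _ in range(20):
    rho = simulate(mu)                   # "measure" the density
    mu_ex = mu - KT * np.log(rho)        # measured excess chemical potential
    mu = KT * np.log(target) + mu_ex     # ideal term always uses the target

print(round(simulate(mu), 4))            # converges to the target density
```

For this toy equation of state the iteration contracts geometrically toward the target density; the paper demonstrates the analogous behaviour for sampled GCMC densities.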
Dual-Layer Density Estimation for Multiple Object Instance Detection
Directory of Open Access Journals (Sweden)
Qiang Zhang
2016-01-01
This paper introduces a dual-layer density-estimation-based architecture for multiple object instance detection in robot inventory-management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key-point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of the density estimation. A cascade of filters is applied after feature-template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirically identified coefficient. Adaptive-threshold-based grid voting is applied to find all candidate object instances. Detection errors are eliminated using a final geometric verification in accordance with Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for the inventory-management application.
A density-independent algorithm for moisture content determination in sawdust, based on a one-port reflection measurement technique, is proposed for the first time. Performance of this algorithm is demonstrated through measurement of the dielectric properties of sawdust with an open-ended half-mode s...
Semiautomatic estimation of breast density with DM-Scan software.
Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R
2014-01-01
To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on the semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter- and intra-observer agreement between pairs of radiologists for the Boyd and BI-RADS® scales were calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreements with those obtained previously for visual inspection in the same set of images. For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS® scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS® scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. Copyright © 2012 SERAM. Published by Elsevier España. All rights reserved.
Covariance and correlation estimation in electron-density maps.
Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna
2012-03-01
Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, regardless of the correlation between the model and target structures. The aim is to verify whether the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.
Directory of Open Access Journals (Sweden)
Marco Lombardo
PURPOSE: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. METHODS: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degrees temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and the foveal center, and the manual checking of the cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. RESULTS: The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, the presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. CONCLUSIONS: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams.
Improving Frozen Precipitation Density Estimation in Land Surface Modeling
Sparrow, K.; Fall, G. M.
2017-12-01
The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
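The elevation-detrending step described above (a linear SLR vs. elevation fit, with the residuals passed on to kriging) can be sketched with a simple least-squares fit; the station values below are synthetic illustrations, not GHCN-D data.

```python
import numpy as np

# Least-squares detrending of SLR against station elevation, as in the
# refinement step described above. Station values are synthetic.
def detrend_slr(elevation_m, slr):
    """Fit slr ~ a + b*elevation and return (a, b, residuals)."""
    A = np.column_stack([np.ones_like(elevation_m), elevation_m])
    coef, *_ = np.linalg.lstsq(A, slr, rcond=None)
    residuals = slr - A @ coef
    return coef[0], coef[1], residuals

elev = np.array([100.0, 500.0, 1000.0, 1500.0, 2000.0])
slr = np.array([9.0, 10.0, 11.5, 12.5, 14.0])   # SLR tends to rise with elevation
a, b, res = detrend_slr(elev, slr)
```

The ordinary-least-squares property that residuals average to zero is what lets the subsequent kriging operate on an elevation-free signal.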
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km². A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
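The distance-dependent detection model with a previous-capture indicator can be sketched as below. A half-normal detection function is assumed here, and `p0`, `sigma` and the behavioral multiplier are illustrative values, not estimates from the study.

```python
import math

# Hedged sketch of a distance-dependent detection model: baseline probability
# decays half-normally with distance to the trap, and a trap-happiness term
# boosts it for previously captured animals (the positive behavioral response
# reported for baited sites). All parameter values are illustrative.
def detection_prob(d_km, p0=0.3, sigma=1.0, prev_capture=False, behavior=1.5):
    p = p0 * math.exp(-(d_km ** 2) / (2 * sigma ** 2))
    if prev_capture:               # attraction to baited, previously visited traps
        p = min(1.0, p * behavior)
    return p
```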
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
Fast Parabola Detection Using Estimation of Distribution Algorithms
Directory of Open Access Journals (Sweden)
Jose de Jesus Guerrero-Turrubiates
2017-01-01
This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications.
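The two core steps (solving a parabola through three boundary pixels, then scoring it against the image with an elementwise product) can be sketched as follows; the binary edge map and the rasterized scoring are simplifying assumptions of this sketch.

```python
import numpy as np

def parabola_through(p1, p2, p3):
    """Solve (a, b, c) of y = a*x^2 + b*x + c passing through three points."""
    xs = np.array([p1[0], p2[0], p3[0]], dtype=float)
    ys = np.array([p1[1], p2[1], p3[1]], dtype=float)
    A = np.column_stack([xs ** 2, xs, np.ones(3)])
    return np.linalg.solve(A, ys)

def fitness(coeffs, edge_map):
    """Hadamard-style score: rasterize the virtual parabola, multiply it
    elementwise with the edge map, and count the hits."""
    h, w = edge_map.shape
    virt = np.zeros_like(edge_map)
    a, b, c = coeffs
    for x in range(w):
        y = int(round(a * x * x + b * x + c))
        if 0 <= y < h:
            virt[y, x] = 1
    return int((virt * edge_map).sum())

# Toy edge map containing y = 0.1*x^2 on a 20x20 grid
edge = np.zeros((20, 20), dtype=int)
for x in range(20):
    y = int(round(0.1 * x * x))
    if y < 20:
        edge[y, x] = 1
co = parabola_through((0, 0), (5, 2.5), (10, 10))
```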
A modified estimation distribution algorithm based on extreme elitism.
Gao, Shujun; de Silva, Clarence W
2016-12-01
An existing estimation of distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few top best solutions in the evolution, advancing the EDA to form a primary evolution direction and obtain a fast convergence rate. Simultaneously, this selection can also keep the population diversity, helping the EDA avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from using this extreme elitism selection. In addition, the no-free-lunch theorem was applied in the analysis of the effect of this new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
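A minimal sketch of a univariate marginal Gaussian EDA with an elitism-weighted model update is shown below on the sphere benchmark. The linear weighting of the elite stands in for the paper's extreme elitism selection and is illustrative, not the exact published rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Univariate marginal Gaussian EDA: sample from per-dimension Gaussians,
# select an elite, and re-estimate mean and standard deviation with weights
# that emphasize the few top best solutions (illustrative elitism weighting).
def eda_minimize(f, dim=5, pop=80, top=20, iters=150):
    mu, sigma = np.full(dim, 3.0), np.full(dim, 2.0)
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))
        order = np.argsort([f(x) for x in X])
        elite = X[order[:top]]
        w = np.arange(top, 0, -1, dtype=float)   # best solutions weigh most
        w /= w.sum()
        mu = w @ elite
        sigma = np.maximum(np.sqrt(w @ (elite - mu) ** 2), 0.05)  # diversity floor
    return mu

best = eda_minimize(lambda x: float(np.sum(x ** 2)))
```

The standard-deviation floor plays the role of keeping population diversity so the model does not collapse prematurely.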
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimates of a response density, which also imply estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
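The stratified construction behind Latin hypercube sampling can be sketched independently of NESSUS: each of the n samples occupies a distinct one-of-n stratum in every dimension, which is what gives LHS better space coverage than plain Monte Carlo at the same sample count.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal Latin hypercube sampler on the unit hypercube: per dimension, the
# n strata are assigned to the n samples by a random permutation, and each
# sample sits at a uniformly random position inside its stratum.
def latin_hypercube(n, dim):
    u = rng.uniform(size=(n, dim))                      # position inside each stratum
    strata = np.array([rng.permutation(n) for _ in range(dim)]).T
    return (strata + u) / n                             # values in [0, 1)

samples = latin_hypercube(100, 3)
```

Mapping the columns through the inverse CDFs of the design variables would turn these unit-cube samples into inputs for a reliability run.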
Impelluso, Thomas J
2003-06-01
An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective strain based analysis to redistribute density. General re-distribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
An improved initialization center k-means clustering algorithm based on distance and density
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
The random choice of initial cluster centers in the k-means algorithm makes the clustering results sensitive to outlier samples and unstable across multiple runs. To address this, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and samples with both larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
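The initialization idea (prefer points that are locally dense and far from centers already chosen) can be sketched as below; the reciprocal mean distance used for density here is a simplification of the paper's weighted-average formulation.

```python
import numpy as np

# Center initialization preferring points that are dense (small mean distance
# to the other points) and far from the centers already chosen.
def init_centers(X, k):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = 1.0 / (D.mean(axis=1) + 1e-12)
    centers = [int(np.argmax(density))]          # densest point first
    for _ in range(k - 1):
        d_to_c = D[:, centers].min(axis=1)       # distance to nearest chosen center
        score = d_to_c * density                 # far away and locally dense
        score[centers] = -np.inf
        centers.append(int(np.argmax(score)))
    return X[centers]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(10.0, 0.5, (20, 2))])
c = init_centers(X, 2)
```

On two well-separated blobs this picks one center per blob, which is exactly the stability the paper is after.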
Crowd density estimation based on convolutional neural networks with mixed pooling
Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming
2017-09-01
Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines the conventional max pooling and average pooling methods. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and easily than other methods.
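Mixed pooling as described (a parameter interpolating between max and average pooling) can be sketched on a single 2D feature map; in the paper the coefficient is learned during training, whereas here it is a fixed argument of the sketch.

```python
import numpy as np

# Mixed pooling on a 2D feature map: alpha = 1 recovers max pooling,
# alpha = 0 recovers average pooling. Assumes the map dimensions are
# divisible by the pooling size.
def mixed_pool(fmap, size=2, alpha=0.5):
    h, w = fmap.shape
    out = np.empty((h // size, w // size))
    for i in range(0, h, size):
        for j in range(0, w, size):
            win = fmap[i:i+size, j:j+size]
            out[i // size, j // size] = alpha * win.max() + (1 - alpha) * win.mean()
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
```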
The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.
Carson, Cantwell G; Levine, Jonathan S
2016-09-01
The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Ambit determination method in estimating rice plant population density
Directory of Open Access Journals (Sweden)
Abu Bakar, B.,
2017-11-01
Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density is monitored to ensure that a correct crop management decision is taken. The conventional method of determining plant population is by manually counting the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time consuming, labour intensive and costly. An alternative fast estimating method was developed to overcome this issue. The method relies on measuring the outer circumference or ambit of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that square frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation between ambit and tiller count was obtained at both 50 DAS and 70 DAS. The model was then verified by taking 100 samples with the latching strap at 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.
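The calibration implied by the method (a fitted relation from ambit to tiller count) can be sketched with a simple linear regression; the numbers below are synthetic illustrations, not the MR219 field data.

```python
import numpy as np

# Calibrate a line from measured ambit (outer circumference) to manually
# counted tillers, then use it to estimate tillers from a strap measurement.
ambit_cm = np.array([30.0, 40.0, 50.0, 60.0, 70.0])
tillers = np.array([12.0, 18.0, 25.0, 31.0, 38.0])
slope, intercept = np.polyfit(ambit_cm, tillers, 1)

def estimate_tillers(ambit):
    return slope * ambit + intercept
```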
A density based algorithm to detect cavities and holes from planar points
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes over varying point set densities and distributions.
A projection and density estimation method for knowledge discovery.
Directory of Open Access Journals (Sweden)
Adam Stanski
A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It makes it possible to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
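The core trick, estimating a d-dimensional density through 1d-decompositions so that every computation stays in 1d-space, can be sketched in its simplest form as a product of per-coordinate kernel density estimates; Gaussian kernels with a fixed bandwidth are an assumption of this sketch, and the paper's decompositions are more flexible than a plain independence product.

```python
import numpy as np

# 1d Gaussian kernel density estimate evaluated at query points x.
def kde_1d(samples, x, bw=0.3):
    z = (x - samples[:, None]) / bw
    return np.exp(-0.5 * z ** 2).mean(axis=0) / (bw * np.sqrt(2 * np.pi))

# d-dimensional density as a product of 1d estimates, one per coordinate.
def product_density(data, point, bw=0.3):
    """data: (n, d) samples; point: length-d query location."""
    return float(np.prod([kde_1d(data[:, j], np.array([point[j]]), bw)[0]
                          for j in range(data.shape[1])]))

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2))
```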
An Improved Convolutional Neural Network on Crowd Density Estimation
Directory of Open Access Journals (Sweden)
Pan Shao-Yun
2016-01-01
In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with traditional texture features. The data calculated by the convolutional layer can be treated as a new kind of feature, so more useful information can be extracted from images by using different features. In the meantime, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real world applications.
Directory of Open Access Journals (Sweden)
Jiangang Liu
Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high dimensional and high noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make it challenging to interpret the modeling results. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of over-fitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of predictive power of individual transcripts in a relatively small number of iterations, (2) the top ranked transcripts tend to be functionally related to the phenotype they are intended to predict, (3) using only the most predictive top ranked transcripts greatly facilitates development of multiplex assays such as qRT-PCR as biomarkers, and (4) more importantly, we were able to demonstrate that a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from nonadverse effects of compounds in completely independent tests. Thus, we believe that the PPEA model effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses.
Statistical algorithm for automated signature analysis of power spectral density data
International Nuclear Information System (INIS)
Piety, K.R.
1977-01-01
A statistical algorithm has been developed and implemented on a minicomputer system for on-line surveillance applications. Power spectral density (PSD) measurements on process signals are the performance signatures that characterize the "health" of the monitored equipment. Statistical methods provide a quantitative basis for automating the detection of anomalous conditions. The surveillance algorithm has been tested on signals from neutron sensors, proximeter probes, and accelerometers to determine its potential for monitoring nuclear reactors and rotating machinery.
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only considering the variation of the length of the rotational semi-axis.
Modern optimization algorithms for fault location estimation in power systems
Directory of Open Access Journals (Sweden)
A. Sanad Ahmed
2017-10-01
This paper presents a fault location estimation approach for two-terminal transmission lines using the Teaching Learning Based Optimization (TLBO) technique and the Harmony Search (HS) technique. Previous methods such as the Genetic Algorithm (GA), Artificial Bee Colony (ABC), artificial neural networks (ANN) and cause & effect (C&E) are also discussed, along with the advantages and disadvantages of each. Initial data for the proposed techniques are the post-fault measured voltages and currents from both ends, along with the line parameters. This paper deals with several types of faults: L-L-L, L-L-L-G, L-L-G and L-G. Simulation of the model was performed in SIMULINK, with initial inputs extracted from SIMULINK to MATLAB, where the objective function specifies the fault location with very high accuracy and precision within a very short time. Future work on the benefits of using the Differential Learning TLBO (DLTLBO) is discussed as well.
Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes
Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman
2011-01-01
Low Density Parity Check (LDPC) codes approach Shannon-limit performance for binary fields and long code lengths. However, the performance of binary LDPC codes is degraded when the code word length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor is introduced in both the check node and the bit node of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam...
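The check-node half of such an optimized min-sum update can be sketched as below; the scaling factor plays the role of the paper's optimization factor, in the style of the widely used normalized min-sum correction, and its value here is illustrative.

```python
import numpy as np

# Check-node update for min-sum LDPC decoding with a scaling ("optimization")
# factor. For each outgoing edge, the magnitude is the minimum of the other
# incoming magnitudes, the sign is the product of the other incoming signs,
# and alpha < 1 damps min-sum's overestimate of the true LLR.
def check_node_update(llrs, alpha=0.8):
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.abs(others).min()
    return out
```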
Directory of Open Access Journals (Sweden)
Chuii Khim Chong
2012-06-01
This paper introduces an improved Differential Evolution algorithm (IDE) which aims at improving its performance in estimating the relevant parameters for metabolic pathway data, to simulate the glycolysis pathway of yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools in kinetic modeling and parameter estimation platforms. Many computation algorithms face obstacles due to noisy data and the difficulty of the system in estimating a myriad of parameters, and require longer computational time to estimate the relevant parameters. The proposed algorithm (IDE) in this paper is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is proven to be superior to the Genetic Algorithm (GA) and DE. The results of IDE from experiments show estimated optimal kinetic parameter values, shorter computation time and increased accuracy of simulated results compared with other estimation algorithms.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement.
Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates
National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of the prognostic system development. As capabilities of prediction algorithms evolve, research...
Hybrid fuzzy charged system search algorithm based state estimation in distribution networks
Directory of Open Access Journals (Sweden)
Sachidananda Prasad
2017-06-01
Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm based state estimation in radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints, for state estimation in distribution networks. A rule based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least squares (WLS) algorithm and particle swarm optimization (PSO) to establish the feasibility of the algorithm.
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms were used to estimate iris masks from iris images; however, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that the Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
Estimation of Engine Intake Air Mass Flow using a generic Speed-Density method
Directory of Open Access Journals (Sweden)
Vojtíšek Michal
2014-10-01
Full Text Available Measurement of real driving emissions (RDE) from internal combustion engines under real-world operation using portable, onboard monitoring systems (PEMS) is becoming an increasingly important tool aiding the assessment of the effects of new fuels and technologies on the environment and human health. Knowledge of the exhaust flow is one of the prerequisites for successful RDE measurement with PEMS. One of the simplest approaches to estimating the exhaust flow from virtually any engine is to compute it from the intake air flow, which is calculated from measured engine rpm and intake manifold charge pressure and temperature using a generic speed-density algorithm applicable to most contemporary four-cycle engines. In this work, a generic speed-density algorithm was compared against several reference methods on representative European production engines - a gasoline port-injected automobile engine, two turbocharged diesel automobile engines, and a heavy-duty turbocharged diesel engine. The overall results suggest that the uncertainty of the generic speed-density method is on the order of 10% throughout most of the engine operating range, increasing to tens of percent where high-volume exhaust gas recirculation is used. For non-EGR engines, such uncertainty is acceptable for many simpler and screening measurements, and may, where desired, be reduced by engine-specific calibration.
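The generic speed-density computation described above can be sketched as follows; the constant volumetric efficiency and the unit choices are illustrative assumptions, not values from the paper (real engines map volumetric efficiency against rpm and load):

```python
def intake_air_mass_flow(rpm, map_kpa, iat_k, displacement_l, vol_eff=0.85):
    """Estimate intake air mass flow [g/s] with a generic speed-density model.

    Assumes a four-stroke engine (one intake stroke per two revolutions)
    and the ideal gas law; vol_eff is a hypothetical constant here.
    """
    R_AIR = 287.05  # specific gas constant of dry air [J/(kg*K)]
    # Air density in the intake manifold from the ideal gas law [kg/m^3]
    rho = (map_kpa * 1000.0) / (R_AIR * iat_k)
    # Volume of air inducted per second [m^3/s]
    vol_rate = (rpm / 60.0) / 2.0 * (displacement_l / 1000.0) * vol_eff
    return rho * vol_rate * 1000.0  # kg/s -> g/s
```

For a 2.0 L engine at 3000 rpm, 95 kPa manifold pressure and 300 K charge temperature, this yields roughly 47 g/s of intake air.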
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
International Nuclear Information System (INIS)
Damek, Nawel; Kamoun, Samira
2011-01-01
In this communication, two recursive parametric estimation algorithms are analyzed and applied to a squirrel-cage asynchronous machine located at the "Unit of Automatic Control" (UCA) research unit at ENIS. The first algorithm, which uses the transfer-matrix mathematical model, is based on the gradient principle. The second algorithm, which uses the state-space mathematical model, is based on the minimization of the estimation error. These algorithms are applied as a key technique to estimate asynchronous machine parameters that are unknown but constant or time-varying. Stator voltage and current are used as measured data. The proposed recursive parametric estimation algorithms are validated on experimental data from an asynchronous machine under normal operating conditions such as full load. The results show that these algorithms can estimate the machine parameters effectively and reliably.
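A gradient-based recursive update of the kind the first algorithm relies on can be sketched for a linear-in-parameters model; the gain value and model form are illustrative assumptions, not the authors' exact formulation:

```python
def recursive_gradient_update(theta, phi, y, gain=0.1):
    """One gradient-based recursive update for a linear-in-parameters
    model y ~ phi . theta (a generic sketch of a gradient-principle
    estimator, not the paper's exact algorithm)."""
    y_hat = sum(t * p for t, p in zip(theta, phi))  # current prediction
    error = y - y_hat                               # estimation error
    # Move the parameters along the negative gradient of the squared error
    return [t + gain * p * error for t, p in zip(theta, phi)]
```

Iterating this update over measured regressor/output pairs drives the parameter estimate toward the true value when the gain is small enough for the excitation at hand.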
Review of methods for level density estimation from resonance parameters
International Nuclear Information System (INIS)
Froehner, F.H.
1983-01-01
A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)
HEDPIN: a computer program to estimate pinwise power density
International Nuclear Information System (INIS)
Cappiello, M.W.
1976-05-01
A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, a sample problem, and a program listing are also given.
CSIR Research Space (South Africa)
Du Plessis, WP
2011-09-01
Full Text Available The use of the density-taper approach to initialise a genetic algorithm is shown to give excellent results in the synthesis of thinned arrays. This approach is shown to give better SLL values more consistently than using random values and difference...
Cortical cell and neuron density estimates in one chimpanzee hemisphere.
Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H
2016-01-19
The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm² of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates.
An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation
Directory of Open Access Journals (Sweden)
Senthil Kumar Murugesan
2015-01-01
Full Text Available Software companies are now keen to provide secure software with respect to the accuracy and reliability of their products, especially in relation to software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor with fuzzy-based function point analysis. Initially, the method uses a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by including the security and reliability factors in the calculation. Finally, performance metrics are added to the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Directory of Open Access Journals (Sweden)
Yuxiang He
2018-01-01
Full Text Available This paper presents a new and enhanced fusion module for the Multi-Sensor Precipitation Estimator (MPE) that objectively blends real-time satellite quantitative precipitation estimates (SQPE) with radar and gauge estimates. This module consists of a preprocessor that mitigates systematic bias in SQPE, and a two-way blending routine that statistically fuses adjusted SQPE with radar estimates. The preprocessor not only corrects systematic bias in SQPE, but also improves the spatial distribution of precipitation based on SQPE and makes it closely resemble that of radar-based observations. It uses a more sophisticated radar-satellite merging technique to blend preprocessed datasets, and provides a better overall QPE product. The performance of the new satellite-radar-gauge blending module is assessed using independent rain gauge data over a five-year period between 2003 and 2007, and the assessment evaluates the accuracy of the newly developed satellite-radar-gauge (SRG) blended products versus that of radar-gauge products (which represent the MPE algorithm currently used in National Weather Service (NWS) operations) over two regions: (I) inside effective radar coverage and (II) immediately outside radar coverage. The outcomes of the evaluation indicate that (a) ingest of SQPE over areas within effective radar coverage improves the quality of QPE by mitigating the errors in radar estimates in region I; and (b) blending of radar, gauge, and satellite estimates over region II leads to a reduction of errors relative to bias-corrected SQPE. In addition, the new module alleviates the discontinuities along the boundaries of radar effective coverage otherwise seen when SQPE is used directly to fill the areas outside of effective radar coverage.
A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream
Ying Wah, Teh
2014-01-01
Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based method is a prominent class in clustering data streams. It has the ability to detect arbitrary shape clusters, to handle outlier, and it does not need the number of clusters in advance. Therefore, density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has fast processing time to be applicable in real-time application of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets.
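As a concrete illustration of the density-based clustering properties the abstract describes (arbitrary-shape clusters, outlier handling, no preset cluster count), here is a minimal batch DBSCAN sketch; the streaming optimizations that are the paper's actual contribution are not shown:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch: returns one cluster label per point, -1 = noise.

    A point with at least min_pts neighbours within eps is a core point;
    clusters grow by expanding from core points. Batch-mode illustration
    only, not the paper's real-time stream algorithm.
    """
    n = len(points)

    def region(i):  # indices of all points within eps of point i (incl. i)
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    labels = [None] * n
    cid = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        neigh = region(i)
        if len(neigh) < min_pts:
            labels[i] = -1          # noise (may later become a border point)
            continue
        cid += 1                    # start a new cluster from this core point
        labels[i] = cid
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid     # border point: absorbed, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = region(j)
            if len(jn) >= min_pts:  # j is itself a core point: keep expanding
                queue.extend(jn)
    return labels
```

On two tight groups of points plus a far-away singleton, this yields two cluster labels and marks the singleton as noise.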
A fast density-based clustering algorithm for real-time Internet of Things stream.
Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut
2014-01-01
Estimating Foreign-Object-Debris Density from Photogrammetry Data
Long, Jason; Metzger, Philip; Lane, John
2013-01-01
Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick, and if it represented the first bricks ejected from the flame trench wall, or whether the object was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By combining the Runge-Kutta integration method for velocity with the Verlet integration method for position, a method was obtained that suppresses trajectory computational instabilities due to noisy position data. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.
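The position-based (Verlet-style) side of this strategy can be illustrated with a central-difference acceleration estimate from sampled positions; this is a generic sketch of the idea, not the analysis code used for the STS-124 footage:

```python
def central_accelerations(positions, dt):
    """Central-difference (Verlet-style) acceleration estimates from a
    sequence of positions sampled at a fixed interval dt. Exact for
    quadratic trajectories; with noisy photogrammetry data one would
    smooth the positions first."""
    return [(positions[i + 1] - 2.0 * positions[i] + positions[i - 1]) / dt ** 2
            for i in range(1, len(positions) - 1)]
```

Applied to positions from uniform acceleration (e.g. free fall), the estimate recovers the acceleration exactly, since the second central difference of a quadratic is exact.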
A Pilot-Pattern Based Algorithm for MIMO-OFDM Channel Estimation
Directory of Open Access Journals (Sweden)
Guomin Li
2016-12-01
Full Text Available An improved pilot pattern algorithm for facilitating channel estimation in multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems is proposed in this paper. The presented algorithm reconfigures the parameters in the least squares (LS) algorithm, which belongs to the space-time block-coded (STBC) category, for channel estimation in pilot-based MIMO-OFDM systems. Simulation results show that the algorithm performs better than the classical single-symbol scheme. In contrast to the double-symbol scheme, the proposed algorithm can achieve nearly the same performance with only half of the complexity.
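The basic least-squares pilot estimation step that such algorithms build on can be sketched for the single-antenna case as follows; the pilot values below are illustrative, and the STBC pilot-pattern reconfiguration itself is not shown:

```python
def ls_channel_estimate(rx_pilots, tx_pilots):
    """Per-subcarrier least-squares channel estimate H = Y / X, where Y is
    the received pilot symbol and X the known transmitted pilot. Generic
    single-antenna sketch of the LS step, not the proposed algorithm."""
    return [y / x for y, x in zip(rx_pilots, tx_pilots)]
```

With a flat complex channel gain applied to known pilots, the estimate recovers that gain on every pilot subcarrier.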
Hardware Implementation of Diamond Search Algorithm for Motion Estimation and Object Tracking
International Nuclear Information System (INIS)
Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.
2009-01-01
Object tracking is a very important task in computer vision. Fast search algorithms have emerged as an important technique for achieving real-time tracking results. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for implementation using an FPGA, due to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software tools. The results agree with the algorithm's implementation in the MATLAB environment.
An algorithm for the estimation of road traffic space mean speeds from double loop detector data
Energy Technology Data Exchange (ETDEWEB)
Martinez-Diaz, M.; Perez Perez, I.
2016-07-01
Most algorithms trying to analyze or forecast road traffic rely on many inputs, but in practice, calculations are usually limited by the available data and measurement equipment. Generally, some of these inputs are substituted by raw or even inappropriate estimations, which in some cases come into conflict with the fundamentals of traffic flow theory. This paper refers to one common example of these bad practices. Many traffic management centres depend on the data provided by double loop detectors, which supply, among others, vehicle speeds. The common data treatment is to compute the arithmetic mean of these speeds over different aggregation periods (i.e. the time mean speeds). Time mean speed is not consistent with Edie's generalized definitions of traffic variables, and therefore it is not the average speed which relates flow to density. This means that current practice begins with an error that can have negative effects in later studies and applications. The algorithm introduced in this paper easily enables the estimation of space mean speeds from the data provided by the loops. It is based on two key hypotheses: stationarity of traffic and a log-normal distribution of the individual speeds in each aggregation interval. It could also be used in the case of transient traffic as part of a data fusion methodology. (Author)
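The distinction between time mean and space mean speed, and the role of the log-normal hypothesis, can be illustrated with two estimators; this is a sketch of the underlying relations, not the paper's full loop-detector algorithm:

```python
import math
import statistics

def space_mean_speed(speeds):
    """Harmonic mean of individual vehicle speeds: the space mean speed
    consistent with Edie's definitions (simple sketch over raw speeds)."""
    return len(speeds) / sum(1.0 / v for v in speeds)

def space_mean_lognormal(speeds):
    """Space mean speed under a log-normal speed distribution:
    exp(mu - sigma^2 / 2), with mu and sigma^2 fitted to the log speeds.
    This mirrors the paper's distributional hypothesis in sketch form."""
    logs = [math.log(v) for v in speeds]
    mu = statistics.fmean(logs)
    var = statistics.pvariance(logs)
    return math.exp(mu - var / 2.0)
```

The harmonic (space) mean is always below the arithmetic (time) mean, and for roughly log-normal speed samples the two estimators agree closely.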
2-D DOA Estimation of LFM Signals Based on Dechirping Algorithm and Uniform Circle Array
Directory of Open Access Journals (Sweden)
K. B. Cui
2017-04-01
Full Text Available Based on the dechirping algorithm and a uniform circle array (UCA), a new 2-D direction of arrival (DOA) estimation algorithm for linear frequency modulation (LFM) signals is proposed in this paper. The algorithm applies the idea of dechirping: it regards the signal to be estimated, as received by the reference sensor, as the reference signal, and performs difference-frequency processing with the signal received by each sensor, so that the signal to be estimated becomes a single-frequency signal at each sensor. We then transform the single-frequency signal into an isolated impulse through the Fourier transform (FFT) and construct a new array data model based on the prominent parts of the impulse. Finally, we respectively use the multiple signal classification (MUSIC) algorithm and the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm to realize 2-D DOA estimation of LFM signals. The simulation results verify the effectiveness of the proposed algorithm.
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily deteriorate communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
A Novel Modification of PSO Algorithm for SML Estimation of DOA
Directory of Open Access Journals (Sweden)
Haihua Chen
2016-12-01
Full Text Available This paper addresses the issue of reducing the computational complexity of Stochastic Maximum Likelihood (SML) estimation of Direction-of-Arrival (DOA). The SML algorithm is well-known for its high accuracy of DOA estimation in sensor array signal processing. However, its computational complexity is very high because the estimation of the SML criterion is a multi-dimensional non-linear optimization problem. As a result, it is hard to apply the SML algorithm to real systems. The Particle Swarm Optimization (PSO) algorithm is considered a rather efficient method for multi-dimensional non-linear optimization problems in DOA estimation. However, the conventional PSO algorithm suffers from two defects, namely, too many particles and too many iterations. Therefore, the computational complexity of SML estimation using the conventional PSO algorithm is still somewhat high. To overcome these two defects and to reduce computational complexity further, this paper proposes a novel modification of the conventional PSO algorithm for SML estimation, which we call the Joint-PSO algorithm. The core idea of the modification is that it uses the solution of Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) and the stochastic Cramér-Rao bound (CRB) to determine a novel initialization space. Since this initialization space is already close to the solution of SML, fewer particles and fewer iterations are needed. As a result, the computational complexity can be greatly reduced. In simulation, we compare the proposed algorithm with the conventional PSO algorithm, the classic Alternating Minimization (AM) algorithm and the Genetic Algorithm (GA). Simulation results show that our proposed algorithm is one of the most efficient solving algorithms and shows great potential for the application of SML in real systems.
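The conventional PSO baseline that the Joint-PSO modification starts from can be sketched as follows; the parameter values are common defaults rather than those used in the paper, and the ESPRIT/CRB-based initialization that is the paper's contribution is not shown:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal conventional PSO: each particle is pulled toward its own
    best position (c1) and the swarm's best position (c2), with inertia w."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:           # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:          # and, if better, the swarm best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth low-dimensional criterion such as a quadratic, this baseline reliably locates the minimum; the paper's point is that a well-chosen initialization space lets the same scheme succeed with far fewer particles and iterations.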
Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J
2016-04-01
Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with impedance pneumography (IP), the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on ECG only. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using the PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on the ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using the ECG than the PPG. The toolbox of algorithms and data used in this study is publicly available.
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Energy Technology Data Exchange (ETDEWEB)
Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performances for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can more accurately estimate the sFADE parameters by taking suitable initial guess values into consideration. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
Directory of Open Access Journals (Sweden)
E. Hadaś
2016-06-01
Full Text Available The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time-consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired for orchard areas make it possible to determine orchard structures and the geometric parameters of individual trees. In this research we propose an automatic method that determines the geometric parameters of individual olive trees using ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For the points located inside each polygon, we select the maximum height and the minimum height, and then we estimate the tree height and the crown base height. We use the first two components of Principal Component Analysis (PCA) as the estimators for the crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points m−2. We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
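The PCA step for the crown diameters can be illustrated with a closed-form eigen-decomposition of the 2x2 covariance of the horizontal crown coordinates; the 2-sigma spread returned below is an assumed convention, since scaling principal-axis spread to a full crown diameter is a calibration choice not specified here:

```python
import math

def crown_diameters(xy_points):
    """Spread of crown points along their two principal axes (PCA in 2-D).
    Returns (longer, shorter) 2-sigma spreads; converting these to full
    crown diameters is an assumed calibration step."""
    n = len(xy_points)
    mx = sum(p[0] for p in xy_points) / n
    my = sum(p[1] for p in xy_points) / n
    # Covariance matrix of the centred coordinates
    sxx = sum((p[0] - mx) ** 2 for p in xy_points) / n
    syy = sum((p[1] - my) ** 2 for p in xy_points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in xy_points) / n
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    return 2.0 * math.sqrt(lam1), 2.0 * math.sqrt(max(lam2, 0.0))
```

For points sampled on a 3:1 ellipse, the ratio of the two returned spreads recovers the 3:1 axis ratio.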
A predictor-corrector algorithm to estimate the fractional flow in oil-water models
International Nuclear Information System (INIS)
Savioli, Gabriela B; Berdaguer, Elena M Fernandez
2008-01-01
We introduce a predictor-corrector algorithm to estimate parameters in a nonlinear hyperbolic problem. It can be used to estimate the oil fractional flow function from the Buckley-Leverett equation. The forward model is nonlinear: the sought-for parameter is a function of the solution of the equation. Traditionally, the estimation of functions requires the selection of a fitting parametric model. The algorithm that we develop does not require a predetermined parametric model; the estimation problem is therefore carried out over a set of parameters which are themselves functions. The algorithm is based on the linearization of the parameter-to-output mapping. This technique is new in the field of nonlinear estimation and has the advantage of laying aside parametric models. The algorithm is iterative and of predictor-corrector type. We present theoretical results on the inverse problem and use synthetic data to test the new algorithm.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques, applied here to both single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes, from which traditional parameters of uncertainty analysis such as standard deviations and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of the single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented; with its aid and the use of Monte Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms is estimated, leading to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Simulating nailfold capillaroscopy sequences to evaluate algorithms for blood flow estimation.
Tresadern, P A; Berks, M; Murray, A K; Dinsdale, G; Taylor, C J; Herrick, A L
2013-01-01
The effects of systemic sclerosis (SSc), a disease of the connective tissue causing blood flow problems that can require amputation of the fingers, can be observed indirectly by imaging the capillaries at the nailfold, though taking quantitative measures such as blood flow to diagnose the disease and monitor its progression is not easy. Optical flow algorithms may be applied, but without ground truth (i.e., known blood flow) it is hard to evaluate their accuracy. We propose an image model that generates realistic capillaroscopy videos with known flow, and use this model to quantify the effect of flow rate, cell density and contrast (among others) on estimated flow. This resource will help researchers to design systems that are robust under real-world conditions.
Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian
2018-03-01
Perturbations acting as extra scatterers distort coda waveforms; because of their long propagation times and travel paths, coda waves are therefore sensitive to micro-defects in strongly heterogeneous media such as concrete. In this paper, we apply varied external loads to a life-size concrete slab containing multiple pre-existing micro-cracks, with several sources and receivers installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pair measurements. Inversions of the DC results are then applied to estimate the associated distribution density values in three-dimensional regions through a kernel sensitivity model and least-squares algorithms, which leads to images indicating the micro-crack positions. This work provides an efficient non-destructive approach to detecting internal defects and damage in large-size concrete structures.
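The waveform decorrelation coefficient at the core of the method can be computed as one minus the zero-lag normalized cross-correlation between a reference and a perturbed coda window; a minimal sketch, assuming this standard definition of DC, which the abstract does not spell out:

```python
import numpy as np

def decorrelation(u0, u1):
    """Decorrelation coefficient DC = 1 - CC, with CC the zero-lag
    normalized cross-correlation between a reference coda window u0 and
    a perturbed window u1. Standard definition assumed; the abstract
    does not give the formula."""
    u0 = np.asarray(u0, dtype=float)
    u1 = np.asarray(u1, dtype=float)
    cc = np.dot(u0, u1) / np.sqrt(np.dot(u0, u0) * np.dot(u1, u1))
    return 1.0 - cc
```

Identical windows give DC = 0, sign-flipped windows give DC = 2; growing micro-crack density pushes DC away from zero.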
Cubic scaling algorithms for RPA correlation using interpolative separable density fitting
Lu, Jianfeng; Thicke, Kyle
2017-12-01
We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.
International Nuclear Information System (INIS)
Zhang Man-Hong
2016-01-01
By performing the electronic structure computation of a Si atom, we compare two iteration algorithms for Broyden electron density mixing in the literature. One was proposed by Johnson and implemented in the well-known VASP code; the other was given by Eyert. We solve the Kohn-Sham equation by using a conventional outward/inward integration of the differential equation and then connect the two parts of the solution at the classical turning points, which differs from the matrix eigenvalue solution used in the VASP code. Compared to Johnson's algorithm, the one proposed by Eyert needs fewer total iterations.
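Both Broyden variants accelerate the basic self-consistency loop with simple density mixing; the scalar sketch below shows that baseline loop only, as an illustration, not either paper's algorithm:

```python
import math

def scf_linear_mixing(update, rho0, alpha=0.5, tol=1e-12, max_iter=1000):
    """Baseline fixed-point loop with simple linear density mixing:
    rho <- rho + alpha * (F(rho) - rho). Broyden-type mixing (Johnson,
    Eyert) replaces the fixed alpha with an updated approximate inverse
    Jacobian to cut the iteration count; this is the reference scheme
    both improve on."""
    rho, n = rho0, 0
    for n in range(max_iter):
        residual = update(rho) - rho
        if abs(residual) < tol:
            break
        rho += alpha * residual
    return rho, n

# toy self-consistency problem: fixed point of cos (the Dottie number)
rho, n_iter = scf_linear_mixing(math.cos, 1.0)
```

For the toy map the loop converges to the fixed point rho = cos(rho) ≈ 0.739085; the interesting comparison in the paper is how many fewer iterations the Broyden updates need.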
ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS
Directory of Open Access Journals (Sweden)
A.A. Vedyakov
2016-07-01
Full Text Available Subject of Research. We consider the task of keeping dynamic systems in a stable state by ensuring the stability of the trivial solution of various dynamic systems in a learning regime through the tuning of their parameters. Method. The problems are solved by applying the ideology of constructing robust finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and of steady asymptotic stability are introduced, and results are presented on the synthesis of coarsened gradient algorithms that solve the posed tasks in a finite number of iterations. Practical Relevance. The results may be called upon for solving practical stabilization tasks in the operation of various engineering constructions and devices.
Sharp probability estimates for Shor's order-finding algorithm
Bourdon, P. S.; Williams, H. T.
2006-01-01
Let N be a large positive integer, let b > 1 be an integer relatively prime to N, and let r be the order of b modulo N. Finally, let QC be a quantum computer whose input register has the size specified in Shor's original description of his order-finding algorithm. We prove that when Shor's algorithm is implemented on QC, then the probability P of obtaining a (nontrivial) divisor of r exceeds 0.7 whenever N exceeds 2^{11}-1 and r exceeds 39, and we establish that 0.7736 is an asymptotic lower...
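The order r referenced above can be checked classically by brute force for small N; a minimal sketch:

```python
import math

def multiplicative_order(b, N):
    """Smallest r > 0 with b**r = 1 (mod N): the order that Shor's
    algorithm extracts. Brute force, exponential in log N, so usable
    only as a classical check for small cases."""
    if math.gcd(b, N) != 1:
        raise ValueError("b must be coprime to N")
    r, x = 1, b % N
    while x != 1:
        x = (x * b) % N
        r += 1
    return r
```

For example, 2 has order 4 modulo 15, which is the small case usually used to illustrate Shor's factoring of 15.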
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well ... Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attraction. The global SCE procedure is, in general, more effective ... and provides a better coverage of the Pareto optimal solutions at a lower computational cost...
Ahmed, Sajid
2017-05-12
The estimation of angular-location and range of a target is a joint optimization problem. In this work, to estimate these parameters, by meticulously evaluating the phase of the received samples, low complexity sequential and joint estimation algorithms are proposed. We use a single-input and multiple-output (SIMO) system and transmit frequency-modulated continuous-wave signal. In the proposed algorithm, it is shown that by ignoring very small value terms in the phase of the received samples, fast-Fourier-transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. Sequential estimation algorithm uses FFT and requires only one received snapshot to estimate the angular-location. Joint estimation algorithm uses two-dimensional FFT to estimate the angular-location and range of the target. Simulation results show that joint estimation algorithm yields better mean-squared-error (MSE) for the estimation of angular-location and much lower run-time compared to conventional MUltiple SIgnal Classification (MUSIC) algorithm.
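The FFT-based sequential step can be illustrated for a single target and one snapshot of a uniform linear array; the array size, zero-padding length and half-wavelength element spacing below are assumptions for the sketch, not values from the paper:

```python
import numpy as np

def estimate_angle_fft(snapshot, d_over_lambda=0.5, nfft=4096):
    """Estimate a single target's angular location from one array
    snapshot by locating the spatial-frequency peak of a zero-padded
    FFT, in the spirit of the sequential algorithm described above."""
    spec = np.fft.fft(np.asarray(snapshot, dtype=complex), nfft)
    k = int(np.argmax(np.abs(spec)))
    f = k / nfft
    if f > 0.5:               # map normalized frequency to [-0.5, 0.5)
        f -= 1.0
    return float(np.degrees(np.arcsin(f / d_over_lambda)))

# one noiseless snapshot from a 32-element half-wavelength array, target at 20 degrees
n = np.arange(32)
true_f = 0.5 * np.sin(np.radians(20.0))
x = np.exp(2j * np.pi * true_f * n)
theta = estimate_angle_fft(x)
```

With heavy zero-padding the grid error is a small fraction of a degree, which is why a single FFT over one snapshot suffices for the angular estimate.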
Ahmed, Sajid; Jardak, Seifallah; Alouini, Mohamed-Slim
2017-01-01
Energy Technology Data Exchange (ETDEWEB)
Sheng, Zheng, E-mail: 19994035@sina.com [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Wang, Jun; Zhou, Bihua [National Defense Key Laboratory on Lightning Protection and Electromagnetic Camouflage, PLA University of Science and Technology, Nanjing 210007 (China); Zhou, Shudao [College of Meteorology and Oceanography, PLA University of Science and Technology, Nanjing 211101 (China); Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044 (China)
2014-03-15
This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates the adaptive parameters adjusting operation and the simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant that may result in decreasing the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of cuckoo search algorithm is relatively weak that may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under the noiseless and noise condition, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in the noiseless and noise condition. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
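The simulated annealing operation merged into the cuckoo search can be illustrated by a bare-bones annealing loop on a scalar cost; the step-size and cooling settings below are illustrative assumptions, not the paper's values:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=1):
    """Bare-bones simulated-annealing loop: accepts uphill moves with
    Boltzmann probability exp(-(f(y) - f(x)) / T) and cools T
    geometrically. A generic sketch of the operation, not the paper's
    exact hybrid with cuckoo search."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

best, fbest = simulated_annealing(lambda v: (v - 3.0) ** 2, 0.0)
```

The occasional acceptance of worse moves is what gives the hybrid its improved local search ability compared to a purely greedy update.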
International Nuclear Information System (INIS)
Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao
2014-01-01
Energy Technology Data Exchange (ETDEWEB)
Eisenbach, Markus [ORNL; Li, Ying Wai [ORNL
2017-06-01
We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.
The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents
Directory of Open Access Journals (Sweden)
Ziad Salem
2014-12-01
Full Text Available Learning is the act of acquiring new, or modifying existing, knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm to learn the density of agents in an arena based on the measurements of the six proximity sensors of a combined actuator-sensor unit (CASU). Rules are presented that were induced by the learning algorithm, which was trained with data-sets based on the CASU's sensor data streams collected during a number of experiments with Bristlebots (agents) in the arena (environment). It was found that a set of rules generated by the learning algorithm is able to predict the number of Bristlebots in the arena from the CASU's sensor readings with satisfying accuracy.
Directory of Open Access Journals (Sweden)
Jayaraj V
2010-01-01
Full Text Available A new switching-based median filtering scheme for the restoration of images that are highly corrupted by salt and pepper noise is proposed, and an algorithm based on the scheme is developed. The new scheme introduces the concept of substituting noisy pixels by linear prediction prior to estimation, and a novel simplified linear predictor is developed for this purpose. The objective of the scheme and algorithm is the removal of high-density salt and pepper noise in images. The new algorithm shows significantly better image quality, with good PSNR, reduced MSE, good edge preservation and reduced streaking, and this good performance is achieved with reduced computational complexity. A comparison of the performance with several existing algorithms is made in terms of visual and quantitative results. The performance of the proposed scheme and algorithm is demonstrated.
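A minimal switching median filter, which replaces only pixels at the salt or pepper extremes, can be sketched as follows; the linear-prediction substitution step of the proposed scheme is deliberately omitted:

```python
import numpy as np

def switching_median(img, low=0.0, high=255.0):
    """Switching median filter sketch: only pixels at the salt (high) or
    pepper (low) extremes are replaced by the median of their 3x3
    neighborhood; uncorrupted pixels pass through unchanged. The
    linear-prediction substitution of the proposed scheme is not
    reproduced here."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    padded = np.pad(img, 1, mode='edge')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] == low or img[i, j] == high:
                out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

Leaving clean pixels untouched is what preserves edges at high noise densities, compared to a plain median filter that smooths everything.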
Application of the Levenberg-Marquardt Scheme to the MUSIC Algorithm for AOA Estimation
Directory of Open Access Journals (Sweden)
Joon-Ho Lee
2013-01-01
We observe that the MUSIC cost function can be expressed in a least squares form. Based on this observation, we present a rigorous Levenberg-Marquardt (LM) formulation of the MUSIC algorithm for the simultaneous estimation of an azimuth and an elevation. We show a convergence property and compare the performance of the LM-based MUSIC algorithm with that of the standard MUSIC algorithm via Monte Carlo simulation. We also compare the performance of the MUSIC algorithm with that of the Capon algorithm, both for the standard implementation and for the LM-based implementation.
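A bare-bones Levenberg-Marquardt loop of the kind underlying the LM formulation might look like this; it is the generic least-squares form, not the paper's MUSIC-specific cost:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, iters=50):
    """Bare-bones Levenberg-Marquardt loop for a least-squares cost
    ||r(x)||^2: damped Gauss-Newton steps, with the damping lam halved
    after a successful step and doubled after a rejected one."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), J.T @ r)
        x_new = x - step
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5
        else:
            lam *= 2.0
    return x
```

For azimuth/elevation estimation, x would hold the two angles and r(x) the residual of the least-squares form of the MUSIC cost.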
Manifold absolute pressure estimation using neural network with hybrid training algorithm.
Directory of Open Access Journals (Sweden)
Mohd Taufiq Muslim
Full Text Available In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value.
Manifold absolute pressure estimation using neural network with hybrid training algorithm.
Muslim, Mohd Taufiq; Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
Estimate-Merge-Technique-based algorithms to track an underwater ...
Indian Academy of Sciences (India)
D V A N Ravi Kumar
2017-07-04
Jul 4, 2017 ... In this paper, two novel methods based on the Estimate Merge Technique ... mentioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation in .... equations are converted to sequential equations to make ... estimation error and low convergence time) at feasibly high.
A Kalman-based Fundamental Frequency Estimation Algorithm
DEFF Research Database (Denmark)
Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom
2017-01-01
Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this paper...
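Tracking a slowly varying quantity without the frame-stationarity assumption is exactly what a Kalman recursion provides; a scalar random-walk sketch, illustrative only and not the paper's model:

```python
def kalman_track(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model, of the kind
    used to track a time-varying fundamental frequency across frames
    instead of assuming it constant within each frame. Generic sketch;
    q and r are assumed process and measurement noise variances."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                      # predict: random-walk state
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return out

# constant true value observed without noise, for illustration
est = kalman_track([1.0] * 50)
```

The gain k shrinks as the state uncertainty p settles, so the track smooths measurement noise while still following slow drifts of the fundamental frequency.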
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
A FPC-ROOT Algorithm for 2D-DOA Estimation in Sparse Array
Directory of Open Access Journals (Sweden)
Wenhao Zeng
2016-01-01
Full Text Available To improve the performance of two-dimensional direction-of-arrival (2D DOA) estimation in sparse arrays, this paper presents a Fixed Point Continuation Polynomial Roots (FPC-ROOT) algorithm. Firstly, a signal model for DOA estimation is established based on matrix completion, and it can be proved that the proposed model satisfies the Null Space Property (NSP). Secondly, the left and right singular vectors of the received signal matrix are obtained using the matrix completion algorithm. Finally, the 2D DOA estimates can be acquired by solving polynomial roots. The proposed algorithm achieves high accuracy of 2D DOA estimation in sparse arrays without solving the autocorrelation matrix of the received signals or scanning a two-dimensional spectral peak. Besides, it decreases the number of antennas, lowers the computational complexity, and avoids the angle ambiguity problem. Computer simulations demonstrate that the proposed FPC-ROOT algorithm can obtain precise 2D DOA estimates in sparse arrays.
Nagy, Ivan
2017-01-01
This book provides a general theoretical background for constructing the recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them in the unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers the recursive estimation of dynamic mixtures, which are free of iterative processes and close to analytical solutions as much as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. Codes are implemented in the open source platform for engineering computations. The program codes given serve to illustrate the theory and demonstrate the work of the included algorithms.
Ridge Distance Estimation in Fingerprint Images: Algorithm and Performance Evaluation
Directory of Open Access Journals (Sweden)
Tian Jie
2004-01-01
Full Text Available It is important to estimate the ridge distance, an intrinsic texture property of a fingerprint image, accurately. Up to now, only a few articles have touched directly upon ridge distance estimation, and little has been published providing a detailed evaluation of methods for it, in particular the traditional spectral analysis method applied in the frequency domain. In this paper, a novel method operating on non-overlapping blocks, called the statistical method, is presented to estimate the ridge distance. The direct estimation ratio (DER) and estimation accuracy (EA) are defined and used, along with time consumption (TC), as parameters to evaluate the performance of these two methods for ridge distance estimation. Based on a comparison of the performance of the two methods, a third, hybrid method is developed that combines the merits of both. Experimental results indicate that DER is 44.7%, 63.8%, and 80.6%, and EA is 84%, 93%, and 91%, with the spectral analysis method, statistical method, and hybrid method, respectively.
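The spectral analysis method mentioned above amounts to locating the dominant spatial frequency of a gray-level profile taken across the ridges; a one-dimensional sketch under that assumption:

```python
import numpy as np

def ridge_distance_spectral(profile, min_period=3):
    """Spectral-analysis estimate of ridge distance: the reciprocal of
    the dominant spatial frequency of a gray-level profile taken across
    the ridges. One-dimensional illustration of the frequency-domain
    approach; min_period excludes implausibly fine ridge spacings."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))
    valid = (freqs > 0) & (freqs <= 1.0 / min_period)
    k = int(np.argmax(np.where(valid, spec, 0.0)))
    return 1.0 / freqs[k]

n = np.arange(128)
profile = np.sin(2 * np.pi * n / 8.0)    # synthetic ridges every 8 pixels
d = ridge_distance_spectral(profile)
```

The statistical method of the paper instead works on non-overlapping blocks in the spatial domain; the hybrid combines the two.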
Directory of Open Access Journals (Sweden)
Changgan SHU
2014-09-01
Full Text Available In the standard root multiple signal classification (root-MUSIC) algorithm, the performance of direction-of-arrival estimation degrades, and can even fail, under a low signal-to-noise ratio and small angular separation between signals. By reconstructing and weighting the covariance matrix of the received signal, the modified algorithm can provide more accurate estimation results. Computer simulation and performance analysis are given next, which show that under conditions of lower signal-to-noise ratio and stronger correlation between signals, the proposed modified algorithm provides better azimuth estimation performance than the standard method.
Analysis of the Command and Control Segment (CCS) attitude estimation algorithm
Stockwell, Catherine
1993-01-01
This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin-axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.
Directory of Open Access Journals (Sweden)
Xiangbing Zhou
2017-12-01
Full Text Available Rapidly growing Global Positioning System (GPS) data play an important role in trajectory mining and its applications (e.g., GPS-enabled smart devices). In order to employ K-means to mine the better origins and destinations (OD) behind the GPS data, and to overcome its shortcomings, including slow convergence, sensitivity to the selection of initial seeds, and getting stuck in local optima, this paper proposes a novel niche genetic algorithm (NGA) with density and noise for K-means clustering (NoiseClust). In NoiseClust, an improved noise method and K-means++ are proposed to produce the initial population and capture higher-quality seeds that can automatically determine the proper number of clusters and handle genes of different sizes and shapes. A density-based method is presented to divide the population into niches, with the aim of maintaining population diversity. Adaptive probabilities of crossover and mutation are also employed to prevent convergence to a local optimum. Finally, the centers (the best chromosome) are obtained and fed into K-means as initial seeds to generate even higher-quality clustering results by allowing the initial seeds to readjust as needed. Experimental results based on taxi GPS data sets demonstrate that NoiseClust has high performance and effectiveness, and easily mines the traffic situations of the city in four taxi GPS data sets.
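The K-means++ seeding used to build the initial population can be sketched as follows; the sketch is restricted to 2D points for illustration and is the standard K-means++ rule, not NoiseClust's full noise-and-niche machinery:

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """K-means++ seeding: each subsequent center is drawn with
    probability proportional to the squared distance to the nearest
    center chosen so far, which spreads the initial seeds across the
    data. 2D points only, for illustration."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance from each point to its nearest existing center
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
              for p in points]
        r, acc = rng.uniform(0.0, sum(d2)), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Because distant points carry almost all of the sampling weight, two tight clusters far apart will almost surely each contribute a seed, which is the property NoiseClust exploits when forming its initial population.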
International Nuclear Information System (INIS)
Nunes, F.; Varela, P.; Silva, A.; Manso, M.; Santos, J.; Nunes, I.; Serra, F.; Kurzan, B.; Suttrop, W.
1997-01-01
Broadband reflectometry is a well-established technique that uses the round-trip group delays of reflected frequency-swept waves to measure the density profiles of fusion plasmas. The main factor that may limit the accuracy of the reconstructed profiles is the interference of the probing waves with plasma density fluctuations: plasma turbulence leads to random phase variations, and magnetohydrodynamic activity produces mainly strong amplitude and phase modulations. Both effects cause the decrease, and eventual loss, of signal at some frequencies. Several data processing techniques can be applied to filter and/or interpolate noisy group delay data obtained from turbulent plasmas with a single frequency sweep. Here, we propose a more powerful algorithm performing two-dimensional regularization (in space and time) of data provided by multiple consecutive frequency sweeps, which leads to density profiles with improved accuracy. The new method is described and its application to simulated data corrupted by noise and missing data is considered. It is shown that the algorithm improves the identification of slowly varying plasma density perturbations by attenuating the effect of fast fluctuations and noise contained in experimental data. First results obtained with this method on the ASDEX Upgrade tokamak are presented. copyright 1997 American Institute of Physics
Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys
Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik
2011-01-01
The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
Directory of Open Access Journals (Sweden)
Niels Halama
2009-11-01
Full Text Available Determining the correct number of positive immune cells in immunohistological sections of colorectal cancer and other tumor entities is emerging as an important clinical predictor and therapy selector for the individual patient. This task is usually obstructed by cell conglomerates of various sizes. We show here that, at least in colorectal cancer, the inclusion of immune cell conglomerates is indispensable for estimating reliable patient cell counts. Integrating virtual microscopy and image processing in principle allows the high-throughput evaluation of complete tissue slides. For such large-scale systems we demonstrate a robust quantitative image processing algorithm for the reproducible quantification of conglomerates of CD3-positive T cells in colorectal cancer. While isolated cells (28 to 80 µm²) are counted directly, the number of cells contained in a conglomerate is estimated by dividing the area of the conglomerate in thin tissue sections (≤6 µm) by the median area covered by an isolated T cell, which we determined to be 58 µm². We applied our algorithm to large numbers of CD3-positive T cell conglomerates and compared the results to cell counts obtained manually by two independent observers. While manual counting showed a deviation of up to 400 cells/mm² (41% variation), especially for high cell counts, algorithm-determined T cell numbers generally lay between the manually observed cell numbers, with perfect reproducibility. In summary, we recommend our approach as an objective and robust strategy for quantifying immune cell densities in immunohistological sections that can be directly implemented into automated full-slide image processing systems.
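The counting rule described above (isolated cells counted directly, conglomerate counts estimated as area divided by the 58 µm² median isolated-cell area) can be sketched in a few lines. The function name and the handling of sub-threshold debris are illustrative assumptions, not the paper's implementation:

```python
def estimate_cell_count(region_areas_um2, isolated_range=(28.0, 80.0),
                        median_cell_area_um2=58.0):
    """Estimate T cell counts from segmented region areas (in µm²).

    Regions within the isolated-cell size range count as one cell each;
    larger conglomerates are divided by the median isolated-cell area.
    """
    lo, hi = isolated_range
    total = 0
    for area in region_areas_um2:
        if area < lo:
            continue                                      # too small: treat as debris
        elif area <= hi:
            total += 1                                    # isolated cell
        else:
            total += round(area / median_cell_area_um2)   # conglomerate
    return total
```

For example, two isolated cells plus a 580 µm² conglomerate would be counted as 12 cells in total.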
SU-E-T-496: A Study of Two Commercial Dose Calculation Algorithms in Low Density Phantom
International Nuclear Information System (INIS)
Lim, S; Lovelock, D; Yorke, E; Kuo, L; LoSasso, T
2014-01-01
Purpose: Some lung cancer patients have very low lung density due to comorbidities. We investigate the calculation accuracy of Eclipse AAA and Acuros (AXB) using a phantom that simulates this situation. Methods: A 2.5 x 5.0 x 5 cm (long) solid water inhomogeneity positioned 10 cm deep in a Balsa lung phantom (density 0.099 g/cc) was irradiated with an off-center field such that the central axis was parallel to one side of the inhomogeneity. Radiochromic films were placed at 2.5 cm (S1) and 5 cm (S2) depths. After CT scanning, Hounsfield units (HU) were converted to electron (ρe) and mass (ρm) density using in-house (IH) and vendor-supplied (V) calibration curves. The IH electron densities were generated using a commercial electron density phantom. The phantom was exposed to 6 MV 3x3 and 20x20 fields. Dose distributions were calculated using the AAA and AXB algorithms. Results: The HU of the Balsa wood (BW) is -910±40, which translates to ρe of 0.088±0.050 (IH) and 0.090±0.050 (V), and ρm of 0.101±0.045 (IH) and 0.103±0.039 (V). Both ρe(V) and ρm(V) are higher than ρe(IH) and ρm(IH), respectively, by 1.4-5.3% and 0.5-12.3%. The average calculated dose inside the solid water ‘tumor’ is within 3.7% and 2.4% of measurements for both calibrations and field sizes using AAA and AXB. Within 10 mm outside the ‘tumor’, AAA on average underestimates by 18.3% and 17.0%, respectively, for 3x3 using IH and V. AXB underestimates by 5.9% (S1)-6.6% (S2) and 13.1% (S1)-16.0% (S2), respectively, using IH and V. For 20x20, AAA and AXB underestimate by 2.8% (S1)-4.4% (S2) and 0.3% (S1)-1.4% (S2), respectively, with either calibration. Conclusion: The difference in the HU calibration between V and IH is not of clinical significance at normal field sizes. In the low density region of small fields, the calculations from both algorithms differ significantly from measurements. This may be attributed to insufficient lateral electron transport modeling in the two algorithms, resulting in over-estimation in the penumbra
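The conversion from Hounsfield units to density via a calibration curve is, in essence, piecewise-linear interpolation between measured phantom points. A minimal sketch; the knot values below are invented for illustration and are not the IH or vendor curves:

```python
import numpy as np

# Hypothetical CT calibration curve: Hounsfield units vs. relative
# electron density, as measured with an electron-density phantom.
hu_points  = np.array([-1000.0, -910.0, 0.0, 1000.0])
rho_points = np.array([  0.0,     0.090, 1.0,  1.52])

def hu_to_density(hu):
    """Piecewise-linear interpolation of the calibration curve."""
    return float(np.interp(hu, hu_points, rho_points))
```

With this curve, a Balsa-like voxel at -910 HU maps to a relative electron density of 0.090.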
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the tracked feature points in the image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments demonstrate the performance of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Małgorzata Stramska
2013-02-01
Full Text Available The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m⁻²) in the Baltic Sea. All the algorithms use surface chlorophyll concentration, sea surface temperature, photosynthetic available radiation, latitude, longitude and day of the year as input data. Algorithm-derived PP is then compared with PP estimates obtained from ¹⁴C uptake measurements. The results indicate that the best agreement between the modelled and measured PP in the Baltic Sea is obtained with the DESAMBEM algorithm. This result supports the notion that a regional approach should be used in the interpretation of ocean colour satellite data in the Baltic Sea.
Directory of Open Access Journals (Sweden)
Dominik Jaskierniak
2015-06-01
Full Text Available Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m⁻², we applied a local maximum filtering (LMF) method and a normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots with resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR-derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods, respectively, had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut-derived ITD segments, observed versus predicted stand BAHa had R² ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one-ha resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut-derived LiDAR indices (R² = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships consisting of BAHa (R² = 0.50) or a sapwood perimeter index, defined as (BAHa·SDen)^1/2 (R² = 0.48).
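The local maximum filtering (LMF) step of individual tree detection can be sketched as a window scan over a canopy height model: a cell is a tree-top candidate if it is the unique maximum of its neighbourhood. The window radius and minimum height below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def local_maxima(chm, radius=1, min_height=2.0):
    """Return (row, col) tree-top candidates from a canopy height model:
    cells that are the strict maximum of their (2*radius+1)^2 window
    and taller than min_height (metres)."""
    rows, cols = chm.shape
    tops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r, c]
            if h < min_height:
                continue
            window = chm[max(r - radius, 0):r + radius + 1,
                         max(c - radius, 0):c + radius + 1]
            # strict maximum: the peak value occurs exactly once in the window
            if h == window.max() and (window == h).sum() == 1:
                tops.append((r, c))
    return tops
```

Stocking density then follows as the number of detected tops divided by the ground area covered by the grid.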
DEFF Research Database (Denmark)
Buch-Kromann, Tine; Nielsen, Jens
2012-01-01
This paper introduces a multivariate density estimator for truncated and censored data with special emphasis on extreme values based on survival analysis. A local constant density estimator is considered. We extend this estimator by means of tail flattening transformation, dimension reducing prior...
Automatic breast tissue density estimation scheme in digital mammography images
Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero
2017-03-01
Cases of breast cancer have increased substantially each year, and radiologists' readings are subject to subjectivity and failures of interpretation that may affect the final diagnosis. High density in breast tissue is an important factor related to these failures. Thus, among many other functions, some CADx (computer-aided diagnosis) schemes classify breasts according to the predominant density. To aid in such a procedure, this work describes automated software for classification and statistical reporting of the percentage change in breast tissue density, through analysis of sub-regions (ROIs) of the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted. An MLP artificial neural network was then used to categorize the ROIs. Experienced radiologists had previously determined the ROI density classification, which served as the reference for the software evaluation. In tests, average accuracy was 88.7% for ROI classification and 83.25% for classification of whole-breast density into the 4 BI-RADS density classes, on a set of 400 images. Furthermore, when considering only a simplified two-class division (high and low densities), the classifier accuracy reached 93.5%, with AUC = 0.95.
Majeed, Muhammad Usman
2017-01-01
the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time
Usefulness of an enhanced Kitaev phase-estimation algorithm in quantum metrology and computation
Kaftal, Tomasz; Demkowicz-Dobrzański, Rafał
2014-12-01
We analyze the performance of a generalized Kitaev's phase-estimation algorithm where N phase gates, acting on M qubits prepared in a product state, may be distributed in an arbitrary way. Unlike the standard algorithm, where the mean square error scales as 1/N, the optimal generalizations offer the Heisenberg 1/N² error scaling and we show that they are in fact very close to the fundamental Bayesian estimation bound. We also demonstrate that the optimality of the algorithm breaks down when losses are taken into account, in which case the performance is inferior to the optimal entanglement-based estimation strategies. Finally, we show that when an alternative resource quantification is adopted, which describes the phase estimation in Shor's algorithm more accurately, the standard Kitaev's procedure is indeed optimal and there is no need to consider its generalized version.
GPS Signal Offset Detection and Noise Strength Estimation in a Parallel Kalman Filter Algorithm
National Research Council Canada - National Science Library
Vanek, Barry
1999-01-01
.... The variance of the noise process is estimated and provided to the second algorithm, a parallel Kalman filter structure, which then adapts to changes in the real-world measurement noise strength...
Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables
International Nuclear Information System (INIS)
Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter
2010-01-01
We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.
Directory of Open Access Journals (Sweden)
Han Liwei
2014-07-01
Full Text Available Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to limitations of the measurement information, material parameters, load, geometry, initial conditions, boundary conditions, and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertainty transition between the qualitative concept and the quantitative description. An improved algorithm for the cloud probability distribution density, based on a backward cloud generator, was then proposed. It effectively converts parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such a qualitative description is expressed through the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. The experimental results proved the proposed algorithm feasible: it revealed the changing regularity of the piezometric tube's water level and made it possible to detect seepage damage in the dam body.
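A one-dimensional backward cloud generator recovers {Ex, En, He} from the cloud drops using the first absolute central moment and the sample variance. The sketch below is the textbook formulation, which may differ in detail from the paper's improved algorithm:

```python
import math

def backward_cloud(samples):
    """Backward cloud generator: recover the cloud numerical
    characteristics {Ex, En, He} from a list of cloud drops."""
    n = len(samples)
    ex = sum(samples) / n                                   # expectation Ex
    first_abs = sum(abs(x - ex) for x in samples) / n       # first absolute central moment
    en = math.sqrt(math.pi / 2.0) * first_abs               # entropy En
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)      # sample variance
    he = math.sqrt(max(s2 - en ** 2, 0.0))                  # hyper-entropy He
    return ex, en, he
```

The `max(..., 0.0)` guard keeps He real when, for small samples, the entropy estimate exceeds the sample variance.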
Digital Repository Service at National Institute of Oceanography (India)
Madhupratap, M.; Achuthankutty, C.T.; Nair, S.R.S.
Direct sampling of the sandy substratum of the Agatti Lagoon with a corer showed the presence of very high densities of epibenthic forms. On average, densities were about 25 times higher than previously estimated with emergence traps. About 80...
A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation
DEFF Research Database (Denmark)
Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri
2014-01-01
This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...
Automated volumetric breast density estimation: A comparison with visual assessment
International Nuclear Information System (INIS)
Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.
2013-01-01
Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the x-ray tube target were significantly different between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable objective method to measure breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.
2016-07-01
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang
2018-06-01
The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified algorithm for estimating ET, based on the global-scale MOD16 algorithm, over alpine meadow on the TP in China. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between the ET observed using Eddy Covariance (EC) and estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm, with the coefficient of determination (R²) increasing from 0.26 to 0.68 and the root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d⁻¹. The modified algorithm performed slightly better, with a higher R² (0.70) and lower RMSE (0.61 mm d⁻¹), for after-precipitation days than for non-precipitation days at the Suli site. Conversely, better results were obtained for non-precipitation days than for after-precipitation days at the Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET on non-precipitation days, which have smaller observation errors, than on after-precipitation days. Comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm could produce high-accuracy ET estimates over the alpine meadow sites on the TP.
A Study on Fuel Estimation Algorithms for a Geostationary Communication & Broadcasting Satellite
Directory of Open Access Journals (Sweden)
Jong Won Eun
2000-12-01
Full Text Available A method has been developed to calculate the fuel budget for a geostationary communication and broadcasting satellite. It is quite essential that the pre-launch fuel budget estimation account for the deterministic transfer and drift orbit maneuver requirements. After arrival on station, the calculation of satellite lifetime should be based on the estimation of remaining fuel and an assessment of actual performance. These estimations stem from the proper algorithms to produce the prediction of satellite lifetime. This paper concentrates on the fuel estimation method studied for calculation of the propellant budget using the given algorithms. Applications of this method are discussed for a communication and broadcasting satellite.
Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen
2012-10-01
This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over a walking distance.
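The distance-estimation step reduces to counting steps and accumulating step lengths. The sketch below assumes a fixed per-step length, whereas the paper estimates step length from the gait phase information; the record structure and names are illustrative:

```python
def walking_distance(gait_cycles, step_length_m=0.7):
    """Estimate step count and walking distance from a list of gait-cycle
    records, each a dict of phase durations in seconds (stance, push-off,
    swing, heel-strike). Every complete cycle containing a heel-strike
    phase counts as one step. step_length_m is a hypothetical fixed
    per-step length used here for illustration only."""
    steps = sum(1 for cycle in gait_cycles if "heel_strike" in cycle)
    return steps, steps * step_length_m
```

In a real system, the per-step length would itself be predicted from the phase durations before being summed.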
Directory of Open Access Journals (Sweden)
Hendra Gunawan
2014-06-01
Full Text Available http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models, under the assumption that the free-air anomaly consists of a topographic effect, an intracrustal effect, and an isostatic compensation. Based on the simulation results, Bouguer density estimates were then investigated for a 2005 gravity survey of the La Soufriere Volcano area, Guadeloupe (Antilles Islands). The Bouguer density based on the Parasnis approach is 2.71 g/cm³ for the whole area, except the edifice area, where the average topography density estimate is 2.21 g/cm³; Bouguer density estimates from a previous gravity survey of 1975 are 2.67 g/cm³. The Bouguer density in La Soufriere Volcano was estimated with an uncertainty of 0.1 g/cm³. For the studied area, the density deduced from refraction seismic data is coherent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
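The Nettleton approach can be sketched as a one-dimensional search for the density that minimises the correlation between the Bouguer anomaly and topography, using the infinite-slab Bouguer correction 0.04193·ρ·h (mGal, with ρ in g/cm³ and h in m). A generic sketch, not the paper's processing chain:

```python
import numpy as np

def nettleton_density(elevation_m, free_air_mgal, candidate_densities):
    """Nettleton approach: choose the topographic (Bouguer) density that
    minimises |correlation| between the Bouguer anomaly and topography."""
    best_rho, best_corr = None, np.inf
    for rho in candidate_densities:
        bouguer = free_air_mgal - 0.04193 * rho * elevation_m  # slab correction
        corr = abs(np.corrcoef(bouguer, elevation_m)[0, 1])
        if corr < best_corr:
            best_rho, best_corr = rho, corr
    return best_rho
```

On synthetic data built with a known density, the search recovers that density up to the noise level.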
Directory of Open Access Journals (Sweden)
Iman Yousefi
2015-01-01
Full Text Available This paper presents parameter estimation of a Permanent Magnet Synchronous Motor (PMSM) using a combinatorial algorithm. A nonlinear fourth-order state space model of the PMSM is selected. This model is rewritten in linear regression form without linearization. Noise is imposed on the system to provide a realistic condition, and then the combinatorial Orthogonal Projection Algorithm and Recursive Least Squares (OPA&RLS) method is applied to the system in the linear regression form. Results of this method are compared to the Orthogonal Projection Algorithm (OPA) and Recursive Least Squares (RLS) methods to validate the feasibility of the proposed method. Simulation results validate the efficacy of the proposed algorithm.
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate the unknown parameter under an unreliable network which allows new sensors to join and old sensors to leave, and can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid the interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm. Copyright © 2015 Elsevier Ltd. All rights reserved.
Renewable Energy Power Generation Estimation Using Consensus Algorithm
Ahmad, Jehanzeb; Najm-ul-Islam, M.; Ahmed, Salman
2017-08-01
At the small-consumer level, photovoltaic (PV) panel based grid-tied systems are the most common form of Distributed Energy Resources (DER). Unlike wind, which is suitable only for selected locations, PV panels can generate electricity almost anywhere. Pakistan is currently one of the most energy-deficient countries in the world. To mitigate this shortage, the Government has recently announced a policy of net-metering for residential consumers. After widespread adoption of DERs, one of the issues faced by load management centers will be obtaining an accurate estimate of the amount of electricity being injected into the grid at any given time through these DERs. This becomes a critical issue once DER penetration increases beyond a certain limit. Grid stability and management of harmonics become important considerations where electricity is injected at the distribution level through solid state controllers instead of rotating machinery. This paper presents a solution using graph theoretic methods for estimating the total electricity being injected into the grid over a widespread geographical area. An agent-based consensus approach for distributed computation is used to provide an estimate under varying generation conditions.
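A standard average-consensus iteration, in which each agent repeatedly moves toward its neighbours' states, illustrates how every agent can estimate the network-wide total injection without a central coordinator. This is a generic sketch of the consensus idea, not the paper's specific algorithm:

```python
import numpy as np

def consensus_total(local_kw, adjacency, step=0.2, iters=200):
    """Each agent i holds its local PV generation local_kw[i] and repeatedly
    averages with its neighbours (symmetric adjacency matrix). States
    converge to the network-wide mean, so every agent can estimate the
    total injection as n * state."""
    x = np.asarray(local_kw, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    n = len(x)
    deg = A.sum(axis=1)
    for _ in range(iters):
        x = x + step * (A @ x - deg * x)   # graph-Laplacian consensus update
    return n * x                            # every entry approximates the true total
```

For a symmetric adjacency matrix the update conserves the sum of the states, which is what makes the total recoverable at every node.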
Estimation of current density distribution under electrodes for external defibrillation
Directory of Open Access Journals (Sweden)
Papazov Sava P
2002-12-01
Full Text Available Abstract Background Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators, presents new demanding requirements for the structure of electrodes. Method and Results Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise.
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved very suitable for the optimization of highly nonlinear problems with many variables, and their robustness makes them advantageous for parameter identification of fermentation models. A comparison between simple, modified, and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. All the algorithms considered converged to very similar cost values, but the modified algorithm was several times faster than the other two.
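A minimal real-coded genetic algorithm for parameter estimation, with elitism, truncation selection, blend crossover, and Gaussian mutation, can be sketched as follows. The operators and settings are generic illustrations, not those of the simple, modified, or multi-population variants compared in the paper:

```python
import random

def ga_estimate(residual, bounds, pop_size=50, gens=100, seed=7):
    """Minimal real-coded GA: rank the population by sum-of-squares error,
    keep the elite, and breed the rest by blend crossover of two parents
    drawn from the better half, with Gaussian mutation."""
    rng = random.Random(seed)

    def cost(p):
        return sum(r * r for r in residual(p))

    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        P.sort(key=cost)
        elite, parents = P[:2], P[:pop_size // 2]   # elitism + truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # blend crossover: child genes sampled between the parents
            c = [ai + rng.random() * (bi - ai) for ai, bi in zip(a, b)]
            for k, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.2:               # Gaussian mutation
                    c[k] += rng.gauss(0.0, 0.05 * (hi - lo))
                c[k] = min(max(c[k], lo), hi)        # clip to the bounds
            children.append(c)
        P = elite + children
    return min(P, key=cost)
```

Fitting a two-parameter linear model this way recovers the true parameters to within a small tolerance, illustrating the use of a GA where the residuals could equally come from integrating a nonlinear fermentation model.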
Experimental Results of Novel DoA Estimation Algorithms for Compact Reconfigurable Antennas
Directory of Open Access Journals (Sweden)
Henna Paaso
2017-01-01
Full Text Available Reconfigurable antenna systems have gained much attention for potential use in next generation wireless systems. However, conventional direction-of-arrival (DoA) estimation algorithms for antenna arrays cannot be used directly in reconfigurable antennas due to the different design of the antennas. In this paper, we present an adjacent pattern power ratio (APPR) algorithm for two-port composite right/left-handed (CRLH) reconfigurable leaky-wave antennas (LWAs). Additionally, we compare the performance of the APPR algorithm and LWA-based MUSIC algorithms. We study how the computational complexity and the performance of the algorithms depend on the number of selected radiation patterns. In addition, we evaluate the performance of the APPR and MUSIC algorithms with numerical simulations as well as with real-world indoor measurements having both line-of-sight and non-line-of-sight components. Our performance evaluations show that the DoA estimates are in considerably good agreement with the real DoAs, especially with the APPR algorithm. In summary, the APPR and MUSIC algorithms for DoA estimation, along with the planar and compact LWA layout, can be a valuable solution to enhance the performance of wireless communication in next generation systems.
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.
Directory of Open Access Journals (Sweden)
Xiao-Lin Wu
Full Text Available Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus
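Locus-averaged Shannon entropy (LASE) is simply the mean per-locus entropy of the allele frequencies of a candidate SNP set. A sketch assuming biallelic loci (the function name is illustrative):

```python
import math

def locus_averaged_entropy(allele_freqs):
    """Locus-averaged Shannon entropy (LASE) of a candidate SNP set:
    the mean per-locus binary entropy of the allele frequencies."""
    def h(p):
        if p in (0.0, 1.0):
            return 0.0   # fixed locus carries no information
        return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))
    return sum(h(p) for p in allele_freqs) / len(allele_freqs)
```

A chip-design objective would maximise this quantity (possibly adjusted for spacing uniformity) over candidate SNP subsets; a locus at frequency 0.5 contributes the maximum of 1 bit.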
DEFF Research Database (Denmark)
Shutin, Dmitriy; Fleury, Bernard Henri
2011-01-01
In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless … channels. The application context of the algorithm considered in this contribution is parameter estimation from channel sounding measurements for radio channel modeling purposes. The new sparse VB-SAGE algorithm extends the classical SAGE algorithm in two respects: i) by monotonically minimizing … parametric sparsity priors for the weights of the multipath components. We revisit the Gaussian sparsity priors within the sparse VB-SAGE framework and extend the results by considering Laplace priors. The structure of the VB-SAGE algorithm allows for an analytical stability analysis of the update expression …
Comparison of breast percent density estimation from raw versus processed digital mammograms
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.
2012-01-01
Target tracking is an important application of wireless sensor networks. The network's ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments; therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, among the three evaluated density control algorithms, OGDC is the best option. Although the choice of the density control algorithm has little impact on the tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329
DEFF Research Database (Denmark)
Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J.
2014-01-01
Metal halide ammines have great potential as a future, high-density energy carrier in vehicles. So far known materials, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive, fuel cell applications, because the release of ammonia is a multi-step reaction, requiring too much heat … electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) – almost 27,000 combinations – and have identified novel mixtures with significantly improved storage …
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.
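For contrast with AKDE, the conventional estimator it generalizes can be sketched in a few lines. This is a generic Gaussian KDE under the IID assumption the abstract criticizes, not the autocorrelated estimator derived in the paper.

```python
import math

def gaussian_kde(samples, x, bandwidth):
    """Conventional Gaussian kernel density estimate at x; assumes the
    samples are IID -- the assumption autocorrelated tracking data violate."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) \
        / (n * bandwidth * math.sqrt(2.0 * math.pi))

print(round(gaussian_kde([0.0], 0.0, 1.0), 5))  # 0.39894, i.e. 1/sqrt(2*pi)
```

When relocations are strongly autocorrelated, the effective number of independent samples is far smaller than `n`, which is why this estimator underestimates home-range area.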
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different
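The DTW distance at the core of DTWimpute is the standard dynamic-programming recurrence; a minimal sketch (not the authors' Perl or C++ code) might look like this:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two expression profiles."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 3]))  # 0.0
print(dtw_distance([1, 2, 3], [2, 3, 4]))  # 2.0
```

Unlike Euclidean distance, DTW tolerates temporal shifts and stretches between profiles, which is why it suits the selection of candidate profiles for time series imputation.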
Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems: Preprint
Energy Technology Data Exchange (ETDEWEB)
Wang, Dexin; Yang, Liuqing; Florita, Anthony; Alam, S.M. Shafiul; Elgindy, Tarek; Hodge, Bri-Mathias
2016-08-01
The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral clustering based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
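A toy sketch of the spectral idea behind automatic regionalization (illustrative only; the paper's three AR algorithms are more elaborate): partition a bus-connectivity graph by the sign pattern of the Fiedler vector, i.e. the eigenvector of the second-smallest eigenvalue of the graph Laplacian.

```python
import numpy as np

def two_region_partition(adj):
    """Bipartition a bus-connectivity graph via the sign pattern of the
    Fiedler vector (2nd-smallest eigenvector of the graph Laplacian)."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)     # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]
    return fiedler >= 0                  # boolean region labels

# Two triangles (buses 0-2 and 3-5) joined by a single weak tie 2-3.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
labels = two_region_partition(adj)
print(labels[:3], labels[3:])  # the two triangles fall in different regions
```

Cutting along the Fiedler vector's sign change removes few tie lines relative to the internal connectivity of each region, which is the property a good DSE regionalization needs.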
Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue
Directory of Open Access Journals (Sweden)
Chih-Feng Chao
2015-01-01
Full Text Available Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, and these images typically have a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector; it is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
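The IFSA baseline is essentially exhaustive block matching; a minimal sum-of-absolute-differences (SAD) full search might look like the sketch below (a generic illustration, not the paper's implementation; the firefly variant would replace the exhaustive loop with a guided sampling of a few candidate points).

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two image blocks."""
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def full_search_motion(prev, curr, top, left, size=4, radius=2):
    """Full search: find the displacement (dy, dx) into the previous frame
    that minimizes SAD against the current block, within a search radius."""
    ref = curr[top:top + size, left:left + size]
    best, best_vec = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + size <= prev.shape[0] and 0 <= x and x + size <= prev.shape[1]:
                cost = sad(prev[y:y + size, x:x + size], ref)
                if best is None or cost < best:
                    best, best_vec = cost, (dy, dx)
    return best_vec

# Shift a random texture by one pixel down and right; the estimated
# motion vector recovers that displacement.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (16, 16), dtype=np.uint8)
curr = np.zeros_like(prev)
curr[1:, 1:] = prev[:-1, :-1]
print(full_search_motion(prev, curr, 6, 6))  # (-1, -1)
```

The full search evaluates every candidate in the window, which is robust but costly; metaheuristics such as the firefly algorithm aim for the same minimum with far fewer SAD evaluations.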
An algorithm to estimate the volume of the thyroid lesions using SPECT
International Nuclear Information System (INIS)
Pina, Jorge Luiz Soares de; Mello, Rossana Corbo de; Rebelo, Ana Maria
2000-01-01
An algorithm was developed to estimate the volume of the thyroid and its functioning lesions, that is, those which capture iodine. This estimate is achieved by the use of SPECT, Single Photon Emission Computed Tomography. The algorithm was written in an extended PASCAL language subset and runs on the Siemens ICON System, a special Macintosh environment that controls tomographic image acquisition and processing. Although it was developed for the Siemens DIACAN gamma camera, the algorithm can be easily adapted for the ECAN camera. These two camera models are among the most common ones used in nuclear medicine in Brazil nowadays. A phantom study was used to validate the algorithm and showed that, with a threshold of 42% of the maximum pixel intensity of the images, it is possible to estimate the volume of the phantoms with an error of 10% in the range of 30 to 70 ml. (author)
Directory of Open Access Journals (Sweden)
Arvind Sharma
2016-01-01
Full Text Available Many techniques in data mining and its subfield, spatial data mining, aim to understand relationships between data objects. Data objects associated with spatial features are called spatial databases. These relationships can be used for prediction and trend detection between spatial and nonspatial objects for social and scientific purposes. Huge data sets may be collected from different sources such as satellite images, X-rays, medical images, traffic cameras, and GIS systems. The primary purpose of this paper is to handle this large amount of data and establish relationships among the objects that yield useful results. The paper describes how spatial data differs from other kinds of data sets and how it is refined to produce useful results and trends for geographic information systems and the spatial data mining process. A new improved clustering algorithm is designed, because clustering plays an indispensable role in spatial data mining. Clustering methods are useful in various fields of human life such as GIS (Geographic Information System), GPS (Global Positioning System), weather forecasting, air traffic control, water treatment, site selection, cost estimation, planning of rural and urban areas, remote sensing, and VLSI design. This paper presents a study of various clustering methods and algorithms and an improved DBSCAN algorithm, IDBSCAN (Improved Density-Based Spatial Clustering of Applications with Noise). The algorithm adds several important attributes that generate better clusters from existing data sets than other methods do.
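The baseline DBSCAN that IDBSCAN improves on can be sketched as follows: a minimal, brute-force version, where `eps` and `min_pts` are DBSCAN's usual neighborhood radius and density threshold.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                     # provisionally noise
            continue
        cluster += 1                           # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster            # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:        # j is a core point: expand further
                queue.extend(j_seeds)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))  # [0, 0, 0, 1, 1, -1]
```

Density-based clustering like this finds arbitrarily shaped clusters and flags outliers, which is why it is a natural fit for spatial databases; improved variants adjust how `eps` and the expansion step behave on data of varying density.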
Density Estimation in Several Populations With Uncertain Population Membership
Ma, Yanyuan; Hart, Jeffrey D.; Carroll, Raymond J.
2011-01-01
… sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate …
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two models of the distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation, and kernel density estimation, concluding that the Gaussian kernel density estimate better imitates the bimodal or multimodal distribution of the recovery rates of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Thus, using kernel density estimation to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
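To illustrate the bimodality argument: a Gaussian kernel density estimate over a made-up bimodal sample of recovery rates keeps both modes, whereas any Beta density has at most one interior mode. The sample values and bandwidth below are invented for illustration, not Moody's data.

```python
import math

def gkde(samples, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) \
        / (len(samples) * h * math.sqrt(2.0 * math.pi))

# Hypothetical recovery rates clustered near 10% and 80%.
rates = [0.08, 0.10, 0.12, 0.11, 0.09, 0.78, 0.82, 0.80, 0.79, 0.81]
lo, mid, hi = (gkde(rates, x, 0.05) for x in (0.10, 0.45, 0.80))
print(lo > mid and hi > mid)  # True: the estimate keeps both modes
```

A Beta fit to the same sample would smear its mass into a single hump between the two clusters, understating the probability of both very low and very high recoveries.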
Application of Density Estimation Methods to Datasets from a Glider
2014-09-30
humpback and sperm whales as well as different dolphin species. OBJECTIVES The objective of this research is to extend existing methods for cetacean … estimation from single sensor datasets. Required steps for a cue counting approach, where a cue has been defined as a clicking event (Küsel et al., 2011), to …
Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms
P.A.N. Bosman (Peter); D. Thierens (Dirk)
2007-01-01
Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we …
Comparing algorithms for estimating foliar biomass of conifers in the Pacific Northwest
Crystal L. Raymond; Donald. McKenzie
2013-01-01
Accurate estimates of foliar biomass (FB) are important for quantifying carbon storage in forest ecosystems, but FB is not always reported in regional or national inventories. Foliar biomass also drives key ecological processes in ecosystem models. Published algorithms for estimating FB in conifer species of the Pacific Northwest can yield significantly different…
Density functional theory and evolution algorithm calculations of elastic properties of AlON
Energy Technology Data Exchange (ETDEWEB)
Batyrev, I. G.; Taylor, D. E.; Gazonas, G. A.; McCauley, J. W. [U.S. Army Research Laboratory, Aberdeen Proving Ground, Maryland 21005 (United States)
2014-01-14
Different models for aluminum oxynitride (AlON) were calculated using density functional theory and optimized using an evolutionary algorithm. Evolutionary algorithm and density functional theory (DFT) calculations starting from several models of AlON with different Al or O vacancy locations and different positions for the N atoms relative to the vacancy were carried out. The results show that the constant anion model [McCauley et al., J. Eur. Ceram. Soc. 29(2), 223 (2009)] with a random distribution of N atoms not adjacent to the Al vacancy has the lowest energy configuration. The lowest energy structure is in a reasonable agreement with experimental X-ray diffraction spectra. The optimized structure of a 55 atom unit cell was used to construct 220 and 440 atom models for simulation cells using DFT with a Gaussian basis set. Cubic elastic constant predictions were found to approach the experimentally determined AlON single crystal elastic constants as the model size increased from 55 to 440 atoms. The pressure dependence of the elastic constants found from simulated stress-strain relations were in overall agreement with experimental measurements of polycrystalline and single crystal AlON. Calculated IR intensity and Raman spectra are compared with available experimental data.
International Nuclear Information System (INIS)
Frink, L.J.D.; Salinger, A.G.
2000-01-01
Fluids adsorbed near surfaces, near macromolecules, and in porous materials are inhomogeneous, exhibiting spatially varying density distributions. This inhomogeneity in the fluid plays an important role in controlling a wide variety of complex physical phenomena including wetting, self-assembly, corrosion, and molecular recognition. One of the key methods for studying the properties of inhomogeneous fluids in simple geometries has been density functional theory (DFT). However, there has been a conspicuous lack of calculations in complex two- and three-dimensional geometries. The computational difficulty arises from the need to perform nested integrals that are due to nonlocal terms in the free energy functional. These integral equations are expensive both in evaluation time and in memory requirements; however, the expense can be mitigated by intelligent algorithms and the use of parallel computers. This paper details the efforts to develop efficient numerical algorithms so that nonlocal DFT calculations in complex geometries that require two or three dimensions can be performed. The success of this implementation will enable the study of solvation effects at heterogeneous surfaces, in zeolites, in solvated (bio)polymers, and in colloidal suspensions
Directory of Open Access Journals (Sweden)
Ted W. Sammis
2013-09-01
Full Text Available Net radiation is a key component of the energy balance, whose estimation accuracy has an impact on energy flux estimates from satellite data. In typical remote sensing evapotranspiration (ET) algorithms, the outgoing shortwave and longwave components of net radiation are obtained from remote sensing data, while the incoming shortwave (RS) and longwave (RL) components are typically estimated from weather data using empirical equations. This study evaluates the accuracy of empirical equations commonly used in remote sensing ET algorithms for estimating RS and RL radiation. Evaluation is carried out through comparison of estimates and observations at five sites that represent different climatic regions from humid to arid. Results reveal that (1) both RS and RL estimates from all evaluated equations correlate well with observations (R2 ≥ 0.92); (2) RS estimating equations tend to overestimate, especially at higher values; (3) RL estimating equations tend to give more biased values in arid and semi-arid regions; (4) a model that parameterizes the diffuse component of radiation using two clearness indices and a simple model that assumes a linear increase of atmospheric transmissivity with elevation give better RS estimates; and (5) mean relative absolute errors in the net radiation (Rn) estimates caused by the use of RS and RL estimating equations vary from 10% to 22%. This study suggests that Rn estimates using recommended incoming radiation estimating equations could improve ET estimates.
Directory of Open Access Journals (Sweden)
Hayley Evers-King
2017-08-01
Full Text Available Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. Providing accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (highest performance was observed from the algorithms of Loisel et al., 2002 and Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, yielding an estimated mixed-layer integrated global stock of between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending of region-specific algorithms may provide the best way forward for generating global POC products.
Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling
2018-01-01
We propose a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extract a number of raw patches from a given noisy image and take the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate is obtained directly with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, offering a good compromise between speed and accuracy.
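The first (eigenvalue) step can be sketched as below. The patch size, test image, and random seed are assumptions for illustration; the abstract does not specify them, and the trained rectification mapping of the second step is omitted.

```python
import numpy as np

def pca_noise_level(image, patch=8, n_patches=3000, seed=0):
    """Preliminary PCA-based noise level estimate: the smallest eigenvalue
    of the covariance of random patches approximates the noise variance."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - patch + 1, n_patches)
    cols = rng.integers(0, w - patch + 1, n_patches)
    x = np.stack([image[r:r + patch, c:c + patch].ravel()
                  for r, c in zip(rows, cols)])
    cov = np.cov(x, rowvar=False)
    smallest = np.linalg.eigvalsh(cov)[0]   # eigvalsh sorts ascending
    return float(np.sqrt(max(smallest, 0.0)))

# A smooth ramp image (its patches span a low-dimensional subspace) plus
# Gaussian noise of sigma = 10. The raw eigenvalue estimate lands near, but
# slightly below, the true sigma -- the kind of bias a trained rectification
# mapping would then correct.
rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0.0, 255.0, 128), np.ones(128))
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
print(round(pca_noise_level(noisy), 2))
```

The smallest eigenvalue works as a noise proxy because natural-image patches concentrate in a low-dimensional subspace, leaving the trailing eigen-directions dominated by noise alone.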
Mesin, Luca
2015-02-01
Developing a real-time method to estimate generation, extinction and propagation of muscle fibre action potentials from bi-dimensional, high-density surface electromyogram (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high-density surface EMG (inter-electrode distance equal to 5 mm) from finite-length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an epoch of EMG of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction, and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Yun, Hyong Geun; Shin, Kyo Chul; Hun, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo
2004-01-01
In vivo dosimetry is very important for quality assurance in high-energy radiation treatment. Measurement of transmission dose is a new method of in vivo dosimetry which is noninvasive and easy to perform daily. This study develops a tumor dose estimation algorithm using measured transmission dose for open radiation fields. For basic beam data, transmission dose was measured with various field sizes (FS) of square radiation fields, phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. Source-to-chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer-type ion chamber. By regression analysis of the measured basic beam data, a transmission dose estimation algorithm was developed. Accuracy of the algorithm was tested with flat solid phantoms of various thicknesses in various settings of rectangular fields and various PCDs. In the developed algorithm, transmission dose is expressed as a quadratic function of log(A/P) (where A/P is the area-perimeter ratio), and the coefficients of the quadratic function are in turn expressed as third-order functions of PCD. The developed algorithm estimates the radiation dose with errors within ±0.5% for open square fields and within ±1.0% for open elongated radiation fields. The algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings of high-energy radiation treatment. (author)
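The functional form described above can be sketched with a synthetic fit. The coefficients and the range of area-perimeter ratios below are invented for illustration; the actual values come from the measured beam data.

```python
import numpy as np

# Hypothetical illustration: transmission dose modeled as a quadratic
# function of log(A/P); np.polyfit recovers the assumed coefficients.
true_coeffs = [0.4, -1.1, 2.0]            # assumed, for illustration only
ap_ratio = np.linspace(0.5, 3.0, 20)      # area-perimeter ratios
x = np.log(ap_ratio)
dose = np.polyval(true_coeffs, x)         # noiseless "measurements"
fitted = np.polyfit(x, dose, 2)           # quadratic regression in log(A/P)
print(np.allclose(fitted, true_coeffs))   # True
```

In the full algorithm each of the three quadratic coefficients would itself be fitted as a third-order polynomial in PCD, giving dose as a nested polynomial in log(A/P) and PCD.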
A Streaming Algorithm for Online Estimation of Temporal and Spatial Extent of Delays
Directory of Open Access Journals (Sweden)
Kittipong Hiriotappa
2017-01-01
Full Text Available Knowing traffic congestion and its impact on travel time in advance is vital for proactive travel planning as well as advanced traffic management. This paper proposes a streaming algorithm to estimate temporal and spatial extent of delays online which can be deployed with roadside sensors. First, the proposed algorithm uses streaming input from individual sensors to detect a deviation from normal traffic patterns, referred to as anomalies, which is used as an early indication of delay occurrence. Then, a group of consecutive sensors that detect anomalies are used to temporally and spatially estimate extent of delay associated with the detected anomalies. Performance evaluations are conducted using a real-world data set collected by roadside sensors in Bangkok, Thailand, and the NGSIM data set collected in California, USA. Using NGSIM data, it is shown qualitatively that the proposed algorithm can detect consecutive occurrences of shockwaves and estimate their associated delays. Then, using a data set from Thailand, it is shown quantitatively that the proposed algorithm can detect and estimate delays associated with both recurring congestion and incident-induced nonrecurring congestion. The proposed algorithm also outperforms the previously proposed streaming algorithm.
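The deviation-detection step can be sketched with a running mean and variance plus a z-score threshold. This is a generic streaming sketch using Welford's online update, not the authors' algorithm, and the speed readings are invented.

```python
class StreamingAnomalyDetector:
    """Flags sensor readings that deviate from the running traffic pattern
    (Welford's online mean/variance, z-score threshold)."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2, self.threshold = 0, 0.0, 0.0, threshold

    def update(self, x):
        """Feed one reading; return True if it is anomalous."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        # Welford's online update of mean and sum of squared deviations
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = StreamingAnomalyDetector()
readings = [60, 62, 61, 59, 60, 61, 20]   # speeds; the last one signals a delay
flags = [det.update(r) for r in readings]
print(flags)  # only the final reading is flagged
```

Keeping only a running mean and variance gives constant memory per sensor, which is what makes such a detector deployable on roadside hardware; grouping consecutive sensors that flag anomalies then bounds the delay in space and time.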
Adjusting forest density estimates for surveyor bias in historical tree surveys
Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He
2012-01-01
The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...
Reliability and precision of pellet-group counts for estimating landscape-level deer density
David S. deCalesta
2013-01-01
This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...
Maadooliat, Mehdi
2015-10-21
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.
2015-01-01
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
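This is not the paper's spline method, but a minimal illustration of the constraint it emphasizes: an angular density must be smooth across the ±180° boundary. A simple way to get that with a kernel estimator is to replicate the sample one period to each side:

```python
# Wrapped kernel density estimate for periodic (angular) data.
import numpy as np

def periodic_kde(sample_deg, query_deg, bandwidth=15.0):
    """Gaussian KDE on angles in degrees, wrapped at +/-180."""
    s = np.asarray(sample_deg, dtype=float)
    # Replicate the data one full period to each side.
    s = np.concatenate([s - 360.0, s, s + 360.0])
    q = np.asarray(query_deg, dtype=float)[:, None]
    z = (q - s[None, :]) / bandwidth
    dens = np.exp(-0.5 * z**2).sum(axis=1)
    dens /= len(sample_deg) * bandwidth * np.sqrt(2 * np.pi)
    return dens

angles = [-179.0, 179.0, 178.0, -178.0]   # a cluster straddling +/-180
d = periodic_kde(angles, [-180.0, 180.0, 0.0])
```

A naive (unwrapped) estimator would split this cluster in two; the wrapped version gives identical density at -180° and +180°, which is the behavior the paper's cross-triangle smoothness constraints enforce for its spline basis.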
Adaptive Kalman filter based state of charge estimation algorithm for lithium-ion battery
International Nuclear Information System (INIS)
Zheng Hong; Liu Xu; Wei Min
2015-01-01
In order to improve the accuracy of battery state of charge (SOC) estimation, this paper takes a lithium-ion battery as an example to study an adaptive Kalman filter based SOC estimation algorithm. First, a second-order battery system model is introduced, into which temperature and charge rate are incorporated. The battery SOC is then estimated with an adaptive Kalman filter whose model parameters account for temperature and charge rate. Numerical simulation verifies that, in the ideal case, the accuracy of SOC estimation is enhanced by adding these two elements. Finally, actual road conditions are simulated with ADVISOR, and the simulation results show that the proposed method improves the accuracy of battery SOC estimation under actual road conditions, greatly expanding its scope of application in engineering. (paper)
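A scalar sketch of the innovation-based adaptive Kalman idea follows. The paper uses a second-order battery model; here SOC is propagated by coulomb counting and corrected by a linearized voltage measurement whose noise covariance R is re-estimated from recent innovations. The OCV slope, noise levels, and window length are all assumptions:

```python
# Scalar adaptive Kalman filter sketch for SOC (illustrative model).
import numpy as np

def adaptive_kf_soc(currents, voltages, dt=1.0, capacity=3600.0,
                    h=0.5, v0=3.5, q=1e-8):
    """h, v0: slope/offset of an assumed linear OCV(SOC) curve."""
    soc, p, r = 0.5, 1e-2, 1e-2
    window, innovations = 20, []
    out = []
    for i_k, v_k in zip(currents, voltages):
        # Predict: coulomb counting (discharge current positive).
        soc -= i_k * dt / capacity
        p += q
        # Update with the voltage measurement v = h*soc + v0 + noise.
        innov = v_k - (h * soc + v0)
        innovations.append(innov)
        if len(innovations) > window:
            innovations.pop(0)
        # Adapt R from the recent innovation mean square.
        c = np.mean(np.square(innovations))
        r = max(c - h * p * h, 1e-6)
        s = h * p * h + r
        k = p * h / s
        soc += k * innov
        p *= (1 - k * h)
        out.append(soc)
    return out

# Simulated 1 A discharge with a noisy voltage sensor.
rng = np.random.default_rng(1)
true_soc = 0.9
cur, vol = [], []
for _ in range(300):
    true_soc -= 1.0 / 3600.0
    cur.append(1.0)
    vol.append(0.5 * true_soc + 3.5 + rng.normal(0, 0.005))
est = adaptive_kf_soc(cur, vol)
```

Starting from a deliberately wrong initial SOC of 0.5, the filter converges toward the true value; the adaptive R keeps the gain consistent as the innovation statistics change.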
Energy Technology Data Exchange (ETDEWEB)
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
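The positivity guarantee can be seen from the standard MLE parameterization ρ = T†T / tr(T†T), which is positive semidefinite with unit trace for any complex matrix T. (The actual protocol optimizes T against measured data; here T is random, purely to check the construction.)

```python
# Physicality-by-construction check for the MLE parameterization.
import numpy as np

def density_from_T(T):
    rho = T.conj().T @ T          # positive semidefinite by construction
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = density_from_T(T)

eigvals = np.linalg.eigvalsh(rho)
```

Whatever T the likelihood optimizer visits, the corresponding ρ is Hermitian, unit trace, and has nonnegative eigenvalues, which is exactly what a naive linear inversion of noisy measurements fails to guarantee.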
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, dating from well before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. Denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and an indispensable processing step. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our choice of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. Experimental results show that the proposed method yields good denoising results.
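A reduced illustration of the MAP idea: with a Gaussian noise likelihood and a Laplacian prior on a wavelet coefficient, the MAP estimate is soft thresholding with threshold σ²/b. The paper's generalized-Gamma variance prior yields a more flexible, data-driven shrinkage rule; this is the textbook special case:

```python
# MAP shrinkage under a Laplacian prior = soft thresholding.
import numpy as np

def map_soft_threshold(y, sigma=1.0, b=2.0):
    """MAP estimate of a coefficient observed as y = x + N(0, sigma^2),
    with Laplacian prior p(x) ~ exp(-|x|/b)."""
    t = sigma**2 / b
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

noisy = np.array([-3.0, -0.3, 0.2, 4.0])
den = map_soft_threshold(noisy)
```

Small coefficients (likely pure noise) are zeroed while large ones are shrunk toward zero by the threshold, which is the qualitative behavior any MAP shrinkage rule of this family shares.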
Volumetric breast density estimation from full-field digital mammograms.
Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.
2006-01-01
A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast
Estimating the chance of success in IVF treatment using a ranking algorithm.
Güvenir, H Altay; Misirli, Gizem; Dilbaz, Serdar; Ozdegirmenci, Ozlem; Demir, Berfu; Dilbaz, Berna
2015-09-01
In medicine, estimating the chance of success of a treatment is important in deciding whether to begin it. This paper focuses on the domain of in vitro fertilization (IVF), where estimating the outcome of a treatment is crucial in the decision to proceed, for both the clinicians and the infertile couples. IVF is a stressful and costly process for couples who want to have a baby, and if an initial evaluation indicates a low pregnancy rate, a couple may decide not to start treatment at all. The aim of this study is twofold: first, to develop a technique that can be used to estimate the chance of success for a couple who wants to have a baby, and second, to determine the attributes and the particular values affecting the outcome of IVF treatment. We propose a new technique, called success estimation using a ranking algorithm (SERA), for estimating the success of a treatment using a ranking-based algorithm; the particular ranking algorithm used here is RIMARC. The performance of the new algorithm is compared with two well-known algorithms that assign class probabilities to query instances, the Naïve Bayes classifier and Random Forest. The comparison is done in terms of area under the ROC curve, accuracy, and execution time, using tenfold stratified cross-validation. The results indicate that the proposed SERA algorithm has the potential to be used successfully to estimate the probability of success of a medical treatment.
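The evaluation metric used for the comparison, area under the ROC curve, can be computed directly from ranks (it equals the normalized Mann-Whitney U statistic). A minimal sketch, ignoring ties:

```python
# Rank-based AUC: fraction of (positive, negative) pairs ranked correctly.
import numpy as np

def roc_auc(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U from the rank sum of the positive class.
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

auc = roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

Here three of the four positive-negative pairs are ordered correctly, so the AUC is 0.75; a ranking algorithm like SERA is evaluated on exactly this pairwise-ordering quality rather than on calibrated probabilities.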
Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid
2017-10-01
One of the most important processes in the early stages of construction projects is to estimate the cost involved. This process involves a wide range of uncertainties, which make it a challenging task. Because of unknown issues, the conventional ways to deal with cost estimation are relying on expert experience or looking for similar cases. The current study presents data-driven methods for cost estimation based on artificial neural network (ANN) and regression models. The ANN learning algorithms are Levenberg-Marquardt and Bayesian regularization. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied to a real case, where the input parameters of the models are assigned based on the key issues involved in a spherical tank construction. The results reveal a high correlation between the estimated cost and the real cost, and both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains better estimates than the ANN with the Bayesian regularization learning algorithm (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.
Liu, Z.; Kar, J.; Zeng, S.; Tackett, J. L.; Vaughan, M.; Trepte, C. R.; Omar, A. H.; Hu, Y.; Winker, D. M.
2017-12-01
In the CALIPSO retrieval algorithm, detection of layers in the lidar measurements is followed by their classification as "cloud" or "aerosol" using 5-dimensional probability density functions (PDFs). The five dimensions are the mean attenuated backscatter at 532 nm, the layer-integrated total attenuated color ratio, the mid-layer altitude, the integrated volume depolarization ratio, and latitude. The new version 4 (V4) level 2 (L2) data products, released in November 2016, are the first major revision to the L2 product suite since May 2010. Significant calibration changes in the V4 level 1 data necessitated substantial revisions to the V4 L2 cloud-aerosol discrimination (CAD) algorithm. Accordingly, a new set of PDFs was generated to derive the V4 L2 data products. The V4 CAD algorithm is now applied to layers detected in the stratosphere, where volcanic layers and occasional cloud and smoke layers are observed. Previously, these layers were designated as 'stratospheric' and not further classified. The V4 CAD algorithm is also applied to all layers detected at single-shot (333 m) resolution. In prior data releases, single-shot detections were uniformly classified as clouds. The CAD PDFs used in the earlier releases were generated using a full year (2008) of CALIPSO measurements. Because the CAD algorithm was not applied to stratospheric features, the properties of these layers were not incorporated into the PDFs. When building the V4 PDFs, the 2008 data were augmented with additional data from June 2011, and all stratospheric features were included. The Nabro and Puyehue-Cordon volcanoes erupted in June 2011, and volcanic aerosol layers were observed in the upper troposphere and lower stratosphere in both the northern and southern hemispheres. The June 2011 data thus provide the stratospheric aerosol properties needed for comprehensive PDF generation. In contrast to earlier versions of the PDFs, which were generated based solely on observed distributions, construction of the V4 PDFs considered the
A Study on Fuel Estimation Algorithms for a Geostationary Communication & Broadcasting Satellite
Jong Won Eun
2000-01-01
An algorithm has been developed to calculate the fuel budget for a geostationary communication and broadcasting satellite. It is essential that the pre-launch fuel budget estimate account for the deterministic transfer and drift orbit maneuver requirements. Once on station, the calculation of satellite lifetime should be based on the estimate of remaining fuel and an assessment of actual performance. These estimates stem from the proper algorithms to produce the prediction of satellite lifet...
Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas
Harabi, F.; Akkar, S.; Gharsallah, A.
2016-07-01
Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources through electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation requiring only a few linear operations. The DoA estimation algorithms based on this new approach exhibit good behaviour with lower calculation cost and processing time compared to schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be achieved, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The simulations testify to the high resolution of the developed algorithms and prove the efficiency of the proposed approach.
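The unitary-transformation trick exploited above can be sketched as follows: forward-backward averaging makes the array covariance centro-Hermitian, and the standard sparse unitary matrix Q (as used in the unitary ESPRIT literature; even array size assumed) then maps it to a real matrix, so the eigendecomposition can run entirely in real arithmetic:

```python
# Real-valued transformation of a forward-backward averaged covariance.
import numpy as np

def unitary_q(m):
    """Sparse unitary Q for even m."""
    k = m // 2
    I, J = np.eye(k), np.fliplr(np.eye(k))
    return np.block([[I, 1j * I], [J, -1j * J]]) / np.sqrt(2)

rng = np.random.default_rng(0)
m, n = 4, 200
X = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
R = X @ X.conj().T / n
J = np.fliplr(np.eye(m))

# Forward-backward averaging makes R centro-Hermitian: J R* J = R_fb.
R_fb = 0.5 * (R + J @ R.conj() @ J)
Q = unitary_q(m)
R_real = Q.conj().T @ R_fb @ Q      # real matrix, same eigenvalues
```

Since real eigendecomposition costs roughly a quarter of its complex counterpart, this is one concrete source of the computational savings the abstract reports.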
MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.
2017-01-01
This paper presents a brief synthesis and performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite, in terms of accuracy, convergence time, memory footprint, and computation time. The latter is calculated in two ways: using a personal computer and using the On-Board Computer 750 (OBC 750) used in many SSTL Earth observation missions. This comparative study can serve as a design aid in choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.
An algorithm for 3D target scatterer feature estimation from sparse SAR apertures
Jackson, Julie Ann; Moses, Randolph L.
2009-05-01
We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.
METAPHOR: Probability density estimation for machine learning based photometric redshifts
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDFs. We present the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template-fitting method (Le Phare).
International Nuclear Information System (INIS)
Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol
2013-01-01
In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
Energy Technology Data Exchange (ETDEWEB)
Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)
2013-12-15
In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
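The range-and-bearing computation from a detection box can be sketched with a pinhole camera model. The focal length, image center, and assumed real head-shoulder width below are illustrative values, not the paper's calibration:

```python
# Pinhole-model range and bearing from a head-shoulder detection box.
import math

FOCAL_PX = 600.0        # focal length in pixels (assumed calibration)
REAL_WIDTH_M = 0.45     # assumed physical head-shoulder width
IMAGE_CENTER_X = 320.0  # principal point x (assumed)

def relative_position(box_width_px, box_center_x_px):
    # Similar triangles: pixel width shrinks inversely with distance.
    distance = FOCAL_PX * REAL_WIDTH_M / box_width_px
    # Horizontal offset from the principal point gives the bearing.
    angle = math.atan2(box_center_x_px - IMAGE_CENTER_X, FOCAL_PX)
    return distance, math.degrees(angle)

d, a = relative_position(box_width_px=90.0, box_center_x_px=320.0)
```

A 90-pixel-wide detection centered on the optical axis maps to a person about 3 m away, dead ahead; comparing such estimates against a laser scanner, as the paper does, calibrates the assumed physical width.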
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is one of the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designs a vehicle routing optimization model based on ACO. A vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimations, and improvements of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve rational planning and optimization of the VRP, that the values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is not prone to premature local convergence and has good robustness.
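The ACO mechanics described above can be sketched on a small single-vehicle (TSP) special case of the VRP. The three key parameters are the usual α (pheromone weight), β (heuristic weight), and ρ (evaporation rate); the values below are typical textbook choices, not the paper's tuned ones:

```python
# Compact ACO sketch for a tiny TSP instance.
import random

def aco_tsp(dist, n_ants=10, n_iter=50, alpha=1.0, beta=2.0, rho=0.5):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]      # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Transition rule: pheromone^alpha * (1/distance)^beta.
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporate, then deposit pheromone proportional to tour quality.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for length, tour in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

random.seed(0)
# Four cities at the corners of a unit square: optimal tour length is 4.
dist = [[0, 1, 1.4142135, 1], [1, 0, 1, 1.4142135],
        [1.4142135, 1, 0, 1], [1, 1.4142135, 1, 0]]
tour, length = aco_tsp(dist)
```

The sensitivity the paper reports is visible even here: raising ρ forgets trails faster (more exploration), while raising α concentrates ants on early finds and risks the premature convergence the improved algorithm is said to avoid.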
Volumetric breast density estimation from full-field digital mammograms.
van Engeland, Saskia; Snoeren, Peter R; Huisman, Henkjan; Boetes, Carla; Karssemeijer, Nico
2006-03-01
A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
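The algebraic core of the two-tissue model can be sketched as follows: with effective linear attenuation coefficients for fat and dense tissue and an all-fat reference ray, the dense-tissue thickness along a ray follows from the log ratio of intensities. The coefficient values below are illustrative assumptions, not the paper's empirically derived ones:

```python
# Dense-tissue thickness from a two-component attenuation model.
import math

def dense_thickness(pixel, fat_reference, mu_fat, mu_dense, breast_thickness):
    """Thickness of dense tissue along one ray.

    pixel, fat_reference: detected intensities at the same exposure;
    the reference is the maximum pixel value in the breast, assumed
    to correspond to purely fatty tissue, as in the paper.
    """
    h_d = math.log(fat_reference / pixel) / (mu_dense - mu_fat)
    return min(max(h_d, 0.0), breast_thickness)   # clamp to physical range

# Illustrative coefficients (cm^-1); real values depend on kVp,
# anode/filter combination, and compressed breast thickness.
mu_f, mu_d, H = 0.45, 0.80, 5.0
i0 = 100.0
i_fat = i0 * math.exp(-mu_f * H)                     # all-fat ray
i_mixed = i0 * math.exp(-(mu_f * 3.0 + mu_d * 2.0))  # 2 cm dense tissue

h = dense_thickness(i_mixed, i_fat, mu_f, mu_d, H)
```

Summing this per-pixel thickness times the pixel area over the breast projection gives the dense-tissue volume that the paper validates against MRI segmentations.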
International Nuclear Information System (INIS)
Sundararaman, Ravishankar; Goddard, William A. III; Arias, Tomas A.
2017-01-01
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
Sundararaman, Ravishankar; Goddard, William A.; Arias, Tomas A.
2017-03-01
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
A novel adaptive joint power control algorithm with channel estimation in a CDMA cellular system
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
Joint power control combines the advantages of multi-user detection and power control; it can combat multi-access interference and the near-far problem. A novel adaptive joint power control algorithm with channel estimation in a CDMA cellular system was designed. Simulation results show that the algorithm controls power both quickly and precisely under time-varying conditions. The method is useful for increasing system capacity.
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
Energy Technology Data Exchange (ETDEWEB)
Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time-consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail that estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges and graph structures that evolve over time. * An algorithm for maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
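The weighted-subsample primitive (for static weights) can be sketched with the classic exponential-keys reservoir scheme: keep the k items with the largest keys u**(1/w), which samples without replacement with probability proportional to weight. The report's algorithm additionally handles weights that change after insertion, which this sketch does not:

```python
# Weighted reservoir sampling with exponential keys (A-Res scheme).
import heapq
import random

def weighted_reservoir(stream, k):
    """stream yields (item, weight); returns k weighted-sampled items."""
    heap = []  # min-heap of (key, item); the smallest key is evicted first
    for item, w in stream:
        key = random.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

random.seed(0)
# One item 100x heavier than each of 50 light items.
stream = [("heavy", 100.0)] + [(f"light{i}", 1.0) for i in range(50)]
counts = sum(("heavy" in weighted_reservoir(stream, 5)) for _ in range(200))
```

The heavy item is retained in almost every trial, as its weight dictates; storage stays at k items regardless of stream length, which is the defining streaming-algorithm property discussed above.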
Group-SMA Algorithm Based Joint Estimation of Train Parameter and State
Directory of Open Access Journals (Sweden)
Wei Zheng
2015-03-01
Full Text Available The braking rate and the train arresting operation are important to train braking performance. It is difficult to obtain the states of the train in time because of measurement noise and long calculation times. A Group Stochastic M-algorithm (GSMA), based on the Rao-Blackwellized Particle Filter (RBPF) and the Stochastic M-algorithm (SMA), is proposed in this paper. Compared with RBPF, the GSMA improved the estimation precision of the train braking rate and the control accelerations by 78% and 62%, respectively, while its calculation time was 70% lower than that of the SMA.
2014-09-30
...will be applied also to other species such as the sperm whale (Physeter macrocephalus), whose high source level assures long-range detection and amplifies...improve the accuracy of marine mammal density estimation based on counting echolocation clicks, and will be applicable to density estimates obtained
Directory of Open Access Journals (Sweden)
Kaifeng Yang
2014-01-01
Full Text Available A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages, strictly based on manifold distribution, are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based modeling of this manifold builds a probability distribution model from globally statistical information of the population; however, it does not exploit promising individuals well, which hampers the search and optimization process. An incremental tournament local searcher is therefore designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Furthermore, since ε-dominance is a strategy that yields well-distributed solutions at low computational cost, it is combined here with the incremental tournament local searcher. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.
Yang, Kaifeng; Mu, Li; Yang, Dongdong; Zou, Feng; Wang, Lei; Jiang, Qiaoyong
2014-01-01
A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages, strictly based on manifold distribution, are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based modeling of this manifold builds a probability distribution model from globally statistical information of the population; however, it does not exploit promising individuals well, which hampers the search and optimization process. An incremental tournament local searcher is therefore designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Furthermore, since ε-dominance is a strategy that yields well-distributed solutions at low computational cost, it is combined here with the incremental tournament local searcher. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.
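The ε-dominance relation used above can be sketched in its additive form for minimization (one common variant; the abstract does not specify which definition the paper adopts):

```python
# Additive epsilon-dominance for minimization problems.
def epsilon_dominates(x, y, eps):
    """x, relaxed by eps in every objective, must be no worse than y
    everywhere and strictly better somewhere."""
    relaxed = [xi - eps for xi in x]
    return (all(r <= yi for r, yi in zip(relaxed, y))
            and any(r < yi for r, yi in zip(relaxed, y)))

# A point eps-dominates a slightly worse neighbour within its eps box,
# but not a point that is genuinely better in some objective.
close_win = epsilon_dominates([1.0, 1.0], [1.05, 1.05], 0.1)
trade_off = epsilon_dominates([1.0, 2.0], [0.5, 0.5], 0.1)
```

Archiving only ε-non-dominated points keeps the stored front well spread with at most one representative per ε-box, which is the low-cost diversity mechanism the abstract refers to.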
Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm
Directory of Open Access Journals (Sweden)
Jinsheng Yang
2012-01-01
Full Text Available Recently, scatter cluster models that precisely evaluate the performance of wireless communication systems have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmitted signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for the scatter cluster model with a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, estimation performance improves with the number of receive-array elements and the sample length. Thus, the channel parameter estimation problem for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
Light element nucleosynthesis and estimates of the universal baryon density
International Nuclear Information System (INIS)
Mathews, G.J.; Viola, V.E.
1978-01-01
The present mean universal baryon density, ρ_b, is of interest because it and the Hubble constant determine the curvature of the Universe. The available indicators of ρ_b come from the present deuterium abundance, if it is assumed that ''big-bang'' nucleosynthesis must produce enough D to at least match the abundance of this nuclide in the interstellar medium. An alternative method utilizing the ⁷Li/D ratio is used to evaluate ρ_b. With this method the difficulty associated with the astration process can be essentially canceled from the problem. The results obtained indicate an open Universe with a best guess for ρ_b of 7.1 × 10⁻³¹ g/cm³. 1 figure, 1 table
Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm
International Nuclear Information System (INIS)
Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.
2016-01-01
A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation of the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve it. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms show that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first to use PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation on other chaotic systems.
Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm
Energy Technology Data Exchange (ETDEWEB)
Lazzús, Juan A., E-mail: jlazzus@dfuls.cl; Rivera, Marco; López-Caraballo, Carlos H.
2016-03-11
A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation of the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve it. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms show that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first to use PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation on other chaotic systems.
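As a rough illustration of the PSO component of such a hybrid (the ACO coupling and the Lorenz trajectory-error objective are omitted), a minimal global-best PSO might look like the sketch below; all parameter values are generic illustrative defaults, not those used in the paper:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    # global-best PSO: each particle is pulled toward its personal best
    # and the swarm-wide best position found so far
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

For parameter estimation, `f` would measure the discrepancy between a trajectory simulated with candidate parameters and the observed Lorenz trajectory; here any benchmark function (e.g. the sphere function) serves to exercise the optimizer.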
Directory of Open Access Journals (Sweden)
Manan Gupta
Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates
Directory of Open Access Journals (Sweden)
Bickel David R
2010-01-01
Full Text Available Abstract Background Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable. Results Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable
Application of a Channel Estimation Algorithm to Spectrum Sensing in a Cognitive Radio Context
Directory of Open Access Journals (Sweden)
Vincent Savaux
2014-01-01
Full Text Available This paper deals with spectrum sensing in an orthogonal frequency division multiplexing (OFDM) context, allowing an opportunistic user to detect a vacant spectrum resource in a licensed band. The proposed method is based on an iterative algorithm used for the joint estimation of the noise variance and the frequency selective channel. It can be seen as a second-order detector, since it is performed by means of the minimum mean square error criterion. The main advantage of the proposed algorithm is its capability to perform spectrum sensing, noise variance estimation, and channel estimation in the presence of a signal. Furthermore, the sensing duration is limited to only one OFDM symbol. We theoretically show the convergence of the algorithm, and we derive its analytical detection and false alarm probabilities. Furthermore, we show that the detector is very efficient, even for low SNR values, and is robust against channel uncertainty.
Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences
Directory of Open Access Journals (Sweden)
Guo Bao-long
2004-09-01
Full Text Available Motion estimation and compensation techniques are widely used in video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational load. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computational complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because a small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, and also produces close performance in terms of motion compensation errors.
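To make the search-point counts concrete, here is a toy exhaustive (full-search) SAD matcher: the baseline that fast patterns such as line, square, or diamond search approximate by evaluating only a handful of the (2r+1)² candidates. The synthetic gradient frames and all constants below are purely illustrative, not from the paper:

```python
def sad(ref, cur, bx, by, dx, dy, bs):
    # sum of absolute differences between the current block at (bx, by)
    # and the reference block displaced by (dx, dy)
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(bs) for x in range(bs))

def full_search(ref, cur, bx, by, bs, r):
    # exhaustive search over a (2r+1) x (2r+1) window; fast patterns
    # evaluate far fewer of these candidates (e.g. 9 in the best case)
    best = (float('inf'), 0, 0)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            c = sad(ref, cur, bx, by, dx, dy, bs)
            if c < best[0]:
                best = (c, dx, dy)
    return best[1], best[2]

# synthetic frames: the current frame is the reference shifted by (2, 1)
f = lambda x, y: 7 * x + 13 * y
ref = [[f(x, y) for x in range(32)] for y in range(32)]
cur = [[f(x + 2, y + 1) for x in range(32)] for y in range(32)]
mv = full_search(ref, cur, 16, 16, 8, 4)  # recovers the motion vector (2, 1)
```

With r = 4 the full search evaluates 81 candidates per block; the line-plus-square strategy described above reduces this to roughly 9–20 while usually landing on the same minimum.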
International Nuclear Information System (INIS)
Humbert, Ludovic; Hazrati Marangalou, Javad; Rietbergen, Bert van; Río Barquero, Luis Miguel del; Lenthe, G. Harry van
2016-01-01
Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the cortical thickness and
Energy Technology Data Exchange (ETDEWEB)
Humbert, Ludovic, E-mail: ludohumberto@gmail.com [Galgo Medical, Barcelona 08036 (Spain); Hazrati Marangalou, Javad; Rietbergen, Bert van [Orthopaedic Biomechanics, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB (Netherlands); Río Barquero, Luis Miguel del [CETIR Centre Medic, Barcelona 08029 (Spain); Lenthe, G. Harry van [Biomechanics Section, KU Leuven–University of Leuven, Leuven 3001 (Belgium)
2016-04-15
Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the
PDE-Foam - a probability-density estimation method using self-adapting phase-space binning
Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter
2009-01-01
Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space in a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...
Directory of Open Access Journals (Sweden)
Santosh Kumar Singh
2017-06-01
Full Text Available This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Square (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton's law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other according to Newton's laws of gravity and motion. The basic GSA strategy is combined sequentially with the RLS algorithm in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are carried out by testing the proposed algorithm on real-time data obtained from a heavy paper industry. The performance of the proposed algorithm is compared with other recently reported algorithms such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Square (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence, and computational time.
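The RLS half of such a hybrid can be sketched for a single harmonic. This is a generic textbook RLS update, not the GSA-RLS implementation; the signal model y_n ≈ a·cos(ωn) + b·sin(ωn) and all constants are assumptions made for illustration:

```python
import math

def rls_harmonic(samples, omega, lam=0.99, delta=100.0):
    # estimate w = [a, b] in y_n ≈ a*cos(omega*n) + b*sin(omega*n)
    # via recursive least squares with forgetting factor lam
    w = [0.0, 0.0]
    P = [[delta, 0.0], [0.0, delta]]  # inverse-correlation matrix, large init
    for n, y in enumerate(samples):
        x = [math.cos(omega * n), math.sin(omega * n)]  # regressor
        # gain k = P x / (lam + x^T P x); P is symmetric, so x^T P = (P x)^T
        Px = [P[0][0] * x[0] + P[0][1] * x[1],
              P[1][0] * x[0] + P[1][1] * x[1]]
        denom = lam + x[0] * Px[0] + x[1] * Px[1]
        k = [Px[0] / denom, Px[1] / denom]
        e = y - (w[0] * x[0] + w[1] * x[1])  # a-priori prediction error
        w = [w[0] + k[0] * e, w[1] + k[1] * e]
        # P update: P = (P - k x^T P) / lam
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(2)]
             for i in range(2)]
    return w
```

The forgetting factor lam < 1 discounts old samples, which is what lets the weights track a time-varying harmonic; the GSA layer in the paper then adapts parameters the plain RLS cannot.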
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, in terms of estimation accuracy for complex density structures in data streams, computing time, and memory usage. KDE-Track is also demonstrated to promptly track the dynamic density of synthetic and real-world data. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
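The core idea of maintaining density values only at grid points, updating them incrementally per arriving sample, and answering queries by linear interpolation can be sketched as follows. This is a simplified stand-in for KDE-Track (fixed grid, fixed Gaussian bandwidth, no adaptive resampling), not the authors' code:

```python
import math

class StreamKDE:
    # kernel-density values are kept only at m grid points and updated
    # incrementally as samples arrive; queries interpolate linearly
    def __init__(self, lo, hi, m=101, h=0.3):
        self.grid = [lo + (hi - lo) * i / (m - 1) for i in range(m)]
        self.dens = [0.0] * m
        self.h, self.n = h, 0

    def update(self, x):
        self.n += 1
        c = 1.0 / (self.h * math.sqrt(2 * math.pi))
        for i, g in enumerate(self.grid):
            k = c * math.exp(-0.5 * ((g - x) / self.h) ** 2)
            # running average of kernel contributions at each grid point
            self.dens[i] += (k - self.dens[i]) / self.n

    def query(self, x):
        g = self.grid
        if x <= g[0]:
            return self.dens[0]
        if x >= g[-1]:
            return self.dens[-1]
        step = g[1] - g[0]
        i = int((x - g[0]) / step)
        t = (x - g[i]) / step
        return (1 - t) * self.dens[i] + t * self.dens[i + 1]
```

Each update costs O(m) regardless of how many samples have arrived, which is the source of the linear-complexity claim; replacing the uniform running average with an exponentially weighted one would let the estimate track a drifting density.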
Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador
Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew
2017-01-01
The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km² within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2014-01-01
The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
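The structure of an HMC update is easy to illustrate on a toy target. The sketch below uses the standard 2nd-order leapfrog integrator (the conventional scheme the paper compares against, not the 2MNI) and a standard-normal target rather than the RSV posterior; every constant is an illustrative assumption:

```python
import math
import random

def hmc_sample(n, eps=0.2, L=10, seed=0):
    # HMC for a standard normal target: U(q) = q^2/2, grad U(q) = q
    rng = random.Random(seed)
    u = lambda q: 0.5 * q * q
    grad_u = lambda q: q
    q, out = 0.0, []
    for _ in range(n):
        p0 = rng.gauss(0.0, 1.0)       # resample momentum
        qn, p = q, p0
        # leapfrog: half momentum step, L position steps, half momentum step
        p -= 0.5 * eps * grad_u(qn)
        for i in range(L):
            qn += eps * p
            if i < L - 1:
                p -= eps * grad_u(qn)
        p -= 0.5 * eps * grad_u(qn)
        # Metropolis accept/reject on the change in total energy H = U + K
        dH = (u(qn) + 0.5 * p * p) - (u(q) + 0.5 * p0 * p0)
        if dH < 0 or rng.random() < math.exp(-dH):
            q = qn
        out.append(q)
    return out
```

Because leapfrog is symplectic, dH stays small and nearly every proposal is accepted; a higher-order integrator such as the 2MNI aims to keep dH small with larger step sizes, which is where the reported efficiency gain comes from.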
A generalized model for estimating the energy density of invertebrates
James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.
2012-01-01
Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96, p cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm
International Nuclear Information System (INIS)
Oliva, Diego; Abd El Aziz, Mohamed; Ella Hassanien, Aboul
2017-01-01
Highlights: •We modify the whale algorithm using chaotic maps. •We apply a chaotic algorithm to estimate parameters of photovoltaic cells. •We perform a study of chaos in the whale algorithm. •Several comparisons and metrics support the experimental results. •We test the method with data from real solar cells. -- Abstract: The use of solar energy has increased because it is a clean source of energy. Accordingly, the design of photovoltaic cells has attracted the attention of researchers worldwide. There are two main problems in this field: obtaining a useful model to characterize solar cells, and the absence of data about photovoltaic cells. This situation even affects the performance of photovoltaic modules (panels). The current-vs.-voltage characteristics are used to describe the behavior of solar cells. Considering such values, the design problem involves the solution of complex non-linear and multi-modal objective functions. Different algorithms have been proposed to identify the parameters of photovoltaic cells and panels, but most of them commonly fail to find optimal solutions. This paper proposes the Chaotic Whale Optimization Algorithm (CWOA) for the parameter estimation of solar cells. The main advantage of the proposed approach is the use of chaotic maps to compute and automatically adapt the internal parameters of the optimization algorithm. This is beneficial in complex problems, because along the iterative process the proposed algorithm improves its capability to search for the best solution. The modified method is able to optimize complex and multimodal objective functions, for example the function for the estimation of parameters of solar cells. To illustrate the capabilities of the proposed algorithm in solar cell design, it is compared with other optimization methods over different datasets. Moreover, the experimental results support the improved performance of the proposed approach
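The chaotic-map ingredient is easy to illustrate: a logistic map at r = 4 produces a deterministic yet erratic sequence in (0, 1) that can stand in for the uniform random draws inside an optimizer's update rule. This is a generic sketch of the idea, not the CWOA implementation:

```python
def logistic_map(x0, n, r=4.0):
    # fully chaotic logistic map x_{k+1} = r * x_k * (1 - x_k);
    # for r = 4 the orbit stays in [0, 1] and never settles down,
    # so its values can replace uniform random draws in an optimizer
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# deterministic "random-like" coefficients for, e.g., a whale-style update
coeffs = logistic_map(0.2, 10)
```

The appeal over a pseudo-random generator is ergodicity without repetition at negligible cost, and the sequence is fully reproducible from the seed x0; seeds at the map's fixed points (e.g. 0, 0.75) must be avoided.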
International Nuclear Information System (INIS)
Sanchez, M.; Esteban, L.; Kornejew, P.; Hirsch, M.
2008-01-01
Mid-infrared (10.6 μm CO₂ laser lines) interferometers used as a plasma density diagnostic must employ two-colour systems with superposed interferometer beams at different wavelengths in order to cope with mechanical vibrations and drifts. They require a highly precise phase-difference measurement in which all sources of error must be reduced. One of these is crosstalk between the signals, which creates nonlinear spurious periodic mixing products. The cause may be either optical or electrical crosstalk, both resulting in similar perturbations of the measurement. In the TJ-II interferometer a post-processing algorithm is used to reduce the crosstalk in the data. This post-processing procedure is not appropriate for very long pulses, as is the case in the new tokamak (ITER) and stellarator (W7-X) projects. In both cases an on-line reduction process is required or, even better, the unwanted signal components must be reduced in the system itself. CO₂ laser interferometers that use the CO laser line (5.3 μm) as the second wavelength may apply a single common detector sensitive to both wavelengths and separate the corresponding IF signals by appropriate bandpass filters. This reduces the complexity of the optical arrangement and avoids a possible source of vibration-induced phase noise, as both signals share the same beam path. To avoid crosstalk in this arrangement the filtering must be appropriate. In this paper we present calculations to define the limits of crosstalk for a desired plasma density precision. A crosstalk reduction algorithm has been developed and is applied to experimental results from TJ-II pulses. Results from a single-detector arrangement, as under investigation for the CO₂/CO laser interferometer developed for W7-X, are presented
Optimized LTE cell planning for multiple user density subareas using meta-heuristic algorithms
Ghazzai, Hakim
2014-09-01
Base station deployment in cellular networks is one of the most fundamental problems in network design. This paper proposes a novel method for the cell planning problem in fourth-generation (4G-LTE) cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both coverage and cell capacity constraints simultaneously by formulating a practical optimization problem. We start by performing a typical coverage and capacity dimensioning to identify the initial required number of base stations. Afterwards, we implement a Particle Swarm Optimization algorithm or a recently proposed Grey Wolf Optimizer to find the optimal base station locations that satisfy both problem constraints in the area of interest, which can be divided into several subareas with different user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant base stations. We have also performed Monte Carlo simulations to study the performance of the proposed scheme and computed the average number of users in outage. Results show that our proposed approach satisfies the desired network quality of service in all cases, even for large-scale dimensioning problems.
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
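As a generic illustration of the GA optimisation step (the FOC/ESPRIT fitness function itself is omitted), a minimal real-coded GA with truncation selection, uniform crossover, and Gaussian mutation might look like the sketch below; every operator and constant is an illustrative choice, not taken from the paper:

```python
import random

def ga_minimize(f, dim, bounds, pop_size=30, gens=100, mut=0.1, seed=3):
    # real-coded GA: keep the best half (elitism), refill the population
    # with uniform-crossover children perturbed by Gaussian mutation
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=f)[:pop_size // 2]  # truncation selection
        pop = [e[:] for e in elite]                 # elites survive unchanged
        while len(pop) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child = [min(hi, max(lo, g + rng.gauss(0.0, mut)))  # mutation
                     for g in child]
            pop.append(child)
    return min(pop, key=f)
```

In the paper's setting, `f` would be the cumulant-matrix fitness function over candidate DOAs; because elites survive unchanged, the best fitness found is monotonically non-increasing across generations, which is what gives the GA its robustness against the false local optima that trap Newton's method.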
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques, including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, at the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE
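The abstract evaluates imputation quality with the normalized root mean square (NRMS) error. A minimal sketch of that measure, assuming the common definition as the RMS of the residuals normalised by the RMS of the true values (the paper may normalise slightly differently):

```python
import math

def nrms_error(estimated, actual):
    """Normalized root mean square error between imputed and true values."""
    num = math.sqrt(sum((e - a) ** 2 for e, a in zip(estimated, actual)))
    den = math.sqrt(sum(a ** 2 for a in actual))
    return num / den

# A perfect imputation scores 0; larger errors push the score toward 1.
score = nrms_error([1.1, 1.9, 3.2], [1.0, 2.0, 3.0])
```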
Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment
Energy Technology Data Exchange (ETDEWEB)
Mohayai, Tanaz Angelina [IIT, Chicago; Snopok, Pavel [IIT, Chicago; Neuffer, David [Fermilab; Rogers, Chris [Rutherford
2017-10-12
The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
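The abstract does not specify which non-parametric estimator is used, so as an illustration, the simplest such technique, a one-dimensional Gaussian kernel density estimate, can be sketched as follows (the MICE analysis works in the full phase space, not one dimension):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density estimate f(x) built from Gaussian kernels."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Density near the bulk of the sample should exceed density in the tail;
# a cooled beam concentrates samples, raising the core density.
f = gaussian_kde([-0.2, 0.0, 0.1, 0.3, 1.9], bandwidth=0.4)
```

Comparing such a density at the beam core before and after the cooling channel gives a direct, binning-free measure of the cooling effect the abstract describes.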
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Directory of Open Access Journals (Sweden)
Qingyang Xu
2014-01-01
Full Text Available Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and the parameters of the Gaussian model are derived from the statistical information of the best individuals by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to verify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in the PID controller optimization of a PMSM and compared with classical PID tuning and GA.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and the parameters of the Gaussian model are derived from the statistical information of the best individuals by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to verify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in the PID controller optimization of a PMSM and compared with classical PID tuning and GA.
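A minimal sketch of the core Gaussian-EDA-with-elitism loop the abstract describes, assuming a diagonal (independent-variable) Gaussian model, truncation selection, and a best-so-far elitism rule; the paper's fast learning rule and benchmark suite are not reproduced here:

```python
import random

def gaussian_eda(objective, dim, pop_size=100, elite_frac=0.3,
                 generations=80, seed=1):
    """Minimize `objective` with a simple Gaussian EDA plus elitism."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    sigma = [2.0] * dim
    best = None
    for _ in range(generations):
        # Sample a population from the current Gaussian model.
        pop = [[rng.gauss(m, s) for m, s in zip(mean, sigma)]
               for _ in range(pop_size)]
        pop.sort(key=objective)
        if best is None or objective(pop[0]) < objective(best):
            best = pop[0]                     # elitism: keep best-so-far
        elite = pop[: int(pop_size * elite_frac)]
        # Re-estimate the Gaussian model from the elite set.
        mean = [sum(ind[i] for ind in elite) / len(elite) for i in range(dim)]
        sigma = [max(1e-3, (sum((ind[i] - mean[i]) ** 2 for ind in elite)
                            / len(elite)) ** 0.5) for i in range(dim)]
    return best

# Sphere benchmark shifted to 3.0 in each of three dimensions.
best = gaussian_eda(lambda x: sum((v - 3.0) ** 2 for v in x), dim=3)
```

For PID tuning, the decision vector would be the (Kp, Ki, Kd) gains and the objective a closed-loop error index; the sampling/selection/model-update loop is unchanged.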
Comparison Study on the Battery SoC Estimation with EKF and UKF Algorithms
Directory of Open Access Journals (Sweden)
Hongwen He
2013-09-01
Full Text Available The battery state of charge (SoC), whose estimation is one of the basic functions of a battery management system (BMS), is a vital input parameter in the energy management and power distribution control of electric vehicles (EVs). In this paper, two methods, based on an extended Kalman filter (EKF) and an unscented Kalman filter (UKF) respectively, are proposed to estimate the SoC of a lithium-ion battery used in EVs. The lithium-ion battery is modeled with the Thevenin model, and the model parameters are identified based on experimental data and validated with the Beijing Driving Cycle. Then the state-space equations used for SoC estimation are established. The SoC estimation results with the EKF and UKF are compared in terms of accuracy and convergence. It is concluded that both algorithms perform well, while the UKF algorithm performs better, with faster convergence and higher accuracy.
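The paper's estimators rest on the nonlinear Thevenin model; as a hedged illustration of the predict/update structure only, the filter can be sketched with a toy linear open-circuit-voltage curve (ocv = 3.0 + 1.2*soc) and an internal resistance term, in which case the EKF reduces to a scalar Kalman filter. All constants below are illustrative, not taken from the paper:

```python
def soc_kalman_step(soc, p, current_a, dt_s, v_meas,
                    capacity_ah=2.0, r_int=0.05, q=1e-7, r=1e-4):
    """One predict/update step of a (linearized) Kalman SoC estimator.

    Assumes a toy linear OCV curve ocv = 3.0 + 1.2*soc and a pure
    internal-resistance voltage model; a real battery needs the nonlinear
    Thevenin model the paper uses, which is why EKF/UKF are required.
    """
    # Predict: coulomb counting (discharge current is positive).
    soc_pred = soc - current_a * dt_s / (capacity_ah * 3600.0)
    p_pred = p + q
    # Update: compare predicted terminal voltage with the measurement.
    h = 1.2                                   # d(v)/d(soc) of the toy OCV
    v_pred = 3.0 + 1.2 * soc_pred - r_int * current_a
    k = p_pred * h / (h * h * p_pred + r)     # Kalman gain
    soc_new = soc_pred + k * (v_meas - v_pred)
    p_new = (1.0 - k * h) * p_pred
    return soc_new, p_new

# Starting from a wrong 50% guess, the voltage updates pull the SoC
# estimate toward the value implied by the measurements (~80% here).
soc, p = 0.5, 0.1
for _ in range(50):
    soc, p = soc_kalman_step(soc, p, current_a=1.0, dt_s=1.0,
                             v_meas=3.0 + 1.2 * 0.8 - 0.05 * 1.0)
```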
A New Missing Values Estimation Algorithm in Wireless Sensor Networks Based on Convolution
Directory of Open Access Journals (Sweden)
Feng Liu
2013-04-01
Full Text Available Nowadays, with the rapid development of Internet of Things (IoT) applications, missing data have become very common in wireless sensor networks. This problem can greatly and directly threaten the stability and usability of IoT applications that are built on wireless sensor networks. How to estimate the missing values has attracted wide interest, and some solutions have been proposed. In contrast to previous works, in this paper we propose a new convolution-based missing value estimation algorithm. Convolution, which is usually used in the area of signal and image processing, can also be a practical and efficient way to estimate missing sensor data. The results show that the proposed algorithm is practical and effective, and can estimate the missing values accurately.
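The paper does not give its convolution kernel, but the idea of estimating a missing reading from kernel-weighted neighbours can be sketched as follows, with an assumed 3-tap smoothing kernel and renormalisation over the readings that are actually present:

```python
def estimate_missing(series, kernel=(0.25, 0.5, 0.25)):
    """Fill None entries by convolving neighbouring readings with a kernel."""
    filled = list(series)
    for i, v in enumerate(series):
        if v is None:
            acc, wsum = 0.0, 0.0
            for k, offset in zip(kernel, (-1, 0, 1)):
                j = i + offset
                if 0 <= j < len(series) and series[j] is not None:
                    acc += k * series[j]
                    wsum += k
            # Renormalise over the weights of available neighbours.
            filled[i] = acc / wsum if wsum else None
    return filled

# A single gap is replaced by the kernel-weighted mean of its neighbours.
readings = [20.0, 21.0, None, 23.0, 24.0]
completed = estimate_missing(readings)
```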
Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J; Vegge, Tejs
2014-09-28
Metal halide ammines have great potential as a future, high-density energy carrier in vehicles. The materials known so far, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive fuel cell applications, because the release of ammonia is a multi-step reaction requiring too much heat to be supplied, which lowers the total efficiency. Here, we apply density functional theory (DFT) calculations to predict new mixed metal halide ammines with improved storage capacities and the ability to release the stored ammonia in one step, at temperatures suitable for system integration with polymer electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) - almost 27,000 combinations - and have identified novel mixtures with significantly improved storage capacities. The size of the search space and the chosen fitness function make it possible to verify that the found candidates are the best possible candidates in the search space, proving that the GA implementation is ideal for this kind of computational materials design, requiring calculations on less than two percent of the candidates to identify the global optimum.
A physics-based algorithm for the estimation of bearing spall width using vibrations
Kogan, G.; Klein, R.; Bortman, J.
2018-05-01
Evaluation of the damage severity in a mechanical system is required for the assessment of its remaining useful life. In rotating machines, bearings are crucial components. Hence, the estimation of the size of spalls in bearings is important for prognostics of the remaining useful life. Recently, this topic has been extensively studied and many of the methods used for the estimation of spall size are based on the analysis of vibrations. A new tool is proposed in the current study for the estimation of the spall width on the outer ring raceway of a rolling element bearing. The understanding and analysis of the dynamics of the rolling element-spall interaction enabled the development of a generic and autonomous algorithm. The algorithm is generic in the sense that it does not require any human interference to make adjustments for each case. All of the algorithm's parameters are defined by analytical expressions describing the dynamics of the system. The required conditions, such as sampling rate, spall width and depth, defining the feasible region of such algorithms, are analyzed in the paper. The algorithm performance was demonstrated with experimental data for different spall widths.
Tan, Jun; Nie, Zaiping
2018-05-12
Direction of arrival (DOA) estimation of low-altitude targets is difficult due to multipath coherent interference from the ground-reflected image of the targets, especially for very high frequency (VHF) radars, whose antennas are severely restricted in terms of aperture and height. The polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing with the generalized MUSIC algorithm for polarization sensitive arrays (PSAs), is proposed in this paper to solve this problem. Firstly, polarization smoothing pre-processing is exploited to eliminate the coherence between the direct and the specular signals. Secondly, we construct the generalized MUSIC algorithm for low-angle estimation. Finally, based on the geometry of the symmetric multipath model, the proposed algorithm converts the two-dimensional search into a one-dimensional search, thus reducing the computational burden. Numerical results are provided to verify the effectiveness of the proposed method, showing that the proposed algorithm significantly improves angle estimation performance in the low-angle region compared with the available methods, especially when the grazing angle is near zero.
Estimation of dislocations density and distribution of dislocations during ECAP-Conform process
Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza
2018-01-01
Dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm) that was severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscattering diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated. The total dislocation densities were then calculated and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×1012 m-2 for the annealed state increases to 4×1013 m-2 at the middle of the groove (135° from the entrance) and reaches 6.4×1013 m-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for the one-pass severely deformed Al sample reached 6.2×1014 m-2. At the micrometer scale, the behavior of metals, especially their mechanical properties, largely depends on the dislocation density and dislocation distribution. Therefore, yield stresses at different conditions were estimated based on the calculated dislocation densities. The estimated yield stresses were then compared with experimental results and good agreement was found. Although the grain size of the material did not change appreciably, the yield stress showed a marked increase due to the development of a cell structure. The considerable increase in dislocation density in this process is consistent with the formation of subgrains and cell structures during the process, which explains the increase in yield stress.
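The standard relation for estimating yield stress from dislocation density, as the abstract's last step does, is Taylor hardening: sigma_y = sigma_0 + M*alpha*G*b*sqrt(rho). A sketch with typical aluminium constants (the constants below are assumed for illustration, not taken from the paper):

```python
import math

def taylor_yield_stress(rho_m2, sigma0_mpa=10.0, alpha=0.3, taylor_m=3.06,
                        shear_modulus_gpa=26.0, burgers_nm=0.286):
    """Taylor-hardening estimate: sigma_y = sigma0 + M*alpha*G*b*sqrt(rho)."""
    g_pa = shear_modulus_gpa * 1e9
    b_m = burgers_nm * 1e-9
    # Convert the hardening term from Pa to MPa before adding sigma0.
    return sigma0_mpa + taylor_m * alpha * g_pa * b_m * math.sqrt(rho_m2) / 1e6

# Yield stress rises with dislocation density: annealed vs. one ECAP pass,
# using the densities reported in the abstract.
sigma_annealed = taylor_yield_stress(2e12)    # ~2x10^12 m^-2
sigma_deformed = taylor_yield_stress(6.2e14)  # ~6.2x10^14 m^-2
```

With these assumed constants the deformed-state estimate lands near 180 MPa versus roughly 20 MPa annealed, illustrating how a two-order-of-magnitude density increase drives the marked strengthening the abstract reports.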
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-03-29
In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of the dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology and size. Next we analyze the mode types and their number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices is computed using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to the traffic flow on Beijing's third ring road. In order to clearly illustrate the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the third ring road. Practical application to a large-scale road network will be addressed in future research through a decentralized modeling approach and distributed observer design.
Majeed, Muhammad Usman
2017-07-19
Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines including biomedical engineering, mechanics of materials and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy especially when the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for steady-state Laplace equation. It is well-known that source and boundary estimation problems for the elliptic PDEs are highly sensitive to noise in the data. For this, an optimal iterative observer algorithm, which is a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for Poisson equation for noise-free and noisy data cases respectively. Next, a divide and conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and the source data estimation problems for the steady-state Laplace and Poisson kind of systems respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.
Directory of Open Access Journals (Sweden)
MILIVOJEVIC, Z. N.
2010-02-01
Full Text Available In this paper the fundamental frequency estimation results for an MP3-modeled speech signal are analyzed. The estimation of the fundamental frequency was performed by the Picking-Peaks algorithm with implemented Parametric Cubic Convolution (PCC) interpolation. The efficiency of PCC was tested for the Catmull-Rom, Greville, and Greville two-parametric kernels. Based on the MSE, a window that gives optimal results was chosen.
Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank
Directory of Open Access Journals (Sweden)
LI Lan-yin
2017-04-01
Full Text Available The traditional PageRank algorithm cannot efficiently handle the ranking of large-scale web page data. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of the PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
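topK-Rank's pruning and bound estimation are specific to the paper, but the underlying PageRank iteration it accelerates can be sketched in a few lines; sorting the result and taking the first k entries then yields the top-k nodes this work targets:

```python
def pagerank(links, damping=0.85, iters=50):
    """Basic PageRank by power iteration; `links` maps node -> out-neighbours."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = rank[u] / len(outs)
                for v in outs:
                    new[v] += damping * share
            else:                             # dangling node: spread evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# 'c' is linked by both other pages, so it gets the top score.
rank = pagerank({'a': ['c'], 'b': ['c'], 'c': ['a']})
```

The cost of the plain iteration is proportional to the number of edges per sweep, which is exactly what topK-Rank's subgraph pruning reduces when only the top k scores are needed.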
Recursive parameter estimation for Hammerstein-Wiener systems using modified EKF algorithm.
Yu, Feng; Mao, Zhizhong; Yuan, Ping; He, Dakuo; Jia, Mingxing
2017-09-01
This paper focuses on recursive parameter estimation for the single input single output Hammerstein-Wiener system model, and the study is then extended to a rarely mentioned multiple input single output Hammerstein-Wiener system. Inspired by the extended Kalman filter algorithm, two basic recursive algorithms are derived from the first- and second-order Taylor approximations. Based on the form of the first-order approximation algorithm, a modified algorithm with a larger parameter convergence domain is proposed to address the small convergence domain of the first-order algorithm and the limited applicability of the second-order one. The validity of the modification in expanding the convergence domain is shown by the convergence analysis and is demonstrated with two simulation cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation
Directory of Open Access Journals (Sweden)
Suk-Ju Kang
2016-12-01
Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user and thereby cannot be used in a multi-user system. Even when they can track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.
A simple algorithm for estimation of source-to-detector distance in Compton imaging
International Nuclear Information System (INIS)
Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.
2008-01-01
Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at the Los Alamos National Laboratory and simulated data of the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than maximum-likelihood-based methods because it is based on simple back projections of Compton scatter data.
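The estimator compares solid angles subtended by the back-projected image at candidate distances. As a simplified geometric illustration (a disc-shaped image region with illustrative dimensions, not the PCI reconstruction), the solid-angle relation can be inverted for Z by bisection:

```python
import math

def cone_solid_angle(radius, distance):
    """Solid angle subtended by a disc of given radius at a given distance."""
    return 2.0 * math.pi * (1.0 - distance / math.hypot(distance, radius))

def estimate_distance(radius, omega, lo=1e-3, hi=1e4):
    """Invert the solid-angle relation for distance by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cone_solid_angle(radius, mid) > omega:
            lo = mid                          # angle too large: source farther
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Recover a 50 cm source distance from the angle a 10 cm disc subtends.
z = estimate_distance(10.0, cone_solid_angle(10.0, 50.0))
```

Because the solid angle decreases monotonically with distance, the bisection always converges, which is what makes this kind of geometric inversion fast compared with likelihood maximisation.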
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
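The largest order value (LOV) rule mentioned above can be sketched directly; under one common reading of the rule (an assumption here), the job whose vector component is largest is scheduled first, which lets a continuous EDA sample discrete job permutations:

```python
def largest_order_value(vector):
    """Largest order value rule: continuous vector -> job permutation.

    Jobs are ordered by decreasing component value, so a real-valued
    individual sampled from the continuous probabilistic model maps to
    a discrete permutation for the flow-shop schedule.
    """
    return [job for job, _ in
            sorted(enumerate(vector), key=lambda p: p[1], reverse=True)]

# Component 1.7 is the largest, so job 2 comes first in the permutation.
perm = largest_order_value([0.3, -0.8, 1.7, 0.1])
```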
Directory of Open Access Journals (Sweden)
Yu Huang
Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic increases the computation performed in each generation exponentially. The behavior of particles in quantum space is constrained by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain
Directory of Open Access Journals (Sweden)
Ibn-Elhaj E
2009-01-01
Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work, because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits of image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used the database freely available on the web.
A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain
Directory of Open Access Journals (Sweden)
E. M. Ismaili Aalaoui
2009-02-01
Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work, because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits of image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used the database freely available on the web.
A note on the conditional density estimate in single functional index model
2010-01-01
Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...
Directory of Open Access Journals (Sweden)
Apurva Samdurkar
2018-06-01
Full Text Available Object tracking is one of the main fields within computer vision. Among the various approaches for object detection and tracking, the background subtraction approach makes detection of the object easier. The proposed block matching algorithm is then applied to the detected object to generate the motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied and experiments are carried out on various standard video data sets and user-defined data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search (MDS) algorithm is proposed, using a small diamond shape search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond shape pattern, based on the point with the minimum cost function. The algorithm ends with the small diamond pattern in the last step. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach and, finally, the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computational time per frame. The experimental results show that MDS performs better than DS and CDS in terms of average search points and average computation time.
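The small-diamond stage common to these search patterns can be sketched as follows; this is a minimal SAD-based illustration of one stage only (centre point plus four diamond neighbours), not the full DS/CDS/MDS schedule of large and small patterns:

```python
def sad(ref, cur, bx, by, dx, dy, size=4):
    """Sum of absolute differences between a block and its displaced match."""
    return sum(abs(ref[by + j][bx + i] - cur[by + dy + j][bx + dx + i])
               for j in range(size) for i in range(size))

def small_diamond_search(ref, cur, bx, by, max_step=8, size=4):
    """Block matching with the small diamond pattern (centre + 4 neighbours)."""
    dx = dy = 0
    for _ in range(max_step):
        candidates = [(dx, dy), (dx + 1, dy), (dx - 1, dy),
                      (dx, dy + 1), (dx, dy - 1)]
        best = min(candidates,
                   key=lambda c: sad(ref, cur, bx, by, c[0], c[1], size))
        if best == (dx, dy):                  # centre is best: converged
            break
        dx, dy = best
    return dx, dy

# A 4x4 textured block at (4,4) in `ref` moves to (6,5) in `cur`;
# the search should recover the motion vector (2, 1).
W = 20
ref = [[0] * W for _ in range(W)]
cur = [[0] * W for _ in range(W)]
for j in range(4):
    for i in range(4):
        ref[4 + j][4 + i] = 100 + 10 * j + i
        cur[5 + j][6 + i] = 100 + 10 * j + i
mv = small_diamond_search(ref, cur, bx=4, by=4)
```

Counting the SAD evaluations performed before convergence gives exactly the "average search points per frame" metric the abstract uses to compare DS, CDS, and MDS.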
Directory of Open Access Journals (Sweden)
Pengfei Sun
Full Text Available Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not sufficient for an advanced pose estimation algorithm. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it is of high accuracy in six degrees of freedom (DOF) motion. It also outperforms three other state-of-the-art algorithms in terms of accuracy in the comparison experiment.
Wang, Rongxiao; Chen, B.; Qiu, S.; Ma, Liang; Zhu, Zhengqiu; Wang, Yiping; Qiu, Xiaogang
2018-01-01
Locating and quantifying the emission source plays a significant role in the emergency management of hazardous gas leak accidents. Due to the lack of a desirable atmospheric dispersion model, current source estimation algorithms cannot meet the requirements of both accuracy and efficiency. In
3D head pose estimation and tracking using particle filtering and ICP algorithm
Ben Ghorbel, Mahdi; Baklouti, Malek; Couvet, Serge
2010-01-01
This paper addresses the issue of 3D head pose estimation and tracking. Existing approaches generally need a huge database, a training procedure, manual initialization, or manually extracted face features. We propose a framework for estimating the 3D head pose at a fine level and tracking it continuously across multiple degrees of freedom (DOF) based on ICP and particle filtering. We propose to approach the problem, using 3D computational techniques, by aligning a face model to the 3D dense estimation computed by a stereo vision method, and propose a particle filter algorithm to refine and track the posterior estimate of the position of the face. This work makes two contributions: the first concerns the alignment part, where we propose an extended ICP algorithm using an anisotropic scale transformation. The second contribution concerns the tracking part, where we propose the use of the particle filtering algorithm and constrain the search space using the ICP algorithm in the propagation step. The results show that the system is able to fit and track the head properly, and the results remain accurate on new individuals without manual adaptation or training. © Springer-Verlag Berlin Heidelberg 2010.
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas Estimated Intersection Density of Walkable Roads Web Service
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in each EnviroAtlas community....
EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.
Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry
2011-10-01
Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, with a worst case of 3.66 Å, were produced. For canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures within about 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.
Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.
Directory of Open Access Journals (Sweden)
Hyun Joo
2011-10-01
Full Text Available Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, with a worst case of 3.66 Å, were produced. For canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures within about 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.
Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity
Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry
2011-01-01
Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, with a worst case of 3.66 Å, were produced. For canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures within about 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638
Habarulema, J. B.; McKinnell, L.-A.
2012-05-01
In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimation, using identical datasets in the model development and verification processes. The investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS), and the main objective was to find the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MATLAB-based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide statistically comparable results, but differ significantly in the time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide for neural network users choosing an appropriate algorithm based on the computational resources available for research.
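The role of the momentum term that distinguishes variants such as BPM from standard backpropagation can be illustrated on a one-weight linear neuron. This is a generic sketch with made-up data, not the SNNS or MATLAB implementations compared in the abstract.

```python
def train(lr=0.01, momentum=0.0, epochs=200):
    # Hypothetical data for y = 2*x; a single weight w is learned by
    # gradient descent on the mean squared error, optionally with a
    # momentum term (the BPM-style update).
    xs = [0.5, 1.0, 1.5, 2.0]
    ys = [1.0, 2.0, 3.0, 4.0]
    w, v = 0.0, 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        v = momentum * v - lr * grad  # v accumulates past gradients
        w += v
    return w

w_plain = train(momentum=0.0)  # standard backpropagation step
w_mom = train(momentum=0.9)    # momentum-accelerated step
```

Both runs recover w ≈ 2; at this small learning rate the momentum variant closes the remaining gap in fewer epochs, which is the kind of convergence-time difference the abstract's comparison concerns.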
Directory of Open Access Journals (Sweden)
YU Wenhao
2015-01-01
Full Text Available The distribution pattern and density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. Kernel density estimation, which is commonly used to express these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis or Voronoi-based methods) because it accounts for regional influence, in line with the first law of geography. However, traditional kernel density estimation operates in Euclidean space, ignoring the fact that the service function and interrelation of urban facilities operate over network path distance rather than conventional Euclidean distance. Hence, this research proposes a computational model of network kernel density estimation, along with an extended form of the model that incorporates constraints. This work also discusses the impact of the distance-decay threshold and the kernel peak height on the resulting density representation. A large-scale experiment on real data covering different POI distribution patterns (random, sparse, regionally clustered, and linearly clustered) examines the spatial distribution characteristics, influencing factors, and service functions of POI infrastructure in the city.
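The planar (Euclidean) kernel density estimate that the network variant generalizes can be sketched in a few lines. The POI coordinates below are made up, and the network version would substitute shortest-path distance along the road network for the Euclidean distance used here.

```python
import math

def gaussian_kde(points, query, bandwidth=1.0):
    """Planar kernel density estimate at `query` using a Gaussian kernel.
    The network variant would replace the Euclidean distance with the
    shortest-path distance along the street network."""
    h = bandwidth
    total = 0.0
    for (px, py) in points:
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2
        total += math.exp(-d2 / (2 * h * h)) / (2 * math.pi * h * h)
    return total / len(points)

# hypothetical POIs: a small cluster near the origin plus one outlier
pois = [(0, 0), (1, 0), (0, 1), (5, 5)]
```

Querying near the cluster yields a higher density than querying near the isolated POI, which is the pattern-detection behavior the abstract describes.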
Chatzidakis, Stylianos; Liu, Zhengzhi; Hayward, Jason P.; Scaglione, John M.
2018-03-01
This work presents a generalized muon trajectory estimation algorithm to estimate the path of a muon in either uniform or nonuniform media. The use of cosmic ray muons in nuclear nonproliferation and safeguard verification applications has recently gained attention due to the non-intrusive and passive nature of the inspection, penetrating capabilities, as well as recent advances in detectors that measure position and direction of the individual muons before and after traversing the imaged object. However, muon image reconstruction techniques are limited in resolution due to low muon flux and the effects of multiple Coulomb scattering (MCS). Current reconstruction algorithms, e.g., point of closest approach (PoCA) or straight-line path (SLP), rely on overly simple assumptions for muon path estimation through the imaged object. For robust muon tomography, efficient and flexible physics-based algorithms are needed to model the MCS process and accurately estimate the most probable trajectory of a muon as it traverses an object. In the present work, the use of a Bayesian framework and a Gaussian approximation of MCS is explored for estimation of the most likely path of a cosmic ray muon traversing uniform or nonuniform media and undergoing MCS. The algorithm's precision is compared to Monte Carlo simulated muon trajectories. It was found that the algorithm is expected to be able to predict muon tracks to less than 1.5 mm root mean square (RMS) for 0.5 GeV muons and 0.25 mm RMS for 3 GeV muons, a 50% improvement compared to SLP and 15% improvement when compared to PoCA. Further, a 30% increase in useful muon flux was observed relative to PoCA. Muon track prediction improved for higher muon energies or smaller penetration depth where energy loss is not significant. The effect of energy loss due to ionization is investigated, and a linear energy loss relation that is easy to use is proposed.
Effects of stand density on top height estimation for ponderosa pine
Martin Ritchie; Jianwei Zhang; Todd Hamilton
2012-01-01
Site index, estimated as a function of dominant-tree height and age, is often used as an expression of site quality. This expression is assumed to be effectively independent of stand density. Observation of dominant height at two different ponderosa pine levels-of-growing-stock studies revealed that top height stability with respect to stand density depends on the...
KDE-Track: An Efficient Dynamic Density Estimator for Data Streams
Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang
2016-01-01
Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
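The core KDE-Track idea, maintaining density values only at grid points, updating them incrementally per arriving sample, and answering queries by linear interpolation, can be sketched in one dimension. This is a simplified illustration under assumed parameters, not the authors' released method (which also adapts the bandwidth).

```python
import math

class GridKDE1D:
    """Simplified 1-D sketch: kernel density values are kept only at grid
    points, updated incrementally as stream samples arrive, and queried
    via linear interpolation between the two nearest grid points."""
    def __init__(self, lo, hi, step, bandwidth):
        self.lo, self.step, self.h = lo, step, bandwidth
        self.grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
        self.density = [0.0] * len(self.grid)
        self.n = 0

    def update(self, x):
        # running-mean update: O(grid size) per sample, independent of
        # how many samples the stream has already delivered
        self.n += 1
        for i, g in enumerate(self.grid):
            k = math.exp(-((g - x) / self.h) ** 2 / 2) / (self.h * math.sqrt(2 * math.pi))
            self.density[i] += (k - self.density[i]) / self.n

    def query(self, x):
        i = int((x - self.lo) / self.step)
        i = max(0, min(i, len(self.grid) - 2))
        t = (x - self.grid[i]) / self.step
        return (1 - t) * self.density[i] + t * self.density[i + 1]

kde = GridKDE1D(-5.0, 5.0, 0.5, 0.5)
for s in [-0.2, 0.1, 0.0, 0.3, -0.1, 0.2]:
    kde.update(s)
```

After streaming a handful of samples near zero, the interpolated density is high near zero and near-zero far away, without storing the samples themselves.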
KDE-Track: An Efficient Dynamic Density Estimator for Data Streams
Qahtan, Abdulhakim Ali Ali
2016-11-08
Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
International Nuclear Information System (INIS)
Ozturk, H.K.; Canyurt, O.E.; Hepbasli, A.; Utlu, Z.
2004-01-01
The main objective of the present study is to develop energy input estimation equations for the residential-commercial sector (RCS) in order to estimate future projections based on the genetic algorithm (GA) approach, and to examine the effect of the design parameters on the energy input of the sector. For this purpose, the Turkish RCS is given as an example. The GA Energy Input Estimation Model (GAEIEM) is used to estimate Turkey's future residential-commercial energy input demand based on gross domestic product (GDP), population, import, export, house production, cement production and basic house appliances consumption figures. It may be concluded that the three forms of the model proposed here can be used as alternative solution and estimation techniques to those currently available. It is also expected that this study will be helpful in developing highly applicable and productive planning for energy policies. (author)
Development of estimation algorithm of loose parts and analysis of impact test data
International Nuclear Information System (INIS)
Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho
1999-11-01
Loose parts are produced when components detach from the structure of the reactor coolant system (RCS) or enter the RCS from outside during test operation, refueling, and overhaul. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for estimating the impact point and mass of a loose part is developed. The developed algorithm was tested with impact test data from Yonggwang-3. The impact point estimated using the proposed algorithm showed a 5 percent error relative to the real test data. The estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. Through this comparison, the integrity of the sensors and the monitoring system could also be checked. (author)
Directory of Open Access Journals (Sweden)
Dongming Li
2017-04-01
Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process: the image contains information coming from both out-of-focus and in-focus planes of the object, which also degrades its quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log-likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images of better quality and improve the convergence of the blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop iterative solutions for AO image restoration that address the joint deconvolution issue. We conduct a number of experiments to evaluate the performance of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms current state-of-the-art blind deconvolution methods.
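The classical Poisson maximum-likelihood iteration underlying this kind of restoration is the Richardson-Lucy update. The 1-D, single-frame, unregularized sketch below illustrates only that core; the paper's algorithm additionally handles multiple frames, regularization, frame selection, and PSF estimation.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    # Multiplicative Poisson ML update: estimate is scaled by the
    # blurred ratio of observed to re-blurred estimate each iteration.
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# toy example: a single bright point blurred by an assumed 3-tap PSF
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(9)
truth[4] = 8.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

On this noiseless toy signal the iteration progressively re-concentrates the flux at the true peak position.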
FPSoC-Based Architecture for a Fast Motion Estimation Algorithm in H.264/AVC
Directory of Open Access Journals (Sweden)
Obianuju Ndili
2009-01-01
Full Text Available There is an increasing need for high quality video on low power, portable devices. Possible target applications range from entertainment and personal communications to security and health care. While H.264/AVC answers the need for high quality video at lower bit rates, it is significantly more complex than previous coding standards and thus results in greater power consumption in practical implementations. In particular, motion estimation (ME) consumes the largest share of power in an H.264/AVC encoder. It is therefore critical to speed up integer ME in H.264/AVC via fast motion estimation (FME) algorithms and hardware acceleration. In this paper, we present our hardware oriented modifications to a hybrid FME algorithm, our architecture based on the modified algorithm, and our implementation and prototype on a PowerPC-based Field Programmable System on Chip (FPSoC). Our results show that the modified hybrid FME algorithm on average outperforms previous state-of-the-art FME algorithms, while its losses when compared with full search ME (FSME), in terms of PSNR performance and computation time, are insignificant. We show that although our implementation platform is FPGA-based, our implementation results compare favourably with previous architectures implemented on ASICs. Finally we also show an improvement over some existing architectures implemented on FPGAs.
A practical algorithm for distribution state estimation including renewable energy sources
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)
2009-11-15
Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by research institutes, about 25% of new generation will come from Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) approach including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by a Weighted Least-Squares (WLS) approach. The practical considerations include var compensators, Voltage Regulators (VRs), Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, and unbalanced three-phase power flow equations. Comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) on a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
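A minimal PSO core of the kind hybridized here can be sketched as follows. This is a generic illustration: the Nelder-Mead refinement step is omitted, and the quadratic objective with a known minimum at (1, 2) is a made-up stand-in for the WLS state-estimation objective.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, lb=-5.0, ub=5.0):
    """Minimal particle swarm optimization (inertia + cognitive/social
    pulls). The paper's PSO-NM additionally refines with Nelder-Mead."""
    random.seed(1)
    pos = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy WLS-style objective with known minimum at (1, 2)
wls = lambda x: 4 * (x[0] - 1) ** 2 + (x[1] - 2) ** 2
best, best_val = pso(wls, 2)
```

On this convex 2-D objective the swarm converges tightly to the known minimum; the DSE problem adds nonlinear, discrete device models, which is what motivates the simplex refinement.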
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of the convergence speed and the steady-state error via the combination of a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
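The zero-attractor mechanism described here can be illustrated with a simpler sign-based attractor inside plain LMS rather than the paper's SL0-penalized APA; the channel taps, step sizes, and data below are all hypothetical.

```python
import random

def zero_attracting_lms(x, d, taps, mu=0.05, rho=1e-4):
    # Each update pulls w toward the data (LMS term) and toward zero
    # (sign-based attractor), which mainly shrinks near-zero taps.
    sgn = lambda v: (v > 0) - (v < 0)
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]  # regressor, newest sample first
        e = d[n] - sum(wi * xi for wi, xi in zip(w, xn))
        w = [wi + mu * e * xi - rho * sgn(wi) for wi, xi in zip(w, xn)]
    return w

# hypothetical sparse 5-tap channel: two nonzero taps out of five
random.seed(0)
h = [1.0, 0.0, 0.0, 0.5, 0.0]
x = [random.uniform(-1, 1) for _ in range(4000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w_hat = zero_attracting_lms(x, d, taps=5)
```

The attractor biases the identically-zero taps toward zero while the active taps converge to the channel values, which is the sparsity-exploiting behavior the abstract attributes to the SL0 penalty.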
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Large-scale, high-precision hydrological simulation has elaborated spatial descriptions of hydrological behavior. Meanwhile, this trend has been accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimum-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path to uncertainty quantification.
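A bare-bones GLUE loop with plain Monte Carlo sampling looks as follows; the study's contribution is replacing this sampler with heuristic optimizers (GA, DE, SCE), which is omitted here. The toy runoff model, its true coefficient, and the likelihood threshold are all assumptions for illustration.

```python
import random

def glue(simulate, observed, prior_sampler, n=2000, threshold=0.3):
    # Monte Carlo GLUE: sample the prior, score each parameter set with
    # a Nash-Sutcliffe-style likelihood, keep the "behavioral" sets.
    mean_obs = sum(observed) / len(observed)
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    behavioral = []
    for _ in range(n):
        theta = prior_sampler()
        sse = sum((s - o) ** 2 for s, o in zip(simulate(theta), observed))
        nse = 1.0 - sse / var_obs  # Nash-Sutcliffe efficiency
        if nse > threshold:
            behavioral.append((nse, theta))
    return behavioral

# toy "model": runoff proportional to time, true coefficient 2.0
random.seed(42)
obs = [2.0 * t for t in range(1, 6)]
sims = lambda th: [th * t for t in range(1, 6)]
accepted = glue(sims, obs, lambda: random.uniform(0.0, 4.0))
```

The behavioral set clusters around the true coefficient; with many parameters, most blind prior draws would be rejected, which is the inefficiency the heuristic samplers address.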
Wolf, Michael
2012-01-01
A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
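The inverse-variance blending of per-channel Gaussian estimates described above can be sketched directly; the seven channel readings and variances below are hypothetical numbers, not SVS calibration data.

```python
def fuse_estimates(means, variances):
    """Combine per-channel Gaussian mass estimates by inverse-variance
    weighting: noisier channels count less. Returns the fused mean and
    the fused variance (the certainty of the blended estimate)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / total
    return mean, 1.0 / total

# seven hypothetical channel readings (grams) with per-channel variance;
# the last channel is deliberately noisy and disagrees with the rest
means = [10.2, 9.8, 10.5, 10.0, 9.9, 10.1, 12.0]
variances = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 8.0]
m, v = fuse_estimates(means, variances)
```

The noisy outlier channel barely moves the fused mean, and the fused variance is smaller than any single channel's, which is why blending is preferable to picking one channel.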
Sparse Adaptive Channel Estimation Based on lp-Norm-Penalized Affine Projection Algorithm
Directory of Open Access Journals (Sweden)
Yingsong Li
2014-01-01
Full Text Available We propose an lp-norm-penalized affine projection algorithm (LP-APA for broadband multipath adaptive channel estimation. The proposed LP-APA is realized by incorporating an lp-norm into the cost function of the conventional affine projection algorithm (APA to exploit the sparsity property of the broadband wireless multipath channel, by which the convergence speed and steady-state performance of the APA are significantly improved. The implementation of the LP-APA is equivalent to adding a zero attractor to its iterations. The simulation results, which are obtained from a sparse channel estimation, demonstrate that the proposed LP-APA can efficiently improve channel estimation performance in terms of both convergence speed and steady-state performance when the channel is exactly sparse.
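The zero-attractor idea can be illustrated in its simplest form. The sketch below uses an l1 zero attractor on a plain LMS update rather than the authors' exact lp-penalized APA recursion, and all channel taps, step sizes, and the attractor strength are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse 16-tap channel: only two nonzero taps
h_true = np.zeros(16)
h_true[3], h_true[10] = 1.0, -0.5

N = 2000
x = rng.standard_normal(N)                           # white training input
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

mu, rho = 0.01, 1e-4     # step size and zero-attractor strength (assumed)
w = np.zeros(16)
for n in range(15, N):
    u = x[n - 15:n + 1][::-1]            # regressor, most recent sample first
    e = d[n] - w @ u                     # a priori estimation error
    w += mu * e * u - rho * np.sign(w)   # LMS step plus l1 zero attractor
```

The `-rho * np.sign(w)` term is the zero attractor: it continually pulls small (presumably inactive) taps toward zero, which is what accelerates convergence on sparse channels.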
Online Estimation of Time-Varying Volatility Using a Continuous-Discrete LMS Algorithm
Directory of Open Access Journals (Sweden)
Jacques Oksman
2008-09-01
Full Text Available The following paper addresses a problem of inference in financial engineering, namely, online time-varying volatility estimation. The proposed method is based on an adaptive predictor for the stock price, built from an implicit integration formula. An estimate for the current volatility value which minimizes the mean square prediction error is calculated recursively using an LMS algorithm. The method is then validated on several synthetic examples as well as on real data. Throughout the illustration, the proposed method is compared with both UKF and offline volatility estimation.
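The recursion the abstract describes can be illustrated in a much-simplified form: below, volatility is tracked as an LMS-adapted variance of synthetic returns, rather than via the authors' implicit-integration price predictor, and the step size is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic returns whose true volatility steps from 1% to 3% halfway through
N = 4000
true_sigma = np.where(np.arange(N) < N // 2, 0.01, 0.03)
r = true_sigma * rng.standard_normal(N)

mu = 0.02                      # LMS step size (assumed)
sigma2 = r[0] ** 2             # running variance estimate
est = np.empty(N)
for n in range(N):
    e = r[n] ** 2 - sigma2     # prediction error on the squared return
    sigma2 += mu * e           # stochastic-gradient (LMS) update
    est[n] = np.sqrt(sigma2)
```

The estimator tracks the volatility step online with a lag controlled by the step size, which is the trade-off any LMS-style volatility tracker faces between responsiveness and estimation noise.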
Savaux, Vincent
2014-01-01
This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
Bandyopadhyay, Saptarshi
Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. We demonstrate the effectiveness of these two proposed swarm
Directory of Open Access Journals (Sweden)
L Max Tarjan
Full Text Available Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or "PHRE") that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
Directory of Open Access Journals (Sweden)
Jian Zhao
2014-01-01
Full Text Available Road friction information is very important for vehicle active braking control systems such as ABS, ASR, or ESP. It is not easy to estimate the tire/road friction forces and coefficient accurately because of system nonlinearity, parameter uncertainty, and signal noise. In this paper, a robust and effective tire/road friction estimation algorithm for ABS is proposed, and its performance is discussed by simulation and experiment. The tire forces were observed by a discrete Kalman filter, and the road friction coefficient was then estimated by the recursive least squares method. The proposed algorithm was analysed and verified by simulation and road tests. A sliding-mode-based ABS with smooth wheel slip ratio control and a threshold-based ABS using pulse pressure control with significant fluctuations were used for the simulation. Finally, road tests were carried out in both winter and summer with a car equipped with the same threshold-based ABS, and the algorithm was evaluated on different road surfaces. The results show that the proposed algorithm can identify the variation of road conditions with considerable accuracy and response speed.
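The second stage of such a pipeline, recursive least squares on top of filtered force estimates, can be sketched for the scalar friction model Fx ≈ μ·Fz. The Kalman-filter force observer itself is omitted here, and all signals, noise levels, and the forgetting factor are assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam (scalar case)."""
    K = P * phi / (lam + phi * P * phi)    # gain
    theta = theta + K * (y - phi * theta)  # correct estimate with innovation
    P = (P - K * phi * P) / lam            # update (inflated) covariance
    return theta, P

# Toy data: friction force Fx = mu * Fz + noise, with mu stepping 0.8 -> 0.3 (ice)
theta, P = 0.0, 1e3
history = []
for t in range(400):
    mu_true = 0.8 if t < 200 else 0.3
    Fz = 4000.0 + 500.0 * rng.standard_normal()    # normal load [N]
    Fx = mu_true * Fz + 50.0 * rng.standard_normal()
    theta, P = rls_step(theta, P, Fz, Fx)
    history.append(theta)
```

The forgetting factor is what lets the estimator track the abrupt drop in friction when the surface changes, at the cost of slightly noisier estimates on a constant surface.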
A Scalable GVT Estimation Algorithm for PDES: Using Lower Bound of Event-Bulk-Time
Directory of Open Access Journals (Sweden)
Yong Peng
2015-01-01
Full Text Available Global Virtual Time computation of Parallel Discrete Event Simulation is crucial for conducting fossil collection and detecting the termination of simulation. The triggering condition of GVT computation in typical approaches is generally based on wall-clock time or logical time intervals. However, the GVT value depends on the timestamps of events rather than on wall-clock time or logical time intervals. Therefore, it is difficult for the existing approaches to select appropriate time intervals for computing the GVT value. In this study, we propose a scalable GVT estimation algorithm based on the Lower Bound of Event-Bulk-Time, which triggers the computation of the GVT value according to the number of processed events. In order to calculate the number of transient messages, our algorithm employs Event-Bulk to record the messages sent and received by Logical Processes. To eliminate the performance bottleneck, we adopt an overlapping computation approach to distribute the workload of GVT computation to all worker threads. We compare our algorithm with the fast asynchronous GVT algorithm using the PHOLD benchmark on a shared memory machine. Experimental results indicate that our algorithm has low overhead and achieves higher speedup and accuracy of GVT computation than the fast asynchronous GVT algorithm.
Directory of Open Access Journals (Sweden)
Delaram Houshmand Kouchi
2017-05-01
Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each could be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results, however with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are "conditioned" on the choices of the optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin
2008-01-01
http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under the assumption of a free-air anomaly consisting ...
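The Nettleton approach amounts to scanning candidate densities and picking the one whose Bouguer anomaly is least correlated with topography. A toy sketch with synthetic terrain (the slab constant and noise level are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic profile: terrain heights and a free-air anomaly for a "true" density
h = rng.uniform(0.0, 300.0, 100)      # topography [m]
c = 0.0419 * h                        # Bouguer slab correction per unit density
true_rho = 2.4                        # [g/cm^3]
free_air = true_rho * c + 0.5 * rng.standard_normal(100)   # [mGal]

# Nettleton scan: the density that decorrelates the anomaly from topography
best_rho, best_corr = None, np.inf
for rho in np.arange(1.8, 3.01, 0.1):
    bouguer = free_air - rho * c      # Bouguer anomaly for this candidate
    corr = abs(np.corrcoef(bouguer, h)[0, 1])
    if corr < best_corr:
        best_rho, best_corr = rho, corr
```

At the true density the residual anomaly is dominated by noise, so its correlation with topography collapses, which is exactly the minimum-correlation criterion the abstract describes.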
Investigating the impact of uneven magnetic flux density distribution on core loss estimation
DEFF Research Database (Denmark)
Niroumand, Farideh Javidi; Nymand, Morten; Wang, Yiren
2017-01-01
There are several approaches for loss estimation in magnetic cores, and all of these approaches rely highly on accurate information about the flux density distribution in the cores. It is often assumed that the magnetic flux density distributes evenly throughout the core, and the overall core loss is calculated according to an effective flux density value and the macroscopic dimensions of the cores. However, the flux distribution in the core can be altered by core shapes and/or operating conditions due to nonlinear material properties. This paper studies the element-wise estimation of the loss in magnetic...
A new algorithm for recursive estimation of ARMA parameters in reactor noise analysis
International Nuclear Information System (INIS)
Tran Dinh Tri
1992-01-01
In this paper a new recursive algorithm for estimating the parameters of the Autoregressive Moving Average (ARMA) model from measured data is presented. The Yule-Walker equations for the ARMA model are derived from the ARMA equation with innovations. The recursive algorithm is based on choosing an appropriate form of the operator functions and a suitable representation of the (n + 1)-th order operator functions in terms of those of lower order. Two cases were considered: when the order of the AR part is equal to that of the MA part, and the general case. (Author)
Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation
Medina, H.; Mutu, R.
2017-07-01
An algorithm to estimate internal loads for single-row angular contact ball bearings due to externally applied thrust loads and high-operating speeds is presented. A new runtime adaptive relaxation procedure and blending function is proposed which ensures algorithm stability whilst also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time in excess of approximately 80%. The model is validated based on a 218 angular contact bearing and shows excellent agreement compared to published results.
Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models.
Directory of Open Access Journals (Sweden)
Maarten Marsman
Full Text Available The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on anything more than being able to simulate from the model. Our focus is on statistical models in the exponential family, and we use two simple models from educational measurement to illustrate the contribution.
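The Single Variable Exchange step can be sketched on a toy exponential-family model: a normal likelihood with known variance whose normalizing constant we pretend not to know, so acceptance uses only the unnormalized kernel plus auxiliary data simulated at the proposed parameter. All tuning values here are assumptions for illustration, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(4)

x = rng.normal(2.0, 1.0, 20)   # observed data

def log_q(data, th):
    """Unnormalized log-kernel of N(th, 1); the normalizer is 'intractable'."""
    return -0.5 * np.sum((data - th) ** 2)

def log_prior(th):
    return -0.5 * th ** 2 / 100.0          # N(0, 100) prior

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.5 * rng.standard_normal()
    y = rng.normal(prop, 1.0, x.size)      # auxiliary data simulated at the proposal
    # Exchange ratio: the intractable normalizers of q cancel exactly
    log_a = (log_q(x, prop) + log_q(y, theta) + log_prior(prop)
             - log_q(x, theta) - log_q(y, prop) - log_prior(theta))
    if np.log(rng.random()) < log_a:
        theta = prop
    chain.append(theta)
```

The point of the construction is the cancellation in `log_a`: no normalizing constant is ever evaluated, only simulation from the model at the proposed parameter.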
Bonnema, Matthew G.; Sikder, Safat; Hossain, Faisal; Durand, Michael; Gleason, Colin J.; Bjerklie, David M.
2016-04-01
The objective of this study is to compare the effectiveness of three algorithms that estimate discharge from remotely sensed observables (river width, water surface height, and water surface slope) in anticipation of the forthcoming NASA/CNES Surface Water and Ocean Topography (SWOT) mission. SWOT promises to provide these measurements simultaneously, and the river discharge algorithms included here are designed to work with these data. Two algorithms are built around Manning's equation, namely the Metropolis Manning (MetroMan) method and the Mean Flow and Geomorphology (MFG) method, and one approach, the at-many-stations hydraulic geometry (AMHG) method, uses hydraulic geometry to estimate discharge. A well-calibrated and ground-truthed hydrodynamic model of the Ganges river system (HEC-RAS) was used as reference for three rivers of the Ganges River Delta: the main stem of the Ganges, the Arial-Khan, and the Mohananda Rivers. The high seasonal variability of these rivers due to the monsoon presented a unique opportunity to thoroughly assess the discharge algorithms on typical monsoon-regime rivers. The MFG method provided the most accurate discharge estimates in most cases, with an average relative root-mean-squared error (RRMSE) across all three reaches of 35.5%. It was followed closely by the Metropolis Manning algorithm, with an average RRMSE of 51.5%. However, the MFG method's reliance on knowledge of prior river discharge limits its application to ungauged rivers. In terms of input data requirements at ungauged regions with no prior records, the Metropolis Manning algorithm provides a more practical alternative, as it requires less ancillary data. The AMHG algorithm, while requiring the least prior river data, provided the least accurate discharge estimates, with average wet and dry season RRMSEs of 79.8% and 119.1%, respectively, across all rivers studied. This poor
Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks
Douik, Ahmed
2017-08-30
Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such networks, each node has no information about the network topology and possesses only a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and applying statistical inference methods. While the cardinality estimate allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way to route packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results confirm the theoretical analysis and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to estimate 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.
2017-04-12
measurement of CT outside of stringent laboratory environments. This study evaluated ECTemp™, a heart-rate-based extended Kalman filter CT...were lower than heart-rate-based models analyzed in previous studies. As such, ECTemp™ demonstrates strong potential for estimating circadian CT...control of heat transfer from the core to the extremities [11]. As such, heart rate plays a pivotal role in thermoregulation as a primary
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
DEFF Research Database (Denmark)
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
Kernel and wavelet density estimators on manifolds and more general metric spaces
DEFF Research Database (Denmark)
Cleanthous, G.; Georgiadis, Athanasios; Kerkyacharian, G.
We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich in allowing the development of smooth functional calculus with well localized spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established, which are analogous to the existing results in the classical setting of real...
DEFF Research Database (Denmark)
Soliman, Hammam Abdelaal Hammam; Wang, Huai; Gadalla, Brwene Salah Abdelkarim
2015-01-01
In power electronic converters, the reliability of DC-link capacitors is one of the critical issues. The estimation of their health status as an application of condition monitoring has been an attractive subject for industry and hence for academic research as well. More reliable solutions are required to be adopted by industry applications, in which usage of extra hardware, increased cost, and low estimation accuracy are the main challenges. A capacitance estimation method based on an Artificial Neural Network (ANN) algorithm is therefore proposed in this paper. The implemented ANN estimated the capacitance of the DC-link capacitor in a back-to-back converter. Analysis of the error of the capacitance estimation is also given...
Chang, Y.; Ding, Y.; Zhao, Q.; Zhang, S.
2017-12-01
The accurate estimation of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates affected by climate change, such as the Tibetan Plateau (TP). The MOD16 ET product has been validated and applied in many countries with various climates; however, its performance varies across climates and regions. Several studies have examined satellite-based ET models on the TP, but only a few have reported on the performance of MOD16 over the TP's heterogeneous land cover. This study proposes an improved algorithm for estimating ET based on a modified MOD16 method over alpine meadow on the TP in China. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was introduced to decrease the uncertainty of soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons between ET observed using eddy covariance (EC) and ET estimated using both the original and modified methods suggest that the modified algorithm performs better than the original MOD16 method: the coefficient of determination (R2) increased from 0.28 to 0.70, and the root mean square error (RMSE) decreased from 1.31 to 0.77 mm d-1. The modified algorithm also outperformed the original on precipitation days at the Suli and Hulugou sites, while it performed well for both non-precipitation and precipitation days at the Tanggula site. Comparisons of the 8-day ET estimates from the MOD16 product, the original MOD16 method, and the modified MOD16 method with observed ET suggest that the MOD16 product underestimated ET over the alpine meadow of the TP during the growing season
2015-09-30
titled “Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication ”. Patterns and trends of ocean... mammals in response to potentially negative interactions with human activity requires knowledge of how many animals are present in an area during a...specific time period. Many marine mammal species are relatively hard to sight, making standard visual methods of density estimation difficult and
Application of Matrix Pencil Algorithm to Mobile Robot Localization Using Hybrid DOA/TOA Estimation
Directory of Open Access Journals (Sweden)
Lan Anh Trinh
2012-12-01
Full Text Available Localization plays an important role in robotics for the tasks of monitoring, tracking and controlling a robot. Much effort has been made to address robot localization problems in recent years. However, despite many proposed solutions and thorough consideration, in terms of developing a low-cost and fast processing method for multiple-source signals, the robot localization problem is still a challenge. In this paper, we propose a solution for robot localization with regard to these concerns. In order to locate the position of a robot, both the coordinates and the orientation of the robot are necessary. We develop a localization method using the Matrix Pencil (MP) algorithm for hybrid detection of direction of arrival (DOA) and time of arrival (TOA). The TOA of the signal is estimated for computing the distance between the mobile robot and a base station (BS). Based on the distance and the estimated DOA, we can estimate the mobile robot's position. The characteristics of the algorithm are examined through analysing simulated experiments, and the results demonstrate the advantages of our method over previous works in dealing with the above challenges. The method is built on the low-cost infrastructure of radio frequency devices; the DOA/TOA estimation is performed with just a single singular value decomposition for fast processing. Finally, the MP algorithm combined with tracking using a Kalman filter allows our proposed method to locate the positions of multiple source signals.
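The final geometric step, converting one base station's TOA and DOA estimates into a position, is simple; a sketch assuming a 2-D geometry and a line-of-sight path (the MP estimation of TOA/DOA itself is not reproduced here):

```python
import math

C = 3.0e8  # propagation speed [m/s]

def locate(bs_xy, toa_s, doa_rad):
    """Position from a base station's TOA (range) and DOA (bearing)."""
    d = C * toa_s                       # range from time of arrival
    return (bs_xy[0] + d * math.cos(doa_rad),
            bs_xy[1] + d * math.sin(doa_rad))

# Robot 30 m from a base station at the origin, bearing 60 degrees
x, y = locate((0.0, 0.0), 30.0 / C, math.radians(60.0))
```

In practice the Kalman filter mentioned in the abstract would then smooth a sequence of such fixes over time.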
International Nuclear Information System (INIS)
Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng
2011-01-01
Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.
Directory of Open Access Journals (Sweden)
Ramakrishna R. Nemani
2012-01-01
Full Text Available Algorithms that use remotely-sensed vegetation indices to estimate gross primary production (GPP), a key component of the global carbon cycle, have gained a lot of popularity in the past decade. Yet despite the amount of research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of different vegetation indices from the Moderate Resolution Imaging Spectroradiometer (MODIS) in capturing the seasonal and the annual variability of GPP estimates from an optimal network of 21 FLUXNET forest tower sites. The tested indices include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation absorbed by plant canopies (FPAR). Our results indicated that single vegetation indices captured 50–80% of the variability of tower-estimated GPP, but no one index performed universally well in all situations. In particular, EVI outperformed the other MODIS products in tracking seasonal variations in tower-estimated GPP, but annual mean MODIS LAI was the best estimator of the spatial distribution of annual flux-tower GPP (GPP = 615 × LAI − 376, where GPP is in g C/m2/year). This simple algorithm rehabilitated earlier approaches linking ground measurements of LAI to flux-tower estimates of GPP and produced annual GPP estimates comparable to the MOD17 GPP product. As such, remote sensing-based estimates of GPP continue to offer a useful alternative to estimates from biophysical models, and the choice of the most appropriate approach depends on whether the estimates are required at annual or sub-annual temporal resolution.
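The reported fit is a one-line model; assuming the units given in the abstract (annual mean LAI in, g C/m2/year out):

```python
def annual_gpp_from_lai(lai):
    """Annual GPP (g C/m^2/yr) from mean annual MODIS LAI, per the reported fit."""
    return 615.0 * lai - 376.0

print(annual_gpp_from_lai(3.0))  # prints 1469.0
```

Note the fit implies zero GPP at an LAI of about 0.61, so it is only meaningful within the LAI range of the forest sites it was fitted on.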
Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large fluctuation of the Earth albedo. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)
2014-09-15
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as the inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
Directory of Open Access Journals (Sweden)
Junjie Lu
2018-01-01
Full Text Available Establishing schemes for accurate and computationally efficient performance estimation and fault diagnosis of turbofan engines has become a new research focus and challenge, as it can increase the reliability and stability of the turbofan engine and reduce life cycle costs. Accurate estimation of turbofan engine performance depends on a thorough understanding of the components' performance, which is described by component characteristic maps; the fault of each component can be regarded as a change of its characteristic map. In this paper, a novel method based on the Levenberg–Marquardt (LM) algorithm is proposed to enhance the fidelity of the performance estimation and the credibility of the fault diagnosis for the turbofan engine. The presented method utilizes the LM algorithm to locate the operating point in the characteristic maps, preparing for performance estimation and fault diagnosis. The accuracy of the proposed method is evaluated for estimating performance parameters in the transient case with Rayleigh process noise and Gaussian measurement noise. The comparison among the extended Kalman filter (EKF) method, the particle filter (PF) method and the proposed method is implemented in the abrupt fault case and the gradual degeneration case, and it is shown that the proposed method yields more accurate performance estimation and fault diagnosis of the turbofan engine than the currently popular EKF and PF diagnosis methods.
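A minimal Levenberg–Marquardt loop of the kind used to locate an operating point on a characteristic map can be sketched on a toy curve fit; the engine model itself is not reproduced here, and the damping schedule is an assumed one:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=100):
    """Minimal Levenberg-Marquardt loop for nonlinear least squares."""
    p = np.array(p0, dtype=float)
    lam = 1e-3
    cost = 0.5 * np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        A = J.T @ J + lam * np.eye(len(p))     # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        new_cost = 0.5 * np.sum(residual(p + step) ** 2)
        if new_cost < cost:                    # accept: act more like Gauss-Newton
            p, cost, lam = p + step, new_cost, lam * 0.3
        else:                                  # reject: act more like gradient descent
            lam *= 10.0
    return p

# Toy "map": y = a * exp(b * t); recover (a, b) from sampled points
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 0.0])
```

The adaptive damping parameter `lam` is what distinguishes LM from plain Gauss-Newton: it interpolates between the fast but fragile Gauss-Newton step and a conservative gradient-descent step.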
International Nuclear Information System (INIS)
Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin
2011-01-01
Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.
International Nuclear Information System (INIS)
Parra, J.C.; Acevedo, P.S.; Sobrino, J.A.; Morales, L.J.
2006-01-01
Four algorithms based on the split-window technique for estimating land surface temperature from data provided by the Advanced Very High Resolution Radiometer (AVHRR), on board the National Oceanic and Atmospheric Administration (NOAA) series of satellites, are implemented. These algorithms include corrections for atmospheric characteristics and for the emissivity of the different land surfaces. Fourteen AVHRR-NOAA images corresponding to October 2003 and January 2004 were used. Simultaneously, soil temperature measurements were collected at the Carillanca hydro-meteorological station in the La Araucanía Region, Chile (38 deg 41 min S; 72 deg 25 min W). Of all the algorithms used, the best results correspond to the model proposed by Sobrino and Raissouni (2000): the mean and standard deviation of the difference between the soil temperature measured in situ and that estimated by the algorithm were -0.06 and 2.11 K, respectively. (Author)
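A generic split-window correction of this kind combines the two thermal channels with emissivity and water-vapour terms. The sketch below uses that generic form; the coefficients are illustrative placeholders, not the fitted values of any of the four published algorithms:

```python
def split_window_lst(t4, t5, emis_mean, emis_diff, w,
                     c=(1.0, 0.3, 0.0, 50.0, -10.0, -150.0, 20.0)):
    """Generic split-window land surface temperature (kelvin).
    t4, t5: brightness temperatures of AVHRR channels 4 and 5 (K);
    emis_mean / emis_diff: mean and difference of the channel emissivities;
    w: total column water vapour (g/cm^2).
    The coefficients c are illustrative placeholders only."""
    dt = t4 - t5
    return (t4 + c[0] * dt + c[1] * dt ** 2 + c[2]
            + (c[3] + c[4] * w) * (1.0 - emis_mean)
            + (c[5] + c[6] * w) * emis_diff)
```

The emissivity terms are what distinguish surface types (bare soil, vegetation, water); with emissivity near 1 and a dry atmosphere the estimate collapses toward the channel-4 brightness temperature.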
An Updated Algorithm for Estimation of Pesticide Exposure Intensity in the Agricultural Health Study
Directory of Open Access Journals (Sweden)
Aaron Blair
2011-12-01
Full Text Available An algorithm developed to estimate pesticide exposure intensity for use in epidemiologic analyses was revised based on data from two exposure monitoring studies. In the first study, we estimated relative exposure intensity based on the results of measurements taken during the application of the herbicide 2,4-dichlorophenoxyacetic acid (2,4-D; n = 88) and the insecticide chlorpyrifos (n = 17). Modifications to the algorithm weighting factors were based on geometric means (GM) of post-application urine concentrations for applicators grouped by application method and use of chemically-resistant (CR) gloves. Measurement data from a second study were also used to evaluate relative exposure levels associated with airblast as compared to hand spray application methods. Algorithm modifications included an increase in the exposure reduction factor for use of CR gloves from 40% to 60%, an increase in the application method weight for boom spray relative to in-furrow and for airblast relative to hand spray, and a decrease in the weight for mixing relative to the new weights assigned for application methods. The weighting factors for the revised algorithm now incorporate exposure measurements taken on Agricultural Health Study (AHS) participants for the application methods and personal protective equipment (PPE) commonly reported by study participants.
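The additive-weights-plus-PPE-factor structure described above can be sketched as follows. Only the 60% CR-glove reduction is taken from the abstract; the numeric application-method and mixing weights are hypothetical placeholders, not the published AHS values:

```python
# Hypothetical weights for illustration -- only the 60% chemically-resistant
# glove reduction comes from the abstract; these are NOT the published values.
APPLICATION_WEIGHT = {"in-furrow": 1.0, "boom spray": 2.0,
                      "airblast": 3.0, "hand spray": 4.0}
MIXING_WEIGHT = 1.0

def exposure_intensity(method, mixes_own_pesticide=True, cr_gloves=False):
    """Additive method/mixing score, then a multiplicative PPE factor."""
    score = APPLICATION_WEIGHT[method]
    if mixes_own_pesticide:
        score += MIXING_WEIGHT
    if cr_gloves:
        score *= 1.0 - 0.60   # 60% exposure reduction for CR gloves
    return score
```

The revision described in the abstract amounts to re-tuning these weights against the urinary GM measurements, so that the algorithm's ordering of scenarios matches the observed ordering of exposures.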
Estimating population density and connectivity of American mink using spatial capture-recapture.
Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P
2016-06-01
Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated in a model that simultaneously estimates abundance while accounting for landscape connectivity. We demonstrate the use of capture-recapture to develop a model of animal density, using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture (SCR) models to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved.
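The E- and M-steps of such an EM scheme are easiest to see on a much simpler latent-variable problem. The sketch below fits a two-component Gaussian mixture (a toy stand-in for the Copas-like model's latent selection variable, not its actual likelihood):

```python
import math

def em_mixture(data, iters=200):
    """EM for a two-component 1-D Gaussian mixture with fixed unit variances.
    E-step: responsibilities of component 1; M-step: re-estimate the means
    and the mixing weight from those responsibilities."""
    mu1, mu2, pi = min(data), max(data), 0.5
    for _ in range(iters):
        resp = []
        for x in data:
            p1 = pi * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1.0 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(p1 / (p1 + p2))          # E-step
        n1 = sum(resp)
        n2 = len(data) - n1
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1   # M-step
        mu2 = sum((1.0 - r) * x for r, x in zip(resp, data)) / n2
        pi = n1 / len(data)
    return mu1, mu2, pi
```

The appeal of EM here mirrors the article's point: each step has a closed form and the iteration is monotone in the full likelihood, which avoids the non-convergence that plagues direct maximization of the observed likelihood.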
International Nuclear Information System (INIS)
Cheng, X C; Su, S J; Wang, Y K; Du, J B
2006-01-01
In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guided Vehicle (AGV) and a single base-station, and then establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on the above model, an envelope correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that of the reference signal, there will be a sharp correlation value in their envelope correlation function after they are processed by the above algorithm; otherwise, there will be no prominent correlation value. Thus, the AGV can identify each base-station easily.
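The sharp peak that lets the AGV identify its base-station comes from the autocorrelation property of m-sequences, which a short sketch can demonstrate (direct circular correlation here, rather than the paper's FFT-based envelope method):

```python
def lfsr_mseq(taps, state, n):
    """Generate n chips of a +/-1 maximal-length sequence from a linear
    feedback shift register; `taps` are the state indices XORed into the
    feedback bit."""
    out = []
    for _ in range(n):
        out.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def circular_corr(a, b):
    """Circular cross-correlation of two equal-length chip sequences."""
    n = len(a)
    return [sum(a[i] * b[(i + k) % n] for i in range(n)) for k in range(n)]
```

For a degree-3 register (period 7) the autocorrelation is 7 at zero lag and -1 at every other lag; a received signal carrying a different code would produce no such peak, which is exactly the discrimination the TDE algorithm exploits.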
Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad
2017-09-01
This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, a magnetometer, a sun sensor and a star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, an inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation. Therefore, the inverse of a 3n×3n matrix is replaced by an inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce the pointing accuracy; therefore, a calibration algorithm is utilized for estimating the main gyro parameters.
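Murrell's trick is easiest to see in one dimension, where folding measurements in one at a time turns each matrix inverse into a scalar division; the sketch below assumes a scalar state with a unit measurement model, not the satellite's full quaternion EKF:

```python
def sequential_update(x, P, zs, R):
    """Murrell-style sequential measurement update for a scalar state (H = 1).
    Each measurement is folded in on its own, so the 'matrix inverse' in the
    Kalman gain is a scalar division -- the same idea that replaces one
    3n x 3n inverse with n inverses of 3 x 3 blocks in the vector case."""
    for z in zs:
        K = P / (P + R)       # Kalman gain for a single measurement
        x = x + K * (z - x)   # state update
        P = (1.0 - K) * P     # covariance update
    return x, P
```

Processing the measurements sequentially yields the same posterior as the batch update (posterior precision 1/P plus one 1/R term per measurement), which is why the cheaper form loses no information.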
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.
Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich
2016-01-01
We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David
2013-08-07
Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r⁴), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T).
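The factorization itself can be sketched in a few lines: a rank-4 tensor is written as V[p,q,s,t] = Σ_{PQ} X[p][P] X[q][P] Z[P][Q] X[s][Q] X[t][Q]. The random factors and sizes below are purely illustrative, not the p2RDM integrals:

```python
import itertools
import random

random.seed(0)
r, naux = 4, 3   # single-particle basis size and number of auxiliary functions

# Random collocation factors X and a symmetric core matrix Z (toy values).
X = [[random.uniform(-1, 1) for _ in range(naux)] for _ in range(r)]
Z = [[random.uniform(-1, 1) for _ in range(naux)] for _ in range(naux)]
for P in range(naux):
    for Q in range(P):
        Z[P][Q] = Z[Q][P]   # symmetric Z gives the (pq)<->(st) symmetry

def thc_element(p, q, s, t):
    """V[p,q,s,t] = sum_{P,Q} X[p][P] X[q][P] Z[P][Q] X[s][Q] X[t][Q]."""
    return sum(X[p][P] * X[q][P] * Z[P][Q] * X[s][Q] * X[t][Q]
               for P in range(naux) for Q in range(naux))

# The factors store r*naux + naux**2 numbers in place of r**4 tensor entries.
V = {idx: thc_element(*idx) for idx in itertools.product(range(r), repeat=4)}
```

The compression is what drives the scaling result: with O(r) auxiliary functions the factors hold O(r²) numbers instead of the O(r⁴) entries of the full tensor, and contractions can be reorganized to touch only the factors.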
International Nuclear Information System (INIS)
Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.
2016-01-01
Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood Search algorithm. • Socio-economic variables are used, and a one-year-ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out on real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called the Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider the number of macroeconomic variables used for prediction to be a parameter of the algorithm (i.e., fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and which variables to use. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments on a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
Directory of Open Access Journals (Sweden)
Milinkovitch Michel C
2007-11-01
Full Text Available Abstract Background Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.
Parametric estimation of the Duffing system by using a modified gradient algorithm
International Nuclear Information System (INIS)
Aguilar-Ibanez, Carlos; Sanchez Herrera, Jorge; Garrido-Moctezuma, Ruben
2008-01-01
The Letter presents a strategy for recovering the unknown parameters of the Duffing oscillator from a measurable output signal. The suggested approach employs the construction of an integral parametrization of one auxiliary output, calculated by measuring the difference between the output and its delayed version. We first estimate the auxiliary output and then apply a modified gradient algorithm, adjusting the gains of the proposed linear estimator until the estimation error converges to zero. The convergence of the proposed scheme is shown using the Lyapunov method.
Application of Firefly Algorithm for Parameter Estimation of Damped Compound Pendulum
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents an investigation into parameter estimation of a damped compound pendulum using the firefly algorithm. Estimating the damped compound pendulum requires a good model of the system; the aim of the work described in this paper is therefore to obtain a dynamic model of the damped compound pendulum. Considering a discrete-time form for the system, an autoregressive with exogenous input (ARX) model structure was selected. To collect input-output data from the experiment, a PRBS signal was used as the input to regulate the motor speed, while the output signal was taken from the position sensor. The firefly algorithm (FA) was used to estimate the parameters of a second-order model. The model was validated by comparing the measured output against the predicted output in terms of their closeness, quantified by the mean square error (MSE); the performance of FA is likewise reported in terms of MSE.
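The attraction-plus-random-walk mechanics of the firefly algorithm can be sketched on a generic test function; the parameter values, search domain, and objective below are illustrative, not the pendulum ARX fitting problem:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, beta0=1.0, gamma=0.05, alpha=0.2):
    """Minimal firefly algorithm minimizing f over [-5, 5]^dim.
    Brightness is -f; each dimmer firefly moves toward every brighter one,
    with attractiveness decaying with squared distance and a shrinking
    random-walk term."""
    random.seed(1)   # deterministic for the sketch
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    cost = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:          # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a)
                              + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
        alpha *= 0.97                          # cool down the random walk
    return min(pop, key=f)
```

For the ARX application, f would be the MSE between measured and model-predicted pendulum positions as a function of the second-order model coefficients.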
MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks
Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna
In recent years, technological development has encouraged several applications based on distributed communication networks without any fixed infrastructure. This paper addresses the problem of providing a collaborative early warning system for multiple mobile nodes against a fast moving object. The solution is provided subject to system-level constraints: motion of nodes, antenna sensitivity, and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase detects the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second phase broadcasts the warning message using a low-directivity broad antenna beam from a 2×2 antenna array, which in the third phase is detected by receiving nodes using direction of arrival (DOA) estimation. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes. The position of the fast arriving object can be estimated using the MUSIC algorithm for warning beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. Simulation is performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial grade detection and warning system.
Energy Technology Data Exchange (ETDEWEB)
Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2012-08-15
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely
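The FCM step can be illustrated on one-dimensional "gray-level" values; this toy two-cluster version omits the adaptive cluster-number selection, the breast segmentation, and the SVM stage described above:

```python
def fuzzy_cmeans_1d(data, m=2.0, iters=50):
    """Two-cluster fuzzy c-means on 1-D intensities (a toy version of the
    adaptive multiclass FCM step). Each point gets soft memberships summing
    to 1; centers are membership-weighted means.  Deterministic min/max
    initialization is a simplification for the sketch."""
    centers = [min(data), max(data)]
    u = []
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centers]   # avoid divide-by-zero
            u.append([1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
                      for j in range(2)])
        centers = [sum(u[i][j] ** m * x for i, x in enumerate(data)) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(2)]
    return centers, u
```

In the full algorithm the number of clusters is chosen per mammogram from image properties, and the resulting soft cluster maps feed the SVM that labels dense versus fatty tissue.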
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated against a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparison of the assessment accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm outperformed the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the normalized root mean square error (NRMSE) by 25.54% compared with the FSA algorithm, and by 29.66% compared with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric
2017-01-01
This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
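The modeled density is a weighted sum of kernels at training points. The sketch below evaluates such a mixture with fixed stand-in scores; in the actual kernel mixture network the scores feeding the softmax are produced by a deep network conditioned on the input:

```python
import math

def softmax(scores):
    """Outer-layer weights of the mixture come from a softmax over scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def kernel_mixture_density(x, centers, weights, bandwidth=0.5):
    """p(x) = sum_k w_k * N(x; c_k, bandwidth^2): Gaussian kernels centered
    at (a subset of) training points, mixed with softmax weights."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(w * norm * math.exp(-0.5 * ((x - c) / bandwidth) ** 2)
               for w, c in zip(weights, centers))
```

Because the weights sum to 1 and each kernel is a normalized density, the mixture integrates to 1 for any scores the network produces, which is what makes the construction a valid conditional density.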
Bulk density estimation using a 3-dimensional image acquisition and analysis system
Directory of Open Access Journals (Sweden)
Heyduk Adam
2016-01-01
Full Text Available The paper presents a concept of dynamic bulk density estimation of a particulate matter stream using a 3-d image analysis system and a conveyor belt scale. A method of image acquisition should be adjusted to the type of scale. The paper presents some laboratory results of static bulk density measurements using the MS Kinect time-of-flight camera and OpenCV/Matlab software. Measurements were made for several different size classes.
Trap array configuration influences estimates and precision of black bear density and abundance.
Directory of Open Access Journals (Sweden)
Clay M Wilton
Full Text Available Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide-ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affect precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km² and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km² study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of
Estimating peer density effects on oral health for community-based older adults.
Chakraborty, Bibhas; Widener, Michael J; Mirzaei Salehabadi, Sedigheh; Northridge, Mary E; Kum, Susan S; Jin, Zhu; Kunzel, Carol; Palmer, Harvey D; Metcalf, Sara S
2017-12-29
As part of a long-standing line of research regarding how peer density affects health, researchers have sought to understand the multifaceted ways that the density of contemporaries living and interacting in proximity to one another influences social networks and knowledge diffusion, and subsequently health and well-being. This study examined peer density effects on oral health for racial/ethnic minority older adults living in northern Manhattan and the Bronx, New York, NY. Peer age-group density was estimated by smoothing US Census data with 4 kernel bandwidths ranging from 0.25 to 1.50 miles. Logistic regression models were developed using these spatial measures and data from the ElderSmile oral and general health screening program that serves predominantly racial/ethnic minority older adults at community centers in northern Manhattan and the Bronx. The oral health outcomes modeled as dependent variables were ordinal dentition status and binary self-rated oral health. After construction of kernel density surfaces and multiple imputation of missing data, logistic regression analyses were performed to estimate the effects of peer density and other sociodemographic characteristics on the oral health outcomes of dentition status and self-rated oral health. Overall, higher peer density was associated with better oral health for older adults when estimated using smaller bandwidths (0.25 and 0.50 mile). That is, statistically significant relationships (p < 0.05) between peer density and improved dentition status were found when peer density was measured assuming a more local social network. As with dentition status, a positive significant association was found between peer density and fair or better self-rated oral health when peer density was measured assuming a more local social network. This study provides novel evidence that the oral health of community-based older adults is affected by peer density in an urban environment. To the extent that peer density signifies the potential for
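The kernel-smoothing step can be sketched as follows. The site locations, counts, and Gaussian kernel choice are illustrative assumptions (the abstract does not specify the kernel form), but the bandwidth plays the role of the 0.25 to 1.50 mile range described above:

```python
import math

def peer_density(point, sites, counts, bandwidth):
    """Gaussian kernel density surface over 2-D census site locations.
    Each site contributes its peer count, smoothed with the given bandwidth
    (in miles). Smaller bandwidths keep the estimate local; larger ones blur
    it toward a regional average."""
    total = 0.0
    for (sx, sy), n in zip(sites, counts):
        d2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2
        total += (n * math.exp(-0.5 * d2 / bandwidth ** 2)
                  / (2.0 * math.pi * bandwidth ** 2))
    return total
```

Re-evaluating the surface at each screening participant's location, once per bandwidth, yields the spatial covariates that enter the logistic regressions; the finding that only the small-bandwidth versions were significant corresponds to the "more local social network" interpretation.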
A citizen science based survey method for estimating the density of urban carnivores
Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.
2018-01-01
Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring the development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 suburban areas of approximately 1 km² each, in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km⁻², double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km⁻²). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on
A group contribution method to estimate the densities of ionic liquids
International Nuclear Information System (INIS)
Qiao Yan; Ma Youguang; Huo Yan; Ma Peisheng; Xia Shuqian
2010-01-01
Densities of ionic liquids at different temperatures and pressures were collected from 84 references. The collection contains 7381 data points derived from 123 pure ionic liquids and 13 kinds of binary ionic liquid mixtures. On the basis of the collected database, a group contribution method based on 51 groups was used to predict the densities of ionic liquids. In the group partition, the effect of interaction among several substituents on the same center was considered; the same structure in different substituents may have different group values. For the estimation of pure ionic liquid densities, the results show that the average relative error is 0.88% and the standard deviation (S) is 0.0181. Using the set of group values, the densities of three further pure ionic liquids were predicted; the average relative error is 0.27% and the S is 0.0048. Ionic liquid mixtures are treated as ideal mixtures, so the group contribution method was also used to estimate their densities; the average relative error is 1.22% with an S of 0.0607. The method can further be used to estimate the densities of MClₓ-type ionic liquids, which are produced by mixing an ionic liquid with a Cl⁻ anion and a metal chloride.
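The group-contribution idea can be sketched as follows. The group set and the mass/volume values below are illustrative placeholders, not the 51-group table fitted in the study:

```python
# Illustrative group-contribution density estimate for an ionic liquid.
# Group molar masses (g/mol) and molar-volume contributions (cm^3/mol)
# are hypothetical stand-ins, not the fitted values from the paper.
GROUP_MASS = {"cation_core": 111.2, "CH2": 14.0, "CH3": 15.0, "anion": 145.0}
GROUP_VOLUME = {"cation_core": 96.0, "CH2": 16.5, "CH3": 22.0, "anion": 73.0}

def estimate_density(group_counts):
    """Density = total molar mass / total molar volume (g/cm^3)."""
    mass = sum(n * GROUP_MASS[g] for g, n in group_counts.items())
    volume = sum(n * GROUP_VOLUME[g] for g, n in group_counts.items())
    return mass / volume

def relative_error(predicted, measured):
    """Relative error metric used to score predictions against data."""
    return abs(predicted - measured) / measured
```

A liquid is described simply by how many of each group it contains; fitting the per-group values against a large experimental database is what the paper's method does.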
Estimating the amount and distribution of radon flux density from the soil surface in China
International Nuclear Information System (INIS)
Zhuo Weihai; Guo Qiuju; Chen Bo; Cheng Guan
2008-01-01
Based on an idealized model, both the annual and the seasonal radon (²²²Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil ²²⁶Ra content with a global ecosystems database. Digital maps of the ²²²Rn flux density in China were constructed at a spatial resolution of 25 km × 25 km by interpolation among the estimated data. The area-weighted annual average ²²²Rn flux density from the soil surface across China was estimated to be 29.7 ± 9.4 mBq m⁻² s⁻¹. Both regional and seasonal variations in the ²²²Rn flux densities are significant in China. Annual average flux densities in southeastern and northwestern China are generally higher than those in other regions, because of high soil ²²⁶Ra content in the southeastern area and high soil aridity in the northwestern one. The seasonal average flux density is generally higher in summer/spring than in winter, since relatively higher soil temperature and lower soil water saturation in summer/spring than in other seasons are common in China.
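An area-weighted national average like the 29.7 mBq m⁻² s⁻¹ figure is a weighted mean over grid cells; the values in the sketch below are made-up illustrations, not the study's data:

```python
def area_weighted_mean(flux, areas):
    """Weighted mean of per-cell flux densities, weighted by cell areas."""
    assert len(flux) == len(areas)
    total_area = sum(areas)
    return sum(f * a for f, a in zip(flux, areas)) / total_area
```

Cells covering larger areas pull the average toward their value, which is why an area-weighted mean differs from a plain site average when sites are unevenly distributed.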
PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA
Directory of Open Access Journals (Sweden)
Henrique Seixas Barros
2015-04-01
Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R²) and the Akaike information criterion (AIC), and were validated with a dataset of 125 plots different from those used to generate the models. The best-performing model for estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables, with R² = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
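The AIC-based comparison of candidate predictors can be illustrated with a one-predictor least-squares fit; the clay, pH and bulk-density values below are fabricated for illustration, not the plot data:

```python
import math

def ols_fit(x, y):
    """One-predictor least squares: returns (intercept, slope, RSS)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    return intercept, slope, rss

def aic(rss, n, k):
    """Gaussian-likelihood AIC: n*ln(RSS/n) + 2k, k = number of parameters."""
    return n * math.log(rss / n) + 2 * k
```

A lower AIC flags the predictor subset that better balances fit and model size, which is how stepwise selection decides between, e.g., a clay-only model and a pH-only model.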
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Directory of Open Access Journals (Sweden)
Shouyang Liu
2017-05-01
Crop density is a key agronomic trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used; however, it is tedious and time consuming. The main objective of this work is to develop a machine-vision-based method to automate the density survey of wheat at early stages. Images taken with a high-resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasting conditions, with sowing densities ranging from 100 to 600 seeds m⁻². Results demonstrate that the density is accurately estimated, with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages.
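The green-pixel classification step can be illustrated with the excess-green (ExG) vegetation index, a common choice for separating plant pixels from soil. The threshold here is illustrative; the paper's exact classifier may differ:

```python
def excess_green(r, g, b):
    """ExG index: emphasises the green channel relative to red and blue."""
    return 2 * g - r - b

def green_mask(pixels, thresh=20):
    """Binarise an image (rows of (R, G, B) tuples) into plant / background."""
    return [[1 if excess_green(*px) > thresh else 0 for px in row]
            for row in pixels]
```

Connected components are then extracted from this mask row by row, and their features (area, shape) feed the plant-count regressor.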
Kamali, Tahereh; Stashuk, Daniel
2016-10-01
Robust and accurate segmentation of brain white matter (WM) fiber bundles assists in diagnosing and assessing the progression or remission of neuropsychiatric diseases such as schizophrenia, autism and depression. Supervised segmentation methods are infeasible in most applications, since generating gold standards is too costly. Hence, there is a growing interest in designing unsupervised methods. However, most conventional unsupervised methods require the number of clusters to be known in advance, which is not possible in most applications. The purpose of this study is to design an unsupervised segmentation algorithm for brain white matter fiber bundles which can automatically segment fiber bundles using intrinsic diffusion tensor imaging data information, without requiring any prior information or assumption about data distributions. Here, a new density-based clustering algorithm called neighborhood distance entropy consistency (NDEC) is proposed, which discovers natural clusters within data by simultaneously utilizing both local and global density information. The performance of NDEC is compared with other state-of-the-art clustering algorithms, including chameleon, spectral clustering, DBSCAN and k-means, using Johns Hopkins University publicly available diffusion tensor imaging data. The performance of NDEC and the other employed clustering algorithms was evaluated using the dice ratio as an external evaluation criterion and the density-based clustering validation (DBCV) index as an internal evaluation metric. Across all employed clustering algorithms, NDEC obtained the highest average dice ratio (0.94) and DBCV value (0.71). NDEC can find clusters with arbitrary shapes and densities and consequently can be used for WM fiber bundle segmentation where there is no distinct boundary between various bundles. NDEC may also be used as an effective tool in other pattern recognition and medical diagnostic systems in which discovering natural clusters within data is a necessity.
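NDEC itself is not specified in enough detail here to reproduce, but the density-based clustering family it belongs to can be illustrated with a minimal version of DBSCAN, one of the baselines the study compares against (a sketch, not the paper's implementation):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: cluster ids start at 1, noise is labelled -1."""
    labels = [None] * len(points)

    def region(i):
        # Indices of all points within eps of point i (including i itself).
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = region(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may later be absorbed as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: relabel, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = region(j)
            if len(jn) >= min_pts:  # core point: expand the cluster
                seeds.extend(jn)
    return labels
```

Like NDEC, DBSCAN finds arbitrarily shaped clusters from density information alone, but it needs the global `eps`/`min_pts` parameters, which is the kind of limitation NDEC's combined local/global density criterion aims to remove.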
Directory of Open Access Journals (Sweden)
Gustavo Sanchez
2012-01-01
This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with an efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm which increases ME quality when compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared with the optimum results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS achieved a complexity reduction of more than 45 times compared with FS. The quality gains relative to DS caused an expected increase in DMPDS complexity, which uses 6.4 times more calculations than DS. The DMPDS architecture was designed with a focus on high performance and low cost, targeting the processing of Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared with related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.
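The baseline Diamond Search that DMPDS improves upon can be sketched as follows: walk a large diamond pattern of candidate displacements until the best match sits at the centre, then refine with a small diamond. This is the classic DS, not the paper's DMPDS variant:

```python
import random

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]       # large diamond pattern
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]  # small diamond refinement

def sad(cur, ref, x, y, dx, dy, bs):
    """Sum of absolute differences between a block and its displaced match."""
    total = 0
    for i in range(bs):
        for j in range(bs):
            rx, ry = x + dx + i, y + dy + j
            if not (0 <= rx < len(ref) and 0 <= ry < len(ref[0])):
                return float("inf")  # penalise out-of-frame candidates
            total += abs(cur[x + i][y + j] - ref[rx][ry])
    return total

def diamond_search(cur, ref, x, y, bs):
    """Walk the large diamond until centred, then refine with the small one."""
    cx = cy = 0
    while True:
        best = min(LDSP, key=lambda d: sad(cur, ref, x, y, cx + d[0], cy + d[1], bs))
        if best == (0, 0):
            break
        cx, cy = cx + best[0], cy + best[1]
    best = min(SDSP, key=lambda d: sad(cur, ref, x, y, cx + d[0], cy + d[1], bs))
    return cx + best[0], cy + best[1]
```

DMPDS reportedly adds dynamically placed extra search points to reduce exactly the local-minimum traps this greedy walk can fall into.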
Leakage Detection and Estimation Algorithm for Loss Reduction in Water Piping Networks
Directory of Open Access Journals (Sweden)
Kazeem B. Adedeji
2017-10-01
Water loss through leaking pipes constitutes a major challenge to the operational service of water utilities. In recent years, increasing concern about the financial loss and environmental pollution caused by leaking pipes has been driving the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are dispersed in nature, with numerous nodes and branches. Consequently, identifying the segment(s) of the network, and the exact leaking pipelines connected to those segment(s), where higher background leakage outflow occurs is a challenging task. Background leakage concerns the outflow from small cracks or deteriorated joints; because such flows are diffuse, they are not characterised by a quick pressure drop and are not detectable by measuring instruments. Consequently, they go unreported for long periods of time, adding substantially to the volume of water lost. Most of the existing research focuses on the detection and localisation of burst-type leakages, which are characterised by a sudden pressure drop. In this work, an algorithm for detecting and estimating background leakage in water distribution networks is presented. The algorithm integrates a leakage model into a classical WDN hydraulic model for solving the network leakage flows. The applicability of the developed algorithm is demonstrated on two different water networks. The results of the tested networks are discussed, and the solutions obtained show the benefits of the proposed algorithm. Notably, the algorithm permits the detection of critical segments or pipes of the network experiencing higher leakage outflow and indicates the probable pipes of the network where pressure control can be performed. However, the possible position of pressure control elements along such critical pipes will be addressed in future work.
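Background-leakage models of this kind are commonly written as a pressure-dependent outflow along each pipe (a Germanopoulos-type form, q = β·L·Pᵅ). The sketch below uses that common formulation with illustrative coefficients; the paper's exact leakage model may differ:

```python
def background_leakage(beta, length_m, pressure_m, alpha=1.18):
    """Pressure-dependent background leakage outflow along one pipe.

    Germanopoulos-type form q = beta * L * P**alpha; beta and alpha here
    are illustrative values, not the paper's calibrated parameters.
    """
    return beta * length_m * max(pressure_m, 0.0) ** alpha

def network_leakage(pipes):
    """Total background leakage over (beta, length, pressure) tuples."""
    return sum(background_leakage(b, length, p) for b, length, p in pipes)
```

Because the outflow grows with pressure, coupling this term into the hydraulic solver lets the algorithm rank segments by leakage and point to where pressure control would pay off.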
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage: both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of distributed kernel density estimation in [4], which assumed an intermediate party to help in the computation.
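The two building blocks named above, kernel density estimation and random data perturbation, can be sketched in a few lines. This is a generic illustration, not the paper's protocol (which additionally uses verifiable secret sharing):

```python
import math
import random

def gaussian_kde(x, data, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    norm = len(data) * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm

def perturb(data, sigma, rng):
    """Random data perturbation: add zero-mean Gaussian noise before sharing."""
    return [d + rng.gauss(0.0, sigma) for d in data]
```

With small noise, the perturbed data still yields a usable density surface for clustering while individual values are masked; the protocol's job is to make this sharing verifiable without a trusted third party.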
Estimation of current density distribution of PAFC by analysis of cell exhaust gas
Energy Technology Data Exchange (ETDEWEB)
Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan); Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)
1996-12-31
Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly, our approach can easily be applied to other rare predator species.
A hierarchical model for estimating density in camera-trap studies
Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.
2009-01-01
Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km² during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.
Application of genetic algorithm (GA) technique on demand estimation of fossil fuels in Turkey
International Nuclear Information System (INIS)
Canyurt, Olcay Ersel; Ozturk, Harun Kemal
2008-01-01
The main objective is to investigate Turkey's fossil fuel demand, projections and supplies by using the structure of Turkish industry and economic conditions. This study develops scenarios to analyze fossil fuel consumption and makes future projections based on a genetic algorithm (GA). The models, developed in nonlinear form, are applied to the coal, oil and natural gas demand of Turkey. Genetic algorithm demand estimation models (GA-DEM) are developed to estimate future coal, oil and natural gas demand values based on population, gross national product, and import and export figures. The proposed models can serve as alternative estimation techniques for the future fossil fuel utilization values of any country. In this study, the coal, oil and natural gas consumption of Turkey is projected. Turkish fossil fuel demand has increased dramatically; in particular, coal, oil and natural gas consumption values are estimated to increase by factors of almost 2.82, 1.73 and 4.83, respectively, between 2000 and 2020. In the figures, GA-DEM results are compared with World Energy Council Turkish National Committee (WECTNC) projections. The observed results indicate that WECTNC overestimates fossil fuel consumption. (author)
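The GA-based demand-estimation idea can be sketched as fitting the parameters of a nonlinear demand model by evolutionary search. The model form (demand = a·populationᵇ), the data and the GA settings below are all illustrative assumptions, not the paper's fitted models:

```python
import random

def fitness(params, data):
    """Negative squared error of a toy nonlinear demand model d = a * pop**b."""
    a, b = params
    return -sum((a * pop ** b - demand) ** 2 for pop, demand in data)

def genetic_fit(data, rng, pop_size=40, gens=60):
    """Elitist GA: keep the best half, breed children by blend + mutation."""
    pop = [(rng.uniform(0, 2), rng.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, data), reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            pa, pb = rng.sample(survivors, 2)
            children.append(((pa[0] + pb[0]) / 2 + rng.gauss(0, 0.05),
                             (pa[1] + pb[1]) / 2 + rng.gauss(0, 0.05)))
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, data))
```

In GA-DEM the chromosome would instead encode the coefficients of models driven by population, GNP, imports and exports, but the fit-by-evolution loop is the same.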
Giorgis, L; Frogerais, P; Amblard, A; Donal, E; Mabo, P; Senhadji, L; Hernández, A I
2012-11-01
Previous studies have shown that cardiac microacceleration signals, recorded either cutaneously, or embedded into the tip of an endocardial pacing lead, provide meaningful information to characterize the cardiac mechanical function. This information may be useful to personalize and optimize the cardiac resynchronization therapy, delivered by a biventricular pacemaker, for patients suffering from chronic heart failure (HF). This paper focuses on the improvement of a previously proposed method for the estimation of the systole period from a signal acquired with a cardiac microaccelerometer (SonR sensor, Sorin CRM SAS, France). We propose an optimal algorithm switching approach, to dynamically select the best configuration of the estimation method, as a function of different control variables, such as the signal-to-noise ratio or heart rate. This method was evaluated on a database containing recordings from 31 patients suffering from chronic HF and implanted with a biventricular pacemaker, for which various cardiac pacing configurations were tested. Ultrasound measurements of the systole period were used as a reference and the improved method was compared with the original estimator. A reduction of 11% on the absolute estimation error was obtained for the systole period with the proposed algorithm switching approach.
Directory of Open Access Journals (Sweden)
G. López
2004-09-01
Atmospheric turbidity is an important parameter for assessing air pollution in local areas, as well as being the main parameter controlling the attenuation of solar radiation reaching the Earth's surface under cloudless sky conditions. Among the different turbidity indices, the Ångström turbidity coefficient β is frequently used. In this work, we analyse the performance of three methods based on broad-band solar irradiance measurements in the estimation of β. The evaluation of the performance of the models was undertaken by graphical and statistical (root mean square errors and mean bias errors) means. The data sets used in this study comprise measurements of broad-band solar irradiance obtained at eight radiometric stations and aerosol optical thickness measurements obtained at one co-located radiometric station. Since all three methods require estimates of precipitable water content, three common methods for calculating atmospheric precipitable water content from surface air temperature and relative humidity are evaluated. Results show that these methods exhibit significant differences for low values of precipitable water. The effect of these differences in precipitable water estimates on the turbidity algorithms is discussed. Differences in hourly turbidity estimates are then examined. The effects of random errors in pyranometer measurements and of cloud interference on the performance of the models are also presented. Examination of the annual cycle of monthly mean values of β for each location has shown that all three turbidity algorithms are suitable for analysing long-term trends and seasonal patterns.
Nonparametric Bayesian density estimation on manifolds with applications to planar shapes.
Bhattacharya, Abhishek; Dunson, David B
2010-12-01
Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback-Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.
Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm
Sun, Haisheng; Xu, Rui; Chen, Huaping
2018-04-01
To minimize makespan when scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem is a multi-dimensional discrete problem, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. To improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving these two initial individuals, while the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions, but also has a faster convergence speed.
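The PBIL core that IEDA builds on can be illustrated on the textbook OneMax problem (maximise the number of 1-bits): sample a population from a probability vector, then nudge the vector toward the best sample. This is plain binary PBIL with a fixed learning rate, not IEDA's integer-encoded, adaptive-rate variant:

```python
import random

def pbil_onemax(n=16, pop=40, lr=0.1, gens=80, seed=1):
    """Binary PBIL on OneMax; returns the learned probability vector."""
    rng = random.Random(seed)
    p = [0.5] * n  # one independent probability per bit position
    for _ in range(gens):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        best = max(samples, key=sum)  # fittest sample of this generation
        # Shift each marginal toward the best sample's bit value.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p
```

For the scheduling problem, each "bit" becomes an integer task-to-machine assignment and the fitness is the (negated) makespan, but the model-update loop is the same.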
Multi-objective optimization with estimation of distribution algorithm in a noisy environment.
Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah
2013-01-01
Many real-world optimization problems are subject to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deals with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines, for handling multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.
Trial latencies estimation of event-related potentials in EEG by means of genetic algorithms
Da Pelo, P.; De Tommaso, M.; Monaco, A.; Stramaglia, S.; Bellotti, R.; Tangaro, S.
2018-04-01
Objective. Event-related potentials (ERPs) are usually obtained by averaging, thus neglecting the trial-to-trial latency variability in cognitive electroencephalography (EEG) responses. As a consequence, the shape and the peak amplitude of the averaged ERP are smeared and reduced, respectively, when the single-trial latencies show relevant variability. To date, the majority of the methodologies for single-trial latency inference are iterative schemes providing suboptimal solutions, the most commonly used being Woody's algorithm. Approach. In this study, a global approach is developed by introducing a fitness function whose global maximum corresponds to the set of latencies which renders the trial signals as aligned as possible. A suitable genetic algorithm has been implemented to solve the optimization problem, characterized by new genetic operators tailored to the present problem. Main results. The results, on simulated trials, showed that the proposed algorithm performs better than Woody's algorithm in all conditions, at the cost of an increased computational complexity (justified by the improved quality of the solution). Application of the proposed approach to real data trials resulted in an increased correlation between latencies and reaction times with respect to the output of the RIDE method. Significance. The above results on simulated and real data indicate that the proposed method, providing a better estimate of single-trial latencies, will open the way to more accurate studies of neural responses, as well as to the issue of relating the variability of latencies to the proper cognitive and behavioural correlates.
The importance of spatial models for estimating the strength of density dependence
DEFF Research Database (Denmark)
Thorson, James T.; Skaug, Hans J.; Kristensen, Kasper
2014-01-01
the California Coast. In this case, the nonspatial model estimates implausible oscillatory dynamics on an annual time scale, while the spatial model estimates strong autocorrelation and is supported by model selection tools. We conclude by discussing the importance of improved data archiving techniques, so...... that spatial models can be used to re-examine classic questions regarding the presence and strength of density dependence in wild populations.
Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo
2017-01-01
The main objective of our work is to perform an in-depth analysis of the structural features of the normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris densities with circular ROIs of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure across the different ROI radii, and characterized the textural features of the choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density, respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
Fitness Estimation Based Particle Swarm Optimization Algorithm for Layout Design of Truss Structures
Directory of Open Access Journals (Sweden)
Ayang Xiao
2014-01-01
Full Text Available Due to the fact that vastly different variables and constraints are simultaneously considered, truss layout optimization is a typical difficult constrained mixed-integer nonlinear program. Moreover, the computational cost of truss analysis is often quite expensive. In this paper, a novel fitness estimation based particle swarm optimization algorithm with an adaptive penalty function approach (FEPSO-AP) is proposed to handle this problem. FEPSO-AP adopts a special fitness estimation strategy to evaluate similar particles in the current population, in order to reduce the computational cost. Furthermore, a laconic adaptive penalty function is employed by FEPSO-AP, which can handle multiple constraints effectively by making good use of historical iteration information. Four benchmark examples with fixed topologies and up to 44 design dimensions were studied to verify the generality and efficiency of the proposed algorithm. Numerical results of the present work, compared with results of other state-of-the-art hybrid algorithms in the literature, demonstrate that the convergence rate and the solution quality of FEPSO-AP are essentially competitive.
An Online Tilt Estimation and Compensation Algorithm for a Small Satellite Camera
Lee, Da-Hyun; Hwang, Jai-hyuk
2018-04-01
In the case of a satellite camera designed to execute an Earth observation mission, even after a pre-launch precision alignment process has been carried out, misalignment will occur due to external factors during the launch and in the operating environment. In particular, for high-resolution satellite cameras, which require submicron accuracy for alignment between optical components, misalignment is a major cause of image quality degradation. To compensate for this, most high-resolution satellite cameras undergo a precise realignment process called refocusing before and during the operation process. However, conventional Earth observation satellites only execute refocusing upon de-space. Thus, in this paper, an online tilt estimation and compensation algorithm that can be utilized after de-space correction is proposed. Although the sensitivity of the optical performance degradation due to misalignment is highest in de-space, the MTF can be additionally increased by correcting tilt after refocusing. The algorithm proposed in this research can be used to estimate the amount of tilt that occurs by taking star images, and it can also be used to carry out automatic tilt corrections by employing a compensation mechanism that gives angular motion to the secondary mirror. Crucially, this algorithm is developed as an online processing system so that it can operate without communication with the ground.
Empirical algorithms to estimate water column pH in the Southern Ocean
Williams, N. L.; Juranek, L. W.; Johnson, K. S.; Feely, R. A.; Riser, S. C.; Talley, L. D.; Russell, J. L.; Sarmiento, J. L.; Wanninkhof, R.
2016-04-01
Empirical algorithms are developed using high-quality GO-SHIP hydrographic measurements of commonly measured parameters (temperature, salinity, pressure, nitrate, and oxygen) that estimate pH in the Pacific sector of the Southern Ocean. The coefficients of determination, R2, are 0.98 for pH from nitrate (pHN) and 0.97 for pH from oxygen (pHOx), with RMS errors of 0.010 and 0.008, respectively. These algorithms are applied to Southern Ocean Carbon and Climate Observations and Modeling (SOCCOM) biogeochemical profiling floats, which include novel sensors (pH, nitrate, oxygen, fluorescence, and backscatter). These algorithms are used to estimate pH on floats with no pH sensors and to validate and adjust pH sensor data from floats with pH sensors. The adjusted float data provide, for the first time, seasonal cycles in surface pH at weekly resolution, ranging from 0.05 to 0.08, for the Pacific sector of the Southern Ocean.
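As a rough sketch of how such empirical algorithms are built, the snippet below fits a multiple linear regression of pH on temperature, salinity, and oxygen. The coefficients and synthetic data are invented for illustration, not the published pHN/pHOx fits:

```python
import numpy as np

# Synthetic "hydrographic" data: pH as an affine function of temperature,
# salinity, and oxygen plus measurement noise.  Coefficients are made up.
rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(-1.0, 10.0, n)            # deg C
sal = rng.uniform(33.5, 34.8, n)             # PSU
oxy = rng.uniform(180.0, 330.0, n)           # umol/kg
truth = np.array([7.5, -0.01, 0.02, 0.001])  # intercept + 3 coefficients
X = np.column_stack([np.ones(n), temp, sal, oxy])
ph = X @ truth + rng.normal(0.0, 0.005, n)   # "measured" pH

# least-squares fit of the empirical algorithm and its RMS error
coef, *_ = np.linalg.lstsq(X, ph, rcond=None)
rmse = float(np.sqrt(np.mean((ph - X @ coef) ** 2)))
print(coef.round(3), round(rmse, 4))  # coefficients near `truth`, RMSE near 0.005
```

The reported RMS errors (0.008–0.010) play the same role as `rmse` here: the residual scatter of the regression on held-out hydrographic data.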
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by elegantly mixing both. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow method employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the other MCFRUC techniques.
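The baseline that TME builds on, block matching by minimising the sum of absolute differences (SAD), can be sketched as follows; the smoothness constraints that distinguish TME from regular motion estimation are omitted here:

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block_match(prev, cur, bx, by, bs, search):
    """Full-search block matching: the MV minimising SAD for one block."""
    H, W = len(prev), len(prev[0])
    block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and y + bs <= H and 0 <= x and x + bs <= W:
                cand = [row[x:x + bs] for row in prev[y:y + bs]]
                cost = sad(block, cand)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# a bright square moves 2 px right and 1 px down between the two frames
prev = [[0] * 16 for _ in range(16)]
cur = [[0] * 16 for _ in range(16)]
for y in range(4, 8):
    for x in range(4, 8):
        prev[y][x] = 255
        cur[y + 1][x + 2] = 255
mv = block_match(prev, cur, 6, 5, 4, 3)
print(mv)  # -> (-2, -1): the block is found 2 px left, 1 px up in prev
```

Minimising SAD alone can lock onto repeated textures; TME's smoothness constraints penalise MVs that diverge from their neighbours to keep the field coherent with true object motion.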
Aid decision algorithms to estimate the risk in congenital heart surgery.
Ruiz-Fernández, Daniel; Monsalve Torra, Ana; Soriano-Payá, Antonio; Marín-Alonso, Oscar; Triana Palencia, Eddy
2016-04-01
In this paper, we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classification of surgical risk provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. We have evaluated four machine learning algorithms to achieve our objective: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The architectures implemented have the aim of classifying among three types of surgical risk: low complexity, medium complexity and high complexity. Accuracy outcomes achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk related to congenital heart disease surgery. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.
Directory of Open Access Journals (Sweden)
Alexander Richard Braczkowski
Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or the temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km2) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.
Directory of Open Access Journals (Sweden)
Z. Lari
2012-07-01
Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high-density point clouds over physical surfaces. These point clouds are processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud, as it highly affects the performance of data processing techniques and the quality of the information extracted from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied to laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take into account the 3D relationship among the points and the physical properties of the surfaces they belong to. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigenvalue analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper discusses these approaches and highlights their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
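A minimal version of the simplest approach described, a local density index derived from the 3D neighbourhood of each point, can be sketched with a k-nearest-neighbour estimate. This brute-force sketch is illustrative only; the paper's surface-aware variants are not reproduced:

```python
import numpy as np

def knn_density(points, k=8):
    """Local point density index from the 3D k-nearest neighbourhood.

    Density at each point = k neighbours divided by the volume of the
    sphere reaching the k-th nearest neighbour.  Brute-force distances;
    a k-d tree would replace them for realistic point clouds.
    """
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    r = np.sort(d, axis=1)[:, k]          # distance to k-th neighbour
    return k / ((4.0 / 3.0) * np.pi * r ** 3)

rng = np.random.default_rng(0)
dense = rng.uniform(0.0, 1.0, size=(400, 3))   # ~400 points per unit cube
sparse = rng.uniform(5.0, 7.0, size=(50, 3))   # ~6 points per unit cube
dens = knn_density(np.vstack([dense, sparse]))
print(dens[:400].mean() > dens[400:].mean())   # True: denser region scores higher
```

The spherical volume here is what the paper's adaptive cylinder replaces: for points on a planar surface, a sphere mixes on-surface and off-surface space and biases the index.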
Estimation of tool wear length in finish milling using a fuzzy inference algorithm
Ko, Tae Jo; Cho, Dong Woo
1993-10-01
The geometric accuracy and surface roughness are mainly affected by the flank wear at the minor cutting edge in finish machining. A fuzzy estimator obtained by a fuzzy inference algorithm with a max-min composition rule to evaluate the minor flank wear length in finish milling is introduced. The features sensitive to minor flank wear are extracted from the dispersion analysis of a time series AR model of the feed directional acceleration of the spindle housing. Linguistic rules for fuzzy estimation are constructed using these features, and then fuzzy inferences are carried out with test data sets under various cutting conditions. The proposed system turns out to be effective for estimating minor flank wear length, and its mean error is less than 12%.
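A minimal sketch of fuzzy inference with a max-min composition rule and centroid defuzzification follows. The membership functions and rules are hypothetical, not the dispersion-based features of the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def maxmin_inference(feature, rules, grid):
    """Max-min fuzzy inference with centroid defuzzification.

    Each rule's firing strength (min composition) clips its consequent
    membership function; the clipped outputs are combined by max and the
    centroid of the aggregate gives the crisp estimate.
    """
    agg = [0.0] * len(grid)
    for antecedent, consequent in rules:
        w = antecedent(feature)                    # firing strength
        for i, y in enumerate(grid):
            agg[i] = max(agg[i], min(w, consequent(y)))
    num = sum(y * m for y, m in zip(grid, agg))
    den = sum(agg)
    return num / den if den else 0.0

# hypothetical rules: low vibration dispersion -> small wear, high -> large
rules = [
    (lambda x: tri(x, 0.0, 0.0, 0.5), lambda y: tri(y, 0.0, 0.1, 0.3)),
    (lambda x: tri(x, 0.2, 0.5, 0.8), lambda y: tri(y, 0.2, 0.4, 0.6)),
    (lambda x: tri(x, 0.5, 1.0, 1.0), lambda y: tri(y, 0.5, 0.7, 0.9)),
]
grid = [i / 100.0 for i in range(101)]
low = maxmin_inference(0.15, rules, grid)    # small dispersion feature
high = maxmin_inference(0.85, rules, grid)   # large dispersion feature
print(round(low, 2), round(high, 2))         # small wear vs large wear
```

In the paper the antecedents would be the AR-model dispersion features of the spindle acceleration and the consequent the flank wear length; the max-min plumbing is the same.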
Three different applications of genetic algorithm (GA) search techniques on oil demand estimation
International Nuclear Information System (INIS)
Canyurt, Olcay Ersel; Oztuerk, Harun Kemal
2006-01-01
This study develops three scenarios to analyze oil consumption and make future projections based on the genetic algorithm (GA) approach, and examines the effect of the design parameters on the oil utilization values. The models, developed in non-linear form, are applied to the oil demand of Turkey. The GA Oil Demand Estimation Model (GAODEM) is developed to estimate future oil demand values based on Gross National Product (GNP), population, import, export, oil production, oil import and car, truck and bus sales figures. Among these models, the GA-PGOiTI model, which uses population, GNP, oil import, truck sales and import as design parameters/indicators, was found to provide the best fit to the observed data. It may be concluded that the proposed models can be used as alternative estimation techniques for the future oil utilization values of any country.
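A toy version of GA-based demand-model fitting: a small real-coded GA estimates the parameters of an invented nonlinear model from synthetic data. The model form, genetic operators, and data are all illustrative, not the GAODEM models:

```python
import random

def fit_demand_ga(xs, ys, pop_size=100, gens=200, seed=3):
    """Tiny real-coded GA fitting y = a + b * x**c to demand data.

    Elitism, tournament selection, blend crossover, Gaussian mutation.
    """
    rng = random.Random(seed)

    def sse(p):
        a, b, c = p
        return sum((a + b * x ** c - y) ** 2 for x, y in zip(xs, ys))

    popn = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(popn, key=sse)]                     # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(popn, 3), key=sse)     # tournament selection
            p2 = min(rng.sample(popn, 3), key=sse)
            w = rng.random()
            child = [w * u + (1.0 - w) * v for u, v in zip(p1, p2)]
            if rng.random() < 0.4:                     # Gaussian mutation
                child[rng.randrange(3)] += rng.gauss(0.0, 0.3)
            nxt.append(child)
        popn = nxt
    return min(popn, key=sse)

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.0 + 1.5 * x ** 1.2 for x in xs]      # noiseless synthetic "demand"
a, b, c = fit_demand_ga(xs, ys)
fit_err = sum((a + b * x ** c - y) ** 2 for x, y in zip(xs, ys))
print(round(fit_err, 3))  # small residual: the GA fits the model closely
```

The GA's appeal for demand estimation is exactly this: it minimises the squared error of a non-linear model without requiring gradients or a convex form.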
Estimation Algorithm of Machine Operational Intention by Bayes Filtering with Self-Organizing Map
Directory of Open Access Journals (Sweden)
Satoshi Suzuki
2012-01-01
Full Text Available We present an intention estimation algorithm that can deal with dynamic changes of the environment in a man-machine system and can be utilized for an autarkical human-assisting system. In the algorithm, the state transition relation of intentions is formed using a self-organizing map (SOM) from the measured data of the operation and environmental variables with the reference intention sequence. The operational intention modes are identified by stochastic computation using a Bayesian particle filter with the trained SOM. This method makes it possible to omit the troublesome process of specifying which types of information should be used to build the estimator. Applying the proposed method to a remote operation task, the estimator's behavior was analyzed, the pros and cons of the method were investigated, and ways for improvement were discussed. As a result, it was confirmed that the estimator can identify the intention modes with 44–94 percent concordance ratios against nominal intention modes whose periods could be found by about 70 percent of the human analysts. On the other hand, it was found that the human analysts' discrimination, which was used as canonical data for validation, differed depending on the intention modes. Specifically, an investigation of the intention patterns discriminated by eight analysts showed that the estimator could not identify the same modes that the human analysts could not discriminate. And, in the analysis of the multiple different intentions, it was found that the estimator could identify the same types of intention modes as the human-discriminated ones in 62–73 percent of cases when the first and second dominant intention modes were considered.
International Nuclear Information System (INIS)
Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio; Boutelier, Timothe; Pautot, Fabrice; Christensen, Soren
2013-01-01
A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)
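The delay-insensitive SVD family used as the benchmark rests on deconvolving the tissue concentration curve by the arterial input function. A minimal truncated-SVD version, a generic sketch rather than either vendor's implementation, looks like this on a digital-phantom-style test:

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt=1.0, thresh=0.05):
    """Truncated-SVD deconvolution of a tissue curve by the arterial input.

    C(t) = CBF * (AIF convolved with R)(t): a lower-triangular convolution
    matrix is built from the AIF and inverted with small singular values
    suppressed, yielding the flow-scaled residue function CBF * R(t).
    """
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            A[i, j] = aif[i - j] * dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > thresh * s.max()        # truncation suppresses noise modes
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ tissue))

# synthetic phantom: gamma-variate AIF, exponential residue function
t = np.arange(40.0)
aif = t ** 3 * np.exp(-t / 1.5)
aif /= aif.max()
cbf_true, mtt = 0.6, 4.0
residue = np.exp(-t / mtt)                       # R(t), R(0) = 1
tissue = cbf_true * np.convolve(aif, residue)[:40]
k = svd_deconvolve(aif, tissue)
print(round(float(k.max()), 2))  # estimate of CBF (true value 0.6)
```

CBF is read off as the maximum of the recovered flow-scaled residue function; MTT would follow as its area divided by that maximum, which is where the SVD algorithms' MTT bias noted above enters.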
Energy Technology Data Exchange (ETDEWEB)
Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio [Iwate Medical University, Division of Ultrahigh Field MRI, Institute for Biomedical Sciences, Yahaba (Japan); Boutelier, Timothe; Pautot, Fabrice [Olea Medical, Department of Research and Innovation, La Ciotat (France); Christensen, Soren [University of Melbourne, Department of Neurology and Radiology, Royal Melbourne Hospital, Victoria (Australia)
2013-10-15
Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm
Directory of Open Access Journals (Sweden)
Deok-Soon An
2013-01-01
Full Text Available A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
Li, Ruixiao; Li, Kun; Zhao, Changming
2018-01-01
Coherent dual-frequency Lidar (CDFL) is a new development of Lidar that uses a dual-frequency laser to measure range and velocity with high precision while markedly reducing the influence of atmospheric interference. Based on the nature of CDFL signals, we propose to apply the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency Lidar. In the presence of Gaussian white noise, the simulation results show that the signal peaks are more evident when using the MUSIC algorithm instead of the FFT under low signal-to-noise ratio (SNR) conditions, which helps to improve the precision of range and velocity detection, especially for long-distance measurement systems.
A new algorithm for epilepsy seizure onset detection and spread estimation from EEG signals
Quintero-Rincón, Antonio; Pereyra, Marcelo; D'Giano, Carlos; Batatia, Hadj; Risk, Marcelo
2016-04-01
Appropriate diagnosis and treatment of epilepsy is a main public health issue. Patients suffering from this disease often exhibit different physical characterizations, which result from the synchronous and excessive discharge of a group of neurons in the cerebral cortex. Extracting this information from EEG signals is an important problem in biomedical signal processing. In this work we propose a new algorithm for seizure onset detection and spread estimation in epilepsy patients. The algorithm is based on a multilevel 1-D wavelet decomposition that captures the physiological brain frequency bands, coupled with a generalized Gaussian model. Preliminary experiments with signals from 30 epileptic seizures and 11 subjects suggest that the proposed methodology is a powerful tool for detecting the onset of epileptic seizures and estimating their spread across the brain.
International Nuclear Information System (INIS)
Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.
2015-01-01
Highlights: • Description of state observers for estimating the battery's SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of the BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluating estimation algorithms to improve BMS performance. - Abstract: To increase lifetime, safety, and energy usage, battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. Accurate SOC estimation and real-time reliability are critical issues for a BMS. In general, an increasing complexity of the estimation methods leads to higher accuracy. On the other hand, it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a requisite prior to the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study for estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most published model-based estimation algorithms: the Luenberger observer, the sliding-mode observer, the Extended Kalman Filter and the Sigma-point Kalman Filter. Experiments under dynamic current conditions are used to verify the real-time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods while requiring less computation time and memory. Depending on the battery system's application, the estimation algorithm has to be selected to fulfill the
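Of the four methods, the simplest to sketch is a Luenberger-style observer. The battery model, OCV curve, and gain below are invented for illustration; a real BMS would use a measured OCV-SOC curve and an equivalent-circuit model:

```python
def simulate_soc_observer(gain=0.3, steps=600, dt=1.0, cap=3600.0):
    """Luenberger-style SOC observer on a deliberately trivial model.

    SOC integrates current (coulomb counting) and the measured terminal
    voltage is an affine function of SOC; the observer corrects its SOC
    estimate with the voltage innovation, so a wrong initial SOC
    converges toward the true value.
    """
    def ocv(soc):                            # affine open-circuit voltage [V]
        return 3.0 + 1.2 * soc

    soc_true, soc_est = 0.80, 0.40           # observer starts 40 % SOC off
    for _ in range(steps):
        current = -1.0                       # constant 1 A discharge
        soc_true += current * dt / cap       # plant: coulomb counting
        v_meas = ocv(soc_true)
        # predictor-corrector: model update plus innovation feedback
        soc_est += current * dt / cap + gain * (v_meas - ocv(soc_est))
        soc_est = min(max(soc_est, 0.0), 1.0)
    return soc_true, soc_est

soc_true, soc_est = simulate_soc_observer()
print(round(abs(soc_true - soc_est), 4))  # initialization error driven to ~0
```

This illustrates the initialization-error criterion from the evaluation approach: the innovation feedback removes a wrong initial SOC that pure coulomb counting would carry forever.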
Yi, Wen; Xue, Xianghui; Reid, Iain M.; Younger, Joel P.; Chen, Jinsong; Chen, Tingdi; Li, Na
2018-04-01
Neutral mesospheric densities at a low latitude were derived for April 2011 to December 2014 using data from the Kunming meteor radar in China (25.6°N, 103.8°E). The daily mean density at 90 km was estimated using the ambipolar diffusion coefficients from the meteor radar and temperatures from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. The seasonal variations of the meteor radar-derived density are consistent with the density from the Mass Spectrometer and Incoherent Scatter (MSIS) model, showing a dominant annual variation with a maximum during winter and a minimum during summer. A simple linear model was used to separate the effects of atmospheric density and meteor velocity on the meteor radar peak detection height. We find that a 1 km/s difference in the vertical meteor velocity yields a change of approximately 0.42 km in peak height. The strong correlation between the meteor radar density and the velocity-corrected peak height indicates that the meteor radar density estimates accurately reflect changes in neutral atmospheric density and that meteor peak detection heights, when adjusted for meteoroid velocity, can serve as a convenient tool for measuring density variations around the mesopause. A comparison of the ambipolar diffusion coefficient and peak height observed simultaneously by two co-located meteor radars indicates that the relative errors of the daily mean ambipolar diffusion coefficient and peak height should be less than 5% and 6%, respectively, and that the absolute error of the peak height is less than 0.2 km.
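The separation of velocity and density effects by a simple linear model can be sketched with synthetic data. The 0.42 km per km/s slope is taken from the abstract; everything else (the density proxy, noise levels, and sample) is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
v = rng.uniform(10.0, 70.0, n)             # vertical meteor velocity [km/s]
dens = rng.normal(0.0, 1.0, n)             # density anomaly proxy (arbitrary)
h = 90.0 + 0.42 * v - 1.5 * dens + rng.normal(0.0, 0.3, n)  # peak height [km]

# least-squares fit of the linear model, then velocity correction
X = np.column_stack([np.ones(n), v, dens])
beta, *_ = np.linalg.lstsq(X, h, rcond=None)
h_corrected = h - beta[1] * v              # velocity-corrected peak height
corr = float(abs(np.corrcoef(h_corrected, dens)[0, 1]))
print(round(float(beta[1]), 2), round(corr, 2))  # slope near 0.42; strong correlation
```

After subtracting the fitted velocity term, the residual peak height tracks the density signal, which is the basis for using velocity-corrected peak heights as a density proxy.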
International Nuclear Information System (INIS)
Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi
2016-01-01
Highlights: • The effectiveness of six numerical methods for determining wind power density is evaluated. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML present very favorable efficiency, while the GP method shows weak ability for all stations. However, it is found that the most effective method is not the same among the stations, owing to differences in the wind characteristics.
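Two of the pieces involved are compact enough to sketch directly: the empirical method of Justus for k and c, and the Weibull-based mean wind power density. The air density and synthetic wind-speed sample are illustrative:

```python
import math
import random

def weibull_emj(speeds):
    """Empirical method of Justus: Weibull k and c from sample moments."""
    n = len(speeds)
    mean = sum(speeds) / n
    std = (sum((v - mean) ** 2 for v in speeds) / (n - 1)) ** 0.5
    k = (std / mean) ** -1.086             # shape parameter
    c = mean / math.gamma(1.0 + 1.0 / k)   # scale parameter [m/s]
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density [W/m^2] of a Weibull wind-speed model."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# sanity check on synthetic Weibull(k=2, c=7) draws via inverse-CDF sampling
rng = random.Random(42)
sample = [7.0 * (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(20000)]
k, c = weibull_emj(sample)
wpd = wind_power_density(k, c)
print(round(k, 2), round(c, 2), round(wpd))  # k near 2, c near 7 m/s
```

Because the power density scales with c cubed and a Gamma function of k, small errors in the fitted parameters are amplified, which is why the choice of estimation method matters in the comparison above.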
Urban birds in the Sonoran Desert: estimating population density from point counts
Directory of Open Access Journals (Sweden)
Karina Johnston López
2015-01-01
Full Text Available We conducted bird surveys in Hermosillo, Sonora using distance sampling to characterize detection functions at point transects for native and non-native urban birds in a desert environment. From March to August 2013 we sampled 240 plots in the city and its surroundings; each plot was visited three times. Our purpose was to provide information for a rapid assessment of bird density in this region by using point counts. We identified 72 species, including six non-native species. Sixteen species had sufficient detections to accurately estimate the parameters of the detection functions. To illustrate the estimation of density from bird count data using our inferred detection functions, we estimated the density of the Eurasian Collared-Dove (Streptopelia decaocto) under two different levels of urbanization: highly urbanized (90-100% urban impact) and moderately urbanized (39-50% urban impact) zones. Density of S. decaocto in the highly urbanized and moderately urbanized zones was 3.97±0.52 and 2.92±0.52 individuals/ha, respectively. By using our detection functions, avian ecologists can efficiently reallocate the time and effort regularly spent on estimating detection distances, increasing the number of sites surveyed and collecting other relevant ecological information.
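The standard distance-sampling machinery behind such estimates, a half-normal point-transect estimator, can be sketched on simulated data; this is a generic illustration, not the authors' analysis:

```python
import math
import random

def point_transect_density(distances, n_points):
    """Half-normal point-transect density estimator (distance sampling).

    With detection probability g(r) = exp(-r^2 / (2 sigma^2)), detection
    distances are Rayleigh distributed, sigma^2 has MLE sum(r^2) / (2n),
    and the effective detection area per point is 2 * pi * sigma^2.
    """
    n = len(distances)
    sigma2 = sum(r * r for r in distances) / (2.0 * n)
    return n / (n_points * 2.0 * math.pi * sigma2)

# simulate birds at true density 3 per unit area around 200 count points
rng = random.Random(7)
true_density, n_points, extent, sigma = 3.0, 200, 5.0, 1.0
detections = []
for _ in range(n_points):
    n_birds = int(true_density * math.pi * extent ** 2)
    for _ in range(n_birds):
        r = extent * math.sqrt(rng.random())   # uniform position in a disc
        if rng.random() < math.exp(-r * r / (2.0 * sigma ** 2)):
            detections.append(r)
d_hat = point_transect_density(detections, n_points)
print(round(d_hat, 2))  # close to the true density of 3.0
```

This is the sense in which a pre-fitted detection function lets ecologists skip distance measurement on new surveys: once sigma is known for a species and habitat, only the counts are needed.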
DNA-based population density estimation of black bear at northern ...
African Journals Online (AJOL)
The analysis of deoxyribonucleic acid (DNA) microsatellites from hair samples obtained by the non-invasive method of traps was used to estimate the population density of black bears (Ursus americanus eremicus) in a mountain located at the county of Lampazos, Nuevo Leon, Mexico. The genotyping of bears was ...
Eurasian otter (Lutra lutra) density estimate based on radio tracking and other data sources
Czech Academy of Sciences Publication Activity Database
Quaglietta, L.; Hájková, Petra; Mira, A.; Boitani, L.
2015-01-01
Roč. 60, č. 2 (2015), s. 127-137 ISSN 2199-2401 R&D Projects: GA AV ČR KJB600930804 Institutional support: RVO:68081766 Keywords : Lutra lutra * Density estimation * Edge effect * Known-to-be-alive * Linear habitats * Sampling scale Subject RIV: EG - Zoology
The Wegner Estimate and the Integrated Density of States for some ...
Indian Academy of Sciences (India)
The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the Wegner estimate that describes the influence of finite-volume perturbations on a background system. In this paper, we present a simple ...
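For orientation, the textbook form of the objects named in this abstract can be written as follows; this is the generic statement, not necessarily the exact form derived in the paper.

```latex
% Wegner estimate: expected number of eigenvalues of the finite-volume
% restriction H_\omega^\Lambda in an energy interval I
\mathbb{E}\left[\operatorname{tr}\chi_I\!\left(H_\omega^{\Lambda}\right)\right]
\;\le\; C_W\,|I|\,|\Lambda|

% Integrated density of states (IDS)
N(E) \;=\; \lim_{|\Lambda|\to\infty}\frac{1}{|\Lambda|}\,
\mathbb{E}\!\left[\#\{\text{eigenvalues of } H_\omega^{\Lambda}\le E\}\right]
```

A Wegner bound that is linear in $|I|$ implies Lipschitz continuity of $N(E)$, and hence the existence and boundedness of its density.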
Vedantham, Srinivasan; Shi, Linxi; Michaelsen, Kelly E; Krishnaswamy, Venkataramanan; Pogue, Brian W; Poplack, Steven P; Karellas, Andrew; Paulsen, Keith D
A multimodality system combining a clinical prototype digital breast tomosynthesis with its imaging geometry modified to facilitate near-infrared spectroscopic imaging has been developed. The accuracy of parameters recovered from near-infrared spectroscopy is dependent on fibroglandular tissue content. Hence, in this study, volumetric estimates of fibroglandular tissue from tomosynthesis reconstructions were determined. A kernel-based fuzzy c-means algorithm was implemented to segment tomosynthesis reconstructed slices in order to estimate fibroglandular content and to provide anatomic priors for near-infrared spectroscopy. This algorithm was used to determine volumetric breast density (VBD), defined as the ratio of fibroglandular tissue volume to the total breast volume, expressed as a percentage, from 62 tomosynthesis reconstructions of 34 study participants. For a subset of study participants who subsequently underwent mammography, VBD from mammography matched for subject, breast laterality and mammographic view was quantified using commercial software and statistically analyzed to determine if it differed from tomosynthesis. Summary statistics of the VBD from all study participants were compared with prior independent studies. The fibroglandular volumes from tomosynthesis and mammography were not statistically different (p = 0.211, paired t-test). After accounting for the compressed breast thickness, which differed between tomosynthesis and mammography, the VBD from tomosynthesis was correlated with (r = 0.809, p < 0.001), not statistically different from (p > 0.99, paired t-test), and linearly related to the VBD from mammography. Summary statistics of the VBD from tomosynthesis were not statistically different from prior studies using high-resolution dedicated breast computed tomography. The observation of correlation and linear association in VBD between mammography and tomosynthesis suggests that breast density associated risk measures determined for mammography are translatable to tomosynthesis
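The VBD definition used in this abstract (fibroglandular volume over total breast volume, as a percentage) reduces to voxel counting once the segmentation is done. The sketch below assumes a hardened label volume (e.g. fuzzy c-means memberships collapsed by argmax); the label values and function name are hypothetical.

```python
import numpy as np

def volumetric_breast_density(labels, voxel_volume_mm3,
                              fibro_label=1, breast_labels=(1, 2)):
    """VBD (%) = fibroglandular volume / total breast volume * 100.

    `labels` is an array of per-voxel tissue labels produced by a
    segmentation step (here assumed: 1 = fibroglandular, 2 = adipose).
    """
    fibro = np.isin(labels, [fibro_label]).sum() * voxel_volume_mm3
    breast = np.isin(labels, list(breast_labels)).sum() * voxel_volume_mm3
    return 100.0 * fibro / breast
```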
3D depth-to-basement and density contrast estimates using gravity and borehole data
Barbosa, V. C.; Martins, C. M.; Silva, J. B.
2009-05-01
We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack, assuming prior knowledge of the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding
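A parabolic density-contrast decay of the kind referred to here is commonly written in the Litinsky form, and its depth integral drives the gravity effect of each prism. The sketch below uses an infinite-slab approximation in place of the paper's full 3D prism response, and the exact parameterization of the parabolic law may differ from the one the authors use; all names and values are illustrative.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def parabolic_contrast(z, drho0, alpha):
    """Parabolic decay of density contrast with depth (Litinsky-type law):
        drho(z) = drho0^3 / (drho0 - alpha*z)^2
    With drho0 < 0 (sediments lighter than basement) and alpha > 0,
    the contrast magnitude decreases with depth z.
    """
    return drho0**3 / (drho0 - alpha * z) ** 2

def slab_anomaly(h, drho0, alpha, n=1000):
    """Slab-style gravity effect of one prism of thickness h:
        g = 2*pi*G * int_0^h drho(z) dz   (trapezoidal rule).
    """
    dz = h / n
    zs = [i * dz for i in range(n + 1)]
    vals = [parabolic_contrast(z, drho0, alpha) for z in zs]
    integral = dz * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return 2.0 * math.pi * G * integral
```

With `alpha = 0` the law collapses to a constant contrast and the slab formula reduces to the familiar Bouguer expression `2*pi*G*drho0*h`.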
Estimating abundance and density of Amur tigers along the Sino-Russian border.
Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping
2016-07-01
As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km² state space, density estimates were 0.33 and 0.40 individuals/100 km² in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km² corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
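The density and abundance figures in this abstract are linked by the area of the SECR state space, which makes a simple consistency check possible. The densities are rounded to two decimals in the abstract, so the check below allows a one-animal tolerance; this is arithmetic on the reported numbers, not a reimplementation of the SECR models.

```python
# Area of the SECR state space reported in the abstract (km^2).
STATE_SPACE_KM2 = 11_400

def abundance(density_per_100km2, area_km2=STATE_SPACE_KM2):
    """Expected abundance = density * area, with density given per 100 km^2."""
    return density_per_100km2 / 100.0 * area_km2
```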
Energy Technology Data Exchange (ETDEWEB)
Heasler, Patrick G.; Posse, Christian; Hylden, Jeff L.; Anderson, Kevin K.
2007-06-13
This paper presents a nonlinear Bayesian regression algorithm for the purpose of detecting and estimating gas plume content from hyper-spectral data. Remote sensing data, by its very nature, is collected under less controlled conditions than laboratory data. As a result, the physics-based model that is used to describe the relationship between the observed remote-sensing spectra, and the terrestrial (or atmospheric) parameters that we desire to estimate, is typically littered with many unknown "nuisance" parameters (parameters that we are not interested in estimating, but also appear in the model). Bayesian methods are well-suited for this context as they automatically incorporate the uncertainties associated with all nuisance parameters into the error estimates of the parameters of interest. The nonlinear Bayesian regression methodology is illustrated on realistic simulated data from a three-layer model for longwave infrared (LWIR) measurements from a passive instrument. This shows that this approach should permit more accurate estimation as well as a more reasonable description of estimate uncertainty.
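The way Bayesian methods absorb nuisance parameters, as described above, can be illustrated with a toy grid-based posterior: a nuisance offset `b` is integrated out, so its uncertainty automatically widens the marginal posterior over the parameter of interest `a`. The linear model, flat priors, and grids here are deliberately far simpler than the paper's three-layer LWIR physics; all names are hypothetical.

```python
import numpy as np

def marginal_posterior(x, y, a_grid, b_grid, sigma=1.0):
    """Grid posterior over a parameter of interest `a`, marginalizing a
    nuisance offset `b` (flat priors; toy model y = a*x + b + noise)."""
    A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
    # accumulate the Gaussian log-likelihood on the (a, b) grid
    ll = np.zeros_like(A)
    for xi, yi in zip(x, y):
        ll += -0.5 * ((yi - (A * xi + B)) / sigma) ** 2
    post = np.exp(ll - ll.max())       # unnormalized joint posterior
    post_a = post.sum(axis=1)          # marginalize out the nuisance b
    return post_a / post_a.sum()
```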
Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.
Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li
2017-05-03
In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.
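Once optical depth is available, defogging typically follows the standard atmospheric scattering model, `I = J*t + A*(1 - t)` with transmission `t = exp(-optical_depth)`, inverted for the scene radiance `J`. The sketch below assumes this standard model; it is not the authors' code, and the airlight value and clamping threshold are illustrative.

```python
import numpy as np

def defog(image, optical_depth, airlight, t_min=0.1):
    """Recover scene radiance J from a foggy image under the model
        I = J*t + A*(1 - t),  t = exp(-optical_depth),
    where `optical_depth` is a per-pixel map (e.g. from a surrogate model)
    and `image` is an H x W x 3 array in [0, 1]."""
    t = np.clip(np.exp(-optical_depth), t_min, 1.0)  # avoid division blow-up
    J = (image - airlight) / t[..., None] + airlight
    return np.clip(J, 0.0, 1.0)
```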