WorldWideScience

Sample records for location estimation approach

  1. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...
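
    A minimal Python sketch of a Hodges-Lehmann-type multivariate location estimate, taken here as the coordinate-wise median of all pairwise averages; the $U$-statistics type estimates studied in the paper differ in their exact form.

        import numpy as np
        from itertools import combinations

        def hodges_lehmann_location(X):
            # X: (n, d) data matrix; average every pair (X_i + X_j)/2, i < j,
            # then take the coordinate-wise median of the pairwise averages.
            pairwise = np.array([(X[i] + X[j]) / 2.0
                                 for i, j in combinations(range(len(X)), 2)])
            return np.median(pairwise, axis=0)

        rng = np.random.default_rng(0)
        X = rng.normal(loc=[1.0, -2.0], scale=1.0, size=(200, 2))
        print(hodges_lehmann_location(X))       # close to [1, -2]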

  2. Cover estimation and payload location using Markov random fields

    Science.gov (United States)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random fields to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.

  3. Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Directory of Open Access Journals (Sweden)

    Jianga Shang

    2015-01-01

    Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment strongly affect location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. Improving location estimation is a complicated and comprehensive issue, and much research has been done to address it; however, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experience. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on location determination approaches that fuse spatial contexts, namely map matching, landmark fusion, and spatial model-aided methods. Finally, we present directions for future research.
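
    A minimal Python sketch of one family of methods the survey covers: a linear Kalman filter tracking one-dimensional position and velocity from noisy position fixes. All matrices and measurements below are illustrative choices, not values from the survey.

        import numpy as np

        dt = 1.0                                # time step (s)
        F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
        H = np.array([[1.0, 0.0]])              # we observe position only
        Q = 0.01 * np.eye(2)                    # process noise covariance
        R = np.array([[4.0]])                   # measurement noise covariance

        x = np.zeros(2)                         # state: [position, velocity]
        P = np.eye(2)                           # state covariance

        def kalman_step(x, P, z):
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update with measurement z
            y = z - H @ x                       # innovation
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            return x + K @ y, (np.eye(2) - K @ H) @ P

        for z in [0.9, 2.1, 2.8, 4.2, 5.1]:     # noisy position measurements
            x, P = kalman_step(x, P, np.array([z]))
        print(x)                                # filtered [position, velocity]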

  4. Application of Digital Cellular Radio for Mobile Location Estimation

    Directory of Open Access Journals (Sweden)

    Farhat Anwar

    2012-08-01

    The capability to locate the position of mobiles is a prerequisite for implementing a wide range of evolving ITS services. Radiolocation has the potential to serve a wide geographical area. This paper reports an investigation into the feasibility of utilizing cellular radio for mobile location estimation. Basic strategies for location estimation are elaborated. Two possible approaches to cellular-based location estimation are investigated with the help of computer simulation, and their effectiveness and relative merits and demerits are identified. An algorithm specifically adapted to the cellular environment is reported, with specific features whereby mobiles, irrespective of their number, can locate their position without adversely loading the cellular system. Key Words: ITS, GSM, Cellular Radio, DRGS, GPS.
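
    A hedged sketch of the basic range-based strategy mentioned above: least-squares trilateration of a mobile from distance estimates to three base stations. Coordinates and ranges are invented illustrative values, not data from the paper.

        import numpy as np

        def trilaterate(bs, d):
            # bs: (3, 2) base-station coordinates; d: (3,) range estimates.
            # Subtracting the first circle equation from the others yields the
            # linear system 2*(bs_i - bs_0).x = d_0^2 - d_i^2 + |bs_i|^2 - |bs_0|^2.
            A = 2.0 * (bs[1:] - bs[0])
            b = d[0]**2 - d[1:]**2 + np.sum(bs[1:]**2, axis=1) - np.sum(bs[0]**2)
            return np.linalg.lstsq(A, b, rcond=None)[0]

        bs = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
        d = np.array([707.0, 712.0, 700.0])      # noisy ranges in metres
        print(trilaterate(bs, d))                # roughly (500, 500)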

  5. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    Science.gov (United States)

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.

  6. Using cell phone location to assess misclassification errors in air pollution exposure estimation.

    Science.gov (United States)

    Yu, Haofei; Russell, Armistead; Mulholland, James; Huang, Zhijiong

    2018-02-01

    Air pollution epidemiologic and health impact studies often rely on home addresses to estimate individual subjects' pollution exposure. In this study, we used detailed cell phone location data, the call detail record (CDR), to account for the impact of spatiotemporal subject mobility on estimates of ambient air pollutant exposure. This approach was applied to a sample of 9886 unique SIM card IDs in Shenzhen, China, on one mid-week day in October 2013. Hourly ambient concentrations of six chosen pollutants were simulated by the Community Multi-scale Air Quality model fused with observational data, and matched with detailed location data for these IDs. The results were compared with exposure estimates using home addresses to assess potential exposure misclassification errors. We found the misclassification errors are likely to be substantial when home location alone is applied. The CDR-based approach indicates that the home-based approach tends to over-estimate exposures for subjects with higher exposure levels and under-estimate exposures for those with lower exposure levels. Our results show that the cell phone location based approach can be used to assess exposure misclassification error and has the potential for improving exposure estimates in air pollution epidemiology studies.
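
    A minimal sketch of the exposure contrast the study examines: a time-weighted average of hourly concentrations over the locations a person visits versus the concentration at the home location alone. All values are invented.

        # hypothetical hourly concentrations (ug/m3) by location for one hour
        conc = {"home": 35.0, "road": 60.0, "office": 20.0}
        # fraction of the hour spent at each place, from (hypothetical) CDR data
        trajectory = [("home", 0.25), ("road", 0.25), ("office", 0.50)]

        mobility_based = sum(conc[place] * frac for place, frac in trajectory)
        home_based = conc["home"]
        print(mobility_based, home_based)   # 33.75 vs 35.0: misclassification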

  7. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    Science.gov (United States)

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
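
    A hedged two-dimensional sketch of the multi-scale LoG idea (the paper works on 3-D CT volumes and adds candidate pruning): search near a seed point for the strongest scale-normalized LoG response.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_blob_near_seed(image, seed, sigmas, radius=10):
            # Return (location, sigma) of the strongest scale-normalized LoG
            # response within `radius` pixels of `seed`.
            yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
            mask = (yy - seed[0])**2 + (xx - seed[1])**2 <= radius**2
            best = None
            for s in sigmas:
                resp = -s**2 * gaussian_laplace(image, s)  # bright blobs -> positive
                resp = np.where(mask, resp, -np.inf)
                loc = np.unravel_index(np.argmax(resp), resp.shape)
                if best is None or resp[loc] > best[0]:
                    best = (resp[loc], loc, s)
            return best[1], best[2]    # estimated centre and characteristic scale

        img = np.zeros((64, 64))
        img[30:36, 28:34] = 1.0        # a synthetic bright "nodule"
        print(log_blob_near_seed(img, seed=(29, 35), sigmas=[1, 2, 3, 4]))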

  8. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations.
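
    The following sketch conveys the estimation problem on simulated data; it matches robust location and scale summaries (median and MAD) rather than implementing the paper's asymptotic-likelihood estimator.

        import numpy as np
        from scipy.stats import median_abs_deviation

        rng = np.random.default_rng(1)
        X = rng.standard_t(df=5, size=500)                 # sample from F_X
        Y = 2.0 + 1.5 * rng.standard_t(df=5, size=500)     # mu = 2, sigma = 1.5

        sigma_hat = median_abs_deviation(Y) / median_abs_deviation(X)
        mu_hat = np.median(Y) - sigma_hat * np.median(X)
        print(mu_hat, sigma_hat)                           # close to 2.0, 1.5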

  9. Wireless Indoor Location Estimation Based on Neural Network RSS Signature Recognition (LENSR)

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2008-06-01

    Location Based Services (LBS), context aware applications, and people and object tracking depend on the ability to locate mobile devices, also known as localization, in the wireless landscape. Localization enables a diverse set of applications that include, but are not limited to, vehicle guidance in an industrial environment, security monitoring, self-guided tours, personalized communications services, resource tracking, mobile commerce services, guiding emergency workers during fire emergencies, habitat monitoring, environmental surveillance, and receiving alerts. This paper presents a new neural network approach (LENSR) based on a competitive topological Counter Propagation Network (CPN) with k-nearest neighborhood vector mapping, for indoor location estimation based on received signal strength. The advantage of this approach is both speed and accuracy. The tested accuracy of the algorithm was 90.6% within 1 meter and 96.4% within 1.5 meters. Several approaches for location estimation using WLAN technology were reviewed for comparison of results.
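
    A hedged sketch of the nearest-neighbour step of RSS fingerprinting; the paper couples this with a Counter Propagation Network, which is not shown here. Fingerprints and coordinates are invented.

        import numpy as np

        fingerprints = np.array([                 # RSS from 3 APs (dBm)
            [-40, -70, -80], [-45, -65, -75], [-70, -45, -60], [-75, -40, -55]])
        coords = np.array([[0, 0], [0, 2], [4, 0], [4, 2]])  # survey points (m)

        def knn_locate(rss, k=2):
            d = np.linalg.norm(fingerprints - rss, axis=1)   # signal-space distance
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + 1e-9)                         # inverse-distance weights
            return (coords[nn] * w[:, None]).sum(axis=0) / w.sum()

        print(knn_locate(np.array([-44, -66, -76])))         # near (0, 2)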

  10. A Generalized Model for Indoor Location Estimation Using Environmental Sound from Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2018-02-01

    The indoor location of individuals is a key contextual variable for commercial and assisted location-based services and applications. Commercial centers and medical buildings (e.g., hospitals) require location information about their users/patients to offer the services that are needed at the correct moment. Several approaches have been proposed to tackle this problem. In this paper, we present the development of an indoor location system that relies on the human activity recognition approach, using sound as an information source to infer the indoor location from the contextual information of the activity being performed at the moment. A feature extraction approach applied to the sound signal feeds a random forest algorithm in order to generate a model that estimates the location of the user. We evaluate the quality of the resulting model in terms of sensitivity and specificity for each location, and we also perform out-of-bag error estimation. Our experiments were carried out in five representative residential homes, each with four individual indoor rooms. Eleven activities (brewing coffee, cooking eggs, taking a shower, etc.) were performed to provide the contextual information. Experimental results show that an indoor location system (ILS) that uses contextual information from human activities (identified from environmental sound data) can achieve estimates that are 95% correct.
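
    A minimal sketch of the pipeline described above on synthetic signals: simple spectral features from short audio frames feed a random forest that predicts a room label, with the out-of-bag score the paper reports on. The feature choices are illustrative assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def features(frame, fs=8000):
            spec = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(len(frame), 1 / fs)
            centroid = (freqs * spec).sum() / spec.sum()        # spectral centroid
            energy = float(np.mean(frame ** 2))                 # frame energy
            zcr = float(np.mean(np.diff(np.sign(frame)) != 0))  # zero crossings
            return [centroid, energy, zcr]

        rng = np.random.default_rng(0)
        X, y = [], []
        t = np.arange(512) / 8000
        for _ in range(100):
            X.append(features(rng.normal(size=512)))            # noisy "kitchen"
            y.append("kitchen")
            tone = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=512)
            X.append(features(tone))                            # tonal "bathroom"
            y.append("bathroom")

        clf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                     random_state=0).fit(X, y)
        print(clf.oob_score_)                                   # out-of-bag accuracy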

  11. Joint sensor location/power rating optimization for temporally-correlated source estimation

    KAUST Repository

    Bushnaq, Osama M.

    2017-12-22

    The optimal sensor selection for scalar state-parameter estimation in wireless sensor networks is studied in this paper. A subset of the N candidate sensing locations is selected to measure a state parameter and send the observations to a fusion center via a wireless AWGN channel. In addition to selecting the optimal sensing locations, the sensor type to be placed at these locations is selected from a pool of T sensor types, where different sensor types have different power ratings and costs. The sensor transmission power is limited by the amount of energy harvested at the sensing location and the type of the sensor. A Kalman filter is used to efficiently obtain the MMSE estimator at the fusion center. Sensors are selected such that the MMSE estimator error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.

  12. Location estimation in wireless sensor networks using spring-relaxation technique.

    Science.gov (United States)

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and the cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
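
    A hedged sketch of the spring-relaxation idea for a single node: the mismatch between measured ranges and current distance estimates acts as a spring force that iteratively pulls the position estimate into place. Anchors and ranges are invented.

        import numpy as np

        def spring_relax(pos, anchors, ranges, iters=500, step=0.1):
            # pos: initial (2,) guess; anchors: (m, 2); ranges: (m,) measured
            for _ in range(iters):
                diff = pos - anchors                        # anchor -> node vectors
                dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
                force = ((ranges - dist) / dist)[:, None] * diff  # spring forces
                pos = pos + step * force.mean(axis=0)
            return pos

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        ranges = np.array([7.1, 7.0, 7.2])                  # noisy ranges (m)
        print(spring_relax(np.array([1.0, 1.0]), anchors, ranges))  # ~(5, 5)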

  13. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2010-05-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and the cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.

  14. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e., using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  15. RSSI BASED LOCATION ESTIMATION IN A WI-FI ENVIRONMENT: AN EXPERIMENTAL STUDY

    Directory of Open Access Journals (Sweden)

    M. Ganesh Madhan

    2014-12-01

    In real-life situations, location estimation of moving objects and armed personnel is of great importance. In this paper, we attempt to locate mobile targets in a Wi-Fi environment. Radio frequency (RF) localization techniques based on received signal strength indication (RSSI) algorithms are used. This study utilises the WirelessMon tool, software that provides complete technical information on the received signal strength obtained from the different wireless access points available in the campus Wi-Fi environment considered for the study. All simulations have been done in MATLAB. The target location estimated by this approach agrees well with the actual GPS data.
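
    A minimal sketch of the ranging model commonly used in such RSSI studies: the log-distance path-loss model RSSI(d) = RSSI(d0) - 10 n log10(d/d0), inverted to turn a reading into a distance estimate. The exponent n and reference power below are assumed calibration values, not figures from the paper.

        def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=3.0):
            # Invert the log-distance model to get a range estimate in metres.
            return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

        for rssi in (-40, -55, -70):
            print(rssi, "dBm ->", round(rssi_to_distance(rssi), 1), "m")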

  16. Optimal Smoothing in Adaptive Location Estimation

    OpenAIRE

    Mammen, Enno; Park, Byeong U.

    1997-01-01

    In this paper, higher-order performance of kernel-based adaptive location estimators is considered. Optimal choice of smoothing parameters is discussed, and it is shown how much efficiency is lost by not knowing the underlying translation density.

  17. Time of arrival based location estimation for cooperative relay networks

    KAUST Repository

    Çelebi, Hasari Burak; Abdallah, Mohamed M.; Hussain, Syed Imtiaz; Qaraqe, Khalid A.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive the Cramer-Rao lower bound (CRLB) for the location estimates obtained using the relay network. The analysis is extended to obtain the average CRLB, accounting for signal fluctuations in both the relay and direct links. The effects of channel fading in the relay and direct links, the amplification factor, and the location of the relay node on the average CRLB are investigated. Simulation results show that all of these factors affect the accuracy of TOA-based location estimation.

  18. Time of arrival based location estimation for cooperative relay networks

    KAUST Repository

    Çelebi, Hasari Burak

    2010-09-01

    In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive the Cramer-Rao lower bound (CRLB) for the location estimates obtained using the relay network. The analysis is extended to obtain the average CRLB, accounting for signal fluctuations in both the relay and direct links. The effects of channel fading in the relay and direct links, the amplification factor, and the location of the relay node on the average CRLB are investigated. Simulation results show that all of these factors affect the accuracy of TOA-based location estimation.

  19. Location theory a unified approach

    CERN Document Server

    Nickel, Stefan

    2006-01-01

    Although modern location theory is now more than 90 years old, the focus of researchers in this area has been mainly problem oriented. However, a common theory, which keeps the essential characteristics of classical location models, is still missing. This monograph addresses this issue. A flexible location problem called the Ordered Median Problem (OMP) is introduced. For all three main subareas of location theory (continuous, network and discrete location) structural properties of the OMP are presented and solution approaches provided. Numerous illustrations and examples help the reader to bec

  20. Improved estimation of leak location of pipelines using frequency band variation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Sup [Embedded System Engineering Department, Incheon National University, Incheon (Korea, Republic of); Yoon, Dong Jin [Safety Measurement Center, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2014-02-15

    Leakage is an important factor to be considered in the management of underground water supply pipelines in a smart water grid system, especially if the pipelines are aged and buried under the pavement or various structures of a highly populated city. Because exact detection of the location of such leaks is essential for efficient operation of the pipelines, a new methodology for leak location detection based on frequency band variation, windowing filters, and probability is proposed in this paper. Because exact detection of the leak location depends on the precision of the estimated time delay between sensor signals caused by leak noise, window functions that apply weightings at significant frequencies are used to calculate an improved cross-correlation function. Experimental results obtained by applying this methodology to an actual buried water supply pipeline, ∼253.9 m long and made of cast iron, revealed that the frequency-band-variation approach with these windows and probability offers better performance for leak location detection.

  1. Improved estimation of leak location of pipelines using frequency band variation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2014-01-01

    Leakage is an important factor to be considered in the management of underground water supply pipelines in a smart water grid system, especially if the pipelines are aged and buried under the pavement or various structures of a highly populated city. Because exact detection of the location of such leaks is essential for efficient operation of the pipelines, a new methodology for leak location detection based on frequency band variation, windowing filters, and probability is proposed in this paper. Because exact detection of the leak location depends on the precision of the estimated time delay between sensor signals caused by leak noise, window functions that apply weightings at significant frequencies are used to calculate an improved cross-correlation function. Experimental results obtained by applying this methodology to an actual buried water supply pipeline, ∼253.9 m long and made of cast iron, revealed that the frequency-band-variation approach with these windows and probability offers better performance for leak location detection.
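
    A hedged sketch of the core time-delay step on synthetic signals: estimate the delay between two leak-noise recordings by cross-correlation restricted to a chosen frequency band, then convert the delay to a leak position along the pipe. The band limits, wave speed, and geometry are invented; the paper's windowing and probability steps are not reproduced.

        import numpy as np

        def leak_location(x1, x2, fs, L, c, band=None):
            # x1, x2: sensor signals; L: sensor spacing (m); c: wave speed (m/s)
            n = len(x1) + len(x2)
            S = np.fft.rfft(x1, n) * np.conj(np.fft.rfft(x2, n))
            if band is not None:                       # frequency-band variation
                f = np.fft.rfftfreq(n, 1 / fs)
                S[(f < band[0]) | (f > band[1])] = 0.0
            cc = np.fft.irfft(S, n)
            # reorder circular correlation to lags -(len(x2)-1) .. len(x1)-1
            cc2 = np.concatenate((cc[-(len(x2) - 1):], cc[:len(x1)]))
            tau = (np.argmax(cc2) - (len(x2) - 1)) / fs   # arrival-time difference
            return (L + c * tau) / 2.0                    # distance from sensor 1

        fs, c, L, d_true = 10_000, 1200.0, 100.0, 30.0
        noise = np.random.default_rng(0).normal(size=2 * fs)
        delay = int(round((L - 2 * d_true) / c * fs))     # x1 leads x2 (leak nearer)
        x1, x2 = noise[delay:delay + fs], noise[:fs]
        print(leak_location(x1, x2, fs, L, c, band=(100.0, 2000.0)))  # ~30 m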

  2. Estimation of distributed Fermat-point location for wireless sensor networking.

    Science.gov (United States)

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE uses a triangular location estimation area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point with the shortest total path to the three vertices of the triangle, and the estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
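
    A small sketch of the geometric ingredient named above, assuming the Fermat point is computed as the geometric median of the triangle's vertices via Weiszfeld's iteration; the DFPLE refinement step itself is not shown.

        import numpy as np

        def fermat_point(vertices, iters=200):
            p = vertices.mean(axis=0)                  # start at the centroid
            for _ in range(iters):
                d = np.maximum(np.linalg.norm(vertices - p, axis=1), 1e-12)
                w = 1.0 / d                            # Weiszfeld weights
                p = (vertices * w[:, None]).sum(axis=0) / w.sum()
            return p

        tri = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
        print(fermat_point(tri))    # minimizes total distance to the vertices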

  3. Reciprocal Estimation of Pedestrian Location and Motion State toward a Smartphone Geo-Context Computing Solution

    Directory of Open Access Journals (Sweden)

    Jingbin Liu

    2015-06-01

    The rapid advance in mobile communications has made information and services ubiquitously accessible. Location and context information have become essential for the effectiveness of services in the era of mobility. This paper proposes the concept of geo-context that is defined as an integral synthesis of geographical location, human motion state and mobility context. A geo-context computing solution consists of a positioning engine, a motion state recognition engine, and a context inference component. In the geo-context concept, the human motion states and mobility context are associated with the geographical location where they occur. A hybrid geo-context computing solution is implemented that runs on a smartphone, and it utilizes measurements of multiple sensors and signals of opportunity that are available within a smartphone. Pedestrian location and motion states are estimated jointly under the framework of hidden Markov models, and they are used in a reciprocal manner to improve their estimation performance of one another. It is demonstrated that pedestrian location estimation has better accuracy when its motion state is known, and in turn, the performance of motion state recognition can be improved with increasing reliability when the location is given. The geo-context inference is implemented simply with the expert system principle, and more sophisticated approaches will be developed.

  4. Methods to estimate historical daily streamflow for ungaged stream locations in Minnesota

    Science.gov (United States)

    Lorenz, David L.; Ziegeweid, Jeffrey R.

    2016-03-14

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water; however, streamgages cannot be installed at every location where streamflow information is needed. Therefore, methods for estimating streamflow at ungaged stream locations need to be developed. This report presents a statewide study to develop methods to estimate the structure of historical daily streamflow at ungaged stream locations in Minnesota. Historical daily mean streamflow at ungaged locations in Minnesota can be estimated by transferring streamflow data at streamgages to the ungaged location using the QPPQ method. The QPPQ method uses flow-duration curves at an index streamgage, relying on the assumption that exceedance probabilities are equivalent between the index streamgage and the ungaged location, and estimates the flow at the ungaged location using the estimated flow-duration curve. Flow-duration curves at ungaged locations can be estimated using recently developed regression equations that have been incorporated into StreamStats (http://streamstats.usgs.gov/), which is a U.S. Geological Survey Web-based interactive mapping tool that can be used to obtain streamflow statistics, drainage-basin characteristics, and other information for user-selected locations on streams.
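
    A minimal sketch of the QPPQ idea on synthetic records: each daily flow at the index streamgage is converted to an exceedance probability through the index flow-duration curve (Q to P), and the flow with that probability is read off the ungaged site's estimated flow-duration curve (P to Q). The regression-based curve below is a synthetic stand-in, not StreamStats output.

        import numpy as np

        rng = np.random.default_rng(0)
        index_record = rng.lognormal(mean=3.0, sigma=1.0, size=3650)  # index gage

        probs = np.linspace(0.01, 0.99, 99)     # exceedance probabilities
        # stand-in for the regression-estimated curve at the ungaged site:
        ungaged_fdc = np.quantile(rng.lognormal(2.5, 0.8, 3650), 1 - probs)

        def qppq(daily_q, index_record, probs, target_fdc):
            srt = np.sort(index_record)
            # Q -> P: empirical exceedance probability at the index streamgage
            p = 1.0 - np.searchsorted(srt, daily_q, side="right") / len(srt)
            # P -> Q: interpolate the target site's flow-duration curve
            return np.interp(p, probs, target_fdc)

        print(qppq(index_record[:5], index_record, probs, ungaged_fdc))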

  5. Artificial Neural Network for Location Estimation in Wireless Communication Systems

    Directory of Open Access Journals (Sweden)

    Chien-Sheng Chen

    2012-03-01

    In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and angle of arrival (AOA) information to locate the MS when three base stations (BSs) are available. Artificial neural networks (ANN) are widely used in various areas to overcome the problem of exclusive and nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of the three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithm can obtain more precise location estimates under different NLOS environments.

  6. Artificial neural network for location estimation in wireless communication systems.

    Science.gov (United States)

    Chen, Chien-Sheng

    2012-01-01

    In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and angle of arrival (AOA) information to locate the MS when three base stations (BSs) are available. Artificial neural networks (ANN) are widely used in various areas to overcome the problem of exclusive and nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of the three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithm can obtain more precise location estimates under different NLOS environments.

  7. Highly Efficient Estimators of Multivariate Location with High Breakdown Point

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1991-01-01

    We propose an affine equivariant estimator of multivariate location that combines a high breakdown point and a bounded influence function with high asymptotic efficiency. This proposal is basically a location $M$-estimator based on the observations obtained after scaling with an affine equivariant

  8. Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking

    Directory of Open Access Journals (Sweden)

    Yanuarius Teofilus Larosa

    2011-04-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE uses a triangular location estimation area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point with the shortest total path to the three vertices of the triangle, and the estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.

  9. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

    We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of

  10. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for the x and y coordinates of the location estimate. We analyze the effects of the cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate obtained using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations.

  11. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported.

  12. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  13. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

    Background: The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of the LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods: In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter, and the generalized linear modelling framework is a useful re-parameterization that is particularly helpful for generating profile-likelihood-based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches, and a health utilization data set is used for illustration. Results: Both the simulation results and the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When the incidence of the outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small-sample situations. Conclusion: The LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using the methods presented in this paper will make the measure particularly attractive for policy and decision makers.
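
    A hedged sketch of one generic approach named above: a delta-method confidence interval on the log scale, treating the event counts as Poisson and the populations as fixed. This is a simplifying assumption for illustration, not the paper's full development.

        import numpy as np
        from scipy.stats import norm

        def lq_delta_ci(a, b, c, d, level=0.95):
            # LQ = (a/b) / (c/d): regional rate over overall rate
            lq = (a / b) / (c / d)
            se_log = np.sqrt(1.0 / a + 1.0 / c)     # delta-method SE of log(LQ)
            z = norm.ppf(0.5 + level / 2.0)
            return lq, lq * np.exp(-z * se_log), lq * np.exp(z * se_log)

        print(lq_delta_ci(a=30, b=10_000, c=900, d=600_000))  # LQ = 2.0, ~95% CI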

  14. Estimation of bladder wall location in ultrasound images.

    Science.gov (United States)

    Topper, A K; Jernigan, M E

    1991-05-01

    A method of automatically estimating the location of the bladder wall in ultrasound images is proposed. Obtaining this estimate is intended to be the first stage in the development of an automatic bladder volume calculation system. The first step in the bladder wall estimation scheme involves globally processing the images using standard image processing techniques to highlight the bladder wall. Separate processing sequences are required to highlight the anterior bladder wall and the posterior bladder wall. The sequence to highlight the anterior bladder wall involves Gaussian smoothing and second differencing followed by zero-crossing detection. Median filtering followed by thresholding and gradient detection is used to highlight as much of the rest of the bladder wall as was visible in the original images. Then a 'bladder wall follower', a line follower with rules based on the characteristics of ultrasound imaging and the anatomy involved, is applied to the processed images to estimate the bladder wall location by following the portions of the bladder wall which are highlighted and filling in the missing segments. The results achieved using this scheme are presented.

  15. Modern optimization algorithms for fault location estimation in power systems

    Directory of Open Access Journals (Sweden)

    A. Sanad Ahmed

    2017-10-01

    This paper presents a fault location estimation approach for two-terminal transmission lines using the Teaching-Learning-Based Optimization (TLBO) technique and the Harmony Search (HS) technique. Previous methods, such as the Genetic Algorithm (GA), Artificial Bee Colony (ABC), artificial neural networks (ANN), and cause and effect (C&E), are also discussed, along with the advantages and disadvantages of each. Inputs to the proposed techniques are post-fault voltages and currents measured at both ends, along with the line parameters. This paper deals with several types of faults: L-L-L, L-L-L-G, L-L-G and L-G. The model was simulated in SIMULINK, with the initial inputs passed from SIMULINK to MATLAB, where the objective function specifies the fault location with very high accuracy and precision within a very short time. Future work on the benefits of using Differential Learning TLBO (DLTLBO) is discussed as well.

  16. Simultaneous head tissue conductivity and EEG source location estimation.

    Science.gov (United States)

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with a simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated the BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality.

  17. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.; Lombard, F.

    2012-01-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal

  18. Sensor-Data Fusion for Multi-Person Indoor Location Estimation.

    Science.gov (United States)

    Mohebbi, Parisa; Stroulia, Eleni; Nikolaidis, Ioanis

    2017-10-18

    We consider the problem of estimating the location of people as they move and work in indoor environments. More specifically, we focus on the scenario where one of the persons of interest is unable or unwilling to carry a smartphone, or any other "wearable" device, which frequently arises in caregiver/cared-for situations. We consider the case of indoor spaces populated with anonymous binary sensors (Passive Infrared motion sensors) and eponymous wearable sensors (smartphones interacting with Estimote beacons), and we propose a solution to the resulting sensor-fusion problem. Using a data set with sensor readings collected from one-person and two-person sessions engaged in a variety of activities of daily living, we investigate the relative merits of relying solely on anonymous sensors, solely on eponymous sensors, or on their combination. We examine how the lack of synchronization across different sensing sources impacts the quality of location estimates, and discuss how it could be mitigated without resorting to device-level mechanisms. Finally, we examine the trade-off between the sensors' coverage of the monitored space and the quality of the location estimates.

  19. Two Approaches for Estimating Discharge on Ungauged Basins in Oregon, USA

    Science.gov (United States)

    Wigington, P. J.; Leibowitz, S. G.; Comeleo, R. L.; Ebersole, J. L.; Copeland, E. A.

    2009-12-01

    Detailed information on the hydrologic behavior of streams is available for only a small proportion of all streams. Even in cases where discharge has been monitored, these measurements may not be available for a sufficiently long period to characterize the full behavior of a stream. In this presentation, we discuss two separate approaches for predicting discharge at ungauged locations. The first approach models discharge in the Calapooia Watershed, Oregon based on long-term US Geological Survey gauge stations located in two adjacent watersheds. Since late 2008, we have measured discharge and water level over a range of flow conditions at more than a dozen sites within the Calapooia. Initial results indicate that discharge at many of these sites, including the mainstem Calapooia and some of its tributaries, can be predicted from these outside gauge stations and simple landscape factors. This is not a true “ungauged” approach, since measurements are required to characterize the range of flow. However, the approach demonstrates how such measurements and more complete data from similar areas can be used to estimate a detailed record for a longer period. The second approach estimates 30 year average monthly discharge at ungauged locations based on a Hydrologic Landscape Region (HLR) model. We mapped HLR class over the entire state of Oregon using an assessment unit with an average size of 44 km2. We then calculated average statewide moisture surplus values for each HLR class, modified to account for snowpack accumulation and snowmelt. We calculated potential discharge by summing these values for each HLR within a watershed. The resulting monthly hydrograph is then transformed to estimate monthly discharge, based on aquifer and soil permeability and terrain. We hypothesize that these monthly values should provide good estimates of discharge in areas where imports from or exports to the deep groundwater system are not significant. We test the approach by comparing results with

  20. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.

  1. Multi-Level Interval Estimation for Locating damage in Structures by Using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Pan Danguang; Gao Yanhua; Song Junlei

    2010-01-01

    A new analysis technique, called the multi-level interval estimation method, is developed for locating damage in structures. In this method, artificial neural network (ANN) analysis is combined with statistical theory to estimate the range of the damage location. The ANN is a multilayer perceptron trained by back-propagation. Natural frequencies and mode shapes at a few selected points are used as input to identify the location and severity of damage. For large-scale structures with many elements, the multi-level interval estimation method reduces the estimated range of the damage location step by step. At every step, an estimated range of the damage location is obtained from the ANN output using interval estimation. The next set of ANN training cases is selected from this range after a linear transform, and the new ANN output yields a reduced estimation range. Two numerical example analyses, on a 10-bar truss and a 100-bar truss, are presented to demonstrate the effectiveness of the proposed method.

  2. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either a normal or a logistic population, or a fusion of both distributions, and the estimates are computed ...

  3. Sensor-Data Fusion for Multi-Person Indoor Location Estimation

    Directory of Open Access Journals (Sweden)

    Parisa Mohebbi

    2017-10-01

    We consider the problem of estimating the location of people as they move and work in indoor environments. More specifically, we focus on the scenario where one of the persons of interest is unable or unwilling to carry a smartphone, or any other “wearable” device, which frequently arises in caregiver/cared-for situations. We consider the case of indoor spaces populated with anonymous binary sensors (Passive Infrared motion sensors) and eponymous wearable sensors (smartphones interacting with Estimote beacons), and we propose a solution to the resulting sensor-fusion problem. Using a data set with sensor readings collected from one-person and two-person sessions engaged in a variety of activities of daily living, we investigate the relative merits of relying solely on anonymous sensors, solely on eponymous sensors, or on their combination. We examine how the lack of synchronization across different sensing sources impacts the quality of location estimates, and discuss how it could be mitigated without resorting to device-level mechanisms. Finally, we examine the trade-off between the sensors’ coverage of the monitored space and the quality of the location estimates.

  4. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither magnitude nor direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is further shown to provide unbiased point and variance estimates as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
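
    A minimal sketch of the stratified-random-sampling estimator that underlies the comparison: within-stratum sample means (per-hour strata of one-minute counts here) are expanded to stratum totals and summed, with the textbook variance estimate. All counts are invented.

        import numpy as np

        def strs_total(strata):
            # strata: list of (N_h, counts_h), with N_h sampling units in
            # stratum h and counts_h the fish counts in the sampled units.
            total, var = 0.0, 0.0
            for N, counts in strata:
                y = np.asarray(counts, dtype=float)
                n = len(y)
                total += N * y.mean()
                var += N**2 * (1 - n / N) * y.var(ddof=1) / n
            return total, np.sqrt(var)

        hours = [(60, [3, 5, 2, 4]), (60, [8, 6, 9, 7]), (60, [1, 0, 2, 1])]
        est, se = strs_total(hours)
        print(est, se)        # estimated total passage and its standard error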

  5. The first Australian gravimetric quasigeoid model with location-specific uncertainty estimates

    Science.gov (United States)

    Featherstone, W. E.; McCubbine, J. C.; Brown, N. J.; Claessens, S. J.; Filmer, M. S.; Kirby, J. F.

    2018-02-01

    We describe the computation of the first Australian quasigeoid model to include error estimates as a function of location that have been propagated from uncertainties in the EGM2008 global model, land and altimeter-derived gravity anomalies and terrain corrections. The model has been extended to include Australia's offshore territories and maritime boundaries using newer datasets comprising an additional ~280,000 land gravity observations, a newer altimeter-derived marine gravity anomaly grid, and terrain corrections at 1″ × 1″ resolution. The error propagation uses a remove-restore approach, where the EGM2008 quasigeoid and gravity anomaly error grids are augmented by errors propagated through a modified Stokes integral from the errors in the altimeter gravity anomalies, land gravity observations and terrain corrections. The gravimetric quasigeoid errors (one sigma) are 50-60 mm across most of the Australian landmass, increasing to ~100 mm in regions of steep horizontal gravity gradients or the mountains, and are commensurate with external estimates.

  6. Error Estimation for Indoor 802.11 Location Fingerprinting

    DEFF Research Database (Denmark)

    Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene

    2009-01-01

    providers could adapt their delivered services based on the estimated position error to achieve a higher service quality. Finally, system operators could use the information to inspect whether a location system provides satisfactory positioning accuracy throughout the covered area. For position error...

  7. Multi-location model for the estimation of the horizontal daily diffuse fraction of solar radiation in Europe

    International Nuclear Information System (INIS)

    Bortolini, Marco; Gamberi, Mauro; Graziani, Alessandro; Manzini, Riccardo; Mora, Cristina

    2013-01-01

    Highlights: ► A multi-location model to estimate solar radiation components is proposed. ► The proposed model joins solar radiation data from several weather stations. ► The clearness index is correlated to the diffuse component through analytic functions. ► A third-degree polynomial function best fits the data for annual and seasonal scenarios. ► A quality control procedure and independent datasets strengthen model performance. - Abstract: Hourly and daily solar radiation data are crucial for the design of energy systems based on the solar source. Global irradiance, measured on the horizontal plane, is generally available from weather station databases. The direct and diffuse fractions are rarely measured and must be calculated analytically for many geographical locations. The aim of this paper is to present a multi-location model to estimate the expected profiles of the horizontal daily diffuse component of solar radiation. It focuses on the European (EU) geographical area, joining data from 44 weather stations located in 11 countries. The data were collected by the World Radiation Data Centre (WRDC) between 2004 and 2007. Different analytic functions correlating the daily diffuse fraction of solar radiation to the clearness index are calculated and compared to obtain the analytic expressions of the best-fitting curves. The effect of seasonality on solar irradiance is considered by developing summer and winter scenarios together with annual models. Similarities among the trends for the 4 years are further discussed. The most widely adopted statistical indices are used as key performance factors. Finally, data from three locations not included in the dataset considered for model development allow the proposed approach to be tested against an independent dataset. The obtained results show the effectiveness of adopting a multi-location approach to estimate solar radiation components on the horizontal surface instead of developing several single-location models. This is due to the increase
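
    As an illustration of the kind of correlation the model fits, the sketch below fits a third-degree polynomial (the form reported to fit best) relating the daily diffuse fraction to the clearness index; the data here are synthetic stand-ins for the WRDC observations:

```python
import numpy as np

rng = np.random.default_rng(42)
kt = rng.uniform(0.1, 0.8, 500)                            # daily clearness index H/H0
kd_clean = np.clip(1.05 - 0.4*kt - 1.1*kt**2 + 0.5*kt**3, 0.05, 1.0)
kd = np.clip(kd_clean + rng.normal(0.0, 0.05, kt.size), 0.0, 1.0)  # diffuse fraction

coeffs = np.polyfit(kt, kd, deg=3)       # third-degree polynomial fit
kd_hat = np.polyval(coeffs, kt)
rmse = float(np.sqrt(np.mean((kd - kd_hat) ** 2)))
print(coeffs, rmse)
```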

  8. Use of the Geographically Weighted Regression (GWR) Method to Estimate the Effects of Location Attributes on the Residential Property Values

    Directory of Open Access Journals (Sweden)

    Mohd Faris Dziauddin

    2017-07-01

    This study estimates the effect of locational attributes on residential property values in Kuala Lumpur, Malaysia. Geographically weighted regression (GWR) enables local rather than global parameters to be estimated, with the results presented in map form. The results of this study reveal that residential property values are mainly determined by the property's physical (structural) attributes, but proximity to locational attributes also contributes marginally. The use of GWR in this study is considered a better approach than other methods to examine the effect of locational attributes on residential property values. GWR has the capability to produce meaningful results in which different locational attributes have differential spatial effects on residential property values across a geographical area. This method can determine the factors on which premiums depend, and in turn it can assist the government in taxation matters.
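
    A bare-bones version of GWR illustrates how local rather than global coefficients are obtained: an ordinary least-squares fit is repeated at each location, with observations down-weighted by a Gaussian kernel of their distance. The sketch below makes standard assumptions (Gaussian kernel, fixed bandwidth) and is not the study's implementation:

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Local coefficients at every observation point using a Gaussian
    distance-decay kernel: weighted least squares refitted per location."""
    n = X.shape[0]
    Xd = np.hstack([np.ones((n, 1)), X])              # intercept column
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights
        XtW = Xd.T * w                                # weight each observation
        betas[i] = np.linalg.solve(XtW @ Xd, XtW @ y)
    return betas

# toy data: price driven by floor area, with a spatially varying premium
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (200, 2))
area = rng.uniform(50, 200, 200)
price = 1000 * area * (1 + 0.05 * coords[:, 0]) + rng.normal(0, 5000, 200)
b = gwr_coefficients(area[:, None], price, coords, bandwidth=2.0)
print(b[:3])   # local [intercept, area] coefficients at the first three homes
```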

  9. Location, location, location: Extracting location value from house prices

    OpenAIRE

    Kolbe, Jens; Schulz, Rainer; Wersing, Martin; Werwatz, Axel

    2012-01-01

    The price for a single-family house depends both on the characteristics of the building and on its location. We propose a novel semiparametric method to extract location values from house prices. After splitting house prices into building and land components, location values are estimated with adaptive weight smoothing. The adaptive estimator requires neither strong smoothness assumptions nor local symmetry. We apply the method to house transactions from Berlin, Germany. The estimated surface...

  10. A hybrid stochastic approach for self-location of wireless sensors in indoor environments.

    Science.gov (United States)

    Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro

    2009-01-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach based on a combination of deductive and inductive methods whereby wireless sensors can determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment without any loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.

  11. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    Science.gov (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified

  12. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints derived from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation relating the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The quality of the estimator, in terms of whether it is biased or unbiased, is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method. The AOA seen at base stations may be corrected to some degree. The performance of the proposed method is compared with other hybrid location methods on different NLOS error models and for two scenarios of cell layout. It is found that the proposed method deals with NLOS errors effectively, which makes it attractive for location estimation in cellular networks.

  13. A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments

    Directory of Open Access Journals (Sweden)

    Alejandro Canovas

    2009-05-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach based on a combination of deductive and inductive methods whereby wireless sensors can determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment without any loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.

  14. Population Exposure Estimates in Proximity to Nuclear Power Plants, Locations

    Data.gov (United States)

    National Aeronautics and Space Administration — The Population Exposure Estimates in Proximity to Nuclear Power Plants, Locations data set combines information from a global data set developed by Declan Butler of...

  15. Method to Locate Contaminant Source and Estimate Emission Strength

    Directory of Open Access Journals (Sweden)

    Qu Hongquan

    2013-01-01

    Air quality in confined spaces such as spacecraft, aircraft, and submarines is of great concern. As residence time in such confined spaces increases, contaminant pollution becomes a main factor endangering life. It is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able to locate the position and estimate the emission strength of the contaminant source. In this paper, an identification method was developed to achieve both aims. The method is based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm was derived to locate the source position, and a Kalman filter was used to further estimate the contaminant emission strength. The method can track and predict the source strength dynamically, and it can also predict the distribution of contaminant concentration. Simulation results have shown the virtues of the method.
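
    The strength-estimation step can be illustrated with a scalar Kalman filter. The sketch below is a minimal stand-in, not the authors' discrete stochastic model: it assumes a hypothetical one-zone balance c[k+1] = a*c[k] + b*q[k] and treats the unknown strength q as a random walk.

```python
import numpy as np

def estimate_emission_strength(c_meas, a, b, q0=0.0, P0=1.0, Qn=1e-4, Rn=1e-2):
    """Scalar Kalman filter tracking a slowly varying emission strength q from
    measured concentrations, assuming c[k+1] = a*c[k] + b*q[k] + noise
    (a, b would come from a zone balance: ventilation rate and volume)."""
    q, P = q0, P0
    q_hist = []
    for k in range(len(c_meas) - 1):
        P += Qn                                   # predict: q is a random walk
        y = c_meas[k + 1] - a * c_meas[k]         # unexplained concentration change
        S = b * P * b + Rn                        # innovation variance
        K = P * b / S                             # Kalman gain
        q += K * (y - b * q)                      # update strength estimate
        P *= (1.0 - K * b)
        q_hist.append(q)
    return np.array(q_hist)

# synthetic check: constant true source q = 2.0, with a = 0.9, b = 0.05
rng = np.random.default_rng(0)
c = [0.0]
for _ in range(300):
    c.append(0.9 * c[-1] + 0.05 * 2.0 + rng.normal(0, 0.01))
print(estimate_emission_strength(np.array(c), 0.9, 0.05)[-1])   # ~2.0
```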

  16. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid

    2017-05-12

    The estimation of the angular location and range of a target is a joint optimization problem. In this work, to estimate these parameters, low-complexity sequential and joint estimation algorithms are proposed that meticulously evaluate the phase of the received samples. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. It is shown that, by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean-squared error (MSE) for the estimation of the angular location and a much lower run-time than the conventional MUltiple SIgnal Classification (MUSIC) algorithm.
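
    To see why an FFT suffices, note that for an FMCW chirp the beat frequency is proportional to range (f_b = 2RS/c, with S the chirp slope), while the phase progression across the SIMO receive array encodes the angle. The toy simulation below, with entirely hypothetical radar parameters, recovers both from the peak of a two-dimensional FFT:

```python
import numpy as np

c = 3e8                                   # speed of light (m/s)
fc, B = 77e9, 150e6                       # carrier and sweep bandwidth (hypothetical)
fs, Ns, Na = 1e6, 1024, 8                 # sample rate, samples/chirp, antennas
S = B / (Ns / fs)                         # chirp slope (Hz/s)
d = 0.5 * c / fc                          # half-wavelength element spacing (m)

R_true, theta_true = 90.0, np.deg2rad(20.0)
t = np.arange(Ns) / fs
fb = 2.0 * R_true * S / c                 # beat frequency encodes range
phi = 2.0 * np.pi * d * np.sin(theta_true) * fc / c * np.arange(Na)  # angle phase
X = np.exp(1j * (2.0 * np.pi * fb * t[:, None] + phi[None, :]))      # beat samples

Z = np.fft.fft2(X, s=(4 * Ns, 256))       # fast time -> range, array axis -> angle
i, j = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)
R_hat = (i * fs / (4 * Ns)) * c / (2.0 * S)
theta_hat = np.arcsin(np.fft.fftfreq(256)[j] * c / (d * fc))
print(R_hat, np.rad2deg(theta_hat))       # ~90 m, ~20 degrees
```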

  17. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid; Jardak, Seifallah; Alouini, Mohamed-Slim

    2017-01-01

    The estimation of the angular location and range of a target is a joint optimization problem. In this work, to estimate these parameters, low-complexity sequential and joint estimation algorithms are proposed that meticulously evaluate the phase of the received samples. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. It is shown that, by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean-squared error (MSE) for the estimation of the angular location and a much lower run-time than the conventional MUltiple SIgnal Classification (MUSIC) algorithm.

  18. Development of a method for estimating oesophageal temperature by multi-locational temperature measurement inside the external auditory canal

    Science.gov (United States)

    Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi

    2017-09-01

    We aimed to develop a practical method to estimate oesophageal temperature by measuring multi-locational auditory canal temperatures. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working in hot environments. We asked 11 healthy male volunteers to exercise, generating 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C and 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, the rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperature from the auditory canal temperatures and compared their fit and errors. The rectal temperature increased or decreased faster than the oesophageal temperature at the start or end of exercise in all conditions. The estimated temperature agreed well with the oesophageal temperature, and the square of the correlation coefficient of the best-fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, the estimated oesophageal temperature tracked the measured oesophageal temperature with an overestimation of about 0.1 °C. Our method measured temperatures at three different locations along the external auditory canal. We confirmed that the approach can credibly estimate oesophageal temperature from 24 to 40 °C for people exercising in one place in a windless environment.

  19. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    One of the central issues in wireless sensor networks is tracking the location of a moving object, which entails the overhead of storing data and requires an accurate estimate of the target's location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the gathered information can be fed back, for example, to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on a prediction and adaptive algorithm that needs fewer sensor nodes thanks to an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  20. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.; Hussain, Syed Imtiaz; Ç elebi, Hasari Burak; Abdallah, Mohamed M.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine

  1. Improvements to Earthquake Location with a Fuzzy Logic Approach

    Science.gov (United States)

    Gökalp, Hüseyin

    2018-01-01

    In this study, improvements to the earthquake location method were investigated using a fuzzy logic approach proposed by Lin and Sanford (Bull Seismol Soc Am 91:82-93, 2001). The method has certain advantages over inverse methods in terms of handling the uncertainties of arrival times and reading errors. Adopting this approach, epicentral locations were determined in a fuzzy logic space that accounts for the uncertainties in the velocity models. To map the uncertainties in arrival times into the fuzzy logic space, a trapezoidal membership function was constructed directly from the travel time difference between two stations for the P- and S-arrival times, instead of from P- and S-wave velocity models, eliminating the need for information about the velocity structure of the study area. The results showed that this method works most effectively when earthquakes occur away from a network or when the arrival time data contain phase reading errors. To determine the epicentral locations of the events, a forward modeling method akin to a grid search was used, applying different logical operations (i.e., intersection, union, and their combination) within the fuzzy logic approach. The event locations were obtained from the fuzzy logic outputs by searching over a gridded region. Defuzzifying only the grid points whose membership value reached 1, obtained by normalizing all the fuzzy outputs by their maximum, resulted in more reliable epicentral locations for the earthquakes than the other approaches. Throughout the process, the center-of-gravity method was used as the defuzzification operation.
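
    The core of the approach can be sketched as a grid search in which each pair of stations contributes a trapezoidal membership over the predicted inter-station travel time difference, the memberships are combined by intersection (min), and the highest-membership cells are defuzzified by their centre of gravity. All numbers below (station geometry, velocity, tolerances) are hypothetical:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    # membership 1 on [b, c], tapering linearly to 0 at a and d
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [35.0, 40.0]])  # km
true_src = np.array([22.0, 18.0])
v = 6.0                                           # assumed P velocity (km/s)
t_obs = np.hypot(*(stations - true_src).T) / v    # synthetic arrival times (s)
tol = 0.4                                         # taper half-width for pick errors (s)

xs, ys = np.meshgrid(np.linspace(-20, 60, 321), np.linspace(-20, 60, 321))
mu = np.ones_like(xs)                             # running intersection (min)
for i in range(len(stations)):
    for j in range(i + 1, len(stations)):
        di = np.hypot(xs - stations[i, 0], ys - stations[i, 1])
        dj = np.hypot(xs - stations[j, 0], ys - stations[j, 1])
        dt_pred = (di - dj) / v                   # predicted time difference
        dt = t_obs[i] - t_obs[j]
        mu = np.minimum(mu, trapezoid(dt_pred, dt - 2 * tol, dt - tol,
                                      dt + tol, dt + 2 * tol))

best = mu >= mu.max()                             # grid points of maximal membership
print(xs[best].mean(), ys[best].mean())           # centre of gravity ~ (22, 18)
```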

  2. Generation of common coefficients to estimate global solar radiation over different locations of India

    Science.gov (United States)

    Samanta, Suman; Patra, Pulak Kumar; Banerjee, Saon; Narsimhaiah, Lakshmi; Sarath Chandran, M. A.; Vijaya Kumar, P.; Bandyopadhyay, Sanjib

    2018-06-01

    In developing countries like India, global solar radiation (GSR) is measured at very few locations due to the non-availability of radiation measuring instruments. To overcome the inadequacy of GSR measurements, scientists have developed many empirical models to estimate location-wise GSR. In the present study, three simple forms of the Angstrom equation [Angstrom-Prescott (A-P), Ogelman, and Bahel] were used to estimate GSR at six geographically and climatologically different locations across India, with the objective of finding a set of common constants usable for the whole country. Results showed that GSR values varied from 9.86 to 24.85 MJ m-2 day-1 across the stations. It was also observed that the A-P model showed smaller errors than the Ogelman and Bahel models. All the models estimated GSR well, as the 1:1 line between measured and estimated values showed Nash-Sutcliffe efficiency (NSE) values ≥ 0.81 for all locations. Measured GSR data pooled over the six selected locations were analyzed to obtain a new set of constants for the A-P equation applicable throughout the country. The set of constants (a = 0.29 and b = 0.40) was named "One India One Constant (OIOC)," and the model was named "MOIOC." Furthermore, the developed constants were validated statistically for another six locations in India and produced close estimates. High R2 values (≥ 76%), along with low mean bias errors (MBE) ranging from -0.64 to 0.05 MJ m-2 day-1, revealed that the new constants can predict GSR with a small percentage of error.
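
    With the reported OIOC constants, the estimate is a one-line computation. The sketch below assumes the standard A-P form H = H0·(a + b·n/N), where n/N is the relative sunshine duration and H0 the extraterrestrial radiation (the abstract does not restate the equation, so this form is an assumption):

```python
import numpy as np

def gsr_angstrom_prescott(H0, n, N, a=0.29, b=0.40):
    """Daily global solar radiation from the Angstrom-Prescott relation
    H = H0 * (a + b * n/N), using the pooled OIOC constants from the abstract.
    H0: extraterrestrial radiation (MJ m-2 day-1); n: bright sunshine hours;
    N: maximum possible sunshine hours."""
    return np.asarray(H0) * (a + b * np.asarray(n) / np.asarray(N))

# e.g. H0 = 35 MJ m-2 day-1 with 8 h of sunshine out of a possible 12 h
print(gsr_angstrom_prescott(35.0, 8.0, 12.0))   # ~19.5 MJ m-2 day-1
```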

  3. ESTIMATION OF AGING EFFECTS OF PILES IN MALAYSIAN OFFSHORE LOCATIONS

    Directory of Open Access Journals (Sweden)

    JERIN M. GEORGE

    2017-04-01

    An increasing demand for life extension and the consequently higher loading requirements of offshore jacket platforms are among the key problems faced by the offshore industry. Aging has been shown to increase the axial capacity of piles, but proper methods to estimate and quantify these effects have not been developed. Borehole data from ten different Malaysian offshore locations were analysed and used to estimate the setup factor for each location using the AAU method. The setup factors found were used in the Skov and Denver equation to calculate capacity ratios of the offshore piles. The study showed an average improvement in the axial capacity of offshore piles of 42.2% for clayey soils and 34.9% for mixed soils after a time equal to the normal design life (25 years) of a jacket platform.
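
    The Skov and Denver relation is commonly written as Q(t)/Q(t0) = 1 + A·log10(t/t0), with A the setup factor and t0 a reference time. A minimal sketch follows; the value of A below is back-calculated purely for illustration so that a 25-year design life reproduces roughly the reported 42.2% clay-soil gain (the paper derives its setup factors from the borehole data):

```python
import math

def skov_denver_ratio(t_days, t0_days=1.0, A=0.107):
    """Skov & Denver setup relation Q(t)/Q(t0) = 1 + A*log10(t/t0).
    A and t0 here are illustrative placeholders, not values from the paper."""
    return 1.0 + A * math.log10(t_days / t0_days)

print(skov_denver_ratio(25 * 365))   # ~1.42, i.e. ~42% capacity improvement
```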

  4. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    Science.gov (United States)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. We use here, and tailor to remotely sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall while taking full advantage of its high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows (i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and (ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
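
    The MEVD itself is simple to state: fit an ordinary-event distribution (commonly a Weibull) to each year's wet days, then average, over years, that cdf raised to the year's number of wet days. A sketch under these standard assumptions, with synthetic data standing in for satellite rainfall:

```python
import numpy as np
from scipy import stats

def fit_years(yearly_rain, wet_thresh=1.0):
    """Fit a Weibull to each year's wet-day ('ordinary') rainfall amounts."""
    shapes, scales, n_events = [], [], []
    for r in yearly_rain:
        wet = r[r > wet_thresh]
        c, _, s = stats.weibull_min.fit(wet, floc=0.0)
        shapes.append(c); scales.append(s); n_events.append(wet.size)
    return shapes, scales, n_events

def mevd_cdf(x, shapes, scales, n_events):
    """MEVD cdf of the annual maximum: mean over years of F_j(x) ** n_j."""
    F = [stats.weibull_min.cdf(x, c, scale=s) ** n
         for c, s, n in zip(shapes, scales, n_events)]
    return np.mean(F, axis=0)

rng = np.random.default_rng(3)
years = [rng.weibull(0.8, rng.integers(80, 130)) * 12.0 for _ in range(10)]
x = np.linspace(20.0, 250.0, 1000)
F = mevd_cdf(x, *fit_years(years))
print(x[np.searchsorted(F, 1.0 - 1.0 / 50.0)])   # ~50-year daily rainfall (mm)
```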

  5. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended for developing the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises

  6. Query Language for Location-Based Services: A Model Checking Approach

    Science.gov (United States)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.

  7. Centralization Versus Decentralization: A Location Analysis Approach for Librarians.

    Science.gov (United States)

    Shishko, Robert; Raffel, Jeffrey

    One of the questions that seems to perplex many university and special librarians is whether to move in the direction of centralizing or decentralizing the library's collections and facilities. Presented is a theoretical approach, employing location theory, to the library centralization-decentralization question. Location theory allows the analyst…

  8. Location optimization of solar plants by an integrated hierarchical DEA PCA approach

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Maghsoudi, A.

    2008-01-01

    Unique features of renewable energies such as solar energy have caused increasing demand for such resources. In order to use solar energy as a natural resource, environmental circumstances and the geographical location related to solar intensity must be considered. Different factors may affect the selection of a suitable location for solar plants; these factors must be considered concurrently for optimum location identification. This article presents an integrated hierarchical approach for the location of solar plants by data envelopment analysis (DEA), principal component analysis (PCA) and numerical taxonomy (NT). Furthermore, an integrated hierarchical DEA approach incorporating the most relevant parameters of solar plants is introduced. Moreover, two multivariate methods, namely PCA and NT, are used to validate the results of the DEA model. The prescribed approach is tested for 25 different cities in Iran, with 6 different regions within each city. This is the first study to consider an integrated hierarchical DEA approach for the geographical location optimization of solar plants. Implementation of the proposed approach would enable energy policy makers to select the best possible location for the construction of a solar power plant at the lowest possible cost

  9. Technical Note: A comparison of two empirical approaches to estimate in-stream net nutrient uptake

    Science.gov (United States)

    von Schiller, D.; Bernal, S.; Martí, E.

    2011-04-01

    To establish the relevance of in-stream processes to nutrient export at the catchment scale, it is important to accurately estimate whole-reach net nutrient uptake rates that consider both uptake and release processes. Two empirical approaches have been used in the literature to estimate these rates: (a) the mass balance approach, which considers changes in ambient nutrient loads, corrected for groundwater inputs, between two stream locations separated by a certain distance, and (b) the spiralling approach, which is based on the patterns of longitudinal variation in ambient nutrient concentrations along a reach, following the nutrient spiralling concept. In this study, we compared the estimates of in-stream net uptake rates of nitrate (NO3) and ammonium (NH4), and the associated uncertainty, obtained with these two approaches under different ambient conditions, using a data set of monthly samplings in two contrasting stream reaches during two hydrological years. Overall, the rates calculated with the mass balance approach tended to be higher than those calculated with the spiralling approach only at high ambient nitrogen (N) concentrations. The uncertainty associated with these estimates also differed between the two approaches, especially for NH4, due to the general lack of significant longitudinal patterns in concentration. The advantages and disadvantages of each approach are discussed.

  10. Using river distance and existing hydrography data can improve the geostatistical estimation of fish tissue mercury at unsampled locations.

    Science.gov (United States)

    Money, Eric S; Sackett, Dana K; Aday, D Derek; Serre, Marc L

    2011-09-15

    Mercury in fish tissue is a major human health concern. Consumption of mercury-contaminated fish poses risks to the general population, including potentially serious developmental defects and neurological damage in young children. Therefore, it is important to accurately identify areas that have the potential for high levels of bioaccumulated mercury. However, due to time and resource constraints, it is difficult to adequately assess fish tissue mercury on a basin wide scale. We hypothesized that, given the nature of fish movement along streams, an analytical approach that takes into account distance traveled along these streams would improve the estimation accuracy for fish tissue mercury in unsampled streams. Therefore, we used a river-based Bayesian Maximum Entropy framework (river-BME) for modern space/time geostatistics to estimate fish tissue mercury at unsampled locations in the Cape Fear and Lumber Basins in eastern North Carolina. We also compared the space/time geostatistical estimation using river-BME to the more traditional Euclidean-based BME approach, with and without the inclusion of a secondary variable. Results showed that this river-based approach reduced the estimation error of fish tissue mercury by more than 13% and that the median estimate of fish tissue mercury exceeded the EPA action level of 0.3 ppm in more than 90% of river miles for the study domain.

  11. Consistency of extreme flood estimation approaches

    Science.gov (United States)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimates of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of these methods.

  12. Properties of the histogram location approach and the extent and change of downward nominal wage rigidity in the EU

    Directory of Open Access Journals (Sweden)

    Andreas Behr

    2006-06-01

    The histogram location approach was proposed by Kahn (1997) to estimate the fraction of wage cuts prevented by downward nominal wage rigidity. In this paper, we analyze the validity of the approach by means of a simulation study, which yielded evidence of unbiasedness but also of potential underestimation of rigidity parameter uncertainty, and therefore of potentially anticonservative inference. We apply the histogram location approach to estimate the extent of downward nominal wage rigidity across the EU for 1995-2001. Our database is the User Data Base (UDB) of the European Community Household Panel (ECHP). The results show wide variation across the EU in the fraction of wage cuts prevented by nominal wage rigidity. The lowest rigidity parameters are found for the UK, Spain and Ireland, the largest for Portugal and Italy. Analyzing the change in rigidity between the sub-periods 1995-1997 and 1999-2001 even shows a widening of the differences in nominal wage rigidity. Given the large differences found across the EU, the results imply that the costs of low-inflation policies differ substantially across the EU.

  13. Spatial Working Memory Capacity Predicts Bias in Estimates of Location

    Science.gov (United States)

    Crawford, L. Elizabeth; Landy, David; Salthouse, Timothy A.

    2016-01-01

    Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on…

  14. Estimation of precipitable water at different locations using surface dew-point

    Science.gov (United States)

    Abdel Wahab, M.; Sharif, T. A.

    1995-09-01

    The Reitan (1963) regression equation of the form ln w = a + b·T_d has been examined and tested for estimating precipitable water vapor content from the surface dew-point temperature at different locations. The results of this study indicate that the slope b of the above equation has a constant value of 0.0681, while the intercept a changes rapidly with latitude. The use of the variable-intercept technique can improve the estimated result by about 2%.
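
    Under the reported slope, the estimate is a one-liner; note that the intercept below is purely illustrative, since the paper finds a to vary with latitude:

```python
import math

def precipitable_water(Td_C, a, b=0.0681):
    """Reitan-type estimate ln(w) = a + b * Td: precipitable water w from the
    surface dew point Td (deg C). b = 0.0681 is the constant slope reported in
    the abstract; the intercept a is latitude dependent and must come from
    local regressions (the value used below is illustrative only)."""
    return math.exp(a + b * Td_C)

print(precipitable_water(Td_C=15.0, a=0.61))   # ~5.1 (units as fitted, e.g. cm)
```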

  15. Accuracy of ARGOS locations of Pinnipeds at-sea estimated using Fastloc GPS.

    Directory of Open Access Journals (Sweden)

    Daniel P Costa

    BACKGROUND: ARGOS satellite telemetry is one of the most widely used methods to track the movements of free-ranging marine and terrestrial animals and is fundamental to studies of foraging ecology, migratory behavior and habitat use. ARGOS location estimates do not include complete error estimations, and for many marine organisms, the most commonly acquired locations (Location Class 0, A, B, or Z) are provided with no declared error estimate. METHODOLOGY/PRINCIPAL FINDINGS: We compared the accuracy of ARGOS locations to those obtained using Fastloc GPS from the same electronic tags on five species of pinnipeds: 9 California sea lions (Zalophus californianus), 4 Galapagos sea lions (Zalophus wollebaeki), 6 Cape fur seals (Arctocephalus pusillus pusillus), 3 Australian fur seals (A. p. doriferus) and 5 northern elephant seals (Mirounga angustirostris). These species encompass a range of marine habitats (highly pelagic vs coastal), diving behaviors (mean dive durations 2-21 min) and latitudes (equator to temperate). A total of 7,318 ARGOS positions and 27,046 GPS positions were collected. Of these, 1,105 ARGOS positions were obtained within five minutes of a GPS position and were used for comparison. The 68th percentile ARGOS location errors as measured in this study were LC-3 0.49 km, LC-2 1.01 km, LC-1 1.20 km, LC-0 4.18 km, LC-A 6.19 km, LC-B 10.28 km. CONCLUSIONS/SIGNIFICANCE: The ARGOS errors measured here are greater than those provided by ARGOS, but within the range of other studies. The error was non-normally distributed, with each LC highly right-skewed. Locations of species that make short-duration dives and spend extended periods on the surface (sea lions and fur seals) had less error than species like elephant seals that spend more time underwater and have shorter surface intervals. Supplemental data (S1) are provided, allowing the creation of density distributions that can be used in a variety of filtering algorithms to improve the

  16. Estimating the location and spatial extent of a covert anthrax release.

    Directory of Open Access Journals (Sweden)

    Judith Legrand

    2009-01-01

    Rapidly identifying the features of a covert release of an agent such as anthrax could help to inform the planning of public health mitigation strategies. Previous studies have sought to estimate the time and size of a bioterror attack based on the symptomatic onset dates of early cases. We extend the scope of these methods by proposing a method for characterizing the time, strength, and also the location of an aerosolized pathogen release. A back-calculation method is developed allowing the characterization of the release based on data on the first few observed cases of the subsequent outbreak, meteorological data, population densities, and data on population travel patterns. We evaluate this method on small simulated anthrax outbreaks (about 25-35 cases) and show that it could date and localize a release after a few cases have been observed, although misspecifications of the spore dispersion model, or of the within-host dynamics model, on which the method relies can bias the estimates. Our method could also provide an estimate of the outbreak's geographical extent and, as a consequence, could help to identify populations at risk and, therefore, those requiring prophylactic treatment. Our analysis demonstrates that while estimates based on the first ten or fifteen observed cases were more accurate and less sensitive to model misspecifications than those based on five cases, overall mortality is minimized by targeting prophylactic treatment early on the basis of estimates made using data on the first five cases. The method we propose could provide early estimates of the time, strength, and location of an aerosolized anthrax release and the geographical extent of the subsequent outbreak. In addition, estimates of release features could be used to parameterize more detailed models allowing the simulation of control strategies and intervention logistics.

  17. A Comparative Study Of Source Location And Depth Estimates From ...

    African Journals Online (AJOL)

    ... the analytic signal amplitude (ASA) and the local wave number (LWN) of the total intensity magnetic field. In this study, a synthetic magnetic field due to four buried dipoles was analysed to show that estimates of source location and depth can be improved significantly by reducing the data to the pole prior to the application ...

  18. Dry Port Location Problem: A Hybrid Multi-Criteria Approach

    Directory of Open Access Journals (Sweden)

    BENTALEB Fatimazahra

    2016-03-01

    Choosing a location for a dry port is a problem that is becoming more essential and crucial. This study deals with the problem of locating dry ports. A model combining multi-criteria (MACBETH) and mono-criteria (BARYCENTER) methods to solve the dry port location problem is proposed. In the first phase, a systematic literature review was carried out on the dry port location problem, and a methodological classification was presented for this research. In the second phase, a hybrid multi-criteria approach was developed to determine the best dry port location, taking different criteria into account. A computational experiment and a qualitative analysis from a case study in the Moroccan context are provided. The results show that the optimal location is highly consistent with the geographical region and government policies.

  19. Determining potential locations for biomass valorization using a macro screening approach

    Energy Technology Data Exchange (ETDEWEB)

    Van Dael, M.; Van Passel, S.; Schreurs, E. [Research Group of Environmental Economics, Centre for Environmental Sciences, Hasselt University, Agoralaan Gebouw D, 3590 Diepenbeek (Belgium); Pelkmans, L.; Guisson, R. [VITO, Boeretang 200, 2400 Mol (Belgium); Swinnen, G. [Research Group of Marketing, Hasselt University, Agoralaan Gebouw D, 3590 Diepenbeek (Belgium)

    2012-10-15

    European policy states that by 2020 at least 20% of final energy consumption should come from renewable energy sources. Biomass, as a renewable energy source, cannot be disregarded in attaining this target. In this study a macro screening approach is developed to determine potential locations for biomass valorization in a specified region. The approach consists of five steps: (1) criteria determination, (2) data gathering, (3) weight assignment, (4) final score, (5) spatial representation. The resulting outcome provides a first well-balanced scan of the possibilities for energy production using regional biomass. This way, policy makers and investors can be supported and motivated to study the possibilities of building energy production plants at specific locations in more detail, which can be described as 'micro-screening'. In our case study the approach is applied to determine potentially interesting locations for establishing a biomass project. The region has been limited to the forty-four communities in the province of Limburg (Belgium). The macro screening approach has proven to be very effective, since the number of candidate locations was reduced drastically.
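
    Steps (3) and (4) amount to a weighted sum over normalized criterion scores. A minimal sketch with hypothetical criteria and weights (the study's actual criteria and weighting scheme may differ):

```python
import numpy as np

# step 2: hypothetical criterion scores per candidate community, e.g.
# [biomass availability, road/grid access, distance to residents, heat demand]
scores = np.array([[0.8, 0.6, 0.7, 0.9],
                   [0.4, 0.9, 0.5, 0.6],
                   [0.7, 0.7, 0.9, 0.3]])
weights = np.array([0.4, 0.2, 0.2, 0.2])         # step 3: assumed weights

# normalize each criterion to [0, 1] so differing units cannot dominate
norm = (scores - scores.min(axis=0)) / (np.ptp(scores, axis=0) + 1e-12)
final = norm @ weights                            # step 4: final score per location
print(final, np.argsort(final)[::-1])             # step 5 would map these scores
```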

  20. A Probabilistic, Facility-Centric Approach to Lightning Strike Location

    Science.gov (United States)

    Huddleston, Lisa L.; Roeder, William p.; Merceret, Francis J.

    2012-01-01

    A new probabilistic, facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced-current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
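
    The probability in question can be approximated without closed-form integration by sampling the error ellipse's bivariate Gaussian and counting the fraction of samples inside the facility's radius. This Monte Carlo sketch, with illustrative ellipse parameters, is a stand-in for the numerical integration the paper describes:

```python
import numpy as np

def p_within(center, radius, mean, cov, n=1_000_000, seed=0):
    """Monte Carlo estimate of the probability that the true stroke location,
    distributed as a bivariate Gaussian (from the error ellipse), lies within
    `radius` of an arbitrary facility `center` (coordinates in metres)."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mean, cov, size=n)
    return float(np.mean(np.hypot(pts[:, 0] - center[0],
                                  pts[:, 1] - center[1]) <= radius))

# illustrative ellipse: 500 m and 200 m standard deviations along the axes;
# facility 300 m east of the reported stroke, with a 100 m radius of concern
cov = np.diag([500.0 ** 2, 200.0 ** 2])
print(p_within((300.0, 0.0), 100.0, (0.0, 0.0), cov))
```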

  1. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. In the incremental approach, the city impact (i.e., the concentration change generated by the city's emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
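
    One way to make the two assumptions explicit (using notation introduced here, not the paper's) is to write the impact as Δ = C_u − C_u⁰, where C_u and C_r are the urban and rural background concentrations and superscript 0 denotes the hypothetical zero-city-emission case:

```latex
\Delta \;=\; \underbrace{(C_u - C_r)}_{\text{urban increment}}
\;+\; \underbrace{(C_r - C_r^{0})}_{\text{city influence at the rural site}}
\;+\; \underbrace{(C_r^{0} - C_u^{0})}_{\text{background difference}}
```

    The increment equals the impact only when the last two terms vanish, which is precisely the pair of assumptions stated above.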

  2. A personalized method for estimating centre of mass location of the whole body based on differentiation of tissues of a multi-divided trunk.

    Science.gov (United States)

    Erdmann, Włodzimierz S; Kowalczyk, Radosław

    2015-01-02

    There are several methods for obtaining the location of the centre of mass of the whole body. They are based on cadaver data, on the volume and density of body parts, or on radiation and imaging techniques. Some researchers treated the trunk as one part only, while others divided it into a few parts. In addition, some researchers divided the trunk with planes perpendicular to the trunk's longitudinal axis, although the best approach is to treat the trunk parts as anatomical and functional elements. This procedure was used by Dempster and Erdmann. The latter elaborated a personalized estimation of the inertial quantities of the trunk, while Clauser et al. gave a similar approach for the extremities. The aim of the investigation was to merge both indirect methods in order to obtain an accurate location of the centre of mass of the whole body. As a reference, a direct method based on the reaction-board procedure, i.e., with the body lying on a board supported on a scale, was used. The location of the centre of mass obtained using Clauser's and Erdmann's methods appeared almost identical to that obtained with the direct method. This approach can be used in several situations, especially for people of different morphology, for a bent trunk, and for asymmetrical movements. Copyright © 2014 Elsevier Ltd. All rights reserved.
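
    The direct reference measurement follows from a simple moment balance: with the body on a rigid board resting on a knife edge at one end and a scale at the other, the change in scale reading locates the centre of mass. A sketch with illustrative numbers:

```python
def com_from_knife_edge(scale_N, scale_zero_N, body_weight_N, board_length_m):
    """Reaction-board (direct) method: taking moments about the knife edge,
    the centre-of-mass distance from it is (scale change) * L / (body weight)."""
    return (scale_N - scale_zero_N) * board_length_m / body_weight_N

# illustrative numbers: 2.0 m board, scale rises from 180 N to 600 N
# when a 740 N subject lies down
print(com_from_knife_edge(600.0, 180.0, 740.0, 2.0))   # ~1.14 m from the edge
```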

  3. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf

    2013-10-10

    Background: Although transcription in mammalian genomes can initiate from various genomic positions (e.g., 3′UTR, coding exons, etc.), most locations on genomes are not prone to transcription initiation. It is of practical and theoretical interest to be able to estimate such collections of non-TSS locations (NTLs). The identification of large portions of NTLs can contribute to better focusing the search for TSS locations and thus contribute to promoter and gene finding. It can help in the assessment of the 5′ completeness of expressed sequences, contribute to more successful experimental designs, and lead to more accurate gene annotation. Methodology: Using comprehensive collections of Cap Analysis of Gene Expression (CAGE) and other transcript data from the mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with high accuracy and in a strand-specific manner, locations of mammalian genomes that are highly unlikely to harbor transcription start sites (TSSs). The properties of the immediate genomic neighborhoods of 98,682 accurately determined mouse and 113,814 human TSSs are used to determine features that distinguish genomic transcription initiation locations from those that are not likely to initiate transcription. In our algorithm we utilize various constraining properties of features identified in the upstream and downstream regions around TSSs, as well as statistical analyses of these surrounding regions. Conclusions: Our analysis of human chromosomes 4, 21 and 22 estimates ~46%, ~41% and ~27% of these chromosomes, respectively, as being NTLs. This suggests that on average more than 40% of the human genome can be expected to be highly unlikely to initiate transcription. Our method is the first to utilize high-sensitivity TSS prediction to identify, with high accuracy, large portions of mammalian genomes as NTLs. The server with our algorithm implemented is

  4. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, the setting up of initial and boundary conditions, and the formation of difference equations when the forward solution is obtained numerically. Gaussian-process-based approaches such as Gaussian Process Ordinary Differential Equations (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can be straightforwardly extended to partial differential equations; however, this had never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  5. Seasonal rationalization of river water quality sampling locations: a comparative study of the modified Sanders and multivariate statistical approaches.

    Science.gov (United States)

    Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar

    2016-02-01

    The design of surface water quality sampling locations is a crucial decision-making process for the rationalization of a monitoring network. The quantity, quality, and types of available data (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available data. In this paper, an attempt has been made to evaluate the performance of these techniques by accounting for the effect of seasonal variation in a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information may be made available through the application of geospatial techniques. A case study of the Kali River, western Uttar Pradesh, India, was selected for the analysis. Monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at the different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were used as inputs for FA/PCA. The designed optimum numbers of sampling locations for the monsoon and non-monsoon seasons are eight and seven with the modified Sanders approach, and eleven and nine with FA/PCA, respectively. Little variation in the number and locations of the designed sampling sites was obtained with the two techniques, which shows the stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to the river basin characteristics and land use of the study area. Both methods are equally efficient; however, the modified Sanders

  6. Establishing an air pollution monitoring network for intra-urban population exposure assessment : a location-allocation approach

    Energy Technology Data Exchange (ETDEWEB)

    Kanaroglou, P.S. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology; Jerrett, M.; Beckerman, B.; Arain, M.A. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology]|[McMaster Univ., Hamilton, ON (Canada). McMaster Inst. of Environment and Health; Morrison, J. [Carleton Univ., Ottawa, ON (Canada). School of Computer Science; Gilbert, N.L. [Health Canada, Ottawa, ON (Canada). Air Health Effects Div; Brook, J.R. [Meteorological Service of Canada, Toronto, ON (Canada)

    2004-10-01

    A study was conducted to assess the relation between traffic-generated air pollution and health reactions ranging from childhood asthma to mortality from lung cancer. In particular, it developed a formal method of optimally locating a dense network of air pollution monitoring stations in order to derive an exposure assessment model based on the data obtained from the monitoring stations and related land use, population and biophysical information. The method for determining the locations of 100 nitrogen dioxide monitors in Toronto, Ontario focused on land use, transportation infrastructure and the distribution of at-risk populations. The exposure assessment produced reasonable estimates at the intra-urban scale. This method for locating air pollution monitors effectively maximizes sampling coverage in relation to important socio-demographic characteristics and likely pollution variability. The location-allocation approach integrates many variables into the demand surface to reconfigure a monitoring network and is especially useful for measuring traffic pollutants with fine-scale spatial variability. The method also shows great promise for improving the assessment of exposure to ambient air pollution in epidemiologic studies. 19 refs., 3 tabs., 4 figs.

  7. A slicing-based approach for locating type errors

    NARCIS (Netherlands)

    T.B. Dinesh; F. Tip (Frank)

    1998-01-01

    The effectiveness of a type checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice P_e of the program P being type checked. We

  9. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    Science.gov (United States)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects ('pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains an open challenge to provide a low-cost, easy-to-deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single-technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located so that the object to be tracked is visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation, and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.

  10. An approach to evaluating alternatives for wind power plant locations

    Directory of Open Access Journals (Sweden)

    Rehman, Ateekh Ur

    2016-12-01

    Multi-criteria decision approaches are preferred for achieving multi-dimensional sustainable renewable energy goals. A critical issue faced by the wind power industry is the selection of a location to tap prospective energy, which needs to be evaluated on multiple measures. In this paper, the aim is to assess and rank alternative wind power plant locations in Saudi Arabia. The approach presented here takes multiple criteria into consideration, such as wind speed, wind availability, site advantages, terrain details, risk and uncertainty, technology used, third-party support, projected demand, types of customers, and government policies. A comparative analysis of feasible alternatives that satisfy all multi-criteria objectives is carried out. The results obtained are subjected to sensitivity analysis. Concepts such as ‘threshold values’ and ‘attribute weights’ make the approach more sensitive.

  11. Research on an estimation method of DOA for wireless location based on TD-SCDMA

    Science.gov (United States)

    Zhang, Yi; Luo, Yuan; Cheng, Shi-xin

    2004-03-01

    To meet the urgent need for personal communication and high-speed data services, the standardization and product development for International Mobile Telecommunication-2000 (IMT-2000) have become a worldwide focus, and wireless location of mobile terminals has become an important research topic. Unlike GPS, which relies on 24 artificial satellites, it is based on the base stations of the wireless cellular network, and its research and development are correlated with IMT-2000. As TD-SCDMA, the third-generation (3G) mobile telecommunication standard proposed by China and whose intellectual property rights are held by China, has been adopted by ITU-T for the first time, research on wireless location based on TD-SCDMA has theoretical significance, practical value and promising market prospects. First, the basic principles and methods for wireless location, i.e. Direction of Arrival (DOA), Time of Arrival (TOA) or Time Difference of Arrival (TDOA), hybrid location (TOA/DOA, TDOA/DOA), etc., are introduced in the paper; the study of DOA is therefore very important in wireless location. Next, the main estimation methods of DOA for wireless location, i.e. ESPRIT, MUSIC, WSF, Min-norm, etc., are investigated. In the end, the performance of DOA estimation for wireless location based on the mobile telecommunication network is analyzed through theory and simulation experiments, including comparisons between the algorithms and the Cramér-Rao Bound. The research results are beneficial not only to the choice of algorithms for wireless location, but also to the realization of new wireless location services.

  12. Low complexity joint estimation of reflection coefficient, spatial location, and Doppler shift for MIMO-radar by exploiting 2D-FFT

    KAUST Repository

    Jardak, Seifallah

    2014-10-01

    In multiple-input multiple-output (MIMO) radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, maximum-likelihood (ML) estimation yields the best performance. For this problem, ML estimation requires the joint estimation of spatial location and Doppler shift, which is a two-dimensional search problem; the computational complexity of ML estimation is therefore prohibitively high. In this work, to estimate the parameters of a target, a reduced-complexity, optimum-performance algorithm is proposed, which allows a two-dimensional fast Fourier transform to jointly estimate the spatial location and Doppler shift. To assess the performance of the proposed estimators, the Cramér-Rao lower bound (CRLB) is derived. Simulation results show that the mean square estimation error of the proposed estimators achieves the CRLB. © 2014 IEEE.
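
    A minimal sketch of the 2D-FFT idea: a point target produces a two-dimensional complex exponential across array elements and pulses, so a single 2D FFT recovers the spatial frequency, Doppler frequency, and reflection coefficient from its peak. All parameters below are illustrative, not the paper's:

```python
import numpy as np

# ULA with N elements, M pulses; target with spatial frequency fs
# (from its direction) and Doppler frequency fd (cycles/sample).
N, M = 32, 64
fs_true, fd_true = 0.21, -0.13
alpha = 0.8 * np.exp(1j * 0.3)          # reflection coefficient

n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
rng = np.random.default_rng(1)
Y = alpha * np.exp(2j * np.pi * (fs_true * n + fd_true * m))
Y += 0.1 * (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)))

# Zero-padded 2D FFT -> joint spatial/Doppler spectrum; its peak gives
# the joint estimate, and the normalized peak value gives alpha.
F = np.fft.fft2(Y, s=(8 * N, 8 * M))
i, j = np.unravel_index(np.argmax(np.abs(F)), F.shape)
fs_hat = np.fft.fftfreq(8 * N)[i]
fd_hat = np.fft.fftfreq(8 * M)[j]
alpha_hat = F[i, j] / (N * M)
print(fs_hat, fd_hat, alpha_hat)
```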

  13. Precipitation areal-reduction factor estimation using an annual-maxima centered approach

    Science.gov (United States)

    Asquith, W.H.; Famiglietti, J.S.

    2000-01-01

    The adjustment of the precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARF range from 0 to 1, vary according to storm characteristics such as recurrence interval, and are a function of watershed characteristics such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARF and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maximum, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARF that decrease more rapidly than those from TP-29. Furthermore, the ARF from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.
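
    A toy sketch of the annual-maxima centered computation — find each gauge's annual 1-day maximum, average the concurrent depths over the surrounding gauges, and take the ratio. Gamma-distributed synthetic rainfall stands in for real gauge records:

```python
import numpy as np

# Synthetic daily rainfall: years x days x gauges.  A real study would
# use dense daily gauge records for the region of interest.
rng = np.random.default_rng(2)
n_years, n_days, n_gauges = 30, 365, 25
P = rng.gamma(shape=0.5, scale=8.0, size=(n_years, n_days, n_gauges))

arfs = []
for y in range(n_years):
    for g in range(n_gauges):                  # center on each gauge in turn
        d = int(np.argmax(P[y, :, g]))         # day of its annual 1-day maximum
        point_depth = P[y, d, g]
        areal_depth = P[y, d, :].mean()        # concurrent mean over the area
        arfs.append(areal_depth / point_depth)

print("estimated 1-day ARF:", float(np.mean(arfs)))
```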

  14. Factors influencing time-location patterns and their impact on estimates of exposure: the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air).

    Science.gov (United States)

    Spalt, Elizabeth W; Curl, Cynthia L; Allen, Ryan W; Cohen, Martin; Williams, Kayleen; Hirsch, Jana A; Adar, Sara D; Kaufman, Joel D

    2016-06-01

    We assessed time-location patterns and the role of individual- and residential-level characteristics on these patterns within the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) cohort and also investigated the impact of individual-level time-location patterns on individual-level estimates of exposure to outdoor air pollution. Reported time-location patterns varied significantly by demographic factors such as age, gender, race/ethnicity, income, education, and employment status. On average, Chinese participants reported spending significantly more time indoors and less time outdoors and in transit than White, Black, or Hispanic participants. Using a tiered linear regression approach, we predicted time indoors at home and total time indoors. Our model, developed using forward-selection procedures, explained 43% of the variability in time spent indoors at home, and incorporated demographic, health, lifestyle, and built environment factors. Time-weighted air pollution predictions calculated using recommended time indoors from USEPA overestimated exposures as compared with predictions made with MESA Air participant-specific information. These data fill an important gap in the literature by describing the impact of individual and residential characteristics on time-location patterns and by demonstrating the impact of population-specific data on exposure estimates.

  15. Methodological Approaches to Locating Outlets of the Franchise Retail Network

    OpenAIRE

    Grygorenko Tetyana M.

    2016-01-01

    Methodical approaches to selecting strategic areas of managing the future location of franchise retail network outlets are presented. The main stages in the assessment of strategic areas of managing the future location of franchise retail network outlets have been determined and the evaluation criteria have been suggested. Since such selection requires consideration of a variety of indicators and directions of the assessment, the author proposes a scale of evaluation, which ...

  16. Estimation of the location parameter of distributions with known coefficient of variation by record values

    Directory of Open Access Journals (Sweden)

    N. K. Sajeevkumar

    2014-09-01

    In this article, we derive the Best Linear Unbiased Estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation by record values. Efficiency comparisons are also made between the proposed estimator and some of the usual estimators. Finally, we present real-life data to illustrate the utility of the results developed in this article.

  17. Demand Externalities from Co-Location

    OpenAIRE

    Boudhayan Sen; Jiwoong Shin; K. Sudhir

    2012-01-01

    We illustrate an approach to measuring demand externalities from co-location by estimating household-level changes in grocery spending at a supermarket among households that also buy gas at a co-located gas station, relative to those who do not. Controlling for observable and unobserved selection in the use of the gas station, we find significant demand externalities; on average, a household that buys gas has a 7.7% to 9.3% increase in spending on groceries. Accounting for differences in gross margins...

  18. Inferring the Origin Locations of Tweets with Quantitative Confidence.

    Science.gov (United States)

    Priedhorsky, Reid; Culotta, Aron; Del Valle, Sara Y

    2014-01-01

    Social Internet content plays an increasingly critical role in many domains, including public health, disaster management, and politics. However, its utility is limited by missing geographic information; for example, fewer than 1.6% of Twitter messages ("tweets") contain a geotag. We propose a scalable, content-based approach to estimate the location of tweets using a novel yet simple variant of Gaussian mixture models. Further, because real-world applications depend on quantified uncertainty for such estimates, we propose novel metrics of accuracy, precision, and calibration, and we evaluate our approach accordingly. Experiments on 13 million global, comprehensively multi-lingual tweets show that our approach yields reliable, well-calibrated results competitive with previous computationally intensive methods. We also show that a relatively small number of training data are required for good estimates (roughly 30,000 tweets) and models are quite time-invariant (effective on tweets many weeks newer than the training set). Finally, we show that toponyms and languages with small geographic footprint provide the most useful location signals.
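
    A minimal sketch of per-token Gaussian location estimation in the spirit of the abstract (not the authors' model): each token's training coordinates give a mean and spread, and a tweet's origin is the precision-weighted combination. The training tweets below are hypothetical:

```python
import numpy as np
from collections import defaultdict

# Tiny hypothetical training set of geotagged tweets: (text, (lat, lon)).
train = [("cable car fog", (37.77, -122.42)),
         ("fog golden gate", (37.81, -122.48)),
         ("bagel subway", (40.71, -74.01)),
         ("subway yankees", (40.83, -73.93))]

coords = defaultdict(list)
for text, (lat, lon) in train:
    for tok in text.split():
        coords[tok].append((lat, lon))

def estimate(text):
    means, weights = [], []
    for tok in text.split():
        pts = np.array(coords.get(tok, []))
        if len(pts) == 0:
            continue                           # unseen token: no signal
        var = pts.var(axis=0).sum() + 1e-3     # regularized token spread
        means.append(pts.mean(axis=0))
        weights.append(1.0 / var)              # tight tokens count more
    if not means:
        return None
    w = np.array(weights)[:, None]
    return (np.array(means) * w).sum(axis=0) / w.sum()

print(estimate("fog by the golden gate"))     # ~ San Francisco
```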

  19. A systematic approach for the location of hand sanitizer dispensers in hospitals.

    Science.gov (United States)

    Cure, Laila; Van Enk, Richard; Tiong, Ewing

    2014-09-01

    Compliance with hand hygiene practices is directly affected by the accessibility and availability of cleaning agents. Nevertheless, the decision of where to locate these dispensers is often not explicitly or fully addressed in the literature. In this paper, we study the problem of selecting the locations to install alcohol-based hand sanitizer dispensers throughout a hospital unit as an indirect approach to maximize compliance with hand hygiene practices. We investigate the relevant criteria in selecting dispenser locations that promote hand hygiene compliance, propose metrics for the evaluation of various location configurations, and formulate a dispenser location optimization model that systematically incorporates such criteria. A complete methodology to collect data and obtain the model parameters is described. We illustrate the proposed approach using data from a general care unit at a collaborating hospital. A cost analysis was performed to study the trade-offs between usability and cost. The proposed methodology can help in evaluating the current location configuration, determining the need for change, and establishing the best possible configuration. It can be adapted to incorporate alternative metrics, tailored to different institutions and updated as needed with new internal policies or safety regulation.

  20. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with

  1. Intra-Urban Movement Flow Estimation Using Location Based Social Networking Data

    Science.gov (United States)

    Kheiri, A.; Karimipour, F.; Forghani, M.

    2015-12-01

    In recent years, there has been a rapid growth of location-based social networking services, such as Foursquare and Facebook, which have attracted an increasing number of users and greatly enriched their urban experience. Location-based social network data, as a new travel demand data source, seems to be an alternative or complement to survey data in the study of mobility behavior and activity analysis because of its relatively high accessibility and low cost. In this paper, three OD estimation models have been utilized in order to investigate their relative performance when using Location-Based Social Networking (LBSN) data. For this, the Foursquare LBSN data was used to analyze the intra-urban movement behavioral patterns for the study area, Manhattan, the most densely populated of the five boroughs of New York City. The outputs of the models are evaluated using real observations based on different criteria, including distance distribution and destination travel constraints. The results demonstrate the promising potential of using LBSN data for urban travel demand analysis and monitoring.

  2. INTRA-URBAN MOVEMENT FLOW ESTIMATION USING LOCATION BASED SOCIAL NETWORKING DATA

    Directory of Open Access Journals (Sweden)

    A. Kheiri

    2015-12-01

    In recent years, there has been a rapid growth of location-based social networking services, such as Foursquare and Facebook, which have attracted an increasing number of users and greatly enriched their urban experience. Location-based social network data, as a new travel demand data source, seems to be an alternative or complement to survey data in the study of mobility behavior and activity analysis because of its relatively high accessibility and low cost. In this paper, three OD estimation models have been utilized in order to investigate their relative performance when using Location-Based Social Networking (LBSN) data. For this, the Foursquare LBSN data was used to analyze the intra-urban movement behavioral patterns for the study area, Manhattan, the most densely populated of the five boroughs of New York City. The outputs of the models are evaluated using real observations based on different criteria, including distance distribution and destination travel constraints. The results demonstrate the promising potential of using LBSN data for urban travel demand analysis and monitoring.

  3. Lightning Attachment Estimation to Wind Turbines by Utilizing Lightning Location Systems

    DEFF Research Database (Denmark)

    Vogel, Stephan; Holbøll, Joachim; Lopez, Javier

    2016-01-01

    The goal of a lightning exposure assessment is to identify the number, type and characteristics of lightning discharges to a certain structure. There are various Lightning Location System (LLS) technologies available, each of them characterized by individual performance characteristics. In this work, these technologies are reviewed and evaluated in order to obtain an estimation of which technologies are eligible to perform a lightning assessment for wind turbines. Three different wind power plant locations are analyzed, and the impact of varying data quality is evaluated with regard to the ability to detect upward lightning. The results indicate that ground-based mid-range low-frequency (LF) LLS systems are most qualified since they combine a wide... This work provides a variety of background information which is relevant to the exposure assessment of wind turbines and includes practical...

  4. Estimation of microwave source location in precipitating electron fluxes according to Viking satellite data

    International Nuclear Information System (INIS)

    Khrushchinskij, A.A.; Ostapenko, A.A.; Gustafsson, G.; Eliasson, L.; Sandal, I.

    1989-01-01

    According to the Viking satellite data on electron fluxes in the 0.1-300 keV energy range, the microburst source location is estimated. On the basis of experimental delays in detected peaks in different energy channels and theoretical calculations of these delays within the dipole field model (L ∼ 4-5.5), it is shown that the most probable source location is the equatorial region with the centre shifted 5-10° towards the ionosphere

  5. A Bootstrap Approach to Computing Uncertainty in Inferred Oil and Gas Reserve Estimates

    International Nuclear Information System (INIS)

    Attanasi, Emil D.; Coburn, Timothy C.

    2004-01-01

    This study develops confidence intervals for estimates of inferred oil and gas reserves based on bootstrap procedures. Inferred reserves are expected additions to proved reserves in previously discovered conventional oil and gas fields. Estimates of inferred reserves accounted for 65% of the total oil and 34% of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in US onshore and State offshore areas. When the same computational methods used in the 1995 Assessment are applied to more recent data, the 80-year (from 1997 through 2076) inferred reserve estimates for pre-1997 discoveries located in the lower 48 onshore and state offshore areas amounted to a total of 39.7 billion barrels of oil (BBO) and 293 trillion cubic feet (TCF) of gas. The 90% confidence interval about the oil estimate derived from the bootstrap approach is 22.4 BBO to 69.5 BBO. The comparable 90% confidence interval for the inferred gas reserve estimate is 217 TCF to 413 TCF. The 90% confidence interval describes the uncertainty that should be attached to the estimates. It also provides a basis for developing scenarios to explore the implications for energy policy analysis
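
    A minimal percentile-bootstrap sketch of how such confidence intervals can be obtained: resample the field-level estimates with replacement and read the interval off the distribution of resampled totals. The data vector is synthetic:

```python
import numpy as np

# Synthetic per-field quantities standing in for inferred-reserve
# estimates; the aggregate of interest is their total.
rng = np.random.default_rng(3)
fields = rng.lognormal(mean=0.0, sigma=1.2, size=400)

B = 5000
totals = np.empty(B)
for b in range(B):
    resample = rng.choice(fields, size=fields.size, replace=True)
    totals[b] = resample.sum()

lo, hi = np.percentile(totals, [5, 95])        # 90% confidence interval
print(f"point estimate {fields.sum():.1f}, 90% CI [{lo:.1f}, {hi:.1f}]")
```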

  6. Artificial neural network approach to spatial estimation of wind velocity data

    International Nuclear Information System (INIS)

    Oztopal, Ahmet

    2006-01-01

    In any regional wind energy assessment, equal wind velocity or energy lines provide a common basis for meaningful interpretations that furnish essential information for proper design purposes. In order to achieve regional variation descriptions, there are methods of optimum interpolation with classical weighting functions or variogram methods in Kriging methodology. Generally, the weighting functions are logically and geometrically deduced in a deterministic manner, and hence they are imaginary first approximations for regional variability assessments, such as wind velocity. Geometrical weighting functions are necessary for the regional estimation of the variable at a location with no measurement, referred to as the pivot station, from the measurements of a set of surrounding stations. In this paper, the weighting factors of surrounding stations necessary for the prediction at a pivot station are obtained by an artificial neural network (ANN) technique. The wind speed prediction results are compared with measured values at a pivot station. Daily wind velocity measurements in the Marmara region from 1993 to 1997 are considered for application of the ANN methodology. The model is more appropriate for winter-period daily wind velocities, which are significant for energy generation in the study area. Trigonometric point cumulative semivariogram (TPCSV) approach results are compared with the ANN estimations for the same set of data by considering the correlation coefficient (R). Under- and over-estimation problems in objective analysis can be avoided by the ANN approach
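
    A minimal sketch of the idea with scikit-learn's MLPRegressor (a stand-in, not the study's network): learn the pivot station's wind speed from simultaneous measurements at surrounding stations, so the spatial weights are learned rather than geometrically assumed. The data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic daily wind speeds at 6 surrounding stations; the pivot
# station is a noisy weighted combination of them.
rng = np.random.default_rng(4)
n_days, n_stations = 1500, 6
X = np.abs(rng.normal(5.0, 2.0, size=(n_days, n_stations)))
true_w = np.array([0.35, 0.25, 0.15, 0.10, 0.10, 0.05])
y = X @ true_w + rng.normal(0, 0.3, size=n_days)

# Train on the first 1200 days, evaluate R on the held-out days.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1200], y[:1200])
r = np.corrcoef(model.predict(X[1200:]), y[1200:])[0, 1]
print(f"correlation coefficient R on held-out days: {r:.3f}")
```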

  7. Estimation of break location and size for loss of coolant accidents using neural networks

    International Nuclear Information System (INIS)

    Na, Man Gyun; Shin, Sun Ho; Jung, Dong Won; Kim, Soong Pyung; Jeong, Ji Hwan; Lee, Byung Chul

    2004-01-01

    In this work, a probabilistic neural network (PNN), which has been applied successfully to classification problems, is used to identify the break locations of loss-of-coolant accidents (LOCA) such as hot-leg, cold-leg and steam generator tubes. Also, a fuzzy neural network (FNN) is designed to estimate the break size. The inputs to the PNN and FNN are time-integrated values obtained by integrating measurement signals during a short time interval after reactor scram. An automatic structure constructor for the fuzzy neural network automatically selects the input variables from the time-integrated values of many measured signals, and optimizes the number of rules and its related parameters. It is verified that the proposed algorithm identifies the break locations of LOCAs very well and also estimates their break size accurately
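
    A minimal probabilistic neural network (Parzen-window classifier) sketch of the classification step; class centers and features are synthetic stand-ins for the time-integrated signals:

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    # Gaussian kernel density per class; pick the class with the
    # highest average kernel response (the PNN decision rule).
    densities = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        densities[c] = np.exp(-d2 / (2 * sigma**2)).mean()
    return max(densities, key=densities.get)

# Synthetic 3D features clustered by break location class:
# 0 = hot leg, 1 = cold leg, 2 = steam generator tube.
rng = np.random.default_rng(5)
centers = {0: [1, 0, 0], 1: [0, 1, 0], 2: [0, 0, 1]}
X = np.vstack([rng.normal(c, 0.2, size=(50, 3)) for c in centers.values()])
y = np.repeat([0, 1, 2], 50)
print(pnn_classify(np.array([0.1, 0.9, 0.0]), X, y))   # -> 1 (cold leg)
```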

  8. Inverse approach for determination of the coils location during magnetic stimulation

    International Nuclear Information System (INIS)

    Marinova, Iliana; Kovachev, Ludmil

    2002-01-01

    An inverse approach using neural networks is extended and applied to the determination of coil location during magnetic stimulation. The major constructions of magnetic stimulation coils have been investigated. The electric and magnetic fields are modelled using the finite element method and the integral equation method. The effects of changing the construction of the coils and the frequency on the effect of magnetic stimulation are analysed. The results show that coils for magnetic stimulation are characterized by different focality and magnetic field concentration. The proposed inverse approach using neural networks is very useful for determining the spatial position of the stimulation coils, especially when the location of the coil system is required to change dynamically. (Author)

  9. Location Estimation for an Autonomously Guided Vehicle using an Augmented Kalman Filter to Autocalibrate the Odometry

    DEFF Research Database (Denmark)

    Larsen, Thomas Dall; Bak, Martin; Andersen, Nils Axel

    1998-01-01

    A Kalman filter using encoder readings as inputs and vision measurements as observations is designed as a location estimator for an autonomously guided vehicle (AGV). To reduce the effect of modelling errors an augmented filter that estimates the true system parameters is designed. The traditional...
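
    A one-dimensional sketch of the augmentation mechanism (a real AGV filter also carries heading): the state holds the position plus an unknown encoder scale factor, so vision fixes of the position gradually calibrate the odometry as well. All noise levels are assumed:

```python
import numpy as np

rng = np.random.default_rng(6)
k_true = 1.05                      # true encoder scale (5% radius error)
s = np.array([0.0, 1.0])           # state estimate [position x, scale k]
P = np.diag([0.1, 0.05])           # state covariance
Q = np.diag([1e-4, 1e-6])          # process noise
R = 0.02                           # vision measurement variance

x_true = 0.0
for step in range(200):
    u = 0.1                        # encoder-reported displacement
    x_true += k_true * u
    # Predict: x <- x + k*u, with Jacobian F = [[1, u], [0, 1]].
    F = np.array([[1.0, u], [0.0, 1.0]])
    s = np.array([s[0] + s[1] * u, s[1]])
    P = F @ P @ F.T + Q
    # Update with a noisy vision fix of the position.
    z = x_true + rng.normal(0, np.sqrt(R))
    H = np.array([[1.0, 0.0]])
    K = P @ H.T / (H @ P @ H.T + R)
    s = s + (K * (z - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated scale factor k: {s[1]:.3f} (true {k_true})")
```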

  10. Photogrammetric Resection Approach Using Straight Line Features for Estimation of Cartosat-1 Platform Parameters

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2008-08-01

    Classical calibration, or space resection, is the fundamental task in photogrammetry. The lack of sufficient knowledge of interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. There are several other available methods using lines, which consider the determination of exterior orientation parameters, with no mention of the simultaneous determination of inner orientation parameters. Normal space resection methods solve the problem using control points, whose coordinates are known both in image and object reference systems. The non-linearity of the model and the problems in point location in digital images are the main drawbacks of the classical approaches. The line-based approach to overcome these problems includes the usage of lines, which increases the number of observations that can be provided and significantly improves the overall system redundancy. This paper addresses a mathematical model relating both image and object reference systems for solving the space resection problem, which is generally used for updating the exterior orientation parameters. In order to solve for the dynamic camera calibration parameters, a sequential estimator (Kalman filtering) is applied in an iterative process to the image. For the dynamic case, e.g. an image sequence of moving objects, a state prediction and a covariance matrix for the next instant are obtained using the available estimates and the system model. Filtered state estimates can be computed from these predicted estimates using the Kalman filtering approach and a basic physical sensor model for each instant of time. The proposed approach is tested with three real data sets, and the results suggest that highly accurate space resection parameters can be obtained with or without using control points, with progressive reduction in processing time.

  11. On locating steganographic payload using residuals

    Science.gov (United States)

    Quach, Tu-Thach

    2011-02-01

    Locating steganographic payload using Weighted Stego-image (WS) residuals has been proven successful provided a large number of stego images are available. In this paper, we revisit this topic with two goals. First, we argue that it is a promising approach to locate payload by showing that, in the ideal scenario where the cover images are available, the expected number of stego images needed to perfectly locate all load-carrying pixels is the logarithm of the payload size. Second, we generalize cover estimation to a maximum likelihood decoding problem and demonstrate that a second-order statistical cover model can be used to compute residuals to locate payload embedded by both LSB replacement and LSB matching steganography.
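
    A toy sketch of residual-based payload location (a crude neighbour-mean predictor stands in for the second-order cover model): averaging residuals over many stego images that share the payload placement makes the load-carrying pixels stand out:

```python
import numpy as np

rng = np.random.default_rng(7)
H, W, n_images, payload = 64, 64, 400, 500
load = rng.choice(H * W, size=payload, replace=False)   # shared payload pixels

acc = np.zeros(H * W)
for _ in range(n_images):
    cover = rng.normal(128, 2, size=(H, W)).round()
    stego = cover.copy().ravel()
    stego[load] += rng.choice([-1, 1], size=payload)    # LSB-matching-style +-1
    img = stego.reshape(H, W)
    # Local predictor: mean of 4-neighbours as a crude cover estimate.
    pred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    acc += np.abs(img - pred).ravel()

found = np.argsort(acc)[-payload:]                      # highest mean residual
hits = np.intersect1d(found, load).size
print(f"located {hits}/{payload} load-carrying pixels")
```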

  12. Shooter position estimation with muzzle blast and shockwave measurements from separate locations

    Science.gov (United States)

    Grasing, David

    2016-05-01

    There are two acoustical events associated with small arms fire: the muzzle blast (created by the bullet being expelled from the barrel of the weapon) and the shockwave (created by a bullet that exceeds the speed of sound). Assuming the ballistics of a round are known, the times and directions of arrival of the acoustic events furnish sufficient information to determine the origin of the shot. Existing methods tacitly assume that a single sensor makes the measurements of the times and directions of arrival. If the sensor is located past the point where the bullet goes transonic, or if the sensor is far off the axis of the shot line, a single-sensor localization becomes highly inaccurate due to the ill-conditioning of the localization problem. In this paper, a more general approach is taken which allows for localization from measurements made at separate locations. There are considerable advantages to this approach, the most noteworthy of which is the improvement in localization accuracy due to the improvement in the conditioning of the problem. Additional benefits include the potential to localize in cases where a single sensor has insufficient information, the furnishing of high-quality initialization to data fusion algorithms, and the potential to identify the round from a set of possible rounds.
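
    A minimal sketch of one multi-sensor ingredient — intersecting muzzle-blast bearings measured at two separate sensor positions by least squares; a full system would also fuse shockwave timing and the round's ballistic model:

```python
import numpy as np

def intersect_bearings(p1, theta1, p2, theta2):
    # Each ray: p_i + t_i * u_i.  Solve [u1 -u2] [t1 t2]^T = p2 - p1.
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([u1, -u2])
    t, *_ = np.linalg.lstsq(A, np.asarray(p2) - np.asarray(p1), rcond=None)
    return np.asarray(p1) + t[0] * u1

shooter = np.array([120.0, 80.0])
s1, s2 = np.array([0.0, 0.0]), np.array([60.0, -40.0])
# Measured bearings (noise-free here): arctan2(dy, dx) from each sensor.
b1 = np.arctan2(*(shooter - s1)[::-1])
b2 = np.arctan2(*(shooter - s2)[::-1])
print(intersect_bearings(s1, b1, s2, b2))   # ~ [120, 80]
```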

  13. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines.

    Science.gov (United States)

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but location estimation of series (open-conductor) faults has not been dealt with so far. Existing numerical relays only detect an open-conductor (series) fault and indicate the faulty phase(s); they are unable to locate the series fault, so the repair crew needs to patrol the complete line to find its location. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and the fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals at both ends of the line have been used as input to the proposed fuzzy-based fault location scheme. The percentage error in location is within 1% for series faults and within 5% for shunt faults for all tested fault cases. Validation of the percentage error in location estimation is done using the Chi-square test at both the 1% and 5% levels of significance.

  14. MagicFinger: 3D Magnetic Fingerprints for Indoor Location

    Directory of Open Access Journals (Sweden)

    Daniel Carrillo

    2015-07-01

    Given the indispensable role of mobile phones in everyday life, phone-centric sensing systems are ideal candidates for ubiquitous observation purposes. This paper presents a novel approach for mobile phone-centric observation applied to indoor location. The approach involves a location fingerprinting methodology that takes advantage of the presence of magnetic field anomalies inside buildings. Unlike existing work on the subject, which uses the intensity of the magnetic field for fingerprinting, our approach uses all three components of the measured magnetic field vectors to improve accuracy. By using adequate soft computing techniques, it is possible to adequately balance the constraints of common solutions. The resulting system does not rely on any infrastructure devices and therefore is easy to manage and deploy. The proposed system consists of two phases: the offline phase and the online phase. In the offline phase, magnetic field measurements are taken throughout the building, and 3D maps are generated. Then, during the online phase, the user's location is estimated through the best estimator for each zone of the building. Experimental evaluations carried out in two different buildings confirm the satisfactory performance of indoor location based on magnetic field vectors. These evaluations yielded an error of (11.34 m, 4.78 m) in the (x, y) components of the estimated positions in the first building where the experiments were carried out, with a standard deviation of (3.41 m, 4.68 m); and in the second building, an error of (4 m, 2.98 m) with a deviation of (2.64 m, 2.33 m).
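
    A minimal sketch of the online matching step, assuming a k-nearest-neighbour estimator over full three-component fingerprints (the paper's zone-specific estimators may differ); the magnetic map is synthetic:

```python
import numpy as np

# Offline phase: a grid of positions, each with a stored (Bx, By, Bz)
# fingerprint.  Real maps come from a building survey.
rng = np.random.default_rng(8)
grid = np.array([(x, y) for x in range(20) for y in range(10)], dtype=float)
fingerprints = rng.normal([20.0, -5.0, 40.0], 6.0, size=(len(grid), 3))

def locate(b_measured, k=3):
    # Match on the full 3D vector, not just its magnitude.
    d = np.linalg.norm(fingerprints - b_measured, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)               # inverse-distance weighting
    return (grid[nearest] * w[:, None]).sum(axis=0) / w.sum()

true_idx = 137
b = fingerprints[true_idx] + rng.normal(0, 0.5, size=3)
print("true:", grid[true_idx], "estimated:", locate(b))
```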

  15. Bayesian estimation of animal movement from archival and satellite tags.

    Directory of Open Access Journals (Sweden)

    Michael D Sumner

    The reliable estimation of animal location, and of its associated error, is fundamental to animal ecology. There are many existing techniques for handling location error, but these are often ad hoc or are used in isolation from each other. In this study we present a Bayesian framework for determining location that uses all the data available, is flexible across tagging techniques, and provides location estimates with built-in measures of uncertainty. Bayesian methods allow the contributions of multiple data sources to be decomposed into manageable components. We illustrate with two examples for two different location methods: satellite tracking and light-level geolocation. We show that many of the problems with uncertainty involved are reduced and quantified by our approach. This approach can use any available information, such as existing knowledge of the animal's potential range, light levels or direct location estimates, auxiliary data, and movement models. The approach provides a substantial contribution to handling uncertainty in archival tag and satellite tracking data using readily available tools.

  16. GNSS-SLR satellite co-location for the estimate of local ties

    Science.gov (United States)

    Bruni, Sara; Zerbini, Susanna; Errico, Maddalena; Santi, Efisio

    2013-04-01

    The current realization of the International Terrestrial Reference Frame (ITRF) is based on four different space-geodetic techniques, so that the benefits brought by each observing system to the definition of the frame can compensate for the drawbacks of the others and technique-specific systematic errors might be identified. The strategy used to combine the observations from the different techniques is then of prominent importance for the realization of a precise and stable reference frame. This study concentrates, in particular, on the combination of Satellite Laser Ranging (SLR) and Global Navigation Satellite System (GNSS) observations by exploiting satellite co-locations. This innovative approach is based on the fact that laser tracking of GNSS satellites, carrying on board laser reflector arrays, allows for the combination of optical and microwave signals in the determination of the spacecraft orbit. Besides, the use of satellite co-locations differs quite significantly from the traditional combination method in which each single technique solution is carried out autonomously and is interrelated in a second step. One of the benefits of the approach adopted in this study is that it allows for an independent validation of the local tie, i.e. of the vector connecting the SLR and GNSS reference points in a multi-techniques station. Typically, local ties are expressed by a single value, measured with ground-based geodetic techniques and taken as constant. In principle, however, local ties might show time variations likely caused by the different monumentation characteristics of the GNSS antennas with respect to those of a SLR system. This study evaluates the possibility of using the satellite co-location approach to generate local-ties time series by means of observations available for a selected network of ILRS stations. The data analyzed in this study were acquired as part of the NASA's Earth Science Data Systems and are archived and distributed by the Crustal

  17. LEA: An Algorithm to Estimate the Level of Location Exposure in Infrastructure-Based Wireless Networks

    Directory of Open Access Journals (Sweden)

    Francisco Garcia

    2017-01-01

    Location privacy in wireless networks is nowadays a major concern. This is due to the fact that merely transmitting may allow a network to pinpoint a mobile node. We consider that a first step to protect a mobile node in this situation is to provide it with the means to quantify how accurately a network establishes its position. To achieve this end, we introduce the location-exposure algorithm (LEA), which runs on the mobile terminal only and whose operation consists of two steps. In the first step, LEA discovers the positions of nearby network nodes and uses this information to emulate how they estimate the position of the mobile node. In the second step, it quantifies the level of exposure by computing the distance between the position estimated in the first step and its true position. We refer to these steps as a location-exposure problem. We tested our proposal with simulations and testbed experiments. The results show the ability of LEA to reproduce the location of the mobile node, as seen by the network, and to quantify the level of exposure. This knowledge can help the mobile user decide which actions should be performed before transmitting.
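
    A minimal sketch of LEA's two steps under an assumed log-distance path-loss model: emulate the network's range-based fix, then report the distance to the true position as the exposure level:

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_range(rssi, p0=-40.0, n=2.5):
    # Invert the assumed log-distance path-loss model.
    return 10 ** ((p0 - rssi) / (10 * n))

anchors = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], dtype=float)
true_pos = np.array([18.0, 27.0])

rng = np.random.default_rng(9)
d = np.linalg.norm(anchors - true_pos, axis=1)
rssi = -40.0 - 10 * 2.5 * np.log10(d) + rng.normal(0, 2.0, size=4)
ranges = rssi_to_range(rssi)

# Step 1: the network's view -- nonlinear least squares on the ranges.
fix = least_squares(lambda p: np.linalg.norm(anchors - p, axis=1) - ranges,
                    x0=anchors.mean(axis=0)).x
# Step 2: exposure = error the network would make.
print("emulated fix:", fix, "exposure (m):", np.linalg.norm(fix - true_pos))
```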

  18. Full waveform approach for the automatic detection and location of acoustic emissions from hydraulic fracturing at Äspö (Sweden)

    Science.gov (United States)

    Ángel López Comino, José; Cesca, Simone; Heimann, Sebastian; Grigoli, Francesco; Milkereit, Claus; Dahm, Torsten; Zang, Arno

    2017-04-01

    seismic event. The relative location accuracy is improved using a master event approach to correct for travel time perturbations. The master event is selected based on a good signal to noise ratio leading to a robust location with small uncertainties. Relative magnitudes are finally estimated upon the decay of the maximal recorded amplitude from the AE location. The resulting catalogue is composed of more than 4000 AEs. Their hypocenters are spatially clustered in a planar region, resembling the main fracture plane; its orientation and size are estimated from the spatial distribution of AEs. This work is funded by the EU H2020 SHEER project. Nova project 54-14-1 was financially supported by the GFZ German Research Center for Geosciences (75%), the KIT Karlsruhe Institute of Technology (15%) and the Nova Center for University Studies, Research and Development (10%). An additional in-kind contribution of SKB for using Äspö Hard Rock Laboratory as test site for geothermal research is greatly acknowledged.

  19. Decommissioning Cost Estimating -The ''Price'' Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  20. Reference Device-Assisted Adaptive Location Fingerprinting

    Directory of Open Access Journals (Sweden)

    Dongjin Wu

    2016-06-01

    Location fingerprinting suffers in dynamic environments and needs recalibration from time to time to maintain system performance. This paper proposes an adaptive approach for location fingerprinting. Based on real-time received signal strength indicator (RSSI) samples measured by a group of reference devices, the approach applies a modified Universal Kriging (UK) interpolant to estimate adaptive temporal and environmental radio maps. The modified UK can take the spatial distribution characteristics of RSSI into account. In addition, the issue of device heterogeneity caused by multiple reference devices is further addressed. To compensate for the measuring differences of heterogeneous reference devices, a differential RSSI metric is employed. Extensive experiments were conducted in an indoor field and the results demonstrate that the proposed approach not only adapts to dynamic environments and the situation of changing APs’ positions, but is also robust toward measuring differences of heterogeneous reference devices.

  1. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    Science.gov (United States)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at high yaw/pitch angles when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  2. Methodological Approaches to Locating Outlets of the Franchise Retail Network

    Directory of Open Access Journals (Sweden)

    Grygorenko Tetyana M.

    2016-08-01

    Methodical approaches to selecting strategic areas of managing the future location of franchise retail network outlets are presented. The main stages in the assessment of strategic areas of managing the future location of franchise retail network outlets have been determined and the evaluation criteria have been suggested. Since such selection requires consideration of a variety of indicators and directions of the assessment, the author proposes a scale of evaluation, which allows generalizing and organizing the research data and calculations of the previous stages of the analysis. The most important criteria for, and the sequence of, the selection of potential franchisees for the franchise retail network have been identified, and a technique for their evaluation has been proposed. The use of the suggested methodological approaches will allow the franchiser to make sound decisions on the selection of potential target markets, minimizing expenditure of time and effort on the selection of franchisees and hence optimizing the process of development of the franchise retail network, which will contribute to the formation of its structure.

  3. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm specific technical efficiency and mean technical efficiency are estimated using the non parametric Data Envelope Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  4. Geospatial tools effectively estimate nonexceedance probabilities of daily streamflow at ungauged and intermittently gauged locations in Ohio

    Directory of Open Access Journals (Sweden)

    William H. Farmer

    2017-10-01

    New hydrological insights for the region: Several methods for estimating nonexceedance probabilities of daily mean streamflows are explored, including single-index methodologies (nearest-neighboring index) and geospatial tools (kriging and topological kriging). These methods were evaluated by conducting leave-one-out cross-validations based on analyses of nearly 7 years of daily streamflow data from 79 unregulated streamgages in Ohio and neighboring states. The pooled, ordinary kriging model, with a median Nash–Sutcliffe performance of 0.87, was superior to the single-site index methods, though there was some bias in the tails of the probability distribution. Incorporating network structure through topological kriging did not improve performance. The pooled, ordinary kriging model was applied to 118 locations without systematic streamgaging across Ohio where instantaneous streamflow measurements had been made concurrent with water-quality sampling on at least 3 separate days. Spearman rank correlations between estimated nonexceedance probabilities and measured streamflows were high, with a median value of 0.76. In consideration of application, the degree of regulation in a set of sample sites helped to specify the streamgages required to implement kriging approaches successfully.
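
    A minimal ordinary-kriging sketch of the interpolation step, with an assumed (not fitted) exponential variogram and hypothetical site coordinates:

```python
import numpy as np

def variogram(h, sill=0.1, rngp=50.0, nugget=0.01):
    # Exponential variogram; all parameters here are assumed.
    return nugget + sill * (1.0 - np.exp(-h / rngp))

def ordinary_krige(sites, values, target):
    n = len(sites)
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)          # gamma(0) = 0 by definition
    A[n, n] = 0.0                             # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(sites - target, axis=1))
    w = np.linalg.solve(A, b)[:n]             # kriging weights, sum to 1
    return float(w @ values)

sites = np.array([[0, 0], [30, 5], [10, 40], [45, 35]], dtype=float)
p_obs = np.array([0.62, 0.58, 0.71, 0.65])    # same-day nonexceedance probs
print(ordinary_krige(sites, p_obs, np.array([20.0, 20.0])))
```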

  5. Location Privacy on DVB-RCS using a “Spatial-Timing” Approach

    Directory of Open Access Journals (Sweden)

    A. Aggelis

    2011-09-01

    The DVB-RCS synchronization scheme on the Return Channel requires the RCSTs to be programmed with their location coordinates with an accuracy of no more than a few kilometers. RCSTs use this location information in their ranging calculation to the servicing satellite. For certain users this disclosure of location information to the network operator can be seen as a serious security event. Recent work of the authors overcame this requirement by cloaking the location of an RCST in such a way (based on "spatial"/geometric symmetries of the network) that the respective ranging calculations are not affected. In this work we argue that timing tolerances in the Return Channel synchronization scheme, accepted by the DVB-RCS standard, can be used in combination with the "spatial" method, further enhancing the location privacy of an RCST. Theoretical findings of the proposed "spatial-timing" approach were used to develop a practical method that can be used by workers in the field. Finally, this practical method was successfully tested on a real DVB-RCS system.

  6. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  7. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression-tree forest models developed with subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and a direct estimate, was compared to direct estimates and "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes using the multimodel approach presented here, indicating a more efficient use of financial resources.
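
    A minimal sketch of the multimodel idea, using a random forest as the covariate-driven spatial model (out-of-bag predictions keep the comparison honest) blended with the noisy direct estimates; the data and the blend weight are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic areas with covariates (e.g. land cover, rainfall, ...) and a
# true livestock density driven by two of them.
rng = np.random.default_rng(10)
n_areas = 300
covariates = rng.normal(size=(n_areas, 5))
true_density = 50 + 20 * covariates[:, 0] - 10 * covariates[:, 1]
direct = true_density + rng.normal(0, 15, n_areas)   # noisy survey estimates

# Spatial model trained on the noisy direct estimates; out-of-bag
# predictions avoid rewarding in-sample overfit.
rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(covariates, direct)
spatial = rf.oob_prediction_

ensemble = 0.6 * spatial + 0.4 * direct              # illustrative blend weight
for name, est in [("direct", direct), ("spatial", spatial),
                  ("ensemble", ensemble)]:
    print(name, "MAE vs truth:", float(np.abs(est - true_density).mean()))
```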

  8. Low complexity joint estimation of reflection coefficient, spatial location, and Doppler shift for MIMO-radar by exploiting 2D-FFT

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2014-01-01

    In multiple-input multiple-output (MIMO) radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, maximum-likelihood (ML) estimation yields the best performance. For this problem, the ML estimation requires

  9. FDI theories. A location-based approach

    Directory of Open Access Journals (Sweden)

    Popovici, Oana Cristina

    2014-09-01

    Given the importance of FDI for the economic growth of both home and host countries, the aim of this paper is to assess the importance granted to location advantages during the development of FDI theory. We start with the earliest theoretical directions regarding FDI location issues and extend our study to less-debated theories that are nevertheless of particular importance for this theme. In this way, we have the opportunity to emphasize the changes in FDI location determinants. We find that one direction in which FDI theories have expanded is the incorporation of new location variables, although location advantages are barely mentioned in the first explanations of the international activity of firms.

  10. Evaluations of carbon fluxes estimated by top-down and bottom-up approaches

    Science.gov (United States)

    Murakami, K.; Sasai, T.; Kato, S.; Hiraki, K.; Maksyutov, S. S.; Yokota, T.; Nasahara, K.; Matsunaga, T.

    2013-12-01

    There are two approaches to estimating carbon fluxes using satellite observation data, referred to as top-down and bottom-up approaches. Many uncertainties, however, still remain in these carbon flux estimations, because the true values of carbon flux are still unclear and estimations vary according to the type of model (e.g. a transport model, a process-based model) and input data. The CO2 fluxes in these approaches are estimated using different satellite data, such as the distribution of CO2 concentration in the top-down approach and land cover information (e.g. leaf area, surface temperature) in the bottom-up approach. Satellite-based CO2 flux estimations with reduced uncertainty can be used efficiently for the identification of large emission areas and the carbon stocks of forest areas. In this study, we evaluated the carbon flux estimates from the two approaches by comparing them with each other. The Greenhouse gases Observing SATellite (GOSAT) has been observing atmospheric CO2 concentrations since 2009. The GOSAT L4A data product consists of monthly CO2 flux estimations for 64 sub-continental regions and is estimated using GOSAT FTS SWIR L2 XCO2 data and an atmospheric tracer transport model. We used the GOSAT L4A CO2 flux as the top-down estimate and net ecosystem production (NEP) estimated by the diagnostic-type biosphere model BEAMS as the bottom-up estimate. BEAMS NEP covers only natural land CO2 flux, so we used the GOSAT L4A CO2 flux after subtracting anthropogenic CO2 emissions and oceanic CO2 flux. We compared the two approaches in the temperate north-east Asia region. This region is covered by grassland and cropland (about 60%), forest (about 20%) and bare ground (about 20%). The temporal variation over a one-year period showed similar trends between the two approaches. Furthermore, we show the comparison of CO2 flux estimations in other sub-continental regions.

  11. On the benefits of location-based relay selection in mobile wireless networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2016-01-01

    We consider infrastructure-based mobile networks that are assisted by a single relay transmission where both the downstream destination and relay nodes are mobile. Selecting the optimal transmission path for a destination node requires up-to-date link quality estimates of all relevant links....... If the relay selection is based on link quality measurements, the number of links to update grows quadratically with the number of nodes, and measurements need to be updated frequently when nodes are mobile. In this paper, we consider a location-based relay selection scheme where link qualities are estimated...... from node positions; in the scenario of a node-based location system such as GPS, the location-based approach reduces signaling overhead, which in this case only grows linearly with the number of nodes. This paper studies these two relay selection approaches and investigates how they are affected...

  12. The privacy concerns in location based services: protection approaches and remaining challenges

    OpenAIRE

    Basiri, Anahid; Moore, Terry; Hill, Chris

    2016-01-01

    Despite the growth in the development of Location Based Services (LBS) applications, several challenges remain. One of the most important concerns about LBS, shared by many users and service providers, is privacy. Privacy has been considered a big threat to the adoption of LBS among many users and consequently to the growth of LBS markets. This paper discusses the privacy concerns associated with location data, and the current privacy protection approaches. It re...

  13. A Data-Driven Reliability Estimation Approach for Phased-Mission Systems

    Directory of Open Access Journals (Sweden)

    Hua-Feng He

    2014-01-01

    Full Text Available We address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS using the condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from the existing methods, which consider only the static scenario without using real-time information and aim to estimate the reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and real-time information of the individual PMS, we adopt a stochastic filtering model to model the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.

  14. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    Science.gov (United States)

    Southard, Rodney E.

    2013-01-01

    The weather and precipitation patterns in Missouri vary considerably from year to year. In 2008, the statewide average rainfall was 57.34 inches, and in 2012 it was 30.64 inches. This variability in precipitation and resulting streamflow in Missouri underlies the necessity for water managers and users to have reliable streamflow statistics and a means to compute select statistics at ungaged locations for a better understanding of water availability. Knowledge of surface-water availability is dependent on the streamflow data that have been collected and analyzed by the U.S. Geological Survey for more than 100 years at approximately 350 streamgages throughout Missouri. The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, computed streamflow statistics at streamgages through the 2010 water year, defined periods of drought, defined methods to estimate streamflow statistics at ungaged locations, and developed regional regression equations to compute selected streamflow statistics at ungaged locations. Streamflow statistics and flow durations were computed for 532 streamgages in Missouri and in neighboring States. For streamgages with more than 10 years of record, Kendall’s tau was computed to evaluate trends in streamflow data. If trends were detected, the variable length method was used to define the period of no trend. Water years were removed from the dataset from the beginning of the record for a streamgage until no trend was detected. Low-flow frequency statistics were then computed for the entire period of record and for the period of no trend if 10 or more years of record were available for each analysis. Three methods are presented for computing selected streamflow statistics at ungaged locations. The first method uses power curve equations developed for 28 selected streams in Missouri and neighboring States that have multiple streamgages on the same streams. Statistical

  15. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available Abstract As technology scales for increased circuit density and performance, the management of power consumption in system-on-chip (SoC) designs is becoming critical. Today, having the appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge for the design of such dedicated tools is to achieve a better tradeoff between accuracy and speed. This paper presents a consumption estimation approach that takes the consumption criterion into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation for both white-box intellectual properties (IPs), using annotated power models, and black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model-driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by both estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach allows obtaining quite accurate power estimates early in the design flow, thanks to the automation offered by the MDE approach.

  16. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.

  17. GIS-based Approach to Estimate Surface Runoff in Small Catchments: A Case Study

    Directory of Open Access Journals (Sweden)

    Vojtek Matej

    2016-09-01

    Full Text Available The issue of surface runoff assessment is one of the important and relevant topics of hydrological as well as geographical research. The aim of the paper is therefore to estimate and assess surface runoff using the example of the Vyčoma catchment, which is located in western Slovakia. For this purpose, the SCS runoff curve number method, modeling in GIS, and remote sensing were used. An important task was the creation of a digital elevation model (DEM), which enters the surface runoff modeling and affects its accuracy. Great attention was paid to the spatial interpretation of land use categories, applying aerial imagery from 2013, and to hydrological soil groups, as well as to the calculation of maximum daily rainfall with N-year return periods, as partial tasks in estimating surface runoff. From the methodological point of view, the importance of the paper lies in the use of a simple GIS-based approach to assess the surface runoff conditions in a small catchment.
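
    To make the curve number step concrete, the sketch below computes direct runoff depth from a storm depth and a curve number in metric units; the standard 0.2·S initial abstraction ratio and the example values are generic textbook assumptions, not figures from the Vyčoma study.

        def scs_runoff_mm(rainfall_mm, curve_number, ia_ratio=0.2):
            """Direct runoff depth (mm) by the SCS curve number method."""
            s = 25400.0 / curve_number - 254.0  # potential maximum retention (mm)
            ia = ia_ratio * s                   # initial abstraction
            if rainfall_mm <= ia:
                return 0.0                      # all rainfall abstracted, no runoff
            return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

        # Example: 80 mm design storm on a soil/land-use combination with CN = 75
        print(round(scs_runoff_mm(80.0, 75), 1))  # about 26.9 mm of direct runoff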

  18. A heuristic approach to handle capacitated facility location problem evaluated using clustering internal evaluation

    Science.gov (United States)

    Sutanto, G. R.; Kim, S.; Kim, D.; Sutanto, H.

    2018-03-01

    One of the problems in dealing with the capacitated facility location problem (CFLP) occurs because of the difference between the capacity of the facilities and the number of customers that need to be served. A facility with small capacity may result in uncovered customers. These customers need to be re-allocated to another facility that still has available capacity. Therefore, an approach is proposed to handle CFLP by using the k-means clustering algorithm to handle customers' allocation; whether customers need to be re-allocated is then decided by the overall average distance between customers and the facilities. This new approach is benchmarked against the existing approach by Liao and Guo, which also uses the k-means clustering algorithm as a base idea to decide the facility locations and customers' allocation. Both approaches are benchmarked using three clustering evaluation methods based on connectedness, compactness, and separation factors.

  19. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. The MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate convergence.
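
    As a minimal illustration of the Metropolis-Hastings algorithm discussed above, the sketch below samples a one-dimensional posterior with a Gaussian random-walk proposal; the target density and the step size are arbitrary choices for the example, and tuning that step size is precisely the proposal-calibration issue the abstract raises.

        import numpy as np

        def metropolis_hastings(log_target, x0, n_samples=10000, step=0.5, seed=0):
            """Random-walk Metropolis-Hastings sampler for a 1-D density."""
            rng = np.random.default_rng(seed)
            x, samples = x0, np.empty(n_samples)
            for i in range(n_samples):
                proposal = x + step * rng.standard_normal()
                # Accept with probability min(1, target(proposal) / target(x))
                if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
                    x = proposal
                samples[i] = x
            return samples

        # Example target: unnormalized log-density of a N(2, 1) posterior
        draws = metropolis_hastings(lambda x: -0.5 * (x - 2.0) ** 2, x0=0.0)
        print(draws[1000:].mean())  # approximate posterior mean, close to 2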

  20. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to obtain the error in ice discharge. Regarding area, our preliminary results suggest that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
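
    The cross-section comparison can be made concrete: for a gate of surface width w and centerline thickness h, a profile of the form y = h(1 - |2x/w|^n) has area w·h·n/(n+1), so a parabola (n = 2) gives (2/3)·w·h while a quartic (n = 4) gives the larger (4/5)·w·h, consistent with the overestimation noted above. The profile family and the numbers below are illustrative assumptions, not the authors' geometry.

        def cross_section_area(width_m, center_thickness_m, exponent=2):
            """Area of a power-law cross-section y = h * (1 - |2x/w|**n)."""
            n = exponent
            return width_m * center_thickness_m * n / (n + 1.0)

        w, h = 4000.0, 500.0                     # hypothetical gate: 4 km wide, 500 m thick
        parabolic = cross_section_area(w, h, 2)  # (2/3) * w * h
        quartic = cross_section_area(w, h, 4)    # (4/5) * w * h
        print(quartic / parabolic - 1.0)         # quartic area is 20% larger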

  1. A Partial Join Approach for Mining Co-Location Patterns: A Summary of Results

    National Research Council Canada - National Science Library

    Yoo, Jin S; Shekhar, Shashi

    2005-01-01

    ... They propose a novel partial-join approach for mining co-location patterns efficiently. It transactionizes continuous spatial data while keeping track of the spatial information not modeled by transactions...

  2. Techniques for the estimation of global irradiation from sunshine duration and global irradiation estimation for Italian locations

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-04-01

    The Angstrom equation H = H_0(a + b·S/S_0) has been fitted using the least-squares method to the global irradiation and sunshine duration data of 31 Italian locations for the period 1965-1974. Three more linear equations: i) the equation H' = H_0(a + b·S/S_0), obtained by incorporating the effect of the multiple reflections between the earth's surface and the atmosphere, ii) the equation H = H_0(a + b·S/S'_0), obtained by incorporating the effect of the sunshine recorder chart not burning when the elevation of the sun is less than 5 deg., and iii) the equation H' = H_0(a + b·S/S'_0), obtained by incorporating both of the above effects simultaneously, have also each been fitted to the same data. Good correlations with correlation coefficients around 0.9 or more are obtained for most of the locations with all four equations. Substantial spatial scatter is obtained in the values of the regression parameters. The use of any of the three latter equations does not result in any advantage over that of the simpler Angstrom equation; it neither results in a decrease in the spatial scatter of the regression parameters nor does it yield better correlation. The computed values of the regression parameters in the Angstrom equation yield estimates of the global irradiation that are on average within ±4% of the measured values for most of the locations. (author)
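
    Fitting the Angstrom regression parameters a and b reduces to ordinary least squares of the clearness index H/H_0 on the sunshine fraction S/S_0, as in the sketch below; the monthly values are invented for the demonstration, not data from the 31 Italian locations.

        import numpy as np

        # Hypothetical monthly means: sunshine fraction S/S0 and clearness index H/H0
        sunshine_fraction = np.array([0.31, 0.42, 0.48, 0.55, 0.63, 0.70])
        clearness_index = np.array([0.38, 0.44, 0.47, 0.51, 0.56, 0.60])

        # Fit H/H0 = a + b * (S/S0) by least squares; polyfit returns [b, a]
        b, a = np.polyfit(sunshine_fraction, clearness_index, 1)
        r = np.corrcoef(sunshine_fraction, clearness_index)[0, 1]
        print(f"a = {a:.3f}, b = {b:.3f}, correlation = {r:.3f}")

        # Estimated global irradiation for S/S0 = 0.5, given H0 = 30 MJ/m^2
        print(30.0 * (a + b * 0.5))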

  3. A New Wave Equation Based Source Location Method with Full-waveform Inversion

    KAUST Repository

    Wu, Zedong

    2017-05-26

    Locating the source of a passively recorded seismic event is still a challenging problem, especially when the velocity is unknown. Many imaging approaches to focus the image do not address the velocity issue and result in images plagued with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early to allow for data fitting, and to avoid cycle skipping, using an extended source. At the later stages, the focusing of the image dominates the inversion, allowing for high-resolution source and velocity inversion. We also compute the source location explicitly, and numerical tests show that we obtain good estimates of the source locations with this approach.

  4. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Full Text Available Abstract Background The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of datasets and conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach to determine the appropriate rank of the (co)variance matrix. Results Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  5. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
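
    The mixture idea can be sketched in a few lines: average a parametric and a nonparametric curve estimate with a weight that favors the better-fitting model. The inverse-residual-sum-of-squares weight below is a simple stand-in for illustration, not the authors' weighting scheme, and all arrays are toy data.

        import numpy as np

        def mixture_estimate(y_parametric, y_nonparametric, y_observed):
            """Weighted average of two curve estimates; weight favors better fit."""
            rss_p = np.sum((y_observed - y_parametric) ** 2)
            rss_n = np.sum((y_observed - y_nonparametric) ** 2)
            w = (1.0 / rss_p) / (1.0 / rss_p + 1.0 / rss_n)
            return w * y_parametric + (1.0 - w) * y_nonparametric

        y_obs = np.array([0.05, 0.12, 0.33, 0.61, 0.82, 0.95])  # toy dose-response data
        y_par = np.array([0.08, 0.15, 0.30, 0.55, 0.80, 0.93])  # parametric fit
        y_non = np.array([0.06, 0.11, 0.35, 0.60, 0.83, 0.94])  # nonparametric fit
        print(mixture_estimate(y_par, y_non, y_obs))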

  6. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations From a TMS Coil.

    Science.gov (United States)

    Makarov, Sergey N; Yanamadala, Janakinadh; Piazza, Matthew W; Helderman, Alex M; Thang, Niang S; Burnham, Edward H; Pascual-Leone, Alvaro

    2016-09-01

    Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of this study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100 000 observation points, and two distinct pulse rise times, thus providing a representative number of different datasets for comparison, while also using other numerical data. Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. The simple analytical model tested in this study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women.

  7. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Full Text Available Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step in the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries, and if not handled properly they may produce inaccurate results during data analysis. Although a traditional method such as the maximum likelihood method extrapolates missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. The paper describes the behavior and characteristics of the Kestrel bird and models them into an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The results of the Wilcoxon test indicate that time does not have a significant effect on performance and that the difference in estimation quality between the paired algorithms was significant; the Friedman test ranked KSA as the best evolutionary algorithm.
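
    The evaluation protocol described above (MAE on imputed values plus a Wilcoxon signed-rank test between paired algorithms) can be reproduced generically as follows; the arrays are fabricated placeholders, not results from the paper.

        import numpy as np
        from scipy.stats import wilcoxon

        def mae(true_values, estimates):
            """Mean absolute error between true and imputed values."""
            return np.mean(np.abs(np.asarray(true_values) - np.asarray(estimates)))

        true = np.array([5.1, 3.2, 7.8, 6.4, 2.9, 8.1, 4.4, 5.6])
        alg_a = np.array([5.0, 3.5, 7.5, 6.2, 3.1, 8.4, 4.1, 5.9])  # imputer A
        alg_b = np.array([5.6, 2.6, 8.6, 5.7, 3.6, 7.3, 5.1, 6.4])  # imputer B

        print(mae(true, alg_a), mae(true, alg_b))
        # Paired test on the absolute errors of the two algorithms
        stat, p = wilcoxon(np.abs(true - alg_a), np.abs(true - alg_b))
        print(f"Wilcoxon statistic = {stat}, p-value = {p:.3f}")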

  8. Estimating Bus Loads and OD Flows Using Location-Stamped Farebox and Wi-Fi Signal Data

    Directory of Open Access Journals (Sweden)

    Yuxiong Ji

    2017-01-01

    Full Text Available Electronic fareboxes integrated with Automatic Vehicle Location (AVL) systems can provide location-stamped records to infer passenger boardings at individual stops. However, bus loads and Origin-Destination (OD) flows, which are useful for route planning, design, and real-time control, cannot be derived directly from farebox data. Recently, Wi-Fi sensors have been used to collect passenger OD flow information, but the data are insufficient to capture the variation of passenger demand across bus trips. In this study, we propose a hierarchical Bayesian model to estimate trip-level OD flow matrices and a period-level OD flow matrix using sampled OD flow data collected by Wi-Fi sensors and boarding data provided by fareboxes. Bus loads on each bus trip are derived directly from the estimated trip-level OD flow matrices. The proposed method is evaluated empirically on an operational bus route, and the results demonstrate that it provides good and detailed route-level passenger demand information by combining farebox and Wi-Fi signal data.

  9. A new MODIS based approach for gas flared volumes estimation: the case of the Val d'Agri Oil Center (Southern Italy)

    Science.gov (United States)

    Lacava, T.; Faruolo, M.; Coviello, I.; Filizzola, C.; Pergola, N.; Tramutoli, V.

    2014-12-01

    Gas flaring is one of the most controversial energy and environmental issues the Earth is facing, contributing to global warming and climate change. According to the World Bank, about 150 billion cubic meters of gas are flared globally each year, equivalent to the annual gas use of Italy and France combined. Besides, about 400 million tons of CO2 (representing about 1.2% of global CO2 emissions) are added annually to the atmosphere. Efforts to evaluate the impact of flaring on the surrounding environment are hampered by the lack of official information on flare locations and volumes. Suitable satellite-based techniques could offer a potential solution to this problem through the detection and subsequent mapping of flare locations as well as the estimation of gas emissions. In this paper, a new methodological approach, based on the Robust Satellite Techniques (RST), a multi-temporal scheme of satellite data analysis, was developed to analyze and characterize the flaring activity of the largest Italian gas and oil pre-treatment plant (ENI-COVA), located in Val d'Agri (Basilicata). For this site, located in an anthropized area characterized by large environmental complexity, flaring emissions are mainly related to emergency conditions (i.e. waste flaring), the industrial process being regulated by strict regional laws. With reference to the peculiar characteristics of COVA flaring, the RST approach was implemented on 13 years of EOS-MODIS (Earth Observing System - Moderate Resolution Imaging Spectroradiometer) infrared data to detect COVA-related thermal anomalies and to develop a regression model for flared gas volume estimation. The methodological approach, the whole processing chain and the preliminarily achieved results are shown and discussed in this paper. In addition, the possible implementation of the proposed approach on the data acquired by the SUOMI NPP - VIIRS (National Polar-orbiting Partnership - Visible Infrared Imaging

  10. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Sen Liu

    2015-01-01

    Full Text Available Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors can affect the decision-making of location, including both quantitative and qualitative factors; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem, which takes both quantitative and qualitative factors into account. In this paper, an improved multiple attribute group decision-making (MAGDM) method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on the above evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.

  11. Estimating raptor nesting success: old and new approaches

    Science.gov (United States)

    Brown, Jessi L.; Steenhof, Karen; Kochert, Michael N.; Bond, Laura

    2013-01-01

    Studies of nesting success can be valuable in assessing the status of raptor populations, but differing monitoring protocols can present unique challenges when comparing populations of different species across time or geographic areas. We used large datasets from long-term studies of 3 raptor species to compare estimates of apparent nest success (ANS, the ratio of successful to total number of nesting attempts), Mayfield nesting success, and the logistic-exposure model of nest survival. Golden eagles (Aquila chrysaetos), prairie falcons (Falco mexicanus), and American kestrels (F. sparverius) differ in their breeding biology and the methods often used to monitor their reproduction. Mayfield and logistic-exposure models generated similar estimates of nesting success with similar levels of precision. Apparent nest success overestimated nesting success and was particularly sensitive to inclusion of nesting attempts discovered late in the nesting season. Thus, the ANS estimator is inappropriate when exact point estimates are required, especially when most raptor pairs cannot be located before or soon after laying eggs. However, ANS may be sufficient to assess long-term trends of species in which nesting attempts are highly detectable.
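
    The estimators being compared are easy to state: apparent nest success is the ratio of successful to total attempts, while the Mayfield estimate raises the daily survival rate (one minus failures per exposure-day) to the length of the nesting period. A minimal sketch with invented monitoring numbers:

        def apparent_nest_success(successful, total_attempts):
            """ANS: ratio of successful to total nesting attempts."""
            return successful / total_attempts

        def mayfield_nest_success(failures, exposure_days, nesting_period_days):
            """Mayfield estimate: daily survival rate ** nesting period length."""
            daily_survival = 1.0 - failures / exposure_days
            return daily_survival ** nesting_period_days

        # Hypothetical data: 12 failures over 900 exposure-days, 45-day nesting period
        print(mayfield_nest_success(12, 900.0, 45))  # about 0.55
        print(apparent_nest_success(60, 80))         # 0.75, typically an overestimate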

  12. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimates, while in others, indirect approaches did. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches measured as root mean squared errors (RMSE) were RMSEdirect = 18.7 t ha−1, RMSEindirect = 12.6 t ha−1 and RMSEdir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied to radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.

  13. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  14. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    Science.gov (United States)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is the knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo methods to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when

  15. Locating sensors for detecting source-to-target patterns of special nuclear material smuggling: a spatial information theoretic approach.

    Science.gov (United States)

    Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong

    2010-01-01

    In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  16. Locating Sensors for Detecting Source-to-Target Patterns of Special Nuclear Material Smuggling: A Spatial Information Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Xuesong Zhou

    2010-08-01

    Full Text Available In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  17. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  18. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • A Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used for model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified with two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring the safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model; furthermore, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best according to the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both the LiFePO4 and LiMn2O4 battery datasets.
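
    The AIC-based order selection in step (2) can be illustrated generically: fit each candidate hysteresis order, compute AIC = 2k + n·ln(RSS/n), and keep the order with the smallest value. The residual sums of squares below are placeholders, not fits of the paper's battery models.

        import numpy as np

        def aic(rss, n_samples, n_params):
            """Akaike information criterion for a least-squares fit."""
            return 2 * n_params + n_samples * np.log(rss / n_samples)

        # Hypothetical RSS and parameter counts for hysteresis orders 0..3,
        # each fitted to the same n = 500 voltage samples
        candidates = {0: (4.80, 3), 1: (2.10, 5), 2: (2.09, 7), 3: (2.08, 9)}
        scores = {m: aic(rss, 500, k) for m, (rss, k) in candidates.items()}
        print("best hysteresis order:", min(scores, key=scores.get))  # order 1 wins:
        # the small RSS gains of higher orders do not pay for the extra parameters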

  19. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

    Full Text Available We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements from rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieved similar horizontal distributions in terms of the locations of the eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated into models to improve forecasts and prevent potential damage in Taiwan during typhoon seasons.

  20. Comparative study of approaches to estimate pipe break frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Pulkkinen, U.; Talja, H.; Saarenheimo, A.; Karjalainen-Roikonen, P. [VTT Industrial Systems (Finland)

    2002-12-01

    The report describes a comparative study of two approaches to estimating pipe leak and rupture frequencies. One method is based on a probabilistic fracture mechanics (PFM) model while the other is based on statistical estimation of rupture frequencies from a large database. In order to compare the approaches and their results, the rupture frequencies of some selected welds have been estimated using both methods. This paper highlights the differences in the methods, input data, the need for and use of plant-specific information, and the need for expert judgement. The study focuses on one specific degradation mechanism, namely intergranular stress corrosion cracking (IGSCC). This is the major degradation mechanism in old stainless steel piping in a BWR environment, and its growth is influenced by material properties, stresses and water chemistry. (au)

  1. An Information-Based Approach to Precision Analysis of Indoor WLAN Localization Using Location Fingerprint

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2015-12-01

    Full Text Available In this paper, we propose a novel information-based approach to the precision analysis of indoor wireless local area network (WLAN) localization using location fingerprints. First, by using the Fisher information matrix (FIM), we derive the fundamental limit of WLAN fingerprint-based localization precision, considering different signal distributions in characterizing the variation of received signal strengths (RSSs) in the target environment. After that, we explore the relationship between the localization precision and access point (AP) placement, which can provide valuable suggestions for the design of highly precise localization systems. Second, we adopt the heuristic simulated annealing (SA) algorithm to optimize the AP locations for the sake of approaching the fundamental limit of localization precision. Finally, extensive simulations and experiments are conducted in both regular line-of-sight (LOS) and irregular non-line-of-sight (NLOS) environments to demonstrate that the proposed approach can not only effectively improve the WLAN fingerprint-based localization precision, but also reduce the time overhead.
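
    The Fisher-information step can be sketched for a common special case: Gaussian shadowing on a log-distance path-loss model, where the position FIM is the sum of outer products of the mean-RSS gradients and the bound on RMS location error is the root trace of its inverse. The path-loss exponent, shadowing deviation, and AP coordinates below are illustrative assumptions, not values from the paper.

        import numpy as np

        def rss_crlb(position, aps, eta=3.0, sigma_db=4.0):
            """Cramer-Rao lower bound (m) on RMS location error from RSS."""
            x = np.asarray(position, dtype=float)
            fim = np.zeros((2, 2))
            for ap in np.asarray(aps, dtype=float):
                diff = x - ap
                d2 = diff @ diff
                grad = -(10.0 * eta / np.log(10.0)) * diff / d2  # d(mean RSS)/dx
                fim += np.outer(grad, grad) / sigma_db ** 2
            return float(np.sqrt(np.trace(np.linalg.inv(fim))))

        aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
        print(rss_crlb((3.0, 4.0), aps))  # bound at one evaluation point, in meters

    Evaluating this bound over a grid of candidate AP placements is the kind of objective the simulated annealing step above would minimize.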

  2. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
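
    The core of the procedure can be sketched as follows: locate a conservative population sensitivity when the sample shows no false negatives, then resample sensitivity and specificity to build the negative LR distribution. This is a simplified stand-in for the published bootLR procedure: the binomial-mode fallback n/(n+1) and the plain percentile interval are assumptions of this sketch, not the paper's exact rules.

        import numpy as np

        def neg_lr_bootstrap_ci(tp, fn, tn, fp, n_boot=20000, alpha=0.05, seed=1):
            """Percentile bootstrap CI for (1 - sensitivity) / specificity."""
            rng = np.random.default_rng(seed)
            n_dis, n_non = tp + fn, tn + fp
            sens = tp / n_dis
            if fn == 0:
                # Lowest p for which an all-positive sample is the modal outcome
                sens = n_dis / (n_dis + 1.0)
            spec = tn / n_non
            boot_sens = rng.binomial(n_dis, sens, n_boot) / n_dis
            boot_spec = rng.binomial(n_non, spec, n_boot) / n_non
            boot_spec = np.clip(boot_spec, 0.5 / n_non, None)  # avoid divide-by-zero
            neg_lr = (1.0 - boot_sens) / boot_spec
            return np.quantile(neg_lr, [alpha / 2, 1 - alpha / 2])

        # 100 diseased, all test positive; specificity 60% (120 of 200 non-diseased)
        print(neg_lr_bootstrap_ci(tp=100, fn=0, tn=120, fp=80))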

  3. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    Science.gov (United States)

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people seriously or moderately injured and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.) on casualties. The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.

  4. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B [Instrument Department, College of Mechatronics Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)

    2006-10-15

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guide Vehicle (AGV) and a single base-station, and then establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on the above model, the envelope correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that in the reference signal, there will be a sharp correlation value in their envelope correlation function after they are processed by the above algorithm; otherwise, there will be no prominent correlation value. So, the AGV can identify each base-station easily.
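
    A generic version of the envelope-correlation delay estimator can be built from an FFT-based cross-correlation followed by a Hilbert-transform envelope, as below; the m-sequence length, samples per chip, noise level, and sampling rate are invented for the demonstration.

        import numpy as np
        from scipy.signal import hilbert, max_len_seq

        fs, true_delay = 100_000, 137               # sampling rate (Hz), delay (samples)
        code = 2 * max_len_seq(8)[0] - 1            # 255-chip m-sequence in {-1, +1}
        reference = np.repeat(code, 40)             # 40 samples per chip

        rng = np.random.default_rng(0)
        received = np.zeros(len(reference) + 1000)
        received[true_delay:true_delay + len(reference)] += reference
        received += 0.5 * rng.standard_normal(received.size)  # additive noise

        # FFT-based cross-correlation of the received signal with the reference
        n = len(received) + len(reference) - 1
        spec = np.fft.rfft(received, n) * np.conj(np.fft.rfft(reference, n))
        xcorr = np.fft.irfft(spec, n)
        envelope = np.abs(hilbert(xcorr))           # envelope of the correlation
        print("estimated delay:", int(np.argmax(envelope)), "samples")

    With a matching m-sequence the envelope shows a single sharp peak at the true delay; with a mismatched code the peak disappears, which is exactly the base-station identification property described above.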

  5. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    International Nuclear Information System (INIS)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B

    2006-01-01

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guide Vehicle (AGV) and a single base-station, and then establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on the above model, the envelope correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that in the reference signal, there will be a sharp correlation value in their envelope correlation function after they are processed by the above algorithm; otherwise, there will be no prominent correlation value. So, the AGV can identify each base-station easily

  6. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Shangjie [Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, Tianjin (China); Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Hara, Wendy; Wang, Lei; Buyyounouski, Mark K.; Le, Quynh-Thu; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Li, Ruijiang, E-mail: rli2@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States)

    2017-03-15

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.

  7. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
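
    For reference, the classical single-administration coefficient mentioned above, Cronbach's alpha, is computed from item and total-score variances as below; the item scores are invented, and the latent-class-based estimator itself (which requires fitting an unrestricted latent class model) is not reproduced here.

        import numpy as np

        def cronbach_alpha(item_scores):
            """Cronbach's alpha; rows are respondents, columns are test items."""
            x = np.asarray(item_scores, dtype=float)
            k = x.shape[1]                           # number of items
            item_vars = x.var(axis=0, ddof=1).sum()  # sum of item variances
            total_var = x.sum(axis=1).var(ddof=1)    # variance of the total score
            return k / (k - 1.0) * (1.0 - item_vars / total_var)

        scores = np.array([[3, 4, 3, 5],
                           [2, 2, 3, 2],
                           [4, 5, 4, 4],
                           [1, 2, 1, 2],
                           [5, 4, 5, 5]])
        print(round(cronbach_alpha(scores), 3))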

  8. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
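
    The dependence modeling referred to above can be illustrated with a Gumbel copula: the probability that both flood variables exceed their marginal design quantiles follows by inclusion-exclusion, P(U > u, V > v) = 1 - u - v + C(u, v). The copula family, its parameter, and the one-event-per-year convention below are illustrative assumptions, not the study's fitted model.

        import numpy as np

        def gumbel_copula(u, v, theta):
            """Gumbel copula C(u, v); theta >= 1 sets upper-tail dependence."""
            s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
            return np.exp(-s ** (1.0 / theta))

        def joint_exceedance_return_period(u, v, theta):
            """Return period (years) of both variables exceeding their quantiles."""
            p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
            return 1.0 / p_and

        # Both marginals at their 50-year level (non-exceedance prob. 0.98),
        # moderate dependence theta = 2
        print(joint_exceedance_return_period(0.98, 0.98, theta=2.0))  # ~85 years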

  9. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Abstract Background Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  10. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  11. Fetal QRS detection and heart rate estimation: a wavelet-based approach

    International Nuclear Information System (INIS)

    Almeida, Rute; Rocha, Ana Paula; Gonçalves, Hernâni; Bernardes, João

    2014-01-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world, but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector, and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post-processing rules (SLR), from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations were considered for validation, with SLR outperforming SL, including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. (paper)

  12. Estimating the Risk of Tropical Cyclone Characteristics Along the United States Gulf of Mexico Coastline Using Different Statistical Approaches

    Science.gov (United States)

    Trepanier, J. C.; Ellis, K.; Jagger, T.; Needham, H.; Yuan, J.

    2017-12-01

    Tropical cyclones, with their high wind speeds, high rainfall totals, and deep storm surges, frequently strike the United States Gulf of Mexico coastline, influencing millions of people and disrupting offshore economic activities. Events such as Hurricane Katrina in 2005 and Hurricane Isaac in 2012 can be physically different but still have detrimental effects due to their locations of influence. There are a wide variety of ways to estimate the risk of occurrence of extreme tropical cyclones. Here, the combined risk of tropical cyclone storm surge and nearshore wind speed, modeled using a statistical copula, is provided for 22 Gulf of Mexico coastal cities. Of the cities considered, Bay St. Louis, Mississippi has the shortest return period for a tropical cyclone with at least a 50 m s-1 nearshore wind speed and a three-meter surge (19.5 years, 17.1-23.5). Additionally, a multivariate regression model is provided estimating the compound effects of tropical cyclone tracks, landfall central pressure, the amount of accumulated precipitation, and storm surge for five locations around Lake Pontchartrain in Louisiana. It is shown that the most intense tropical cyclones typically approach from the south and that a small change in the amount of rainfall or landfall central pressure leads to a large change in the final storm surge depth. Data are used from the National Hurricane Center, U-Surge, SURGEDAT, and the Cooperative Observer Program. The differences between the two statistical approaches are discussed, along with the advantages and limitations of each. The goal of combining the results of the two studies is to gain a better understanding of the most appropriate risk estimation technique for a given area.
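
    The copula idea can be illustrated with a Gaussian copula stand-in (not the authors' fitted model; the marginals, dependence parameter and thresholds below are invented): the joint exceedance probability, and hence the joint return period, is estimated by Monte Carlo:

```python
# Gaussian-copula Monte Carlo estimate of a joint return period (the
# marginals, dependence and thresholds are invented, not the fitted model).
import numpy as np
from scipy import stats

rho, n = 0.6, 1_000_000
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)                          # copula samples on [0,1]^2

wind = stats.weibull_min(2.0, scale=25.0).ppf(u[:, 0])   # m/s, assumed marginal
surge = stats.lognorm(0.8, scale=1.0).ppf(u[:, 1])       # m, assumed marginal

p_joint = np.mean((wind >= 50.0) & (surge >= 3.0))       # annual exceedance
print("joint return period: %.0f years" % (1.0 / p_joint))
```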

  13. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties, (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  14. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them do not explicitly consider attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.

  15. Featureous: an Integrated Approach to Location, Analysis and Modularization of Features in Java Applications

    DEFF Research Database (Denmark)

    Olszak, Andrzej

    It is essential that features are properly modularized within the structural organization of software systems. Nevertheless, in many object-oriented applications, features are not represented explicitly. Consequently, features typically end up scattered and tangled over multiple source code units, such as architectural layers, packages and classes. This lack of modularization is known to make application features difficult to locate, to comprehend and to modify in isolation from one another. To overcome these problems, this thesis proposes Featureous, a novel approach to location, analysis and modularization of features in Java applications. Quantitative and qualitative results suggest that Featureous succeeds at efficiently locating features in unfamiliar codebases, at aiding feature-oriented comprehension and modification, and at improving modularization of features using Java packages.

  16. Microseismic event location by master-event waveform stacking

    Science.gov (United States)

    Grigoli, F.; Cesca, S.; Dahm, T.

    2016-12-01

    Waveform stacking location methods are nowadays extensively used to monitor induced seismicity associated with several underground industrial activities such as mining, oil and gas production, and geothermal energy exploitation. In the last decade a significant effort has been spent to develop or improve methodologies able to perform automated seismological analysis for weak events at a local scale. This effort was accompanied by the improvement of monitoring systems, resulting in an increasing number of large microseismicity catalogs. The analysis of microseismicity is challenging because of the large number of recorded events, often characterized by a low signal-to-noise ratio. A significant limitation of the traditional location approaches is that automated picking is often done on each seismogram individually, making little or no use of the coherency information between stations. In order to improve on the traditional location methods, alternative approaches have been proposed in recent years. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Their main advantage is their robustness even when the recorded waveforms are very noisy. On the other hand, like any other location method, the location performance strongly depends on the accuracy of the available velocity model; when dealing with inaccurate velocity models, location results can be affected by large errors. Here we introduce a new automated waveform stacking location method that is less dependent on knowledge of the velocity model and presents several benefits that improve the location accuracy: 1) it accounts for phase delays due to local site effects, e.g. surface topography or variable sediment thickness; 2) theoretical velocity models are only used to estimate travel times within the source volume, and not along the whole source-sensor path.
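
    A toy grid-search version of plain waveform stacking (not the master-event variant proposed here; velocity model, geometry and waveforms are synthetic) illustrates the principle: shift each station's characteristic function by the predicted travel time and pick the candidate node whose stack peaks highest:

```python
# Toy grid-search stacking location in a homogeneous model (synthetic
# onsets; the master-event and site-correction refinements are omitted).
import numpy as np

fs, v = 100.0, 3.0                             # Hz, km/s (assumed)
stations = np.array([[0.0, 0], [10, 0], [0, 10], [10, 10]])   # km
true_src, t0 = np.array([6.0, 3.0]), 2.0

t = np.arange(0, 10, 1 / fs)
cf = np.zeros((len(stations), t.size))         # characteristic functions
for i, s in enumerate(stations):
    arrival = t0 + np.linalg.norm(s - true_src) / v
    cf[i] = np.exp(-0.5 * ((t - arrival) / 0.05) ** 2)        # synthetic onset

best, best_val = None, -1.0
for x in np.arange(0, 10.5, 0.5):
    for y in np.arange(0, 10.5, 0.5):
        shifts = (np.linalg.norm(stations - [x, y], axis=1) / v * fs).astype(int)
        stack = sum(np.roll(cf[i], -shifts[i]) for i in range(len(stations)))
        if stack.max() > best_val:
            best, best_val = (x, y), stack.max()
print("stack maximum at", best)                # close to the true source
```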

  17. Planning Emergency Shelters for Urban Disaster Resilience: An Integrated Location-Allocation Modeling Approach

    Directory of Open Access Journals (Sweden)

    Laijun Zhao

    2017-11-01

    Full Text Available In recent years, extreme natural hazards have threatened cities more than ever due to contemporary society's high vulnerability in cities. Hence, local governments need to implement risk mitigation and disaster operations management to enhance disaster resilience in cities. Transforming existing open spaces within cities into emergency shelters is an effective method of providing essential life support and an agent of recovery in the wake of disasters. Emergency shelter planning must identify suitable locations for shelters and reasonably allocate evacuees to those shelters. In this paper, we first consider both the buildings' post-disaster condition and the human choice factor that affect evacuees' decisions, and propose a forecasting method to estimate the time-varying shelter demand. Then we formulate an integrated location-allocation model that is used sequentially: an emergency shelter location model to satisfy the time-varying shelter demand in a given urban area, with the goal of minimizing the total setup cost of locating the shelters, and an allocation model that allocates the evacuees to shelters with the goal of minimizing their total evacuation distance. We also develop an efficient algorithm to solve the model. Finally, we present an emergency shelter plan based on a case study of Shanghai, China.
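
    A minimal optimization sketch of the location-allocation idea, assuming the PuLP library and invented data (the paper's time-varying demand forecast and sequential two-model formulation are not reproduced; here setup cost and evacuation distance are folded into one small MILP):

```python
# Tiny MILP sketch of location-allocation (PuLP assumed installed; data
# invented; the paper's sequential two-model formulation is folded into
# one weighted objective here).
import pulp

sites = {"A": (5, 100), "B": (8, 150), "C": (4, 80)}   # (setup cost, capacity)
zones = {"z1": 90, "z2": 60}                           # evacuees per zone
dist = {("z1", "A"): 2, ("z1", "B"): 5, ("z1", "C"): 4,
        ("z2", "A"): 6, ("z2", "B"): 1, ("z2", "C"): 3}

m = pulp.LpProblem("shelter_plan", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
flow = pulp.LpVariable.dicts("flow", dist, lowBound=0)

m += pulp.lpSum(sites[s][0] * open_[s] for s in sites) + \
     pulp.lpSum(dist[k] * flow[k] for k in dist)       # cost + distance
for z in zones:                                        # every evacuee sheltered
    m += pulp.lpSum(flow[(z, s)] for s in sites) == zones[z]
for s in sites:                                        # capacity only if open
    m += pulp.lpSum(flow[(z, s)] for z in zones) <= sites[s][1] * open_[s]

m.solve(pulp.PULP_CBC_CMD(msg=0))
print({s: int(open_[s].value()) for s in sites})
```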

  18. Big Data-Based Approach to Detect, Locate, and Enhance the Stability of an Unplanned Microgrid Islanding

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Li, Yan; Zhang, Yingchen; Zhang, Jun Jason; Gao, David Wenzhong; Muljadi, Eduard; Gu, Yi

    2017-10-01

    In this paper, a big data-based approach is proposed for the security improvement of an unplanned microgrid islanding (UMI). The proposed approach contains two major steps: the first step is big data analysis of wide-area monitoring to detect a UMI and locate it; the second step is particle swarm optimization (PSO)-based stability enhancement for the UMI. First, an optimal synchrophasor measurement device selection (OSMDS) and matching pursuit decomposition (MPD)-based spatial-temporal analysis approach is proposed to significantly reduce the volume of data while keeping appropriate information from the synchrophasor measurements. Second, a random forest-based ensemble learning approach is trained to detect the UMI. When combined with grid topology, the UMI can be located. Then the stability problem of the UMI is formulated as an optimization problem and the PSO is used to find the optimal operational parameters of the UMI. An eigenvalue-based multiobjective function is proposed, which aims to improve the damping and dynamic characteristics of the UMI. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed approach.

  19. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications specifically attempting to estimate the location of a target in surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replicant of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
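
    The matched-filter operation described above reduces to a few lines (synthetic trace; the delay, noise level and propagation speed are invented): the cross-correlation peak between the received signal and a replica of the transmitted chirp gives the echo delay and hence the range:

```python
# Matched-filter ranging in a few lines (synthetic trace; delay, noise
# and propagation speed are invented).
import numpy as np
from scipy.signal import chirp, correlate

fs = 10_000.0
t = np.arange(0, 0.1, 1 / fs)
tx = chirp(t, f0=500, f1=2000, t1=t[-1])       # transmitted chirp replica

delay_s, c = 0.031, 1500.0                     # true echo delay (s), m/s
rx = np.zeros(2 * t.size)
i0 = int(delay_s * fs)
rx[i0:i0 + t.size] += 0.3 * tx                 # weak echo
rx += np.random.default_rng(3).normal(0, 0.2, rx.size)

lag = np.argmax(correlate(rx, tx, mode="valid")) / fs
print("estimated range: %.1f m" % (lag * c / 2))
```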

  20. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and to compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated the measurement uncertainty of the concentration of glucose according to the CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach: we identified the sources of uncertainty, made an uncertainty budget, assessed the measurement functions, determined the uncertainties of each element, and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using a certified reference material for glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches are approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
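
    A worked toy version of the two calculations (all numbers invented, not those of the study): bottom-up combines independent component standard uncertainties in quadrature, top-down combines long-term IQC imprecision with the bias uncertainty from the certified reference material, and both are expanded with k = 2:

```python
# Worked toy combination of uncertainties (numbers invented, not the
# study's budget): bottom-up sums component variances; top-down combines
# IQC imprecision with CRM bias uncertainty; both expanded with k = 2.
import math

u_components = [0.05, 0.06, 0.04]      # mmol/L: calibration, repeatability, volume
u_bottom_up = math.sqrt(sum(u ** 2 for u in u_components))

u_iqc, u_bias = 0.07, 0.04             # mmol/L: 6-month IQC sd, CRM bias term
u_top_down = math.sqrt(u_iqc ** 2 + u_bias ** 2)

k = 2                                  # coverage factor
print("U(bottom-up) = +/-%.2f mmol/L" % (k * u_bottom_up))
print("U(top-down)  = +/-%.2f mmol/L" % (k * u_top_down))
```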

  1. A Visual Analysis Approach for Inferring Personal Job and Housing Locations Based on Public Bicycle Data

    Directory of Open Access Journals (Sweden)

    Xiaoying Shi

    2017-07-01

    Full Text Available Information concerning the home and workplace of residents is the basis of analyzing the urban job-housing spatial relationship. Traditional methods conduct time-consuming user surveys to obtain personal job and housing location information. Some newer methods define rules to detect personal places based on human mobility data. However, because the travel patterns of residents are variable, simple rule-based methods are unable to generalize highly changing and complex travel modes. In this paper, we propose a visual analysis approach to assist the analyst in inferring personal job and housing locations interactively based on public bicycle data. All users are first clustered to find potential commuting users. Then, several visual views are designed to find the key candidate stations for a specific user, and the temporal pattern of station visits and the user's hire behavior are analyzed, which helps with inferring the semantic meanings of stations. Finally, a number of users' job and housing locations are detected by the analyst and visualized. Our approach can manage the complex and diverse cycling habits of users. The effectiveness of the approach is shown through case studies based on a real-world public bicycle dataset.

  2. Estimating wetland connectivity to streams in the Prairie Pothole Region: An isotopic and remote sensing approach

    Science.gov (United States)

    Brooks, J. R.; Mushet, David M.; Vanderhoof, Melanie; Leibowitz, Scott G.; Neff, Brian; Christensen, J. R.; Rosenberry, Donald O.; Rugh, W. D.; Alexander, L.C.

    2018-01-01

    Understanding hydrologic connectivity between wetlands and perennial streams is critical to understanding the reliance of stream flow on inputs from wetlands. We used the isotopic evaporation signal in water and remote sensing to examine wetland‐stream hydrologic connectivity within the Pipestem Creek watershed, North Dakota, a watershed dominated by prairie‐pothole wetlands. Pipestem Creek exhibited an evaporated‐water signal that had approximately half the isotopic‐enrichment signal found in most evaporatively enriched prairie‐pothole wetlands. Groundwater adjacent to Pipestem Creek had isotopic values that indicated recharge from winter precipitation and had no significant evaporative enrichment, indicating that enriched surface water did not contribute significantly to groundwater discharging into Pipestem Creek. The estimated surface water area necessary to generate the evaporation signal within Pipestem Creek was highly dynamic, varied primarily with the amount of discharge, and was typically greater than the immediate Pipestem Creek surface water area, indicating that surficial flow from wetlands contributed to stream flow throughout the summer. We propose a dynamic range of spilling thresholds for prairie‐pothole wetlands across the watershed allowing for wetland inputs even during low‐flow periods. Combining Landsat estimates with the isotopic approach allowed determination of potential (Landsat) and actual (isotope) contributing areas in wetland‐dominated systems. This combined approach can give insights into the changes in location and magnitude of surface water and groundwater pathways over time. This approach can be used in other areas where evaporation from wetlands results in a sufficient evaporative isotopic signal.

  3. Lesion size estimator of cardiac radiofrequency ablation at different common locations with different tip temperatures.

    Science.gov (United States)

    Lai, Yu-Chi; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G

    2004-10-01

    Finite element method (FEM) analysis has become a common method to analyze lesion formation during temperature-controlled radiofrequency (RF) cardiac ablation. We present a process for FEM modeling of a system including blood, myocardium, and an ablation catheter with a thermistor embedded at the tip. The simulation used a simple proportional-integral (PI) controller to control the entire process operated in temperature-controlled mode. Several factors affect the lesion size, such as target temperature, blood flow rate, and application time. We simulated the time response of RF ablation at different locations by using different target temperatures. The applied sites were divided into two groups, each with a different convective heat transfer coefficient: the first group was high-flow, such as the atrioventricular (AV) node and the atrial aspect of the AV annulus, and the other was low-flow, such as beneath the valve or inside the coronary sinus. Results showed the change of lesion depth and lesion width with time under different conditions. We collected data for all conditions and used them to create a database. We implemented a user interface, the lesion size estimator, where the user enters the set temperature and location. Based on the database, the software estimates lesion dimensions for different application durations. This software could be used as a first-step predictor to help the electrophysiologist choose treatment parameters.
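
    The temperature-controlled mode can be mimicked with a discrete PI loop driving a one-pole thermal stand-in for the FEM model (gains, time constants and the tissue response below are invented for illustration):

```python
# Discrete PI loop holding a simulated tip temperature at the setpoint
# (one-pole thermal stand-in for the FEM model; gains and constants invented).
dt, steps = 0.1, 600                  # 60 s at 10 Hz
kp, ki = 2.0, 0.5                     # PI gains (assumed)
tau, gain, T_blood = 5.0, 1.5, 37.0   # thermal lag (s), degC/W, baseline
setpoint = 70.0                       # target tip temperature, degC

T, integ = T_blood, 0.0
for _ in range(steps):
    err = setpoint - T
    integ += err * dt
    P = max(0.0, kp * err + ki * integ)          # power, clamped at 0 W
    T += dt / tau * (T_blood + gain * P - T)     # first-order tissue response
print("tip temperature after 60 s: %.1f degC (power %.1f W)" % (T, P))
```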

  4. Location Discovery Based on Fuzzy Geometry in Passive Sensor Networks

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2011-01-01

    Full Text Available Location discovery with uncertainty using passive sensor networks in the nation's power grid is known to be challenging, due to the massive scale and inherent complexity. For bearings-only target localization in passive sensor networks, the approach of fuzzy geometry is introduced to investigate the fuzzy measurability for a moving target in R2 space. The fuzzy analytical bias expressions and the geometrical constraints are derived for bearings-only target localization. The interplay between the fuzzy geometry of target localization and the fuzzy estimation bias for the case of a fuzzy linear observer trajectory is analyzed in detail, which makes it possible to realize three-dimensional localization, including fuzzy estimates of the position and velocity of the target, by measuring the fuzzy azimuth angles at fixed time intervals. Simulation results show that the resulting position estimate outperforms the traditional least-squares approach for localization with uncertainty.
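
    For comparison, the crisp (non-fuzzy) least-squares counterpart of bearings-only localization is compact (geometry and noise level invented): each sensor's azimuth defines a line of position, and stacking the line equations gives a small linear system:

```python
# Least-squares bearings-only fix (crisp counterpart of the fuzzy scheme;
# geometry and noise invented): each azimuth a_i from sensor s_i gives the
# line sin(a_i)*x - cos(a_i)*y = sin(a_i)*s_ix - cos(a_i)*s_iy.
import numpy as np

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([6.0, 4.0])
az = np.arctan2(target[1] - sensors[:, 1], target[0] - sensors[:, 0])
az += np.random.default_rng(7).normal(0, 0.01, az.size)   # bearing noise, rad

A = np.column_stack([np.sin(az), -np.cos(az)])
b = np.sin(az) * sensors[:, 0] - np.cos(az) * sensors[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated target position:", est)
```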

  5. Precision of EM Simulation Based Wireless Location Estimation in Multi-Sensor Capsule Endoscopy.

    Science.gov (United States)

    Khan, Umair; Ye, Yunxing; Aisha, Ain-Ul; Swar, Pranay; Pahlavan, Kaveh

    2018-01-01

    In this paper, we compute and examine two-way localization limits for an RF endoscopy pill as it passes through an individual's gastrointestinal (GI) tract. We obtain finite-difference time-domain and finite-element-method-based simulation results for position assessment employing time of arrival (TOA). By means of a 3-D human body representation from a full-wave simulation software and lognormal models for TOA propagation from implant organs to the body surface, we calculate bounds on location estimators in three digestive organs: stomach, small intestine, and large intestine. We present an investigation of the factors influencing localization precision, consisting of a range of organ properties, peripheral sensor array arrangements, the number of pills in cooperation, and random variations in the transmit power of sensor nodes. We also perform a localization precision investigation for the situation where the transmission signal of the antenna is arbitrary with a known probability distribution. The computational solver outcomes show that the number of receiver antennas on the exterior of the body has a higher impact on location precision than the number of capsules in collaboration within the GI region. The large intestine is influenced the most by the transmitter power probability distribution.
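
    A simplified TOA multilateration sketch (free-space geometry with an assumed effective in-tissue speed, not the study's lognormal propagation models; all values invented): solve for the capsule position and the emission time by nonlinear least squares:

```python
# TOA multilateration by nonlinear least squares (free-space stand-in for
# the study's lognormal in-body models; speed, geometry and noise invented).
import numpy as np
from scipy.optimize import least_squares

c = 3e8 / 7.0                                   # effective in-tissue speed, m/s
sensors = np.array([[0.0, 0.0, 0.0], [0.4, 0, 0], [0, 0.4, 0],
                    [0, 0, 0.4], [0.4, 0.4, 0.4]])
capsule, t0 = np.array([0.15, 0.20, 0.10]), 1e-9
toa = t0 + np.linalg.norm(sensors - capsule, axis=1) / c
toa += np.random.default_rng(2).normal(0, 2e-11, toa.size)  # timing noise, s

def resid(p):                                   # p = (x, y, z, t0)
    return p[3] + np.linalg.norm(sensors - p[:3], axis=1) / c - toa

fit = least_squares(resid, x0=[0.2, 0.2, 0.2, 0.0])
print("estimated capsule position (m):", fit.x[:3])
```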

  6. Non-invasive approach towards the in vivo estimation of 3D inter-vertebral movements: methods and preliminary results.

    Science.gov (United States)

    Cerveri, P; Pedotti, A; Ferrigno, G

    2004-12-01

    A kinematical model of the lower spine was designed and used to obtain a robust estimation of the vertebral rotations during torso movements from skin-surface markers recorded by video cameras. Markers were placed in correspondence with the anatomical landmarks of the pelvic bone and the vertebral spinous and transverse processes, and acquired during flexion, lateral bending and axial motions. In the model calibration stage, a motion-based approach was used to compute the rotation axes and centres of the functional segmental units. Markers were mirrored into virtual points located on the model surface, expressed in the local reference system of coordinates. The spine motion assessment was solved in the domain of extended Kalman filters: at each frame of the acquisition, the model pose was updated by minimizing the distances between the measured 2D marker projections on the cameras and the corresponding back-projections of virtual points located on the model surface. The novelty of the proposed technique rests on the fact that the varying location of the rotation centres of the functional segmental units can be tracked directly during motion computation. In addition, we show how the effects of skin artefacts on orientation data can be taken into account. As a result, the kinematical estimation of simulated motions shows that orientation artefacts were reduced by a factor of at least 50%. Preliminary experiments on real motion confirmed the reliability of the proposed method, with results in agreement with classical studies in the literature.

  7. Estimation of net greenhouse gas balance using crop- and soil-based approaches: Two case studies

    International Nuclear Information System (INIS)

    Huang, Jianxiong; Chen, Yuanquan; Sui, Peng; Gao, Wansheng

    2013-01-01

    The net greenhouse gas balance (NGHGB), estimated by combining direct and indirect greenhouse gas (GHG) emissions, can reveal whether an agricultural system is a sink or source of GHGs. Currently, two types of methods, referred to here as crop-based and soil-based approaches, are widely used to estimate the NGHGB of agricultural systems on annual and seasonal crop timescales. However, the two approaches may produce contradictory results, and few studies have tested which approach is more reliable. In this study, we examined the two approaches using experimental data from an intercropping trial with straw removal and a tillage trial with straw return. The results of the two approaches provided different views of the two trials. In the intercropping trial, the NGHGB estimated by the crop-based approach indicated that monocultured maize (M) was a source of GHGs (−1315 kg CO2-eq ha^-1), whereas maize–soybean intercropping (MS) was a sink (107 kg CO2-eq ha^-1). When estimated by the soil-based approach, both cropping systems were sources (−3410 for M and −2638 kg CO2-eq ha^-1 for MS). In the tillage trial, mouldboard ploughing (MP) and rotary tillage (RT) mitigated GHG emissions by 22,451 and 21,500 kg CO2-eq ha^-1, respectively, as estimated by the crop-based approach. However, by the soil-based approach, both tillage methods were sources of GHGs: −3533 for MP and −2241 kg CO2-eq ha^-1 for RT. The crop-based approach calculates a GHG sink on the basis of the returned crop biomass (and other organic matter input) and estimates considerably more GHG mitigation potential than that calculated from the variations in soil organic carbon storage by the soil-based approach. These results indicate that the crop-based approach estimates higher GHG mitigation benefits compared to the soil-based approach and may overestimate the potential of GHG mitigation in agricultural systems. - Highlights: • Net greenhouse gas balance (NGHGB) of

  8. An integrated approach for estimating oil volume in petroleum-contaminated sites: a North American case study

    International Nuclear Information System (INIS)

    Chen, Z.; Huang, G.H.; Chakma, A.

    1999-01-01

    An integrated approach for estimating the distribution of light nonaqueous phase liquids (LNAPLs) such as oil spill and leakage in a porous media is proposed, based on a study at a site located in western Canada. The site has one original release source that is a flare pit, with on-site soil and groundwater seriously contaminated by petroleum products spilled over the past two decades. Results of the study show that soil properties and site characteristics have significant impact on the spreading of contaminants which affect the estimation of contaminant volume. Although the LNAPLs in the subsurface do not appear as a distinct layer, and the volume and distribution differ from site to site, the proposed method offers insight into the contamination details and is, therefore, considered to be an effective and convenient tool for obtaining a reasonable estimate of residual oil volume in the subsurface. Results could also be used in designing an enhanced recovery scheme for the site under study, as well as in designing multi-component models of the subsurface contamination for the purpose of risk assessment. 13 refs., 2 tabs., 2 figs

  9. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories

  10. Damage severity estimation from the global stiffness decrease

    International Nuclear Information System (INIS)

    Nitescu, C; Gillich, G R; Manescu, T; Korka, Z I; Abdel Wahab, M

    2017-01-01

    In current damage detection methods, localization and severity estimation can be treated separately. The severity is commonly estimated using a fracture mechanics approach, with the main disadvantage of involving empirically deduced relations. In this paper, a damage severity estimator based on the global stiffness reduction is proposed. This feature is computed from the deflections of the intact and damaged beam, respectively. A crack has the greatest effect where the bending moment achieves its maxima; if the damage is positioned elsewhere on the beam, its effect becomes lower, because the stress is produced by a diminished bending moment. It is shown that the global stiffness reduction produced by a crack is the same for all beams with a similar cross-section, regardless of the boundary conditions. Two mathematical relations are derived: one indicating the severity and another indicating the effect of removing the damage from the beam. Measurements on damaged beams with different boundary conditions and cross-sections are carried out, and the location and severity are found using the proposed relations. These comparisons prove that the proposed approach can be used to accurately compute the severity estimator. (paper)

  11. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out.

  12. Location capability of a sparse regional network (RSTN) using a multi-phase earthquake location algorithm (REGLOC)

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.

    1994-01-01

    The Regional Seismic Test Network (RSTN) was deployed by the US Department of Energy (DOE) to determine whether data recorded by a regional network could be used to detect and accurately locate seismic events that might be clandestine nuclear tests. The purpose of this paper is to evaluate the location capability of the RSTN. A major part of this project was the development of the location algorithm REGLOC and the application of Bayesian a priori statistics for determining the accuracy of the location estimates. REGLOC utilizes all identifiable phases, including backazimuth, in the location. Ninety-four events, distributed throughout the network area, detected by the RSTN and located by local networks, were used in the study. The location capability of the RSTN was evaluated by estimating the location accuracy, error ellipse accuracy, and the percentage of events that could be located, as a function of magnitude. The location accuracy was verified by comparing the RSTN results for the 94 events with published locations based on data from the local networks. The error ellipse accuracy was evaluated by determining whether the error ellipse includes the actual location. The percentage of events located was assessed by combining detection capability with location capability to determine the percentage of events that could be located within the study area. Events were located with both an average crustal model for the entire region and with regional velocity models along with station corrections obtained from master events. Most events with a magnitude <3.0 can only be located with arrivals from one station. Their average location errors are 453 and 414 km for the average- and regional-velocity-model locations, respectively. Single-station locations are very unreliable because they depend on accurate backazimuth estimates, and backazimuth proved to be a very unreliable computation.

  13. Top-down and bottom-up approaches for cost estimating new reactor designs

    International Nuclear Information System (INIS)

    Berbey, P.; Gautier, G.M.; Duflo, D.; Rouyer, J.L.

    2007-01-01

    For several years, Generation-4 designs will be 'pre-conceptual' for the less mature concepts and 'preliminary' for the more mature concepts. In this situation, appropriate data for some of the plant systems may be lacking to develop a bottom-up cost estimate. Therefore, a more global approach, the Top-Down Approach (TDA), is needed to help designers and decision makers compare design options. It utilizes more or less simple models for cost estimating the different parts of a design. The TDA cost estimating effort applies to a whole functional element, whose cost is approximated from similar estimations coming from existing data, ratios and models, for a given range of variation of the parameters. Modeling is used when direct analogy is not possible. There are two types of models, global and specific ones. Global models are applied to cost modules related to the Code Of Account. Exponential formulae such as Ci = Ai + (Bi × Pi^n) are used when there are cost data for comparable modules in nuclear or other industries. Specific cost models are developed for major specific components of the plant: - process equipment such as the reactor vessel, steam generators or large heat exchangers. - buildings, with formulae estimating the construction cost from a base cost per m3 of building volume. - systems, where unit costs, cost ratios and models are used, depending on the level of detail of the design. The Bottom-Up Approach (BUA), which is based on unit prices coming from similar equipment or from manufacturer consulting, is very valuable and gives better cost estimates than TDA when it can be applied, that is, at a rather late stage of the design. Both approaches are complementary when some parts of the design are detailed enough to be estimated by BUA, and when BUA results are used to check TDA results and to improve TDA models. This methodology is applied to the HTR (High Temperature Reactor) concept and to an advanced PWR design

  14. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    International Nuclear Information System (INIS)

    Wen Fang-Qing; Zhang Gong; Ben De

    2015-01-01

    This paper addresses the direction of arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load compared to the classical Bayesian algorithm. Simulation results show that the proposed algorithm gives more accurate DOA estimates than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. (paper)

  15. Infrastructure and industrial location : a dual technology approach

    OpenAIRE

    Bjorvatn, Kjetil

    2001-01-01

    The paper investigates how differences in infrastructure quality may affect industrial location between countries. Employing a dual-technology model, the main result of the paper is the somewhat surprising conclusion that an improvement in a country's infrastructure may weaken its locational advantage and induce a firm to locate production in a country with a less efficient infrastructure.

  16. Modeling discrete competitive facility location

    CERN Document Server

    Karakitsiou, Athanasia

    2015-01-01

    This book presents an up-to-date review of modeling and optimization approaches for location problems along with a new bi-level programming methodology which captures the effect of competition of both producers and customers on facility location decisions. While many optimization approaches simplify location problems by assuming decision making in isolation, this monograph focuses on models which take into account the competitive environment in which such decisions are made. New insights in modeling, algorithmic and theoretical possibilities are opened by this approach and new applications are possible. Competition on equal term plus competition between market leader and followers are considered in this study, consequently bi-level optimization methodology is emphasized and further developed. This book provides insights regarding modeling complexity and algorithmic approaches to discrete competitive location problems. In traditional location modeling, assignment of customer demands to supply sources are made ...

  17. Near-source mobile methane emission estimates using EPA Method 33A and a novel probabilistic approach as a basis for leak quantification in urban areas

    Science.gov (United States)

    Albertson, J. D.

    2015-12-01

    Methane emissions from underground pipeline leaks remain an ongoing issue in the development of accurate methane emission inventories for the natural gas supply chain. Application of mobile methods during routine street surveys would help address this issue, but there are large uncertainties in current approaches. In this paper, we describe results from a series of near-source (< 30 m) controlled methane releases where an instrumented van was used to measure methane concentrations during both fixed location sampling and during mobile traverses immediately downwind of the source. The measurements were used to evaluate the application of EPA Method 33A for estimating methane emissions downwind of a source and also to test the application of a new probabilistic approach for estimating emission rates from mobile traverse data.

  18. Location-aware mobile technologies: historical, social and spatial approaches

    Directory of Open Access Journals (Sweden)

    Adriana de Souza e Silva

    2013-04-01

    Full Text Available With the popularization of smartphones, location-based services are increasingly part of everyday life. People use their cell phones to find nearby restaurants, friends in the vicinity, and to track their children. Although location-based services have received sparse attention from mobile communication scholars to date, the ability to locate people and things with one's cell phone is not new. Since the removal of GPS signal degradation in 2000, artists and researchers have been exploring how location-awareness influences mobility, spatiality and sociability. Besides exploring the historical antecedents of today's location-based services, this paper focuses on the main social issues that emerge when location-aware technologies leave the strict domain of art and research and become part of everyday life: locational privacy, sociability, and spatiality. Finally, this paper addresses two main topics that future mobile communication research focusing on location-awareness should take into consideration: a shift in the meaning of location, and the adoption and appropriation of location-aware technologies in the global south.

  19. Reconnaissance Estimates of Recharge Based on an Elevation-dependent Chloride Mass-balance Approach

    Energy Technology Data Exchange (ETDEWEB)

    Charles E. Russell; Tim Minor

    2002-08-31

    Significant uncertainty is associated with efforts to quantify recharge in arid regions such as southern Nevada. However, accurate estimates of groundwater recharge are necessary for understanding the long-term sustainability of groundwater resources and for predictions of groundwater flow rates and directions. Currently, the most widely accepted method for estimating recharge in southern Nevada is the Maxey and Eakin method. This method has been applied to most basins within Nevada and has been independently verified as a reconnaissance-level estimate of recharge through several studies. Recharge estimates derived from the Maxey and Eakin and other recharge methodologies ultimately based upon measures or estimates of groundwater discharge (outflow methods) should be augmented by a tracer-based aquifer-response method. The objective of this study was to improve an existing aquifer-response method based on the chloride mass-balance approach. Improvements were designed to incorporate spatial variability within recharge areas (rather than treating recharge as a lumped parameter), to develop a more defensible lower limit of recharge, and to differentiate local recharge from recharge emanating as interbasin flux. Seventeen springs, located in the Sheep Range, Spring Mountains, and on the Nevada Test Site, were sampled during the course of this study and their discharge was measured. The chloride and bromide concentrations of the springs were determined. Discharge and chloride concentrations from these springs were compared to estimates provided by previously published reports. A literature search yielded previously published estimates of chloride flux to the land surface. 36Cl/Cl ratios and discharge rates of the three largest springs in the Amargosa Springs discharge area were compiled from various sources. This information was utilized to determine an effective chloride concentration for recharging precipitation and its associated uncertainty via Monte Carlo simulations
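
    The core chloride mass-balance (CMB) computation with Monte Carlo uncertainty propagation, as the report describes, is short (all input distributions below are invented, not the study's values): recharge is the chloride flux to the land surface divided by the chloride concentration of recharging water:

```python
# Chloride mass-balance recharge with Monte Carlo uncertainty (input
# distributions invented): R = P * Cl_p / Cl_r.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
P = rng.normal(300.0, 30.0, n)                # precipitation, mm/yr
Cl_p = rng.normal(0.4, 0.08, n)               # Cl in precipitation + dry fall, mg/L
Cl_r = rng.lognormal(np.log(10.0), 0.3, n)    # Cl in recharging water, mg/L

R = P * Cl_p / Cl_r                           # recharge, mm/yr
lo, med, hi = np.percentile(R, [5, 50, 95])
print("recharge %.1f mm/yr (90%% interval %.1f-%.1f)" % (med, lo, hi))
```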

  20. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  1. Location Estimation of Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kamil ŽIDEK

    2009-06-01

    Full Text Available This contribution describes a mathematical (kinematic) model of a mobile robot carriage. The model is fully parametric and is designed universally for any three- or four-wheeled carriage dimensions. The assumptions are: the back wheels are the driving wheels, and the angle of the front wheels sets the turning of the robot. The position of the front wheel gives the actual position of the robot, which is described by the coordinates x, y and by the angle α of the front wheel in the reference position. The main reason for implementing the model is indoor navigation, where an estimate of the robot position is needed, especially after the robot turns. It can also be used for outdoor navigation, especially for refining GPS information.
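
    A minimal dead-reckoning sketch in the spirit of the described carriage model (a front-wheel-steered bicycle approximation; the wheelbase and inputs are invented): rear-wheel speed and front-wheel angle are integrated into the pose (x, y, heading):

```python
# Front-wheel-steered dead reckoning (bicycle approximation of the
# carriage model; wheelbase and inputs invented).
import math

L = 0.30                        # wheelbase, m (assumed)
x, y, theta = 0.0, 0.0, 0.0     # pose in the reference frame
v, alpha, dt = 0.2, math.radians(15), 0.05   # speed m/s, steer angle, step s

for _ in range(200):            # 10 s of motion
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / L * math.tan(alpha) * dt    # yaw rate of the bicycle model
print("pose: x=%.2f m, y=%.2f m, heading=%.1f deg" % (x, y, math.degrees(theta)))
```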

  2. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  3. Rapidly locating sources and predicting contaminant dispersion in buildings

    International Nuclear Information System (INIS)

    Sohn, Michael D.; Reynolds, Pamela; Gadgil, Ashok J.; Sextro, Richard G.

    2002-01-01

    Contaminant releases in or near a building can lead to significant human exposures unless prompt response measures are taken. However, selecting the proper response depends in part on knowing the source locations, the amounts released, and the dispersion characteristics of the pollutants. We present an approach that estimates this information in real time. It uses Bayesian statistics to interpret measurements from sensors placed in the building yielding best estimates and uncertainties for the release conditions, including the operating state of the building. Because the method is fast, it continuously updates the estimates as measurements stream in from the sensors. We show preliminary results for characterizing a gas release in a three-floor, multi-room building at the Dugway Proving Grounds, Utah, USA
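
    The Bayesian interpretation step can be sketched as a grid update (the Gaussian decay standing in for a real dispersion simulation, and all values, are invented): the posterior over candidate release locations is re-weighted as each sensor reading streams in:

```python
# Grid-based Bayesian update over candidate release locations (the
# Gaussian decay standing in for a dispersion simulation is invented).
import numpy as np

grid = [(i, j) for i in range(10) for j in range(10)]   # candidate sources
post = np.full(len(grid), 1.0 / len(grid))              # uniform prior

def predicted(src, sensor):
    """Stand-in forward model: concentration decays with distance."""
    return np.exp(-0.5 * np.hypot(src[0] - sensor[0], src[1] - sensor[1]))

true_src, sigma = (3, 7), 0.05
rng = np.random.default_rng(0)
for sensor in [(0, 0), (9, 0), (0, 9), (9, 9), (5, 5)]:
    z = predicted(true_src, sensor) + rng.normal(0, sigma)
    model = np.array([predicted(s, sensor) for s in grid])
    post *= np.exp(-0.5 * ((z - model) / sigma) ** 2)   # Gaussian likelihood
    post /= post.sum()                                  # posterior -> next prior
print("MAP source estimate:", grid[int(np.argmax(post))])
```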

  4. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    Science.gov (United States)

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted due to prerequisites of information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
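
    A schematic sample-importance-resampling pass for the inverse matrix problem (the vital rates and empirical lambda below are invented, not the grey reef shark values): draw candidate mortality rates, weight each by how well the implied Leslie-matrix growth rate matches the empirical lambda, and resample in proportion to the weights:

```python
# Sample-importance-resampling pass for the inverse Leslie problem
# (vital rates and the empirical lambda are invented, not the shark values).
import numpy as np

rng = np.random.default_rng(11)
ages = 10
fec = np.array([0, 0, 0, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
lam_obs, sd = 1.05, 0.02                     # empirical lambda and its sd

def leslie_lambda(M):
    A = np.zeros((ages, ages))
    A[0] = fec                               # fecundities on the first row
    A[np.arange(1, ages), np.arange(ages - 1)] = np.exp(-M)  # survival
    return np.max(np.abs(np.linalg.eigvals(A)))              # dominant eigenvalue

M_draws = rng.uniform(0.05, 0.5, 20_000)     # candidate natural mortality
w = np.array([np.exp(-0.5 * ((leslie_lambda(m) - lam_obs) / sd) ** 2)
              for m in M_draws])
post = rng.choice(M_draws, size=5_000, p=w / w.sum())
print("posterior M: %.3f +/- %.3f" % (post.mean(), post.std()))
```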

  5. A Bayesian approach to estimate sensible and latent heat over vegetated land surface

    Directory of Open Access Journals (Sweden)

    C. van der Tol

    2009-06-01

    Full Text Available Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.

  6. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    Science.gov (United States)

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  7. The effects of sampling location and turbulence on discharge estimates in short converging turbine intakes

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Gomez, P.; Harding, S. F.; Richmond, M. C.

    2017-01-01

    Standards provide recommendations for best practices when installing current meters to measure fluid flow in closed conduits. A central guideline requires the velocity distribution to be regular and the flow steady. Because of the nature of the short converging intakes typical of low-head hydroturbines, these assumptions may be invalid if current meters are intended to be used to estimate discharge. Usual concerns are (1) the effects of the number of devices, (2) the sampling location and (3) the high turbulence caused by blockage from submersible traveling screens usually deployed for safe downstream fish passage. These three effects were examined in the present study by using 3D simulated flow fields in both steady-state and transient modes. In the process of describing an application at an existing hydroturbine intake at Ice Harbor Dam, the present work outlined the methods involved, which combined computational fluid dynamics, laboratory measurements in physical models of the hydroturbine, and current meter performance evaluations in experimental settings. The main conclusions in this specific application were that a steady-state flow field sufficed to determine the adequate number of meters and their location, and that both the transverse velocity and turbulence intensity had a small impact on estimate errors. However, while it may not be possible to extrapolate these findings to other field conditions and measuring devices, the study laid out a path to conduct similar assessments in other applications.

  8. Approaches for the direct estimation of lambda, and demographic contributions to lambda, using capture-recapture data

    Science.gov (United States)

    Nichols, James D.; Hines, James E.

    2002-01-01

    We first consider the estimation of the finite rate of population increase or population growth rate, λi, using capture-recapture data from open populations. We review estimation and modelling of λi under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λi using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γi, of Pradel (1996). We review estimation of γi under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λi and γi with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.

  9. On Algebraic Approach for MSD Parametric Estimation

    OpenAIRE

    Oueslati , Marouene; Thiery , Stéphane; Gibaru , Olivier; Béarée , Richard; Moraru , George

    2011-01-01

    This article addresses the identification problem of the natural frequency and the damping ratio of a second-order continuous system whose input is a sinusoidal signal. An algebra-based approach for identifying the parameters of a mass-spring-damper (MSD) system is proposed and compared to the Kalman-Bucy filter. The proposed estimator uses the algebraic parametric method in the frequency domain, yielding exact formulae which, when placed in the time domain, identify the unknown parameters. We focus ...

  10. Stochastic LMP (Locational marginal price) calculation method in distribution systems to minimize loss and emission based on Shapley value and two-point estimate method

    International Nuclear Information System (INIS)

    Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.

    2016-01-01

    LMP (locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide the distribution company (DISCO) in exerting control over private units so that power loss and greenhouse-gas emissions are minimized. The LMP at each DG bus is calculated according to the contribution of the DG to the reduction in loss and emissions. An iterative algorithm based on the Shapley value method is proposed to allocate the loss and emission reduction. The proposed algorithm provides a robust state estimation tool for DISCOs in the next step of operation. The state estimation tool gives the decision maker the ability to exert control over private DG units when loss and emissions are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and the efficiency and accuracy of the method are verified. - Highlights: • Reduction of the loss and emission at the same time. • Fair allocation of loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.
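
    An exact Shapley allocation for a small example makes the allocation step concrete (three DG units with invented coalition values; the paper's iterative approximation is not reproduced): each unit's share is its marginal contribution averaged over all join orders:

```python
# Exact Shapley allocation of a loss reduction among three DG units
# (coalition values invented; the paper's iterative variant not reproduced).
from itertools import permutations

units = ["DG1", "DG2", "DG3"]
v = {(): 0, ("DG1",): 10, ("DG2",): 14, ("DG3",): 6,
     ("DG1", "DG2"): 28, ("DG1", "DG3"): 18, ("DG2", "DG3"): 22,
     ("DG1", "DG2", "DG3"): 36}              # loss reduction (kW) per coalition

def val(coalition):
    return v[tuple(sorted(coalition))]

shapley = {u: 0.0 for u in units}
orders = list(permutations(units))
for order in orders:
    seen = []
    for u in order:
        shapley[u] += (val(seen + [u]) - val(seen)) / len(orders)
        seen.append(u)
print(shapley)                               # averages of marginal contributions
```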

  11. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
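
    The flavor of the sampling-based estimators can be conveyed with a short sketch. The example below implements a pseudoprofile-style paired bootstrap (simpler than the two-phase random sampling algorithm proposed in the record) on hypothetical sparse concentration data.

```python
import numpy as np
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)

# Hypothetical sparse design: a few subjects sampled once each, grouped by
# nominal sampling time; all concentration values are illustrative only.
times = np.array([1.0, 2.0, 4.0, 8.0])
plasma = np.array([[5.1, 4.8], [3.9, 4.2], [2.1, 2.4], [0.9, 1.1]])  # subjects per time
tissue = np.array([[9.8, 10.4], [8.1, 8.8], [4.9, 5.3], [2.0, 2.3]])

# Pseudoprofile-style bootstrap: draw one paired observation per time point,
# build a profile, and form the ratio of trapezoidal AUCs.
ratios = []
rows = np.arange(len(times))
for _ in range(2000):
    idx = rng.integers(0, plasma.shape[1], size=len(times))
    ratios.append(trapezoid(tissue[rows, idx], times) /
                  trapezoid(plasma[rows, idx], times))
ratios = np.array(ratios)
lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"tissue-to-plasma ratio: {ratios.mean():.2f} (95% CI {lo:.2f}-{hi:.2f})")
```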

  12. A fuzzy regression with support vector machine approach to the estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Baser, Furkan; Demirhan, Haydar

    2017-01-01

    Accurate estimation of the amount of horizontal global solar radiation for a particular field is an important input for decision processes in solar radiation investments. In this article, we focus on the estimation of yearly mean daily horizontal global solar radiation by using an approach that utilizes fuzzy regression functions with support vector machine (FRF-SVM). This approach is not seriously affected by outlier observations and does not suffer from the over-fitting problem. To demonstrate the utility of the FRF-SVM approach in the estimation of horizontal global solar radiation, we conduct an empirical study over a dataset collected in Turkey and apply the FRF-SVM approach with several kernel functions. Then, we compare the estimation accuracy of the FRF-SVM approach to an adaptive neuro-fuzzy system and a coplot supported-genetic programming approach. We observe that the FRF-SVM approach with a Gaussian kernel function is affected by neither outliers nor the over-fitting problem and gives the most accurate estimates of horizontal global solar radiation among the applied approaches. Consequently, the use of hybrid fuzzy functions and support vector machine approaches is found beneficial in long-term forecasting of horizontal global solar radiation over a region with complex climatic and terrestrial characteristics. - Highlights: • A fuzzy regression functions with support vector machines approach is proposed. • The approach is robust against outlier observations and the over-fitting problem. • Estimation accuracy of the model is superior to several existent alternatives. • A new solar radiation estimation model is proposed for the region of Turkey. • The model is useful under complex terrestrial and climatic conditions.

  13. The CAPM approach to materiality

    OpenAIRE

    Hadjieftychiou, Aristarchos

    1993-01-01

    Materiality is a pervasive accounting concept that has defied a precise quantitative definition. The Capital Asset Pricing Model (CAPM) approach to materiality provides a means for determining the limits that bound materiality. Also, the approach makes it possible to locate the point estimate within these limits based on certain assumptions.

  14. Automatic picker of P & S first arrivals and robust event locator

    Science.gov (United States)

    Pinsky, V.; Polozov, A.; Hofstetter, A.

    2003-12-01

    We report on further development of an automatic all-distances location procedure designed for a regional network. The procedure generalizes the previous "local" one: first arrivals are picked using the ratio of two STAs calculated in two consecutive, equal time windows (instead of the previously used Akaike Information Criterion). "Teleseismic" location is split into two stages: a preliminary and a final one. The preliminary part estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent velocity criterion is used to decide on the strategy of the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to get a first-approximation distance to the source on the basis of the P, Pn, and Pg travel-time tables. The distance estimate, together with the preliminary azimuth estimate, provides first approximations of the source latitude and longitude via sine and cosine theorems formulated for the spherical triangle. Final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter. The target function is a sum of bell-shaped characteristic functions, used to emphasize true pickings and eliminate outliers. The final solution is the grid point that provides the maximum of the target function. The procedure was applied to a list of ML > 4 earthquakes recorded by the Israel Seismic Network (ISN) in the 1999-2002 time period. Most of them are badly constrained relative to the network. However, location results with an average normalized error relative to bulletin solutions, e = dr/R, of 5% were obtained in each of the distance ranges. The first version of the procedure was incorporated in the national Early Warning System in 2001. Recently, we started to send automatic Early
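
    A minimal sketch of the final-stage robust grid search, assuming a constant P velocity, a Gaussian form for the bell-shaped characteristic function, and hypothetical station coordinates and picks:

```python
import numpy as np

# Hypothetical station coordinates (km) and observed P arrivals (s).
stations = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0], [80.0, 40.0]])
t_obs = np.array([3.1, 7.9, 9.2, 12.5])
v_p = 6.0      # assumed constant crustal P velocity, km/s
sigma = 0.5    # width of the bell-shaped characteristic function, s

def target(x, y, t0):
    """Sum of Gaussian 'bells' over travel-time residuals: grows with the
    number of pickings that fit the model and is insensitive to outliers."""
    dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
    resid = t_obs - (t0 + dist / v_p)
    return np.exp(-(resid / sigma) ** 2).sum()

# Coarse search; a real implementation refines the grid near the maximum.
xs = ys = np.linspace(0.0, 100.0, 51)
t0s = np.linspace(0.0, 10.0, 41)
score, x, y, t0 = max((target(x, y, t0), x, y, t0)
                      for x in xs for y in ys for t0 in t0s)
print(f"best grid point: x={x:.0f} km, y={y:.0f} km, t0={t0:.2f} s (score {score:.2f})")
```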

  15. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  16. Generating Health Estimates by Zip Code: A Semiparametric Small Area Estimation Approach Using the California Health Interview Survey.

    Science.gov (United States)

    Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian

    2015-12-01

    We propose a method to meet challenges in generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's feature of nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.
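
    The poststratification step itself reduces to a population-weighted average of cell-level model predictions. Below is a minimal sketch with hypothetical predicted probabilities and census counts (the multilevel model fitting is omitted).

```python
import pandas as pd

# Hypothetical poststratification table: model-predicted probability of
# being uninsured per demographic cell, with census counts per cell.
cells = pd.DataFrame({
    "zcta":  ["90001", "90001", "90002", "90002"],
    "cell":  ["male_65-74", "female_75plus", "male_65-74", "female_75plus"],
    "p_hat": [0.22, 0.15, 0.31, 0.19],   # from the fitted multilevel model
    "count": [420, 380, 510, 290],       # persons in that cell (census)
})

# Small area estimate = population-weighted average of cell predictions.
est = cells.groupby("zcta").apply(
    lambda g: (g["p_hat"] * g["count"]).sum() / g["count"].sum())
print(est.round(3))
```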

  17. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

  18. Flood Catastrophe Model for Designing Optimal Flood Insurance Program: Estimating Location-Specific Premiums in the Netherlands.

    Science.gov (United States)

    Ermolieva, T; Filatova, T; Ermoliev, Y; Obersteiner, M; de Bruijn, K M; Jeuken, A

    2017-01-01

    As flood risks grow worldwide, a well-designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches for calculating insurance premiums of rare catastrophic losses. This article focuses on the design of a flood-loss-sharing program involving private insurance based on location-specific exposures. The analysis is guided by a developed integrated catastrophe risk management (ICRM) model consisting of a GIS-based flood model and a stochastic optimization procedure with respect to location-specific risk exposures. To achieve the stability and robustness of the program towards floods with various recurrences, the ICRM uses a stochastic optimization procedure, which relies on quantile-related risk functions of a systemic insolvency involving overpayments and underpayments of the stakeholders. Two alternative ways of calculating insurance premiums are compared: the robust premiums derived with the ICRM and the traditional average annual loss approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they: (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures. © 2016 Society for Risk Analysis.

  19. Estimating Single and Multiple Target Locations Using K-Means Clustering with Radio Tomographic Imaging in Wireless Sensor Networks

    Science.gov (United States)

    2015-03-26

    K-means clustering is an algorithm that has been used in data mining applications such as machine learning, pattern recognition, and hyper-spectral imagery. This thesis applies K-means clustering to radio tomographic imaging (RTI) in wireless sensor networks (WSN) to estimate single and multiple target locations.
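
    A minimal sketch of the clustering step, assuming a synthetic RTI attenuation image rather than real link measurements: high-attenuation pixels are extracted and their coordinates clustered, with the cluster centers taken as the target location estimates.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic RTI attenuation image on a 20 x 20 pixel grid with two targets.
img = rng.normal(0.0, 0.05, (20, 20))
img[5:8, 4:7] += 1.0      # target 1
img[14:17, 13:16] += 1.0  # target 2

# Keep high-attenuation pixels and cluster their coordinates.
ys, xs = np.where(img > 0.5)
coords = np.column_stack([xs, ys]).astype(float)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(coords)
print("estimated target locations (pixels):\n", np.round(km.cluster_centers_, 1))
```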

  20. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of signal have been evaluated for estimation of direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Square-Estimation of Signal Parameters via Rotation Invariance Techniques (LS-ESPRIT) and Total Least Square- ESPIRIT (TLS-ESPRIT) are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
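
    A minimal sketch of the MUSIC variant of these subspace methods for a uniform linear array, with simulated narrowband sources; the array geometry and noise level are assumptions, not the experiment's acoustic setup.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
M, d, n_snap = 8, 0.5, 200          # sensors, spacing in wavelengths, snapshots
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    """ULA steering vectors for angles theta (radians), shape (M, len(theta))."""
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

# Simulate two uncorrelated narrowband sources in white noise.
A = steering(true_doas)
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
noise = 0.1 * (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap)))
X = A @ S + noise
R = X @ X.conj().T / n_snap          # sample covariance matrix

# MUSIC: steering vectors orthogonal to the noise subspace produce peaks.
_, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
En = eigvecs[:, : M - 2]             # noise subspace for 2 sources
grid = np.deg2rad(np.linspace(-90, 90, 721))
p_music = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

peaks, _ = find_peaks(p_music)
top2 = peaks[np.argsort(p_music[peaks])[-2:]]
print("DOA estimates (deg):", np.sort(np.rad2deg(grid[top2])).round(1))
```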

  1. Scaling measurements of metabolism in stream ecosystems: challenges and approaches to estimating reaeration

    Science.gov (United States)

    Bowden, W. B.; Parker, S.; Song, C.

    2016-12-01

    Stream ecologists have used various formulations of an oxygen budget approach as a surrogate to measure "whole-stream metabolism" (WSM) of carbon in rivers and streams. Improvements in sensor technologies that provide reliable, high-frequency measurements of dissolved oxygen concentrations in adverse field conditions have made it much easier to acquire the basic data needed to estimate WSM in remote locations over long periods (weeks to months). However, accurate estimates of WSM require reliable measurements or estimates of the reaeration coefficient (k). Small errors in estimates of k can lead to large errors in estimates of gross ecosystem production and ecosystem respiration, and so in the magnitude of the biological flux of CO2 to or from streams. This is an especially challenging problem in unproductive, oligotrophic streams. Unfortunately, current methods to measure reaeration directly (gas evasion) are expensive, labor-intensive, and time-consuming. As a consequence, there is a substantial mismatch between the time steps at which we can measure reaeration versus most of the other variables required to calculate WSM. As a part of the NSF Arctic Long-Term Ecological Research Project we have refined methods to measure WSM in Arctic streams and found a good relationship between measured k values and those calculated by the Energy Dissipation Model (EDM). Other researchers have also noted that this equation works well for both low- and high-order streams. The EDM is dependent on stream slope (relatively constant) and velocity (which is related to discharge or stage). These variables are easy to measure and can be used to estimate k at high frequency (minutes) over large areas (river networks). As a key part of the NSF MacroSystems Biology SCALER project we calculated WSM for multiple reaches in nested stream networks in six biomes across the United States and Australia. We calculated k by EDM and fitted k via a Bayesian model for WSM. The relationships between
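
    A minimal sketch of an EDM-style calculation, assuming the common form k = c · S · V with a site-calibrated escape coefficient c; the coefficient value below is a placeholder, not from the study.

```python
# Energy Dissipation Model sketch, assuming k = c * S * V, where S is
# stream slope (m/m), V is mean velocity (m/s), and c is an escape
# coefficient that must be site-calibrated (value below is illustrative).

def reaeration_k(slope, velocity, c=35000.0):
    """Reaeration coefficient (per day) from slope and velocity."""
    return c * slope * velocity

# Because velocity follows stage/discharge via a rating curve, k can be
# updated at the same high frequency as the dissolved-oxygen record.
for v in (0.1, 0.3, 0.6):
    print(f"V = {v:.1f} m/s -> k = {reaeration_k(0.005, v):.1f} per day")
```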

  2. MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks

    Science.gov (United States)

    Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna

    In recent years, technological development has encouraged several applications based on distributed communications networks without any fixed infrastructure. This work addresses the problem of providing a collaborative early warning system for multiple mobile nodes against a fast-moving object. The solution is provided subject to system-level constraints: motion of nodes, antenna sensitivity, and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase consists of detecting the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second phase consists of broadcasting the warning message with a broad, low-directivity beam using a 2 × 2 antenna array, which in the third phase is detected by receiving nodes using a direction of arrival (DOA) estimation technique. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes. The position of the fast-arriving object can be estimated using the MUSIC algorithm for warning beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. Simulations are performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial-grade detection and warning system.

  3. Approaches to estimating the universe of natural history collections data

    Directory of Open Access Journals (Sweden)

    Arturo H. Ariño

    2010-10-01

    Full Text Available This contribution explores the problem of recognizing and measuring the universe of specimen-level data existing in Natural History Collections around the world, in the absence of a complete, world-wide census or register. Estimates of size seem necessary to plan for resource allocation for digitization or data capture, and may help represent how many vouchered primary biodiversity data (in terms of collections, specimens or curatorial units) might remain to be mobilized. Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models involve crossing data from a set of biodiversity datasets, finding commonalities and estimating the likelihood of totally obscure data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections’ compositions, figuring out the occult sector of the distributions. Finally, case studies seek to compare digitized data from collections known to the world to the amount of data known to exist in the collection but not generally available or not digitized. Preliminary estimates range from 1.2 to 2.1 gigaunits, of which a mere 3% at most is currently web-accessible through GBIF’s mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change the picture and possibly help narrow the estimate. In particular, unknown collections not having emerged through literature are the major source of uncertainty.

  4. Manually locating physical and virtual reality objects.

    Science.gov (United States)

    Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G

    2014-09-01

    In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were significantly less accurate (by a factor of 1.64) when targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times observed errors for distant virtual targets, but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping involving reaching and manually handling virtual objects in a CAVE is more accurate than predicted when locating farther objects.

  5. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect by filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of some component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Application results for the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally some advantages of this approach are pointed out. (orig.)
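
    The core of the approach, solving the ill-conditioned system by keeping only the dominant singular components, can be sketched as follows with a synthetic response matrix and spectrum; the truncation criterion and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic response matrix A: attenuation of n_e energy bins through n_f
# filtrations (toy model), and noisy measurements y = A @ s + noise.
n_f, n_e = 12, 30
energies = np.linspace(20, 120, n_e)                 # keV
mu = 5.0 / energies                                  # toy attenuation coefficient
thickness = np.linspace(0.0, 10.0, n_f)[:, None]     # mm of filtration
A = np.exp(-thickness * mu)
s_true = np.exp(-0.5 * ((energies - 60) / 15) ** 2)  # toy spectrum
y = A @ s_true + rng.normal(0, 1e-3, n_f)

# Truncated SVD: keep only components whose singular values rise above
# the noise level; this regularizes the ill-conditioned inversion.
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(sv > 1e-2 * sv[0]))                   # assumed truncation rule
s_est = Vt[:k].T @ ((U[:, :k].T @ y) / sv[:k])
rel_err = np.linalg.norm(s_est - s_true) / np.linalg.norm(s_true)
print(f"kept {k} of {len(sv)} components; relative error = {rel_err:.2%}")
```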

  6. Effects of computing parameters and measurement locations on the estimation of 3D NPS in non-stationary MDCT images.

    Science.gov (United States)

    Miéville, Frédéric A; Bolard, Gregory; Bulling, Shelley; Gudinchet, François; Bochud, François O; Verdun, François R

    2013-11-01

    The goal of this study was to investigate the impact of computing parameters and the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm(3) VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Pathological Location of Cranial Nerves in Petroclival Lesions: How to Avoid Their Injury during Anterior Petrosal Approach.

    Science.gov (United States)

    Borghei-Razavi, Hamid; Tomio, Ryosuke; Fereshtehnejad, Seyed-Mohammad; Shibao, Shunsuke; Schick, Uta; Toda, Masahiro; Yoshida, Kazunari; Kawase, Takeshi

    2016-02-01

    Objectives  Numerous surgical approaches have been developed to access the petroclival region. The Kawase approach, through the middle fossa, is a well-described option for addressing cranial base lesions of the petroclival region. Our aim was to gather data about the variation of cranial nerve locations in diverse petroclival pathologies and clarify the most common pathologic variations confirmed during the anterior petrosal approach. Method  A retrospective analysis was made of both videos and operative and histologic records of 40 petroclival tumors from January 2009 to September 2013 in which the Kawase approach was used. The anatomical variations of cranial nerves IV-VI related to the tumor were divided into several location categories: superior lateral (SL), inferior lateral (IL), superior medial (SM), inferior medial (IM), and encased (E). These data were then analyzed taking into consideration pathologic subgroups of meningioma, epidermoid, and schwannoma. Results  In 41% of meningiomas, the trigeminal nerve is encased by the tumor. In 38% of the meningiomas, the trigeminal nerve is in the SL part of the tumor, and it is in 20% of the IL portion of the tumor. In 38% of the meningiomas, the trochlear nerve is encased by the tumor. The abducens nerve is not always visible (35%). The pathologic nerve pattern differs from that of meningiomas for epidermoid and trigeminal schwannomas. Conclusion  The pattern of cranial nerves IV-VI is linked to the type of petroclival tumor. In a meningioma, tumor origin (cavernous, upper clival, tentorial, and petrous apex) is the most important predictor of the location of cranial nerves IV-VI. Classification of four subtypes of petroclival meningiomas using magnetic resonance imaging is very useful to predict the location of deviated cranial nerves IV-VI intraoperatively.

  8. A coherent structure approach for parameter estimation in Lagrangian Data Assimilation

    Science.gov (United States)

    Maclean, John; Santitissadeekorn, Naratip; Jones, Christopher K. R. T.

    2017-12-01

    We introduce a data assimilation method to estimate model parameters with observations of passive tracers by directly assimilating Lagrangian Coherent Structures. Our approach differs from the usual Lagrangian Data Assimilation approach, where parameters are estimated based on tracer trajectories. We employ the Approximate Bayesian Computation (ABC) framework to avoid computing the likelihood function of the coherent structure, which is usually unavailable. We solve the ABC by a Sequential Monte Carlo (SMC) method, and use Principal Component Analysis (PCA) to identify the coherent patterns from tracer trajectory data. Our new method shows remarkably improved results compared to the bootstrap particle filter when the physical model exhibits chaotic advection.
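
    A minimal sketch of the ABC idea underlying the method, using simple rejection sampling rather than the paper's SMC solver, and a toy scalar summary standing in for the PCA-identified coherent-structure descriptor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ABC rejection sampler: keep prior draws whose simulated summary
# statistic lies close to the observed one. The model and all values
# are illustrative, not the paper's flow model.
def simulate_summary(theta):
    """Surrogate for 'run model, extract coherent-structure summary'."""
    return theta ** 2 + rng.normal(0, 0.1)

observed_summary = 4.0                      # e.g., PCA score of observed pattern
prior_draws = rng.uniform(0.0, 5.0, 20000)  # flat prior on the parameter
accepted = [th for th in prior_draws
            if abs(simulate_summary(th) - observed_summary) < 0.2]
print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} accepted draws")
```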

  9. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. So, clinical laboratories must provide results as accurately as possible. Measurement uncertainty can allow ensuring reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (utilizing a certified reference material). Next, we combined them together with the uncertainties related to the calibrators-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This fact would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
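
    The top-down combination itself is a root-sum-of-squares of the component uncertainties followed by multiplication with a coverage factor. A minimal sketch with assumed relative standard uncertainties (the values are illustrative, not the study's data):

```python
import math

# Assumed relative standard uncertainties, in % (illustrative only).
u_imprecision = 4.5   # long-term internal QC (intermediate imprecision)
u_bias        = 2.8   # from a certified reference material
u_calibrator  = 2.0   # calibrator-assigned values

# Combine in quadrature, then expand with coverage factor k = 2 (~95%).
u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_calibrator**2)
U_expanded = 2.0 * u_combined
print(f"combined u = {u_combined:.1f}%, expanded U = {U_expanded:.1f}%")
```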

  10. An evolutionary approach to real-time moment magnitude estimation via inversion of displacement spectra

    Science.gov (United States)

    Caprio, M.; Lancieri, M.; Cua, G. B.; Zollo, A.; Wiemer, S.

    2011-01-01

    We present an evolutionary approach for magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectra constrained by available data. The method consists of two components: 1) estimating seismic moment by finding the low-frequency plateau Ω0, the corner frequency fc, and the attenuation factor (Q) that best fit the observed displacement spectra assuming a Brune ω² model, and 2) estimating magnitude and its uncertainty based on the estimate of seismic moment. A novel characteristic of this method is that it does not rely on empirically derived relationships, but rather involves direct estimation of quantities related to the moment magnitude. SI magnitude and uncertainty estimates are updated each second following the initial P detection. We tested the SI approach on broadband and strong motion waveform data from 158 Southern California events and 25 Japanese events, for a combined magnitude range of 3 ≤ M ≤ 7. Based on the performance evaluated on this dataset, the SI approach can potentially provide stable estimates of magnitude within 10 seconds from the initial earthquake detection.
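
    A minimal sketch of the spectral inversion step, fitting Ω0, fc and Q of a Brune-type spectrum to a synthetic displacement spectrum; the travel time and the moment-to-plateau constant are placeholder assumptions, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
travel_time = 10.0  # assumed travel time in the attenuation term, s

def log_brune(f, omega0, fc, q):
    """Log of a Brune omega-squared displacement spectrum with attenuation."""
    return np.log(omega0 / (1 + (f / fc) ** 2)) - np.pi * f * travel_time / q

# Synthetic observed displacement spectrum (illustrative values only).
f = np.linspace(0.2, 20.0, 200)
log_obs = log_brune(f, 2e-4, 1.5, 300.0) + rng.normal(0.0, 0.1, f.size)

(omega0, fc, q), _ = curve_fit(log_brune, f, log_obs, p0=[1e-4, 1.0, 200.0],
                               bounds=([1e-6, 0.1, 50.0], [1e-2, 10.0, 2000.0]))

# Moment from the plateau; the constant lumps density, velocity, radiation
# pattern, and distance, and is a placeholder assumption here.
m0 = 1e17 * omega0
mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)
print(f"fc = {fc:.2f} Hz, Q = {q:.0f}, Mw ~ {mw:.1f}")
```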

  11. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterize extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the

  12. ANN Based Approach for Estimation of Construction Costs of Sports Fields

    Directory of Open Access Journals (Sweden)

    Michał Juszczyk

    2018-01-01

    Full Text Available Cost estimates are essential for the success of construction projects. Neural networks, as tools of artificial intelligence, offer significant potential in this field. Applying neural networks, however, requires dedicated studies due to the specifics of different kinds of facilities. This paper presents a proposed approach to the estimation of construction costs of sports fields based on neural networks. The general applicability of artificial neural networks to the formulated cost estimation problem is investigated. The applicability of multilayer perceptron networks is confirmed by the results of the initial training of a set of various artificial neural networks. Moreover, one network was tailored for mapping the relationship between the total cost of construction works and selected cost predictors that are characteristic of sports fields. Its prediction quality and accuracy were assessed positively. The research results legitimize the proposed approach.

  13. The Massachusetts Sustainable-Yield Estimator: A decision-support tool to assess water availability at ungaged stream locations in Massachusetts

    Science.gov (United States)

    Archfield, Stacey A.; Vogel, Richard M.; Steeves, Peter A.; Brandt, Sara L.; Weiskel, Peter K.; Garabedian, Stephen P.

    2010-01-01

    Federal, State and local water-resource managers require a variety of data and modeling tools to better understand water resources. The U.S. Geological Survey, in cooperation with the Massachusetts Department of Environmental Protection, has developed a statewide, interactive decision-support tool to meet this need. The decision-support tool, referred to as the Massachusetts Sustainable-Yield Estimator (MA SYE) provides screening-level estimates of the sustainable yield of a basin, defined as the difference between the unregulated streamflow and some user-specified quantity of water that must remain in the stream to support such functions as recreational activities or aquatic habitat. The MA SYE tool was designed, in part, because the quantity of surface water available in a basin is a time-varying quantity subject to competing demands for water. To compute sustainable yield, the MA SYE tool estimates a daily time series of unregulated, daily mean streamflow for a 44-year period of record spanning October 1, 1960, through September 30, 2004. Selected streamflow quantiles from an unregulated, daily flow-duration curve are estimated by solving six regression equations that are a function of physical and climate basin characteristics at an ungaged site on a stream of interest. Streamflow is then interpolated between the estimated quantiles to obtain a continuous daily flow-duration curve. A time series of unregulated daily streamflow subsequently is created by transferring the timing of the daily streamflow at a reference streamgage to the ungaged site by equating exceedence probabilities of contemporaneous flow at the two locations. One of 66 reference streamgages is selected by kriging, a geostatistical method, which is used to map the spatial relation among correlations between the time series of the logarithm of daily streamflows at each reference streamgage and the ungaged site. Estimated unregulated, daily mean streamflows show good agreement with observed
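
    The streamflow-transfer step (equating exceedence probabilities between the reference gage and the ungaged site) can be sketched compactly: each day's flow at the reference gage is converted to an exceedence probability and read off the ungaged site's estimated flow-duration curve. All numbers below are illustrative, not MA SYE output.

```python
import numpy as np

# Estimated flow-duration curve at the ungaged site (stand-in for the
# regression-equation output; quantiles and flows are illustrative only).
probs = np.array([0.01, 0.10, 0.30, 0.50, 0.70, 0.90, 0.99])    # exceedence
fdc_ungaged = np.array([90.0, 40.0, 18.0, 10.0, 6.0, 2.5, 0.8])  # cfs

# Daily flows at the reference streamgage and their empirical exceedence
# probabilities (simple plotting-position estimate).
ref_flows = np.array([12.0, 30.0, 7.5, 22.0, 4.0])
p_daily = np.array([(ref_flows > q).mean() + 0.5 / ref_flows.size
                    for q in ref_flows])

# Equate exceedence probabilities: read the ungaged flow off its FDC,
# interpolating in log space.
est = np.exp(np.interp(p_daily, probs, np.log(fdc_ungaged)))
print(np.round(est, 1))  # estimated unregulated daily flows at the ungaged site
```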

  14. Estimating construction and demolition debris generation using a materials flow analysis approach.

    Science.gov (United States)

    Cochran, K M; Townsend, T G

    2010-11-01

    The magnitude and composition of a region's construction and demolition (C&D) debris should be understood when developing rules, policies and strategies for managing this segment of the solid waste stream. In the US, several national estimates have been conducted using a weight-per-construction-area approximation; national estimates using alternative procedures such as those used for other segments of the solid waste stream have not been reported for C&D debris. This paper presents an evaluation of a materials flow analysis (MFA) approach for estimating C&D debris generation and composition for a large region (the US). The consumption of construction materials in the US and typical waste factors used for construction materials purchasing were used to estimate the mass of solid waste generated as a result of construction activities. Debris from demolition activities was predicted from various historical construction materials consumption data and estimates of average service lives of the materials. The MFA approach estimated that approximately 610-780 × 10(6) Mg of C&D debris was generated in 2002. This predicted mass exceeds previous estimates using other C&D debris predictive methodologies and reflects the large waste stream that exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
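
    A minimal sketch of the materials-flow bookkeeping, with placeholder materials, waste factors, service lives, and consumption figures (none of these are the paper's data):

```python
# Construction debris from purchasing waste factors; demolition debris from
# consumption lagged by the material's service life. All figures are
# illustrative placeholders.
waste_factor = {"concrete": 0.05, "wood": 0.10, "drywall": 0.12}
consumption_2002 = {"concrete": 400e6, "wood": 90e6, "drywall": 25e6}   # Mg
consumption_lagged = {"concrete": 120e6, "wood": 60e6, "drywall": 8e6}  # Mg, at year 2002 minus service life

construction = sum(waste_factor[m] * consumption_2002[m] for m in waste_factor)
demolition = sum(consumption_lagged.values())  # assumes full turnover at end of life
total = construction + demolition
print(f"C&D debris estimate: {total / 1e6:.0f} x 10^6 Mg")
```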

  15. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.

    Science.gov (United States)

    Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T

    2013-07-02

    Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R(2) values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, cross-validated R(2) were 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.

  16. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    López-Valcarce Roberto

    2004-01-01

    Full Text Available We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and is therefore well suited to DSP implementation, performing well with preliminary field data.

  17. Store Location in Shopping Centers: Theory & Estimates

    OpenAIRE

    Kerry D. Vandell; Charles C. Carter

    2000-01-01

    This paper develops a formal theory of store location within shopping centers based on bid rent theory. The bid rent model is fully specified and solved with the objective function of profit maximization in the presence of comparative, multipurpose and impulse shopping behavior. Several hypotheses result about the optimal relationships between store types, sizes, rents, sales, and distances from the mall center. The hypotheses are tested and confirmed using data from a sample of 689 leases in ei...

  18. Best estimate LB LOCA approach based on advanced thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Sauvage, J.Y.; Gandrille, J.L.; Gaurrand, M.; Rochwerger, D.; Thibaudeau, J.; Viloteau, E.

    2004-01-01

    Improvements achieved in thermal-hydraulics with the development of Best Estimate computer codes have led a number of Safety Authorities to recommend realistic analyses instead of conservative calculations. The potential of a Best Estimate approach for the analysis of LOCAs urged FRAMATOME to enter early into the development, with CEA and EDF, of the 2nd generation code CATHARE, and then of a LBLOCA BE methodology with BWNT following the Code Scaling, Applicability and Uncertainty (CSAU) procedure. CATHARE and TRAC are the basic tools for LOCA studies which will be performed by FRAMATOME according to either a deterministic better estimate (dbe) methodology or a Statistical Best Estimate (SBE) methodology. (author)

  19. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracies in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
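
    A minimal sketch of such a model comparison using scikit-learn, with synthetic features standing in for the MODIS/PRISM/GLDAS predictors; no claim is made about the study's actual feature set, architectures, or tuning.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for the feature table: rows are county-years, columns
# are remote-sensing and climate predictors (illustrative only).
X = rng.normal(size=(300, 6))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5]) + rng.normal(0, 0.5, 300)

models = {
    "SVM": SVR(kernel="rbf", C=10.0),
    "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```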

  20. A multiobjective modeling approach to locate multi-compartment containers for urban-sorted waste

    International Nuclear Information System (INIS)

    Tralhao, Lino; Coutinho-Rodrigues, Joao; Alcada-Almeida, Luis

    2010-01-01

    The location of multi-compartment sorted waste containers for recycling purposes in cities is an important problem in the context of urban waste management. The costs associated with those facilities and the impacts placed on populations are important concerns. This paper introduces a mixed-integer, multiobjective programming approach to identify the locations and capacities of such facilities. The approach incorporates an optimization model in a Geographical Information System (GIS)-based interactive decision support system that includes four objectives. The first objective minimizes the total investment cost; the second one minimizes the average distance from dwellings to the respective multi-compartment container; the last two objectives address the 'pull' and 'push' characteristics of the decision problem, one by minimizing the number of individuals too close to any container, and the other by minimizing the number of dwellings too far from the respective multi-compartment container. The model determines the number of facilities to be opened, the respective container capacities, their locations, their respective shares of the total waste of each type to be collected, and the dwellings assigned to each facility. The approach proposed was tested with a case study for the historical center of Coimbra city, Portugal, where a large urban renovation project, addressing about 800 buildings, is being undertaken. This paper demonstrates that the models and techniques incorporated in the interactive decision support system (IDSS) can be used to assist a decision maker (DM) in analyzing this complex problem in a realistically sized urban application. Ten solutions consisting of different combinations of underground containers for the disposal of four types of sorted waste in 12 candidate sites, were generated. These solutions and tradeoffs among the objectives are presented to the DM via tables, graphs, color-coded maps and other graphics. The DM can then use this

  1. Radiographic Estimation of the Location and Size of kidneys in ...

    African Journals Online (AJOL)

    Keywords: Radiography, Location, Kidney size, Local dogs. The kidneys of dogs and cats are located retroperitoneally (Bjorling, 1993). Visualization of the kidneys on radiographs is possible due to the contrast provided by the perirenal fat (Grandage, 1975). However, this perirenal fat rarely covers the ventral surface of the ...

  2. Estimating absolute configurational entropies of macromolecules: the minimally coupled subspace approach.

    Directory of Open Access Journals (Sweden)

    Ulf Hensen

    Full Text Available We develop a general minimally coupled subspace approach (MCSA to compute absolute entropies of macromolecules, such as proteins, from computer generated canonical ensembles. Our approach overcomes limitations of current estimates such as the quasi-harmonic approximation which neglects non-linear and higher-order correlations as well as multi-minima characteristics of protein energy landscapes. Here, Full Correlation Analysis, adaptive kernel density estimation, and mutual information expansions are combined and high accuracy is demonstrated for a number of test systems ranging from alkanes to a 14 residue peptide. We further computed the configurational entropy for the full 67-residue cofactor of the TATA box binding protein illustrating that MCSA yields improved results also for large macromolecular systems.
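
    One ingredient, the kernel-density entropy estimate for a single degree of freedom, can be sketched as below; MCSA additionally couples such terms through mutual-information corrections, which this toy example omits.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Resubstitution entropy estimate: fit a kernel density to samples of one
# (marginal) coordinate, then average -log density over the samples.
samples = rng.normal(0.0, 1.0, 5000)
kde = gaussian_kde(samples)
h_est = -np.mean(np.log(kde(samples)))
h_true = 0.5 * np.log(2 * np.pi * np.e)   # analytic entropy of N(0,1), nats
print(f"KDE entropy = {h_est:.3f} nats (analytic {h_true:.3f})")
```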

  3. Health-related quality of life among adults 65 years and older in the United States, 2011-2012: a multilevel small area estimation approach.

    Science.gov (United States)

    Lin, Yu-Hsiu; McLain, Alexander C; Probst, Janice C; Bennett, Kevin J; Qureshi, Zaina P; Eberth, Jan M

    2017-01-01

    The purpose of this study was to develop county-level estimates of poor health-related quality of life (HRQOL) among U.S. adults aged 65 years and older and to identify spatial clusters of poor HRQOL using a multilevel, poststratification approach. Multilevel, random-intercept models were fit to HRQOL data (two domains: physical health and mental health) from the 2011-2012 Behavioral Risk Factor Surveillance System. Using a poststratification, small area estimation approach, we generated county-level probabilities of having poor HRQOL for each domain in U.S. adults aged 65 and older, and validated our model-based estimates against state and county direct estimates. County-level estimates of poor HRQOL in the United States ranged from 18.07% to 44.81% for physical health and 14.77% to 37.86% for mental health. Correlations between model-based and direct estimates were higher for physical than mental HRQOL. Counties located in Arkansas, Kentucky, and Mississippi exhibited the worst physical HRQOL scores, but this pattern did not hold for mental HRQOL, which had the highest probability of mentally unhealthy days in Illinois, Indiana, and Vermont. Substantial geographic variation in physical and mental HRQOL scores exists among older U.S. adults. State and local policy makers should consider these local conditions in targeting interventions and policies to counties with high levels of poor HRQOL scores. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. A New Approach to Image-Based Estimation of Food Volume

    Directory of Open Access Journals (Sweden)

    Hamid Hassannejad

    2017-06-01

    Full Text Available A balanced diet is the key to a healthy lifestyle and is crucial for preventing or dealing with many chronic diseases such as diabetes and obesity. Therefore, monitoring diet can be an effective way of improving people’s health. However, manual reporting of food intake has been shown to be inaccurate and often impractical. This paper presents a new approach to food intake quantity estimation using image-based modeling. The modeling method consists of three steps: firstly, a short video of the food is taken by the user’s smartphone. From such a video, six frames are selected based on the pictures’ viewpoints as determined by the smartphone’s orientation sensors. Secondly, the user marks one of the frames to seed an interactive segmentation algorithm. Segmentation is based on a Gaussian Mixture Model alongside the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point-cloud to model the food. At the same time, a stochastic object-detection method locates a checkerboard used as size/ground reference. The modeling algorithm is optimized such that the use of six input images still results in an acceptable computation cost. In our evaluation procedure, we achieved an average accuracy of 92% on a test set that includes images of different kinds of pasta and bread, with an average processing time of about 23 s.

  5. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    Full Text Available To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
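
    A minimal sketch of a GEE fit of this kind with statsmodels, using synthetic litter-size data with a hypothetical QTL genotype indicator and parity effect; an exchangeable working correlation stands in for the paper's random animal effects, and a Pearson-chi-square scale absorbs under/over-dispersion.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic backcross-style data: repeated litters per animal, with a
# candidate-locus genotype indicator and parity (values are illustrative).
n_animals, litters = 60, 3
df = pd.DataFrame({
    "animal": np.repeat(np.arange(n_animals), litters),
    "parity": np.tile(np.arange(1, litters + 1), n_animals),
    "qtl": np.repeat(rng.integers(0, 2, n_animals), litters),
})
mu = np.exp(1.8 + 0.25 * df["qtl"] + 0.05 * df["parity"])
df["litter_size"] = rng.poisson(mu)

# GEE with a Poisson mean model and exchangeable within-animal correlation.
model = sm.GEE.from_formula("litter_size ~ qtl + parity", groups="animal",
                            data=df, family=sm.families.Poisson(),
                            cov_struct=sm.cov_struct.Exchangeable())
result = model.fit(scale="X2")   # estimated scale handles dispersion
print(result.summary())
```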

  6. Efficient method for location and detection of partial discharge in transformer oil by DOA estimation of circular array of ultrasonic sensors

    Science.gov (United States)

    Saravanakumar, N.; Sathiyasekar, K.

    2018-01-01

    The electrical insulation failures in oil transformers mainly occur due to inaccurately located partial discharge (PD) sources. In order to eliminate the insulation defects and to locate the PD sources accurately, a new approach based on a circular array of ultrasonic sensors (CAUS) with several analysis stages is proposed. First, the PD signal from the CAUS is de-noised using the fast independent component analysis (Fast ICA) algorithm. Second, the wideband signal from the CAUS is converted into a narrowband signal using the total least squares (TLS) algorithm. Third, the sparse representation of array covariance vectors (SRACV) technique is utilized to estimate DOAs (directions of arrival) in three directions from the PD to the CAUS. Finally, the PD sources are located using the pitch and azimuth angles of those three DOAs, and the exact coordinates in the three planes are calculated using the particle swarm optimization algorithm. The simulation results demonstrate the effectiveness of the proposed approach in terms of PD location in transformer oil.

  7. How to detect the location and time of a covert chemical attack a Bayesian approach

    OpenAIRE

    See, Mei Eng Elaine.

    2009-01-01

    Approved for public release; distribution unlimited. In this thesis, we develop a Bayesian updating model that estimates the location and time of a chemical attack using inputs from chemical sensors and Atmospheric Threat and Dispersion (ATD) models. In bridging the critical gap between raw sensor data and threat evaluation and prediction, the model will help authorities perform better hazard prediction and damage control. The model is evaluated with respect to settings representing real-wo...
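
    A minimal sketch of the Bayesian updating idea, with a toy distance-decay surrogate in place of the ATD model and a grid of candidate release sites; all names and values are illustrative, not the thesis's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of candidate (x, y) release sites with a uniform prior.
grid = [(x, y) for x in range(10) for y in range(10)]
prior = np.full(len(grid), 1.0 / len(grid))
sensors = [(2.0, 3.0), (7.0, 8.0)]
true_src = (4, 5)

def predicted(src, sensor):
    """Surrogate ATD prediction: concentration decays with distance."""
    d2 = (src[0] - sensor[0]) ** 2 + (src[1] - sensor[1]) ** 2
    return np.exp(-d2 / 20.0)

readings = [predicted(true_src, s) + rng.normal(0, 0.02) for s in sensors]

# Bayes update: multiply the prior by a Gaussian sensor likelihood and
# renormalize after each reading.
posterior = prior.copy()
for s, r in zip(sensors, readings):
    like = np.array([np.exp(-(r - predicted(g, s)) ** 2 / (2 * 0.02 ** 2))
                     for g in grid])
    posterior *= like
    posterior /= posterior.sum()

print("MAP source estimate:", grid[int(np.argmax(posterior))])
```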

  8. An application of time-frequency signal analysis technique to estimate the location of an impact source on a plate type structure

    International Nuclear Information System (INIS)

    Park, Jin Ho; Lee, Jeong Han; Choi, Young Chul; Kim, Chan Joong; Seong, Poong Hyun

    2005-01-01

    This study reviews whether time-frequency signal analysis techniques are suitable for estimating the location of an impact source on a plate-type structure. The STFT (Short Time Fourier Transform), WVD (Wigner-Ville Distribution) and CWT (Continuous Wavelet Transform) methods are introduced, and the advantages and disadvantages of these methods are described using a simulated signal component. The essence of the proposed techniques is to separate the traveling waves in both the time and frequency domains using the dispersion characteristics of the structural waves. These time-frequency methods are expected to be more useful than conventional time-domain analyses for the impact localization problem on a plate-type structure. It has also been concluded that the smoothed WVD provides a more reliable means of location estimation in a noisy environment than the other methodologies.

  9. A Honey Bee Foraging approach for optimal location of a biomass power plant

    Energy Technology Data Exchange (ETDEWEB)

    Vera, David; Jurado, Francisco [Dept. of Electrical Engineering, University of Jaen, 23700 EPS Linares, Jaen (Spain); Carabias, Julio; Ruiz-Reyes, Nicolas [Dept. of Telecommunication Engineering, University of Jaen, 23700 EPS Linares, Jaen (Spain)

    2010-07-15

    Over eight million hectares of olive trees are cultivated worldwide, especially in Mediterranean countries, where more than 97% of the world's olive oil is produced. The three major olive oil producers worldwide are Spain, Italy, and Greece. Olive tree pruning residues are an autochthonous and important renewable source that, in most cases, farmers burn in an uncontrolled manner. Besides, industrial uses have not yet been developed. This paper presents a new calculation tool based on swarm intelligence (Binary Honey Bee Foraging, BHBF). Effectively, this approach makes it possible to determine the optimal location, biomass supply area, and power plant size that offer the best profitability for the investor. Moreover, it avoids the accurate enumeration method (not feasible from a computational viewpoint). In this work, the Profitability Index (PI) is set as the fitness function for the BHBF approach. Results are compared with other evolutionary optimization algorithms such as Binary Particle Swarm Optimization (BPSO) and Genetic Algorithms (GA). All the experiments have shown that the optimal plant size is 2 MW, PI = 3.3122, the best location corresponds to coordinates X = 49, Y = 97, and the biomass supply area is 161.33 km{sup 2}. Simulation times have been reduced to one ninth of that of the greedy (accurate) solution. Matlab® is used to run all simulations. (author)

  10. GPS-based microenvironment tracker (MicroTrac) model to estimate time-location of individuals for air pollution exposure assessments: model evaluation in central North Carolina.

    Science.gov (United States)

    Breen, Michael S; Long, Thomas C; Schultz, Bradley D; Crooks, James; Breen, Miyuki; Langstaff, John E; Isaacs, Kristin K; Tan, Yu-Mei; Williams, Ronald W; Cao, Ye; Geller, Andrew M; Devlin, Robert B; Batterman, Stuart A; Buckley, Timothy J

    2014-07-01

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time-location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies.
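
    A minimal sketch of the core classification step follows, assuming hypothetical building boundaries and a made-up speed cut-off; the published model additionally uses dwell-time rules and distinguishes indoor from outdoor at each location.

```python
import numpy as np
from matplotlib.path import Path

# Classify GPS fixes into microenvironments by testing them against
# geocoded building boundaries (point-in-polygon) plus a speed rule.
boundaries = {
    "home": Path([(0, 0), (0, 10), (10, 10), (10, 0)]),
    "work": Path([(50, 50), (50, 70), (80, 70), (80, 50)]),
}

def classify(fix, speed_m_s):
    if speed_m_s > 2.5:                    # assumed cut-off for in-vehicle travel
        return "in-vehicle"
    for name, poly in boundaries.items():
        if poly.contains_point(fix):
            return name
    return "other"

track = [((3, 4), 0.1), ((60, 60), 0.3), ((120, 5), 15.0)]
for fix, v in track:
    print(fix, "->", classify(fix, v))
```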

  11. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    Science.gov (United States)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
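
    The following sketch illustrates the heuristic with illustrative parameters: a lognormal PDF stands in for the pre-entrainment PSD and a threshold particle size splits it into entrained and non-entrained fractions.

```python
import numpy as np
from scipy.stats import lognorm

# Fit/assume a lognormal pre-entrainment PSD and split it at a threshold
# particle size into entrained and non-entrained fractions.
mu, sigma = np.log(2.0), 0.6           # ln(d) ~ N(mu, sigma^2), d in mm (illustrative)
bed = lognorm(s=sigma, scale=np.exp(mu))

d_thr = 2.5                            # assumed threshold size of incipient motion (mm)
p_mobile = bed.cdf(d_thr)              # fraction finer than threshold -> entrained

d = np.linspace(0.1, 10, 500)
pdf_entrained = np.where(d < d_thr, bed.pdf(d) / p_mobile, 0.0)        # renormalised
pdf_bed_after = np.where(d >= d_thr, bed.pdf(d) / (1 - p_mobile), 0.0)

mean_entrained = np.trapz(d * pdf_entrained, d)
print(f"fraction of bed mobilised: {p_mobile:.2f}")
print(f"mean size of entrained sediment: {mean_entrained:.2f} mm")
```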

  12. An alternative subspace approach to EEG dipole source localization

    Science.gov (United States)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
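
    For orientation, the sketch below runs classic MUSIC on a synthetic uniform linear array; it shows the noise-subspace projection that FINES refines by projecting onto a small set of vectors rather than the entire noise-only subspace. Geometry, source angles and noise levels are assumptions, and this is not the FINES algorithm itself.

```python
import numpy as np
from scipy.signal import find_peaks

# Classic MUSIC on a uniform linear array: project steering vectors onto
# the estimated noise-only subspace and scan for pseudospectrum peaks.
rng = np.random.default_rng(0)
M, N, d = 8, 400, 0.5                        # sensors, snapshots, spacing (wavelengths)
angles_true = np.deg2rad([-20.0, 25.0])

def steer(theta):                            # array manifold vector
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steer(a) for a in angles_true])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                       # sample covariance
_, V = np.linalg.eigh(R)                     # eigenvalues ascending
En = V[:, : M - 2]                           # noise-only subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(th)) ** 2 for th in grid])
peaks, _ = find_peaks(p)
top2 = peaks[np.argsort(p[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top2])))
```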

  13. An alternative subspace approach to EEG dipole source localization

    International Nuclear Information System (INIS)

    Xu Xiaoliang; Xu, Bobby; He Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist

  14. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    Science.gov (United States)

    Huang, Sheng-Juan; Yang, Guang-Hong

    2017-09-01

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
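
    A minimal numerical sketch of the minimum norm least squares idea follows; the fault-to-residual map H and the signals are illustrative stand-ins for the compensator matrices derived in the paper.

```python
import numpy as np

# Among all f minimising ||r - H f||, the pseudo-inverse returns the one
# of smallest Euclidean norm -- the minimum norm least squares solution.
rng = np.random.default_rng(1)
H = rng.standard_normal((6, 4))
H[:, 3] = H[:, 2]                        # make the map rank-deficient on purpose
f_true = np.array([0.5, 0.0, 1.0, 1.0])
r = H @ f_true + 0.01 * rng.standard_normal(6)   # residual signal

f_hat = np.linalg.pinv(H) @ r            # MNLSS estimate
print("MNLSS fault estimate:", np.round(f_hat, 3))
```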

  15. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  16. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    Science.gov (United States)

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

    Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.

  17. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people that enables them to carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to the estimation of an employee's professionalism, both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from the standpoint of the personal characteristics of the employee that determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as of the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to the estimation of an employee's professionalism makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  18. Estimating productivity costs using the friction cost approach in practice: a systematic review.

    Science.gov (United States)

    Kigozi, Jesse; Jowett, Sue; Lewis, Martyn; Barton, Pelham; Coast, Joanna

    2016-01-01

    The choice of the most appropriate approach to valuing productivity loss has received much debate in the literature. The friction cost approach has been proposed as a more appropriate alternative to the human capital approach when valuing productivity loss, although its application remains limited. This study reviews application of the friction cost approach in health economic studies and examines how its use varies in practice across different country settings. A systematic review was performed to identify economic evaluation studies that have estimated productivity costs using the friction cost approach and published in English from 1996 to 2013. A standard template was developed and used to extract information from studies meeting the inclusion criteria. The search yielded 46 studies from 12 countries. Of these, 28 were from the Netherlands. Thirty-five studies reported the length of friction period used, with only 16 stating explicitly the source of the friction period. Nine studies reported the elasticity correction factor used. The reported friction cost approach methods used to derive productivity costs varied in quality across studies from different countries. Few health economic studies have estimated productivity costs using the friction cost approach. The estimation and reporting of productivity costs using this method appears to differ in quality by country. The review reveals gaps and lack of clarity in reporting of methods for friction cost evaluation. Generating reporting guidelines and country-specific parameters for the friction cost approach is recommended if increased application and accuracy of the method is to be realized.

  19. A catalytic approach to estimate the redox potential of heme-peroxidases

    International Nuclear Information System (INIS)

    Ayala, Marcela; Roman, Rosa; Vazquez-Duhalt, Rafael

    2007-01-01

    The redox potential of heme-peroxidases varies according to a combination of structural components within the active site and its vicinities. For each peroxidase, this redox potential imposes a thermodynamic threshold to the range of oxidizable substrates. However, the instability of enzymatic intermediates during the catalytic cycle precludes the use of direct voltammetry to measure the redox potential of most peroxidases. Here we describe a novel approach to estimate the redox potential of peroxidases, which directly depends on the catalytic performance of the activated enzyme. Selected p-substituted phenols are used as substrates for the estimations. The results obtained with this catalytic approach correlate well with the oxidative capacity predicted by the redox potential of the Fe(III)/Fe(II) couple

  20. International taxation and multinational firm location decisions

    OpenAIRE

    Barrios Cobos, Salvador; Huizinga, Harry; Laeven, Luc; Nicodème, Gaëtan J.A.

    2008-01-01

    Using a large international firm-level data set, we estimate separate effects of host and parent country taxation on the location decisions of multinational firms. Both types of taxation are estimated to have a negative impact on the location of new foreign subsidiaries. In fact, the impact of parent country taxation is estimated to be relatively large, possibly reflecting its international discriminatory nature. For the cross-section of multinational firms, we find that parent firms tend to ...

  1. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating technical efficiency of decision making units are of great importance in applied production economic works. This paper estimates technical efficiency from the stochastic frontier model using Jondrow, and Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  2. Estimating the location and shape of hybrid zones

    DEFF Research Database (Denmark)

    Guedj, Benjamin; Guillot, Gilles

    2011-01-01

    We propose a new model to make use of georeferenced genetic data for inferring the location and shape of a hybrid zone. The model output includes the posterior distribution of a parameter that quantifies the width of the hybrid zone. The model proposed is implemented in the GUI and command‐line v...

  3. Location-quality-aware policy optimisation for relay selection in mobile networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2016-01-01

    …for resulting performance of such network optimizations. In mobile scenarios, the required information collection and forwarding causes delays that will additionally affect the reliability of the collected information and hence will influence the performance of the relay selection method. This paper analyzes the joint influence of these two factors in the decision process for the example of a mobile location-based relay selection approach using a continuous time Markov chain model. Efficient algorithms are developed based on this model to obtain optimal relay policies under consideration of localization errors. Numerical results show how information update rates, forwarding delays, and location estimation errors affect these optimal policies and allow one to conclude on the required accuracy of location-based systems for such mobile relay selection scenarios. A measurement-based indoor scenario with more complex…

  4. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  5. Location as a determinant of accommodation prices: managerial approach

    OpenAIRE

    Napierała, Tomasz; Leśniewska, Katarzyna

    2014-01-01

    In the presentation, the authors discuss the impact of location-based factors on accommodation prices. The aim of the presentation is to compare the results of qualitative and quantitative research on location-based determinants of accommodation prices in the Lodz Metropolitan Area (Poland). The authors employ methodological triangulation (Yeung 2000), both to explore the statistical significance of location-based determinants of accommodation prices and to present managerial opinions about the influence o...

  6. A GIS approach for predicting prehistoric site locations.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. A.; Wescott, K. L.

    1999-08-04

    Use of geographic information system (GIS)-based predictive mapping to locate areas of high potential for prehistoric archaeological sites is becoming increasingly popular among archaeologists. Knowledge of the environmental variables influencing activities of original inhabitants is used to produce GIS layers representing the spatial distribution of those variables. The GIS layers are then analyzed to identify locations where combinations of environmental variables match patterns observed at known prehistoric sites. Presented are the results of a study to locate high-potential areas for prehistoric sites in a largely unsurveyed area of 39,000 acres in the Upper Chesapeake Bay region, including details of the analysis process. The project used environmental data from over 500 known sites in other parts of the region and the results corresponded well with known sites in the study area.

  7. Anatomy guided automated SPECT renal seed point estimation

    Science.gov (United States)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point locations of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is based on the premise that the anatomical location of the bladder relative to the kidneys does not differ much. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is followed by feeding the bladder centroid coordinates into the model to generate seed points for the kidneys. The percentage errors observed in the centroid coordinates of organs from ground truth to the values estimated by our approach are acceptable. Percentage errors of approximately 1%, 6% and 2% are observed in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
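
    The anatomy-guided idea can be sketched as a regression from the bladder centroid to a kidney centroid; the synthetic centroids below stand in for the 10 manually segmented patient datasets.

```python
import numpy as np

# Learn a linear map from the bladder centroid to the left-kidney centroid
# over training studies, then predict a kidney seed point for a new study.
rng = np.random.default_rng(2)
bladder = rng.uniform(60, 70, (10, 2))               # (x, y) centroid per study
offset_lk = np.array([-25.0, -40.0])                 # assumed typical offset
left_kidney = bladder + offset_lk + rng.normal(0, 1.5, (10, 2))

# least-squares fit of left_kidney ~ [bladder, 1]
Xd = np.hstack([bladder, np.ones((10, 1))])
W, *_ = np.linalg.lstsq(Xd, left_kidney, rcond=None)

new_bladder = np.array([64.0, 66.0])
seed = np.hstack([new_bladder, 1.0]) @ W
print("predicted left-kidney seed point:", np.round(seed, 1))
```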

  8. An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems

    International Nuclear Information System (INIS)

    Fazli, Roohallah; Nakhkash, Mansor

    2012-01-01

    This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. The method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations; in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations correspond to actual ones. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)

  9. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio communication. … The estimation performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error
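
    A minimal sketch of the ML estimation under an idealized error model follows: each of N tags is detected per session with a fixed probability p, so session counts are Binomial(N, p) and N is chosen to maximize the likelihood. The counts and p are invented, and the paper's model of detection errors is richer.

```python
import numpy as np
from scipy.stats import binom

# ML estimate of the tag set cardinality N from per-session detection counts,
# assuming each tag is detected independently with probability p per session.
p = 0.8                                  # assumed per-session detection probability
counts = np.array([41, 44, 39, 42])      # tags seen in each independent session

N_grid = np.arange(counts.max(), 120)    # N cannot be below the largest count
loglik = [binom.logpmf(counts, N, p).sum() for N in N_grid]
N_hat = N_grid[int(np.argmax(loglik))]
print("ML estimate of tag set cardinality:", N_hat)
```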

  10. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and alternatives

    Science.gov (United States)

    William L. Thompson

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...

  11. Assessing the benefits of the integration of location information in e-Government

    Science.gov (United States)

    Vandenbroucke, D.; Vancauwenberghe, G.; Crompvoets, J.

    2014-12-01

    Over the past years more and more geospatial data have been made readily accessible for different user communities as part of government efforts to set-up Spatial Data Infrastructures. As a result users from different sectors can search, find and bind spatial information and combine it with their own data resources and applications. However, too often, spatial data applications and services remain organised as separate silos, not well integrated in the business processes they are supposed to support. The European Union Location Framework (EULF), as part of the Interoperability Solutions for European Public Administrations (ISA) Programme of the EU (EC-DG DIGIT), aims to improve the integration of location information in e-Government processes through a better policy and strategy alignment, and through the improved legal, organisational, semantic and technical interoperability of data and systems. The EULF seeks to enhance interactions between Governments, Businesses and Citizens with location information and location enabled services and to make them part of the more generic ICT infrastructures of public administrations. One of the challenges that arise in this context is to describe, estimate or measure the benefits and added value of this integration of location information in e-Government. In the context of the EULF several existing approaches to assess the benefits of spatially enabled services and applications in e-Government have been studied. Two examples will be presented, one from Denmark, the other from Abu Dhabi. Both served as input to the approach developed for the EULF. A concrete case to estimate benefits at service and process level will be given, with the aim of answering questions such as "which indicators can be used and how to measure them", "how can process owners collect the necessary information", "how to solve the benefits attribution question" and "how to extrapolate findings from one level of analysis to another".

  12. Current expertise location by exploiting the dynamics of knowledge

    Directory of Open Access Journals (Sweden)

    Josef Nozicka

    2012-10-01

    Full Text Available Systems for expertise location are either very expensive in terms of maintenance costs or tend to become obsolete or incomplete over time. This article presents a new approach to knowledge mapping/expertise location that reduces the costs of knowledge mapping while maintaining the accuracy of the knowledge map. The efficiency of the knowledge map is achieved by introducing knowledge estimation measures that analyse the dynamics of the knowledge of company employees and the textual results of their work. Finding an expert with the most up-to-date knowledge is supported by focusing the analysis on publishing history. The efficiency of the proposed measures within various timeframes of publishing history is evaluated by an evaluation method introduced in the article. The evaluation took place in a middle-sized software company, allowing the practical usability of the expertise location technique to be observed directly. The results have various implications for the deployment of a knowledge map within a company.

  13. Estimated monthly streamflows for selected locations on the Kabul and Logar Rivers, Aynak copper, cobalt, and chromium area of interest, Afghanistan, 1951-2010

    Science.gov (United States)

    Vining, Kevin C.; Vecchia, Aldo V.

    2014-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, used the stochastic monthly water-balance model and existing climate data to estimate monthly streamflows for 1951–2010 for selected streamgaging stations located within the Aynak copper, cobalt, and chromium area of interest in Afghanistan. The model used physically based, nondeterministic methods to estimate the monthly volumetric water-balance components of a watershed. A comparison of estimated and recorded monthly streamflows for the streamgaging stations Kabul River at Maidan and Kabul River at Tangi-Saidan indicated that the stochastic water-balance model was able to provide satisfactory estimates of monthly streamflows for high-flow months and low-flow months even though withdrawals for irrigation likely occurred. A comparison of estimated and recorded monthly streamflows for the streamgaging stations Logar River at Shekhabad and Logar River at Sangi-Naweshta also indicated that the stochastic water-balance model was able to provide reasonable estimates of monthly streamflows for the high-flow months; however, for the upstream streamgaging station, the model overestimated monthly streamflows during periods when summer irrigation withdrawals likely occurred. Results from the stochastic water-balance model indicate that the model should be able to produce satisfactory estimates of monthly streamflows for locations along the Kabul and Logar Rivers. This information could be used by Afghanistan authorities to make decisions about surface-water resources for the Aynak copper, cobalt, and chromium area of interest.

  14. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  15. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but only applicable to synchronous systems. To overcome this limitation, we propose a technique whereby the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems
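
    The sketch below illustrates the general synchronization-based idea (which this paper extends to non-synchronous systems) on the Lorenz system: a response system driven by the observed x-series is run with a candidate parameter, and the candidate is tuned to minimize the synchronization error. The coupling scheme, integrator and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Identify the Lorenz sigma parameter by minimising the synchronisation
# error of an x-driven response system (Pecora-Carroll style coupling).
dt, n = 0.01, 3000
sigma_true, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz_x_series(sigma):
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n)
    for i in range(n):                       # simple Euler integration
        dx = sigma * (y - x); dy = x * (rho - z) - y; dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

x_obs = lorenz_x_series(sigma_true)          # "measured" time series

def sync_error(sigma):
    # y, z are slaved to the observed x; x_hat reproduces x only if sigma matches
    x_hat, y, z = 1.0, 0.0, 0.0
    err = 0.0
    for i in range(n):
        dxh = sigma * (y - x_hat)
        dy = x_obs[i] * (rho - z) - y
        dz = x_obs[i] * y - beta * z
        x_hat, y, z = x_hat + dt * dxh, y + dt * dy, z + dt * dz
        err += (x_hat - x_obs[i]) ** 2
    return err / n

res = minimize_scalar(sync_error, bounds=(5.0, 15.0), method="bounded")
print(f"estimated sigma: {res.x:.2f} (true {sigma_true})")
```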

  16. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    OpenAIRE

    Vjekoslav Klaric

    2011-01-01

    This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the non-observed economy. After a glance at the ...

  17. Effective dose estimation from the Hp(10) value measured by film OR TL dosemeter located above the lead apron in medical diagnostic and intervention radiology

    International Nuclear Information System (INIS)

    Trousil, J.; Plichta, J.; Petrova, K.

    2001-01-01

    In medical institutions where diagnostic and interventional radiology is performed, staff personnel doses have long approached the annual limit. The State Office for Radiation Safety commissioned a research task with a view to: (a) the influence of the dosemeter location on different parts of the body on the reliability of the estimation of the E value by means of the value measured at the standard body location - the left part of the chest above the lead apron; (b) the influence of the protective lead apron (neck shield, spectacles) with known lead equivalent on the determination of the E and H_T values. In this contribution we present the results of this experimental study, including recommendations for the number of dosemeters, and their locations on the body, needed for a reliable estimation of the E value. (authors)

  18. Telemetry location error in a forested habitat

    Science.gov (United States)

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

    The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.

  19. Automatic Sky View Factor Estimation from Street View Photographs—A Big Data Approach

    Directory of Open Access Journals (Sweden)

    Jianming Liang

    2017-04-01

    Full Text Available Hemispherical (fisheye) photography is a well-established approach for estimating the sky view factor (SVF). High-resolution urban models from LiDAR and oblique airborne photogrammetry can provide continuous SVF estimates over a large urban area, but such data are not always available and are difficult to acquire. Street view panoramas have become widely available in urban areas worldwide: Google Street View (GSV) maintains a global network of panoramas excluding China and several other countries; Baidu Street View (BSV) and Tencent Street View (TSV) focus their panorama acquisition efforts within China, and have covered hundreds of cities therein. In this paper, we approach this issue from a big data perspective by presenting and validating a method for automatic estimation of SVF from massive amounts of street view photographs. Comparisons were made with SVF estimates derived from two independent sources: a LiDAR-based Digital Surface Model (DSM) and an oblique airborne photogrammetry-based 3D city model (OAP3D), resulting in correlation coefficients of 0.863 and 0.987, respectively. The comparisons demonstrated the capacity of the proposed method to provide reliable SVF estimates. Additionally, we present an application of the proposed method with about 12,000 GSV panoramas to characterize the spatial distribution of SVF over Manhattan Island in New York City. Although this is a proof-of-concept study, it has shown the potential of the proposed approach to assist urban climate and urban planning research. However, further development is needed before this approach can be finally delivered to the urban climate and urban planning communities for practical applications.
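
    As a minimal illustration of the fisheye-based estimate the abstract starts from, the sketch below computes an SVF from a synthetic binary sky mask, assuming an equidistant fisheye projection; real pipelines must first segment sky pixels from the panoramas.

```python
import numpy as np

# Compute a sky view factor from a hemispherical image: each pixel maps to
# a zenith angle, and sky pixels are weighted by sin(z)cos(z), the solid-angle
# and cosine weighting of the isotropic SVF integral.
n = 400                                          # square fisheye image, n x n
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(xx - n / 2, yy - n / 2) / (n / 2)   # normalised radius from centre
inside = r <= 1.0                                # pixels within the image circle
zenith = r * (np.pi / 2)                         # equidistant projection assumed

rng = np.random.default_rng(5)
sky = inside & (zenith < 1.0) & (rng.random((n, n)) > 0.2)   # toy sky mask

w = np.sin(zenith) * np.cos(zenith)              # per-pixel weight
svf = (w * sky).sum() / (w * inside).sum()
print(f"sky view factor ~ {svf:.2f}")
```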

  20. RiD: A New Approach to Estimate the Insolvency Risk

    Directory of Open Access Journals (Sweden)

    Marco Aurélio dos Santos Sanfins

    2014-10-01

    Full Text Available Given the recent international crises and the increasing number of defaults, several researchers have attempted to develop metrics that calculate the probability of insolvency with higher accuracy. The approaches commonly used, however, consider neither the credit risk nor the severity of the distance between receivables and obligations among different periods. In this paper we mathematically present an approach that allows us to estimate the insolvency risk by considering not only future receivables and obligations, but also the severity of the distance between them and the quality of the respective receivables. Using Monte Carlo simulations and hypothetical examples, we show that our metric is able to estimate the insolvency risk with high accuracy. Moreover, our results suggest that in the absence of a smooth distribution between receivables and obligations, there is a non-null insolvency risk even when the present value of receivables is larger than the present value of the obligations.

  1. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
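
    A compact sketch of the emulator idea follows: a cheap response surface is fitted to an ensemble of model runs and then used inside Metropolis-Hastings in place of the expensive model. The toy model, quadratic emulator and priors are assumptions, not the paper's density functional theory setup.

```python
import numpy as np

# Emulator-based Bayesian calibration: replace the expensive model eta(theta)
# with a response surface fitted to an ensemble of runs, then sample the
# posterior with random-walk Metropolis using the emulator.
rng = np.random.default_rng(3)
eta = lambda th: np.sin(th) + 0.3 * th            # "expensive" model (toy)
theta_true, sig = 1.2, 0.05
y_obs = eta(theta_true) + rng.normal(0, sig, 5)   # physical measurements

# ensemble of model runs -> quadratic emulator via least squares
ths = np.linspace(0, 2.5, 12)
coef = np.polyfit(ths, eta(ths), 2)
emulate = lambda th: np.polyval(coef, th)

def log_post(th):
    if not 0.0 <= th <= 2.5:                      # uniform prior on [0, 2.5]
        return -np.inf
    return -0.5 * np.sum((y_obs - emulate(th)) ** 2) / sig ** 2

th, lp, chain = 0.5, log_post(0.5), []
for _ in range(20_000):                           # Metropolis-Hastings
    prop = th + 0.1 * rng.standard_normal()
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:
        th, lp = prop, lpp
    chain.append(th)

post = np.array(chain[5000:])                     # discard burn-in
print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f} (true {theta_true})")
```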

  2. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
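
    Since the exact discretization of an Ornstein-Uhlenbeck process is an AR(1), its maximum likelihood estimation reduces to a linear regression, as the sketch below shows on a simulated series standing in for the model-generated price paths; all parameter values are illustrative.

```python
import numpy as np

# ML estimation of a mean-reverting OU process, dX = kappa (theta - X) dt
# + sigma dW, from a discretely observed series via its exact AR(1) form.
rng = np.random.default_rng(4)
kappa, theta, sigma, dt, n = 2.0, np.log(20.0), 0.3, 1 / 52, 2000

a = np.exp(-kappa * dt)                          # AR(1) coefficient
sd = sigma * np.sqrt((1 - a ** 2) / (2 * kappa)) # conditional std dev
x = np.empty(n); x[0] = theta
for t in range(n - 1):
    x[t + 1] = a * x[t] + theta * (1 - a) + sd * rng.standard_normal()

# regression x_{t+1} = b0 + b1 x_t + eps is the MLE for Gaussian transitions
b1, b0 = np.polyfit(x[:-1], x[1:], 1)
resid = x[1:] - (b0 + b1 * x[:-1])

kappa_hat = -np.log(b1) / dt
theta_hat = b0 / (1 - b1)
sigma_hat = resid.std(ddof=2) * np.sqrt(2 * kappa_hat / (1 - b1 ** 2))
print(f"kappa={kappa_hat:.2f}, theta={theta_hat:.2f}, sigma={sigma_hat:.2f}")
```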

  3. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

    Directory of Open Access Journals (Sweden)

    Akanda Md. Abdus Salam

    2017-03-01

    Full Text Available Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions and for individual heterogeneity in capture probabilities in a closed-population capture-recapture model with individual heterogeneity and time variation. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes. A simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied for the selection of the best fitting model. This approach performs well when the estimated population parameters depend on the individual heterogeneity and the nature of linear correlation among capture-recapture occasions.

  4. A holistic approach to age estimation in refugee children.

    Science.gov (United States)

    Sypek, Scott A; Benson, Jill; Spanner, Kate A; Williams, Jan L

    2016-06-01

    Many refugee children arriving in Australia have an inaccurately documented date of birth (DOB). A medical assessment of a child's age is often requested when there is a concern that their documented DOB is incorrect. This study's aim was to assess the accuracy of a holistic age assessment tool (AAT) in estimating the age of refugee children newly settled in Australia. A holistic AAT that combines medical and non-medical approaches was used to estimate the ages of 60 refugee children with a known DOB. The tool used four components to assess age: an oral narrative, developmental assessment, anthropometric measures and pubertal assessment. Assessors were blinded to the true age of the child. Correlation coefficients for the actual and estimated age were calculated for the tool overall and for individual components. The correlation coefficient between the actual and estimated age from the AAT was very strong at 0.9802 (boys 0.9748, girls 0.9876). The oral narrative component of the tool performed best (R = 0.9603). Overall, 86.7% of age estimates were within 1 year of the true age. The range of differences was -1.43 to 3.92 years with a standard deviation of 0.77 years (9.24 months). The AAT is a holistic, simple and safe instrument that can be used to estimate age in refugee children with results comparable with the radiological methods currently used. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  5. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    Science.gov (United States)

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  6. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    Science.gov (United States)

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.

  7. Smart Indoor Positioning/Location and Navigation: A Lightweight Approach

    Directory of Open Access Journals (Sweden)

    José Antonio Puértolas Montañés

    2013-06-01

    Full Text Available In this paper a new indoor location system is presented, which shows the position and orientation of the user in closed environments, as well as the optimal route to their destination, through location tags. This system is called Labelee, and it makes interaction between users and devices easier through QR code scanning or NFC tag reading, because this technology is increasingly common in the latest smartphones. With this system, users can locate themselves within an enclosure with less interaction.

  8. GPS-based microenvironment tracker (MicroTrac) model to estimate time–location of individuals for air pollution exposure assessments: Model evaluation in central North Carolina

    Science.gov (United States)

    Breen, Michael S.; Long, Thomas C.; Schultz, Bradley D.; Crooks, James; Breen, Miyuki; Langstaff, John E.; Isaacs, Kristin K.; Tan, Yu-Mei; Williams, Ronald W.; Cao, Ye; Geller, Andrew M.; Devlin, Robert B.; Batterman, Stuart A.; Buckley, Timothy J.

    2014-01-01

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time–location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies. PMID:24619294

  9. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  10. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    Directory of Open Access Journals (Sweden)

    Vjekoslav Klarić

    2011-03-01

    Full Text Available This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the non-observed economy. After a glance at the theory behind it, the MIMIC model is then applied to the Croatian economy. Considering the described characteristics of different methods, a previous estimate of the size of the non-observed economy in Croatia is chosen to provide benchmark values for the MIMIC model. Using those, the estimates of the size of non-observed economy in Croatia during the period 1998-2009 are obtained.

  11. Cost estimation: An expert-opinion approach. [cost analysis of research projects using the Delphi method (forecasting)

    Science.gov (United States)

    Buffalano, C.; Fogleman, S.; Gielecki, M.

    1976-01-01

    A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique, a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi technique allows for the integration of expert opinion into the cost-estimating process in a consistent and rigorous fashion. This approach can also signal potential cost-problem areas, a result that can be a useful tool in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.

  12. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    Science.gov (United States)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project is analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.

  13. Immigrants' location preferences

    DEFF Research Database (Denmark)

    Damm, Anna Piil

    This paper exploits a spatial dispersal policy for refugee immigrants to estimate the importance of local and regional factors for refugees' location preferences. The main results of a mixed proportional hazard competing risks model are that placed refugees react to high regional unemployment...

  14. A New Fault Location Approach for Acoustic Emission Techniques in Wind Turbines

    Directory of Open Access Journals (Sweden)

    Carlos Quiterio Gómez Muñoz

    2016-01-01

    Full Text Available The renewable energy industry is undergoing continuous improvement and development worldwide, wind energy being one of the most relevant renewable energies. This industry requires high levels of reliability, availability, maintainability and safety (RAMS) for wind turbines. The blades are critical components in wind turbines. This research work focuses on the fault detection and diagnosis (FDD) of wind turbine blades. The FDD approach is composed of a robust condition monitoring system (CMS) and a novel signal processing method. The CMS collects and analyses the data from different non-destructive tests based on acoustic emission. The acoustic emission signals are collected applying macro-fiber composite (MFC) sensors to detect and locate cracks on the surface of the blades. Three MFC sensors are set in a section of a wind turbine blade. The acoustic emission signals are generated by breaking a pencil lead on the blade surface. This method is used to simulate the acoustic emission due to a breakdown of the composite fibers. The breakdown generates a set of mechanical waves that are collected by the MFC sensors. A graphical method is employed to obtain a system of non-linear equations that is used for locating the emission source. This work demonstrates that a fiber breakage in the wind turbine blade can be detected and located by using only three low-cost sensors. It allows the detection of potential failures at an early stage, and it can also reduce corrective maintenance tasks and downtimes and increase the RAMS of the wind turbine.
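
    The sketch below illustrates the source-location step with a system of non-linear time-difference-of-arrival equations for three sensors, solved numerically; the geometry, wave speed and arrival times are synthetic, and real acoustic-emission waves are dispersive, which this toy ignores.

```python
import numpy as np
from scipy.optimize import fsolve

# Locate an emission source from three sensors: each time-difference of
# arrival gives one non-linear equation in the unknown source coordinates.
v = 1500.0                                        # assumed wave speed (m/s)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
src_true = np.array([0.62, 0.35])

t_arr = np.linalg.norm(sensors - src_true, axis=1) / v   # simulated arrival times

def equations(p):
    d = np.linalg.norm(sensors - p, axis=1)
    # range differences must match the measured TDOAs w.r.t. sensor 0
    return [d[1] - d[0] - v * (t_arr[1] - t_arr[0]),
            d[2] - d[0] - v * (t_arr[2] - t_arr[0])]

src_est = fsolve(equations, x0=np.array([0.5, 0.4]))
print("estimated source:", np.round(src_est, 3))
```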

  15. Joint sensor location/power rating optimization for temporally-correlated source estimation

    KAUST Repository

    Bushnaq, Osama M.; Chaaban, Anas; Al-Naffouri, Tareq Y.

    2017-01-01

    via wireless AWGN channel. In addition to selecting the optimal sensing location, the sensor type to be placed in these locations is selected from a pool of T sensor types such that different sensor types have different power ratings and costs

  16. A decision making method based on interval type-2 fuzzy sets: An approach for ambulance location preference

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2018-01-01

    Full Text Available Selecting the best location to deploy an ambulance is one of the important variables that need to be accounted for in improving emergency medical services. The selection requires both quantitative and qualitative evaluation. The fuzzy set based approach is one of the well-known theories that help decision makers handle fuzziness, uncertainty in decision making and vagueness of information. This paper proposes a new decision making method, Interval Type-2 Fuzzy Simple Additive Weighting (IT2 FSAW), to deal with uncertainty and vagueness. The new IT2 FSAW is applied to establish a preference in ambulance location. The decision making framework defines four criteria and five alternatives of ambulance location preference. Four experts attached to a Malaysian government hospital and a university medical center were interviewed to provide linguistic evaluations prior to analysis with the new IT2 FSAW. Implementation of the proposed method in the case of ambulance location preference suggests that the ‘road network’ is the best alternative for ambulance location. The results indicate that the proposed method offers a consensus solution for handling the vague and qualitative criteria of ambulance location preference.
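
    A reduced sketch of the interval-weighted simple additive weighting idea follows, keeping ratings and weights as plain intervals rather than full interval type-2 fuzzy sets; the criteria, alternatives and numbers are invented, not the paper's expert data.

```python
import numpy as np

# Interval-weighted simple additive weighting: expert ratings and criterion
# weights are kept as [lower, upper] bounds, aggregated by interval
# arithmetic, and alternatives are ranked by the interval midpoint.
criteria_w = np.array([[0.7, 0.9], [0.5, 0.7], [0.3, 0.5], [0.6, 0.8]])  # [lo, hi]
alternatives = {
    "road network":  np.array([[0.8, 1.0], [0.7, 0.9], [0.6, 0.8], [0.7, 0.9]]),
    "hospital gate": np.array([[0.6, 0.8], [0.8, 1.0], [0.4, 0.6], [0.5, 0.7]]),
    "city centre":   np.array([[0.5, 0.7], [0.6, 0.8], [0.7, 0.9], [0.4, 0.6]]),
}

def interval_saw(ratings):
    lo = (criteria_w[:, 0] * ratings[:, 0]).sum()   # lower bound of weighted sum
    hi = (criteria_w[:, 1] * ratings[:, 1]).sum()   # upper bound
    return lo, hi

for name, r in alternatives.items():
    lo, hi = interval_saw(r)
    print(f"{name:14s} score=[{lo:.2f}, {hi:.2f}] midpoint={0.5 * (lo + hi):.2f}")
```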

  17. A multiple-location model for natural gas forward curves

    International Nuclear Information System (INIS)

    Buffington, J.C.

    1999-06-01

    This thesis presents an approach for financial modelling of natural gas in which connections between locations are incorporated and the complexities of forward curves in natural gas are considered. Apart from electricity, natural gas is the most volatile commodity traded. Its price is often dependent on the weather and price shocks can be felt across several geographic locations. This modelling approach incorporates multiple risk factors that correspond to various locations. One of the objectives was to determine if the model could be used for closed-form option prices. It was suggested that an adequate model for natural gas must consider 3 statistical properties: volatility term structure, backwardation and contango, and stochastic basis. Data from gas forward prices at Chicago, NYMEX and AECO were empirically tested to better understand these 3 statistical properties at each location and to verify if the proposed model truly incorporates these properties. In addition, this study examined the time series property of the difference of two locations (the basis) and determines that these empirical properties are consistent with the model properties. Closed-form option solutions were also developed for call options of forward contracts and call options on forward basis. The options were calibrated and compared to other models. The proposed model is capable of pricing options, but the prices derived did not pass the test of economic reasonableness. However, the model was able to capture the effect of transportation as well as aspects of seasonality which is a benefit over other existing models. It was determined that modifications will be needed regarding the estimation of the convenience yields. 57 refs., 2 tabs., 7 figs., 1 append

  18. Novel approaches to the estimation of intake and bioavailability of radiocaesium in ruminants grazing forested areas

    International Nuclear Information System (INIS)

    Mayes, R.W.; Lamb, C.S.; Beresford, N.A.

    1994-01-01

    It is difficult to measure transfer of radiocaesium to the tissues of forest ruminants because they can potentially ingest a wide range of plant types. Measurements on undomesticated forest ruminants incur further difficulties. Existing techniques of estimating radiocaesium intake are imprecise when applied to forest systems. New approaches to measure this parameter are discussed. Two methods of intake estimation are described and evaluated. In the first method, radiocaesium intake is estimated from the radiocaesium activity concentrations of plants, combined with estimates of dry-matter (DM) intake and plant species composition of the diet, using plant and orally-dosed hydrocarbons (n-alkanes) as markers. The second approach estimates the total radiocaesium intake of an animal from the rate of excretion of radiocaesium in the faeces and an assumed value for the apparent absorption coefficient. Estimates of radiocaesium intake, using these approaches, in lactating goats and adult sheep were used to calculate transfer coefficients for milk and muscle; these compared favourably with transfer coefficients previously obtained under controlled experimental conditions. Potential variations in bioavailability of dietary radiocaesium sources to forest ruminants have rarely been considered. Approaches that can be used to describe bioavailability, including the true absorption coefficient and in vitro extractability, are outlined
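
    The second approach above reduces to a one-line mass balance: ingested activity that is not absorbed appears in the faeces. A minimal sketch in Python, with an illustrative apparent absorption coefficient (the paper assumes a value appropriate to the diet):

        def radiocaesium_intake(faecal_output_bq_per_day, apparent_absorption=0.8):
            """Estimate daily radiocaesium intake (Bq/day) from faecal excretion.

            Mass balance: faecal output = intake * (1 - apparent_absorption).
            The default coefficient is illustrative only.
            """
            return faecal_output_bq_per_day / (1.0 - apparent_absorption)

        # 120 Bq/day in faeces with 80% apparent absorption -> 600 Bq/day ingested.
        print(radiocaesium_intake(120.0))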

  19. Universal Approach to Estimate Perfluorocarbons Emissions During Individual High-Voltage Anode Effect for Prebaked Cell Technologies

    Science.gov (United States)

    Dion, Lukas; Gaboury, Simon; Picard, Frédéric; Kiss, Laszlo I.; Poncsak, Sandor; Morais, Nadia

    2018-04-01

    Recent investigations on aluminum electrolysis cells demonstrated limitations of the commonly used tier-3 slope methodology for estimating perfluorocarbon (PFC) emissions from high-voltage anode effects (HVAEs). These limitations are greater for smelters with a reduced HVAE frequency. A novel approach is proposed to estimate the specific emissions from individual HVAEs using a tier-2 model, instead of estimating monthly emissions for pot lines with the slope methodology. This approach considers the nonlinear behavior of PFC emissions as a function of the polarized anode effect duration and also integrates the change in behavior attributed to cell productivity. Validation was performed by comparing the new approach and the slope methodology with measurement campaigns from different smelters. The results demonstrate good agreement between measured and estimated emissions and more accurately reflect the individual HVAE dynamics occurring over time. Finally, the possible impact of this approach for the aluminum industry is discussed.

  20. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weights for the fusion. • The robustness of the proposed approach is verified by different batteries. • The results show that the proposed method can improve global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant properties of batteries, the existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide an accurate SOC for the entire discharging period. This paper presents a novel SOC estimation approach based on a multiple-ECM fusion method for improving practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd-order RC model, are selected to describe the dynamic voltage of lithium-ion batteries, and the genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models, and the Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of the SOC estimation against uncertain battery materials and inaccurate initial states.
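
    A minimal sketch of the fusion step, assuming each of the three model-specific H-infinity filters returns an SOC estimate and a terminal-voltage residual at every sample. The function name, noise level and uniform prior are illustrative, not taken from the paper.

        import numpy as np

        def bayes_fusion(soc_estimates, voltage_residuals, prior, sigma_v=0.01):
            """Fuse per-model SOC estimates with Bayes-theorem weights.

            Each model's likelihood is scored by a Gaussian on its voltage
            residual; the posterior weights combine the SOC estimates.
            """
            soc = np.asarray(soc_estimates, dtype=float)
            res = np.asarray(voltage_residuals, dtype=float)
            likelihood = np.exp(-0.5 * (res / sigma_v) ** 2)
            posterior = prior * likelihood
            posterior /= posterior.sum()
            return float(posterior @ soc), posterior  # fused SOC, next prior

        soc_fused, weights = bayes_fusion([0.62, 0.60, 0.63], [0.004, 0.015, 0.008],
                                          prior=np.ones(3) / 3)
        print(soc_fused, weights)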

  1. A simple approach to estimate soil organic carbon and soil co/sub 2/ emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (Soil Organic Carbon) and soil CO/sub 2/ (Carbon Dioxide) emission are among the indicators of carbon sequestration and hence global climate change. Researchers in developed countries benefit from advanced technologies to estimate C (Carbon) sequestration. However, access to the latest technologies to conduct such estimates has always been challenging in developing countries. This paper presents a simple and comprehensive approach for estimating SOC and soil CO/sub 2/ emission from arable and forest soils. The approach includes various protocols that can be followed in laboratories of research organizations or academic institutions equipped with basic research instruments and technology. The protocols involve soil sampling, sample analysis for selected properties, and the use of the widely tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO/sub 2/ emission over the short and long term for global climate change assessment studies. (author)

  2. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied to estimate the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers’ preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.

  3. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
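
    As a toy illustration of the joint location/intensity estimation idea (not RoSE itself, which handles engineered 3D PSFs, high molecular density and robustness to model mismatch), a pixelated 2D Gaussian PSF can be fitted under Poisson noise by minimizing the negative log-likelihood:

        import numpy as np
        from scipy.optimize import minimize

        def psf(params, xx, yy, sigma=1.3, bg=2.0):
            """Expected photons/pixel for a Gaussian PSF plus uniform background."""
            x0, y0, photons = params
            g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
            return photons * g / (2 * np.pi * sigma ** 2) + bg

        def neg_log_likelihood(params, img, xx, yy):
            mu = psf(params, xx, yy)
            return np.sum(mu - img * np.log(mu))  # Poisson NLL up to a constant

        rng = np.random.default_rng(0)
        xx, yy = np.meshgrid(np.arange(15), np.arange(15))
        truth = (7.3, 6.8, 800.0)                 # x, y (pixels), photon count
        img = rng.poisson(psf(truth, xx, yy))

        fit = minimize(neg_log_likelihood, x0=(7.0, 7.0, 500.0),
                       args=(img, xx, yy), method="Nelder-Mead")
        print(fit.x)  # jointly estimated (x, y, photons)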

  4. Geostatistical estimates of future recharge for the Death Valley region

    International Nuclear Information System (INIS)

    Hevesi, J.A.; Flint, A.L.

    1998-01-01

    Spatially distributed estimates of regional ground water recharge rates under both current and potential future climates are needed to evaluate a potential geologic repository for high-level nuclear waste at Yucca Mountain, Nevada, which is located within the Death Valley ground-water region (DVGWR). Determining the spatial distribution of recharge is important for regional saturated-zone ground-water flow models. In the southern Nevada region, the Maxey-Eakin method has been used for estimating recharge based on average annual precipitation. Although this method does not directly account for a variety of location-specific factors which control recharge (such as bedrock permeability, soil cover, and net radiation), precipitation is the primary factor that controls recharge in the region. Estimates of recharge obtained by using the Maxey-Eakin method are comparable to estimates of recharge obtained by using chloride balance studies. The authors consider the Maxey-Eakin approach a relatively simple method of obtaining preliminary estimates of recharge on a regional scale
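
    The Maxey-Eakin method itself fits in a few lines: recharge is taken as a fixed fraction of average annual precipitation within precipitation zones. A sketch using the commonly cited zone coefficients (precipitation in inches; verify the coefficients against the original source before use):

        ZONES = [  # (lower bound, upper bound, recharge fraction) - commonly cited
            (0.0, 8.0, 0.00),
            (8.0, 12.0, 0.03),
            (12.0, 15.0, 0.07),
            (15.0, 20.0, 0.15),
            (20.0, float("inf"), 0.25),
        ]

        def maxey_eakin_recharge(precip_in):
            """Estimated recharge (inches/year) from average annual precipitation."""
            for lower, upper, fraction in ZONES:
                if lower <= precip_in < upper:
                    return fraction * precip_in
            raise ValueError("precipitation must be non-negative")

        for p in (6.0, 10.0, 18.0, 25.0):
            print(p, "->", round(maxey_eakin_recharge(p), 2))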

  5. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the actual IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated with IDH1 status. • Area under the ROC curve of the proposed estimation method reached 0.86.
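
    The validation loop is standard leave-one-out cross-validation over the patient-by-feature matrix. A schematic Python sketch with random stand-in data and a logistic-regression classifier (the paper's feature selection uses an improved genetic algorithm, which is omitted here):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import LeaveOneOut, cross_val_predict
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.normal(size=(110, 671))   # 110 patients x 671 radiomic features
        y = rng.integers(0, 2, size=110)  # IDH1 labels (random stand-ins here)

        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        proba = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                                  method="predict_proba")[:, 1]
        print("LOOCV AUC:", roc_auc_score(y, proba))  # ~0.5 on random labels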

  6. Location Based Services and Applications

    OpenAIRE

    Elenis Gorrita Michel; Rónier Sierra Dávila; Samuel Montejo Sánchez

    2012-01-01

    Location Based Services (LBS) continue to grow in popularity, effectiveness and reliability, to the extent that applications are designed and implemented taking into account the facilities of the user location information. In this work, some of the main applications are addressed, in order to make an assessment of the current importance of the LBS, as a branch of technology in full swing. In addition, the main techniques for location estimation are studied, essential information to the LBS. B...

  7. Estimating soil erosion in Natura 2000 areas located on three semi-arid Mediterranean Islands.

    Science.gov (United States)

    Zaimes, George N; Emmanouloudis, Dimitris; Iakovoglou, Valasia

    2012-03-01

    A major initiative in Europe is the protection of its biodiversity. To accomplish this, specific areas in all countries of the European Union are protected through the establishment of the "Natura 2000" network. One of the major threats to these areas, and to ecosystems in general, is soil erosion. The objective of this study was to quantitatively estimate surface soil losses for three of these protected areas located on semi-arid islands of the Mediterranean: one Natura 2000 area was selected on each of Sicily (Italy), Cyprus, and Rhodes (Greece). To estimate soil losses, Gerlach troughs were used. These troughs were established on slopes that ranged from 35-40% in four different vegetation types: i) Quercus ilex and Quercus rotundifolia forests, ii) Pinus brutia forests, iii) "Phrygana" shrublands and iv) vineyards. The shrublands had the highest soil losses (270 kg ha(-1) yr(-1)), 5-13 times more than the other three vegetation types. Soil losses in these shrublands should be considered a major concern. However, the other vegetation types also had high soil losses (21-50 kg ha(-1) yr(-1)). In conclusion, to enhance and conserve the biodiversity of these Natura 2000 areas, protective management measures should be taken to decrease soil losses.

  8. Sink-oriented Dynamic Location Service Protocol for Mobile Sinks with an Energy Efficient Grid-Based Approach

    Directory of Open Access Journals (Sweden)

    Hyunseung Choo

    2009-03-01

    Full Text Available Sensor nodes transmit the sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacities and memory. Portable wireless devices are increasing in popularity. Mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and disseminate information from multi-source nodes to the multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing to all the sensor nodes. Then we propose a Location-based Shortest Relay (LSR) that efficiently forwards (or relays) data from a source node to a sink with minimal delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources.

  9. Sink-oriented Dynamic Location Service Protocol for Mobile Sinks with an Energy Efficient Grid-Based Approach.

    Science.gov (United States)

    Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung

    2009-01-01

    Sensor nodes transmit the sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacities and memory. Portable wireless devices are increasing in popularity. Mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and disseminate information from multi-source nodes to the multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing to all the sensor nodes. Then we propose a Location-based Shortest Relay (LSR) that efficiently forwards (or relays) data from a source node to a sink with minimal delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources.

  10. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  11. Unsteady force estimation using a Lagrangian drift-volume approach

    Science.gov (United States)

    McPhaden, Cameron J.; Rival, David E.

    2018-04-01

    A novel Lagrangian force estimation technique for unsteady fluid flows has been developed, using the concept of a Darwinian drift volume to measure unsteady forces on accelerating bodies. The construct of added mass in viscous flows, calculated from a series of drift volumes, is used to calculate the reaction force on an accelerating circular flat plate, containing highly-separated, vortical flow. The net displacement of fluid contained within the drift volumes is, through Darwin's drift-volume added-mass proposition, equal to the added mass of the plate and provides the reaction force of the fluid on the body. The resultant unsteady force estimates from the proposed technique are shown to align with the measured drag force associated with a rapid acceleration. The critical aspects of understanding unsteady flows, relating to peak and time-resolved forces, often lie within the acceleration phase of the motions, which are well-captured by the drift-volume approach. Therefore, this Lagrangian added-mass estimation technique opens the door to fluid-dynamic analyses in areas that, until now, were inaccessible by conventional means.

  12. Estimation of stature from hand impression: a nonconventional approach.

    Science.gov (United States)

    Ahemad, Nasir; Purkait, Ruma

    2011-05-01

    Stature is used for constructing a biological profile that assists with the identification of an individual. So far, little attention has been paid to the fact that stature can be estimated from hand impressions left at a crime scene. The present study, based on practical observations, adopted a new methodology of measuring hand length from the depressed area between the hypothenar and thenar regions on the proximal surface of the palm. Stature and bilateral hand impressions were obtained from 503 men of central India. Seventeen dimensions of the hand were measured on the impression. Linear regression equations derived showed that hand length, followed by palm length, is the best estimator of stature. Testing the practical utility of the suggested method on latent prints of 137 subjects, a statistically insignificant difference was obtained when known stature was compared with stature estimated from the latent prints. The suggested approach points to a strong possibility of its use in crime scene investigation, provided that validation studies in real-life scenarios are performed. © 2011 American Academy of Forensic Sciences.

  13. Approaches to Refining Estimates of Global Burden and Economics of Dengue

    Science.gov (United States)

    Shepard, Donald S.; Undurraga, Eduardo A.; Betancourt-Cravioto, Miguel; Guzmán, María G.; Halstead, Scott B.; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O.; Tapia-Conyer, Roberto; Gubler, Duane J.

    2014-01-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools

  14. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Highlights: • An integrated approach to estimate the storage reliability is proposed. • A non-parametric measure to estimate the number of failures and the reliability at each testing time is presented. • The E-Bayesian method to estimate the failure probability is introduced. • The possible initial failures in storage are introduced. • The non-parametric estimates of failure numbers can be used in the parametric models.

  15. Estimation of small area populations using remote sensing and other approaches

    International Nuclear Information System (INIS)

    Honea, R.B.; Shumpert, B.L.; Edwards, R.G.; Margle, S.M.; Coleman, P.R.; Smyre, J.L.; Rush, R.M.; Durfee, R.C.

    1983-01-01

    This paper documents the results of an assessment of a variety of techniques for estimating residential population for a five-mile radial grid around a nuclear power plant. The study area surrounded the proposed Limerick Nuclear Power Plant located near Philadelphia, PA. Techniques evaluated ranged from the use of air photos to infer population from housing distributions to the use of Landsat data to characterize probable residential population around the plant site. Although the techniques involving the use of Landsat data provided good results, a simple proportional area allocation method and the current procedure used by Oak Ridge National Laboratory for the Nuclear Regulatory Commission were among the best techniques. Further research using other sites and better resolution satellite data is recommended to investigate the possible refinement of population estimates using remote sensing media. 34 references, 10 figures, 2 tables

  16. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

    Science.gov (United States)

    Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

    2015-09-25

    Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography under linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied to a monoclonal antibody on a cation-exchanger and to an Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Investigating the correspondence between driver head position and glance location

    Directory of Open Access Journals (Sweden)

    Joonbum Lee

    2018-02-01

    Full Text Available The relationship between a driver’s glance orientation and corresponding head rotation is highly complex due to its nonlinear dependence on the individual, task, and driving context. This paper presents expanded analytic detail and findings from an effort that explored the ability of head pose to serve as an estimator for driver gaze by connecting head rotation data with manually coded gaze region data, using both a statistical analysis approach and a predictive (i.e., machine learning) approach. For the latter, classification accuracy increased as visual angles between two glance locations increased. In other words, the greater the shift in gaze, the higher the accuracy of classification. This is an intuitive but important concept that we make explicit through our analysis. The highest accuracy achieved was 83% using the method of Hidden Markov Models (HMM) for the binary gaze classification problem of (a) glances to the forward roadway versus (b) glances to the center stack. Results suggest that although there are individual differences in head-glance correspondence while driving, classifier models based on head-rotation data may be robust to these differences and therefore can serve as reasonable estimators for glance location. The results suggest that driver head pose can be used as a surrogate for eye gaze in several key conditions, including the identification of high-eccentricity glances. Inexpensive driver head pose tracking may be a key element in detection systems developed to mitigate driver distraction and inattention.
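
    A minimal sketch of likelihood-based HMM classification of head-rotation sequences along these lines, assuming the hmmlearn package; the feature dimensions, component counts and simulated data are illustrative:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(2)

        def train(sequences):
            """Fit one HMM per glance class on concatenated sequences."""
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            return GaussianHMM(n_components=3, covariance_type="diag",
                               n_iter=50).fit(X, lengths)

        # Simulated (yaw, pitch) sequences for two glance classes.
        road = [rng.normal(0.0, 2.0, size=(30, 2)) for _ in range(20)]
        stack = [rng.normal(-20.0, 2.0, size=(30, 2)) for _ in range(20)]
        m_road, m_stack = train(road), train(stack)

        glance = rng.normal(-20.0, 2.0, size=(30, 2))  # unseen sequence
        label = ("road" if m_road.score(glance) > m_stack.score(glance)
                 else "center stack")
        print(label)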

  18. Estimating the timing and location of shallow rainfall-induced landslides using a model for transient, unsaturated infiltration

    Science.gov (United States)

    Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.

    2010-01-01

    Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
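
    The per-cell stability check is a one-dimensional infinite-slope factor of safety driven by the modeled pressure head, FS = tan(phi)/tan(theta) + (c' - psi*gamma_w*tan(phi)) / (gamma*z*sin(theta)*cos(theta)). A sketch with illustrative soil parameters:

        import math

        def factor_of_safety(c=4000.0, phi=math.radians(33), gamma=19000.0,
                             z=2.0, theta=math.radians(30), psi=0.5,
                             gamma_w=9810.0):
            """Infinite-slope FS: c effective cohesion (Pa), phi friction angle,
            gamma soil unit weight (N/m^3), z failure depth (m), theta slope
            angle, psi pressure head (m) from the infiltration model.
            All parameter values are illustrative."""
            denom = gamma * z * math.sin(theta) * math.cos(theta)
            return (math.tan(phi) / math.tan(theta)
                    + (c - psi * gamma_w * math.tan(phi)) / denom)

        print(factor_of_safety(psi=0.5))  # modest pore pressure -> FS > 1
        print(factor_of_safety(psi=1.5))  # storm-elevated pressure -> FS < 1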

  19. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    Directory of Open Access Journals (Sweden)

    AMAD HAMZA

    2016-10-01

    Full Text Available In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For the estimation a well-known method is used: MLE (Maximum Likelihood Estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method a Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper derives from the fact that, when the RT of reverberant speech falls in a specific range, the signal's decay rates follow a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of pre-existing methods
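
    The MLE core is closed-form for a Rayleigh model: the scale estimate is the root mean square of the observed decay rates divided by the square root of 2. A minimal sketch; the mapping from the fitted scale to an RT value at the end is a placeholder, not the paper's model:

        import numpy as np

        def rayleigh_mle(decay_rates):
            """Closed-form ML estimate of the Rayleigh scale parameter sigma."""
            x = np.asarray(decay_rates, dtype=float)
            return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

        rng = np.random.default_rng(3)
        rates = rng.rayleigh(scale=13.8, size=500)  # simulated decay rates (dB/s)
        sigma = rayleigh_mle(rates)
        rt60 = 60.0 / sigma                         # placeholder RT60 mapping
        print(sigma, rt60)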

  20. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    International Nuclear Information System (INIS)

    Hamza, A.; Jan, T.; Ali, A.

    2016-01-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For the estimation a well-known method is used: MLE (Maximum Likelihood Estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method a Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper derives from the fact that, when the RT of reverberant speech falls in a specific range, the signal's decay rates follow a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of pre-existing methods. (author)

  1. Location Privacy Techniques in Client-Server Architectures

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yiu, Man Lung

    2009-01-01

    A typical location-based service returns nearby points of interest in response to a user location. As such services are becoming increasingly available and popular, location privacy emerges as an important issue. In a system that does not offer location privacy, users must disclose their exact locations in order to receive the desired services. We view location privacy as an enabling technology that may lead to increased use of location-based services. In this chapter, we consider location privacy techniques that work in traditional client-server architectures without any trusted components other... Third, their effectiveness is independent of the distribution of other users, unlike the k-anonymity approach. The chapter characterizes the privacy models assumed by existing techniques and categorizes these according to their approach. The techniques are then covered in turn according...

  2. An approach to the estimation of the value of agricultural residues used as biofuels

    International Nuclear Information System (INIS)

    Kumar, A.; Purohit, P.; Rana, S.; Kandpal, T.C.

    2002-01-01

    A simple demand side approach for estimating the monetary value of agricultural residues used as biofuels is proposed. Some of the important issues involved in the use of biomass feedstocks in coal-fired boilers are briefly discussed along with their implications for the maximum acceptable price estimates for the agricultural residues. Results of some typical calculations are analysed along with the estimates obtained on the basis of a supply side approach (based on production cost) developed earlier. The prevailing market prices of some agricultural residues used as feedstocks for briquetting are also indicated. The results obtained can be used as preliminary indicators for identifying niche areas for immediate/short-term utilization of agriculture residues in boilers for process heating and power generation. (author)

  3. Equivalence among three alternative approaches to estimating live tree carbon stocks in the eastern United States

    Science.gov (United States)

    Coeli M. Hoover; James E. Smith

    2017-01-01

    Assessments of forest carbon are available via multiple alternate tools or applications and are in use to address various regulatory and reporting requirements. The various approaches to making such estimates may or may not be entirely comparable. Knowing how the estimates produced by some commonly used approaches vary across forest types and regions allows users of...

  4. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    Science.gov (United States)

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.

  5. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  6. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
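
    A toy sketch of the outer/inner loop for a narrowband flat channel: for each candidate Doppler shift, the inner step solves the complex channel gain in closed form by least squares, and the outer step picks the Doppler value minimizing the residual. A grid search stands in for the paper's iteration, and all parameters are illustrative:

        import numpy as np

        fs, N = 48e3, 512                      # sample rate (Hz), training length
        n = np.arange(N)
        train = np.exp(1j * np.pi * np.random.default_rng(4).integers(0, 4, N) / 2)

        def received(doppler_hz, gain=0.7 * np.exp(1j * 0.9), snr_db=20):
            sig = gain * train * np.exp(2j * np.pi * doppler_hz * n / fs)
            noise = (np.random.default_rng(5).normal(size=2 * N).view(np.complex128)
                     * 10 ** (-snr_db / 20) / np.sqrt(2))
            return sig + noise

        r = received(doppler_hz=37.0)

        def residual(doppler_hz):
            basis = train * np.exp(2j * np.pi * doppler_hz * n / fs)
            g = np.vdot(basis, r) / np.vdot(basis, basis)  # inner loop: LS gain
            return np.sum(np.abs(r - g * basis) ** 2)

        grid = np.arange(-100.0, 100.0, 0.5)                 # outer loop: search
        print(grid[np.argmin([residual(d) for d in grid])])  # close to 37 Hz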

  7. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  8. An EKF-based approach for estimating leg stiffness during walking.

    Science.gov (United States)

    Ochoa-Diaz, Claudia; Menegaz, Henrique M; Bó, Antônio P L; Borges, Geovany A

    2013-01-01

    Spring-like behavior is an inherent condition for human walking and running. Since leg stiffness k(leg) is a parameter that cannot be directly measured, many techniques have been proposed to estimate it, most of them using force data. This paper addresses the problem using an Extended Kalman Filter (EKF) based on the Spring-Loaded Inverted Pendulum (SLIP) model. The formulation of the filter uses only the Center of Mass (CoM) position and velocity as measurement information; no a priori information about the stiffness value is assumed. Simulation results show that the EKF-based approach can generate a reliable stiffness estimate for walking.
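
    A compact sketch of the idea, with a 1D vertical spring-mass standing in for the SLIP stance phase: the stiffness is appended to the state, and only CoM position and velocity are measured, mirroring the paper's choice of not using force data. All constants are illustrative:

        import numpy as np

        m, g, L0, dt = 70.0, 9.81, 1.0, 0.001   # mass, gravity, rest length, step

        def f(x):                                # state x = [y, v, k]
            y, v, k = x
            return np.array([y + dt * v,
                             v + dt * ((k / m) * (L0 - y) - g),
                             k])

        def F(x):                                # Jacobian of f
            y, v, k = x
            return np.array([[1.0, dt, 0.0],
                             [-dt * k / m, 1.0, dt * (L0 - y) / m],
                             [0.0, 0.0, 1.0]])

        H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # measure y and v only
        Q = np.diag([1e-8, 1e-6, 1e2])                    # let stiffness drift
        R = np.diag([1e-6, 1e-4])

        def ekf_step(x, P, z):
            x_p, F_x = f(x), F(x)
            P_p = F_x @ P @ F_x.T + Q
            K = P_p @ H.T @ np.linalg.inv(H @ P_p @ H.T + R)
            return x_p + K @ (z - H @ x_p), (np.eye(3) - K @ H) @ P_p

        rng = np.random.default_rng(6)
        x_true = np.array([0.98, -0.4, 18000.0])           # true k = 18 kN/m
        x_est = np.array([0.98, -0.4, 10000.0])            # k initially unknown
        P = np.diag([1e-4, 1e-2, 1e8])
        for _ in range(2000):
            x_true = f(x_true)
            z = H @ x_true + rng.normal(0, [1e-3, 1e-2])
            x_est, P = ekf_step(x_est, P, z)
        print(x_est[2])  # drifts toward the true stiffness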

  9. Estimation of daily global solar irradiation by coupling ground measurements of bright sunshine hours to satellite imagery

    International Nuclear Information System (INIS)

    Ener Rusen, Selmin; Hammer, Annette; Akinoglu, Bulent G.

    2013-01-01

    In this work, the current version of the satellite-based HELIOSAT method and ground-based linear Ångström–Prescott type relations are used in combination. The first approach is based on the use of a correlation between daily bright sunshine hours (s) and the cloud index (n). In the second approach a new correlation is proposed between daily solar irradiation and daily data of s and n, which is based on a physical parameterization. The performance of the two proposed combined models is tested against conventional methods. We test the use of the obtained correlation coefficients for nearby locations. Our results show that the use of sunshine duration together with the cloud index is quite satisfactory in the estimation of daily horizontal global solar irradiation. We propose to use the new approaches to estimate daily global irradiation when bright sunshine hours data are available for the location of interest, provided that some regression coefficients are determined using the data of a nearby station. In addition, if surface data for a nearby location do not exist, then it is recommended to use satellite models like HELIOSAT or the new approaches instead of the Ångström type models. - Highlights: • Satellite imagery together with surface measurements in solar radiation estimation. • The new coupled and conventional models (satellite and ground-based) are analyzed. • New models result in highly accurate estimation of daily global solar irradiation

  10. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    Science.gov (United States)

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

    Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all of the phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and to develop pharmacological profiling models. We found pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results that can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the pocket-estimation method choice in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Dealing with Insufficient Location Fingerprints in Wi-Fi Based Indoor Location Fingerprinting

    Directory of Open Access Journals (Sweden)

    Kai Dong

    2017-01-01

    Full Text Available The development of the Internet of Things has accelerated research in the indoor location fingerprinting technique, which provides value-added localization services for existing WLAN infrastructures without the need for any specialized hardware. The deployment of a fingerprinting-based localization system requires an extremely large number of measurements of received signal strength information to generate a location fingerprint database. Nonetheless, this requirement can rarely be satisfied in most indoor environments. In this paper, we target one common situation, in which the collected measurements of received signal strength information are insufficient, and show the limitations of existing location fingerprinting methods in dealing with inadequate location fingerprints. We also introduce a novel method to reduce noise in measuring the received signal strength based on maximum likelihood estimation, and compute locations from inadequate location fingerprints by using the stochastic gradient descent algorithm. Our experimental results show that our proposed method can achieve better localization performance even when only a small quantity of RSS measurements is available. Especially when the number of observations at each location is small, our proposed method shows a clear superiority in localization accuracy.
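
    A rough sketch of the two ingredients under simplifying assumptions: under Gaussian noise the ML estimate of the true RSS at a point is the sample mean of repeated scans, and the position is then found by stochastic gradient descent, here against a log-distance path-loss model rather than the paper's fingerprint formulation. All parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(7)
        aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # AP positions (m)
        P0, N_EXP = -40.0, 2.5                                  # path-loss model

        def rss_model(pos):
            d = np.linalg.norm(aps - pos, axis=1) + 1e-9
            return P0 - 10 * N_EXP * np.log10(d)

        # Step 1: ML denoising = averaging repeated scans at the unknown position.
        true_pos = np.array([6.0, 3.0])
        scans = rss_model(true_pos) + rng.normal(0, 2.0, size=(20, len(aps)))
        z = scans.mean(axis=0)

        # Step 2: stochastic gradient descent on position, one random AP per step.
        pos, lr = np.array([5.0, 5.0]), 0.05
        C = 10 * N_EXP / np.log(10)
        for _ in range(3000):
            i = rng.integers(len(aps))
            d = np.linalg.norm(pos - aps[i]) + 1e-9
            r = rss_model(pos)[i] - z[i]                  # model-minus-measured (dB)
            pos += lr * r * C * (pos - aps[i]) / d ** 2   # descent step on 0.5*r^2
        print(pos)  # close to (6, 3)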

  12. Indirect estimation of radioactivity in containerized cargo

    International Nuclear Information System (INIS)

    Jarman, K.D.; Scherrer, C.; Smith, L.E.; Chilton, L.K.; Anderson, K.K.; Ressler, J.J.; Trease, L.L.

    2011-01-01

    Naturally occurring radioactive material in containerized cargo challenges the state of the art in national and international efforts to detect illicit nuclear and radiological material in transported containers. Current systems are being evaluated and new systems envisioned to provide the high probability of detection necessary to thwart potential threats, combined with extremely low nuisance and false alarm rates necessary to maintain the flow of commerce impacted by the enormous volume of commodities imported in shipping containers. Maintaining flow of commerce also means that inspection must be rapid, requiring relatively non-intrusive, indirect measurements of cargo from outside containers to the extent possible. With increasing information content in such measurements, it is natural to ask how the information might be combined to improve detection. Toward this end, we present an approach to estimating isotopic activity of naturally occurring radioactive material in cargo grouped by commodity type, combining container manifest data with radiography and gamma-ray spectroscopy aligned to location along the container. The heart of this approach is our statistical model of gamma-ray counts within peak regions of interest, which captures the effects of background suppression, counting noise, convolution of neighboring cargo contributions, and down-scattered photons to provide estimates of counts due to decay of specific radioisotopes in cargo alone. Coupled to that model, we use a mechanistic model of self-attenuated radiation flux to estimate the isotopic activity within cargo, segmented by location within each container, that produces those counts. We test our approach by applying it to a set of measurements taken at the Port of Seattle in 2006. This approach to synthesizing disparate available data streams and extraction of cargo characteristics, while relying on several simplifying assumptions and approximations, holds the potential to support improvement of

  13. Parametric estimation in the wave buoy analogy - an elaborated approach based on energy considerations

    DEFF Research Database (Denmark)

    Montazeri, Najmeh; Nielsen, Ulrik Dam

    2014-01-01

    the ship’s wave-induced responses based on different statistical inferences including parametric and non-parametric approaches. This paper considers a concept to improve the estimate obtained by the parametric method for sea state estimation. The idea is illustrated by an analysis made on full-scale...

  14. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    David A. Swanson

    2012-07-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1) their data requirements are minimal in that only two successive age distributions are needed; (2) the two successive age distributions are usually easily obtained from census counts; and (3) the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00). We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.
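
    A condensed sketch of the computation: in a population closed to migration, cohort change ratios between two censuses taken n years apart approximate survival ratios, so a survivorship column can be chained from a radix of l0 = 1 and integrated into life expectancy. The age distributions below are fabricated, person-years use a simple trapezoid rule, and the open-ended tail is ignored:

        import numpy as np

        n = 10  # years between censuses; also the width of the age groups
        pop_t  = np.array([100., 96., 93., 90., 85., 78., 65., 45., 20.])
        pop_t2 = np.array([101., 98., 94., 90., 86., 80., 70., 52., 24.])

        # Those aged x..x+n at time t are aged x+n..x+2n at time t+n.
        ccr = pop_t2[1:] / pop_t[:-1]
        ccr = np.minimum(ccr, 1.0)      # crude guard against apparent "growth"

        l = np.concatenate([[1.0], np.cumprod(ccr)])  # survivorship, radix l0 = 1
        e0 = n * (l[:-1] + l[1:]).sum() / 2           # trapezoid person-years
        print(np.round(l, 3), round(float(e0), 1))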

  15. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    Lucky Tedrow

    2012-01-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1) their data requirements are minimal in that only two successive age distributions are needed; (2) the two successive age distributions are usually easily obtained from census counts; and (3) the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00). We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.

  16. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots.

    Science.gov (United States)

    Bengoa, Pablo; Zubizarreta, Asier; Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles; Mata, Sara

    2017-08-23

    The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging one. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of high computational cost integration methods. This work presents a novel approach based on a virtual sensor which can not only precisely estimate the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration) according to simulation results. The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than a 0.03% of error in comparison with the flexible approach developed in ADAMS Multibody Software.

  17. Aspects of using a best-estimate approach for VVER safety analysis in reactivity initiated accidents

    Energy Technology Data Exchange (ETDEWEB)

    Ovdiienko, Iurii; Bilodid, Yevgen; Ieremenko, Maksym [State Scientific and Technical Centre on Nuclear and Radiation, Safety (SSTC N and RS), Kyiv (Ukraine); Loetsch, Thomas [TUEV SUED Industrie Service GmbH, Energie und Systeme, Muenchen (Germany)

    2016-09-15

    At present, Ukraine faces the problem of small margins to the acceptance criteria in connection with the implementation of a conservative approach for safety evaluations. The problem is particularly topical when conducting feasibility analyses of power up-rating for Ukrainian nuclear power plants. Such a situation requires the implementation of a best-estimate approach on the basis of an uncertainty analysis. For some kinds of accidents, such as the loss-of-coolant accident (LOCA), the best-estimate approach is, more or less, developed and established. However, for reactivity initiated accident (RIA) analysis the application of a best-estimate method can be problematic. A regulatory document in Ukraine defines a nomenclature of neutronics calculations and so-called ''generic safety parameters'' which should be used as boundary conditions for all VVER-1000 (V-320) reactors in RIA analysis. In this paper the ideas of uncertainty evaluations of generic safety parameters in RIA analysis in connection with the use of the 3D neutron kinetic code DYN3D and the GRS SUSA approach are presented.

  18. Empirical models for the estimation of global solar radiation with sunshine hours on horizontal surface in various cities of Pakistan

    International Nuclear Information System (INIS)

    Gadiwala, M.S.; Usman, A.; Akhtar, M.; Jamil, K.

    2013-01-01

    In developing countries like Pakistan, global solar radiation and its components are not available for all locations, which creates a requirement for models that estimate global solar radiation from the climatological parameters of a location. Long-period solar radiation data are available for only five locations in Pakistan (Karachi, Quetta, Lahore, Multan and Peshawar). These locations almost encompass the different geographical features of Pakistan. For this reason, in this study the mean monthly global solar radiation has been estimated using the empirical models of Angstrom, FAO, Glover-McCulloch, and Sangeeta & Tiwari for their diversity of approach and use of climatic and geographical parameters. Empirical constants for these models have been estimated and the results obtained by these models have been tested statistically. The results show encouraging agreement between estimated and measured values. The outcome of these empirical models will assist researchers working on solar energy estimation at locations with similar conditions
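
    All of these models are variants of the Ångström–Prescott relation H/H0 = a + b(s/S), where H is global irradiation, H0 extraterrestrial irradiation, s measured bright sunshine hours and S the day length. A minimal sketch with the classic default constants a = 0.25 and b = 0.50 as placeholders (the paper estimates location-specific values):

        def angstrom_prescott(s, S, H0, a=0.25, b=0.50):
            """Mean daily global solar irradiation on a horizontal surface."""
            return H0 * (a + b * (s / S))

        # 8.2 sunshine hours of an 11.5-hour day, H0 = 35 MJ/m^2/day:
        print(angstrom_prescott(s=8.2, S=11.5, H0=35.0))  # ~21.2 MJ/m^2/day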

  19. A plan for location calibration of IMS stations and near Kazakhstan

    International Nuclear Information System (INIS)

    Richards, P.G.; Kim, W.-Yo.; Khalturin, V.I.

    2001-01-01

    For purposes of monitoring compliance with the Comprehensive Nuclear Test-Ban Treaty, it is desirable to be able to locate seismic events routinely to within an uncertainty not greater than 1000 square km. From more than five years of experience with publication of the Reviewed Event Bulletin (REB) by the Prototype International Data Centre (PIDC), resulting in estimated locations for more than 100,000 seismic events, it is apparent that improved location accuracy is needed in order to reduce uncertainties below 1000 square km. In this paper, we outline a three-year program of applied research which commenced in March 2000 and which has the goal of achieving improved REB locations based upon data to be contributed to the International Data Centre from 30 IMS stations in Eastern Asia. Our first efforts will focus on the four IMS seismographic stations in Kazakhstan (AKT, BRV, KUR, MAK), together with IMS stations ZAL in Russia and AAK in Kyrgyzstan. Following the recommendations of two 'IMS Location Calibration Workshops' held in Oslo, Norway, in 1999 and 2000, our approach is to generate station-specific travel times for each observable seismic phase, as a function of distance and azimuth (and depth, where possible). Such travel times are obtained on the basis of (i) early studies based mainly on earthquake data (e.g. Nersesov and Rautian, 1964), (ii) Deep Seismic Sounding, and (iii) recent studies of nuclear and chemical explosions. We are also using (iv) an empirical approach in which phases are picked at IMS stations, for so-called Ground Truth events whose location is known quite accurately on the basis of additional data, obtained for example from local and regional networks. (author)

  20. A filtering approach to edge preserving MAP estimation of images.

    Science.gov (United States)

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a wide-sense stationary (WSS) Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining and refining that segmentation is described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint, as it provides a continuum of solutions between Wiener filtering and inverse filtering, depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
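
    The paper's region-based MAP algorithm is not reproduced here, but its Wiener-filter building block can be sketched. The following frequency-domain Wiener deconvolution assumes a known, periodic blur kernel and a fixed noise-to-signal ratio; all data are synthetic:

        import numpy as np

        def wiener_deconvolve(blurred, kernel, nsr=0.01):
            """Classical frequency-domain Wiener filter.

            blurred : 2-D degraded image
            kernel  : 2-D point-spread function (same shape as the image)
            nsr     : assumed noise-to-signal power ratio
            """
            H = np.fft.fft2(kernel)
            G = np.fft.fft2(blurred)
            # Wiener filter: conj(H) / (|H|^2 + NSR)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(W * G))

        # Toy example: blur a random "image" with a small box PSF, then restore.
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        psf = np.zeros_like(img)
        psf[:3, :3] = 1.0 / 9.0        # 3x3 box blur anchored at the origin
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
        restored = wiener_deconvolve(blurred, psf, nsr=1e-3)
        print(np.abs(restored - img).mean())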

  1. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)
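
    The paper's feature set is not specified in this record; as an illustration of a non-segmenting quality measure, the sketch below scores a window by the fraction of its spectral power falling in a plausible cardiac band (an assumed stand-in, not the authors' estimator):

        import numpy as np

        def spectral_quality(ppg, fs, band=(0.7, 3.5)):
            """Toy non-segmenting quality feature: fraction of signal power
            inside a plausible cardiac band (0.7-3.5 Hz ~ 42-210 bpm).
            Illustrative stand-in, not the feature set used in the paper."""
            x = ppg - np.mean(ppg)
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            total = spec.sum()
            return spec[in_band].sum() / total if total > 0 else 0.0

        # Clean pulse-like signal vs. the same signal with a slow motion artifact.
        fs = 50.0
        t = np.arange(0, 30, 1.0 / fs)
        clean = np.sin(2 * np.pi * 1.2 * t)                  # ~72 bpm
        noisy = clean + 2.0 * np.sin(2 * np.pi * 0.2 * t)    # ambulatory drift
        print(spectral_quality(clean, fs), spectral_quality(noisy, fs))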

  2. Location of airports - selected quantitative methods

    Directory of Open Access Journals (Sweden)

    Agnieszka Merkisz-Guranowska

    2016-09-01

    Background: The role of air transport in the economic development of a country and its regions cannot be overestimated. The decision concerning an airport's location must be in line with the expectations of all the stakeholders involved. This article deals with the issues related to the choice of sites where airports should be located. Methods: Two main quantitative approaches related to the issue of airport location are presented in this article, i.e. optimizing such a choice and selecting the location from a predefined set. The former involves mathematical programming and formulating the problem as an optimization task; the latter involves ranking the possible variants. Due to their different methodological backgrounds, the authors present the advantages and disadvantages of both approaches and indicate which of them currently finds practical application. Results: Based on real-life examples, the authors present a multi-stage procedure that makes it possible to solve the problem of airport location. Conclusions: Based on the overview of the literature on the subject, the authors point to three types of approach to the issue of airport location which could enable further development of currently applied methods.
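
    As a toy illustration of the 'selection from a predefined set' approach, the sketch below ranks hypothetical candidate sites by demand-weighted distance to demand centres; all names and coordinates are invented:

        import math

        # Hypothetical demand centres: (x, y, passengers per year).
        demand = [(10, 12, 900_000), (40, 8, 400_000), (25, 30, 650_000)]
        # Hypothetical candidate airport sites: name -> (x, y).
        candidates = {"A": (15, 15), "B": (35, 10), "C": (22, 28)}

        def weighted_cost(site):
            """Demand-weighted Euclidean distance to all demand centres."""
            x, y = site
            return sum(w * math.hypot(x - cx, y - cy) for cx, cy, w in demand)

        # Rank the predefined candidate locations (the 'selection' approach).
        ranking = sorted(candidates, key=lambda name: weighted_cost(candidates[name]))
        for name in ranking:
            print(name, round(weighted_cost(candidates[name]), 1))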

  3. H∞ Channel Estimation for DS-CDMA Systems: A Partial Difference Equation Approach

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2013-01-01

    In the communications literature, a number of different algorithms have been proposed for channel estimation problems in which the statistics of the channel noise and observation noise are exactly known. In practical systems, however, the channel parameters are often estimated using training sequences, which makes the statistics of the channel noise difficult to obtain. Moreover, the received signals are corrupted not only by ambient noise but also by multiple-access interference, so the statistics of the observation noise are also difficult to obtain. In this paper, we investigate the H∞ channel estimation problem for direct-sequence code-division multiple-access (DS-CDMA) communication systems with time-varying multipath fading channels. The channel estimator is designed by applying a partial difference equation approach together with innovation analysis theory. This method gives a necessary and sufficient condition for the existence of an H∞ channel estimator.

  4. A multi-objective possibilistic programming approach for locating distribution centers and allocating customers demands in supply chains

    Directory of Open Access Journals (Sweden)

    Seyed Ahmad Yazdian

    2011-01-01

    In this paper, we present a multi-objective possibilistic programming model to locate distribution centers (DCs) and allocate customers' demands in a supply chain network design (SCND) problem. The SCND problem deals with determining the locations of facilities (DCs and/or plants) and the shipment quantities between each two consecutive tiers of the supply chain. The primary objective of this study is to treat as an objective function the different risk factors involved in both locating DCs and shipping products. The risk consists of various components: the risks related to each potential DC location, the risk associated with each arc connecting a plant to a DC, and the risk of shipment from a DC to a customer. The proposed method considers these risks in fuzzy form to handle the uncertainties inherent in the factors. A possibilistic programming approach is proposed to solve the resulting multi-objective problem, and a numerical example for three levels of possibility is presented to analyze the model.

  5. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    International Nuclear Information System (INIS)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji

    2010-01-01

    This paper formulates and estimates petroleum products demand functions in Nigeria at both the aggregate and product levels for the period 1977 to 2006, using a multivariate cointegration approach. The estimated short- and long-run price and income elasticities confirm the conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type: kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)

  7. FRACTURE MECHANICS APPROACH TO ESTIMATE FATIGUE LIVES OF WELDED LAP-SHEAR SPECIMENS

    Energy Technology Data Exchange (ETDEWEB)

    Lam, P.; Michigan, J.

    2014-04-25

    A full range of stress intensity factor solutions for a kinked crack is developed as a function of weld width and sheet thickness. When used with the associated main-crack solutions (global stress intensity factors) in terms of the applied load and specimen geometry, fatigue lives can be estimated for the laser-welded lap-shear specimens. The estimates are in good agreement with the experimental data. A classical solution for an infinitesimal kink is also employed in the approach; however, the resulting life predictions tend to overestimate the actual fatigue lives. Traditional life estimates based on the structural stress, along with the experimental stress-fatigue life (S-N) data, are also provided. In this case, the estimates only agree with the experimental data under higher load conditions.
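
    Life estimates of this kind ultimately integrate a crack-growth law between an initial and a final crack size. The sketch below integrates the classical Paris law da/dN = C(dK)^m with dK = Y*dSigma*sqrt(pi*a); the constants and geometry factor are hypothetical, not the paper's kinked-crack solutions:

        import numpy as np

        # Hypothetical Paris-law constants and loading (illustrative only).
        C, m = 1e-11, 3.0       # da/dN = C * (dK)^m, a in metres, dK in MPa*sqrt(m)
        d_sigma = 120.0         # stress range, MPa
        Y = 1.12                # geometry factor, assumed constant
        a0, af = 0.5e-3, 5e-3   # initial and final crack sizes, m

        def delta_K(a):
            return Y * d_sigma * np.sqrt(np.pi * a)

        # Numerically integrate dN = da / (C * dK(a)^m) from a0 to af.
        a = np.linspace(a0, af, 10_000)
        dN_da = 1.0 / (C * delta_K(a) ** m)
        cycles = np.trapz(dN_da, a)
        print(f"estimated life: {cycles:,.0f} cycles")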

  8. A new approach for product cost estimation using data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Adil Salam

    2012-10-01

    Cost estimation of new products has always been difficult, as only a few design, manufacturing and operational features will be known. In these situations, parametric or non-parametric methods are commonly used to estimate the cost of a product given the corresponding cost drivers. Parametric models use an a priori determined cost function whose parameters are evaluated from historical data. Non-parametric methods, on the other hand, attempt to fit curves to the historical data without a predetermined function. In both methods, it is assumed that the historical data used in the analysis are a true representation of the relation between the cost drivers and the corresponding costs. However, because of efficiency variations among manufacturers and suppliers, changes in supplier selection, market fluctuations, and several other reasons, certain costs in the historical data may be too high, whereas other costs may represent better deals for their corresponding cost drivers. Thus, it may be important to rank the historical data, identify benchmarks, and estimate the target costs of the product based on those benchmarks. In this paper, a novel adaptation of cost drivers and cost data is introduced in order to use data envelopment analysis for ranking cost data, identifying benchmarks, and then estimating the target costs of a new product based on these benchmarks. An illustrative case study is presented for the cost estimation of the landing gear of an aircraft manufactured by an aerospace company located in Montreal, Canada.
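
    A minimal sketch of the DEA building block (the textbook input-oriented CCR envelopment model) solved as a linear program is given below; the DMU data are invented, and the formulation is the standard one rather than the paper's specific adaptation:

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical DMUs (e.g. past landing-gear programmes):
        # rows = DMUs, columns = inputs (cost drivers) / outputs.
        X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # inputs
        Y = np.array([[1.0], [1.0], [1.2]])                  # outputs

        def ccr_efficiency(j0):
            """Input-oriented CCR efficiency of DMU j0 (envelopment form):
            min theta  s.t.  X^T lam <= theta * x_j0,  Y^T lam >= y_j0,
            lam >= 0.  Decision vector z = [theta, lam_1..lam_n]."""
            n = X.shape[0]
            c = np.r_[1.0, np.zeros(n)]
            # Inputs: sum_j lam_j * x_ij - theta * x_i,j0 <= 0
            A_in = np.c_[-X[j0], X.T]
            b_in = np.zeros(X.shape[1])
            # Outputs: -sum_j lam_j * y_rj <= -y_r,j0
            A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]
            b_out = -Y[j0]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[b_in, b_out],
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.x[0]

        for j in range(X.shape[0]):
            print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")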

  9. Different approaches to estimation of reactor pressure vessel material embrittlement

    Directory of Open Access Journals (Sweden)

    V. M. Revka

    2013-03-01

    Surveillance test data for a nuclear power plant operating in Ukraine have been used to estimate WWER-1000 reactor pressure vessel (RPV) material embrittlement. The beltline materials (base and weld metal) were characterized using Charpy impact and fracture toughness test methods. The fracture toughness test data were analyzed according to the standard ASTM 1921-05. Pre-cracked Charpy specimens were tested to estimate the shift of the reference temperature T0 due to neutron irradiation; the maximum shift of T0 is 84 °C. A radiation embrittlement rate AF for the RPV material was estimated using the fracture toughness test data. In addition, the AF factor based on the Charpy curve shift (ΔTF) has been evaluated. A comparison of the AF values estimated according to the different approaches has shown good agreement between the radiation shift of the Charpy impact and fracture toughness curves for weld metal with high nickel content (1.88 wt.%). Therefore, Charpy impact test data can be successfully applied to estimate the fracture toughness curve shift and hence the embrittlement rate. Furthermore, it was revealed that the radiation embrittlement rate for weld metal is higher than predicted by the design relationship. The enhanced embrittlement is most probably related to the simultaneously high nickel and manganese content of the weld metal.

  10. Approach of the estimation for the highest energy of the gamma rays

    International Nuclear Information System (INIS)

    Dumitrescu, Gheorghe

    2004-01-01

    Over the last decade, the composition of ultra-high-energy cosmic rays has been under debate, with some authors suggesting that a light composition is the related issue. There has been a parallel debate concerning the upper limit of the energy of gamma rays. Bottom-up approaches suggest a limit at about 10^15 eV; some top-down approaches raise this limit to about 10^20 eV or above. The present paper provides an approach for estimating the upper limit of the energy of gamma rays, building on the recent paper of Claus W. Turtur. (author)

  11. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7 cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2 km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  13. Probability Grid: A Location Estimation Scheme for Wireless Sensor Networks

    National Research Council Canada - National Science Library

    Stoleru, Radu; Stankovic, John A

    2004-01-01

    Location information is of paramount importance for Wireless Sensor Networks (WSN). The accuracy of collected data can be significantly affected by imprecise positioning of the event of interest...

  14. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB.
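
    In its simplest form, such an estimate sums, over accident classes, the class core damage frequency times an assumed conditional probability of a large early release. A purely illustrative sketch, with invented class names and frequencies:

        # Hypothetical accident classes: core damage frequency (per reactor-year)
        # and an assumed conditional probability of a large early release.
        classes = {
            "station_blackout": (2.0e-5, 0.10),
            "transient":        (1.5e-5, 0.02),
            "small_loca":       (8.0e-6, 0.05),
        }

        lerf = sum(cdf * ccfp for cdf, ccfp in classes.values())
        print(f"LERF ~ {lerf:.2e} per reactor-year")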

  15. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  16. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
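
    Observability is commonly checked numerically through the rank of the measurement Jacobian; the sketch below uses a hypothetical two-line DC power-flow Jacobian as a stand-in for the paper's algebraic pivoting scheme:

        import numpy as np

        # Hypothetical linearized (DC) measurement Jacobian H: rows are
        # measurements, columns are bus voltage angles with the slack removed.
        # 3 buses (slack = bus 0), flow measurements on lines 0-1 and 1-2.
        H = np.array([
            [ 1.0,  0.0],   # flow 0-1 depends on theta_1 (slack angle fixed)
            [-1.0,  1.0],   # flow 1-2 depends on theta_1 and theta_2
        ])

        n_states = H.shape[1]
        rank = np.linalg.matrix_rank(H)
        print("observable" if rank == n_states else "unobservable",
              f"(rank {rank}/{n_states})")

        # Dropping the second measurement makes theta_2 unobservable:
        rank2 = np.linalg.matrix_rank(H[:1])
        print("observable" if rank2 == n_states else "unobservable",
              f"(rank {rank2}/{n_states})")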

  17. Delay Kalman Filter to Estimate the Attitude of a Mobile Object with Indoor Magnetic Field Gradients

    Directory of Open Access Journals (Sweden)

    Christophe Combettes

    2016-05-01

    More and more services are based on knowing the location of pedestrians equipped with connected objects (smartphones, smartwatches, etc.). One part of the location estimation process is attitude estimation. Many algorithms have been proposed, but they principally target open-space areas where the local magnetic field equals the Earth's field. Unfortunately, this assumption does not hold indoors, where the use of magnetometer arrays or magnetic field gradients has been proposed instead. However, current approaches omit the impact of past state estimates on the current orientation estimate, especially when a reference field is computed over a sliding window. A novel Delay Kalman filter, the Delay MAGYQ, is proposed in this paper to integrate this time correlation. Experimental assessment, conducted in a motion lab with a handheld inertial and magnetic mobile unit, shows that the novel filter better estimates the Euler angles of the handheld device, with an 11.7° mean error on the yaw angle as compared to 16.4° with a common Additive Extended Kalman filter.
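
    The Delay MAGYQ equations are not reproduced in this record; the sketch below only illustrates the generic idea of handling a delayed measurement in a Kalman filter, by rolling back to a stored estimate and replaying the predictions. The one-dimensional model and all numbers are illustrative:

        import numpy as np

        q, r, d = 0.01, 0.25, 3      # process noise, measurement noise, delay
        rng = np.random.default_rng(1)

        x_est, p_est = 0.0, 1.0
        history = []                 # (x_est, p_est) after each predict step
        rates = 0.1 * np.ones(50)    # gyro-like rate inputs (truth: a ramp)
        truth = np.cumsum(rates)

        for k, u in enumerate(rates):
            # Predict: integrate the rate input.
            x_est, p_est = x_est + u, p_est + q
            history.append((x_est, p_est))
            # A measurement of step k-d arrives now (simulated with noise).
            if k >= d:
                z = truth[k - d] + rng.normal(0.0, np.sqrt(r))
                x_b, p_b = history[k - d]            # roll back to step k-d
                kk = p_b / (p_b + r)                 # Kalman gain
                x_b = x_b + kk * (z - x_b)
                p_b = (1 - kk) * p_b
                history[k - d] = (x_b, p_b)
                for j in range(k - d + 1, k + 1):    # replay the predictions
                    x_b, p_b = x_b + rates[j], p_b + q
                    history[j] = (x_b, p_b)
                x_est, p_est = x_b, p_b

        print(f"final estimate {x_est:.2f} vs truth {truth[-1]:.2f}")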

  18. Spatio-temporal patterns of Cu contamination in mosses using geostatistical estimation

    International Nuclear Information System (INIS)

    Martins, Anabela; Figueira, Rui; Sousa, António Jorge; Sérgio, Cecília

    2012-01-01

    Several recent studies have reported temporal trends in metal contamination in mosses, but such assessments did not evaluate uncertainty in temporal changes, therefore providing weak statistical support for time comparisons. Furthermore, levels of contaminants in the environment change in both space and time, requiring space-time modelling methods for map estimation. We propose an indicator of spatial and temporal variation based on space-time estimation by indicator kriging, where uncertainty at each location is estimated from the local distribution function, thereby calculating variability intervals for comparison between several biomonitoring dates. This approach was exemplified using copper concentrations in mosses from four Portuguese surveys (1992, 1997, 2002 and 2006). Using this approach, we identified a general decrease in copper contamination, but spatial patterns were not uniform, and from the uncertainty intervals, changes could not be considered significant in the majority of the study area.

  19. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Background: Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results: We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions: MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  20. Value drivers: an approach for estimating health and disease management program savings.

    Science.gov (United States)

    Phillips, V L; Becker, Edmund R; Howard, David H

    2013-12-01

    Health and disease management (HDM) programs have faced challenges in documenting savings related to their implementation. The objective of this study was to describe OptumHealth's (Optum) methods for estimating anticipated savings from HDM programs using Value Drivers. Optum's general methodology was reviewed, along with details of 5 high-use Value Drivers. The results showed that the Value Driver approach offers an innovative method for estimating savings associated with HDM programs. The authors demonstrated how real-time savings can be estimated for 5 Value Drivers commonly used in HDM programs: (1) use of beta-blockers in treatment of heart disease, (2) discharge planning for high-risk patients, (3) decision support related to chronic low back pain, (4) obesity management, and (5) securing transportation for primary care. The validity of savings estimates is dependent on the type of evidence used to gauge the intervention effect, generating changes in utilization and, ultimately, costs. The savings estimates derived from the Value Driver method are generally reasonable to conservative and provide a valuable framework for estimating financial impacts from evidence-based interventions.

  1. A multi-method and multi-scale approach for estimating city-wide anthropogenic heat fluxes

    Science.gov (United States)

    Chow, Winston T. L.; Salamanca, Francisco; Georgescu, Matei; Mahalov, Alex; Milne, Jeffrey M.; Ruddell, Benjamin L.

    2014-12-01

    A multi-method approach estimating summer waste heat emissions from anthropogenic activities (QF) was applied to a major subtropical city (Phoenix, AZ). These included detailed, quality-controlled inventories of city-wide population density and traffic counts to estimate waste heat emissions from population and vehicular sources, respectively, as well as waste heat simulations derived from urban electrical consumption generated by a coupled building energy - regional climate model (WRF-BEM + BEP). These component QF data were subsequently summed and mapped through Geographic Information Systems techniques to enable analysis over local (i.e. census-tract) and regional (i.e. metropolitan area) scales. Through this approach, local mean daily QF estimates compared reasonably with (1) observed daily surface energy balance residuals from an eddy covariance tower sited within a residential area and (2) estimates from inventory methods employed in a prior study, with improved sensitivity to temperature and precipitation variations. Regional analysis indicates substantial variations in both mean and maximum daily QF, which varied with urban land use type. Average regional daily QF was ∼13 W m^-2 for the summer period. Temporal analyses also indicated notable differences using this approach with previous estimates of QF in Phoenix over different land uses, with much larger peak fluxes averaging ∼50 W m^-2 occurring in commercial or industrial areas during late summer afternoons. The spatio-temporal analysis of QF also suggests that it may influence the form and intensity of the Phoenix urban heat island, specifically through additional early evening heat input and by modifying the urban boundary layer structure through increased turbulence.

  2. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    OpenAIRE

    Adrian Jinich; Dmitrij Rappoport; Ian Dunn; Benjamin Sanchez-Lengeling; Roberto Olivares-Amaya; Elad Noor; Arren Bar Even; Alán Aspuru-Guzik

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfe...

  3. Narrow Artificial Intelligence with Machine Learning for Real-Time Estimation of a Mobile Agent’s Location Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cédric Beaulac

    2017-01-01

    We propose to use a supervised machine learning technique to track the location of a mobile agent in real time. Hidden Markov Models are used to build artificial intelligence that estimates the unknown position of a mobile target moving in a defined environment. This narrow artificial intelligence performs two distinct tasks. First, it provides real-time estimation of the mobile agent’s position using the forward algorithm. Second, it uses the Baum–Welch algorithm as a statistical learning tool to gain knowledge of the mobile target. Finally, an experimental environment is proposed, namely, a video game that we use to test our artificial intelligence. We present statistical and graphical results to illustrate the efficiency of our method.
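
    The real-time estimation task corresponds to the standard HMM forward recursion. A minimal sketch with an illustrative three-cell corridor (transition and emission matrices invented):

        import numpy as np

        # Toy HMM: 3 hidden cells in a corridor, noisy proximity observations.
        A = np.array([[0.7, 0.3, 0.0],    # transition probabilities
                      [0.2, 0.6, 0.2],
                      [0.0, 0.3, 0.7]])
        B = np.array([[0.8, 0.2],         # emission: P(obs | cell), obs in {near, far}
                      [0.4, 0.6],
                      [0.1, 0.9]])
        pi = np.array([1/3, 1/3, 1/3])

        def forward_filter(obs):
            """Normalized forward algorithm: P(state_t | obs_1..t) per step."""
            alpha = pi * B[:, obs[0]]
            alpha /= alpha.sum()
            beliefs = [alpha]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                alpha /= alpha.sum()      # normalize to keep a proper belief
                beliefs.append(alpha)
            return np.array(beliefs)

        print(forward_filter([0, 0, 1, 1]).round(3))   # belief drifts rightward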

  4. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

    The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...

  5. Estimating a WTP-based value of a QALY: the 'chained' approach.

    Science.gov (United States)

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality-adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in developing more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade-off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. Within EuroVaQ, we developed an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country, and almost 22,000 responses were achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side, with the (trimmed) 'all-country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement.
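
    The arithmetic of the chained approach can be illustrated in a few lines: a standard-gamble utility converts the avoided health state into a QALY loss, and the stated WTP is divided by that loss. All numbers below are invented:

        # Toy 'chained' calculation: utility of the health state from standard
        # gamble, then WTP to avoid a spell of that state (values illustrative).
        utility = 0.8            # SG utility of the health state (1.0 = full health)
        duration_years = 0.5     # length of the avoided spell
        wtp = 2_500.0            # stated willingness to pay to avoid the spell

        qaly_loss = (1.0 - utility) * duration_years
        wtp_per_qaly = wtp / qaly_loss
        print(f"QALY loss {qaly_loss:.2f} -> WTP per QALY ~ {wtp_per_qaly:,.0f}")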

  7. Protein (multi-)location prediction: using location inter-dependencies in a probabilistic framework.

    Science.gov (United States)

    Simha, Ramanuja; Shatkay, Hagit

    2014-03-19

    Knowing the location of a protein within the cell is important for understanding its function, role in biological processes, and potential use as a drug target. Much progress has been made in developing computational methods that predict single locations for proteins. Most such methods are based on the over-simplifying assumption that proteins localize to a single location. However, it has been shown that proteins localize to multiple locations. While a few recent systems attempt to predict multiple locations of proteins, their performance leaves much room for improvement. Moreover, they typically treat locations as independent and do not attempt to utilize possible inter-dependencies among locations. Our hypothesis is that directly incorporating inter-dependencies among locations into both the classifier-learning and the prediction process can improve location prediction performance. We present a new method and a preliminary system we have developed that directly incorporates inter-dependencies among locations into the location-prediction process of multiply-localized proteins. Our method is based on a collection of Bayesian network classifiers, where each classifier is used to predict a single location. Learning the structure of each Bayesian network classifier takes into account inter-dependencies among locations, and the prediction process uses estimates involving multiple locations. We evaluate our system on a dataset of single- and multi-localized proteins (the most comprehensive protein multi-localization dataset currently available, derived from the DBMLoc dataset). Our results, obtained by incorporating inter-dependencies, are significantly higher than those obtained by classifiers that do not use inter-dependencies. The performance of our system on multi-localized proteins is comparable to a top performing system (YLoc+), without being restricted only to location-combinations present in the training set.

  8. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, they must be estimated by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation in which modeling approaches of increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming to quantify the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method of simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely
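
    A minimal simulated-annealing loop in the spirit of the global optimization step described above, fitting a single killing rate to toy data (the model and data are stand-ins, not the whole-blood assay):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy "experiment": exponential killing of pathogens, unknown rate 0.35.
        t = np.linspace(0, 10, 11)
        data = np.exp(-0.35 * t) + rng.normal(0, 0.02, t.size)

        def loss(rate):
            return np.sum((np.exp(-rate * t) - data) ** 2)

        # Simulated annealing over the scalar rate parameter.
        rate = 1.0
        temp = 1.0
        for step in range(5000):
            cand = rate + rng.normal(0, 0.05)
            if cand > 0:
                d = loss(cand) - loss(rate)
                if d < 0 or rng.random() < np.exp(-d / temp):  # Metropolis rule
                    rate = cand
            temp *= 0.999                                      # geometric cooling
        print(f"estimated rate: {rate:.3f}")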

  9. Online Location-based Mobile Gaming: CityZombie - A basic approach to introducing location in mobile games

    OpenAIRE

    Rolland, Øyvind

    2008-01-01

    Mobile phone gaming has seen enormous growth over the last decade, and many countries now have more cell phone subscriptions than people. Yet despite the ever-increasing interest in games, the mobile gaming market still has not reached its full potential. Newer and more powerful phones with interesting features hit the market every day. Many of those features are directed at determining position, which opens up the prospect of location-based gaming. This branch of gaming is ...

  10. Model for 3D-visualization of streams and techno-economic estimate of locations for construction of small hydropower plants

    International Nuclear Information System (INIS)

    Izeiroski, Subija

    2012-01-01

    The main research of this dissertation focuses on the development of a model for the preliminary assessment of hydropower potential for the construction of small hydropower plants using a Geographic Information System (GIS). For this purpose, the first part of the dissertation develops a methodological approach for 3D visualization of the land surface and river streams on a GIS platform. As input graphical data, digitized maps at scale 1:25000 are used, where each map covers an area of 10x14 km and consists of many layers of graphic data in shape (vector) format. Using GIS tools, a digital elevation model (DEM) has been obtained from the input point and isohyetal contour data layers with different interpolation techniques; the DEM is further used to derive additional maps of useful land surface parameters, such as slope raster maps, hillshade models of the surface, and various maps of hydrologic parameters. The main focus of the research is directed toward developing contemporary GIS-based methodological approaches for assessing hydropower potential and selecting suitable locations for the construction of small hydropower plants (SHPs), especially in mountainous hilly areas that are rich in water resources. For this purpose, a practical analysis is carried out for a study area encompassing the watershed of the Brajchanska River on the east side of Prespa Lake. The main emphasis in the analysis of suitable locations for SHP construction is placed on techno-engineering criteria; in this context, a topographic analysis is made of the slope (gradient) of all river streams as well as of particular streams, together with a hydrological analysis of the flow rates (discharges). The slope analysis is executed at the pixel (cell) level as well as at the segment (line) level along a given stream. The slope value at segment level gives in GIS
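
    Slope rasters of the kind described are typically derived from the DEM by finite differences. A minimal sketch with a synthetic DEM and an assumed 25 m cell size:

        import numpy as np

        # Synthetic DEM: elevations in metres on a regular grid (invented data).
        x, y = np.meshgrid(np.linspace(0, 1000, 41), np.linspace(0, 1000, 41))
        dem = 800 - 0.2 * x + 50 * np.sin(y / 200.0)

        cell = 25.0                               # grid spacing in metres
        dz_dy, dz_dx = np.gradient(dem, cell)     # finite-difference gradients
        slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

        print(f"mean slope: {slope_deg.mean():.1f} deg, "
              f"max slope: {slope_deg.max():.1f} deg")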

  11. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils, based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system to assign a score to a given site based on parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to a potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach involves using the computer models to back-calculate the soil contaminant levels in the vadose zone that would create a particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs, since the models use the soil clean-up levels as 'input' and the groundwater levels as 'output'. The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington.

  12. MRS/IS facility co-located with a repository: preconceptual design and life-cycle cost estimates

    International Nuclear Information System (INIS)

    Smith, R.I.; Nesbitt, J.F.

    1982-11-01

    A program is described to examine the various alternatives for monitored retrievable storage (MRS) and interim storage (IS) of spent nuclear fuel, solidified high-level waste (HLW), and transuranic (TRU) waste until appropriate geologic repository/repositories are available. The objectives of this study are: (1) to develop a preconceptual design for an MRS/IS facility that would become the principal surface facility for a deep geologic repository when the repository is opened, (2) to examine various issues such as transportation of wastes, licensing of the facility, and environmental concerns associated with operation of such a facility, and (3) to estimate the life cycle costs of the facility when operated in response to a set of scenarios which define the quantities and types of waste requiring storage in specific time periods, which generally span the years from 1990 until 2016. The life cycle costs estimated in this study include: the capital expenditures for structures, casks and/or drywells, storage areas and pads, and transfer equipment; the cost of staff labor, supplies, and services; and the incremental cost of transporting the waste materials from the site of origin to the MRS/IS facility. Three scenarios are examined to develop estimates of life cycle costs of the MRS/IS facility. In the first scenario, HLW canisters are stored, starting in 1990, until the co-located repository is opened in the year 1998. Additional reprocessing plants and repositories are placed in service at various intervals. In the second scenario, spent fuel is stored, starting in 1990, because the reprocessing plants are delayed in starting operations by 10 years, but no HLW is stored because the repositories open on schedule. In the third scenario, HLW is stored, starting in 1990, because the repositories are delayed 10 years, but the reprocessing plants open on schedule

  13. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report, the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically, in a simplified fashion, using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
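
    An ordinary kriging estimate at an unsampled location solves a small linear system built from a semivariogram. A minimal sketch with invented well data and an assumed spherical variogram:

        import numpy as np

        # Toy data: water-table elevations at five wells (invented coordinates).
        pts = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [4.0, 4.0], [2.0, 2.0]])
        z = np.array([10.0, 12.0, 11.0, 14.0, 12.5])

        def gamma(h, sill=4.0, rng_=5.0):
            """Assumed spherical semivariogram (parameters are illustrative)."""
            h = np.minimum(h, rng_)
            return sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)

        def krige(x0):
            """Ordinary kriging estimate at location x0."""
            n = len(pts)
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = gamma(d)
            A[n, :n] = A[:n, n] = 1.0    # unbiasedness constraint
            b = np.r_[gamma(np.linalg.norm(pts - x0, axis=1)), 1.0]
            w = np.linalg.solve(A, b)
            return w[:n] @ z             # w[n] is the Lagrange multiplier

        print(f"estimate at (2, 3): {krige(np.array([2.0, 3.0])):.2f}")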

  14. Evaluation of alternative model-data fusion approaches in water balance estimation across Australia

    Science.gov (United States)

    van Dijk, A. I. J. M.; Renzullo, L. J.

    2009-04-01

    Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required mean that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling have to date mostly relied on static parameter fitting against observations and have made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for several gauged test catchments in southeast Australia.

  15. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    Science.gov (United States)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated, and thus more challenging for SWE estimation, than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American, where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approximately 79% of the EnBS value. Sensitivity tests show relative insensitivity of both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity of the EnBS to the mean prior precipitation input, especially where significant prior biases exist.
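
    The core of a particle batch smoother is a batch reweighting of prior ensemble members by the likelihood of all observations in the window. A toy sketch with a one-parameter ensemble and synthetic observations (all values illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 1000                               # number of particles

        # Prior ensemble: each particle is a precipitation multiplier that
        # scales a toy SWE trajectory over the season.
        mult = rng.lognormal(mean=0.0, sigma=0.4, size=n)
        base_swe = np.array([5.0, 15.0, 30.0, 40.0, 25.0])   # cm, toy season
        swe = np.outer(mult, base_swe)                        # (n, 5) trajectories

        # Synthetic observations of the full batch (e.g. fSCA-derived proxies).
        truth = 1.3 * base_swe
        obs = truth + rng.normal(0, 2.0, truth.size)
        r = 2.0 ** 2                                          # obs error variance

        # Batch weights: product of Gaussian likelihoods over all times.
        log_w = -0.5 * np.sum((swe - obs) ** 2, axis=1) / r
        w = np.exp(log_w - log_w.max())
        w /= w.sum()

        posterior_mean = w @ swe
        print("prior mean:", swe.mean(axis=0).round(1))
        print("posterior mean:", posterior_mean.round(1), "truth:", truth.round(1))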

  16. The estimated economic burden of genital herpes in the United States. An analysis using two costing approaches

    Directory of Open Access Journals (Sweden)

    Fisman David N

    2001-06-01

    Background: Only limited data exist on the costs of genital herpes (GH) in the USA. We estimated the economic burden of GH in the USA using two different costing approaches. Methods: The first approach was a cross-sectional survey of a sample of primary and secondary care physicians, analyzing health care resource utilization. The second approach was based on the analysis of a large administrative claims data set. Both approaches were used to generate the number of patients with symptomatic GH seeking medical treatment, the average medical expenditures and estimated national costs. Costs were valued from a societal and a third-party payer's perspective in 1996 US dollars. Results: In the cross-sectional study, based on an estimated 3.1 million symptomatic episodes per year in the USA, the annual direct medical costs were estimated at a maximum of $984 million. Of these costs, 49.7% were caused by drug expenditures, 47.7% by outpatient medical care and 2.6% by hospital costs. Indirect costs accounted for a further $214 million. The analysis of 1,565 GH cases from the claims database yielded a minimum national estimate of $283 million in direct medical costs. Conclusions: GH appears to be an important public health problem from the health economic point of view. The observed difference in direct medical costs may be explained by the influence of treatment compliance and possible undersampling of subpopulations in the claims data set. The present study demonstrates the validity of using different approaches in estimating the economic burden of a specific disease to the health care system.

  17. Understanding the Representativeness of Mobile Phone Location Data in Characterizing Human Mobility Indicators

    Directory of Open Access Journals (Sweden)

    Shiwei Lu

    2017-01-01

    The advent of big data has aided understanding of the driving forces of human mobility, which is beneficial for many fields, such as mobility prediction, urban planning, and traffic management. However, the data sources used in many studies, such as mobile phone location and geo-tagged social media data, are sparsely sampled in the temporal scale. An individual's records can be distributed over a few hours a day, or a week, or over just a few hours a month. Thus, the representativeness of sparse mobile phone location data in characterizing human mobility requires analysis before the data are used to derive human mobility patterns. This paper investigates this important issue using subscriber mobile phone location data collected by a major carrier in Shenzhen, China. A dataset of over 5 million mobile phone subscribers that covers 24 h a day is used as a benchmark to test the representativeness of mobile phone location data on human mobility indicators, such as total travel distance, movement entropy, and radius of gyration. This study divides the dataset by hour, using 2- to 23-h segments to evaluate how the limited temporal availability of mobile phone location data affects representativeness. The results show that different numbers of hourly segments affect estimations of human mobility indicators and can cause overestimation or underestimation from the individual perspective. On average, the total travel distance and movement entropy tend to be underestimated. The underestimation coefficient for total travel distance declines approximately linearly as the number of time segments increases, and the underestimation coefficient for movement entropy declines logarithmically as the time segments increase, whereas the radius of gyration tends to be more ambiguous due to the loss of isolated locations. This paper suggests that researchers should carefully interpret results derived from this type of
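
    Two of the indicators named above can be computed in a few lines; the sketch below contrasts a full day of (toy) visit records with a sparse subsample to show the underestimation effect:

        import numpy as np

        def mobility_indicators(xy):
            """Radius of gyration and (location-frequency) movement entropy
            for one user's visit records. xy: (n, 2) array of visited cells."""
            center = xy.mean(axis=0)
            rg = np.sqrt(np.mean(np.sum((xy - center) ** 2, axis=1)))
            _, counts = np.unique(xy, axis=0, return_counts=True)
            p = counts / counts.sum()
            entropy = -np.sum(p * np.log2(p))
            return rg, entropy

        # Full-day records vs. a sparse 2-hour sample of the same user (toy data).
        full = np.array([[0, 0], [0, 0], [5, 2], [5, 2], [9, 8], [0, 0], [5, 2]],
                        dtype=float)
        sparse = full[:2]                # only the first records are observed
        print("full day:", mobility_indicators(full))
        print("sparse:  ", mobility_indicators(sparse))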

  18. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary

  19. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

    NARCIS (Netherlands)

    Klaassen, C.; Venetiaan, S.

    2010-01-01

    Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

  20. Smartphone-Based Real-Time Indoor Location Tracking With 1-m Precision.

    Science.gov (United States)

    Liang, Po-Chou; Krause, Paul

    2016-05-01

    Monitoring the activities of daily living of the elderly at home is widely recognized as useful for the detection of new or deteriorating health conditions. However, the accuracy of existing indoor location tracking systems remains unsatisfactory. The aim of this study was, therefore, to develop a localization system that can identify a patient's real-time location in a home environment with maximum estimation error of 2 m at a 95% confidence level. A proof-of-concept system based on a sensor fusion approach was built with considerations for lower cost, reduced intrusiveness, and higher mobility, deployability, and portability. This involved the development of both a step detector using the accelerometer and compass of an iPhone 5, and a radio-based localization subsystem using a Kalman filter and received signal strength indication to tackle issues that had been identified as limiting accuracy. The results of our experiments were promising with an average estimation error of 0.47 m. We are confident that with the proposed future work, our design can be adapted to a home-like environment with a more robust localization solution.
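
    As an illustration of the radio side of such a system, the sketch below smooths a noisy RSSI series with a one-dimensional Kalman filter, the same family of filter the study combines with received signal strength indication; the noise variances (q, r) and the signal values are illustrative assumptions, not parameters from the paper.

        import numpy as np

        def kalman_smooth_rssi(rssi, q=0.05, r=4.0):
            # q: process noise variance (how fast the true signal may drift)
            # r: measurement noise variance (RSSI jitter, in dB^2)
            x, p = rssi[0], 1.0              # initial state estimate and its variance
            smoothed = []
            for z in rssi:
                p += q                       # predict: uncertainty grows
                k = p / (p + r)              # Kalman gain
                x += k * (z - x)             # update with the new measurement
                p *= 1.0 - k
                smoothed.append(x)
            return np.array(smoothed)

        # Example: noisy RSSI readings around -60 dBm
        noisy = -60.0 + 3.0 * np.random.default_rng(0).normal(size=100)
        print(kalman_smooth_rssi(noisy)[-5:])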

  1. Location estimation method of steam leak in pipeline using leakage area analysis

    International Nuclear Information System (INIS)

    Kim, Se Oh; Jeon, Hyeong Seop; Son, Ki Sung; Park, Jong Won

    2016-01-01

    It is important to have a pipeline leak-detection system that determines the presence of a leak and quickly identifies its location. Current leak detection methods use acoustic emission sensors, microphone arrays, and camera images. Recently, many researchers have been focusing on using cameras for detecting leaks. The advantage of this method is that it can survey a wide area and monitor a pipeline over a long distance. However, conventional methods using camera monitoring are unable to pinpoint an exact leak location. In this paper, we propose a method of detecting leak locations using leak-detection results combined with multi-frame analysis. The proposed method is verified by experiment.

  2. Location estimation method of steam leak in pipeline using leakage area analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Se Oh; Jeon, Hyeong Seop; Son, Ki Sung [Sae An Engineering Corp., Seoul (Korea, Republic of); Park, Jong Won [Dept. of Information Communications Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-10-15

    It is important to have a pipeline leak-detection system that determines the presence of a leak and quickly identifies its location. Current leak detection methods use acoustic emission sensors, microphone arrays, and camera images. Recently, many researchers have been focusing on using cameras for detecting leaks. The advantage of this method is that it can survey a wide area and monitor a pipeline over a long distance. However, conventional methods using camera monitoring are unable to pinpoint an exact leak location. In this paper, we propose a method of detecting leak locations using leak-detection results combined with multi-frame analysis. The proposed method is verified by experiment.
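
    The two records above outline the idea without code; the following minimal sketch shows one plausible form of the multi-frame analysis: pixels that change persistently across consecutive frames are flagged, and their centroid is reported as the candidate leak location. The thresholds and synthetic frames are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        def leak_centroid(frames, diff_thresh=25.0, persistence=0.8):
            # Flag pixels that change persistently across consecutive frames
            # (steam flicker) and return the centroid of that region.
            frames = np.asarray(frames, dtype=float)
            changed = np.abs(np.diff(frames, axis=0)) > diff_thresh  # per-pair change masks
            score = changed.mean(axis=0)       # fraction of frame pairs with change
            mask = score >= persistence        # persistently changing pixels
            if not mask.any():
                return None
            ys, xs = np.nonzero(mask)
            return xs.mean(), ys.mean()        # candidate leak location (x, y)

        # Synthetic test: a flickering patch at columns 30-34, rows 20-24
        frames = np.zeros((10, 64, 64))
        for i in range(10):
            frames[i, 20:25, 30:35] = (i % 2) * 100.0  # patch toggles every frame
        print(leak_centroid(frames))                    # approx. (32.0, 22.0)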

  3. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods that utilize data other than the segment data used for LACIE should be examined. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  4. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)
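
    The leave-one-out cross-validation (LOOCV) protocol used to score the classifier can be sketched as follows; the logistic-regression classifier and the synthetic feature matrix are stand-ins (the paper selects features with an improved genetic algorithm), so only the validation loop reflects the abstract.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut

        # Toy stand-ins for the selected radiomics features and IDH1 labels
        rng = np.random.default_rng(1)
        X = rng.normal(size=(110, 20))                   # 110 patients, 20 features
        y = (X[:, 0] + 0.5 * rng.normal(size=110) > 0).astype(int)

        correct = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

        print("LOOCV accuracy:", correct / len(y))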

  5. Using Landsat Vegetation Indices to Estimate Impervious Surface Fractions for European Cities

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Fensholt, Rasmus; Drews, Martin

    2015-01-01

    and applicability of vegetation indices (VI), from Landsat imagery, to estimate IS fractions for European cities. The accuracy of three different measures of vegetation cover is examined for eight urban areas at different locations in Europe. The Normalized Difference Vegetation Index (NDVI) and Soil Adjusted...... Vegetation Index (SAVI) are converted to IS fractions using a regression modelling approach. Also, NDVI is used to estimate fractional vegetation cover (FR), and consequently IS fractions. All three indices provide fairly accurate estimates (MAEs ≈ 10%, MBE’s

  6. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.
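
    A minimal sketch of the penalty-parameter idea follows: constraint violations are folded into the fitness with a weight that shrinks by a reduction factor each generation, so parameter values found early are not destroyed later. The toy misfit, constraint, and population step are hypothetical; the paper's actual model couples infiltration parameters with unit hydrograph ordinates.

        import numpy as np

        rng = np.random.default_rng(6)

        def misfit(theta):
            # Toy stand-in for the simulated-vs-observed runoff error
            return float(np.sum((theta - np.array([0.4, 1.5])) ** 2))

        def violation(theta):
            # Constraint violation: parameters/ordinates must be non-negative
            return float(np.sum(np.maximum(-theta, 0.0)))

        def fitness(theta, gen, penalty0=1e3, reduction=0.9):
            # The penalty weight shrinks each generation by a reduction factor
            # so parameter values found early are not destroyed later on.
            return misfit(theta) + penalty0 * reduction ** gen * violation(theta)

        # One crude GA generation: evaluate a random population, keep the fittest
        population = rng.normal(0.0, 2.0, size=(50, 2))
        best = min(population, key=lambda t: fitness(t, gen=0))
        print("best candidate after generation 0:", np.round(best, 3))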

  7. Post-classification approaches to estimating change in forest area using remotely sensed auxiliary data.

    Science.gov (United States)

    Ronald E. McRoberts

    2014-01-01

    Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...

  8. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Full Text Available Fractal compression is the lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and better image quality with fast decoding time, but improving encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: different block matching motion estimation approaches for finding the motion vectors in a frame, based on inter-frame coding and intra-frame coding (i.e., individual frame coding), and automata theory based coding approaches to represent an image/sequence of images. Though different review papers exist related to fractal coding, this paper is different in many senses. One can develop new shape patterns for motion estimation and modify existing block matching motion estimation with automata coding to explore the fractal compression technique, with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.

  9. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method for small sample sizes is higher than under Firth's approach. For larger sample sizes, the probability decreases and is relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
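
    The non-existence of the MLE under separation can be made concrete in a few lines: for completely separated data the probit log-likelihood increases monotonically toward zero as the coefficient grows, so no finite maximizer exists. The data below are invented for illustration.

        import numpy as np
        from scipy.stats import norm

        # Completely separated data: x < 0 always gives y = 0, x > 0 always y = 1
        x = np.array([-2.0, -1.5, -0.5, 0.5, 1.2, 2.3])
        y = np.array([0, 0, 0, 1, 1, 1])

        def probit_loglik(beta):
            p = np.clip(norm.cdf(beta * x), 1e-12, 1 - 1e-12)  # avoid log(0)
            return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        # Under separation the log-likelihood rises toward 0 as beta grows,
        # so no finite maximum likelihood estimate exists.
        for b in [1.0, 5.0, 10.0, 50.0]:
            print(f"beta = {b:5.1f}   log-likelihood = {probit_loglik(b):.6f}")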

  10. An Integrated Approach for Characterization of Uncertainty in Complex Best Estimate Safety Assessment

    International Nuclear Information System (INIS)

    Pourgol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali

    2013-01-01

    This paper discusses an approach called Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) to characterize and integrate a wide range of uncertainties associated with the best estimate models and complex system codes used for nuclear power plant safety analyses. Examples of applications include complex thermal hydraulic and fire analysis codes. In identifying and assessing uncertainties, the proposed methodology treats the complex code as a 'white box', thus explicitly treating internal sub-model uncertainties in addition to the uncertainties related to the inputs to the code. The methodology accounts for uncertainties related to experimental data used to develop such sub-models, and efficiently propagates all uncertainties during best estimate calculations. Uncertainties are formally analyzed and probabilistically treated using a Bayesian inference framework. This comprehensive approach presents the results in a form usable in most other safety analyses such as the probabilistic safety assessment. The code output results are further updated through additional Bayesian inference using any available experimental data, for example from thermal hydraulic integral test facilities. The approach includes provisions to account for uncertainties associated with user-specified options, for example for choices among alternative sub-models, or among several different correlations. Complex time-dependent best-estimate calculations are computationally intense. The paper presents approaches to minimize computational intensity during the uncertainty propagation. Finally, the paper will report effectiveness and practicality of the methodology with two applications to a complex thermal-hydraulics system code as well as a complex fire simulation code. In case of multiple alternative models, several techniques, including dynamic model switching, user-controlled model selection, and model mixing, are discussed. (authors)

  11. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    Science.gov (United States)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ˜30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.

  12. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve the rainfall estimation from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated in IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Measurement (GPM) mission) to create a precipitation product at 0.1 x 0.1 degree resolution over the chosen domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has a high spatial and temporal resolution, it overestimates or underestimates due to some limitations. PERSIANN-CCS can estimate rainfall based on the information extracted from IR channels at three different temperature threshold levels (220, 235, and 253 K). The algorithm relies only on infrared data to estimate rainfall indirectly, which causes rainfall from warm clouds to be missed and produces false estimates for non-precipitating cold clouds. In this research, the effectiveness of using other channels of the GOES satellites, such as the visible and water vapor channels, has been investigated. By using multiple sensors, precipitation can be estimated based on the information extracted from multiple channels. Also, instead of using an exponential function for estimating rainfall from cloud-top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.

  13. Zone-based RSS Reporting for Location Fingerprinting

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun; Treu, Georg; Linnhoff–Popien, Claudia

    2007-01-01

    In typical location fingerprinting systems a tracked terminal reports sampled Received Signal Strength (RSS) values to a location server, which estimates its position based on a database of pre-recorded RSS fingerprints. So far, poll-based and periodic RSS reporting has been proposed. However......, for supporting proactive Location-based Services (LBSs), triggered by pre-defined spatial events, the periodic protocol is inefficient. Hence, this paper introduces zone-based RSS reporting: the location server translates geographical zones defined by the LBS into RSS-based representations, which are dynamically...

  14. Methods for estimating flow-duration curve and low-flow frequency statistics for ungaged locations on small streams in Minnesota

    Science.gov (United States)

    Ziegeweid, Jeffrey R.; Lorenz, David L.; Sanocki, Chris A.; Czuba, Christiana R.

    2015-12-24

    Knowledge of the magnitude and frequency of low flows in streams, which are flows in a stream during prolonged dry weather, is fundamental for water-supply planning and design; waste-load allocation; reservoir storage design; and maintenance of water quality and quantity for irrigation, recreation, and wildlife conservation. This report presents the results of a statewide study for which regional regression equations were developed for estimating 13 flow-duration curve statistics and 10 low-flow frequency statistics at ungaged stream locations in Minnesota. The 13 flow-duration curve statistics estimated by regression equations include the 0.0001, 0.001, 0.02, 0.05, 0.1, 0.25, 0.50, 0.75, 0.9, 0.95, 0.99, 0.999, and 0.9999 exceedance-probability quantiles. The low-flow frequency statistics include annual and seasonal (spring, summer, fall, winter) 7-day mean low flows, seasonal 30-day mean low flows, and summer 122-day mean low flows for a recurrence interval of 10 years. Estimates of the 13 flow-duration curve statistics and the 10 low-flow frequency statistics are provided for 196 U.S. Geological Survey continuous-record streamgages using streamflow data collected through September 30, 2012.
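
    Given a record of daily flows, flow-duration curve statistics of the kind listed above can be computed directly: the flow exceeded with probability p is the (1 - p) quantile of the observed flows. The lognormal flows in the sketch are synthetic stand-ins for a streamgage record.

        import numpy as np

        def flow_duration_quantiles(flows, exceed_probs):
            # The flow exceeded with probability p is the (1 - p) quantile
            # of the observed daily flows.
            flows = np.asarray(flows, dtype=float)
            return {p: np.quantile(flows, 1.0 - p) for p in exceed_probs}

        daily_flows = np.random.default_rng(2).lognormal(2.0, 1.0, size=3650)  # ~10 years
        for p, q in flow_duration_quantiles(daily_flows, [0.05, 0.25, 0.5, 0.75, 0.95]).items():
            print(f"flow exceeded {100 * p:.0f}% of the time: {q:.2f}")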

  15. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    Science.gov (United States)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) sample moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures
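
    A few of the location and scale estimates ROBUST tabulates can be reproduced with standard scientific-Python routines, as the sketch below shows for a small sample containing one gross outlier; the data are invented and only a handful of the 53 statistics are computed.

        import numpy as np
        from scipy import stats

        x = np.array([2.1, 2.4, 2.2, 2.3, 2.6, 2.5, 2.2, 9.8])  # one gross outlier

        # Classical location estimates
        print("arithmetic mean:", x.mean())
        print("geometric mean: ", stats.gmean(x))
        print("harmonic mean:  ", stats.hmean(x))

        # Robust alternatives of the kind ROBUST tabulates
        print("12.5% trimmed mean:", stats.trim_mean(x, proportiontocut=0.125))
        med = np.median(x)
        print("median:", med)
        print("median absolute deviation from median:", np.median(np.abs(x - med)))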

  16. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  17. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    David Cabedo, J.; Moya, Ismael

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimate of the maximum oil price change associated with a given likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the standard historical simulation approach, the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity model forecasts. The results obtained indicate that the HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification.

  18. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    Cabedo, J.D.; Moya, I.

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimate of the maximum oil price change associated with a given likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the standard historical simulation approach, the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity model forecasts. The results obtained indicate that the HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification. (author)
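
    The two records above describe the same study; as a hedged sketch, the standard historical simulation baseline reduces to a quantile of historical losses: the VaR at a given confidence level is the loss exceeded only (1 - confidence) of the time. The simulated daily price changes below are placeholders for a real oil price series, and the HSAF refinement (ARMA forecasts) is not included.

        import numpy as np

        def historical_var(price_changes, confidence=0.95):
            # Historical-simulation VaR: the loss level that historical daily
            # price changes exceeded only (1 - confidence) of the time.
            losses = -np.asarray(price_changes, dtype=float)
            return np.quantile(losses, confidence)

        changes = np.random.default_rng(7).normal(0.0, 1.2, size=1000)  # toy USD/bbl changes
        print(f"95% one-day VaR: {historical_var(changes):.2f} USD/bbl")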

  19. Regression Analysis of Top of Descent Location for Idle-thrust Descents

    Science.gov (United States)

    Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg

    2013-01-01

    In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
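
    The core of such a model is ordinary least squares on the recorded trajectory parameters. The sketch below fits a linear model for TOD distance on synthetic data whose coefficients and 3.9 nmi residual noise are invented for illustration; only the regression mechanics mirror the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        X = np.column_stack([
            rng.uniform(30000, 40000, n),   # TOD altitude (ft)
            rng.uniform(2000, 10000, n),    # final altitude (ft)
            rng.uniform(250, 320, n),       # descent speed (kt)
            rng.normal(0, 30, n),           # along-track wind (kt)
        ])
        # Synthetic "true" TOD distance from the runway (nmi), with 3.9 nmi noise
        y = 0.003 * X[:, 0] - 0.002 * X[:, 1] - 0.05 * X[:, 2] - 0.1 * X[:, 3] \
            + rng.normal(0, 3.9, n)

        A = np.column_stack([np.ones(n), X])           # add an intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
        resid = y - A @ coef
        print("coefficients:", np.round(coef, 4))
        print("residual std (nmi):", round(float(resid.std(ddof=A.shape[1])), 2))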

  20. Probabilistic Location-based Routing Protocol for Mobile Wireless Sensor Networks with Intermittent Communication

    Directory of Open Access Journals (Sweden)

    Sho KUMAGAI

    2015-02-01

    Full Text Available In a sensor network, sensor data messages reach the nearest stationary sink node connected to the Internet by wireless multihop transmissions. Recently, various mobile sensors have become available due to advances in robotics and communication technologies. A location based message-by-message routing protocol, such as Geographic Distance Routing (GEDIR), is suitable for such mobile wireless networks; however, it requires each mobile wireless sensor node to know the current locations of all its neighbor nodes. On the other hand, various intermittent communication methods for low power consumption have been proposed for wireless sensor networks. Intermittent Receiver-driven Data Transmission (IRDT) is one of the most efficient methods; however, it is difficult to combine location based routing with intermittent communication. In order to solve this problem, this paper proposes a probabilistic approach, IRDT-GEDIR, with the help of one of the solutions of the secretary problem. Here, each time a neighbor sensor node wakes up from its sleep mode, an intermediate sensor node determines whether or not to forward its buffered sensor data messages to that node, based on an estimate of the achieved pseudo speed of the messages. Simulation experiments show that IRDT-GEDIR achieves higher pseudo speed of sensor data message transmissions and shorter transmission delay than the two naive combinations of IRDT and GEDIR in sensor networks with mobile sensor nodes and a stationary sink node. In addition, a guideline for the estimated number of neighbor nodes of each intermediate sensor node is provided, based on the results of the simulation experiments, to apply the probabilistic approach IRDT-GEDIR.
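
    The secretary-problem flavor of the forwarding rule can be sketched as follows: observe the pseudo speeds offered by the first n/e wake-ups without forwarding, then forward to the first neighbor that beats everything seen so far. This is the classical stopping rule, not necessarily the exact variant used in IRDT-GEDIR, and the speed values are invented.

        import math
        import random

        def choose_forwarder(pseudo_speeds):
            # Observe the first n/e wake-ups without forwarding, then forward
            # to the first neighbor whose pseudo speed beats everything seen
            # so far; if none does, use the last wake-up.
            n = len(pseudo_speeds)
            k = max(1, int(n / math.e))          # observation phase length
            best_seen = max(pseudo_speeds[:k])
            for i in range(k, n):
                if pseudo_speeds[i] > best_seen:
                    return i
            return n - 1

        random.seed(4)
        speeds = [random.uniform(0.5, 3.0) for _ in range(12)]  # m/s toward the sink
        print("forward at wake-up", choose_forwarder(speeds), "of", len(speeds))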

  1. Bias correction of bounded location errors in presence-only data

    Science.gov (United States)

    Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.

    2017-01-01

    Location error occurs when the true location is different from the reported location. Because habitat characteristics at the true location may be different from those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.
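
    One way to read the change of support computationally: when only a polygon containing the true location is known, covariates are averaged over the polygon rather than evaluated at a point. The Monte Carlo sketch below does this by rejection sampling; the polygon and the covariate surface are hypothetical.

        import numpy as np
        from matplotlib.path import Path

        def polygon_covariate_mean(poly_xy, covariate, n=2000, seed=0):
            # Approximate the covariate averaged over the polygon by
            # rejection-sampling uniform points inside it.
            rng = np.random.default_rng(seed)
            path = Path(poly_xy)
            lo, hi = np.min(poly_xy, axis=0), np.max(poly_xy, axis=0)
            pts = rng.uniform(lo, hi, size=(n, 2))
            inside = pts[path.contains_points(pts)]
            return covariate(inside[:, 0], inside[:, 1]).mean()

        # Hypothetical smooth habitat covariate surface
        cov = lambda x, y: np.exp(-0.1 * (x ** 2 + y ** 2))
        reported_polygon = np.array([[0, 0], [4, 0], [4, 3], [0, 3]])  # e.g. a county
        print(polygon_covariate_mean(reported_polygon, cov))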

  2. Selecting optimum locations for co-located wave and wind energy farms. Part I: The Co-Location Feasibility index

    International Nuclear Information System (INIS)

    Astariz, S.; Iglesias, G.

    2016-01-01

    Highlights: • New approach to identifying suitable sites for co-located wave and wind farms. • A new tool, the Co-Location Feasibility (CLF) index, is defined. • Its application is analysed by means of a case study off the Danish coast. • Hindcast and measured wave and wind data from 2005 to 2015 are used. • Third-generation models of winds and waves (WAsP and SWAN) are used. - Abstract: Marine energy is poised to play a fundamental role in meeting renewable energy and carbon emission targets thanks to the abundant, and still largely untapped, wave and tidal resources. However, it is often considered difficult and uneconomical – as is usually the case with nascent technologies. Combining various renewables, such as wave and offshore wind energy, has emerged as a solution to improve their competitiveness and in the process overcome other challenges that hinder their development. The objective of this paper is to develop a new approach to identifying suitable sites for co-located wave and wind farms based on the assessment of the available resources and technical constraints, and to illustrate its application by means of a case study off the Danish coast – an area of interest for combining wave and wind energy. The method relies on an ad hoc tool, the Co-Location Feasibility (CLF) index, built on a joint characterisation of the wave and wind resources which takes into account not only the available power but also the correlation between both resources and the power variability. The analysis is carried out based on hindcast data and observations from 2005 to 2015, and using third-generation models of winds and waves – WAsP and SWAN, respectively. Upon selection and ranking, it is found that a number of sites in the study region are indeed suited to realising the synergies between wave and offshore wind energy. The approach developed in this work can be applied elsewhere.

  3. Performance Measurement of Location Enabled e-Government Processes: A Use Case on Traffic Safety Monitoring

    Science.gov (United States)

    Vandenbroucke, D.; Vancauwenberghe, G.

    2016-12-01

    The European Union Location Framework (EULF), as part of the Interoperable Solutions for European Public Administrations (ISA) Programme of the EU (EC DG DIGIT), aims to enhance the interactions between governments, businesses and citizens by embedding location information into e-Government processes. The challenge remains to find scientifically sound and at the same time practicable approaches to estimate or measure the impact of location enablement of e-Government processes on the performance of the processes. A method has been defined to estimate process performance in terms of variables describing the efficiency, effectiveness, as well as the quality of the output of the work processes. A series of use cases have been identified, corresponding to existing e-Government work processes in which location information could bring added value. In a first step, the processes are described by means of BPMN (Business Process Model and Notation) to better understand the process steps, the actors involved, the spatial data flows, as well as the required input and the generated output. In a second step the processes are assessed in terms of the (sub-optimal) use of location information and the potential enhancement of the process by better integrating location information and services. The process performance is measured ex ante (before using location enabled e-Government services) and ex post (after the integration of such services) in order to estimate and measure the impact of location information. The paper describes the method for performance measurement and highlights how the method is applied to one use case, i.e. the process of traffic safety monitoring. The use case is analysed and assessed in terms of location enablement and its potential impact on process performance. The results of applying the methodology to the use case revealed that performance is highly impacted by factors such as the way location information is collected, managed and shared throughout the

  4. The RHNumtS compilation: Features and bioinformatics approaches to locate and quantify Human NumtS

    Directory of Open Access Journals (Sweden)

    Saccone Cecilia

    2008-06-01

    Full Text Available Abstract Background To a greater or lesser extent, eukaryotic nuclear genomes contain fragments of their mitochondrial genome counterpart, deriving from the random insertion of damaged mtDNA fragments. NumtS (Nuclear mt Sequences) are not equally abundant in all species, and are redundant and polymorphic in terms of copy number. In population and clinical genetics, it is important to have a complete overview of NumtS quantity and location. Searching PubMed for NumtS or Mitochondrial pseudo-genes yields hundreds of papers reporting Human NumtS compilations produced by in silico or wet-lab approaches. A comparison of published compilations clearly shows significant discrepancies among data, due both to unwise application of Bioinformatics methods and to a not yet correctly assembled nuclear genome. To optimize quantification and location of NumtS, we produced a consensus compilation of Human NumtS by applying various bioinformatics approaches. Results Location and quantification of NumtS may be achieved by applying database similarity searching methods: we have applied various methods such as Blastn, MegaBlast and BLAT, changing both parameters and database; the results were compared, further analysed and checked against the already published compilations, thus producing the Reference Human Numt Sequences (RHNumtS) compilation. The resulting NumtS total 190. Conclusion The RHNumtS compilation represents a highly reliable reference basis, which may allow designing a lab protocol to test the actual existence of each NumtS. Here we report preliminary results based on PCR amplification and sequencing on 41 NumtS selected from RHNumtS among those with lower scores. In parallel, we are currently designing the RHNumtS database structure for implementation in the HmtDB resource. In the future, the same database will host NumtS compilations from other organisms, but these will be generated only when the nuclear genome of a specific organism has reached a high

  5. The effect of handover location on trauma theatre start time: An estimated cost saving of £131 000 per year.

    Science.gov (United States)

    Nahas, Sam; Ali, Adam; Majid, Kiran; Joseph, Roshan; Huber, Chris; Babu, Victor

    2018-02-08

    The National Health Service was estimated to be in £2.45 billion deficit in 2015 to 2016. Trauma theatre utilization and efficiency have never been so important, as theatre time is estimated to cost £15/minute. Structured questionnaires were given to 23 members of staff at our Trust who are actively involved in the organization or delivery of orthopaedic trauma lists at least once per week. These were used to identify key factors that may improve theatre efficiency. Following focus group evaluation, the location of the preoperative theatre meeting was changed, with all staff involved being required to attend it. Our primary outcome measure was mean theatre start time (time of arrival in the anaesthetic room) during the month immediately preceding the change and the month following the change. Theatre start time improved by an average of 24 minutes. This equates to a saving of £360 per day, or £131 040 per year. Changing the trauma meeting location to a venue adjacent to the trauma theatre can improve theatre start times and theatre efficiency, and therefore result in significant cost savings.

  6. Location estimation in a smart home: system implementation and evaluation using experimental data.

    Science.gov (United States)

    Rahal, Youcef; Pigot, Hélène; Mabilleau, Philippe

    2008-01-01

    In the context of a constantly increasing aging population with cognitive deficiencies, ensuring the autonomy of elders at home becomes a priority. The DOMUS laboratory is addressing this issue by conceiving a smart home which can both assist people and preserve their quality of life. Obviously, the ability to monitor the occupant's activities properly and thus provide pertinent assistance depends highly on location information inside the smart home. This paper proposes a solution to localize the occupant using Bayesian filtering and a set of anonymous sensors disseminated throughout the house. The localization system is designed for a single person inside the house. It could, however, be used in conjunction with other localization systems in case more people are present. Our solution is functional in real conditions. We conceived an experiment to estimate its accuracy precisely and evaluate its robustness. The experiment consists of a scenario of daily routine meant to maximize the occupant's motion in meaningful activities. It was performed by 14 subjects, one subject at a time. The results are satisfactory: the system's accuracy exceeds 85% and is independent of the occupant's profile. The system works in real time and behaves well in the presence of noise.
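
    A discrete Bayes filter over rooms captures the essence of such anonymous-sensor localization: a motion model spreads the belief, and each sensor event reweights it. The transition and sensor matrices below are invented for a four-room layout and are not the DOMUS models.

        import numpy as np

        rooms = ["kitchen", "living room", "bathroom", "bedroom"]
        # Hypothetical motion model: T[i, j] = P(move to room j | in room i)
        T = np.array([[0.70, 0.20, 0.05, 0.05],
                      [0.20, 0.60, 0.10, 0.10],
                      [0.05, 0.15, 0.70, 0.10],
                      [0.05, 0.15, 0.10, 0.70]])
        # Hypothetical sensor model: S[k, j] = P(sensor k fires | occupant in room j)
        S = np.array([[0.90, 0.10, 0.05, 0.05],   # kitchen motion sensor
                      [0.10, 0.85, 0.10, 0.10],   # living-room motion sensor
                      [0.05, 0.10, 0.90, 0.05],   # bathroom door switch
                      [0.05, 0.10, 0.05, 0.90]])  # bedroom pressure mat

        belief = np.full(4, 0.25)          # uniform prior over rooms
        for fired in [0, 0, 1, 2]:         # stream of anonymous sensor events
            belief = T.T @ belief          # predict step (motion model)
            belief *= S[fired]             # update step (sensor likelihood)
            belief /= belief.sum()
            print(rooms[int(belief.argmax())], np.round(belief, 2))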

  7. Estimation of a planetary magnetic field using a reduced magnetohydrodynamic model

    Directory of Open Access Journals (Sweden)

    C. Nabert

    2017-03-01

    Full Text Available Knowledge of planetary magnetic fields provides deep insights into the structure and dynamics of planets. Due to the interaction of a planet with the solar wind plasma, a rather complex magnetic environment is generated. The situation at planet Mercury is an example of the complexities occurring as this planet's field is rather weak and the magnetosphere rather small. New methods are presented to separate interior and exterior magnetic field contributions which are based on a dynamic inversion approach using a reduced magnetohydrodynamic (MHD model and time-varying spacecraft observations. The methods select different data such as bow shock location information or magnetosheath magnetic field data. Our investigations are carried out in preparation for the upcoming dual-spacecraft BepiColombo mission set out to precisely estimate Mercury's intrinsic magnetic field. To validate our new approaches, we use THEMIS magnetosheath observations to estimate the known terrestrial dipole moment. The terrestrial magnetosheath provides observations from a strongly disturbed magnetic environment, comparable to the situation at Mercury. Statistical and systematic errors are considered and their dependence on the selected data sets are examined. Including time-dependent upstream solar wind variations rather than averaged conditions significantly reduces the statistical error of the estimation. Taking the entire magnetosheath data along the spacecraft's trajectory instead of only the bow shock location into account further improves accuracy of the estimated dipole moment.

  8. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns the computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale non-smooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
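
    For concreteness, a compact Gauss-Newton iteration for nonlinear least squares is sketched below (beta <- beta + (J'J)^{-1} J'r); the exponential growth model and data are illustrative, and the paper's own derivation details are not reproduced.

        import numpy as np

        def gauss_newton(f, jac, beta0, x, y, iters=20, tol=1e-10):
            # Gauss-Newton for least squares: beta <- beta + (J'J)^{-1} J'r
            beta = np.asarray(beta0, dtype=float)
            for _ in range(iters):
                r = y - f(x, beta)                        # residuals
                J = jac(x, beta)                          # Jacobian wrt beta
                step = np.linalg.solve(J.T @ J, J.T @ r)
                beta += step
                if np.linalg.norm(step) < tol:
                    break
            return beta

        # Exponential growth model y = b0 * exp(b1 * x)
        f = lambda x, b: b[0] * np.exp(b[1] * x)
        jac = lambda x, b: np.column_stack([np.exp(b[1] * x),
                                            b[0] * x * np.exp(b[1] * x)])
        x = np.linspace(0.0, 2.0, 50)
        y = 2.0 * np.exp(1.3 * x) + np.random.default_rng(5).normal(0, 0.05, 50)
        print(gauss_newton(f, jac, [1.0, 1.0], x, y))   # approx. [2.0, 1.3]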

  9. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important that the mean precipitation that falls in an area can be calculated. This doctoral thesis studies how the morphology of rainfall, described by the spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent being measured at points in a randomly placed domain. The information contained in the different morphology of precipitation types is used to improve estimation procedures of areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced where the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  10. A Hybrid Location Privacy Solution for Mobile LBS

    Directory of Open Access Journals (Sweden)

    Ruchika Gupta

    2017-01-01

    Full Text Available The prevalent usage of location based services, where getting any service is solely based on the user's current location, has raised an extreme concern over the location privacy of the user. Generalized approaches dealing with location privacy, referred to as cloaking and obfuscation, are mainly based on a trusted third party, in which all the data remain available at a central server and thus complete knowledge of the query exists at the central node. This is the major limitation of such approaches; on the other hand, in a trusted third-party-free framework clients collaborate with each other and freely communicate with the service provider without any third-party involvement. Measuring and evaluating trust among peers is a crucial aspect of a trusted third-party-free framework. This paper exploits the merits and mitigates the shortcomings of both of these approaches. We propose a hybrid solution, HYB, to achieve location privacy for mobile users who use location services frequently. The proposed HYB scheme is based on collaborative preprocessing of location data and utilizes the benefits of the homomorphic encryption technique. Location privacy is achieved at two levels, namely, at the proximity level and at the distant level. The proposed HYB solution preserves the user's location privacy effectively under a specific, pull-based, sporadic query scenario.

  11. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h....

  12. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set......In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation.These approaches are illustrated using estimations of kinetic constants from reaction system models....

  13. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  14. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
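
    The two records above describe the estimating-equations framework abstractly; as a baseline, the sketch below computes the classical method-of-moments empirical variogram, one of the least squares special cases the framework includes, and fits an exponential model to it. The simulated field and the exponential form are illustrative choices, not the paper's estimator.

        import numpy as np
        from scipy.optimize import curve_fit

        def empirical_variogram(coords, z, n_bins=12):
            # Matheron (method-of-moments) estimator: average half squared
            # differences of observation pairs grouped into distance bins.
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            g = 0.5 * (z[:, None] - z[None, :]) ** 2
            iu = np.triu_indices(len(z), k=1)
            d, g = d[iu], g[iu]
            edges = np.linspace(0.0, d.max(), n_bins + 1)
            lags = 0.5 * (edges[:-1] + edges[1:])
            gamma = np.array([g[(d >= lo) & (d < hi)].mean()
                              for lo, hi in zip(edges[:-1], edges[1:])])
            return lags, gamma

        exp_model = lambda h, sill, rho: sill * (1.0 - np.exp(-h / rho))

        rng = np.random.default_rng(2)
        coords = rng.uniform(0, 10, size=(200, 2))
        z = np.sin(coords[:, 0]) + 0.3 * rng.normal(size=200)  # structured field
        lags, gamma = empirical_variogram(coords, z)
        ok = ~np.isnan(gamma)                                   # drop any empty bins
        (sill, rho), _ = curve_fit(exp_model, lags[ok], gamma[ok], p0=[1.0, 2.0])
        print(f"fitted sill = {sill:.3f}, range = {rho:.3f}")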

  15. A hybrid system approach to airspeed, angle of attack and sideslip estimation in Unmanned Aerial Vehicles

    KAUST Repository

    Shaqura, Mohammad; Claudel, Christian

    2015-01-01

    , low power autopilots in real-time. The computational method is based on a hybrid decomposition of the modes of operation of the UAV. A Bayesian approach is considered for estimation, in which the estimated airspeed, angle of attack and sideslip

  16. New alternatives for reference evapotranspiration estimation in West Africa using limited weather data and ancillary data supply strategies.

    Science.gov (United States)

    Landeras, Gorka; Bekoe, Emmanuel; Ampofo, Joseph; Logah, Frederick; Diop, Mbaye; Cisse, Madiama; Shiri, Jalal

    2018-05-01

    Accurate estimation of reference evapotranspiration (ET0) is essential for the computation of crop water requirements, irrigation scheduling, and water resources management. In this context, having a battery of alternative locally calibrated ET0 estimation methods is of great interest for any irrigation advisory service. The development of irrigation advisory services will be a major breakthrough for West African agriculture. In the case of many West African countries, the high number of meteorological inputs required by the Penman-Monteith equation has been indicated as constraining. The present paper investigates, for the first time in Ghana, the estimation ability of artificial intelligence-based models (Artificial Neural Networks (ANNs) and Gene Expression Programming (GEPs)) and ancillary/external approaches for modeling reference evapotranspiration (ET0) using limited weather data. According to the results of this study, GEPs have emerged as a very interesting alternative for ET0 estimation at all the locations of Ghana evaluated in this study, under different scenarios of meteorological data availability. The adoption of ancillary/external approaches has also been successful, particularly in the southern locations. The interesting results obtained in this study using GEPs and some ancillary approaches could be a reference for future studies on ET0 estimation in West Africa.
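
    One widely used limited-data alternative of the kind such studies benchmark is the temperature-only Hargreaves-Samani equation, sketched below; it is not the GEP model developed in the paper, and the example inputs are invented.

        import math

        def hargreaves_et0(tmax, tmin, ra):
            # Hargreaves-Samani reference evapotranspiration (mm/day)
            # tmax, tmin: daily max/min air temperature (deg C)
            # ra: extraterrestrial radiation expressed in mm/day of evaporation
            tmean = 0.5 * (tmax + tmin)
            return 0.0023 * ra * (tmean + 17.8) * math.sqrt(max(tmax - tmin, 0.0))

        # Example day in a warm location (Ra of about 15 mm/day equivalent)
        print(f"ET0 = {hargreaves_et0(tmax=33.0, tmin=24.0, ra=15.0):.2f} mm/day")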

  17. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    Science.gov (United States)

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models.
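
    The system-of-equations 2SLS estimator builds on the single-equation form, which is compact enough to sketch: project the regressors onto the instrument space and run least squares on the projection. The toy endogenous-regressor model below is an illustration, not the paper's measurement-invariance setup, and equality constraints are omitted.

        import numpy as np

        def two_sls(y, X, Z):
            # beta = (X' Pz X)^{-1} X' Pz y, with Pz the projection onto Z
            Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
            return np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)

        # Toy model with an endogenous regressor x and instrument z
        rng = np.random.default_rng(8)
        n = 2000
        z = rng.normal(size=n)
        u = rng.normal(size=n)                  # error correlated with x
        x = 0.8 * z + 0.5 * u + rng.normal(size=n)
        y = 1.0 + 2.0 * x + u
        X = np.column_stack([np.ones(n), x])
        Z = np.column_stack([np.ones(n), z])
        print("2SLS (intercept, slope):", np.round(two_sls(y, X, Z), 3))  # near (1, 2)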

  18. South American smoke coverage and flux estimations from the Fire Locating and Modeling of Burning Emissions (FLAMBE') system.

    Science.gov (United States)

    Reid, J. S.; Westphal, D. L.; Christopher, S. A.; Prins, E. M.; Gasso, S.; Reid, E.; Theisen, M.; Schmidt, C. C.; Hunter, J.; Eck, T.

    2002-05-01

    The Fire Locating and Modeling of Burning Emissions (FLAMBE') project is a joint Navy, NOAA, NASA and university project to integrate satellite products with numerical aerosol models to produce a real-time fire and emissions inventory. At the center of the program is the Wildfire Automated Biomass Burning Algorithm (WF ABBA) which provides real-time fire products and the NRL Aerosol Analysis and Prediction System to model smoke transport. In this presentation we give a brief overview of the system and methods, but emphasize new estimations of smoke coverage and emission fluxes from the South American continent. Temporal and spatial smoke patterns compare reasonably well with AERONET and MODIS aerosol optical depth products for the 2000 and 2001 fire seasons. Fluxes are computed by relating NAAPS output fields and MODIS optical depth maps with modeled wind fields. Smoke emissions and transport fluxes out of the continent can then be estimated by perturbing the modeled emissions to gain agreement with the satellite and wind products. Regional smoke emissions are also presented for grass and forest burning.

  19. Microwave implementation of two-source energy balance approach for estimating evapotranspiration

    Directory of Open Access Journals (Sweden)

    T. R. H. Holmes

    2018-02-01

    Full Text Available A newly developed microwave (MW) land surface temperature (LST) product is used to substitute thermal infrared (TIR)-based LST in the Atmosphere–Land Exchange Inverse (ALEXI) modeling framework for estimating evapotranspiration (ET) from space. ALEXI implements a two-source energy balance (TSEB) land surface scheme in a time-differential approach, designed to minimize sensitivity to absolute biases in input records of LST through the analysis of the rate of temperature change in the morning. Thermal infrared retrievals of the diurnal LST curve, traditionally from geostationary platforms, are hindered by cloud cover, reducing model coverage on any given day. This study tests the utility of diurnal temperature information retrieved from a constellation of satellites with microwave radiometers that together provide six to eight observations of Ka-band brightness temperature per location per day. This represents the first ever attempt at a global implementation of ALEXI with MW-based LST and is intended as the first step towards providing all-weather capability to the ALEXI framework. The analysis is based on 9-year-long, global records of ALEXI ET generated using both MW- and TIR-based diurnal LST information as input. In this study, the MW-LST (MW-based LST) sampling is restricted to the same clear-sky days as in the IR-based implementation to be able to analyze the impact of changing the LST dataset separately from the impact of sampling all-sky conditions. The results show that long-term bulk ET estimates from both LST sources agree well, with a spatial correlation of 92 % for total ET in the Europe–Africa domain and agreement in seasonal (3-month) totals of 83–97 % depending on the time of year. Most importantly, the ALEXI-MW (MW-based ALEXI) also matches ALEXI-IR (IR-based ALEXI) very closely in terms of 3-month inter-annual anomalies, demonstrating its ability to capture the development and extent of drought conditions. Weekly ET output

  20. Locating industrial VOC sources with aircraft observations

    International Nuclear Information System (INIS)

    Toscano, P.; Gioli, B.; Dugheri, S.; Salvini, A.; Matese, A.; Bonacchi, A.; Zaldei, A.; Cupelli, V.; Miglietta, F.

    2011-01-01

    Observation and characterization of environmental pollution, focussing on Volatile Organic Compounds (VOCs), in a high-risk industrial area are particularly important in order to provide indications on safe levels of exposure, identify possible priorities and advise on policy interventions. The aim of this study is to use the Solid Phase Micro Extraction (SPME) method to measure VOCs, directly coupled with atmospheric measurements taken on a small aircraft environmental platform, to evaluate and locate VOC emission sources in the Marghera industrial area. Lab analysis of the collected SPME fibres and subsequent analysis of mass spectra and chromatograms in Scan Mode allowed the detection of a wide range of VOCs. The combination of this information during the monitoring campaign allowed a Gaussian plume model to be implemented that estimates the location of emission sources on the ground. - Highlights: → Flight plan aimed at sampling the industrial area at various altitudes and locations. → SPME sampling strategy was based on plume detection by means of CO2. → Concentrations obtained were lower than the limit values or below the detection limit. → Scan mode highlighted the presence of the γ-butyrolactone (GBL) compound. → Gaussian dispersion modelling was used to estimate GBL source location and strength. - An integrated strategy based on atmospheric aircraft observations and dispersion modelling was developed, aimed at estimating the spatial location and strength of VOC point source emissions in industrial areas.
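
    As a sketch of the dispersion model used for source localization here, the standard Gaussian plume formula gives the ground-level concentration downwind of a point source; grid-searching candidate source positions and strengths against the aircraft samples yields a location estimate. The dispersion coefficients below are illustrative placeholders, not the study's calibrated values:

        import numpy as np

        def plume_conc(q, u, x, y, h, a=0.22, b=0.89, c=0.20, d=0.91):
            """Ground-level Gaussian plume concentration (g/m^3) with
            full ground reflection. q: source strength (g/s), u: wind
            speed (m/s), x/y: downwind/crosswind distances (m), h:
            effective release height (m). sigma_y = a*x**b and
            sigma_z = c*x**d depend on atmospheric stability; the
            default parameters are illustrative only."""
            sy, sz = a * x**b, c * x**d
            return (q / (np.pi * u * sy * sz)
                    * np.exp(-(y**2) / (2 * sy**2))
                    * np.exp(-(h**2) / (2 * sz**2)))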

  1. Cross-cultural similarities and differences in North Americans' geographic location judgments.

    Science.gov (United States)

    Friedman, Alinda; Kerkman, Dennis D; Brown, Norman R; Stea, David; Cappello, Hector M

    2005-12-01

    We examined some potential causes of bias in geographic location estimates by comparing location estimates of North American cities made by Canadian, U.S., and Mexican university students. All three groups placed most Mexican cities near the equator, which implies that all three groups were influenced by shared beliefs about the locations of geographical regions relative to global reference points. However, the groups divided North America into different regions and differed in the relative accuracy of the estimates within them, which implies that there was an influence of culture-specific knowledge. The data support a category-based system of plausible reasoning, in which biases in judgments are multiply determined, and underscore the utility of the estimation paradigm as a tool in cross-cultural cognitive research.

  2. Prediction of the location and size of the stomach using patient characteristics for retrospective radiation dose estimation following radiotherapy

    Science.gov (United States)

    Lamart, Stephanie; Imran, Rebecca; Simon, Steven L.; Doi, Kazutaka; Morton, Lindsay M.; Curtis, Rochelle E.; Lee, Choonik; Drozdovitch, Vladimir; Maass-Moreno, Roberto; Chen, Clara C.; Whatley, Millie; Miller, Donald L.; Pacak, Karel; Lee, Choonsik

    2013-12-01

    Following cancer radiotherapy, reconstruction of doses to organs, other than the target organ, is of interest for retrospective health risk studies. Reliable estimation of doses to organs that may be partially within or fully outside the treatment field requires reliable knowledge of the location and size of the organs, e.g., the stomach, which is at risk from abdominal irradiation. The stomach location and size are known to be highly variable between individuals, but have been little studied. Moreover, for treatments conducted years ago, medical images of patients are usually not available in medical records to locate the stomach. In light of the poor information available to locate the stomach in historical dose reconstructions, the purpose of this work was to investigate the variability of stomach location and size among adult male patients and to develop prediction models for the stomach location and size using predictor variables generally available in medical records of radiotherapy patients treated in the past. To collect data on stomach size and position, we segmented the contours of the stomach and of the skeleton on contemporary computed tomography (CT) images for 30 male patients in supine position. The location and size of the stomach was found to depend on body mass index (BMI), ponderal index (PI), and age. For example, the anteroposterior dimension of the stomach was found to increase with increasing BMI (≈0.25 cm kg⁻¹ m²) whereas its craniocaudal dimension decreased with increasing PI (≈-3.3 cm kg⁻¹ m³) and its transverse dimension increased with increasing PI (≈2.5 cm kg⁻¹ m³). Using the prediction models, we generated three-dimensional computational stomach models from a deformable hybrid phantom for three patients of different BMI. Based on a typical radiotherapy treatment, we simulated radiotherapy treatments on the predicted stomach models and on the CT images of the corresponding patients. Those dose calculations demonstrated good

  3. Robust facility location: Hedging against failures

    International Nuclear Information System (INIS)

    Hernandez, Ivan; Emmanuel Ramirez-Marquez, Jose; Rainwater, Chase; Pohl, Edward; Medal, Hugh

    2014-01-01

    While few companies would be willing to sacrifice day-to-day operations to hedge against disruptions, designing for robustness can yield solutions that perform well before and after failures have occurred. Through a multi-objective optimization approach, this paper provides decision makers the option to trade off total weighted distance before and after disruptions in the Facility Location Problem. Additionally, this approach allows decision makers to understand the impact of the opening of facilities on total distance and on system robustness (considering the system as the set of located facilities). This approach differs from previous studies in that hedging against failures is done without having to elicit facility failure probabilities and without requiring the allocation of additional hardening/protection resources. The approach is applied to two datasets from the literature.

  4. Development of Model for Pedestrian Gap Based on Land Use Pattern at Midblock Location and Estimation of Delay at Intersections

    Science.gov (United States)

    Ramesh, Adepu; Ashritha, Kilari; Kumar, Molugaram

    2018-04-01

    Walking has always been a prime mode of human mobility for short-distance travel. Traffic congestion has become a major obstacle to safe pedestrian crossing in most metropolitan cities, which emphasizes the need for sufficient pedestrian gaps for safe crossing on urban roads. The present work aims at understanding the factors that influence pedestrian crossing behaviour. Four locations in Hyderabad city were chosen for the study of pedestrian crossing behaviour, gap characteristics, waiting time, etc. From the study it was observed that pedestrian behaviour and crossing patterns differ and are influenced by the land use pattern. A gap acceptance model was developed from the data to improve pedestrian safety at mid-block locations; the model was validated using the existing data. Pedestrian delay at intersections was estimated using the Highway Capacity Manual (HCM). It was observed that field delays are lower than the delays obtained with the HCM method.
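
    Gap-acceptance models of this kind are commonly fitted as binary logit curves; a hedged sketch with hypothetical coefficients (the paper's fitted, land-use-specific model is not reproduced in the abstract):

        import numpy as np

        def p_accept(gap_s, b0=-4.0, b1=0.9):
            """Probability that a pedestrian accepts a gap of gap_s
            seconds. b0 and b1 are hypothetical logit coefficients;
            in practice they are estimated from observed accept/reject
            decisions at each site."""
            return 1.0 / (1.0 + np.exp(-(b0 + b1 * gap_s)))

        # Critical gap (accepted with probability 0.5): -b0/b1 ~= 4.4 s.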

  5. Application of a Hybrid Detection and Location Scheme to Volcanic Systems

    Science.gov (United States)

    Thurber, C. H.; Lanza, F.; Roecker, S. W.

    2017-12-01

    We are using a hybrid method for automated detection and onset estimation, called REST, that combines a modified version of the nearest-neighbor similarity scheme of Rawles and Thurber (2015; RT15) with the regression approach of Kushnir et al. (1990; K90). This approach incorporates some of the windowing ideas proposed by RT15 into the regression techniques described in K90. The K90 and RT15 algorithms both define an onset as the sample where a segment of noise at earlier times is most "unlike" a segment of data at later times; the main difference between the approaches is how one defines "likeness." Hence, it is fairly straightforward to adapt the RT15 ideas to a K90 approach. We also incorporated the running-mean normalization scheme of Bensen et al. (2007), used in ambient-noise pre-processing, to reduce the effects of coherent signals (such as earthquakes) in defining noise segments. This is especially useful for aftershock sequences, when the persistent high amplitudes due to many earthquakes bias the true noise level. We use the fall-off of the K90 estimation function to assign uncertainties and the asymmetry of the function as a causality constraint. The detection and onset estimation stage is followed by iterative pick association and event location using a grid-search method. Some fine-tuning of parameters is generally required for optimal results. We present two applications of this scheme to data from volcanic systems: Makushin volcano, Alaska, and Laguna del Maule (LdM), Chile. In both cases, there are permanent seismic networks, operated by the Alaska Volcano Observatory (AVO) and Observatorio Volcanológico de Los Andes del Sur (OVDAS), respectively, and temporary seismic arrays were deployed for a year or more. For Makushin, we have analyzed a year of data, from summer 2015 to summer 2016. The AVO catalog has 691 events in our study volume; REST processing yields 1784 more events. After quality control, the event numbers are 151 AVO events and
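
    A compact stand-in for this style of onset estimation is the classical two-segment variance-AIC picker, which likewise declares the onset at the sample where the earlier "noise" segment is most unlike the later "signal" segment; REST's actual similarity and regression functions differ:

        import numpy as np

        def aic_onset(x, guard=10):
            """Onset index minimizing the two-segment variance AIC,
            i.e. the split where the pre-onset and post-onset segments
            are most dissimilar. guard keeps segments non-degenerate."""
            n = len(x)
            ks = np.arange(guard, n - guard)
            aic = np.array([k * np.log(np.var(x[:k]) + 1e-12)
                            + (n - k - 1) * np.log(np.var(x[k:]) + 1e-12)
                            for k in ks])
            return ks[np.argmin(aic)]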

  6. Approaches in estimation of external cost for fuel cycles in the ExternE project

    International Nuclear Information System (INIS)

    Afanas'ev, A.A.; Maksimenko, B.N.

    1998-01-01

    The purposes, content and main results of the studies carried out within the framework of the international ExternE project, the first comprehensive attempt to develop a general approach to estimating the external costs of different fuel cycles based on nuclear and fossil fuels as well as on renewable power sources, are discussed. The external cost of a fuel cycle is treated as the social and environmental expenditures which are not taken into account by energy producers and consumers, i.e. expenditures that are not currently included in the commercial cost. It is concluded that the suggested approach is applicable to the estimation (expressed in monetary or some other form) of the population health hazards and environmental impacts connected with the growth of electric power generation.

  7. Estimating the per-capita contribution of habitats and pathways in a migratory network: A modelling approach

    Science.gov (United States)

    Wiederholt, Ruscena; Mattsson, Brady J.; Thogmartin, Wayne E.; Runge, Michael C.; Diffendorfer, Jay E.; Erickson, Richard A.; Federico, Paula; Lopez-Hoffman, Laura; Fryxell, John; Norris, D. Ryan; Sample, Christine

    2018-01-01

    Every year, migratory species undertake seasonal movements along different pathways between discrete regions and habitats. The ability to assess the relative demographic contributions of these different habitats and pathways to the species’ overall population dynamics is critical for understanding the ecology of migratory species, and also has practical applications for management and conservation. Metrics for assessing habitat contributions have been well-developed for metapopulations, but an equivalent metric is not currently available for migratory populations. Here, we develop a framework for estimating the demographic contributions of the discrete habitats and pathways used by migratory species throughout the annual cycle by estimating the per capita contribution of cohorts using these locations. Our framework accounts for seasonal movements between multiple breeding and non-breeding habitats and for both resident and migratory cohorts. We illustrate our framework using a hypothetical migratory network of four habitats, which allows us to better understand how variations in habitat quality affect per capita contributions. Results indicate that per capita contributions for any habitat or pathway are dependent on habitat-specific survival probabilities in all other areas used as part of the migratory circuit, and that contribution metrics are spatially linked (e.g. reduced survival in one habitat also decreases the contribution metric for other habitats). Our framework expands existing theory on the dynamics of spatiotemporally structured populations by developing a generalized approach to estimate the habitat- and pathway-specific contributions of species migrating between multiple breeding and multiple non-breeding habitats for a range of life histories or migratory strategies. Most importantly, it provides a means of prioritizing conservation efforts towards those migratory pathways and habitats that are most critical for the population viability of

  8. Estimating a planetary magnetic field with time-dependent global MHD simulations using an adjoint approach

    Directory of Open Access Journals (Sweden)

    C. Nabert

    2017-05-01

    Full Text Available The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using a magnetohydrodynamic (MHD) simulation approach. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury, aimed at determining the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms) magnetosheath data to estimate Earth's dipole moment.

  9. A GIS methodology to identify potential corn stover collection locations

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, Monica A. [Department of Community and Regional Planning, 583 College of Design, Iowa State University, Ames, IA 50011-3095 (United States); Anderson, Paul F. [Department of Landscape Architecture, 481 College of Design, Iowa State University, Ames, IA 50011 (United States); Department of Agronomy, 481 College of Design, Iowa State University, Ames, IA 50011 (United States)

    2008-12-15

    In this study, we use geographic information systems technology to identify potential locations in a Midwestern region for collection and storage of corn stover for use as biomass feedstock. Spatial location models are developed to identify potential collection sites along an existing railroad. Site suitability analysis is developed based on two main models: agronomic productivity potential and environmental costs. The analysis includes the following steps: (1) elaboration of site selection criteria; (2) identification of the study region and service area based on transportation network analysis; (3) reclassification of input spatial layers based on common scales; (4) overlaying the reclassified spatial layers with equal weights to generate the two main models; and (5) overlaying the main models using different weights. A pluralistic approach is adopted, presenting three different scenarios as alternatives for the potential locations. Our results suggest that there is a significant subset of potential sites that meet site selection criteria. Additional studies are needed to evaluate potential sites through field visits, assess economic and social costs, and estimate the proportion of corn producers willing to sell and transport corn stover to collection facilities. (author)
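
    Steps (3)-(5) reduce to a weighted raster overlay; a minimal sketch, assuming the layers have already been reclassified to a common suitability scale:

        import numpy as np

        def weighted_overlay(layers, weights):
            """Combine reclassified raster layers (2-D arrays on a
            common suitability scale) into one suitability surface."""
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            return sum(wi * layer for wi, layer in zip(w, layers))

        # Equal weights for the two main models, then alternative
        # weightings for the different scenarios, e.g.:
        # combined = weighted_overlay([agronomic, environmental], [3, 2])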

  10. A Variational Approach to the Estimate of the Permittivity of a Composite with Dispersed Inclusions

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin

    2015-01-01

    composite structure and on the distribution in the equal-element of homogeneous medium with the desired dielectric constant of the composite we determined the dependence of this value on the dielectric characteristics of the matrix and inclusions and on the volume concentration of inclusions. Quantitative analysis of the obtained dependence in a wide range of defining parameters showed that all the results of the calculations are located in the area of possible values. This area is defined by constructed bilateral estimates. This confirms the appropriate use of the variational approach and the possibility of its application for the prediction of the dielectric characteristics of composites with dispersed inclusions.

  11. Location-Aware Cross-Layer Design Using Overlay Watermarks

    Directory of Open Access Journals (Sweden)

    Paul Ho

    2007-04-01

    Full Text Available A new orthogonal frequency division multiplexing (OFDM) system embedded with overlay watermarks for location-aware cross-layer design is proposed in this paper. One major advantage of the proposed system is the multiple functionalities the overlay watermark provides, which include a cross-layer signaling interface, a transceiver identification for position-aware routing, as well as its basic role as a training sequence for channel estimation. Wireless terminals are typically battery powered and have limited wireless communication bandwidth. Therefore, efficient collaborative signal processing algorithms that consume less energy for computation and less bandwidth for communication are needed. A transceiver aware of its location can also improve routing efficiency by selective flooding or selective forwarding of data only in the desired direction, since in most cases the location of a wireless host is unknown. In the proposed OFDM system, location information of a mobile for efficient routing can be easily derived when a unique watermark is associated with each individual transceiver. In addition, cross-layer signaling and other interlayer interactive information can be exchanged via a new data pipe created by modulating the overlay watermarks. We also study the channel estimation and watermark removal techniques at the physical layer for the proposed overlay OFDM. Our channel estimator iteratively estimates the channel impulse response and the combined signal vector from the overlay OFDM signal. Cross-layer design that leads to low power consumption and more efficient routing is investigated.

  12. UD-WCMA: An Energy Estimation and Forecast Scheme for Solar Powered Wireless Sensor Networks

    KAUST Repository

    Dehwah, Ahmad H.; Elmetennani, Shahrazed; Claudel, Christian

    2017-01-01

    -WCMA) to estimate and predict the variations of the solar power in a wireless sensor network. The presented approach combines the information from the real-time measurement data and a set of stored profiles representing the energy patterns in the WSN's location

  13. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    Science.gov (United States)

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C⁻¹ on a 100-year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach
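
    A minimal sketch of the pool-based bookkeeping described here: carbon decomposes only when thawed, with each pool following first-order kinetics at a temperature-sensitive rate. Pool sizes, rates and Q10 values are placeholders, not the fitted incubation parameters:

        import numpy as np

        def thawed_c_loss(stocks, k_ref, q10, soil_temp_c, t_ref=5.0):
            """Cumulative C loss from a three-pool first-order
            decomposition model on an annual step. stocks: (3,) pool
            sizes; k_ref: (3,) rates (1/yr) at t_ref deg C; q10: (3,)
            temperature sensitivities; soil_temp_c: annual soil
            temperatures. Frozen years contribute no loss."""
            c = np.array(stocks, dtype=float)
            lost = 0.0
            for t in soil_temp_c:
                if t <= 0.0:  # frozen: no decomposition at all
                    continue
                k = np.asarray(k_ref) * np.asarray(q10) ** ((t - t_ref) / 10.0)
                dc = c * (1.0 - np.exp(-k))
                c -= dc
                lost += dc.sum()
            return lost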

  14. On analyzing free-response data on location level

    Science.gov (United States)

    Bandos, Andriy I.; Obuchowski, Nancy A.

    2017-03-01

    Free-response ROC (FROC) data are typically collected when primary question of interest is focused on the proportions of the correct detection-localization of known targets and frequencies of false positive responses, which can be multiple per subject (image). These studies are particularly relevant for CAD and related applications. The fundamental tool of the location-level FROC analysis is the FROC curve. Although there are many methods of FROC analysis, as we describe in this work, some of the standard and popular approaches, while important, are not suitable for analyzing specifically the location-level FROC performance as summarized by the FROC curve. Analysis of the FROC curve, on the other hand, might not be straightforward. Recently we developed an approach for the location-level analysis of the FROC data using the well-known tools for clustered ROC analysis. In the current work, based on previously developed concepts, and using specific examples, we demonstrate the key reasons why specifically location-level FROC performance cannot be fully addressed by the common approaches as well as illustrate the proposed solution. Specifically, we consider the two most salient FROC approaches, namely JAFROC and the area under the exponentially transformed FROC curve (AFE) and show that clearly superior FROC curves can have lower values for these indices. We describe the specific features that make these approaches inconsistent with FROC curves. This work illustrates some caveats for using the common approaches for location-level FROC analysis and provides guidelines for the appropriate assessment or comparison of FROC systems.

  15. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    Science.gov (United States)

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  16. A Robust Localization, Slip Estimation, and Compensation System for WMR in the Indoor Environments

    Directory of Open Access Journals (Sweden)

    Zakir Ullah

    2018-05-01

    Full Text Available A novel approach is proposed for the path tracking of a Wheeled Mobile Robot (WMR) in the presence of an unknown lateral slip. Much of the existing work has assumed pure rolling conditions between the wheel and the ground, under which the wheels of a WMR are supposed to roll without slipping. Complex wheel-ground interactions, acceleration, and steering system noise are the factors which cause WMR wheel slip. A basic research problem in this context is the localization and slip estimation of a WMR from a stream of noisy sensor data when the robot is moving on a slippery surface or moving at high speed. A DecaWave-based ranging system and a Particle Filter (PF) are good candidates for estimating the location of a WMR indoors and outdoors. Unfortunately, wheel slip limits the ultimate performance that can be achieved by a real-world implementation of the PF, because location estimation systems typically rely in part on the robot heading. A small error in the WMR heading leads to a large error in the PF location estimate because of its cumulative nature. In order to enhance the tracking and localization performance of the PF in environments where the main source of error in the PF location estimate is angular noise, two methods were used for heading estimation of the WMR: (1) Reinforcement Learning (RL) and (2) Location-based Heading Estimation (LHE). Trilateration is applied to the DecaWave-based ranging system to calculate the probable location of the WMR; this noisy location, along with the current PF mean, is used to estimate the WMR heading with the above two methods. Besides the WMR location calculation, the DecaWave-based ranging system is also used to update the PF weights. The localization and tracking performance of the PF is significantly improved by incorporating the heading error through RL and LHE. Desired trajectory information is then used to develop an algorithm for extracting the lateral slip along
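
    The trilateration step can be written as a linear least-squares solve once each range equation is differenced against a reference anchor; a minimal 2-D sketch (anchor geometry and noise handling are simplified):

        import numpy as np

        def trilaterate(anchors, ranges):
            """Least-squares 2-D position from ranges to known anchors
            (e.g., DecaWave UWB nodes), linearized by differencing each
            range equation against the first anchor."""
            anchors = np.asarray(anchors, dtype=float)
            ranges = np.asarray(ranges, dtype=float)
            a0, r0 = anchors[0], ranges[0]
            A = 2.0 * (anchors[1:] - a0)
            b = (r0**2 - ranges[1:]**2
                 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        # pos = trilaterate([[0, 0], [5, 0], [0, 5]], [2.9, 3.7, 3.6])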

  17. Human location estimation using thermopile array sensor

    Science.gov (United States)

    Parnin, S.; Rahman, M. M.

    2017-11-01

    Utilization of a Thermopile sensor at an early stage of human detection is challenging, as many things other than humans, such as electrical appliances and animals, produce thermal heat. Therefore, an algorithm for early presence detection has been developed through the study of human body temperature behaviour with respect to the room temperature. The change in the non-contact detected temperature of a human varies between body parts: in an indoor room, the upper parts of the human body change by up to 3°C, whereas the lower parts range from 0.58°C to 1.71°C. The average change in human temperature is used as a conditional set-point value in the program algorithm to detect human presence. The current position of the human and the corresponding angle are obtained when a human is present at certain pixels of the Thermopile's sensor array. Human position was estimated successfully when the developed sensory system was tested on the actuator of a stand fan.
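
    The set-point logic described reduces to comparing each pixel's non-contact temperature against the room temperature; a sketch, with the threshold value assumed for illustration:

        def human_present(pixel_temps_c, room_temp_c, setpoint_c=0.6):
            """Flag presence when any thermopile array pixel exceeds
            room temperature by a set-point derived from the average
            human-induced temperature change (0.6 deg C is an assumed
            illustrative value, not the paper's calibrated one)."""
            return any(t - room_temp_c > setpoint_c for t in pixel_temps_c)

        # The index of the triggered pixel then maps to the angular
        # position of the person within the sensor's field of view.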

  18. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    Science.gov (United States)

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  19. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
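
    For orientation, the classical diagonally loaded Capon weights, the regularized baseline that such RLS beamformers refine; the paper's contribution lies in how the regularization is chosen, which is not reproduced here:

        import numpy as np

        def loaded_capon_weights(R, a, gamma):
            """Capon/MVDR weights with diagonal loading gamma.
            R: (m, m) sample covariance of the array snapshots;
            a: (m,) presumed steering vector."""
            R_inv = np.linalg.inv(R + gamma * np.eye(R.shape[0]))
            return (R_inv @ a) / (a.conj() @ R_inv @ a)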

  20. GIS Approach to Estimation of the Total Phosphorous Transfer in the Pilica River Lowland Catchment

    Directory of Open Access Journals (Sweden)

    Magnuszewski Artur

    2014-09-01

    Full Text Available In this paper, the Pilica River catchment (central Poland) is analyzed with a focus on understanding the total phosphorous transfer along the river system, which also contains the large artificial Sulejów Reservoir. The paper presents a GIS method for estimating the total phosphorous (TP) load from proxy data representing sub-catchment land use and census data. The modelled load of TP is compared to the actual transfer of TP in the Pilica River system. The results show that metrics of connectivity between the river system and dwelling areas, as well as settlement density in the sub-catchments, are useful predictors of the total phosphorous load. The presence of a large reservoir in the middle course of the river can disrupt nutrient transport along the river continuum by trapping and retaining suspended sediment and its associated TP load. Indirect estimation of TP loads with GIS analysis can be useful for identifying beneficial reservoir locations in a catchment. The study has shown that the Sulejów Reservoir is located in the sub-catchment with the largest TP load, and this feature helps explain the problem of reservoir eutrophication.

  1. Detection and Location of Structural Degradation in Mechanical Systems

    International Nuclear Information System (INIS)

    Blakeman, E.D.; Damiano, B.; Phillips, L.D.

    1999-01-01

    The investigation of a diagnostic method for detecting and locating the source of structural degradation in a mechanical system is described in this paper. The diagnostic method uses a mathematical model of the mechanical system to determine relationships between system parameters and measurable spectral features. These relationships are incorporated into a neural network, which associates measured spectral features with system parameters. Condition diagnosis is performed by presenting the neural network with measured spectral features and comparing the system parameters estimated by the neural network to previously estimated values. Changes in the estimated system parameters indicate the location and severity of degradation in the mechanical system

  2. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    Science.gov (United States)

    Qi, Wei

    2017-11-01

    Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation has relied on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach, which integrates a non-stationary probability distribution function into cost-benefit analysis so that the influence of non-stationarity on expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost and design life periods. Two basins, with 54-year and 104-year flood records respectively, are used to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods over different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect the increase in cost required to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore only one design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
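
    The computation such an approach wraps in an optimization loop can be sketched as follows: expected total cost is construction cost plus discounted expected flood damage, with a time-varying (non-stationary) annual exceedance probability. All names and inputs are illustrative:

        import numpy as np

        def expected_total_cost(q, years, p_exceed, c_build, damage, r=0.03):
            """Expected total cost of designing for flood level q over
            a design life. p_exceed(q, t): annual exceedance probability
            in year t (non-stationary, so it may drift with t);
            c_build(q): construction cost; damage: loss when exceeded;
            r: discount rate."""
            disc = (1.0 + r) ** -np.arange(years)
            risk = sum(d * p_exceed(q, t) * damage
                       for t, d in enumerate(disc))
            return c_build(q) + risk

        # Design flood = argmin over q of expected_total_cost(q, ...).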

  3. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (the concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of the CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential
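
    A bare-bones successive-correction OAS of the kind the method builds on: gauge residuals are spread onto the grid with a Gaussian kernel, the kernel is tightened, and the pass repeated. The CMA-OAS adds the multiplicative-additive bias decomposition on top of this, which is omitted here:

        import numpy as np

        def successive_correction(grid_xy, first_guess, obs_xy, obs,
                                  guess_at_obs, scale=20e3, n_pass=3,
                                  shrink=0.5):
            """Gaussian-kernel successive correction (basic OAS).
            grid_xy: (g, 2) grid coords; first_guess: (g,) background
            field (e.g., radar); obs_xy: (m, 2) gauge coords; obs: (m,)
            gauge values; guess_at_obs: (m,) background at the gauges."""
            field = first_guess.astype(float).copy()
            r = obs - guess_at_obs              # residuals to spread
            for _ in range(n_pass):
                d2 = ((grid_xy[:, None] - obs_xy[None]) ** 2).sum(-1)
                w = np.exp(-d2 / (2 * scale**2))
                field += (w @ r) / (w.sum(1) + 1e-12)
                d2o = ((obs_xy[:, None] - obs_xy[None]) ** 2).sum(-1)
                wo = np.exp(-d2o / (2 * scale**2))
                r = r - (wo @ r) / (wo.sum(1) + 1e-12)
                scale *= shrink                 # tighten each pass
            return field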

  4. Location and distribution management of relief centers: a genetic algorithm approach

    NARCIS (Netherlands)

    Najafi, M.; Farahani, R.Z.; de Brito, M.; Dullaert, W.E.H.

    2015-01-01

    Humanitarian logistics is regarded as a key area for improved disaster management efficiency and effectiveness. In this study, a multi-objective integrated logistic model is proposed to locate disaster relief centers while taking into account network costs and responsiveness. Because this location

  5. Estimating CH4 emission from paddy managed soils in southern guinea savanna zone of Nigeria using an integrated approach

    Science.gov (United States)

    Akpeokhai, Agatha; Menz, Gunter; Thonfeld, Frank; Akinluyi, Francis

    2016-04-01

    Methane is one of the most important greenhouse gases, as it has the second greatest climate forcing potential. Paddy fields have been identified as sources of methane, and Nigerian paddies are no exception. In Nigeria, the guinea savanna region is regarded as the bread basket of the nation, and this area is one of the major rice producing regions; its location in the food basket region of the country makes it a very important study site. But by how much do these paddies contribute to methane emissions? So far there are few studies on methane from rice fields in West Africa, which makes this study a very important starting point. To answer this question, methane emission will be estimated using an integrated approach in the north central part of Nigeria. Land use change to rice cultivation was analysed using remote sensing techniques to determine the changes in land cultivated to rice, and methane emission from the identified rice fields will be estimated using the IPCC Tier 1 set of equations. First, relevant indices (Normalized Difference Moisture Index, Normalized Difference Wetness Index and Rice Growth Vegetation Index) were generated to aid classification of rice fields using LANDSAT data from the USGS. Next, the LANDSAT datasets were analyzed for land use change to rice from 1990 to 2014 to generate rice field maps. ERDAS Imagine, ARCGIS and ENVI tools were used to meet these spatial needs. Methane emissions from this region will be
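
    The IPCC Tier 1 computation referenced is essentially a product of an adjusted daily emission factor, the cultivation period and the harvested area; a sketch using the 2006 IPCC baseline factor, with the water-regime and amendment scaling factors collapsed into a single multiplier:

        def ch4_rice_tier1(area_ha, cult_days, ef_base=1.30, sf=1.0):
            """IPCC (2006) Tier 1 CH4 from rice cultivation, in kg CH4.
            area_ha: harvested area (ha); cult_days: cultivation period
            (days); ef_base: baseline daily emission factor
            (kg CH4 ha-1 day-1); sf: combined scaling factor for water
            regime, organic amendments, etc."""
            return ef_base * sf * cult_days * area_ha

        # Example: 10,000 ha cultivated for 120 days gives
        # 1.30 * 120 * 10000 = 1.56e6 kg CH4, i.e. about 1.56 Gg.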

  6. Validation of a Robust Neural Real-Time Voltage Estimator for Active Distribution Grids on Field Data

    DEFF Research Database (Denmark)

    Pertl, Michael; Douglass, Philip James; Heussen, Kai

    2018-01-01

    The installation of measurements in distribution grids enables the development of data driven methods for the power system. However, these methods have to be validated in order to understand the limitations and capabilities for their use. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low voltage distribution grid. The approach enables a real-time voltage estimation at locations in the distribution grid, where otherwise only non-real-time measurements are available.

  7. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.

    Science.gov (United States)

    Golino, Hudson F; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indices such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The proposed approach is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, BIC, EBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors is two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.
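
    In outline, EGA is: estimate a regularized partial-correlation network, then count its communities. A rough Python analogue (the original uses an EBIC-tuned graphical lasso and the walktrap algorithm; here cross-validated tuning and Louvain communities stand in):

        import numpy as np
        import networkx as nx
        from sklearn.covariance import GraphicalLassoCV
        from networkx.algorithms.community import louvain_communities

        def ega_ndim(data, seed=1):
            """EGA-style dimension count from a data matrix of shape
            (n_observations, n_items)."""
            prec = GraphicalLassoCV().fit(data).precision_
            d = np.sqrt(np.diag(prec))
            pcorr = -prec / np.outer(d, d)   # partial correlations
            np.fill_diagonal(pcorr, 0.0)
            graph = nx.from_numpy_array(np.abs(pcorr))
            return len(louvain_communities(graph, seed=seed))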

  8. Detection of infarct lesions from single MRI modality using inconsistency between voxel intensity and spatial location--a 3-D automatic approach.

    Science.gov (United States)

    Shen, Shan; Szameitat, André J; Sterr, Annette

    2008-07-01

    Detection of infarct lesions using traditional segmentation methods is always problematic due to the intensity similarity between lesions and normal tissues, so multispectral MRI modalities have often been employed for this purpose. However, the high cost of MRI scans and the severity of patient conditions restrict the collection of multiple images. Therefore, in this paper, a new 3-D automatic lesion detection approach was proposed, which requires only a single type of anatomical MRI scan. It was developed on the premise that, when lesions are present, a voxel-intensity-based segmentation and a spatial-location-based tissue distribution should be inconsistent in the regions of the lesions. The degree of this inconsistency was calculated, indicating the likelihood of tissue abnormality, and lesions were identified where the inconsistency exceeded a defined threshold. In this approach, the intensity-based segmentation was implemented with the conventional fuzzy c-means (FCM) algorithm, while the spatial location of tissues was provided by prior tissue probability maps. The use of simulated MRI lesions allowed us to quantitatively evaluate the performance of the proposed method, as the size and location of the lesions were prespecified. The results showed that our method effectively detected lesions with 40-80% signal reduction compared to normal tissues (similarity index > 0.7). The capability of the proposed method in practice was also demonstrated on real infarct lesions from 15 stroke patients, where the lesions detected were in broad agreement with the true lesions. Furthermore, a comparison to a statistical segmentation approach presented in the literature suggested that our 3-D lesion detection approach was more reliable. Future work will focus on adapting the current method to multiple sclerosis lesion detection.
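
    One plausible formalization of the intensity-versus-location inconsistency the method computes (the paper's exact measure may differ): compare the FCM tissue memberships against the atlas priors voxel by voxel and threshold the disagreement:

        import numpy as np

        def inconsistency(memberships, priors):
            """Per-voxel disagreement between intensity-based FCM
            memberships and spatial tissue priors, both of shape
            (n_voxels, n_tissues) with rows summing to 1. Returns a
            total-variation distance in [0, 1]."""
            return 0.5 * np.abs(memberships - priors).sum(axis=1)

        # lesion_voxels = inconsistency(u, prior) > tau  # tau: tuned cutoff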

  9. Reassessing the forest impacts of protection: the challenge of nonrandom location and a corrective method.

    Science.gov (United States)

    Joppa, Lucas; Pfaff, Alexander

    2010-01-01

    Protected areas are leading tools in efforts to slow global species loss and appear also to have a role in climate change policy. Understanding their impacts on deforestation informs environmental policies. We review several approaches to evaluating protection's impact on deforestation, given three hurdles to empirical evaluation, and note that "matching" techniques from economic impact evaluation address those hurdles. The central hurdle derives from the fact that protected areas are distributed nonrandomly across landscapes. Nonrandom location can be intentional, and for good reasons, including biological and political ones. Yet even so, when protected areas are biased in their locations toward less-threatened areas, many methods for impact evaluation will overestimate protection's effect. The use of matching techniques allows one to control for known landscape biases when inferring the impact of protection. Applications of matching have revealed considerably lower impact estimates of forest protection than produced by other methods. A reduction in the estimated impact from existing parks does not suggest, however, that protection is unable to lower clearing. Rather, it indicates the importance of variation across locations in how much impact protection could possibly have on rates of deforestation. Matching, then, bundles improved estimates of the average impact of protection with guidance on where new parks' impacts will be highest. While many factors will determine where new protected areas will be sited in the future, we claim that the variation across space in protection's impact on deforestation rates should inform site choice.
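
    The matching estimator referenced works roughly as follows; a minimal nearest-neighbor sketch (real applications add propensity scores, calipers and balance checks):

        import numpy as np

        def att_nn_matching(x_prot, y_prot, x_unprot, y_unprot):
            """Average impact of protection on an outcome (e.g., the
            deforestation rate) by matching each protected unit to its
            nearest unprotected unit on standardized covariates.
            x_*: (n, k) covariates; y_*: (n,) outcomes."""
            diffs = []
            for x, y in zip(x_prot, y_prot):
                j = np.argmin(((x_unprot - x) ** 2).sum(axis=1))
                diffs.append(y - y_unprot[j])
            return float(np.mean(diffs))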

  10. A 3-level Bayesian mixed effects location scale model with an application to ecological momentary assessment data.

    Science.gov (United States)

    Lin, Xiaolei; Mermelstein, Robin J; Hedeker, Donald

    2018-06-15

    Ecological momentary assessment studies usually produce intensively measured longitudinal data with large numbers of observations per unit, and research interest is often centered around understanding the changes in variation of people's thoughts, emotions and behaviors. Hedeker et al. developed a 2-level mixed effects location scale model that allows observed covariates as well as unobserved variables to influence both the mean and the within-subjects variance, for a 2-level data structure where observations are nested within subjects. In some ecological momentary assessment studies, subjects are measured at multiple waves, and within each wave, subjects are measured over time. Li and Hedeker extended the original 2-level model to a 3-level data structure where observations are nested within days and days are then nested within subjects, by including a random location and scale intercept at the intermediate wave level. However, the 3-level random intercept model assumes a constant response change rate for both the mean and variance. To account for changes in variance across waves, as well as clustering attributable to waves, we propose a more comprehensive location scale model that allows subject heterogeneity at baseline as well as across different waves, for a 3-level data structure where observations are nested within waves and waves are then further nested within subjects. The model parameters are estimated using Markov chain Monte Carlo methods. We provide details on the Bayesian estimation approach and demonstrate how the Stan statistical software can be used to sample from the desired distributions and achieve consistent estimates. The proposed model is validated via a series of simulation studies. Data from an adolescent smoking study are analyzed to demonstrate this approach. The analyses clearly favor the proposed model and show significant subject heterogeneity at baseline as well as change over time, for both mood mean and variance. The proposed 3-level
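
    In generic form (notation assumed here, since the abstract gives no equations), a three-level mixed-effects location scale model lets covariates and random effects enter both the mean and the log within-subject variance:

        $$y_{ijk} = \mathbf{x}_{ijk}^{\top}\boldsymbol{\beta} + \nu_i + \nu_{ij} + \varepsilon_{ijk}, \qquad \varepsilon_{ijk} \sim N(0, \sigma^2_{ijk}),$$
        $$\log \sigma^2_{ijk} = \mathbf{w}_{ijk}^{\top}\boldsymbol{\tau} + \omega_i + \omega_{ij},$$

    with subject i, wave j and occasion k, where (ν_i, ω_i) are subject-level and (ν_ij, ω_ij) wave-level random location and scale effects.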

  11. An integrative modeling approach for the efficient estimation of cross sectional tibial stresses during locomotion.

    Science.gov (United States)

    Derrick, Timothy R; Edwards, W Brent; Fellin, Rebecca E; Seay, Joseph F

    2016-02-08

    The purpose of this research was to utilize a series of models to estimate the stress in a cross section of the tibia, located 62% from the proximal end, during walking. Twenty-eight male, active duty soldiers walked on an instrumented treadmill while external force data and kinematics were recorded. A rigid body model was used to estimate joint moments and reaction forces. A musculoskeletal model was used to gather muscle length, muscle velocity, moment arm and orientation information. Optimization procedures were used to estimate muscle forces, and finally internal bone forces and moments were applied to an inhomogeneous, subject-specific bone model obtained from CT scans to estimate stress in the bone cross section. Validity was assessed by comparison to stresses calculated from strain gage data in the literature, and sensitivity was investigated using two simplified versions of the bone model: a homogeneous model and an ellipse approximation. Peak compressive stress occurred on the posterior aspect of the cross section (-47.5 ± 14.9 MPa). Peak tensile stress occurred on the anterior aspect (27.0 ± 11.7 MPa), while the location of peak shear was variable between subjects (7.2 ± 2.4 MPa). Peak compressive, tensile and shear stresses were within 0.52 MPa, 0.36 MPa and 3.02 MPa, respectively, of those calculated from the converted strain gage data. Peak values from the inhomogeneous bone model correlated well with the homogeneous model (normal: 0.99; shear: 0.94), as did the normal ellipse model (r=0.89-0.96). However, the relationship between shear stress in the inhomogeneous model and the ellipse model was less accurate (r=0.64). The procedures detailed in this paper provide a non-invasive and relatively quick method of estimating cross-sectional stress that holds promise for assessing injury and osteogenic stimulus in bone during normal physical activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
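
    The final step reduces to classical beam mechanics; a homogeneous-section sketch (the paper's inhomogeneous model weights the section integrals by the CT-derived local modulus, and sign conventions vary):

        def normal_stress(f_axial, m_y, m_z, area, i_y, i_z, y, z):
            """Normal stress at point (y, z) of a cross section under
            an axial force plus bending moments about the y and z axes:
            sigma = F/A + My*z/Iy - Mz*y/Iz for this sign convention."""
            return f_axial / area + m_y * z / i_y - m_z * y / i_z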

  12. A simplified approach to estimating the distribution of occasionally-consumed dietary components, applied to alcohol intake

    Directory of Open Access Journals (Sweden)

    Julia Chernova

    2016-07-01

    Full Text Available Background: Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods: We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results: The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out to be non-significant at the 5 % significance level (p-value 0.062), but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions: The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
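
    As a flavor of the numerical-integration alternative, Gauss-Hermite quadrature can replace MC draws of the random effect; a one-random-effect sketch with illustrative parameters (the paper's model uses a correlated bivariate effect):

        import numpy as np
        from numpy.polynomial.hermite import hermgauss
        from scipy.special import expit

        def mean_intake(b0, b1, su, m0, s2, n=30):
            """Population mean of an occasionally-consumed food under a
            two-part model sharing one random effect u ~ N(0, su^2):
            P(consume) = expit(b0 + b1*u), log-amount ~ N(m0 + u, s2).
            Integrates over u by Gauss-Hermite quadrature instead of
            Monte Carlo. Parameter names are illustrative."""
            x, w = hermgauss(n)        # nodes/weights for exp(-x^2)
            u = np.sqrt(2.0) * su * x  # change of variables to N(0, su^2)
            vals = expit(b0 + b1 * u) * np.exp(m0 + u + s2 / 2.0)
            return (w / np.sqrt(np.pi)) @ vals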

  13. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    Science.gov (United States)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  14. Reconciling estimates of the contemporary North American carbon balance among terrestrial biosphere models, atmospheric inversions, and a new approach for estimating net ecosystem exchange from inventory-based data

    Science.gov (United States)

    Hayes, Daniel J.; Turner, David P.; Stinson, Graham; McGuire, A. David; Wei, Yaxing; West, Tristram O.; Heath, Linda S.; de Jong, Bernardus; McConkey, Brian G.; Birdsey, Richard A.; Kurz, Werner A.; Jacobson, Andrew R.; Huntzinger, Deborah N.; Pan, Yude; Post, W. Mac; Cook, Robert B.

    2012-01-01

    We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000–2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2, while accounting for lateral transfers of forest and crop products as well as their eventual emissions. The total NEE estimate of a -327 ± 252 TgC yr-1 sink for NA was driven primarily by CO2 uptake in the Forest Lands sector (-248 TgC yr-1), largely in the Northwest and Southeast regions of the US, and in the Crop Lands sector (-297 TgC yr-1), predominantly in the Midwest US states. These sinks are counteracted by the carbon source estimated for the Other Lands sector (+218 TgC yr-1), where much of the forest and crop products are assumed to be returned to the atmosphere (through livestock and human consumption). The ecosystems of Mexico are estimated to be a small net source (+18 TgC yr-1) due to land use change between 1993 and 2002. We compare these inventory-based estimates with results from a suite of terrestrial biosphere and atmospheric inversion models, where the mean continental-scale NEE estimates for the two ensembles are -511 TgC yr-1 and -931 TgC yr-1, respectively. In the modeling approaches, all sectors, including Other Lands, were generally estimated to be a carbon sink, driven in part by assumed CO2 fertilization and/or lack of consideration of carbon sources from disturbances and product emissions. Additional fluxes not measured by the inventories, although highly uncertain, could add an additional -239 TgC yr-1 to the inventory-based NA sink estimate, thus suggesting some convergence with the modeling approaches.

  15. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    The proportion of geocodes resolving into incorrect census tracts ranged between 4.5% and 5.3%, depending upon the county and geocoding agency. Conclusion Geocode location uncertainty can be estimated using tax parcel databases in a GIS. This approach is a viable alternative to global positioning system field validation of geocodes.

  16. Application of a time probabilistic approach to seismic landslide hazard estimates in Iran

    Science.gov (United States)

    Rajabi, A. M.; Del Gaudio, V.; Capolongo, D.; Khamehchiyan, M.; Mahdavifar, M. R.

    2009-04-01

    Iran is a country located in a tectonically active belt and is prone to earthquakes and related phenomena. In recent years, several earthquakes have caused many fatalities and damage to facilities, e.g. the Manjil (1990), Avaj (2002), Bam (2003) and Firuzabad-e-Kojur (2004) earthquakes. These earthquakes generated many landslides. For instance, catastrophic landslides triggered by the Manjil Earthquake (Ms = 7.7) in 1990 buried the village of Fatalak, killed more than 130 people and cut many important roads and other lifelines, resulting in major economic disruption. In general, earthquakes in Iran have been concentrated in two major zones with different seismicity characteristics: one is the region of Alborz and Central Iran and the other is the Zagros Orogenic Belt. Understanding where seismically induced landslides are most likely to occur is crucial to reducing property damage and loss of life in future earthquakes. For this purpose, a time probabilistic approach for earthquake-induced landslide hazard at regional scale, proposed by Del Gaudio et al. (2003), has been applied to the whole Iranian territory to provide the basis of hazard estimates. This method consists in evaluating the recurrence of seismically induced slope failure conditions inferred from Newmark's model. First, by adopting Arias intensity to quantify seismic shaking and using different Arias attenuation relations for the Alborz - Central Iran and Zagros regions, well-established methods of seismic hazard assessment, based on the Cornell (1968) method, were employed to obtain the occurrence probabilities for different levels of seismic shaking in a time interval of interest (50 years). Then, following Jibson (1998), empirical formulae specifically developed for Alborz - Central Iran and Zagros were used to represent, according to Newmark's model, the relation linking Newmark's displacement Dn to Arias intensity Ia and to slope critical acceleration ac. These formulae were employed to evaluate
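
    For readers unfamiliar with the Newmark step, the following sketch chains the pieces together using the widely cited global Jibson (1998) regression (the record uses region-specific formulae for Alborz - Central Iran and Zagros, which are not reproduced here); the hazard-curve numbers and the slope's critical acceleration are invented.

    import numpy as np

    def newmark_displacement_cm(Ia_ms, ac_g):
        # Global Jibson (1998) regression, Dn in cm, Ia in m/s, ac in g:
        # log10(Dn) = 1.521*log10(Ia) - 1.993*log10(ac) - 1.546
        return 10 ** (1.521 * np.log10(Ia_ms) - 1.993 * np.log10(ac_g) - 1.546)

    # Invented 50-yr hazard curve for Arias intensity at a site: P(Ia > level).
    ia_levels = np.array([0.1, 0.3, 1.0, 3.0])        # m/s
    p_exceed_50yr = np.array([0.40, 0.15, 0.04, 0.005])

    ac = 0.05                                         # slope critical acceleration, g (assumed)
    dn = newmark_displacement_cm(ia_levels, ac)
    threshold_cm = 10.0                               # damaging displacement (assumed)
    # Smallest shaking level whose displacement exceeds the threshold gives the hazard:
    exceeds = dn >= threshold_cm
    p_failure = p_exceed_50yr[np.argmax(exceeds)] if exceeds.any() else 0.0
    print(dn.round(1), p_failure)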

  17. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    Science.gov (United States)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
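
    A minimal sketch of the non-stationary Monte Carlo step, with invented GEV parameters standing in for the VIC-derived annual fits: each realization of the design lifespan draws one annual maximum per year from that year's own GEV, and the lifetime-maximum percentiles are compared with a stationary baseline.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(42)
    lifespan = 50                                     # design lifespan in years

    # Hypothetical non-stationary parameters: location drifts upward over the lifespan.
    loc = 100 + 0.5 * np.arange(lifespan)             # m^3/s
    scale = np.full(lifespan, 20.0)
    shape = np.full(lifespan, -0.1)                   # scipy's c = -xi convention

    n_real = 10_000                                   # Monte Carlo realizations of the lifespan
    # One annual maximum per year per realization, drawn from that year's own GEV:
    annual_max = genextreme.rvs(shape, loc=loc, scale=scale,
                                size=(n_real, lifespan), random_state=rng)
    lifetime_max = annual_max.max(axis=1)

    # Stationary comparison: every year reuses the year-0 distribution.
    stat_max = genextreme.rvs(shape[0], loc=loc[0], scale=scale[0],
                              size=(n_real, lifespan), random_state=rng).max(axis=1)

    print(np.percentile(lifetime_max, 99), np.percentile(stat_max, 99))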

  18. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    Science.gov (United States)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model shows a better fit than the state space model with linear and Gaussian assumptions.
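
    The filtering idea can be sketched with a bootstrap particle filter, one of the standard Monte Carlo algorithms for such models (the paper's algorithms additionally estimate model parameters and remaining useful life); the degradation process, indicator model and all constants below are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_p = 60, 2000                                 # time steps, particles

    # Hypothetical monotone degradation (crack depth) with nonlinear growth and a
    # nonlinear, noisy indirect indicator: the setting the state space model targets.
    def grow(x):    return x + 0.02 * x**1.3 + rng.gamma(2.0, 0.005, size=x.shape)
    def measure(x): return np.log(x) + rng.normal(0, 0.15, size=x.shape)

    # Simulate one "true" asset and its indirect observations.
    x_true = np.empty(T); x_true[0] = 0.5
    for t in range(1, T): x_true[t] = grow(x_true[t-1:t])[0]
    y = measure(x_true)

    # Bootstrap particle filter: propagate, weight by the measurement likelihood, resample.
    x = np.full(n_p, 0.5)
    est = np.empty(T)
    for t in range(T):
        if t > 0: x = grow(x)
        w = np.exp(-0.5 * ((y[t] - np.log(x)) / 0.15) ** 2)   # Gaussian observation density
        w /= w.sum()
        est[t] = w @ x                                        # posterior mean estimate
        x = x[rng.choice(n_p, n_p, p=w)]                      # multinomial resampling

    print(f"final true depth {x_true[-1]:.3f}, filtered estimate {est[-1]:.3f}")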

  19. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    Science.gov (United States)

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides an accurate and precise estimate of AUC(0–∞) and any AUC(0–∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0–∞) values and the tissue-to-plasma AUC(0–∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mice brain distribution study with serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0–∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0–∞)-based parameters such as partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
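
    For contrast with the Bayesian posterior, the classic Bailer-type computation referenced at the end of the record can be sketched in a few lines: a trapezoidal AUC over group mean concentrations, with a variance that exploits the independence of the sacrifice groups. All data are invented.

    import numpy as np

    # Serial sacrifice design: at each time point a separate group of animals is
    # sacrificed, so concentrations are means over different animals.
    t = np.array([0.25, 0.5, 1, 2, 4, 8])             # h
    mean_c = np.array([12.0, 18.5, 15.2, 9.8, 4.1, 1.2])   # ng/mL group means
    sd_c = np.array([2.1, 3.0, 2.6, 1.9, 1.0, 0.4])
    n = np.full(6, 4)                                 # animals per time point

    # Trapezoidal coefficients so that AUC = sum_i c_i * mean_c_i:
    c = np.zeros_like(t)
    c[0] = (t[1] - t[0]) / 2
    c[-1] = (t[-1] - t[-2]) / 2
    c[1:-1] = (t[2:] - t[:-2]) / 2

    auc = c @ mean_c
    var_auc = np.sum(c**2 * sd_c**2 / n)              # Bailer variance: groups independent
    print(f"AUC = {auc:.1f} +/- {np.sqrt(var_auc):.1f} ng*h/mL over the sampling times")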

  20. Lord's Wald Test for Detecting DIF in Multidimensional IRT Models: A Comparison of Two Estimation Approaches

    Science.gov (United States)

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…

  1. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, the authors present an overview of measurement-based methods, but the main topic is the theoretically more demanding models: analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as bow-tie filters, offset detectors, truncated data, and dual-source CT.
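
    A toy version of the beam-scatter-kernel superposition idea on a single 2D projection (invented kernel width, scatter amplitude and object; real implementations deform and weight the kernels as the record discusses):

    import numpy as np
    from scipy.signal import fftconvolve

    # Scatter is modeled as the projection convolved with a broad spatial kernel,
    # then subtracted from the measured projection.
    ny, nx = 128, 128
    primary = np.ones((ny, nx)); primary[40:90, 30:100] = 0.2   # object attenuates the beam

    yy, xx = np.mgrid[-ny//2:ny//2, -nx//2:nx//2]
    sigma = 25.0                                      # kernel width in pixels (assumed)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()

    scatter_amp = 0.15                                # scatter-to-primary scaling (assumed)
    scatter = scatter_amp * fftconvolve(primary, kernel, mode="same")
    measured = primary + scatter                      # forward model of one projection

    est_scatter = scatter_amp * fftconvolve(measured, kernel, mode="same")
    corrected = measured - est_scatter                # one-shot subtractive compensation
    print(np.abs(corrected - primary).max())          # residual error of the correction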

  2. Bluetooth enables in-door mobile location services

    OpenAIRE

    Thongthammachart, Saowanee; Olesen, Henning

    2003-01-01

    Several technologies can be applied to enable mobile location services, but most of them suffer from limited accuracy and availability. GPS can solve the problem of determining the location of users in most outdoor situations, but an end-user position inside a building cannot be pinpointed. Other mobile location techniques can also provide the user's position, but the accuracy is rather low. In order to improve the accuracy and make location-based services really attractive, existing approach...

  3. Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.

    Science.gov (United States)

    Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad

    2013-09-01

    We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to largely populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab and the estimated optimal locations are analyzed along with their respective 95% credible regions.
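
    A minimal sketch of the siting logic under complete knowledge of the arsenic surface, with an invented objective (weights, functional form and all data are assumptions, not the authors' specification): lower objective values favor high-arsenic sites near large towns.

    import numpy as np

    rng = np.random.default_rng(3)
    cand = rng.uniform(0, 10, size=(200, 2))          # candidate dtw coordinates (km)
    towns = np.array([[2.0, 3.0], [7.5, 8.0]])        # town locations
    pop = np.array([5000, 12000])                     # town populations

    def arsenic(xy):                                  # smooth synthetic arsenic surface (ppb)
        return 80 + 60 * np.sin(xy[:, 0] / 2) * np.cos(xy[:, 1] / 3)

    d = np.linalg.norm(cand[:, None, :] - towns[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    # Lower objective = better site: penalize low arsenic, small towns, long distances.
    obj = -arsenic(cand) - 0.01 * pop[nearest] + 15.0 * d[np.arange(len(cand)), nearest]
    print("best candidate location:", cand[obj.argmin()])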

  4. Single-phased Fault Location on Transmission Lines Using Unsynchronized Voltages

    Directory of Open Access Journals (Sweden)

    ISTRATE, M.

    2009-10-01

    Full Text Available Increased accuracy in fault detection and location makes maintenance easier, which motivates the development of new methods for precise estimation of the fault location. In the literature, many methods for fault location using voltage and current measurements at one or both terminals of power grids' lines are presented. The double-end synchronized data algorithms are very precise, but the current transformers can limit the accuracy of these estimations. The paper presents an algorithm to estimate the location of single-phased faults which uses only voltage measurements at both terminals of the transmission line, eliminating the error due to current transformers and without introducing the restriction of perfect data synchronization. In such conditions, the algorithm can be used with the existing equipment of most power grids, the installation of phasor measurement units with GPS-synchronized timing not being compulsory. Only the positive-sequence line and source parameters are used, thus eliminating the uncertainty in zero-sequence parameter estimation. The algorithm is tested using the results of EMTP-ATP simulations, after validation of the ATP models on the basis of results recorded in a real power grid.

  5. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    Science.gov (United States)

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The
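
    The decomposition step can be sketched as follows, with invented pool fractions, rates, thaw year and temperature trajectory: frozen carbon is inert, and once thawed each of three pools decays at a first-order, Q10-adjusted rate.

    import numpy as np

    years = np.arange(2010, 2101)
    thaw_year = 2040                                  # year this soil horizon thaws (assumed)
    C0 = 50.0                                         # Pg C in the horizon (assumed)
    frac = np.array([0.05, 0.35, 0.60])               # fast/slow/passive pool split (assumed)
    k_ref = np.array([0.5, 0.02, 0.001])              # 1/yr decomposition rates at 5 C (assumed)
    q10, T_ref = 2.0, 5.0

    T_soil = -2.0 + 0.05 * (years - 2010)             # warming soil temperature (toy trend)
    pools = C0 * frac
    remaining = np.empty(len(years))
    for i, (yr, T) in enumerate(zip(years, T_soil)):
        if yr >= thaw_year:
            k = k_ref * q10 ** ((T - T_ref) / 10.0)   # temperature-adjusted rates
            pools = pools * np.exp(-k)                # one year of first-order decay
        remaining[i] = pools.sum()

    print(f"C lost by 2100: {C0 - remaining[-1]:.1f} Pg C")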

  6. Location of microseismic swarms induced by salt solution mining

    Science.gov (United States)

    Kinscher, J.; Bernard, P.; Contrucci, I.; Mangeney, A.; Piguet, J. P.; Bigarre, P.

    2015-01-01

    Ground failures, caving processes and collapses of large natural or man-made underground cavities can produce significant socio-economic damage and represent a serious risk for mine management and municipalities. In order to improve our understanding of the mechanisms governing such geohazards and to test the potential of geophysical methods to prevent them, the development and collapse of a salt solution mining cavity was monitored in the Lorraine basin in northeastern France. During the experiment, a huge microseismic data set (~50 000 event files) was recorded by a local microseismic network. 80 per cent of the data comprised unusual swarming sequences with complex clusters of superimposed microseismic events which could not be processed through standard automatic detection and location routines. Here, we present two probabilistic methods which provide a powerful tool to assess the spatio-temporal characteristics of these swarming sequences in an automatic manner. Both methods take advantage of strong attenuation effects and significantly polarized P-wave energies at higher frequencies (>100 Hz). The first location approach uses simple signal amplitude estimates for different frequency bands, and an attenuation model to constrain the hypocentre locations. The second approach was designed to identify significantly polarized P-wave energies and the associated polarization angles, which provide very valuable information on the hypocentre location. Both methods are applied to a microseismic data set recorded during an important step of the development of the cavity, that is, before its collapse. From our results, systematic spatio-temporal epicentre migration trends are observed on the order of seconds to minutes and several tens of metres, partially associated with cyclic behaviour. In addition, from the spatio-temporal distribution of epicentre clusters we observed similar epicentre migration on the order of hours and days. All together, we

  7. Assessment of New Approaches in Geothermal Exploration Decision Making: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Akar, S.; Young, K. R.

    2015-02-01

    Geothermal exploration projects have a significant amount of risk associated with uncertainties encountered in the discovery of the geothermal resource. Understanding when and how to proceed in an exploration program, and when to walk away from a site, are two of the largest challenges for increased geothermal deployment. Current methodologies for exploration decision making are left to subjective expert opinion, which can be incorrectly biased by expertise (e.g. geochemistry, geophysics), geographic location of focus, and the assumed conceptual model. The aim of this project is to develop a methodology for more objective geothermal exploration decision making at a given location, including go/no-go decision points to help developers and investors decide when to give up on a location. In this scope, two different approaches are investigated: 1) value of information analysis (VOIA), which is used for evaluating and quantifying the value of data before they are purchased, and 2) enthalpy-based exploration targeting based on reservoir size, temperature gradient estimates, and internal rate of return (IRR). The first approach, VOIA, aims to identify the value of particular data when making decisions with an uncertain outcome. This approach targets the pre-drilling phase of exploration. These estimated VOIs are highly affected by the size of the project and still have a high degree of subjectivity in the assignment of probabilities. The second approach, exploration targeting, is focused on decision making during the drilling phase. It starts with a basic geothermal project definition that includes target and minimum required production capacity and initial budgeting for exploration phases. Then, it uses average temperature gradient, reservoir temperature estimates, and production capacity to define targets and go/no-go limits. The decision analysis in this approach is based on achieving a minimum IRR at each phase of the project. This second approach was
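
    A value-of-information calculation reduces, in its simplest form, to comparing expected payoffs with and without the information. The toy numbers below (probabilities and payoffs invented) compute the expected value of perfect information, an upper bound on what any survey data are worth:

    import numpy as np

    # Two states of nature (viable resource or not), two actions (drill or walk away).
    p_res = 0.3                                       # prior P(viable resource)
    payoff = {("drill", True): 40.0, ("drill", False): -10.0,
              ("walk", True): 0.0, ("walk", False): 0.0}   # $M, invented

    ev = {a: p_res * payoff[(a, True)] + (1 - p_res) * payoff[(a, False)]
          for a in ("drill", "walk")}
    best_prior = max(ev.values())                     # best action on the prior alone
    # With perfect information you pick the best action in each state first:
    ev_perfect = (p_res * max(payoff[("drill", True)], payoff[("walk", True)])
                  + (1 - p_res) * max(payoff[("drill", False)], payoff[("walk", False)]))
    evpi = ev_perfect - best_prior                    # ceiling on the value of any survey
    print(f"EV(prior) = {best_prior:.1f} $M, EVPI = {evpi:.1f} $M")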

  8. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf; Kodzius, Rimantas; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Bajic, Vladimir B.

    2013-01-01

    from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly

  9. Enumerating the Hidden Homeless: Strategies to Estimate the Homeless Gone Missing From a Point-in-Time Count

    Directory of Open Access Journals (Sweden)

    Agans Robert P.

    2014-06-01

    Full Text Available To receive federal homeless funds, communities are required to produce statistically reliable, unduplicated counts or estimates of homeless persons in sheltered and unsheltered locations during a one-night period (within the last ten days of January), called a point-in-time (PIT) count. In Los Angeles, a general population telephone survey was implemented to estimate the number of unsheltered homeless adults who are hidden from view during the PIT count. Two estimation approaches were investigated: (i) the number of homeless persons identified as living on private property, which employed a conventional household weight for the estimated total (Horvitz-Thompson approach); and (ii) the number of homeless persons identified as living on a neighbor's property, which employed an additional adjustment derived from the size of the neighborhood network to estimate the total (multiplicity-based approach). This article compares the results of these two methods and discusses the implications therein.
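
    The two expansions differ only in the weight attached to each report. A toy simulation (all counts and the network size k are invented) shows both recovering the same total:

    import numpy as np

    rng = np.random.default_rng(7)
    N, n = 400_000, 3_000                             # population and sample of households
    pi = n / N                                        # equal inclusion probability

    y_own = rng.binomial(1, 0.002, size=n)            # homeless person on own property
    k = 5                                             # assumed neighborhood network size
    y_net = rng.binomial(1, 0.002 * k, size=n)        # sightings anywhere in the network

    ht_total = y_own.sum() / pi                       # Horvitz-Thompson expansion
    mult_total = y_net.sum() / (pi * k)               # multiplicity: divide by network size
    print(ht_total, mult_total)                       # both estimate the same total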

  10. Estimation of the order of an autoregressive time series: a Bayesian approach

    International Nuclear Information System (INIS)

    Robb, L.J.

    1980-01-01

    Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the model can be estimated by the least-squares, maximum-likelihood, or Yule-Walker method. The basic problem is estimating the order of the model. The problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance; and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior in most cases.
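
    A compact approximation of the idea (not the record's analytical posterior, which integrates over the coefficients exactly): fit AR(p) by least squares for each candidate order and convert BIC values into an approximate posterior over p under a uniform prior.

    import numpy as np

    rng = np.random.default_rng(11)
    T = 400
    x = np.zeros(T)
    for t in range(2, T):                             # simulate an AR(2) truth
        x[t] = 0.6 * x[t-1] - 0.3 * x[t-2] + rng.normal()

    def bic_ar(x, p):
        y = x[p:]
        if p == 0:
            resid = y
        else:
            X = np.column_stack([x[p - i: len(x) - i] for i in range(1, p + 1)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
        sigma2 = resid @ resid / len(y)
        return len(y) * np.log(sigma2) + p * np.log(len(y))

    orders = range(0, 7)
    bics = np.array([bic_ar(x, p) for p in orders])
    post = np.exp(-(bics - bics.min()) / 2)           # BIC as approximate log evidence
    post /= post.sum()
    print(dict(zip(orders, np.round(post, 3))))       # mass should concentrate at p = 2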

  11. Real-time approaches to the estimation of local wind velocity for a fixed-wing unmanned air vehicle

    International Nuclear Information System (INIS)

    Chan, W L; Lee, C S; Hsiao, F B

    2011-01-01

    Three real-time approaches to estimating local wind velocity for a fixed-wing unmanned air vehicle are presented in this study. All three methods work around the navigation equations with added wind components. The first approach calculates the local wind speed by substituting the ground speed and ascent rate data given by the Global Positioning System (GPS) into the navigation equations. The second and third approaches utilize the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), respectively. The results show that, despite the nonlinearity of the navigation equations, the EKF performance is proven to be on a par with the UKF. A time-varying noise estimation method based on the Wiener filter is also discussed. Results are compared with the average wind speed measured on the ground. All three approaches are proven to be reliable, with stated advantages and disadvantages.
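
    The first (direct GPS) approach amounts to a wind-triangle subtraction, sketched below with illustrative numbers and with sideslip and angle of attack neglected:

    import numpy as np

    # Wind vector = GPS-derived ground velocity minus the air-relative velocity
    # reconstructed from true airspeed and attitude. All numbers are invented.
    v_ground = np.array([22.0, 3.0, -0.5])            # m/s, NED frame, from GPS
    tas = 20.0                                        # true airspeed, m/s (pitot)
    heading, pitch = np.deg2rad(10.0), np.deg2rad(2.0)

    v_air = tas * np.array([np.cos(pitch) * np.cos(heading),
                            np.cos(pitch) * np.sin(heading),
                            -np.sin(pitch)])          # air-relative velocity in NED
    wind = v_ground - v_air
    print("wind NED (m/s):", wind.round(2))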

  12. Balancing uncertainty of context in ERP project estimation: an approach and a case study

    NARCIS (Netherlands)

    Daneva, Maia

    2010-01-01

    The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -

  13. A multicriteria decision making approach based on fuzzy theory and credibility mechanism for logistics center location selection.

    Science.gov (United States)

    Wang, Bowen; Xiong, Haitao; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve the reliability and suitability of the selection with respect to the impacts of both qualitative and quantitative factors. However, they do not consider the consistency and the historical assessment accuracy of experts in past decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a priority mechanism for consistency and historical assessment accuracy into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on a credibility mechanism can improve the reliability and suitability of site selection for the logistics center.

  14. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap below 180°) and close to the stations (distance to the first station below 15 km). Errors on velocity models and accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
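
    The Equal Differential Time idea is easy to demonstrate on a toy 2D grid with a homogeneous velocity model (NLLoc itself uses 3D travel-time grids and samples a full posterior); station geometry, velocity and the source are invented. Differencing arrival times cancels the unknown origin time:

    import numpy as np

    v = 5.0                                           # km/s, homogeneous medium
    stations = np.array([[0, 0], [30, 5], [10, 25], [28, 28]], float)
    src = np.array([12.0, 14.0])                      # true epicenter (km)
    t0 = 4.2                                          # unknown origin time
    obs = t0 + np.linalg.norm(stations - src, axis=1) / v

    xs = np.linspace(0, 40, 201)
    ys = np.linspace(0, 40, 201)
    X, Y = np.meshgrid(xs, ys)
    tt = np.linalg.norm(stations[:, None, None, :] -
                        np.stack([X, Y], -1)[None], axis=-1) / v   # (nsta, ny, nx)

    misfit = np.zeros_like(X)
    nsta = len(stations)
    for i in range(nsta):
        for j in range(i + 1, nsta):
            misfit += ((obs[i] - obs[j]) - (tt[i] - tt[j])) ** 2   # origin time cancels

    iy, ix = np.unravel_index(misfit.argmin(), misfit.shape)
    print("EDT epicenter estimate:", xs[ix], ys[iy])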

  15. Development of a low-maintenance measurement approach to continuously estimate methane emissions: A case study.

    Science.gov (United States)

    Riddick, S N; Hancock, B R; Robinson, A D; Connors, S; Davies, S; Allen, G; Pitt, J; Harris, N R P

    2018-03-01

    The chemical breakdown of organic matter in landfills represents a significant source of methane gas (CH4). Current estimates suggest that landfills are responsible for between 3% and 19% of global anthropogenic emissions. The net CH4 emissions resulting from biogeochemical processes and their modulation by microbes in landfills are poorly constrained by imprecise knowledge of environmental constraints. The uncertainty in absolute CH4 emissions from landfills is therefore considerable. This study investigates a new method to estimate the temporal variability of CH4 emissions using meteorological and CH4 concentration measurements downwind of a landfill site in Suffolk, UK from July to September 2014, taking advantage of the statistics that such a measurement approach offers versus shorter-term, but more complex and instantaneously accurate, flux snapshots. Methane emissions were calculated from CH4 concentrations measured 700 m from the perimeter of the landfill, with observed concentrations ranging from background to 46.4 ppm. Using an atmospheric dispersion model, we estimate a mean emission flux of 709 μg m-2 s-1 over this period, with a maximum value of 6.21 mg m-2 s-1, reflecting the wide natural variability in biogeochemical and other environmental controls on net site emission. The emissions calculated suggest that meteorological conditions have an influence on the magnitude of CH4 emissions. We also investigate the factors responsible for the large variability observed in the estimated CH4 emissions, and suggest that the largest component arises from uncertainty in the spatial distribution of CH4 emissions within the landfill area. The results determined using the low-maintenance approach discussed in this paper suggest that a network of cheaper, less precise CH4 sensors could be used to measure a continuous CH4 emission time series from a landfill site, something that is not practical using far-field approaches such as tracer release methods.
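
    The inversion step can be illustrated with the simplest possible dispersion model, a ground-level Gaussian plume evaluated at the centerline (the study used a full dispersion model); the dispersion-coefficient fits and all measurements below are invented.

    import numpy as np

    # At the centerline of a ground-level plume, C = Q / (pi * u * sigma_y * sigma_z),
    # so the source rate Q follows directly from the downwind concentration excess.
    x = 700.0                                         # m, sensor distance downwind
    u = 3.5                                           # m/s, mean wind speed
    c_excess = 2.5e-3                                 # g/m^3 CH4 above background

    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)      # rough rural-stability fits (assumed)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)

    Q = c_excess * np.pi * u * sigma_y * sigma_z      # g/s emission rate
    print(f"estimated source rate: {Q:.1f} g/s")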

  16. A METHOD TO ESTIMATE TEMPORAL INTERACTION IN A CONDITIONAL RANDOM FIELD BASED APPROACH FOR CROP RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. M. A. Diaz

    2016-06-01

    Full Text Available This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of temporal interaction parameters is considered as an optimization problem, whose goal is to find the transition matrix that maximizes the CRF performance upon a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates related to joint or conditional class transition probabilities, which rely on training samples.

  17. Intercomparisons of Prognostic, Diagnostic, and Inversion Modeling Approaches for Estimation of Net Ecosystem Exchange over the Pacific Northwest Region

    Science.gov (United States)

    Turner, D. P.; Jacobson, A. R.; Nemani, R. R.

    2013-12-01

    The recent development of large spatially-explicit datasets for multiple variables relevant to monitoring terrestrial carbon flux offers the opportunity to estimate the terrestrial land flux using several alternative, potentially complementary, approaches. Here we developed and compared regional estimates of net ecosystem exchange (NEE) over the Pacific Northwest region of the U.S. using three approaches. In the prognostic modeling approach, the process-based Biome-BGC model was driven by distributed meteorological station data and was informed by Landsat-based coverages of forest stand age and disturbance regime. In the diagnostic modeling approach, the quasi-mechanistic CFLUX model estimated net ecosystem production (NEP) by upscaling eddy covariance flux tower observations. The model was driven by distributed climate data and MODIS FPAR (the fraction of incident PAR that is absorbed by the vegetation canopy). It was informed by coarse resolution (1 km) data about forest stand age. In both the prognostic and diagnostic modeling approaches, emissions estimates for biomass burning, harvested products, and river/stream evasion were added to model-based NEP to get NEE. The inversion model (CarbonTracker) relied on observations of atmospheric CO2 concentration to optimize prior surface carbon flux estimates. The Pacific Northwest is heterogeneous with respect to land cover and forest management, and repeated surveys of forest inventory plots support the presence of a strong regional carbon sink. The diagnostic model suggested a stronger carbon sink than the prognostic model, and a much larger sink than the inversion model. The introduction of Landsat data on disturbance history served to reduce uncertainty with respect to regional NEE in the diagnostic and prognostic modeling approaches. The FPAR data were particularly helpful in capturing the seasonality of the carbon flux using the diagnostic modeling approach. The inversion approach took advantage of a global

  18. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    Science.gov (United States)

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR.
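
    The simplest of the compared label-fusion rules, majority voting, reduces to a per-voxel vote count; the arrays below are random stand-ins for the registered atlas segmentations:

    import numpy as np

    rng = np.random.default_rng(5)
    n_atlases, shape = 9, (64, 64, 64)
    atlas_labels = rng.random((n_atlases, *shape)) > 0.5   # stand-in binary skull masks

    votes = atlas_labels.sum(axis=0)                  # per-voxel vote count across atlases
    fused = votes > n_atlases / 2                     # strict majority wins the voxel
    print("fused skull voxels:", int(fused.sum()))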

  19. A pragmatic approach to estimate alpha factors for common cause failure analysis

    International Nuclear Information System (INIS)

    Hassija, Varun; Senthil Kumar, C.; Velusamy, K.

    2014-01-01

    Highlights: • Estimation of coefficients in the alpha factor model for common cause analysis. • A derivation of plant-specific alpha factors is demonstrated. • We examine sensitivity of the common cause contribution to total system failure. • We compare beta factor and alpha factor models for various redundant configurations. • The use of alpha factors is preferable, especially for large redundant systems. - Abstract: Most modern technological systems are deployed with high redundancy, but they still fail mainly on account of common cause failures (CCF). Various models such as Beta Factor, Multiple Greek Letter, Binomial Failure Rate and Alpha Factor exist for estimation of risk from common cause failures. Amongst these, the alpha factor model is considered most suitable for highly redundant systems as it arrives at common cause failure probabilities from a set of ratios of failures and the total component failure probability Q_T. In the present study, the alpha factor model is applied to the assessment of CCF of safety systems deployed at two nuclear power plants. A method to overcome the difficulties in estimating the coefficients, viz., the alpha factors in the model, the importance of deriving plant-specific alpha factors, and the sensitivity of the common cause contribution to the total system failure probability with respect to the hazard imposed by various CCF events are highlighted. An approach described in NUREG/CR-5500 is extended in this study to provide more explicit guidance on a statistical approach to derive plant-specific coefficients for CCF analysis, especially for highly redundant systems. The procedure is expected to aid regulators in independent safety assessment.
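
    The basic estimation step can be sketched as follows, with invented event counts and total failure probability; the Q_k expression is the standard non-staggered-testing alpha-factor formula (e.g. NUREG/CR-5485), not a result specific to this paper:

    import numpy as np
    from math import comb

    # n[k-1] = number of failure events involving exactly k components in an
    # m-redundant group (invented counts).
    m = 4
    n = np.array([120, 9, 3, 1])                      # events with exactly 1..4 failures
    alpha = n / n.sum()                               # point estimates of alpha_1..alpha_4
    alpha_t = float(np.arange(1, m + 1) @ alpha)      # alpha_t = sum_k k * alpha_k

    Q_T = 1e-3                                        # total failure probability per component
    Q = [k / comb(m - 1, k - 1) * alpha[k - 1] / alpha_t * Q_T for k in range(1, m + 1)]
    for k, q in enumerate(Q, start=1):
        print(f"Q_{k} = {q:.2e}")                     # basic-event CCF probabilities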

  20. A probabilistic approach for estimating the spatial extent of pesticide agricultural use sites and potential co-occurrence with listed species for use in ecological risk assessments.

    Science.gov (United States)

    Budreski, Katherine; Winchell, Michael; Padilla, Lauren; Bang, JiSu; Brain, Richard A

    2016-04-01

    A crop footprint refers to the estimated spatial extent of growing areas for a specific crop, and is commonly used to represent the potential "use site" footprint for a pesticide labeled for use on that crop. A methodology for developing probabilistic crop footprints to estimate the likelihood of pesticide use and the potential co-occurrence of pesticide use and listed species locations was tested at the national scale and compared to alternative methods. The probabilistic aspect of the approach accounts for annual crop rotations and the uncertainty in remotely sensed crop and land cover data sets. The crop footprints used historically are derived exclusively from the National Land Cover Database (NLCD) Cultivated Crops and/or Pasture/Hay classes. This approach broadly aggregates agriculture into 2 classes, which grossly overestimates the spatial extent of individual crops that are labeled for pesticide use. The approach also does not use all the available crop data, represents a single point in time, and does not account for the uncertainty in land cover data set classifications. The probabilistic crop footprint approach described herein incorporates best available information at the time of analysis from the National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL) for 5 y (2008-2012 at the time of analysis), the 2006 NLCD, the 2007 NASS Census of Agriculture, and 5 y of NASS Quick Stats (2008-2012). The approach accounts for misclassification of crop classes in the CDL by incorporating accuracy assessment information by state, year, and crop. The NLCD provides additional information to improve the CDL crop probability through an adjustment based on the NLCD accuracy assessment data using the principles of Bayes' Theorem. Finally, crop probabilities are scaled at the state level by comparing against NASS surveys (Census of Agriculture and Quick Stats) of reported planted acres by crop. In an example application of the new method, the probabilistic
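
    The Bayes adjustment at the core of the method can be illustrated for a single pixel and crop, with invented accuracy rates and prior:

    # P(corn | CDL maps corn) via Bayes' theorem, from classification accuracies.
    prior = 0.20                                      # prior P(corn) for the pixel (assumed)
    p_hit = 0.80                                      # P(CDL maps corn | truly corn) (assumed)
    p_false = 0.04                                    # P(CDL maps corn | not corn) (assumed)

    posterior = p_hit * prior / (p_hit * prior + p_false * (1 - prior))
    print(f"P(corn | CDL says corn) = {posterior:.3f}")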

  1. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    Science.gov (United States)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  2. Locating AED Enabled Medical Drones to Enhance Cardiac Arrest Response Times.

    Science.gov (United States)

    Pulver, Aaron; Wei, Ran; Mann, Clay

    2016-01-01

    Out-of-hospital cardiac arrest (OOHCA) is prevalent in the United States. Each year between 180,000 and 400,000 people die due to cardiac arrest. The automated external defibrillator (AED) has greatly enhanced survival rates for OOHCA. However, one of the important components of successful cardiac arrest treatment is emergency medical services (EMS) response time (i.e., the time from EMS "wheels rolling" until arrival at the OOHCA scene). Unmanned Aerial Vehicles (UAV) have regularly been used for remote sensing and aerial imagery collection, but there are new opportunities to use drones for medical emergencies. The purpose of this study is to develop a geographic approach to the placement of a network of medical drones, equipped with an automated external defibrillator, designed to minimize travel time to victims of out-of-hospital cardiac arrest. Our goal was to have one drone on scene within one minute for at least 90% of demand for AED shock therapy, while minimizing implementation costs. In our study, the current estimated travel times were evaluated in Salt Lake County using geographical information systems (GIS) and compared to the estimated travel times of a network of AED-enabled medical drones. We employed a location model, the Maximum Coverage Location Problem (MCLP), to determine the best configuration of drones to increase service coverage within one minute. We found that, using traditional vehicles, only 4.3% of the demand can be reached (travel time) within one minute utilizing current EMS agency locations, while 96.4% of demand can be reached within five minutes using current EMS vehicles and facility locations. Analyses show that using existing EMS stations to launch drones resulted in 80.1% of cardiac arrest demand being reached within one minute. Allowing new sites to launch drones resulted in 90.3% of demand being reached within one minute. Finally, using existing EMS and new sites resulted in 90.3% of demand being reached while greatly reducing
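
    A greedy heuristic gives the flavor of the MCLP formulation (the study solves the exact problem); demand points, weights, candidate sites, the one-minute flight radius and p are all invented:

    import numpy as np

    rng = np.random.default_rng(9)
    demand = rng.uniform(0, 30, size=(500, 2))        # demand points (km)
    weight = rng.integers(1, 10, size=500)            # historical OOHCA counts
    sites = rng.uniform(0, 30, size=(40, 2))          # candidate launch sites
    radius = 1.6                                      # km reachable in one minute (assumed)

    covered = np.linalg.norm(demand[:, None] - sites[None], axis=2) <= radius  # (500, 40)
    p = 8                                             # number of drones to place
    chosen, is_covered = [], np.zeros(500, bool)
    for _ in range(p):
        # Pick the site covering the most as-yet-uncovered, weighted demand.
        gain = ((covered & ~is_covered[:, None]) * weight[:, None]).sum(axis=0)
        best = int(gain.argmax())
        chosen.append(best)
        is_covered |= covered[:, best]

    print("sites:", chosen, "coverage:", weight[is_covered].sum() / weight.sum())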

  3. A RFID specific participatory design approach to support design and implementation of real-time location systems in the operating room.

    Science.gov (United States)

    Guédon, A C P; Wauben, L S G L; de Korne, D F; Overvelde, M; Dankelman, J; van den Dobbelsteen, J J

    2015-01-01

    Information technology, such as real-time location (RTL) systems using Radio Frequency IDentification (RFID), may contribute to overcoming patient safety issues and high costs in healthcare. The aim of this work is to study whether an RFID-specific Participatory Design (PD) approach supports the design and the implementation of RTL systems in the Operating Room (OR). An RFID-specific PD approach was used to design and implement two RFID-based modules. The Device Module monitors the safety status of OR devices and the Patient Module tracks patients' locations during their hospital stay. The PD principles 'multidisciplinary team', 'participation of users (active involvement)' and 'early adopters' were used to include users from the RFID company, the university and the hospital. The design and implementation process consisted of two 'structured cycles' ('iterations'). The effectiveness of this approach was assessed by the acceptance in terms of level of use, continuity of the project and purchase. The Device Module included eight strategic and twelve tactical actions and the Patient Module included six strategic and twelve tactical actions. Both modules are now used on a daily basis and have been purchased by the hospitals for continued use. The RFID-specific PD approach was effective in guiding and supporting the design and implementation process of RFID technology in the OR. The multidisciplinary teams and their active participation provided insights into the social and organizational context of the hospitals, making it possible to better fit the technology to the hospitals' (future) needs.

  4. Centralization vs. Decentralization: A Location Analysis Approach for Librarians

    Science.gov (United States)

    Raffel, Jeffrey; Shishko, Robert

    1972-01-01

    An application of location theory to the question of centralized versus decentralized library facilities for a university, with relevance for special libraries is presented. The analysis provides models for a single library, for two or more libraries, or for decentralized facilities. (6 references) (Author/NH)