WorldWideScience

Sample records for maximum estimated subsurface

  1. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  2. Parameterization of vertical chlorophyll a in the Arctic Ocean: impact of the subsurface chlorophyll maximum on regional, seasonal, and annual primary production estimates

    Directory of Open Access Journals (Sweden)

    M. Ardyna

    2013-06-01

    Full Text Available Predicting water-column phytoplankton biomass from near-surface measurements is a common approach in biological oceanography, particularly since the advent of satellite remote sensing of ocean color (OC). In the Arctic Ocean, deep subsurface chlorophyll maxima (SCMs) that significantly contribute to primary production (PP) are often observed. These are neither detected by ocean color sensors nor accounted for in the primary production models applied to the Arctic Ocean. Here, we assemble a large database of pan-Arctic observations (i.e., 5206 stations) and develop an empirical model to estimate vertical chlorophyll a (Chl a) according to (1) the shelf–offshore gradient delimited by the 50 m isobath, (2) seasonal variability along pre-bloom, post-bloom, and winter periods, and (3) regional differences across ten sub-Arctic and Arctic seas. Our detailed analysis of the dataset shows that, for the pre-bloom and winter periods, as well as for high surface Chl a concentrations (Chl asurf; 0.7–30 mg m−3) throughout the open water period, the Chl a maximum is mainly located at or near the surface. Deep SCMs occur chiefly during the post-bloom period when Chl asurf is low (0–0.5 mg m−3). By applying our empirical model to annual Chl asurf time series, instead of the conventional method assuming vertically homogeneous Chl a, we produce novel pan-Arctic PP estimates and associated uncertainties. Our results show that vertical variations in Chl a have a limited impact on annual depth-integrated PP. Small overestimates found when SCMs are shallow (i.e., pre-bloom, post-bloom with Chl asurf > 0.7 mg m−3, and the winter period) somehow compensate for the underestimates found when SCMs are deep (i.e., post-bloom with low Chl asurf). SCMs are, however, important seasonal features with a substantial impact on depth-integrated PP estimates, especially when surface nitrate is exhausted in the Arctic Ocean and where highly stratified and oligotrophic conditions prevail.
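
    The profile parameterization itself is region- and season-specific in the paper; purely as an illustration of the general idea (a subsurface maximum superimposed on a background value, integrated over depth to feed a column PP model), the sketch below uses assumed, hypothetical parameter values rather than the fitted ones.

```python
import numpy as np

def chl_profile(z, chl_bg, chl_max, z_max, sigma):
    """Illustrative vertical chlorophyll-a profile (mg m^-3): a background value
    plus a Gaussian subsurface chlorophyll maximum (SCM) centred at z_max (m).
    Functional form and parameter values are assumptions, not the paper's fit."""
    return chl_bg + chl_max * np.exp(-((z - z_max) / sigma) ** 2)

z = np.linspace(0.0, 100.0, 401)                     # depth grid, m
chl = chl_profile(z, chl_bg=0.1, chl_max=1.5, z_max=40.0, sigma=10.0)

# Depth-integrated Chl a (mg m^-2), the quantity that feeds column PP models
integrated = np.trapz(chl, z)
print(f"Depth-integrated Chl a: {integrated:.1f} mg m^-2")
```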

  3. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  4. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
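
    The error-predicting filter referred to above is conventionally obtained from the autocorrelation sequence by solving the Toeplitz normal equations with the Levinson (Levinson-Durbin) recursion; the reflection coefficients produced at each step are the quantities whose magnitude must stay below 1 for stability. A generic sketch of that recursion (not the authors' implementation):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter by the
    Levinson recursion. r is the autocorrelation sequence (r[0] at zero lag).
    Returns filter coefficients a (a[0] = 1), reflection coefficients k, and the
    final prediction-error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])   # inner product with reversed lags
        k[m - 1] = -acc / err                        # reflection coefficient (|k| < 1)
        a_prev = a[:m + 1].copy()
        for i in range(1, m + 1):
            a[i] = a_prev[i] + k[m - 1] * a_prev[m - i]
        err *= 1.0 - k[m - 1] ** 2                   # updated prediction-error power
    return a, k, err

# Example: a short autocorrelation sequence, order-2 prediction-error filter
r = np.array([1.0, 0.5, 0.1])
print(levinson_durbin(r, order=2))
```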

  5. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  6. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...

  7. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  8. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]exp[-(1/lambda + 1/theta)t] for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
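
    With exponential models, the MLEs of lambda and theta are simply the sample means of the observed failure and repair times (reading lambda and theta as the means of the exponential densities, which is consistent with the A(t) expression above), and the MLE of A(t) follows by plugging them in. A minimal sketch with made-up data:

```python
import numpy as np

def availability(t, lam, theta):
    """Instantaneous availability for exponential failure/repair models,
    A(t) = lam/(lam+theta) + [theta/(lam+theta)]*exp(-(1/lam + 1/theta)*t),
    with lam, theta the mean time-to-failure and mean time-to-repair."""
    s = lam + theta
    return lam / s + (theta / s) * np.exp(-(1.0 / lam + 1.0 / theta) * t)

# Hypothetical observations from n failure-repair cycles (hours)
x = np.array([310.0, 270.0, 420.0, 150.0, 390.0])   # times to failure
y = np.array([12.0, 8.0, 20.0, 15.0, 10.0])         # times to repair

lam_hat, theta_hat = x.mean(), y.mean()              # MLEs for exponential means
print("A(24 h) =", availability(24.0, lam_hat, theta_hat))
print("Steady-state A =", lam_hat / (lam_hat + theta_hat))
```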

  9. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.
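
    A common way of turning a modelled or observed potential temperature profile into a mixing-depth estimate is to take the lowest level at which potential temperature exceeds its near-surface value by a small threshold; this is only one convention and not necessarily the exact SRNL criterion. A sketch:

```python
import numpy as np

def mixing_depth(z, theta, threshold=0.5):
    """Estimate mixing depth (m) as the lowest height where potential temperature
    exceeds the near-surface value by `threshold` K. z and theta are profiles from
    the surface upward. Returns the top height if no level qualifies."""
    excess = theta - theta[0]
    above = np.where(excess > threshold)[0]
    return z[above[0]] if above.size else z[-1]

# Hypothetical afternoon profile: well mixed to ~1500 m, stable above
z = np.arange(0, 3000, 100.0)
theta = 300.0 + np.where(z <= 1500, 0.0, 0.004 * (z - 1500))
print(mixing_depth(z, theta))   # 1700.0 with this threshold and grid
```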

  10. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been verified in experiments, and it provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
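
    The core of the time-arrival-difference method is the lag of the cross-correlation peak between the two sensor signals; the maximum likelihood window discussed above is a frequency-domain weighting applied before correlating. A plain, unwindowed sketch of the delay estimate and the resulting leak position, on synthetic signals:

```python
import numpy as np

def time_delay(a, b, fs):
    """Arrival-time delay of signal a relative to signal b (s), positive if a
    arrives later, from the peak of the full cross-correlation."""
    cc = np.correlate(a, b, mode="full")
    lag = np.argmax(cc) - (len(b) - 1)
    return lag / fs

def leak_position(delay21, d, c):
    """Leak distance (m) from sensor 1 for sensors d metres apart, wave speed
    c (m/s), and delay21 = arrival at sensor 2 minus arrival at sensor 1 (s)."""
    return 0.5 * (d - c * delay21)

# Hypothetical data: leak noise reaches sensor 2 0.05 s after sensor 1
fs, c, d = 10_000.0, 1200.0, 300.0
rng = np.random.default_rng(0)
noise = rng.standard_normal(int(fs))
x1 = noise + 0.1 * rng.standard_normal(noise.size)
x2 = np.roll(noise, 500) + 0.1 * rng.standard_normal(noise.size)  # 500 samples = 0.05 s
tau21 = time_delay(x2, x1, fs)
print(leak_position(tau21, d, c))   # ~120 m from sensor 1
```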

  11. Multicollinearity and maximum entropy leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that the MMEL estimator performs significantly better than both the OLS and MEL estimators.

  12. Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements

    Science.gov (United States)

    Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua

    2017-10-01

    A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind-driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio, i.e., the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the particulate backscatter derived from CALIOP data agrees reasonably well with chlorophyll-a concentrations from MODIS data. This indicates the potential of space-borne lidar for estimating global primary productivity and particulate carbon stock.

  13. Dual states estimation of a subsurface flow-transport coupled model using ensemble Kalman filtering

    KAUST Repository

    El Gharamti, Mohamad; Hoteit, Ibrahim; Valstar, Johan R.

    2013-01-01

    Modeling the spread of subsurface contaminants requires coupling a groundwater flow model with a contaminant transport model. Such coupling may provide accurate estimates of future subsurface hydrologic states if essential flow and contaminant data

  14. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  15. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
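
    Maximum likelihood fitting of a two-component normal mixture is typically carried out with the EM algorithm. A compact univariate sketch on synthetic data (not the paper's price series):

```python
import numpy as np

def em_two_normal(x, iters=200):
    """EM for a two-component univariate normal mixture; returns weights, means, sds."""
    # crude initialisation from the data quantiles
    w = np.array([0.5, 0.5])
    mu = np.quantile(x, [0.25, 0.75])
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: responsibilities of each component for each observation
        dens = np.array([w[k] * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                         / (sd[k] * np.sqrt(2 * np.pi)) for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted updates of the parameters
        nk = resp.sum(axis=1)
        w = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sd

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])
print(em_two_normal(x))
```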

  16. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  17. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  18. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give...... different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...... that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy....
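
    For a module behaving as an EMF in series with an internal resistance, the open-circuit voltage and short-circuit current give the matched-load estimate Pmax = Voc*Isc/4; the two switch modes yield different Voc and Isc readings and hence different estimates. The sketch below illustrates the estimate and a naive averaging of the two modes (an assumption for illustration, not the paper's calibration method):

```python
def max_power(v_oc, i_sc):
    """Matched-load maximum power estimate for a linear source: P = Voc * Isc / 4."""
    return 0.25 * v_oc * i_sc

# Hypothetical readings from the two switch modes (open->short vs short->open)
p_open_to_short = max_power(v_oc=4.1, i_sc=1.00)
p_short_to_open = max_power(v_oc=3.7, i_sc=0.98)
print(p_open_to_short, p_short_to_open)               # differ by roughly 10%
print(0.5 * (p_open_to_short + p_short_to_open))      # naive average of the two modes
```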

  19. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physic properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis are compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates are 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and estimating low velocity levels was confirmed...

  20. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  1. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  2. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.

  3. Extracting volatility signal using maximum a posteriori estimation

    Science.gov (United States)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.

  4. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  5. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is solved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically assures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test

  6. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  7. Estimating the maximum potential revenue for grid connected electricity storage

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
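
    The arbitrage-only upper bound can be written as a small linear program: choose hourly charge/discharge quantities to maximize revenue subject to power limits and the state-of-charge recursion. A simplified, loss-free sketch with made-up prices and device parameters (and without the regulation market):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices ($/MWh) and a 1 MW / 4 MWh loss-free device
price = np.array([20, 18, 15, 14, 16, 25, 40, 55, 60, 45, 30, 22], dtype=float)
T, P, E, soc0 = price.size, 1.0, 4.0, 2.0

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}] (MWh per hour)
c_obj = np.concatenate([price, -price])          # minimize cost of charging minus revenue
L = np.tril(np.ones((T, T)))                     # cumulative-sum operator
A_ub = np.block([[L, -L], [-L, L]])              # keep soc0 + cum(charge - discharge) in [0, E]
b_ub = np.concatenate([np.full(T, E - soc0), np.full(T, soc0)])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P)] * 2 * T, method="highs")

print("Maximum arbitrage revenue ($):", -res.fun)
```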

  8. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.

  9. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

    Full Text Available A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimation problem. However, the UKF usually performs well under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.

  10. Relative azimuth inversion by way of damped maximum correlation estimates

    Science.gov (United States)

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
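
    The essence of the inversion is to rotate the test sensor's horizontal components through an angle and pick the angle that maximizes the correlation with the reference sensor; the published method does this with a non-linear parameter estimation routine and overlapping windows, whereas the sketch below (an illustration, not the released Java package) simply scans the angle:

```python
import numpy as np

def relative_azimuth(ref_n, test_n, test_e, step_deg=0.1):
    """Rotation angle (degrees) that maximizes the correlation between the rotated
    test horizontals and the reference north component; also returns that correlation."""
    best_theta, best_corr = 0.0, -np.inf
    for theta in np.arange(0.0, 360.0, step_deg):
        rad = np.deg2rad(theta)
        rotated_n = np.cos(rad) * test_n + np.sin(rad) * test_e   # rotate horizontals
        corr = np.corrcoef(ref_n, rotated_n)[0, 1]
        if corr > best_corr:
            best_theta, best_corr = theta, corr
    return best_theta, best_corr

# Hypothetical test: the second sensor is mis-oriented by 12 degrees
rng = np.random.default_rng(2)
signal_n, signal_e = rng.standard_normal(5000), rng.standard_normal(5000)
mis = np.deg2rad(12.0)
test_n = np.cos(mis) * signal_n - np.sin(mis) * signal_e
test_e = np.sin(mis) * signal_n + np.cos(mis) * signal_e
print(relative_azimuth(signal_n, test_n, test_e))   # ~(12.0, ~1.0)
```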

  11. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R packages ltm is also discussed.

  12. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find......This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions...... confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...

  13. Using Elementary Mechanics to Estimate the Maximum Range of ICBMs

    Science.gov (United States)

    Amato, Joseph

    2018-04-01

    North Korea's development of nuclear weapons and, more recently, intercontinental ballistic missiles (ICBMs) has added a grave threat to world order. The threat presented by these weapons depends critically on missile range, i.e., the ability to reach North America or Europe while carrying a nuclear warhead. Using the limited information available from near-vertical test flights, how do arms control experts estimate the maximum range of an ICBM? The purpose of this paper is to show, using mathematics and concepts appropriate to a first-year calculus-based mechanics class, how a missile's range can be estimated from the (observable) altitude attained during its test flights. This topic—while grim—affords an ideal opportunity to show students how the application of basic physical principles can inform and influence public policy. For students who are already familiar with Kepler's laws, it should be possible to present in a single class period.
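
    One elementary route from a near-vertical test flight to a range estimate is energy conservation: the apogee fixes the burnout speed, and the standard minimum-energy ballistic trajectory result then gives the maximum surface-to-surface range for that speed (ignoring the burn phase, drag, and Earth's rotation, so only a rough bound). A sketch with an assumed apogee:

```python
import math

G_M = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6       # Earth radius, m

def burnout_speed_from_apogee(h):
    """Speed near the surface implied by a vertical flight to apogee h (m),
    from energy conservation with a 1/r potential (burn phase and drag ignored)."""
    return math.sqrt(2.0 * G_M * h / (R_E * (R_E + h)))

def max_range_km(v):
    """Maximum surface-to-surface range (km) for burnout speed v at the surface,
    using the minimum-energy ballistic trajectory result
    sin(phi/2) = nu / (2 - nu), with nu = v^2 * R_E / GM."""
    nu = v ** 2 * R_E / G_M
    if nu >= 1.0:
        return math.inf          # at or above circular orbit speed
    phi = 2.0 * math.asin(nu / (2.0 - nu))
    return R_E * phi / 1000.0

# Hypothetical near-vertical test flight reaching ~3700 km apogee
v = burnout_speed_from_apogee(3.7e6)
print(f"burnout speed ~{v/1000:.1f} km/s, estimated max range ~{max_range_km(v):.0f} km")
```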

  14. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (ie, when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  15. The estimation of probable maximum precipitation: the case of Catalonia.

    Science.gov (United States)

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km(2) spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
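
    The enveloping frequency factor approach is in the spirit of Hershfield's statistical method, in which the PMP at a station is the mean of the annual maximum daily series plus an enveloping frequency factor times its standard deviation. The sketch below uses that generic form with an assumed constant factor rather than the curve actually fitted for Catalonia:

```python
import numpy as np

def pmp_hershfield(annual_max_series, k_m):
    """Statistical (Hershfield-type) PMP estimate: mean + k_m * standard deviation
    of the annual maximum daily rainfall series. A constant enveloping frequency
    factor k_m is assumed here; the paper fits a curve to regional data instead."""
    x = np.asarray(annual_max_series, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)

# Hypothetical annual maximum daily rainfall record (mm) at one station
series = [62, 75, 88, 54, 103, 71, 95, 120, 66, 80, 92, 58, 110, 73, 85]
print(f"1-day PMP estimate: {pmp_hershfield(series, k_m=15):.0f} mm")
```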

  16. A Mathematical Model for Estimation of Kelp Bed Productivity: Age Dependence and Contributions of Subsurface Kelp

    Science.gov (United States)

    Trumbo, S. K.; Palacios, S. L.; Zimmerman, R. C.; Kudela, R. M.

    2012-12-01

    Macrocystis pyrifera, giant kelp, is a major primary producer of the California coastal ocean that provides habitat for marine species through the formation of massive kelp beds. The estimation of primary productivity of these kelp beds is essential for a complete understanding of their health and of the biogeochemistry of the region. Current methods involve either the application of a proportionality constant to remotely sensed biomass or in situ frond density measurements. The purpose of this research was to improve upon conventional primary productivity estimates by developing a model which takes into account the spectral differences among juvenile, mature, and senescent tissues as well as the photosynthetic contributions of subsurface kelp. A modified version of a seagrass productivity model (Zimmerman 2006) was used to quantify carbon fixation. Inputs included estimates of the underwater light field as computed by solving the radiative transfer equation (with the Hydrolight(TM) software package) and biological parameters obtained from the literature. It was found that mature kelp is the most efficient primary producer, especially in light-limited environments, due to increased light absorptance. It was also found that incoming light attenuates below useful levels for photosynthesis more rapidly than has been previously accounted for in productivity estimates, with productivity dropping below half maximum at approximately 0.75 m. As a case study for comparison with the biomass method, the model was applied to Isla Vista kelp bed in Santa Barbara, using area estimates from the MODIS-ASTER Simulator (MASTER). A graphical user-interface was developed for users to provide inputs to run the kelp productivity model under varying conditions. Accurately quantifying kelp productivity is essential for understanding its interaction with offshore ecosystems as well as its contribution to the coastal carbon cycle.

  17. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment developed for the Electric Power Research Institute by North American Weather Consultants in 1993 in order to harmonize historic storms for which only 12-hour dew point data was available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  18. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  19. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
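
    For an acute exposure the dicentric yield per cell is commonly modelled as a linear-quadratic function of dose, y(d) = alpha + beta*d + gamma*d^2, with Poisson-distributed counts, and the MLEs maximize the Poisson log-likelihood. A small sketch using direct likelihood maximization on synthetic data (the paper uses full Poisson regression machinery with diagnostics):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, dose, cells, dicentrics):
    """Negative Poisson log-likelihood for yield per cell y(d) = a + b*d + c*d^2."""
    a, b, c = params
    mu = cells * (a + b * dose + c * dose ** 2)      # expected dicentrics per dose group
    if np.any(mu <= 0):
        return np.inf
    return np.sum(mu - dicentrics * np.log(mu))      # constant terms dropped

# Hypothetical acute-exposure calibration data
dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])          # Gy
cells = np.array([5000, 5000, 2000, 1000, 800, 500])      # cells scored
dicentrics = np.array([5, 55, 70, 130, 230, 260])         # dicentrics observed

fit = minimize(neg_log_lik, x0=[0.001, 0.01, 0.05],
               args=(dose, cells, dicentrics), method="Nelder-Mead")
alpha, beta, gamma = fit.x
print(f"alpha={alpha:.4f}, beta={beta:.4f}, gamma={gamma:.4f} (per cell, per Gy, per Gy^2)")
```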

  20. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  1. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure

  2. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  3. Estimation of subsurface-fracture orientation with the three-component crack-wave measurement; Kiretsuha sanjiku keisoku ni yoru chika kiretsumen no hoko suitei

    Energy Technology Data Exchange (ETDEWEB)

    Nagano, K; Sato, K [Muroran Institute of Technology, Hokkaido (Japan); Niitsuma, H [Tohoku University, Sendai (Japan)

    1996-05-01

    This paper reports experiments carried out to estimate subsurface-fracture orientation from three-component crack-wave measurements. The experiments used existing subsurface cracks and two wells in the experimental field. An air-gun sound source was installed directly above a subsurface crack intersection in one of the wells, and a three-component elastic wave detector was fixed near a subsurface crack intersection in the other well. Crack waves from the source were measured in a frequency band from 150 to 300 Hz. A coherence matrix was constructed from the three components of the crack-wave motion, the coherence vector corresponding to the maximum coherence value of the matrix was determined, and this vector was used to approximate the long axis of the particle-motion ellipse of the crack waves (the direction perpendicular to the crack face). The crack-face normal estimated by this method agreed closely with the direction of minimum crustal compressive stress and with the normal direction of crack faces observed in core samples collected from the wells at nearly the same position as the subsurface crack. 5 refs., 4 figs.
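
    The coherence-matrix step amounts to finding the dominant polarization direction of the three-component particle motion in the analysis band. A common simplification (assumed here, not the authors' exact frequency-domain coherence formulation) is to take the principal eigenvector of the covariance matrix of the band-passed components:

```python
import numpy as np

def dominant_particle_motion(x, y, z):
    """Principal polarization direction of three-component particle motion:
    the unit eigenvector of the 3x3 covariance matrix with the largest eigenvalue
    (defined up to sign)."""
    data = np.vstack([x, y, z])                 # shape (3, nsamples)
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)            # eigh returns ascending eigenvalues
    return vecs[:, -1]                          # long axis of the motion ellipse

# Hypothetical band-passed crack-wave records; the dominant motion direction is
# taken as an approximation to the normal of the crack face
rng = np.random.default_rng(3)
s = rng.standard_normal(2000)
x_rec = 0.8 * s + 0.05 * rng.standard_normal(2000)
y_rec = 0.5 * s + 0.05 * rng.standard_normal(2000)
z_rec = 0.3 * s + 0.05 * rng.standard_normal(2000)
print(dominant_particle_motion(x_rec, y_rec, z_rec))   # ~ +/-[0.81, 0.51, 0.30]
```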

  4. ESTIMATION OF NEAR SUBSURFACE COAL FIRE GAS EMISSIONS BASED ON GEOPHYSICAL INVESTIGATIONS

    Science.gov (United States)

    Chen-Brauchler, D.; Meyer, U.; Schlömer, S.; Kus, J.; Gundelach, V.; Wuttke, M.; Fischer, C.; Rueter, H.

    2009-12-01

    Spontaneous and industrially caused subsurface coal fires are worldwide disasters that destroy coal resources, cause air pollution and emit a large amount of greenhouse gases. Especially in developing countries, such as China, India and Malaysia, this problem has intensified over the last 15 years. In China alone, 10 to 20 million tons of coal are believed to be lost in uncontrolled coal fires. The cooperation of developing countries and industrialized countries is needed to enforce internationally concerted approaches and political attention towards the problem. The Clean Development Mechanism (CDM) under the framework of the Kyoto Protocol may provide an international stage for the financial investment needed to fight the disastrous situation. A Sino-German research project for coal fire exploration, monitoring and extinction applied several geophysical approaches in order to estimate the annual baseline especially of CO2 emissions from near subsurface coal fires. As a result of this project, we present verifiable methodologies that may be used in the CDM framework to estimate the amount of CO2 emissions from near subsurface coal fires. We developed three approaches to the estimation, based on (1) thermal energy release, (2) geological and geometrical determinations as well as (3) direct gas measurement. The studies involve the investigation of the physical property changes of the coal seam and bedrock during different burning stages of an underground coal fire. Various geophysical monitoring methods were applied from the near surface to determine the coal volume, fire propagation, temperature anomalies, etc.

  5. The soil classification and the subsurface carbon stock estimation with a ground-penetrating radar

    International Nuclear Information System (INIS)

    Onishi, K.; Rokugawa, S.; Kato, Y.

    2002-01-01

    One of the serious problems of the Kyoto Protocol is that there is no effective method to estimate the carbon stock of the subsurface. To address this problem, we propose the application of ground-penetrating radar (GPR) to subsurface soil surveys. It is shown that GPR can detect soil horizons, stones and roots. The fluctuations of the soil horizons in the forest are clearly indicated in the reflection pattern of the microwaves. Considering that the physical, chemical, and biological characteristics of each soil layer are nearly unique, GPR results can be used to estimate the carbon stock in soil when combined with a vertical soil sample survey at one site. As a trial, we estimate the carbon content fixed in soil layers from the soil samples and GPR survey data, and we compare this result with the carbon stock obtained for the flat-horizon case. The advantages of using GPR for this purpose are not only reduced uncertainty and cost but also the environmentally friendly manner of the survey. Finally, we summarize the suitability of antennas with different dominant frequencies for the shallow subsurface zone. (author)

  6. Revision of regional maximum flood (RMF) estimation in Namibia ...

    African Journals Online (AJOL)

    Extreme flood hydrology in Namibia for the past 30 years has largely been based on the South African Department of Water Affairs Technical Report 137 (TR 137) of 1988. This report proposes an empirically established upper limit of flood peaks for regions called the regional maximum flood (RMF), which could be ...

  7. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January 2018 ISSN 0361-0926 R&D Projects: GA ČR GA13-34856S Institutional support: RVO:67985807 Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  8. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    Science.gov (United States)

    1982-04-01

    (Abstract illegible in the source record; the report examines the bias in maximum likelihood estimates of item parameters.)

  9. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in ensemble Kalman filter, however the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates

  10. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.

  11. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods...... of genotyping errors. A similar advantage of the Bayesian method was not observed for missing data. We also re-analyse a recently published set of data from the eggplant and show that the use of the MCMC-based method leads to smaller estimates of genetic distances....

  12. Coupling diffusion and maximum entropy models to estimate thermal inertia

    Science.gov (United States)

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  13. Estimation of geochemical parameters for assessing subsurface transport at the Savannah River Plant: Environmental information document

    International Nuclear Information System (INIS)

    Looney, B.B.; Grant, M.W.; King, C.M.

    1987-03-01

    Geochemical parameter estimates to be used in assessing the subsurface transport of chemicals from Savannah River Plant (SRP) waste sites are presented. Specifically, reference values for soil-solution distribution coefficients, solubility, leach rates, and retardation coefficients are estimated for 31 inorganic chemicals (assuming speciation is governed by reasonable assumptions about controlling variables such as Eh and pH) and 36 organic compounds. Additionally, facilitated transport (the association of chemicals with inorganic and organic ligands or colloids resulting in relatively high mobility) was estimated using field data to derive a fraction of the disposal mass which was assumed to be mobile. Hydrologic parameters such as dispersion coefficient, average moisture content in vadose zone, bulk density, and effective porosity are also presented. The estimates are based on site-specific studies when available, combined with technical literature

  14. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
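
    For context, the unmodified maximum likelihood estimators that the paper's modifications start from have simple closed forms for the power function distribution f(x) = a x^(a-1) / b^a on (0, b): the scale estimate is the sample maximum and the shape estimate follows from the log-likelihood. A short sketch with simulated data (parameter values and sample size assumed for illustration):

```python
import numpy as np

# Standard (unmodified) MLEs for the two-parameter power function distribution;
# these are the baseline estimators, not the modified estimators of the paper.
rng = np.random.default_rng(42)
a_true, b_true, n = 3.0, 5.0, 200
x = b_true * rng.random(n) ** (1.0 / a_true)   # inverse-CDF sampling: F(x) = (x/b)**a

b_hat = x.max()                                # MLE of the scale parameter b
a_hat = n / np.sum(np.log(b_hat / x))          # MLE of the shape parameter a
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```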

  15. Unconventional energy resources in a crowded subsurface: Reducing uncertainty and developing a separation zone concept for resource estimation and deep 3D subsurface planning using legacy mining data.

    Science.gov (United States)

    Monaghan, Alison A

    2017-12-01

    Over significant areas of the UK and western Europe, anthropogenic alteration of the subsurface by mining of coal has occurred beneath highly populated areas which are now considering a multiplicity of 'low carbon' unconventional energy resources including shale gas and oil, coal bed methane, geothermal energy and energy storage. To enable decision making on the 3D planning, licensing and extraction of these resources requires reduced uncertainty around complex geology and hydrogeological and geomechanical processes. An exemplar from the Carboniferous of central Scotland, UK, illustrates how, in areas lacking hydrocarbon well production data and 3D seismic surveys, legacy coal mine plans and associated boreholes provide valuable data that can be used to reduce the uncertainty around geometry and faulting of subsurface energy resources. However, legacy coal mines also limit unconventional resource volumes since mines and associated shafts alter the stress and hydrogeochemical state of the subsurface, commonly forming pathways to the surface. To reduce the risk of subsurface connections between energy resources, an example of an adapted methodology is described for shale gas/oil resource estimation to include a vertical separation or 'stand-off' zone between the deepest mine workings, to ensure the hydraulic fracturing required for shale resource production would not intersect legacy coal mines. Whilst the size of such separation zones requires further work, developing the concept of 3D spatial separation and planning is key to utilising the crowded subsurface energy system, whilst mitigating against resource sterilisation and environmental impacts, and could play a role in positively informing public and policy debate. Copyright © 2017 British Geological Survey, a component institute of NERC. Published by Elsevier B.V. All rights reserved.

  16. Dual states estimation of a subsurface flow-transport coupled model using ensemble Kalman filtering

    KAUST Repository

    El Gharamti, Mohamad

    2013-10-01

    Modeling the spread of subsurface contaminants requires coupling a groundwater flow model with a contaminant transport model. Such coupling may provide accurate estimates of future subsurface hydrologic states if essential flow and contaminant data are assimilated in the model. Assuming perfect flow, an ensemble Kalman filter (EnKF) can be used for direct data assimilation into the transport model. This is, however, a crude assumption as flow models can be subject to many sources of uncertainty. If the flow is not accurately simulated, contaminant predictions will likely be inaccurate even after successive Kalman updates of the contaminant model with the data. The problem is better handled when both flow and contaminant states are concurrently estimated using the traditional joint state augmentation approach. In this paper, we introduce a dual estimation strategy for data assimilation into a one-way coupled system by treating the flow and the contaminant models separately while intertwining a pair of distinct EnKFs, one for each model. The presented strategy only deals with the estimation of state variables but it can also be used for state and parameter estimation problems. This EnKF-based dual state-state estimation procedure presents a number of novel features: (i) it allows for simultaneous estimation of both flow and contaminant states in parallel; (ii) it provides a time-consistent sequential updating scheme between the two models (first flow, then transport); (iii) it simplifies the implementation of the filtering system; and (iv) it yields more stable and accurate solutions than does the standard joint approach. We conducted synthetic numerical experiments based on various time stepping and observation strategies to evaluate the dual EnKF approach and compare its performance with the joint state augmentation approach. Experimental results show that on average, the dual strategy could reduce the estimation error of the coupled states by 15% compared with the joint state augmentation approach.
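
    A compact sketch of the dual (two-filter) idea described above is given below: the flow ensemble is updated first with head observations, and the transport ensemble is then updated using the already-updated flow. The toy dynamics, observation values and noise levels are all illustrative, not the paper's flow-transport models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_cells = 30, 10

def enkf_update(ens, obs, obs_idx, obs_err):
    """Stochastic EnKF analysis for observations of selected state components."""
    H = np.zeros((len(obs_idx), ens.shape[1]))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0
    A = ens - ens.mean(axis=0)                       # ensemble anomalies (n_ens, n_state)
    P = A.T @ A / (n_ens - 1)                        # sample covariance
    R = obs_err ** 2 * np.eye(len(obs_idx))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    perturbed = obs + obs_err * rng.standard_normal((n_ens, len(obs_idx)))
    return ens + (perturbed - ens @ H.T) @ K.T

# initial ensembles of hydraulic head and concentration (illustrative)
head = 10.0 + rng.standard_normal((n_ens, n_cells))
conc = np.abs(rng.standard_normal((n_ens, n_cells)))

for t in range(5):
    # forecast step: toy dynamics (relaxation for head, advection-like shift for conc)
    head = 0.9 * head + 1.0 + 0.1 * rng.standard_normal(head.shape)
    conc = np.roll(conc, 1, axis=1) * (head / head.mean())          # one-way coupling

    # dual update: flow filter first ...
    head = enkf_update(head, obs=np.array([10.5, 9.8]), obs_idx=[2, 7], obs_err=0.1)
    # ... then the transport filter, driven by the already-updated flow
    conc = enkf_update(conc, obs=np.array([0.6]), obs_idx=[5], obs_err=0.05)

print("posterior mean head:", head.mean(axis=0).round(2))
print("posterior mean conc:", conc.mean(axis=0).round(2))
```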

  17. Existence and uniqueness of the maximum likelihood estimator for models with a Kronecker product covariance structure

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.

    2016-01-01

    This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator

  18. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  19. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: it is consistent as the sample size increases to infinity and is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
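
    A minimal sketch of maximum likelihood fitting of a two-component (Gaussian) mixture via the EM algorithm is shown below; the data are simulated stand-ins rather than the rubber-price and exchange-rate series analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# initial guesses for mixing weight, means and standard deviations
w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

for _ in range(200):
    # E-step: posterior responsibility of component 1 for every observation
    p1 = w * norm_pdf(x, mu[0], sd[0])
    p2 = (1.0 - w) * norm_pdf(x, mu[1], sd[1])
    r = p1 / (p1 + p2)

    # M-step: weighted maximum likelihood updates of all parameters
    w = r.mean()
    mu = np.array([np.sum(r * x) / r.sum(), np.sum((1 - r) * x) / (1 - r).sum()])
    sd = np.sqrt(np.array([np.sum(r * (x - mu[0]) ** 2) / r.sum(),
                           np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum()]))

print(f"weight={w:.3f}, means={mu.round(2)}, sds={sd.round(2)}")
```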

  20. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and framed-slotted ALOHA based protocol. Keywords RFID tag cardinality estimation maximum likelihood detection error...

  1. Estimate of subsurface formation temperature in the Tarim basin, northwest China

    Science.gov (United States)

    Liu, Shaowen; Lei, Xiao; Feng, Changge; Hao, Chunyan

    2015-04-01

    Subsurface formation temperature in the Tarim basin, the largest sedimentary basin in China, is significant for its hydrocarbon generation and preservation and for geothermal energy potential assessment, but until now it has not been well understood, owing to poor data coverage and a lack of highly accurate temperature data. Here, we combined recently acquired steady-state temperature logging data, drill stem test temperature data and measured rock thermal properties to investigate the geothermal regime and estimate the formation temperature at specific depths in the range 1000~5000 m in this basin. Results show that the heat flow of the Tarim basin ranges between 26.2 and 66.1 mW/m2, with a mean of 42.5±7.6 mW/m2; the geothermal gradient at a depth of 3000 m varies from 14.9 to 30.2 °C/km, with a mean of 20.7±2.9 °C/km. The formation temperature at a depth of 1000 m is estimated to be between 29 °C and 41 °C, with a mean of 35 °C; at 2000 m it ranges from 46 to 71 °C, with an average of 59 °C; at 3000 m it ranges from 63 to 100 °C, with a mean of 82 °C; at 4000 m it varies from 80 to 130 °C, with a mean of 105 °C; and at 5000 m it ranges from 97 to 160 °C. In addition, the general pattern of the subsurface formation temperatures at different depths is basically similar and is characterized by high temperatures in the uplift areas and low temperatures in the sags. Basement structure and lateral variations in thermal properties account for this pattern of the geo-temperature field in the Tarim basin.
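
    As a rough consistency check of the reported means, a constant-gradient extrapolation T(z) = T0 + G·z with the quoted mean gradient of 20.7 °C/km and an assumed mean surface temperature of about 20 °C (the surface value is an assumption, not given in the abstract) reproduces the reported depth means to within a few degrees:

```python
# Back-of-the-envelope check of the reported mean formation temperatures,
# assuming a constant mean geothermal gradient and a hypothetical surface temperature.
T_surface = 20.0      # degC, assumed mean annual surface temperature
gradient = 20.7       # degC per km, mean gradient reported for 3000 m

for depth_km in (1, 2, 3, 4, 5):
    print(f"{depth_km * 1000:>5d} m : {T_surface + gradient * depth_km:6.1f} degC")
```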

  2. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Science.gov (United States)

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  3. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    Science.gov (United States)

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  4. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Full Text Available Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrangian multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

  5. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

    KAUST Repository

    Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim

    2016-01-01

    Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface ground-water models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.

  6. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2016-08-12

    Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface ground-water models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.

  7. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  8. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  9. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current, and aging dependencies of maximum available energy. • Study on the dependencies of the relationship between SOE and SOC on various factors. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of a moving-window energy integral. • The robustness and feasibility of the proposed approaches are systematically evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn2O4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
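
    A much-simplified sketch of the energy bookkeeping behind these quantities is given below: energy throughput is obtained by integrating voltage times current over time, and SOE is the ratio of remaining to maximum available energy. The synthetic discharge profile and all numbers are assumptions; the paper's moving-window estimator and SOE-SOC relationship are more elaborate.

```python
import numpy as np

dt = 1.0                                        # s, sampling interval
t = np.arange(0, 3600, dt)
current = np.full_like(t, 2.0)                  # A, constant discharge (assumed)
voltage = 4.1 - 0.6 * (t / t[-1])               # V, simple linear sag (assumed)

energy_step = voltage * current * dt            # J delivered per sample
e_max = energy_step.sum()                       # "maximum available energy" of this run
e_used = np.cumsum(energy_step)

soe = 1.0 - e_used / e_max                      # remaining / maximum available energy
print(f"max available energy: {e_max / 3600:.1f} Wh")
print(f"SOE after 30 min: {soe[int(1800 / dt)]:.2f}")
```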

  10. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    Science.gov (United States)

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  11. Outlier identification procedures for contingency tables using maximum likelihood and $L_1$ estimates

    NARCIS (Netherlands)

    Kuhnt, S.

    2004-01-01

    Observed cell counts in contingency tables are perceived as outliers if they have low probability under an anticipated loglinear Poisson model. New procedures for the identification of such outliers are derived using the classical maximum likelihood estimator and an estimator based on the L1 norm.

  12. On the design of experimental separation processes for maximum accuracy in the estimation of their parameters

    International Nuclear Information System (INIS)

    Volkman, Y.

    1980-07-01

    The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method

  13. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  14. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  15. Estimation of a melting probe's penetration velocity range to reach icy moons' subsurface ocean

    Science.gov (United States)

    Erokhina, Olga; Chumachenko, Eugene

    2014-05-01

    In modern space science, one of the most active areas is the exploration of icy satellites. Interest is concentrated on Jupiter's moons Europa and Ganymede and Saturn's moons Titan and Enceladus, which are covered by a thick icy layer according to the "Voyager 1", "Voyager 2", "Galileo" and "Cassini" missions. There is a strong possibility that a deep ocean lies beneath the icy shell. Conditions on these satellites also allow speculation about possible habitability and make these moons interesting from an astrobiological point of view. One of the possible tasks of planned missions is a subsurface study, which requires special equipment suitable for planetary application. One possible means is a melting probe, which operates by melting the ice and descends under gravity. Such a probe should be relatively small, should not weigh too much and should not require too much energy. In the terrestrial case this kind of probe has been used successfully for glacier studies, and its use can be extrapolated to extraterrestrial applications. One of the tasks is to estimate the melting probe's penetration velocity. Other unsolved problems include analyzing how the probe will move in low gravity and low atmospheric pressure, determining whether the hole will close behind the probe as it penetrates deep enough, and establishing the order of magnitude of the penetration velocity. This study explores two techniques for describing the melting probe's movement: one based on elasto-plastic theory and the so-called "solid water" theory, and another that takes phase change into account. These two techniques make it possible to estimate the melting probe's velocity range and to study the whole process. Based on these techniques, several cases of melting probe movement were considered, the melting probe's velocity range was estimated, the influence of different factors was studied and discussed, and a simple way to optimize the parameters of the melting probe is proposed.

  16. Subsurface Scattered Photons: Friend or Foe? Improving visible light laser altimeter elevation estimates, and measuring surface properties using subsurface scattered photons

    Science.gov (United States)

    Greeley, A.; Kurtz, N. T.; Neumann, T.; Cook, W. B.; Markus, T.

    2016-12-01

    Photon counting laser altimeters such as MABEL (Multiple Altimeter Beam Experimental Lidar) - a single photon counting simulator for ATLAS (Advanced Topographical Laser Altimeter System) - use individual photons with visible wavelengths to measure their range to target surfaces. ATLAS, the sole instrument on NASA's upcoming ICESat-2 mission, will provide scientists a view of Earth's ice sheets, glaciers, and sea ice with unprecedented detail. Precise calibration of these instruments is needed to understand rapidly changing parameters such as sea ice freeboard, and to measure optical properties of surfaces like snow covered ice sheets using subsurface scattered photons. Photons that travel through snow, ice, or water before scattering back to an altimeter receiving system travel farther than photons taking the shortest path between the observatory and the target of interest. These delayed photons produce a negative elevation bias relative to photons scattered directly off these surfaces. We use laboratory measurements of snow surfaces using a flight-tested laser altimeter (MABEL), and Monte Carlo simulations of backscattered photons from snow to estimate elevation biases from subsurface scattered photons. We also use these techniques to demonstrate the ability to retrieve snow surface properties like snow grain size.

  17. An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging

    Directory of Open Access Journals (Sweden)

    Pistorius Stephen

    2010-01-01

    Full Text Available During the last forty years, Subsurface Radar (SR) has been used in an increasing number of noninvasive/nondestructive imaging applications, ranging from landmine detection to breast imaging. To properly assess the dimensions and locations of the targets within the scan area, SR data sets have to be reconstructed. This process usually requires the knowledge of the propagation speed in the medium, which is usually obtained by performing an offline measurement from a representative sample of the materials that form the scan region. Nevertheless, in some novel near-field SR scenarios, such as Microwave Wood Inspection (MWI) and Breast Microwave Radar (BMR), the extraction of a representative sample is not an option due to the noninvasive requirements of the application. A novel technique to determine the propagation speed of the medium based on the use of an information theory metric is proposed in this paper. The proposed method uses the Shannon entropy of the reconstructed images as the focal quality metric to generate an estimate of the propagation speed in a given scan region. The performance of the proposed algorithm was assessed using data sets collected from experimental setups that mimic the dielectric contrast found in BMI and MWI scenarios. The proposed method yielded accurate results and exhibited an execution time in the order of seconds.

  18. An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging

    Science.gov (United States)

    Flores-Tapia, Daniel; Pistorius, Stephen

    2010-12-01

    During the last forty years, Subsurface Radar (SR) has been used in an increasing number of noninvasive/nondestructive imaging applications, ranging from landmine detection to breast imaging. To properly assess the dimensions and locations of the targets within the scan area, SR data sets have to be reconstructed. This process usually requires the knowledge of the propagation speed in the medium, which is usually obtained by performing an offline measurement from a representative sample of the materials that form the scan region. Nevertheless, in some novel near-field SR scenarios, such as Microwave Wood Inspection (MWI) and Breast Microwave Radar (BMR), the extraction of a representative sample is not an option due to the noninvasive requirements of the application. A novel technique to determine the propagation speed of the medium based on the use of an information theory metric is proposed in this paper. The proposed method uses the Shannon entropy of the reconstructed images as the focal quality metric to generate an estimate of the propagation speed in a given scan region. The performance of the proposed algorithm was assessed using data sets collected from experimental setups that mimic the dielectric contrast found in BMI and MWI scenarios. The proposed method yielded accurate results and exhibited an execution time in the order of seconds.
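
    The following toy sketch illustrates the entropy-as-focal-quality idea on simulated data: a point target is imaged by delay-and-sum reconstruction for several candidate propagation speeds, and the speed that minimizes the Shannon entropy of the image (the sharpest focus) is selected. Geometry, waveform and grid sizes are all illustrative, not the paper's reconstruction method.

```python
import numpy as np

c_true = 1.0e8                                   # m/s, "true" speed in the medium
antennas = np.linspace(-0.2, 0.2, 9)             # m, antenna positions on the surface
target = np.array([0.0, 0.3])                    # m, point scatterer (x, depth)
t = np.linspace(0, 12e-9, 600)                   # s, time axis of the A-scans

def pulse(tau):
    return np.exp(-(tau / 0.3e-9) ** 2)          # simple Gaussian wavelet

# simulated monostatic A-scans: two-way travel time to the point target
scans = np.array([pulse(t - 2 * np.hypot(xa - target[0], target[1]) / c_true)
                  for xa in antennas])

x_img = np.linspace(-0.2, 0.2, 31)
z_img = np.linspace(0.05, 0.5, 31)

def entropy_of_image(c):
    img = np.zeros((z_img.size, x_img.size))
    for i, z in enumerate(z_img):
        for j, x in enumerate(x_img):
            # delay-and-sum: sample each A-scan at the two-way travel time to (x, z)
            delays = 2 * np.hypot(antennas - x, z) / c
            img[i, j] = sum(np.interp(d, t, s) for d, s in zip(delays, scans))
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))                # Shannon entropy of the image

speeds = np.linspace(0.85e8, 1.15e8, 7)
best = speeds[np.argmin([entropy_of_image(c) for c in speeds])]
print(f"estimated speed: {best:.2e} m/s (true {c_true:.2e} m/s)")
```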

  19. Microarray background correction: maximum likelihood estimation for the normal-exponential convolution

    DEFF Research Database (Denmark)

    Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K

    2009-01-01

    exponentially distributed, representing background noise and signal, respectively. Using a saddle-point approximation, Ritchie and others (2007) found normexp to be the best background correction method for 2-color microarray data. This article develops the normexp method further by improving the estimation...... is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data...

  20. Estimation of subsurface hydrological parameters around Akwuke, Enugu, Nigeria using surface resistivity measurements

    International Nuclear Information System (INIS)

    Utom, Ahamefula U; Odoh, Benard I; Egboka, Boniface C E; Egboka, Nkechi E; Okeke, Harold C

    2013-01-01

    As few boreholes may be available and carrying out pumping tests can be expensive and time consuming, relationships between aquifer characteristics and the electrical parameters of different geoelectric layers exist. Data from 19 vertical electrical soundings (VESs; 13 of these selected for evaluation) were recorded with a Schlumberger electrode configuration in the area around Akwuke, Enugu, Nigeria. The data were interpreted by computer iterative modelling with curve matching for calibration purposes. Geoelectric cross-sections along a number of lines were prepared to ascertain the overall distribution of the resistivity responses of the subsurface lithology. Identified probable shallow aquifer resistivity, thickness and depth values are in the range of 28–527 Ωm, 2.1–22.5 m and 3.1–28.3 m respectively. As our aquifer system consists of fine-grained, clay–silty sand materials, a modification of the Archie equations (Waxman–Smits model) was adopted to determine the true formation factor using the relationship between the apparent formation factor and the pore water resistivity. This representation of the effects of a separate conducting path due to the presence of clay particles in the aquifer materials was used in making reliable estimations of aquifer properties. The average hydraulic conductivity of 8.96 × 10⁻⁴ m s⁻¹ and transmissivity ranging between 1.88 × 10⁻³ and 2.02 × 10⁻³ m² s⁻¹ estimated from surface resistivity measurements correlated well with the available field data. Results of the study also showed a direct relationship between aquifer transmissivity and modified transverse resistance (R² = 0.85). (paper)

  1. Maximum likelihood estimation of the position of a radiating source in a waveguide

    International Nuclear Information System (INIS)

    Hinich, M.J.

    1979-01-01

    An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array regardless of the array size

  2. Deep subsurface structure modeling and site amplification factor estimation in Niigata plain for broadband strong motion prediction

    International Nuclear Information System (INIS)

    Sato, Hiroaki

    2009-01-01

    This report addresses a methodology for deep subsurface structure modeling in the Niigata plain, Japan, to estimate the site amplification factor over a broadband frequency range for broadband strong motion prediction. In order to investigate the deep S-wave velocity structures, which are important for estimating both long- and short-period ground motion, we conduct microtremor array measurements at nine sites in the Niigata plain. The estimated depths of the top of the basement layer agree well with those of the Green tuff formation as well as with the Bouguer anomaly distribution. Dispersion characteristics derived from the observed long-period ground motion records are well explained by the theoretical dispersion curves of Love-wave group velocities calculated from the estimated subsurface structures. These results demonstrate that the deep subsurface structures obtained from microtremor array measurements make it possible to estimate long-period ground motions in the Niigata plain. Moreover, the applicability of microtremor array exploration to inclined basement structures such as folds is shown by two-dimensional finite difference numerical simulations. The short-period site amplification factors in the Niigata plain are estimated empirically by spectral inversion analysis of the S-wave parts of strong motion data. The resulting site amplifications are relatively large in the frequency range of about 1.5-5 Hz and decay significantly as the frequency increases above about 5 Hz. However, these features cannot be explained by calculations from the deep subsurface structures alone. The estimation of site amplification factors in the frequency range of about 1.5-5 Hz is improved by introducing a detailed shallow structure down to GL-20 m depth at a site. We also propose considering random fluctuations in the modeling of the deep S-wave velocity structure for broadband site amplification factor estimation. The site amplification in the frequency range higher than about 5 Hz is filtered

  3. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also quantified. Simulation results show that the proposed estimation methods can approach the CRLBs, and that the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with existing methods, better estimation performance can be achieved.

  4. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.

  5. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  6. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢0 law. This paper deals with amplitude data, so the 𝒢A0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the 𝒢A0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  7. A convenient method for estimating the contaminated zone of a subsurface aquifer resulting from radioactive waste disposal into ground

    International Nuclear Information System (INIS)

    Fukui, Masami; Katsurayama, Kousuke; Uchida, Shigeo.

    1981-01-01

    Studies were conducted to estimate the contamination spread resulting from the disposal of radioactive waste into a subsurface aquifer. A general equation expressing the contaminated zone as a function of radioactive decay and of the physical and chemical parameters of the soil is presented. A distribution coefficient was also formulated which can be used to judge the suitability of a site for waste disposal. Moreover, a method for predicting contaminant concentration in groundwater at a site boundary is suggested for heterogeneous media where the subsurface aquifer has different values of porosity, density, flow velocity, distribution coefficient and so on. A general equation was also developed to predict the distribution of radionuclides resulting from the disposal of a solid waste material. The distribution of contamination was evaluated for 90Sr and 239Pu, which obey a linear adsorption model and first-order kinetics, respectively. These equations appear to have practical utility for easily estimating groundwater contamination. (author)

  8. Estimation of groundwater recharge from the subsurface to the rock mass. A case study of Tono Mine Area, Gifu Prefecture

    International Nuclear Information System (INIS)

    Kobayashi, Koichi; Nakano, Katushi; Koide, Kaoru

    1996-01-01

    The groundwater flow analysis involves the groundwater recharge from the subsurface to the rock mass. According to the water balance method, annual groundwater recharge is calculated as the remainder after subtracting annual evapotranspiration and river flow from annual precipitation. In this estimation, hydrological and meteorological data observed for 5 years on the watershed in the Tono mine area are used. Annual precipitation ranges from 1,000 to 1,900 mm and annual river flow ranges from 400 to 1,300 mm, so the river flow depends critically on precipitation. Annual evapotranspiration calculated by the Penman method ranges from 400 to 500 mm and fluctuates less than annual precipitation. As a result of the water balance examination in the subsurface zone, the annual groundwater recharge is estimated to range from 10 to 200 mm in this watershed. (author)
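
    The water-balance bookkeeping described above amounts to recharge = precipitation − evapotranspiration − river flow. A trivial sketch with placeholder yearly values chosen inside the quoted ranges (not the observed data):

```python
# Simple water-balance bookkeeping: recharge is what is left of precipitation
# after evapotranspiration and river flow. The yearly figures are placeholders.
years = {
    # year: (precipitation, evapotranspiration, river_flow) in mm/yr (assumed values)
    1990: (1900, 480, 1300),
    1991: (1400, 450, 800),
    1992: (1000, 420, 480),
}

for year, (p, et, q) in years.items():
    recharge = p - et - q
    print(f"{year}: recharge = {p} - {et} - {q} = {recharge} mm")
```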

  9. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.

  10. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
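
    To make the comparison concrete, the sketch below fits a plain Gaussian regression by minimizing either the negative log-likelihood or the mean CRPS (which has a closed form for the normal distribution). The synthetic data and optimizer settings are assumptions; the study's post-processing models are more general.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.uniform(-5, 5, 500)
y = 1.0 + 0.8 * x + rng.normal(0, 1.5, x.size)          # synthetic "observations"

def unpack(theta):
    a, b, log_sigma = theta
    return a + b * x, np.exp(log_sigma)

def neg_log_lik(theta):
    mu, sigma = unpack(theta)
    return -np.sum(norm.logpdf(y, mu, sigma))

def mean_crps(theta):
    # closed-form CRPS of a normal predictive distribution, averaged over the sample
    mu, sigma = unpack(theta)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

theta0 = np.array([0.0, 0.0, 0.0])
ml = minimize(neg_log_lik, theta0, method="Nelder-Mead").x
cr = minimize(mean_crps, theta0, method="Nelder-Mead").x
print("ML   estimate (a, b, sigma):", ml[0].round(3), ml[1].round(3), np.exp(ml[2]).round(3))
print("CRPS estimate (a, b, sigma):", cr[0].round(3), cr[1].round(3), np.exp(cr[2]).round(3))
```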

  11. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should

  12. Repeatability estimates for forage characters in Panicum maximum

    Directory of Open Access Journals (Sweden)

    Francisco José da Silva Lédo

    2008-08-01

    Full Text Available The objective of this work was to estimate the repeatability of forage characters of Panicum and to determine the number of evaluation cuts necessary to select Panicum genotypes with confidence. Data from a trial with 15 evaluation cuts, carried out between 21/11/2002 and 08/04/2005 at the experimental station of Embrapa Gado de Leite located in Valença, RJ, Brazil, were used. In this study, 23 genotypes of Panicum maximum were evaluated in experimental plots arranged in a randomized complete block design with three replications. The repeatability coefficients for fresh forage production (PMV), total plant dry matter production (PMS), leaf dry matter production (PMSF), leaf percentage in PMS (%FOL) and plant height (AP) were estimated using the analysis of variance, principal components and structural analysis methods. For all evaluated traits the effects of genotype, cut and genotype x cut interaction were significant (P<0.01). When

  13. How does subsurface retain and release stored water? An explicit estimation of young water fraction and mean transit time

    Science.gov (United States)

    Ameli, Ali; McDonnell, Jeffrey; Laudon, Hjalmar; Bishop, Kevin

    2017-04-01

    The stable isotopes of water have served science well as hydrological tracers, demonstrating that there is often a large component of "old" water in stream runoff. It has been more problematic to define the full transit time distribution of that stream water. Non-linear mixing of previous precipitation signals, stored for extended periods and travelling slowly through the subsurface before reaching the stream, results in a large range of possible transit times. It is difficult to find tracers that can represent this, especially if all one has is data on the precipitation input and the stream runoff. In this paper, we explicitly characterize this "old water" displacement using a novel quasi-steady, physically based flow and transport model of the well-studied S-Transect hillslope in Sweden, where the concentration of hydrological tracers in the subsurface and in the stream has been measured. We explore how the subsurface conductivity profile affects the characteristics of old water displacement, and then test these scenarios against the observed dynamics of conservative hydrological tracers in both the stream and the subsurface. This work examines the efficiency of convolution-based approaches in the estimation of the stream "young water" fraction and time-variant mean transit times. We also suggest how celerity and velocity differ with landscape structure

  14. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well-understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation enables the study of many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper bayes estimators preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)

  15. A role of vertical mixing on nutrient supply into the subsurface chlorophyll maximum in the shelf region of the East China Sea

    Science.gov (United States)

    Lee, Keunjong; Matsuno, Takeshi; Endoh, Takahiro; Ishizaka, Joji; Zhu, Yuanli; Takeda, Shigenobu; Sukigara, Chiho

    2017-07-01

    In summer, Changjiang Diluted Water (CDW) expands over the shelf region of the northern East China Sea. Dilution of the low salinity water could be caused by vertical mixing through the halocline. Vertical mixing through the pycnocline can transport not only saline water, but also high nutrient water from deeper layers to the surface euphotic zone. It is therefore very important to quantitatively evaluate the vertical mixing to understand the process of primary production in the CDW region. We conducted extensive measurements in the region during the period 2009-2011. Detailed investigations of the relative relationship between the subsurface chlorophyll maximum (SCM) and the nitracline suggested that there were two patterns relating to the N/P ratio. Comparing the depths of the nitracline and SCM, it was found that the SCM was usually located from 20 to 40 m and just above the nitracline, where the N/P ratio within the nitracline was below 15, whereas it was located from 10 to 30 m and within the nitracline, where the N/P ratio was above 20. The large value of the N/P ratio in the latter case suggests the influence of CDW. Turbulence measurements showed that the vertical flux of nutrients with vertical mixing was large (small) where the N/P ratio was small (large). A comparison with a time series of primary production revealed a consistency with the pattern of snapshot measurements, suggesting that the nutrient supply from the lower layer contributes considerably to the maintenance of SCM.

  16. Maximum a posteriori covariance estimation using a power inverse wishart prior

    DEFF Research Database (Denmark)

    Nielsen, Søren Feodor; Sporring, Jon

    2012-01-01

    The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
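
    To give a flavour of MAP covariance estimation in the small-sample regime referred to above, the sketch below uses an ordinary inverse Wishart prior rather than the power inverse Wishart prior of the paper; the dimensions, prior scale matrix, and degrees of freedom are illustrative assumptions. The point is only that the MAP estimate stays well conditioned when the sample covariance is singular.

      import numpy as np

      rng = np.random.default_rng(2)
      p, n = 20, 10                          # dimension larger than the sample size
      Sigma_true = 0.5 * np.eye(p) + 0.5     # compound-symmetry true covariance
      X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)

      S = X.T @ X                            # scatter matrix (mean known to be zero)

      # Inverse Wishart prior IW(Psi, nu). With Gaussian data the posterior is
      # IW(Psi + S, nu + n), whose mode (the MAP estimate) is
      #   Sigma_MAP = (Psi + S) / (nu + n + p + 1).
      nu = p + 2
      Psi = np.eye(p)
      Sigma_map = (Psi + S) / (nu + n + p + 1)

      Sigma_mle = S / n                      # sample covariance, singular when n < p
      print("rank of the MLE:", np.linalg.matrix_rank(Sigma_mle))
      print("smallest eigenvalue of the MAP estimate:",
            np.linalg.eigvalsh(Sigma_map).min())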

  17. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J

    2007-01-01

    : the selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D...

  18. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Science.gov (United States)

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees per acre by species and 2-inch diameter at breast-height class, delivered product prices by species and product...

  19. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR...

  20. A simple route to maximum-likelihood estimates of two-locus

    Indian Academy of Sciences (India)

    Journal of Genetics, Volume 94, Issue 3. A simple route to maximum-likelihood estimates of two-locus recombination fractions under inequality restrictions. Iain L. Macdonald, Philasande Nkalashe. Research Note, September 2015, pp. 479-481 ...

  1. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  2. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    López-Valcarce Roberto

    2004-01-01

    Full Text Available We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and therefore it is well suited to DSP implementation, performing well with preliminary field data.

  3. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named as sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.
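
    As background for the single-source case discussed above, the sketch below (an illustrative grid search, not the SOS/SDP method of the paper; array size, half-wavelength spacing, snapshot count, and SNR are assumptions) evaluates the ML criterion for one source received by a ULA, which reduces to maximizing a(theta)^H R a(theta) over the steering angle.

      import numpy as np

      rng = np.random.default_rng(3)
      M, N = 8, 200                        # sensors, snapshots (half-wavelength ULA)
      theta_true = np.deg2rad(20.0)
      snr_db = 0.0

      def steering(theta, M):
          return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

      s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
      noise = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
      X = 10 ** (snr_db / 20) * np.outer(steering(theta_true, M), s) + noise

      R = X @ X.conj().T / N               # sample covariance matrix

      # Single-source ML: maximize a(theta)^H R a(theta) / (a^H a) on an angle grid
      grid = np.deg2rad(np.linspace(-90, 90, 1801))
      A = np.stack([steering(th, M) for th in grid])
      criterion = np.real(np.einsum('gm,mn,gn->g', A.conj(), R, A)) / M
      theta_hat = grid[np.argmax(criterion)]
      print(f"true DOA: {np.rad2deg(theta_true):.1f} deg, "
            f"ML estimate: {np.rad2deg(theta_hat):.2f} deg")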

  4. Maximum Likelihood PSD Estimation for Speech Enhancement in Reverberation and Noise

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Søren Holdt

    2016-01-01

    In this contribution we focus on the problem of power spectral density (PSD) estimation from multiple microphone signals in reverberant and noisy environments. The PSD estimation method proposed in this paper is based on the maximum likelihood (ML) methodology. In particular, we derive a novel ML ... It is shown numerically that the mean squared estimation error achieved by the proposed method is near the limit set by the corresponding Cramér-Rao lower bound. The speech dereverberation performance of a multi-channel Wiener filter (MWF) based on the proposed PSD estimators is measured using several instrumental measures and is shown to be higher than when the competing estimator is used. Moreover, we perform a speech intelligibility test where we demonstrate that both the proposed and the competing PSD estimators lead to similar intelligibility improvements ...

  5. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  6. Estimating Impacts of Agricultural Subsurface Drainage on Evapotranspiration Using the Landsat Imagery-Based METRIC Model

    Directory of Open Access Journals (Sweden)

    Kul Khand

    2017-11-01

    Full Text Available Agricultural subsurface drainage changes the field hydrology and potentially the amount of water available to the crop by altering the flow path and the rate and timing of water removal. Evapotranspiration (ET is normally among the largest components of the field water budget, and the changes in ET from the introduction of subsurface drainage are likely to have a greater influence on the overall water yield (surface runoff plus subsurface drainage from subsurface drained (TD fields compared to fields without subsurface drainage (UD. To test this hypothesis, we examined the impact of subsurface drainage on ET at two sites located in the Upper Midwest (North Dakota-Site 1 and South Dakota-Site 2 using the Landsat imagery-based METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration model. Site 1 was planted with corn (Zea mays L. and soybean (Glycine max L. during the 2009 and 2010 growing seasons, respectively. Site 2 was planted with corn for the 2013 growing season. During the corn growing seasons (2009 and 2013, differences between the total ET from TD and UD fields were less than 5 mm. For the soybean year (2010, ET from the UD field was 10% (53 mm greater than that from the TD field. During the peak ET period from June to September for all study years, ET differences from TD and UD fields were within 15 mm (<3%. Overall, differences between daily ET from TD and UD fields were not statistically significant (p > 0.05 and showed no consistent relationship.

  7. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used ...

  8. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

  9. Measurement and estimation of maximum skin dose to the patient for different interventional procedures

    International Nuclear Information System (INIS)

    Cheng Yuxi; Liu Lantao; Wei Kedao; Yu Peng; Yan Shulin; Li Tianchang

    2005-01-01

    Objective: To determine the dose distribution and maximum skin dose to the patient for four interventional procedures: coronary angiography (CA), hepatic angiography (HA), radiofrequency ablation (RF) and cerebral angiography (CAG), and to estimate the definitive effect of radiation on skin. Methods: Skin dose was measured using LiF: Mg, Cu, P TLD chips. A total of 9 measuring points were chosen on the back of the patient with two TLDs placed at each point, for CA, HA and RF interventional procedures, whereas two TLDs were placed on one point each at the postero-anterior (PA) and lateral (LAT) sides, respectively, during the CAG procedure. Results: The results revealed that the maximum skin dose to the patient was 1683.91 mGy for the HA procedure with a mean value of 607.29 mGy. The maximum skin dose at the PA point was 959.3 mGy for the CAG with a mean value of 418.79 mGy, while the maximum and the mean doses at the LAT point were 704 mGy and 191.52 mGy, respectively. For the RF procedure the maximum dose was 853.82 mGy and the mean was 219.67 mGy. For the CA procedure the maximum dose was 456.1 mGy and the mean was 227.63 mGy. Conclusion: The dose values measured in this study are only estimates and may not capture the true maximum skin dose, because it is impractical to cover the skin with a very large number of TLDs; a small area of skin exposed to a high dose could therefore be missed, since the dose distribution is continuous. (authors)

  10. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  11. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...

  12. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

    In this paper, an approximate curve fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model by a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power in any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low cost real time applications, such as a control loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator together with simulations and experimental results are presented. (author)
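
    The following sketch is not the microcontroller fitting routine of the paper, only a minimal illustration (all module parameters are assumed, and series/shunt resistances are ignored) of the underlying idea: once the simple single-diode model parameters are known, the maximum power point can be located by scanning the resulting I-V curve.

      import numpy as np

      # Assumed single-diode module parameters
      I_ph = 5.0        # photogenerated current [A]
      I_0 = 1e-9        # diode saturation current [A]
      n_s = 36          # cells in series
      V_t = 0.0257      # thermal voltage at 25 degC [V]
      a = 1.3           # diode ideality factor

      def current(v):
          """Module current from the simplified single-diode equation."""
          return I_ph - I_0 * (np.exp(v / (n_s * a * V_t)) - 1.0)

      v = np.linspace(0.0, 25.0, 5000)
      i = np.clip(current(v), 0.0, None)
      p = v * i
      k = np.argmax(p)
      print(f"estimated MPP: V = {v[k]:.2f} V, I = {i[k]:.2f} A, P = {p[k]:.1f} W")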

  13. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  14. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    ... and space. This paper presents a new estimator (STC-MLE), which incorporates the correlation property. It is an expansion of the maximum likelihood estimator (MLE) developed by Ferrara et al. With the MLE a cross-correlation analysis between consecutive RF-lines on complex form is carried out for a range of possible velocities. In the new estimator an additional similarity investigation for each evaluated velocity and the available velocity estimates in a temporal (between frames) and spatial (within frames) neighborhood is performed. An a priori probability density term in the distribution of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...

  15. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    Science.gov (United States)

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
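
    The fully visible case lends itself to a compact illustration. The sketch below (a four-unit toy model with made-up couplings, fitted with a general-purpose optimizer; it illustrates the MPLE itself, not the asymptotic-normality results of the brief) draws exact samples from a small Boltzmann machine and recovers its parameters by maximum pseudolikelihood, using the conditional p(x_i | x_-i) = sigmoid(2 x_i (b_i + sum_j W_ij x_j)) for +/-1 units.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(4)
      d, n = 4, 2000

      # Assumed "true" fully visible BM on +/-1 units: P(x) ~ exp(0.5 x'Wx + b'x)
      W_true = np.array([[ 0.0, 0.8, 0.0, -0.5],
                         [ 0.8, 0.0, 0.4,  0.0],
                         [ 0.0, 0.4, 0.0,  0.6],
                         [-0.5, 0.0, 0.6,  0.0]])
      b_true = np.array([0.2, -0.1, 0.0, 0.3])

      # Exact sampling by enumerating all 2^d states (feasible only for tiny d)
      states = np.array([[1 if (s >> i) & 1 else -1 for i in range(d)]
                         for s in range(2 ** d)], dtype=float)
      logp = 0.5 * np.einsum('si,ij,sj->s', states, W_true, states) + states @ b_true
      probs = np.exp(logp - logp.max())
      probs /= probs.sum()
      X = states[rng.choice(len(states), size=n, p=probs)]

      def neg_pseudo_loglik(theta):
          iu = np.triu_indices(d, k=1)
          W = np.zeros((d, d))
          W[iu] = theta[:len(iu[0])]
          W = W + W.T                       # symmetric couplings, zero diagonal
          b = theta[len(iu[0]):]
          a = X @ W + b                     # a[k, i] = b_i + sum_j W_ij x_kj
          # log p(x_i | x_-i) = log sigmoid(2 x_i a_i), summed over units and samples
          return -np.sum(np.log(expit(2 * X * a) + 1e-12))

      theta0 = np.zeros(d * (d - 1) // 2 + d)
      res = minimize(neg_pseudo_loglik, theta0, method='L-BFGS-B')
      print("MPLE couplings (upper triangle):", np.round(res.x[:6], 2))
      print("true couplings (upper triangle):", W_true[np.triu_indices(d, 1)])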

  16. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take these covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  17. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  18. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at a rate of O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the ...

  19. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  20. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Full Text Available Burr type III is an important distribution used to model failure time data. The paper addresses the problem of estimation of the parameters of the Burr type III distribution based on maximum likelihood estimation (MLE) when the samples are left censored. As the closed form expression for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parametric values. A real life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.

  1. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time ... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all ...

  2. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V,

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  3. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice is harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time ... A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which recently has been demonstrated to be robust to noise and accurate, even when the model order is unknown. The main drawback of the ML ...

  4. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness
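
    As context for the kind of problem described above, the sketch below sets up a tiny regularized Poisson maximum likelihood reconstruction (a 1-D deblurring toy; the blur kernel, the Tikhonov penalty, and the fixed regularization parameter are illustrative assumptions, and the paper's parameter-choice methods are not reproduced here).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      n = 60
      # Assumed forward model: 1-D Gaussian blur acting on nonnegative intensities
      x_grid = np.arange(n)
      kernel = np.exp(-0.5 * ((x_grid - x_grid[:, None]) / 2.0) ** 2)
      A = kernel / kernel.sum(axis=1, keepdims=True)
      u_true = np.zeros(n)
      u_true[20:30] = 50.0
      u_true[40:45] = 30.0
      y = rng.poisson(A @ u_true)            # photon-count data

      alpha = 0.05                           # fixed regularization parameter (assumed)

      def objective(u):
          Au = A @ u + 1e-10
          # negative Poisson log-likelihood plus a Tikhonov penalty
          return np.sum(Au - y * np.log(Au)) + 0.5 * alpha * np.sum(u ** 2)

      res = minimize(objective, np.ones(n), method='L-BFGS-B',
                     bounds=[(0.0, None)] * n)
      print("relative reconstruction error:",
            np.linalg.norm(res.x - u_true) / np.linalg.norm(u_true))

    In practice the whole point of the paper is how to pick alpha; rerunning the sketch over a grid of alpha values makes the sensitivity of the reconstruction to that choice easy to see.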

  5. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  6. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

    Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches, i.e., the Amplification Index (AI) and Independent System (IS) approaches, which consider the combined effect are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work pioneers a PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.

  7. Particle-filtering-based estimation of maximum available power state in Lithium-Ion batteries

    International Nuclear Information System (INIS)

    Burgos-Mellado, Claudio; Orchard, Marcos E.; Kazerani, Mehrdad; Cárdenas, Roberto; Sáez, Doris

    2016-01-01

    Highlights: • Approach to estimate the state of maximum power available in Lithium-Ion battery. • Optimisation problem is formulated on the basis of a non-linear dynamic model. • Solutions of the optimisation problem are functions of state of charge estimates. • State of charge estimates computed using particle filter algorithms. - Abstract: Battery Energy Storage Systems (BESS) are important for applications related to both microgrids and electric vehicles. If BESS are used as the main energy source, then it is required to include adequate procedures for the estimation of critical variables such as the State of Charge (SoC) and the State of Health (SoH) in the design of Battery Management Systems (BMS). Furthermore, in applications where batteries are exposed to high charge and discharge rates it is also desirable to estimate the State of Maximum Power Available (SoMPA). In this regard, this paper presents a novel approach to the estimation of SoMPA in Lithium-Ion batteries. This method formulates an optimisation problem for the battery power based on a non-linear dynamic model, where the resulting solutions are functions of the SoC. In the battery model, the polarisation resistance is modelled using fuzzy rules that are function of both SoC and the discharge (charge) current. Particle filtering algorithms are used as an online estimation technique, mainly because these algorithms allow approximating the probability density functions of the SoC and SoMPA even in the case of non-Gaussian sources of uncertainty. The proposed method for SoMPA estimation is validated using the experimental data obtained from an experimental setup designed for charging and discharging the Lithium-Ion batteries.
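
    To illustrate the particle-filtering ingredient mentioned above, here is a minimal bootstrap particle filter for SoC tracking with a crude equivalent-circuit battery model (the linear OCV curve, capacity, resistance, noise levels, and current profile are all assumptions and are unrelated to the fuzzy polarisation-resistance model of the paper).

      import numpy as np

      rng = np.random.default_rng(6)
      Q = 2.0 * 3600.0                      # assumed capacity [A s]
      R0 = 0.05                             # assumed ohmic resistance [ohm]
      dt = 1.0

      def ocv(soc):                         # crude open-circuit-voltage curve
          return 3.0 + 1.2 * soc

      # Simulated "truth": constant 1 A discharge with noisy voltage measurements
      T = 600
      current = np.full(T, 1.0)
      soc_true = np.empty(T)
      soc_true[0] = 0.9
      for k in range(1, T):
          soc_true[k] = soc_true[k - 1] - current[k] * dt / Q
      v_meas = ocv(soc_true) - R0 * current + rng.normal(0, 0.01, T)

      # Bootstrap particle filter on the SoC state
      n_p = 500
      particles = rng.uniform(0.5, 1.0, n_p)
      soc_est = np.empty(T)
      for k in range(T):
          # propagate by coulomb counting plus a small process noise
          particles = particles - current[k] * dt / Q + rng.normal(0, 1e-4, n_p)
          # weight by the Gaussian voltage-measurement likelihood
          v_pred = ocv(particles) - R0 * current[k]
          w = np.exp(-0.5 * ((v_meas[k] - v_pred) / 0.01) ** 2) + 1e-300
          w /= w.sum()
          soc_est[k] = np.sum(w * particles)
          # multinomial resampling
          particles = particles[rng.choice(n_p, size=n_p, p=w)]

      print("final SoC estimate vs truth:", soc_est[-1], soc_true[-1])

    In the spirit of the abstract, a SoMPA estimate could then be obtained by propagating such SoC particles through a battery power model subject to its operating constraints; that step is not reproduced here.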

  8. An estimation of the electrical characteristics of planetary shallow subsurfaces with TAPIR antennas

    Science.gov (United States)

    Le Gall, A.; Reineix, A.; Ciarletti, V.; Berthelier, J. J.; Ney, R.; Dolon, F.; Corbel, C.

    2006-06-01

    In the frame of the NETLANDER program, we have developed the Terrestrial And Planetary Investigation by Radar (TAPIR) imaging ground-penetrating radar to explore the Martian subsurface at kilometric depths and search for potential water reservoirs. This instrument, which is to operate from a fixed lander, is based on a new concept which allows one to image the various underground reflectors by determining the direction of propagation of the reflected waves. The electrical parameters of the shallow subsurface (permittivity and conductivity) need to be known to correctly determine the propagation vector. In addition, these electrical parameters can bring valuable information on the nature of the materials close to the surface. The electric antennas of the radar are 35 m long resistively loaded monopoles that are laid on the ground. Their impedance, measured during a dedicated mode of operation of the radar, depends on the electrical parameters of the soil and is used to infer the permittivity and conductivity of the upper layer of the subsurface. This paper presents an experimental and theoretical study of the antenna impedance and shows that the frequency profile of the antenna complex impedance can be used to retrieve the geoelectrical characteristics of the soil. Comparisons between numerical modeling and in situ measurements have been successfully carried out over various soils, showing very good agreement.

  9. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the geological lithological composition. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To find out the lithological distribution in a study area, many spatial statistical methods have been used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data. Therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, using the limited hard data from the cores and the soft data generated from the geological dating data and the virtual wells to estimate the three-dimensional lithological classification in Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  10. The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2017-07-01

    Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.

  11. An Estimation Of The Geoelectric Features Of Planetary Shallow Subsurfaces With TAPIR Antennae

    Science.gov (United States)

    Le Gall, A.; Reineix, A.; Ciarletti, V.; Jean-Jacques, B.; Ney, R.; Dolon, F.; Corbel, C.

    2005-12-01

    Exploring the interior of Mars and searching for water reservoirs, either in the form of ice or of liquid water, was one of the main scientific objectives of the NETLANDER project. In that frame, the CETP (Centre d'Etude des Environnements Terrestre et Planetaires) has developed an imaging ground penetrating radar (GPR), called TAPIR (Terrestrial And Planetary Investigation by Radar). Operating from a fixed position and at low frequencies (from 2 to 4 MHz), this instrument makes it possible to retrieve not only the distance but also the inclination of deep subsurface reflectors by measuring the two horizontal electric components and the three magnetic components of the reflected waves. In 2004, ground tests were successfully carried out on the Antarctic Continent; the bedrock, lying under a thick layer of ice (up to 1200 m), was detected and part of its relief was revealed. Yet, knowing the electric parameters of the shallow subsurface is required to correctly process the measured electric and magnetic components of the echoes and deduce their propagation vector. In addition, these electric parameters can bring a very interesting piece of information on the nature of the material in the shallow underground. We have therefore looked for a possible method (appropriate for a planetary mission) to evaluate them using a special mode of operation of the radar. This method relies on the fact that the electrical characteristics of the transmitting electric antennas (current along the antenna, driving-point impedance, etc.) depend on the nature of the ground on which the radar is lying. If this dependency is significant enough, geological parameters of the subsurface can be deduced from the analysis of specific measurements. We have thus performed a detailed experimental and theoretical study of the TAPIR resistively loaded electrical dipoles to get a precise understanding of the radar transmission and assess the role of the electric parameters of the underground. In this poster, we ...

  12. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].

  13. Analysis of the maximum likelihood channel estimator for OFDM systems in the presence of unknown interference

    Science.gov (United States)

    Dermoune, Azzouz; Simon, Eric Pierre

    2017-12-01

    This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.

  14. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.

  15. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  17. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad; Hoteit, Ibrahim

    2014-01-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.

  18. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-02-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
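
    The complex-step method (CSM) used in both records above is a generic trick that is easy to demonstrate in isolation; the toy scalar function below is an assumption chosen only to show that the complex-step derivative matches the analytic one to machine precision, without the subtractive cancellation that limits finite differences.

      import numpy as np

      def f(x):
          # toy nonlinear response (an assumed stand-in for a model output)
          return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

      x0 = 1.5
      h = 1e-20                                     # the step can be tiny: no cancellation
      d_csm = np.imag(f(x0 + 1j * h)) / h           # complex-step derivative
      d_fd = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8   # central finite difference

      # analytic derivative for comparison
      s, c = np.sin(x0), np.cos(x0)
      d_exact = f(x0) * (1.0 - 1.5 * (s ** 2 * c - c ** 2 * s) / (s ** 3 + c ** 3))

      print("complex-step error:", abs(d_csm - d_exact))
      print("finite-diff error :", abs(d_fd - d_exact))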

  19. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Full Text Available Rotating machinery is the most common machinery in industry. The root of the faults in rotating machinery is often faulty rolling element bearings. This paper presents a technique using an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values), which are derived from the vibration signals of test data. The results show that the performance of the proposed optimized system is better than that of most previous studies, even though it uses only two features. The effectiveness of the above method is illustrated using the obtained bearing vibration data.

  20. Two-Stage Maximum Likelihood Estimation (TSMLE for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system, over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  1. Estimation of the Maximum Output Power of Double-Clad Photonic Crystal Fiber Laser

    International Nuclear Information System (INIS)

    Chen Yue-E; Wang Yong; Qu Xi-Long

    2012-01-01

    Compared with traditional optical fiber lasers, double-clad photonic crystal fiber (PCF) lasers have larger surface-area-to-volume ratios. With an increase of output power, thermal effects may severely restrict output power and deteriorate beam quality of fiber lasers. We utilize the heat-conduction equations to estimate the maximum output power of a double-clad PCF laser under natural-convection, air-cooling, and water-cooling conditions in terms of a certain surface-volume heat ratio of the PCF. The thermal effects hence define an upper power limit of double-clad PCF lasers when scaling output power. (fundamental areas of phenomenology(including applications))

  2. Evaluating Annual Maximum and Partial Duration Series for Estimating Frequency of Small Magnitude Floods

    Directory of Open Access Journals (Sweden)

    Fazlul Karim

    2017-06-01

    Full Text Available Understanding the nature of frequent floods is important for characterising channel morphology, riparian and aquatic habitat, and informing river restoration efforts. This paper presents results from an analysis on frequency estimates of low magnitude floods using the annual maximum and partial series data compared to actual flood series. Five frequency distribution models were fitted to data from 24 gauging stations in the Great Barrier Reef (GBR) lagoon catchments in north-eastern Australia. Based on the goodness of fit test, Generalised Extreme Value, Generalised Pareto and Log Pearson Type 3 models were used to estimate flood frequencies across the study region. Results suggest frequency estimates based on a partial series are better, compared to an annual series, for small to medium floods, while both methods produce similar results for large floods. Although both methods converge at a higher recurrence interval, the convergence recurrence interval varies between catchments. Results also suggest frequency estimates vary slightly between two or more partial series, depending on flood threshold, and the differences are large for the catchments that experience less frequent floods. While a partial series produces better frequency estimates, it can underestimate or overestimate the frequency if the flood threshold differs largely compared to bankfull discharge. These results have significant implications in calculating the dependency of floodplain ecosystems on the frequency of flooding and their subsequent management.
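
    For readers who want to reproduce this kind of comparison on their own data, the sketch below (synthetic daily flows, an arbitrary peaks-over-threshold level, and no declustering of dependent peaks; it is not the study's data or fitted models) fits a GEV distribution to the annual maximum series and a generalized Pareto distribution to a partial duration series, then compares the resulting 2-year flood estimates.

      import numpy as np
      from scipy.stats import genextreme, genpareto

      rng = np.random.default_rng(7)
      n_years, days = 40, 365
      flows = rng.gamma(shape=2.0, scale=20.0, size=(n_years, days))  # synthetic daily flows

      # Annual maximum series (AMS) -> GEV
      ams = flows.max(axis=1)
      c, loc, scale = genextreme.fit(ams)
      q2_ams = genextreme.ppf(1 - 1 / 2.0, c, loc, scale)        # 2-year flood from the AMS

      # Partial duration series (peaks over threshold) -> generalized Pareto
      threshold = np.quantile(flows, 0.995)                       # assumed threshold
      exceedances = flows[flows > threshold] - threshold
      lam = len(exceedances) / n_years                            # mean number of peaks per year
      c_g, loc_g, scale_g = genpareto.fit(exceedances, floc=0.0)
      # 2-year event: per-peak exceedance probability of 1 / (2 * lam)
      q2_pds = threshold + genpareto.ppf(1 - 1 / (2.0 * lam), c_g, loc_g, scale_g)

      print(f"2-year flood, annual maxima fit : {q2_ams:.1f}")
      print(f"2-year flood, partial series fit: {q2_pds:.1f}")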

  3. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D; Martins, C.D.A.

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric field energies, represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  4. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
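
    As a minimal sketch of the equalization step itself (not the proposed 2-step MLCE), the following Python fragment applies one-tap MMSE weights per frequency bin to a cyclic-prefixed block, assuming the channel estimate is available (here taken as perfect); channel taps, block size and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                      # FFT block size
h = np.array([0.8, 0.5, 0.3j])              # example multipath channel impulse response
sigma2 = 0.01                               # noise variance

# QPSK block with unit average symbol energy.
x = ((1 - 2 * rng.integers(0, 2, N)) + 1j * (1 - 2 * rng.integers(0, 2, N))) / np.sqrt(2)

# Circular convolution models a block whose cyclic prefix exceeds the channel length.
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)) \
    + np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

H = np.fft.fft(h, N)                        # channel estimate (perfect here)
W = np.conj(H) / (np.abs(H) ** 2 + sigma2)  # one-tap MMSE weight per subcarrier
x_hat = np.fft.ifft(W * np.fft.fft(y))      # equalized time-domain block

ber = 0.5 * (np.mean(np.sign(x_hat.real) != np.sign(x.real))
             + np.mean(np.sign(x_hat.imag) != np.sign(x.imag)))
print("bit error rate:", ber)
```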

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
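
    A minimal sketch of the iterative procedure in question, for a two-component univariate normal mixture, is given below; it is the textbook EM fixed-point iteration on synthetic data, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic sample from a two-component normal mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

# Initial guesses for (weight, mean, variance) of each component.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = (w / np.sqrt(2 * np.pi * var)) * \
           np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", w, "means:", mu, "variances:", var)
```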

  6. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
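
    As a hedged illustration of the underlying Poisson maximum-likelihood iteration (without the paper's frame selection, PSF estimation model, or regularization), the sketch below runs a plain multi-frame Richardson-Lucy update on synthetic frames with known PSFs; object, PSFs and iteration count are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / sigma ** 2)
    return g / g.sum()

# Synthetic object and two blurred, Poisson-noisy frames with different PSFs.
obj = np.zeros((64, 64)); obj[20:28, 30:38] = 50.0; obj[40, 12] = 400.0
psfs = [gaussian_psf(15, 1.5), gaussian_psf(15, 2.5)]
frames = [rng.poisson(fftconvolve(obj, p, mode="same") + 1e-6) for p in psfs]

# Multi-frame Richardson-Lucy iterations (iterative maximizer of the joint Poisson likelihood).
est = np.full_like(obj, frames[0].mean())
for _ in range(50):
    ratio = np.zeros_like(est)
    for g, p in zip(frames, psfs):
        blurred = fftconvolve(est, p, mode="same") + 1e-6
        ratio += fftconvolve(g / blurred, p[::-1, ::-1], mode="same")  # correlation = conv with flipped PSF
    est *= ratio / len(frames)

print("residual L1 error:", np.abs(est - obj).mean())
```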

  7. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    Science.gov (United States)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan and leads to significant loss of human life and property. On average, 3.5 typhoons strike Taiwan every year; Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, characterizing typhoon rainfall types is advantageous when estimating rainfall quantity. This study develops a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) is used to classify the typhoon events, decomposing the standard rainfall pattern of all stations for each typhoon event into EOFs and principal components (PCs), so that events that vary similarly in time and space are grouped into similar typhoon types. Next, based on this classification, probability density functions (PDFs) are constructed in space and time by means of multivariate maximum entropy using the first to fourth statistical moments, which gives the probability at each station and time. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and suitable for government typhoon disaster prevention.

  8. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. The TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.
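
    For orientation, the sketch below computes the first four sample L-moments from probability-weighted moments, the untrimmed special case (t1 = 0) that TL-moments generalize; the data and code are illustrative, not the study's.

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(n)                     # zero-based ranks
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2      # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(5)
flows = rng.lognormal(mean=5.0, sigma=0.6, size=60)   # synthetic annual maxima
print(sample_l_moments(flows))
```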

  9. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while Lognormal, Weibull and Pearson 5 were best fitted for pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
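
    A minimal sketch of the fitting-and-ranking idea is shown below: several candidate distributions are fitted by maximum likelihood to a synthetic MDR series and ranked by their Kolmogorov-Smirnov statistic, after which return-period quantiles are read from the best fit; the Anderson-Darling and Chi-square tests used in the study are omitted, and all numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mdr = rng.lognormal(mean=4.2, sigma=0.4, size=29)     # synthetic annual MDR series (mm)

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "pearson3":  stats.pearson3,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(mdr)                         # maximum-likelihood fit
    ks = stats.kstest(mdr, dist.cdf, args=params)  # K-S test against the fitted CDF
    results.append((ks.statistic, name, params))

for stat, name, _ in sorted(results):              # smallest K-S statistic = best fit
    print(f"{name:10s}  KS = {stat:.3f}")

# Return-period quantiles from the best-fitting distribution.
stat, name, params = min(results)
for T in (2, 5, 10, 20, 25):
    print(f"{T:2d}-yr MDR ≈ {candidates[name].ppf(1 - 1/T, *params):6.1f} mm")
```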

  10. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  11. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  12. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Directory of Open Access Journals (Sweden)

    K. Yao

    2007-12-01

    Full Text Available We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  13. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application to realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains which resemble realistic near-shore features. We investigate the accuracy of the analytical runup formulae under variation of fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  14. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    Directory of Open Access Journals (Sweden)

    Andrey eStepanyuk

    2014-10-01

    Full Text Available Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method, based on a maximum likelihood approach, that allows estimation of not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of the unitary current as compared with peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, the case when peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment.

  15. Estimating safe maximum levels of vitamins and minerals in fortified foods and food supplements.

    Science.gov (United States)

    Flynn, Albert; Kehoe, Laura; Hennessy, Áine; Walton, Janette

    2017-12-01

    To show how safe maximum levels (SML) of vitamins and minerals in fortified foods and supplements may be estimated in population subgroups. SML were estimated for adults and 7- to 10-year-old children for six nutrients (retinol, vitamins B6, D and E, folic acid, iron and calcium) using data on usual daily nutrient intakes from Irish national nutrition surveys. SML of nutrients in supplements were lower for children than for adults, except for calcium and iron. Daily energy intake from fortified foods in high consumers (95th percentile) varied by nutrient from 138 to 342 kcal in adults and 40-309 kcal in children. SML (/100 kcal) of nutrients in fortified food were lower for children than adults for vitamins B6 and D, higher for vitamin E, with little difference for other nutrients. Including 25 % 'overage' for nutrients in fortified foods and supplements had little effect on SML. Nutritionally significant amounts of these nutrients can be added safely to supplements and fortified foods for these population subgroups. The estimated SML of nutrients in fortified foods and supplements may be considered safe for these population subgroups over the long term given the food composition and dietary patterns prevailing in the respective dietary surveys. This risk assessment approach shows how nutrient intake data may be used to estimate, for population subgroups, the SML for vitamins and minerals in both fortified foods and supplements, separately, each taking into account the intake from other dietary sources.

  16. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image-degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  17. Evaluation of surface nuclear magnetic resonance-estimated subsurface water content

    International Nuclear Information System (INIS)

    Mueller-Petke, M; Dlugosch, R; Yaramanci, U

    2011-01-01

    The technique of nuclear magnetic resonance (NMR) has found widespread use in geophysical applications for determining rock properties (e.g. porosity and permeability) and state variables (e.g. water content) or to distinguish between oil and water. NMR measurements are most commonly made in the laboratory and in boreholes. The technique of surface NMR (or magnetic resonance sounding (MRS)) also takes advantage of the NMR phenomenon, but measures subsurface rock properties from the surface using large coils of some tens of meters in diameter, reaching depths of as much as 150 m. We give here a brief review of the current state of the art of forward modeling and inversion techniques. In laboratory NMR a calibration is used to convert measured signal amplitudes into water content. Surface NMR-measured amplitudes cannot be converted by a simple calibration. The water content is derived by comparing a measured amplitude with an amplitude calculated for a given subsurface water content model as input for a forward modeling that must account for all relevant physics. A convenient option to check whether the measured signals are reliable or the forward modeling accounts for all effects is to make measurements in a well-defined environment. Therefore, measurements on top of a frozen lake were made with the latest-generation surface NMR instruments. We found the measured amplitudes to be in agreement with the amplitudes calculated for a model of 100% water content. Assuming then that both the forward modeling and the measurement are correct, the uncertainty of the model, calculated from the measurement uncertainty, is only a few per cent.

  18. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Full Text Available Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007.

  19. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  20. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different than Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.

  1. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  2. Methodology to estimate the cost of the severe accidents risk / maximum benefit

    International Nuclear Information System (INIS)

    Mendoza, G.; Flores, R. M.; Vega, E.

    2016-09-01

    For programs and activities that manage aging effects, any changes to plant operations, inspections, maintenance activities, systems, and administrative control procedures during the renewal period that could impact the environment should be characterized and designed to manage the effects of aging as required by 10 CFR Part 54. Environmental impacts significantly different from those described in the final environmental statement for the current operating license should be described in detail. When complying with the requirements of a license renewal application, the Severe Accident Mitigation Alternatives (SAMA) analysis is contained in a supplement to the environmental report of the plant that meets the requirements of 10 CFR Part 51. In this paper, the methodology for estimating the cost of severe accident risk is established and discussed; it is then used to identify and select alternatives for severe accident mitigation, which are analyzed to estimate the maximum benefit that an alternative could achieve if it eliminated all risk. The cost of severe accident risk is estimated using the regulatory analysis techniques of the US Nuclear Regulatory Commission (NRC). The ultimate goal of implementing the methodology is to identify candidates for SAMA that have the potential to reduce the severe accident risk and to determine whether the implementation of each candidate is cost-effective. (Author)

  3. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    Rajan, Jeny; Jeurissen, Ben; Sijbers, Jan; Verhoye, Marleen; Van Audekerke, Johan

    2011-01-01

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index matrix, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.
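
    As a hedged sketch of the core estimation step only (without the nonlocal-means reference image or the restricted neighborhood selection proposed in the paper), the code below computes the maximum-likelihood estimate of the underlying intensity from a set of Rician-distributed magnitude samples with a known noise level, using scipy's rice distribution; signal level, noise level and sample count are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

sigma = 20.0          # noise standard deviation (assumed known or estimated elsewhere)
true_signal = 100.0

# Rician magnitude samples, as produced by magnitude MR reconstruction.
samples = np.abs(true_signal + sigma * rng.standard_normal(50)
                 + 1j * sigma * rng.standard_normal(50))

def neg_log_likelihood(a):
    # scipy's rice uses shape b = A/sigma and scale = sigma.
    return -np.sum(stats.rice.logpdf(samples, b=a / sigma, scale=sigma))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-3, samples.max()), method="bounded")
print("naive mean of magnitudes:", samples.mean())   # biased upward by the Rician noise floor
print("ML estimate of signal   :", res.x)
```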

  4. A practical method for estimating maximum shear modulus of cemented sands using unconfined compressive strength

    Science.gov (United States)

    Choo, Hyunwook; Nam, Hongyeop; Lee, Woojin

    2017-12-01

    The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.
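
    A minimal sketch of how such a qucs-to-Gmax relationship can be used is given below: a power-law fit in log-log space to hypothetical paired measurements, then prediction of Gmax from a new qucs value; the numbers are illustrative assumptions, not the study's data or calibration.

```python
import numpy as np

# Hypothetical paired measurements: unconfined compressive strength (kPa)
# and small-strain shear modulus from bender element tests (MPa).
qucs = np.array([120.0, 250.0, 400.0, 650.0, 900.0, 1400.0])
gmax = np.array([180.0, 310.0, 430.0, 620.0, 780.0, 1080.0])

# Fit Gmax = a * qucs**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(qucs), np.log(gmax), 1)
a = np.exp(log_a)
print(f"Gmax ≈ {a:.2f} * qucs^{b:.2f}")

# Predict Gmax for a new specimen from its measured qucs.
print("predicted Gmax at qucs = 500 kPa:", a * 500.0 ** b, "MPa")
```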

  5. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  6. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    Science.gov (United States)

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.

  7. Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity

    Science.gov (United States)

    Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md

    2017-08-01

    This paper describes an alternative way of estimating the design speed, or the maximum speed at which a vehicle can safely travel on a road, using curvature information from Bezier curve fitting on a map. We tested the method on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves that satisfy curvature continuity between joined curves when mapping the road. By finding the derivatives of the quintic Bezier curve, the curvature was calculated and the design speed derived. In this paper, a higher-order Bezier curve is used; a higher-degree curve gives users more freedom to control the shape of the curve compared with lower-degree curves.
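
    The sketch below illustrates the basic computation on an illustrative segment: curvature of a planar quintic Bezier curve from its first and second derivatives, followed by a simplified point-mass design-speed formula, v = sqrt(127·R·(e + f)) km/h with R in metres; the control points, superelevation e and side-friction factor f are assumptions, not the paper's road data.

```python
import numpy as np
from math import comb

def bezier_and_derivatives(ctrl, t):
    """Point, first and second derivative of a degree-5 Bezier curve at parameters t."""
    ctrl = np.asarray(ctrl, dtype=float)          # shape (6, 2)
    n = 5
    B = np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    c1 = n * np.diff(ctrl, axis=0)                # control points of the derivative curve
    B1 = np.array([comb(n - 1, i) * t**i * (1 - t)**(n - 1 - i) for i in range(n)])
    c2 = (n - 1) * np.diff(c1, axis=0)
    B2 = np.array([comb(n - 2, i) * t**i * (1 - t)**(n - 2 - i) for i in range(n - 1)])
    return B.T @ ctrl, B1.T @ c1, B2.T @ c2

# Illustrative control points of one road segment (metres, local coordinates).
ctrl = [(0, 0), (40, 5), (80, 30), (120, 60), (160, 70), (200, 72)]
t = np.linspace(0, 1, 500)
_, d1, d2 = bezier_and_derivatives(ctrl, t)

# Planar curvature: kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / \
        (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
r_min = 1.0 / kappa.max()

e, f = 0.06, 0.15                                  # assumed superelevation and side friction
v_max = np.sqrt(127.0 * r_min * (e + f))           # km/h
print(f"minimum radius ≈ {r_min:.1f} m, maximum safe speed ≈ {v_max:.1f} km/h")
```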

  8. Estimation method for first excursion probability of secondary system with impact and friction using maximum response

    International Nuclear Information System (INIS)

    Shigeru Aoki

    2005-01-01

    Secondary systems such as piping, tanks and other mechanical equipment are installed within primary systems such as buildings. Important secondary systems should be designed to maintain their function even when subjected to destructive earthquake excitations. Secondary systems have many nonlinear characteristics; impact and friction, which are observed in mechanical supports and joints, are common ones and are exploited in impact dampers and friction dampers for the reduction of seismic response. In this paper, analytical methods for the first excursion probability of a secondary system with impact and friction, subjected to earthquake excitation, are proposed. Using these methods, the effects of impact force, gap size and friction force on the first excursion probability are examined. When the tolerance level is normalized by the maximum response of the secondary system without impact or friction characteristics, the variation of the first excursion probability is very small for various values of the natural period. In order to examine the effectiveness of the proposed method, the obtained results are compared with those obtained by the simulation method. Some estimation methods for the maximum response of the secondary system with nonlinear characteristics have been developed. (author)

  9. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays

    International Nuclear Information System (INIS)

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method to build a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.

  10. Coupled Land Surface-Subsurface Hydrogeophysical Inverse Modeling to Estimate Soil Organic Carbon Content in an Arctic Tundra

    Science.gov (United States)

    Tran, A. P.; Dafflon, B.; Hubbard, S.

    2017-12-01

    Soil organic carbon (SOC) is crucial for predicting carbon-climate feedbacks in the vulnerable organic-rich Arctic region. However, it is challenging to estimate this property because of the general limitations of conventional core sampling and analysis methods. In this study, we develop an inversion scheme that uses single or multiple datasets, including soil liquid water content, temperature and ERT data, to estimate the vertical profile of SOC content. Our approach relies on the fact that SOC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. The scheme has several advantages. First, this is the first time SOC content is estimated by using a coupled hydrogeophysical inversion. Second, by using the Community Land Model, we can account for the land surface dynamics (evapotranspiration, snow accumulation and melting) and ice/liquid phase transition. Third, we combine a deterministic and an adaptive Markov chain Monte Carlo optimization algorithm to better estimate the posterior distributions of the desired model parameters. Finally, the simulated subsurface variables are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using synthetic experiments. The results show that, compared to inversion of a single dataset, joint inversion of these datasets significantly reduces parameter uncertainty. The joint inversion approach is able to estimate SOC content within the shallow active layer with high reliability. Next, we apply the scheme to estimate SOC content along an intensive ERT transect in Barrow, Alaska using multiple datasets acquired in the 2013-2015 period. The preliminary results show a good agreement between modeled and measured soil temperature, thaw layer thickness and electrical resistivity. The accuracy of estimated SOC content
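
    As a drastically simplified, hedged sketch of the sampling component only (no Community Land Model, petrophysics, or ERT forward model), the code below runs a random-walk Metropolis sampler to recover one parameter of a toy depth-temperature forward model from noisy observations; the forward model, prior bounds and data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy forward model: temperature amplitude decaying with depth, controlled by one parameter.
depths = np.linspace(0.1, 1.0, 10)
def forward(alpha):
    return 5.0 * np.exp(-depths / alpha)

true_alpha, noise_sd = 0.4, 0.2
obs = forward(true_alpha) + noise_sd * rng.standard_normal(depths.size)

def log_posterior(alpha):
    if not (0.05 < alpha < 2.0):                  # uniform prior bounds
        return -np.inf
    resid = obs - forward(alpha)
    return -0.5 * np.sum(resid ** 2) / noise_sd ** 2

# Random-walk Metropolis sampler.
samples, alpha, lp = [], 1.0, log_posterior(1.0)
for _ in range(20000):
    prop = alpha + 0.05 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:       # accept with probability min(1, ratio)
        alpha, lp = prop, lp_prop
    samples.append(alpha)

post = np.array(samples[5000:])                    # discard burn-in
print(f"posterior mean = {post.mean():.3f}, 95% CI = "
      f"({np.percentile(post, 2.5):.3f}, {np.percentile(post, 97.5):.3f})")
```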

  11. Prediction of the maximum dosage to man from the fallout of nuclear devices V. Estimation of the maximum dose from internal emitters in aquatic food supply

    International Nuclear Information System (INIS)

    Tamplin, A.R.; Fisher, H.L.; Chapman, W.H.

    1968-01-01

    A method is described for estimating the maximum internal dose that could result from the radionuclides released to an aquatic environment. By means of this analysis one can identify the nuclides that could contribute most to the internal dose, and determine the contribution of each nuclide to the total dose. The calculations required to estimate the maximum dose to an infant's bone subsequent to the construction of a sea-level canal are presented to illustrate the overall method. The results are shown to serve the basic aims of preshot rad-safe analysis and of guidance for postshot documentation. The usefulness of the analysis in providing guidance for device design is further pointed out. (author)

  12. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    Energy Technology Data Exchange (ETDEWEB)

    Zanca, F., E-mail: Federica.Zanca@med.kuleuven.be [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium and Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven (Belgium); Jacobs, A. [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); Crijns, W. [Department of Radiotherapy, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); De Wever, W. [Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven, Belgium and Department of Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium)

    2014-07-15

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure.

  13. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    International Nuclear Information System (INIS)

    Zanca, F.; Jacobs, A.; Crijns, W.; De Wever, W.

    2014-01-01

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure

  14. Estimation of S-wave Velocity Structures by Using Microtremor Array Measurements for Subsurface Modeling in Jakarta

    Directory of Open Access Journals (Sweden)

    Mohamad Ridwan

    2014-12-01

    Full Text Available Jakarta is located on a thick sedimentary layer that potentially has a very high seismic wave amplification. However, the available information concerning the subsurface model and bedrock depth is insufficient for a seismic hazard analysis. In this study, a microtremor array method was applied to estimate the geometry and S-wave velocity of the sedimentary layer. The spatial autocorrelation (SPAC) method was applied to estimate the dispersion curve, while the S-wave velocity was estimated using a genetic algorithm approach. The analysis of the 1D and 2D S-wave velocity profiles shows that along a north-south line, the sedimentary layer is thicker towards the north. It has a positive correlation with a geological cross section derived from a borehole down to a depth of about 300 m. The SPT data from the BMKG site were used to verify the 1D S-wave velocity profile, and they show good agreement. The microtremor analysis reached the engineering bedrock at depths ranging from 359 to 608 m, as depicted by a cross section in the north-south direction. The site class was also estimated at each site based on the average S-wave velocity down to a depth of 30 m. The sites UI to ISTN belong to class D (medium soil), while BMKG and ANCL belong to class E (soft soil).
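
    A minimal sketch of the site-classification step is given below: the time-averaged S-wave velocity over the top 30 m (Vs30) is computed from a layered profile and mapped to a site class using the common 180 and 360 m/s class boundaries; the layer thicknesses and velocities are illustrative, not the Jakarta inversion results.

```python
import numpy as np

def vs30(thickness_m, vs_mps):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile."""
    h = np.asarray(thickness_m, dtype=float)
    v = np.asarray(vs_mps, dtype=float)
    remaining, travel_time = 30.0, 0.0
    for hi, vi in zip(h, v):
        use = min(hi, remaining)
        travel_time += use / vi
        remaining -= use
        if remaining <= 0:
            break
    if remaining > 0:                      # profile shallower than 30 m: extend last layer
        travel_time += remaining / v[-1]
    return 30.0 / travel_time

# Illustrative layered profile from a microtremor inversion (not the paper's values).
thickness = [4.0, 8.0, 12.0, 20.0]         # m
vs        = [140.0, 220.0, 320.0, 500.0]   # m/s

v30 = vs30(thickness, vs)
site_class = "E (soft soil)" if v30 < 180 else "D (medium soil)" if v30 < 360 else "C or stiffer"
print(f"Vs30 = {v30:.0f} m/s -> site class {site_class}")
```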

  15. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  16. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    Science.gov (United States)

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for definition of an interesting penalized maximum probabilistic rand estimator with which the fusion of simple, quickly estimated, segmentation results appears as an interesting alternative to complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

  17. Estimation of subsurface formation temperature in the Tarim Basin, northwest China: implications for hydrocarbon generation and preservation

    Science.gov (United States)

    Liu, Shaowen; Lei, Xiao; Feng, Changge; Hao, Chunyan

    2016-07-01

    Subsurface formation temperature in the Tarim Basin, northwest China, is vital for assessment of hydrocarbon generation and preservation and of geothermal energy potential. However, it has not previously been well understood, due to poor data coverage and a lack of highly accurate temperature data. Here, we combined recently acquired steady-state temperature logging data with drill stem test temperature data and measured rock thermal properties to investigate the geothermal regime and estimate the subsurface formation temperature at depths in the range of 1000-5000 m, together with temperatures at the lower boundary of each of four major Lower Paleozoic marine source rocks buried in this basin. Results show that heat flow of the Tarim Basin ranges between 26.2 and 66.1 mW/m², with a mean of 42.5 ± 7.6 mW/m²; the geothermal gradient at a depth of 3000 m varies from 14.9 to 30.2 °C/km, with a mean of 20.7 ± 2.9 °C/km. Formation temperature estimated at a depth of 1000 m is between 29 and 41 °C, with a mean of 35 °C, while the temperature at a depth of 3000 m is 63-100 °C, with a mean of 82 °C. Temperature at 5000 m ranges from 97 to 160 °C, with a mean of 129 °C. Generally, the spatial patterns of the subsurface formation temperature at depth are similar, characterized by higher temperatures in the uplift areas and lower temperatures in the sags, which indicates the influence of basement structure and lateral variations in thermal properties on the geotemperature field. Using temperature to identify the oil window in the source rocks, most of the uplifted areas in the basin are under favorable conditions for oil generation and/or preservation, whereas the sags with thick sediments are favorable for gas generation and/or preservation. We conclude that the relatively low present-day geothermal regime and large burial depth of the source rocks in the Tarim Basin are favorable for hydrocarbon generation and preservation. In addition, it is found that the
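
    For orientation, the kind of depth-temperature estimate reported here follows from a simple one-dimensional conductive relation, sketched below with illustrative values (the surface temperature, heat flow and conductivity are assumptions, not the paper's data): the gradient is heat flow divided by thermal conductivity, and temperature is extrapolated linearly with depth.

```python
# Simple 1-D conductive estimate of formation temperature versus depth.
surface_temp_c = 15.0          # assumed mean annual surface temperature, °C
heat_flow_mw_m2 = 42.5         # assumed heat flow, mW/m²
conductivity_w_mk = 2.1        # assumed bulk thermal conductivity, W/(m·K)

# (mW/m²) / (W/(m·K)) = mK/m, which is numerically °C per km.
gradient_c_per_km = heat_flow_mw_m2 / conductivity_w_mk

for depth_m in (1000, 3000, 5000):
    temp = surface_temp_c + gradient_c_per_km * depth_m / 1000.0
    print(f"z = {depth_m:4d} m  ->  T ≈ {temp:5.1f} °C "
          f"(gradient {gradient_c_per_km:.1f} °C/km)")
```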

  18. Simultaneous State and Parameter Estimation Using Maximum Relative Entropy with Nonhomogenous Differential Equation Constraints

    Directory of Open Access Journals (Sweden)

    Adom Giffin

    2014-09-01

    Full Text Available In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters and has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want to not only filter the noise from our measurements, but we also want to simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although there were many assumptions made throughout the paper to illustrate that EKF and exponential smoothing are special cases of MrE, we are not “constrained” by these assumptions. In other words, MrE is completely general and can be used in broader ways.

  19. [Estimation of maximum acceptable concentration of lead and cadmium in plants and their medicinal preparations].

    Science.gov (United States)

    Zitkevicius, Virgilijus; Savickiene, Nijole; Abdrachmanovas, Olegas; Ryselis, Stanislovas; Masteiková, Rūta; Chalupova, Zuzana; Dagilyte, Audrone; Baranauskas, Algirdas

    2003-01-01

    Heavy metals (lead, cadmium) are possible impurities whose quantity is restricted by maximum acceptable limits. Various drug preparations (infusions, decoctions, tinctures, extracts, etc.) are produced from medicinal plants. The objective of this research was to study heavy metal (lead, cadmium) impurities in medicinal plants and some drug preparations. We investigated liquid extracts of the fruits of Crataegus monogyna Jacq. and the herb of Echinacea purpurea Moench., and tinctures of the herb of Leonurus cardiaca L. The raw materials were imported from Poland. Investigations were carried out in cooperation with the Laboratory of Anthropogenic Factors of the Institute for Biomedical Research. Amounts of lead and cadmium were established after "dry" mineralisation using a "Perkin-Elmer Zeeman/3030" electrothermal atomic absorption spectrophotometer (ETG AAS/Zeeman). After estimating the absorption capacity of cellular fibers, it was established that lead is absorbed most efficiently: about 10.73% of lead passes into tinctures and extracts, whereas cadmium passes more readily, at 49.63%. The herb of Leonurus cardiaca L. is the best at holding back lead and cadmium; about 14.5% of lead and cadmium passes into the tincture of Leonurus cardiaca L. herb. After measuring heavy metals (lead, cadmium) in the drugs and their preparations, we estimated the extraction factors of heavy metals (lead, cadmium) for the liquid extracts of Crataegus monogyna Jacq. and Echinacea purpurea Moench. and the tincture of Leonurus cardiaca L. Taking into account the estimated lead and cadmium extraction factors, the maximum acceptable daily intake, and the daily quantity of drug consumption, the amounts of heavy metals (lead, cadmium) do not exceed the allowable norms in the fruits of Crataegus monogyna Jacq. and the herbs of Leonurus cardiaca L. and Echinacea purpurea Moench.

  20. Impacts of Land Cover and Seasonal Variation on Maximum Air Temperature Estimation Using MODIS Imagery

    Directory of Open Access Journals (Sweden)

    Yulin Cai

    2017-03-01

    Full Text Available Daily maximum surface air temperature (Tamax) is a crucial factor for understanding complex land surface processes under rapid climate change. Remote detection of Tamax has widely relied on the empirical relationship between air temperature and land surface temperature (LST), a product derived from remote sensing. However, little is known about how such a relationship is affected by the high heterogeneity in landscapes and dynamics in seasonality. This study aims to advance our understanding of the roles of land cover and seasonal variation in the estimation of Tamax using the MODIS (Moderate Resolution Imaging Spectroradiometer) LST product. We developed statistical models to link Tamax and LST in the middle and lower reaches of the Yangtze River in China for six major land-cover types (i.e., forest, shrub, water, impervious surface, cropland, and grassland) and two seasons (i.e., growing season and non-growing season). Results show that the performance of modeling the Tamax-LST relationship was highly dependent on land cover and seasonal variation. Estimating Tamax over grasslands and water bodies achieved superior performance, while uncertainties were high over forested lands that contained extensive heterogeneity in species types, plant structure, and topography. We further found that all the land-cover specific models developed for the plant non-growing season outperformed the corresponding models developed for the growing season. Discrepancies in model performance mainly occurred in the vegetated areas (forest, cropland, and shrub), suggesting an important role of plant phenology in defining the statistical relationship between Tamax and LST. For impervious surfaces, the challenge of capturing the high spatial heterogeneity in urban settings using the low-resolution MODIS data made Tamax estimation a difficult task, which was especially true in the growing season.
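
    A hedged sketch of the modelling idea follows: one linear Tamax-LST regression per land-cover and season stratum, compared by goodness of fit. The data, the assumed slope and intercept, and the per-stratum noise levels are synthetic; this is not the paper's code or dataset.

```python
# Hedged sketch of stratified Tamax ~ LST regression on synthetic data.
import numpy as np

def fit_linear(lst, tamax):
    """Fit Tamax = a*LST + b and return slope, intercept, and R^2."""
    slope, intercept = np.polyfit(lst, tamax, 1)
    pred = slope * lst + intercept
    ss_res = np.sum((tamax - pred) ** 2)
    ss_tot = np.sum((tamax - tamax.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(42)
strata = {
    ("grassland", "non-growing"): 0.5,   # assumed noise level (degC)
    ("forest", "growing"): 2.0,
}
for (cover, season), noise in strata.items():
    lst = rng.uniform(5, 45, 200)                        # stand-in MODIS LST, degC
    tamax = 0.8 * lst + 3.0 + rng.normal(0, noise, 200)  # synthetic "truth"
    slope, intercept, r2 = fit_linear(lst, tamax)
    print(f"{cover:10s} {season:12s} slope={slope:.2f} R2={r2:.3f}")
```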

  1. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
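
    The sketch below shows the core idea on a toy decay histogram: minimizing the Poisson negative log-likelihood rather than a least-squares residual. A general-purpose SciPy minimizer stands in for the modified Levenberg-Marquardt iteration described in the paper, and the model and counts are synthetic.

```python
# Hedged sketch of Poisson MLE fitting of an event-counting histogram.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(0, 25.0, 0.5)                  # histogram bin centres (e.g. ns)
true_amp, true_tau = 40.0, 3.0
counts = rng.poisson(true_amp * np.exp(-t / true_tau))

def neg_log_likelihood(params):
    amp, tau = params
    if amp <= 0 or tau <= 0:                 # keep the search in the valid region
        return np.inf
    mu = np.clip(amp * np.exp(-t / tau), 1e-12, None)   # expected counts per bin
    return np.sum(mu - counts * np.log(mu))  # Poisson NLL up to a constant

res = minimize(neg_log_likelihood, x0=[10.0, 1.0], method="Nelder-Mead")
print("MLE amplitude, lifetime:", res.x)
```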

  2. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  3. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  4. Estimating uncertainty in subsurface glider position using transmissions from fixed acoustic tomography sources.

    Science.gov (United States)

    Van Uffelen, Lora J; Nosal, Eva-Marie; Howe, Bruce M; Carter, Glenn S; Worcester, Peter F; Dzieciuch, Matthew A; Heaney, Kevin D; Campbell, Richard L; Cross, Patrick S

    2013-10-01

    Four acoustic Seagliders were deployed in the Philippine Sea November 2010 to April 2011 in the vicinity of an acoustic tomography array. The gliders recorded over 2000 broadband transmissions at ranges up to 700 km from moored acoustic sources as they transited between mooring sites. The precision of glider positioning at the time of acoustic reception is important to resolve the fundamental ambiguity between position and sound speed. The Seagliders utilized GPS at the surface and a kinematic model below for positioning. The gliders were typically underwater for about 6.4 h, diving to depths of 1000 m and traveling on average 3.6 km during a dive. Measured acoustic arrival peaks were unambiguously associated with predicted ray arrivals. Statistics of travel-time offsets between received arrivals and acoustic predictions were used to estimate range uncertainty. Range (travel time) uncertainty between the source and the glider position from the kinematic model is estimated to be 639 m (426 ms) rms. Least-squares solutions for glider position estimated from acoustically derived ranges from 5 sources differed by 914 m rms from modeled positions, with estimated uncertainty of 106 m rms in horizontal position. Error analysis included 70 ms rms of uncertainty due to oceanic sound-speed variability.
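
    The final step, a least-squares position fix from acoustically derived ranges, can be sketched as follows. The source geometry, range noise, and starting guess are invented for the example and do not reproduce the study's processing chain.

```python
# Illustrative sketch: least-squares position from ranges to fixed sources,
# analogous to converting travel times to ranges with an assumed sound speed.
import numpy as np
from scipy.optimize import least_squares

sources = np.array([[0.0, 0.0], [50e3, 0.0], [0.0, 50e3], [60e3, 60e3], [25e3, -40e3]])
true_pos = np.array([20e3, 15e3])
rng = np.random.default_rng(7)
ranges = np.linalg.norm(sources - true_pos, axis=1) + rng.normal(0, 600.0, len(sources))

def residuals(pos):
    """Difference between modelled and measured ranges at a trial position."""
    return np.linalg.norm(sources - pos, axis=1) - ranges

fit = least_squares(residuals, x0=np.array([0.0, 0.0]))
print("estimated position (m):", fit.x)
print("position error (m):", np.linalg.norm(fit.x - true_pos))
```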

  5. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    Full Text Available The use of the maximum length sequence (m-sequence) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial, which can be of different orders, the selection of the polynomial order can be problematic in practice. Usually, the m-sequence is repetitively delivered in a looped fashion. Ensemble averaging is carried out as the first step and followed by the cross-correlation analysis to deconvolve linear/nonlinear responses. According to the classical noise reduction property based on the additive noise model, theoretical equations have been derived in the present study for measuring noise attenuation ratios (NARs) after the averaging and correlation processes. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order 7 and 9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is decided by the total length of valid data, as well as the stimulation rate. The present study offers a guideline for m-sequence selection, which can be used to estimate the required recording time and signal-to-noise ratio in designing m-sequence experiments.

  6. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-01-01

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions-of-interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome.

  7. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water. The suspended particles and dissolved chemical compounds in water are also responsible for scattering and absorption of light in water. The limited visibility in water results in degradation of underwater images. The visibility can be increased by using an artificial light source in an underwater imaging system. But the artificial light illuminates the scene in a nonuniform fashion. It produces a bright spot at the center with a dark region at the surroundings. In some cases the imaging system itself creates a dark region in the image by producing a shadow on the objects. The problem of nonuniform illumination is neglected in most of the image enhancement techniques for underwater images. Also, very few methods are discussed that show results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed. This paper uses maximum likelihood estimation of the scale parameter to map the distribution of the image to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics like average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
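
    The statistical step underlying the method, maximum likelihood estimation of the Rayleigh scale parameter, reduces to a closed form. The sketch below shows only that step on synthetic data, not the full illumination-correction pipeline.

```python
# Minimal sketch: MLE of the Rayleigh scale parameter from pixel intensities.
import numpy as np

def rayleigh_scale_mle(samples):
    """MLE of sigma for a Rayleigh distribution: sqrt(mean(x^2) / 2)."""
    samples = np.asarray(samples, dtype=float)
    return np.sqrt(np.mean(samples ** 2) / 2.0)

rng = np.random.default_rng(3)
pixels = rng.rayleigh(scale=0.3, size=10_000)   # stand-in for image intensities
print("estimated sigma:", rayleigh_scale_mle(pixels))   # should be close to 0.3
```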

  8. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  9. Method for retrospective estimation of absorbed dose in subsurface tissues when conducting works connected with the Chernobyl' NPP accident effect elimination (using experimental and calculated data)

    International Nuclear Information System (INIS)

    Panova, V.I.; Shaks, A.I.

    1992-01-01

    A method is discussed for the retrospective estimation of doses in subsurface tissues at early time periods after the onset of the accident, for the case in which the gamma radiation dose rate values (radiation field cartogram) and a person's irradiation conditions on the contaminated territory (professional route) are known.

  10. Simplified Methodology to Estimate the Maximum Liquid Helium (LHe) Cryostat Pressure from a Vacuum Jacket Failure

    Science.gov (United States)

    Ungar, Eugene K.; Richards, W. Lance

    2015-01-01

    The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact

  11. Beyond QALYs: Multi-criteria based estimation of maximum willingness to pay for health technologies.

    Science.gov (United States)

    Nord, Erik

    2018-03-01

    The QALY is a useful outcome measure in cost-effectiveness analysis. But in determining the overall value of and societal willingness to pay for health technologies, gains in quality of life and length of life are prima facie separate criteria that need not be put together in a single concept. A focus on costs per QALY can also be counterproductive. One reason is that the QALY does not capture well the value of interventions in patients with reduced potentials for health and thus different reference points. Another reason is a need to separate losses of length of life and losses of quality of life when it comes to judging the strength of moral claims on resources in patients of different ages. An alternative to the cost-per-QALY approach is outlined, consisting in the development of two bivariate value tables that may be used in combination to estimate maximum cost acceptance for given units of treatment-for instance a surgical procedure, or 1 year of medication-rather than for 'obtaining one QALY.' The approach is a follow-up of earlier work on 'cost value analysis.' It draws on work in the QALY field insofar as it uses health state values established in that field. But it does not use these values to weight life years and thus avoids devaluing gained life years in people with chronic illness or disability. Real tables of the kind proposed could be developed in deliberative processes among policy makers and serve as guidance for decision makers involved in health technology assessment and appraisal.

  12. Estimation of subsurface formation temperature in the Yangtze area, South China: implications for shale gas generation and preservation

    Science.gov (United States)

    Liu, S.; Hao, C.; Li, X.; Xu, M.

    2015-12-01

    Temperature is a key parameter for hydrocarbon generation and preservation, and it also plays an important role in geothermal energy assessment; however, obtaining an accurate regional temperature pattern is still challenging, owing to limited data coverage and data quality. The Yangtze area, located in South China, is considered the most favorable target for shale gas exploration in China and has attracted increasing attention recently. Here we used newly acquired steady-state temperature loggings, the reliable drill stem test temperature data available, and thermal properties to estimate the subsurface temperature-at-depth for the Yangtze area. Results show that the geothermal gradient ranges between 17 and 74 °C/km, mostly falling within 20-30 °C/km, with a mean of 24 °C/km; heat flow varies from 25 mW/m2 to 92 mW/m2, with a mean of 65 mW/m2. The estimated temperature-at-depth is about 20-50 °C at a depth of 1000 m and 50-80 °C at 2000 m, while the highest temperature can reach 110 °C at a depth of 3000 m. Generally, the present-day geothermal regime of the Yangtze area is characterized by high values in the northeast, low values in the middle, and localized high values again in the southwest, and this pattern is consistent with the tectono-thermal processes that occurred in the area. Owing to Cenozoic crustal extension in the northeastern Yangtze area, magmatism prevailed there, accounting for the high heat flow observed. A Precambrian basement exists in the middle Yangtze area, such as the Xuefeng and Wuling Mountains, so heat flow and subsurface temperature there are accordingly relatively low. In the southwestern Yangtze area, especially Yunnan and western Sichuan provinces, localized Cenozoic magmatism and tectonic activity are present, which accounts for the high geothermal regime there. Considering the intensive Paleozoic tectonic deformation in the Yangtze area, a tectonically stable area is a prerequisite for shale gas preservation. Geothermal regime analysis

  13. Revisiting maximum-a-posteriori estimation in log-concave models: from differential geometry to decision theory

    OpenAIRE

    Pereyra, Marcelo

    2016-01-01

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...

  14. Estimating Rhododendron maximum L. (Ericaceae) Canopy Cover Using GPS/GIS Technology

    Science.gov (United States)

    Tyler J. Tran; Katherine J. Elliott

    2012-01-01

    In the southern Appalachians, Rhododendron maximum L. (Ericaceae) is a key evergreen understory species, often forming a subcanopy in forest stands. Little is known about the significance of R. maximum cover in relation to other forest structural variables. Only recently have studies used Global Positioning System (GPS) technology...

  15. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESM) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations in ET imply greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across the heterogeneous landscapes at global scale. We use a Budyko framework to express ET as a function of P and PET to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate it into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for existing free drainage lower boundary condition in GLEAM and quantify whether, and how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at global scale.
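
    The averaging argument can be illustrated with a Budyko-type curve, as in the sketch below. The Choudhury form with n = 2 is just one common choice and is not necessarily the closure relation derived in the study; the sub-grid P and PET fields are synthetic, with the inverse P-PET correlation the abstract highlights.

```python
# Hedged illustration: ET from grid-mean forcing vs. mean of sub-grid ET.
import numpy as np

def budyko_et(p, pet, n=2.0):
    """Budyko-type actual ET (Choudhury form): ET = P*PET / (P^n + PET^n)^(1/n)."""
    return p * pet / (p ** n + pet ** n) ** (1.0 / n)

rng = np.random.default_rng(0)
# Sub-grid cells of a heterogeneous grid box: P and PET roughly anti-correlated.
p_sub = rng.uniform(300, 2000, 1000)                 # mm/yr
pet_sub = 2300 - p_sub + rng.normal(0, 100, 1000)    # mm/yr

et_of_means = budyko_et(p_sub.mean(), pet_sub.mean())
mean_of_ets = budyko_et(p_sub, pet_sub).mean()
print(f"ET from grid means      : {et_of_means:6.1f} mm/yr")
print(f"mean of sub-grid ET     : {mean_of_ets:6.1f} mm/yr")
print(f"overestimate from means : {et_of_means - mean_of_ets:6.1f} mm/yr")
```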

  16. Low-rank Kalman filtering for efficient state estimation of subsurface advective contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2012-04-01

    Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied on a linear contaminant transport model in the same porous medium. Because of different sources of uncertainties, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear invariants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. Low-rank filters are demonstrated to significantly reduce the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.

  17. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    Full Text Available In this paper, the continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain least-biased probability density function (Pdf) estimation. In this paper, to find a closed form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as the optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of New England and Ontario electricity markets. Obtained results show the efficiency of the proposed method.

  18. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  19. Extended Kalman Filtering to estimate temperature and irradiation for maximum power point tracking of a photovoltaic module

    International Nuclear Information System (INIS)

    Docimo, D.J.; Ghanaatpishe, M.; Mamun, A.

    2017-01-01

    This paper develops an algorithm for estimating photovoltaic (PV) module temperature and effective irradiation level. The power output of a PV system depends directly on both of these states. Estimating the temperature and irradiation allows for improved state-based control methods while eliminating the need of additional sensors. Thermal models and irradiation estimators have been developed in the literature, but none incorporate feedback for estimation. This paper outlines an Extended Kalman Filter for temperature and irradiation estimation. These estimates are, in turn, used within a novel state-based controller that tracks the maximum power point of the PV system. Simulation results indicate this state-based controller provides up to an 8.5% increase in energy produced per day as compared to an impedance matching controller. A sensitivity analysis is provided to examine the impact state estimate errors have on the ability to find the optimal operating point of the PV system. - Highlights: • Developed a temperature and irradiation estimator for photovoltaic systems. • Designed an Extended Kalman Filter to handle model and measurement uncertainty. • Developed a state-based controller for maximum power point tracking (MPPT). • Validated combined estimator/controller algorithm for different weather conditions. • Algorithm increases energy captured up to 8.5% over traditional MPPT algorithms.

  20. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, for it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.

  1. Comparing fishers' and scientific estimates of size at maturity and maximum body size as indicators for overfishing.

    Science.gov (United States)

    Mclean, Elizabeth L; Forrester, Graham E

    2018-04-01

    We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species) scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more

  2. Estimation of Maximum Allowable PV Connection to LV Residential Power Networks

    DEFF Research Database (Denmark)

    Demirok, Erhan; Sera, Dezso; Teodorescu, Remus

    2011-01-01

    Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either thermal limits of network components or grid voltage quality resulted from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar...... potential of geographic area due to power network limitations even though all rooftops are fully occupied with PV modules. Therefore, it becomes more of an issue to know what exactly limits higher PV penetration level and which solutions should be engaged efficiently such as over sizing distribution...

  3. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  4. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  5. Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper

    2016-01-01

    of the estimator is in speech enhancement algorithms, such as the Multi-channel Wiener Filter (MWF) and the Minimum Variance Distortionless Response (MVDR) beamformer. We evaluate these two algorithms in a speech dereverberation task and compare the performance obtained using the proposed and a competing PSD...... estimator. Instrumental performance measures indicate an advantage of the proposed estimator over the competing one. In a speech intelligibility test all algorithms significantly improved the word intelligibility score. While the results suggest a minor advantage of using the proposed PSD estimator...

  6. Application of seismic interferometry to an exploration of subsurface structure by using microtremors. Estimation of deep ground structures in the Wakasa bay region

    International Nuclear Information System (INIS)

    Sato, Hiroaki; Kuriyama, Masayuki; Higashi, Sadanori; Shiba, Yoshiaki; Okazaki, Atsushi

    2015-01-01

    We carried out continuous measurements of microtremors to synthesize Green's functions based on seismic interferometry in order to estimate deep subsurface structures of the Ohshima peninsula (OSM) and the Otomi peninsula (OTM) in the Wakasa bay region. Using more than 80 days of data, dispersive waveforms in the cross correlations were identified as Green's functions based on seismic interferometry. Rayleigh-wave phase velocities at OSM and OTM were estimated by two different methods using microtremors: first, by analyzing microtremor array data, and second, by applying f-k spectral analysis to synthesized Green's functions derived from cross-correlation with a common observation station. Phase velocities at relatively longer periods were estimated by the f-k spectral analysis using the synthesized Green's functions with a common observation station. This suggests that the synthesized Green's functions from seismic interferometry can provide valuable data for phase velocity inversion to estimate a deep subsurface structure. By identifying the deep subsurface structures at OSM and OTM through an inversion of the phase velocities from both methods, the depth to an S-wave velocity of about 3.5 km/s, considered as the top of the seismogenic layer, was determined to be 3.8 - 4.0 km at OSM and 4.4 - 4.6 km at OTM, respectively. Love- and Rayleigh-wave group velocities were estimated from multiple filtering analysis of the synthesized Green's functions. From the comparison of observed surface wave group velocities and theoretical group velocities at OSM and OTM, we demonstrated that the observed group velocities were in good agreement with the average of the theoretical group velocities calculated from the identified deep subsurface structures at OSM and OTM. It is suggested that the deep subsurface structure of the shallow sea region between the two peninsulas is continuous from OSM to OTM and that Love- and Rayleigh-wave group velocities using

  7. Wetland methane emissions during the Last Glacial Maximum estimated from PMIP2 simulations: climate, vegetation and geographic controls

    NARCIS (Netherlands)

    Weber, S.L.; Drury, A.J.; Toonen, W.H.J.; Weele, M. van

    2010-01-01

    It is an open question to what extent wetlands contributed to the interglacial‐glacial decrease in atmospheric methane concentration. Here we estimate methane emissions from glacial wetlands, using newly available PMIP2 simulations of the Last Glacial Maximum (LGM) climate from coupled

  8. GIS-based maps and area estimates of Northern Hemisphere permafrost extent during the Last Glacial Maximum

    NARCIS (Netherlands)

    Lindgren, A.; Hugelius, G.; Kuhry, P.; Christensen, T.R.; Vandenberghe, J.F.

    2016-01-01

    This study presents GIS-based estimates of permafrost extent in the northern circumpolar region during the Last Glacial Maximum (LGM), based on a review of previously published maps and compilations of field evidence in the form of ice-wedge pseudomorphs and relict sand wedges. We focus on field

  9. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1991-01-01

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
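
    For orientation, the textbook two-way version of iterative proportional fitting, the kind of algorithm the abstract says is modified to work on minimal sufficient statistics, looks like the sketch below; the margins are made up and this is not the paper's implementation.

```python
# Toy sketch of iterative proportional fitting (IPF) for a two-way table.
import numpy as np

def ipf_two_way(row_totals, col_totals, n_iter=100):
    """Fit cell estimates whose row and column sums match the observed margins."""
    table = np.ones((len(row_totals), len(col_totals)))
    for _ in range(n_iter):
        table *= (row_totals / table.sum(axis=1))[:, None]   # match row margins
        table *= (col_totals / table.sum(axis=0))[None, :]   # match column margins
    return table

fitted = ipf_two_way(np.array([30.0, 70.0]), np.array([40.0, 40.0, 20.0]))
print(fitted)
print("row sums:", fitted.sum(axis=1), "col sums:", fitted.sum(axis=0))
```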

  10. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1992-01-01

    In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual

  11. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  12. Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech

    2015-01-01

    to overcome the above-mentioned issues. The high speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear mesh frequency of the last gearbox stage and a pure sine tone of known frequency and phase. The proposed algorithm utilizes vibration signals...
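
    A simplified sketch of the idea: scan candidate shaft speeds, correlate the vibration signal with reference tones at the implied gear-mesh frequency, and keep the candidate with the maximum correlation. The signal, tooth count, and frequency range below are invented, and the published algorithm is not reproduced here.

```python
# Simplified speed estimation by maximum correlation against reference tones.
import numpy as np

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
teeth = 23                                   # assumed tooth count of the last stage
true_shaft_hz = 25.4
rng = np.random.default_rng(5)
signal = np.sin(2 * np.pi * teeth * true_shaft_hz * t) + 0.8 * rng.normal(size=t.size)

candidates = np.arange(20.0, 30.0, 0.01)     # candidate shaft speeds (Hz)
scores = []
for shaft_hz in candidates:
    f_mesh = teeth * shaft_hz
    c = np.corrcoef(signal, np.cos(2 * np.pi * f_mesh * t))[0, 1]
    s = np.corrcoef(signal, np.sin(2 * np.pi * f_mesh * t))[0, 1]
    scores.append(np.hypot(c, s))            # phase-insensitive correlation
print("estimated shaft speed (Hz):", candidates[np.argmax(scores)])
```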

  13. Nitrogen Removal in a Horizontal Subsurface Flow Constructed Wetland Estimated Using the First-Order Kinetic Model

    Directory of Open Access Journals (Sweden)

    Lijuan Cui

    2016-11-01

    Full Text Available We monitored the water quality and hydrological conditions of a horizontal subsurface constructed wetland (HSSF-CW) in Beijing, China, for two years. We simulated the area-based constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N) removal rate, N load, seasonal variations in the N removal rate, and environmental factors, such as the area-based constant, temperature, and dissolved oxygen (DO). The effluent ammonia (NH4+-N) and nitrate (NO3−-N) concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38). The NO3−-N load was significantly correlated with the removal rate (R2 = 0.96, p < 0.01), but the NH4+-N load was not correlated with the removal rate (R2 = 0.02, p > 0.01). The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD) and 14 ± 10 m∙year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01). The NO3−-N area-based constant was correlated with the corresponding load (R2 = 0.96, p < 0.01). The NH4+-N area rate was correlated with DO (R2 = 0.69, p < 0.01), suggesting that the factors that influenced the N removal rate in this wetland met Liebig’s law of the minimum.
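
    The first-order areal rate-constant arithmetic referenced above can be sketched as follows. The k20 and temperature-coefficient values are taken from the abstract, but the hydraulic loading rate and influent concentration are invented, and this is not the authors' fitting code.

```python
# Hedged sketch of a first-order areal (k-C*-style) calculation for a wetland.
import numpy as np

def rate_constant(k20_m_per_yr, theta, temp_c):
    """Temperature correction: k_T = k20 * theta**(T - 20)."""
    return k20_m_per_yr * theta ** (temp_c - 20.0)

def effluent_conc(c_in, k_m_per_yr, hydraulic_loading_m_per_yr):
    """First-order areal model: C_out = C_in * exp(-k / q)."""
    return c_in * np.exp(-k_m_per_yr / hydraulic_loading_m_per_yr)

k_no3 = rate_constant(k20_m_per_yr=27.0, theta=1.004, temp_c=12.0)  # values from the abstract
c_out = effluent_conc(c_in=8.0, k_m_per_yr=k_no3, hydraulic_loading_m_per_yr=30.0)
print(f"k at 12 degC: {k_no3:.1f} m/yr, predicted effluent NO3-N: {c_out:.2f} mg/L")
```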

  14. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere–ocean–wave model

    Directory of Open Access Journals (Sweden)

    K. R. Prakash

    2018-04-01

    Full Text Available A coupled atmosphere–ocean–wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10–14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere–ocean–wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere–ocean–wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave–current interaction and nonlinear wave–wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.

  15. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere-ocean-wave model

    Science.gov (United States)

    Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh

    2018-04-01

    A coupled atmosphere-ocean-wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.

  16. Estimate of annual daily maximum rainfall and intense rain equation for the Formiga municipality, MG, Brazil

    Directory of Open Access Journals (Sweden)

    Giovana Mara Rodrigues Borges

    2016-11-01

    Full Text Available Knowledge of the probabilistic behavior of rainfall is extremely important to the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual daily maximum rainfall as well as models of heavy rain for the city of Formiga - MG. To do this, annual maximum daily rainfall data were ranked in decreasing order and assigned exceedance probabilities in order to identify the statistical distribution that best describes them. A daily rainfall disaggregation methodology was used for the intense rain model studies, and the results were adjusted with Intensity-Duration-Frequency (IDF) and exponential models. The study found that the Gumbel model adhered best to the data in terms of observed frequency, as indicated by the Chi-squared test, and that the exponential model best conforms to the observed data for predicting intense rains.
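
    A hedged sketch of the distribution-fitting step: fit a Gumbel model to an annual maximum series and read off rainfall depths for chosen return periods. The rainfall series below is synthetic, not the Formiga record.

```python
# Illustrative Gumbel fit to synthetic annual maximum daily rainfall.
from scipy import stats

annual_max_mm = stats.gumbel_r.rvs(loc=70, scale=20, size=40, random_state=42)

loc, scale = stats.gumbel_r.fit(annual_max_mm)
for return_period in (2, 10, 50, 100):
    p_non_exceed = 1.0 - 1.0 / return_period
    x_t = stats.gumbel_r.ppf(p_non_exceed, loc=loc, scale=scale)
    print(f"T = {return_period:3d} yr : {x_t:6.1f} mm/day")
```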

  17. Estimation of Financial Agent-Based Models with Simulated Maximum Likelihood

    Czech Academy of Sciences Publication Activity Database

    Kukačka, Jiří; Baruník, Jozef

    2017-01-01

    Roč. 85, č. 1 (2017), s. 21-45 ISSN 0165-1889 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: heterogeneous agent model * simulated maximum likelihood * switching Subject RIV: AH - Economics OBOR OECD: Finance Impact factor: 1.000, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kukacka-0478481.pdf

  18. Monodimensional estimation of maximum Reynolds shear stress in the downstream flow field of bileaflet valves.

    Science.gov (United States)

    Grigioni, Mauro; Daniele, Carla; D'Avenio, Giuseppe; Barbaro, Vincenzo

    2002-05-01

    Turbulent flow generated by prosthetic devices at the bloodstream level may cause mechanical stress on blood particles. Measurement of the Reynolds stress tensor and/or some of its components is a mandatory step to evaluate the mechanical load on blood components exerted by fluid stresses, as well as possible consequent blood damage (hemolysis or platelet activation). Because of the three-dimensional nature of turbulence, in general, a three-component anemometer should be used to measure all components of the Reynolds stress tensor, but this is difficult, especially in vivo. The present study aimed to derive the maximum Reynolds shear stress (RSS) in three widely used, commercially available prosthetic heart valves (PHVs), starting with monodimensional data provided in vivo by echo Doppler. Accurate measurement of the PHV flow field was made using laser Doppler anemometry; this provided the principal turbulence quantities (mean velocity, root-mean-square value of velocity fluctuations, average value of cross-product of velocity fluctuations in orthogonal directions) needed to quantify the maximum turbulence-related shear stress. The recorded data enabled determination of the Reynolds stresses ratio (RSR), the relationship between the maximum RSS and the Reynolds normal stress in the main flow direction. The RSR was found to be dependent upon the local structure of the flow field. The reported RSR profiles, which permit a simple calculation of maximum RSS, may prove valuable during the post-implantation phase, when an assessment of valve function is made echocardiographically. Hence, the risk of damage to blood constituents associated with bileaflet valve implantation may be accurately quantified in vivo.

  19. Pilot power optimization for AF relaying using maximum likelihood channel estimation

    KAUST Repository

    Wang, Kezhi

    2014-09-01

    Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. andcopy; 2014 IEEE.

  20. Least squares autoregressive (maximum entropy) spectral estimation for Fourier spectroscopy and its application to the electron cyclotron emission from plasma

    International Nuclear Information System (INIS)

    Iwama, N.; Inoue, A.; Tsukishima, T.; Sato, M.; Kawahata, K.

    1981-07-01

    A new procedure for the maximum entropy spectral estimation is studied for the purpose of data processing in Fourier transform spectroscopy. The autoregressive model fitting is examined under a least squares criterion based on the Yule-Walker equations. An AIC-like criterion is suggested for selecting the model order. The principal advantage of the new procedure lies in the enhanced frequency resolution particularly for small values of the maximum optical path-difference of the interferogram. The usefulness of the procedure is ascertained by some numerical simulations and further by experiments with respect to a highly coherent submillimeter wave and the electron cyclotron emission from a stellarator plasma. (author)
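
    For orientation, the sketch below uses the standard autocorrelation (Yule-Walker) route to an autoregressive spectrum rather than the least-squares variant proposed in the paper; the two-tone test signal and the model order are arbitrary choices.

```python
# Sketch of autoregressive (maximum entropy) spectral estimation via Yule-Walker.
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_ar(x, order):
    """Estimate AR coefficients and prediction-error power from autocorrelations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
    a = solve_toeplitz(r[:order], r[1 : order + 1])     # solve R a = r
    sigma2 = r[0] - np.dot(a, r[1 : order + 1])
    return a, sigma2

def ar_spectrum(a, sigma2, freqs):
    """AR power spectral density: sigma^2 / |1 - sum_k a_k e^{-j2 pi f k}|^2."""
    k = np.arange(1, a.size + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return sigma2 / denom

rng = np.random.default_rng(2)
n = 512
t = np.arange(n)
x = np.sin(2 * np.pi * 0.12 * t) + 0.7 * np.sin(2 * np.pi * 0.31 * t) + 0.5 * rng.normal(size=n)
a, sigma2 = yule_walker_ar(x, order=12)
freqs = np.linspace(0, 0.5, 1000)
psd = ar_spectrum(a, sigma2, freqs)
print("dominant spectral peak near:", freqs[np.argmax(psd)])
```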

  1. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
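
    The Fisher r → Z construction mentioned above is short enough to sketch directly; the sample correlation and sample size in the example are arbitrary, and this is not the article's simulation code.

```python
# Confidence interval for a Pearson correlation via the Fisher z-transform.
import numpy as np
from scipy import stats

def correlation_ci(r, n, confidence=0.95):
    """Approximate confidence interval for a sample correlation coefficient."""
    z = np.arctanh(r)                          # Fisher transform
    se = 1.0 / np.sqrt(n - 3)                  # approximate standard error of z
    z_crit = stats.norm.ppf(0.5 + confidence / 2.0)
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)

print(correlation_ci(r=0.80, n=25))   # roughly (0.59, 0.91)
```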

  2. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.

  3. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. Different estimators may affect the estimated Nakagami parameter and hence the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
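
    For orientation, the sketch below shows a moment-based estimator of the Nakagami m parameter together with a closed-form (Thom-type) approximation to the ML estimate, obtained by treating the intensity R² as gamma distributed with shape m. It is an illustrative stand-in, not a reproduction of the specific MLE1, MLE2, or Greenwood approximations compared in the cited study; the simulated envelope data serve only as a plausibility check.

```python
import numpy as np
from scipy.stats import nakagami

def nakagami_mbe(r):
    """Moment-based (inverse normalized variance) estimator of the Nakagami m parameter."""
    i = np.asarray(r, dtype=float) ** 2            # backscattered intensity
    return np.mean(i) ** 2 / np.var(i)

def nakagami_m_approx(r):
    """Thom-type closed-form approximation to the ML estimate of m, reusing the
    gamma-shape approximation for the intensity R^2 (an assumption here, not the
    exact expressions used in the cited study)."""
    i = np.asarray(r, dtype=float) ** 2
    delta = np.log(np.mean(i)) - np.mean(np.log(i))
    return (3.0 - delta + np.sqrt((delta - 3.0) ** 2 + 24.0 * delta)) / (12.0 * delta)

# Plausibility check on simulated Nakagami envelopes with m = 2
r = nakagami.rvs(2.0, scale=1.0, size=5000, random_state=np.random.default_rng(2))
print(round(nakagami_mbe(r), 2), round(nakagami_m_approx(r), 2))
```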

  4. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords : finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  5. Estimating Water Demand in Urban Indonesia: A Maximum Likelihood Approach to block Rate Pricing Data

    NARCIS (Netherlands)

    Rietveld, Piet; Rouwendal, Jan; Zwart, Bert

    1997-01-01

    In this paper the Burtless and Hausman model is used to estimate water demand in Salatiga, Indonesia. Other statistical models, such as OLS and IV, are found to be inappropriate. A topic, which does not seem to appear in previous studies, is the fact that the density function of the loglikelihood can be

  6. Directional maximum likelihood self-estimation of the path-loss exponent

    NARCIS (Netherlands)

    Hu, Y.; Leus, G.J.T.; Dong, Min; Zheng, Thomas Fang

    2016-01-01

    The path-loss exponent (PLE) is a key parameter in wireless propagation channels. Therefore, obtaining the knowledge of the PLE is rather significant for assisting wireless communications and networking to achieve a better performance. Most existing methods for estimating the PLE not only require

  7. Noise removal in multichannel image data by a parametric maximum noise fraction estimator

    DEFF Research Database (Denmark)

    Conradsen, Knut; Ersbøll, Bjarne Kjær; Nielsen, Allan Aasbjerg

    1991-01-01

    Some approaches to noise removal in multispectral imagery are presented. The primary contribution of the present work is the establishment of several ways of estimating the noise covariance matrix from image data and a comparison of the noise separation performances. A case study with Landsat MSS...

  8. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  9. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords : finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  10. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  11. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  13. Theoretical estimates of maximum fields in superconducting resonant radio frequency cavities: stability theory, disorder, and laminates

    Science.gov (United States)

    Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.

    2017-03-01

    Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
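
    To make the step-size idea concrete, here is a minimal over-relaxed EM iteration for a univariate two-component normal mixture: plain EM corresponds to a step of 1, and the update is extrapolated by a factor between 0 and 2. This is only a simplified reading of the generalized steepest-ascent (deflected-gradient) procedure described above; the mixture, starting values and step size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_step(x, w, mu, sd):
    """One EM update for a univariate two-component normal mixture."""
    dens = np.vstack([w[k] * norm.pdf(x, mu[k], sd[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)                 # posterior membership probabilities
    nk = resp.sum(axis=1)
    w_new = nk / x.size
    mu_new = (resp @ x) / nk
    d2 = (x[None, :] - mu_new[:, None]) ** 2
    sd_new = np.sqrt((resp * d2).sum(axis=1) / nk)
    return w_new, mu_new, sd_new

def relaxed_em(x, w, mu, sd, step=1.0, iters=200):
    """EM with over-relaxation: new = old + step * (EM update - old).
    Plain EM is step = 1; the cited analysis shows local convergence for 0 < step < 2."""
    w, mu, sd = (np.array(v, dtype=float) for v in (w, mu, sd))
    for _ in range(iters):
        w1, mu1, sd1 = em_step(x, w, mu, sd)
        w, mu, sd = w + step * (w1 - w), mu + step * (mu1 - mu), sd + step * (sd1 - sd)
        w = np.clip(w, 1e-6, None)
        w = w / w.sum()                            # keep the weights on the simplex
    return w, mu, sd

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])
print(relaxed_em(x, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0], step=1.2))
```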

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  16. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  17. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    Science.gov (United States)

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.

  18. Parameter-free bearing fault detection based on maximum likelihood estimation and differentiation

    International Nuclear Information System (INIS)

    Bozchalooi, I Soltani; Liang, Ming

    2009-01-01

    Bearing faults can lead to malfunction and ultimately complete stall of many machines. The conventional high-frequency resonance (HFR) method has been commonly used for bearing fault detection. However, it is often very difficult to obtain and calibrate bandpass filter parameters, i.e. the center frequency and bandwidth, the key to the success of the HFR method. This inevitably undermines the usefulness of the conventional HFR technique. To avoid such difficulties, we propose parameter-free, versatile yet straightforward techniques to detect bearing faults. We focus on two types of measured signals frequently encountered in practice: (1) a mixture of impulsive faulty bearing vibrations and intrinsic background noise and (2) impulsive faulty bearing vibrations blended with intrinsic background noise and vibration interferences. To design a proper signal processing technique for each case, we analyze the effects of intrinsic background noise and vibration interferences on amplitude demodulation. For the first case, a maximum likelihood-based fault detection method is proposed to accommodate the Rician distribution of the amplitude-demodulated signal mixture. For the second case, we first illustrate that the high-amplitude low-frequency vibration interferences can make the amplitude demodulation ineffective. Then we propose a differentiation method to enhance the fault detectability. It is shown that the iterative application of a differentiation step can boost the relative strength of the impulsive faulty bearing signal component with respect to the vibration interferences. This preserves the effectiveness of amplitude demodulation and hence leads to more accurate fault detection. The proposed approaches are evaluated on simulated signals and experimental data acquired from faulty bearings
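
    The sketch below illustrates the core signal-processing idea in the abstract: repeated differentiation to boost the impulsive bearing component relative to low-frequency vibration interferences, followed by amplitude demodulation (Hilbert envelope) and an envelope spectrum. The synthetic signal, fault repetition rate and number of differentiation passes are illustrative assumptions rather than the authors' exact processing chain, which also includes a maximum likelihood detector for the Rician-distributed envelope.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs, n_diff=1):
    """Repeated differentiation followed by amplitude demodulation (Hilbert envelope)
    and an envelope spectrum; differentiation boosts the wide-band impulsive component
    relative to low-frequency vibration interferences."""
    y = np.asarray(x, dtype=float)
    for _ in range(n_diff):
        y = np.diff(y) * fs                        # discrete derivative
    env = np.abs(hilbert(y))                       # amplitude demodulation
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / env.size
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    return freqs, spec

# Toy signal: 97 Hz train of short 3 kHz bursts buried under a strong 25 Hz interference
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
bursts = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / 97.0) < 0.0005)
signal = bursts + 5.0 * np.sin(2 * np.pi * 25 * t) \
         + 0.1 * np.random.default_rng(4).standard_normal(t.size)
f, s = envelope_spectrum(signal, fs, n_diff=2)
print("dominant envelope frequency (Hz):", f[1 + np.argmax(s[1:])])  # expect ~97
```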

  19. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated with the maximum likelihood estimation (MLE) method. However, MLE has a limitation if the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in binary probit regression between the MLE method and Firth's approach; second, to compare the performance of the estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that for small sample sizes the chance of separation occurring with the MLE method is higher than with Firth's approach. For larger sample sizes, the probability decreases and is nearly identical for the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE, especially for smaller sample sizes, whereas for larger sample sizes the RMSEs differ little. Thus Firth's estimators outperform the MLE estimator.
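
    A minimal numerical illustration of why separation breaks the MLE: for completely separated data the probit log-likelihood increases monotonically as the slope grows, so no finite maximizer exists. The tiny data set and no-intercept model are illustrative assumptions; Firth's approach avoids the problem by penalizing the likelihood with the Jeffreys prior, which in practice keeps the estimates finite, but that penalized fit is not implemented here.

```python
import numpy as np
from scipy.stats import norm

# Completely separated toy data: y = 1 exactly when x > 0
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = (x > 0).astype(float)

def probit_loglik(beta, x, y):
    """Log-likelihood of a no-intercept probit model, P(y = 1) = Phi(beta * x)."""
    p = norm.cdf(beta * x)
    eps = 1e-300                                   # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

# Under separation the likelihood keeps increasing with |beta|,
# so maximum likelihood has no finite solution (non-convergence).
for beta in (1.0, 5.0, 25.0, 100.0):
    print(f"beta = {beta:6.1f}  log-likelihood = {probit_loglik(beta, x, y):.5f}")
```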

  20. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    Science.gov (United States)

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  1. Maximum likelihood estimation of dose-response parameters for therapeutic operating characteristic (TOC) analysis of carcinoma of the nasopharynx

    International Nuclear Information System (INIS)

    Metz, C.E.; Tokars, R.P.; Kronman, H.B.; Griem, M.L.

    1982-01-01

    A Therapeutic Operating Characteristic (TOC) curve for radiation therapy plots, for all possible treatment doses, the probability of tumor ablation as a function of the probability of radiation-induced complication. Application of this analysis to actual therapeutic situations requires that dose-response curves for ablation and for complication be estimated from clinical data. We describe an approach in which "maximum likelihood estimates" of these dose-response curves are made, and we apply this approach to data collected on responses to radiotherapy for carcinoma of the nasopharynx. TOC curves constructed from the estimated dose-response curves are subject to moderately large uncertainties because of the limitations of available data. These TOC curves suggest, however, that treatment doses greater than 1800 rem may substantially increase the probability of tumor ablation with little increase in the risk of radiation-induced cervical myelopathy, especially for T1 and T2 tumors.

  2. Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum

    Science.gov (United States)

    Schmittner, A.; Urban, N.; Shakun, J. D.; Mahowald, N. M.; Clark, P. U.; Bartlein, P. J.; Mix, A. C.; Rosell-Melé, A.

    2011-12-01

    In 1959 IJ Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like forecasting tasks and climate-like tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "Dynamic probability", a probability one expects to change without any additional empirical evidence; the probabilities assigned by a chess playing program when it is only halfway through its analysis being an example. This case is contrasted with the case of "Mature probabilities", where a forecast algorithm (or model) has converged on its asymptotic probabilities and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision making? Mature Probability is suggested as a useful sixth kind; although Good would doubtlessly argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like contexts and climate-like contexts. In the former context one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter context that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative. This subjective "probability of a big surprise" is one way to communicate the probability of model-based information holding in practice, the probability that the

  3. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  4. Maximum wind radius estimated by the 50 kt radius: improvement of storm surge forecasting over the western North Pacific

    Science.gov (United States)

    Takagi, Hiroshi; Wu, Wenjie

    2016-03-01

    Even though the maximum wind radius (Rmax) is an important parameter in determining the intensity and size of tropical cyclones, it has been overlooked in previous storm surge studies. This study reviews the existing estimation methods for Rmax based on central pressure or maximum wind speed. These over- or underestimate Rmax because of substantial variations in the data, although an average radius can be estimated with moderate accuracy. As an alternative, we propose an Rmax estimation method based on the radius of the 50 kt wind (R50). Data obtained by a meteorological station network in the Japanese archipelago during the passage of strong typhoons, together with the JMA typhoon best track data for 1990-2013, enabled us to derive the following simple equation, Rmax = 0.23 R50. Application to a recent strong typhoon, the 2015 Typhoon Goni, confirms that the equation provides a good estimation of Rmax, particularly when the central pressure became considerably low. Although this new method substantially improves the estimation of Rmax compared to the existing models, estimation errors are unavoidable because of fundamental uncertainties regarding the typhoon's structure or insufficient number of available typhoon data. In fact, a numerical simulation for the 2013 Typhoon Haiyan as well as 2015 Typhoon Goni demonstrates a substantial difference in the storm surge height for different Rmax. Therefore, the variability of Rmax should be taken into account in storm surge simulations (e.g., Rmax = 0.15 R50-0.35 R50), independently of the model used, to minimize the risk of over- or underestimating storm surges. The proposed method is expected to increase the predictability of major storm surges and to contribute to disaster risk management, particularly in the western North Pacific, including countries such as Japan, China, Taiwan, the Philippines, and Vietnam.
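
    The proposed relation reduces to a one-line computation; the sketch below applies Rmax = 0.23 R50 together with the recommended sensitivity band of 0.15 R50 to 0.35 R50. The input radius is a hypothetical value, not taken from the cited typhoons.

```python
def rmax_from_r50(r50_km, ratio=0.23):
    """Maximum wind radius from the 50 kt wind radius via Rmax = 0.23 * R50;
    ratios of roughly 0.15-0.35 span the recommended sensitivity range."""
    return ratio * r50_km

r50 = 180.0                                        # hypothetical 50 kt radius in km
print("central estimate (km):", rmax_from_r50(r50))
print("sensitivity band (km):", rmax_from_r50(r50, 0.15), "-", rmax_from_r50(r50, 0.35))
```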

  5. Estimating daily minimum, maximum, and mean near surface air temperature using hybrid satellite models across Israel.

    Science.gov (United States)

    Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai

    2017-11-01

    Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (day time), 24h mean, and Ta Min (night time) on a fine 1km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R 2 results for min 0.966, mean 0.986, and max 0.967; CV Terra R 2 results for min 0.965, mean 0.987, and max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Room at the Mountain: Estimated Maximum Amounts of Commercial Spent Nuclear Fuel Capable of Disposal in a Yucca Mountain Repository

    International Nuclear Information System (INIS)

    Kessler, John H.; Kemeny, John; King, Fraser; Ross, Alan M.; Ross, Benjamen

    2006-01-01

    The purpose of this paper is to present an initial analysis of the maximum amount of commercial spent nuclear fuel (CSNF) that could be emplaced into a geological repository at Yucca Mountain. This analysis identifies and uses programmatic, material, and geological constraints and factors that affect this estimation of maximum amount of CSNF for disposal. The conclusion of this initial analysis is that the current legislative limit on Yucca Mountain disposal capacity, 63,000 MTHM of CSNF, is a small fraction of the available physical capacity of the Yucca Mountain system assuming the current high-temperature operating mode (HTOM) design. EPRI is confident that at least four times the legislative limit for CSNF (∼260,000 MTHM) can be emplaced in the Yucca Mountain system. It is possible that with additional site characterization, upwards of nine times the legislative limit (∼570,000 MTHM) could be emplaced. (authors)

  7. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of

  8. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    Science.gov (United States)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  9. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.

  10. Estimations of One Repetition Maximum and Isometric Peak Torque in Knee Extension Based on the Relationship Between Force and Velocity.

    Science.gov (United States)

    Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo

    2016-04-01

    We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
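
    To show the general shape of such force-velocity extrapolations, the sketch below fits an individual load-velocity line from hypothetical submaximal knee-extension trials, reads the isometric peak torque off the zero-velocity intercept, and evaluates the load at an assumed velocity for a maximal lift. The numbers, and the use of absolute load rather than the %1RM-PAV regression of the cited study, are illustrative assumptions only.

```python
import numpy as np

# Hypothetical submaximal trials: load (knee-extension torque, N·m) vs peak angular velocity (deg/s)
loads = np.array([60.0, 80.0, 100.0, 120.0])
peak_velocity = np.array([420.0, 330.0, 240.0, 150.0])

# Fit the individual load-velocity line: load = intercept + slope * velocity
slope, intercept = np.polyfit(peak_velocity, loads, 1)

# Isometric peak torque ~ extrapolation to zero velocity (the intercept)
print("estimated isometric peak torque:", round(intercept, 1))

# 1RM ~ load at an assumed minimal velocity achievable in a maximal lift
v_1rm = 100.0                                      # assumed value, not from the cited study
print("estimated 1RM load:", round(intercept + slope * v_1rm, 1))
```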

  11. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R., E-mail: asolekar@iitb.ac.in

    2012-11-01

    The 180 ship recycling yards located on Alang-Sosiya beach in the State of Gujarat on the west coast of India is the world's largest cluster engaged in dismantling. Yearly 350 ships have been dismantled (avg. 10,000 ton steel/ship) with the involvement of about 60,000 workers. Cutting and scrapping of plates or scraping of painted metal surfaces happens to be the commonly performed operation during ship breaking. The pollutants released from a typical plate-cutting operation can potentially either affect workers directly by contaminating the breathing zone (air pollution) or can potentially add pollution load into the intertidal zone and contaminate sediments when pollutants get emitted in the secondary working zone and gets subjected to tidal forces. There was a two-pronged purpose behind the mathematical modeling exercise performed in this study. First, to estimate the zone of influence up to which the effect of plume would extend. Second, to estimate the cumulative maximum concentration of heavy metals that can potentially occur in ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm³ and 428 μg/Nm³ (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could be placed between 8 and 30 μg/Nm³. These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) for Pb (0.5 μg/Nm³). This research has already become the critical science and technology inputs for formulation of policies for eco-friendly dismantling of ships, formulation of ideal procedure and corresponding health, safety, and environment provisions. The insights obtained from this research are also being used in developing appropriate technologies for minimizing exposure to workers and minimizing possibilities of causing heavy metal pollution in the intertidal zone of ship recycling

  12. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    International Nuclear Information System (INIS)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R.

    2012-01-01

    The 180 ship recycling yards located on Alang–Sosiya beach in the State of Gujarat on the west coast of India is the world's largest cluster engaged in dismantling. Yearly 350 ships have been dismantled (avg. 10,000 ton steel/ship) with the involvement of about 60,000 workers. Cutting and scrapping of plates or scraping of painted metal surfaces happens to be the commonly performed operation during ship breaking. The pollutants released from a typical plate-cutting operation can potentially either affect workers directly by contaminating the breathing zone (air pollution) or can potentially add pollution load into the intertidal zone and contaminate sediments when pollutants get emitted in the secondary working zone and gets subjected to tidal forces. There was a two-pronged purpose behind the mathematical modeling exercise performed in this study. First, to estimate the zone of influence up to which the effect of plume would extend. Second, to estimate the cumulative maximum concentration of heavy metals that can potentially occur in ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm³ and 428 μg/Nm³ (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could be placed between 8 and 30 μg/Nm³. These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) for Pb (0.5 μg/Nm³). This research has already become the critical science and technology inputs for formulation of policies for eco-friendly dismantling of ships, formulation of ideal procedure and corresponding health, safety, and environment provisions. The insights obtained from this research are also being used in developing appropriate technologies for minimizing exposure to workers and minimizing possibilities of causing heavy metal pollution in the intertidal zone of ship recycling yards in India.
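
    For readers unfamiliar with this type of dispersion estimate, the sketch below evaluates a generic Gaussian-plume ground-level centerline concentration and shows how it scales inversely with wind speed, consistent with the higher concentrations reported at 1 m/s than at 4 m/s. The emission rate, release height and dispersion coefficients are illustrative assumptions; the cited study's actual model and inputs are not reproduced here.

```python
import numpy as np

def plume_centerline_conc(Q, u, x, H, a=0.08, b=0.06):
    """Ground-level centerline concentration from a generic Gaussian plume,
    C = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2)).

    Q emission rate (ug/s), u wind speed (m/s), x downwind distance (m),
    H effective release height (m). The simple linear sigma parameterizations
    and all numbers below are illustrative assumptions, not the study's values."""
    sigma_y = a * x
    sigma_z = b * x
    return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H ** 2 / (2 * sigma_z ** 2))

for u in (1.0, 4.0):                               # the two near-ground wind speeds discussed
    c = plume_centerline_conc(Q=5e3, u=u, x=50.0, H=2.0)
    print(f"wind speed {u} m/s -> centerline concentration {c:.1f} ug/m^3")
```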

  13. Subsurface Contamination Control

    Energy Technology Data Exchange (ETDEWEB)

    Y. Yuan

    2001-12-12

    subsurface repository; (2) provides a table of derived LRCL for nuclides of radiological importance; (3) provides an as low as is reasonably achievable (ALARA) evaluation of the derived LRCL by comparing potential onsite and offsite doses to documented ALARA requirements; (4) provides a method for estimating potential releases from a defective WP; (5) provides an evaluation of potential radioactive releases from a defective WP that may become airborne and result in contamination of the subsurface facility; and (6) provides a preliminary analysis of the detectability of a potential WP leak to support the design of an airborne release monitoring system.

  14. Coupled land surface-subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra

    Science.gov (United States)

    Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.

    2017-09-01

    Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface-subsurface hydrological-thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and benefit of joint inversion on the

  15. Coupled land surface–subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra

    Directory of Open Access Journals (Sweden)

    A. P. Tran

    2017-09-01

    Full Text Available Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface–subsurface hydrological–thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon–climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological–thermal processes associated with annual freeze–thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets – including soil liquid water content, temperature and electrical resistivity tomography (ERT) data – to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological–thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface–subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice–liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological–thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and

  16. Comparative Study of Regional Estimation Methods for Daily Maximum Temperature (A Case Study of the Isfahan Province

    Directory of Open Access Journals (Sweden)

    Ghamar Fadavi

    2016-02-01

    Full Text Available Introduction: Because the statistical time series are short and the meteorological stations are not well distributed in mountainous areas, determining climatic criteria is complex. Therefore, in recent years interpolation methods for establishing continuous climatic data have been considered. Continuous daily maximum temperature data are a key factor for climate-crop modeling, which is fundamental for water resources management, drought assessment, and optimal use of the climatic potential of different regions. The main objective of this study is to evaluate different interpolation methods for estimating regional maximum temperature in the Isfahan province. Materials and Methods: Isfahan province covers about 937,105 square kilometers, between 30°43′ and 34°27′ north latitude and 49°36′ and 55°31′ east longitude. It is located in the center of Iran, and its western part extends to the eastern foothills of the Zagros mountain range. The elevations of the meteorological stations in the study area range between 845 and 2490 m. This study used daily maximum temperature data for the years 1992 and 2007 from synoptic and climatology stations of the I.R. of Iran Meteorological Organization (IRIMO). To interpolate the temperature data, the two years 1992 and 2007, with different numbers of meteorological stations, were selected: temperature data from thirty meteorological stations (17 synoptic and 13 climatological) for 1992 and from fifty-four meteorological stations (31 synoptic and 23 climatological) for 2007 were used from Isfahan province and neighboring provinces. To regionalize the point data of daily maximum temperature, interpolation methods including inverse distance weighted (IDW), Kriging, Co-Kriging, Kriging-Regression, multiple regression and Spline were used. Therefore, for this allocated

  17. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi

    2014-10-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB's in effective signal-to-noise ratio.

  18. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  19. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi; Chen, Yunfei; Alouini, Mohamed-Slim; Xu, Feng

    2014-01-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB's in effective signal-to-noise ratio.

  20. Subsurface probing

    International Nuclear Information System (INIS)

    Lytle, R.J.

    1978-01-01

    Imaging techniques that can be used to translate seismic and electromagnetic wave signals into visual representations are briefly discussed. The application of these techniques is illustrated with the example of determining the subsurface structure at a proposed power plant site. Imaging makes the wave signals intelligible to non-geologists. R and D work needed in this area is tabulated

  1. Performance analysis of the lineal model for estimating the maximum power of a HCPV module in different climate conditions

    Science.gov (United States)

    Fernández, Eduardo F.; Almonacid, Florencia; Sarmah, Nabin; Mallick, Tapas; Sanchez, Iñigo; Cuadra, Juan M.; Soria-Moya, Alberto; Pérez-Higueras, Pedro

    2014-09-01

    A model based on easily obtained atmospheric parameters and a simple linear mathematical expression has been developed at the Centre of Advanced Studies in Energy and Environment in southern Spain. The model predicts the maximum power of a HCPV module as a function of direct normal irradiance, air temperature and air mass. At present, the proposed model has only been validated in southern Spain, and its performance in locations with different atmospheric conditions remains unknown. In order to address this issue, several HCPV modules have been measured in two locations with climate conditions different from the south of Spain: the Environment and Sustainability Institute in southern UK and the National Renewable Energy Center in northern Spain. Results show that the model yields an adequate match between actual and estimated data, with an RMSE lower than 3.9% at locations with different climate conditions.
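
    As a hedged sketch of the kind of linear fit described above (the data values below are made up and the fitted coefficients are not those of the published model), the maximum power of a module can be regressed on direct normal irradiance, air temperature and air mass by ordinary least squares:

        import numpy as np

        # Hypothetical measurements: direct normal irradiance (W/m^2), air temperature (degC),
        # air mass, and measured maximum power (W) of a HCPV module.
        dni  = np.array([700., 800., 850., 900., 950.])
        tair = np.array([15., 20., 25., 28., 30.])
        am   = np.array([1.8, 1.5, 1.3, 1.2, 1.1])
        pmax = np.array([180., 210., 225., 238., 250.])

        # Linear model: Pmax ~ a0 + a1*DNI + a2*Tair + a3*AM, fitted by least squares.
        X = np.column_stack([np.ones_like(dni), dni, tair, am])
        coef, *_ = np.linalg.lstsq(X, pmax, rcond=None)

        pred = X @ coef
        rmse = np.sqrt(np.mean((pred - pmax) ** 2))
        print(coef, 100 * rmse / pmax.mean())   # coefficients and RMSE as a % of mean power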

  2. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  3. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  4. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  5. Ecosystem approach to fisheries: Exploring environmental and trophic effects on Maximum Sustainable Yield (MSY) reference point estimates.

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar

    Full Text Available We present a comprehensive analysis of estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch-survey, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels, with and without considering the influence of temperature change; therefore, our analysis investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed.

  6. Estimation of daily maximum and minimum air temperatures in urban landscapes using MODIS time series satellite data

    Science.gov (United States)

    Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.

    2018-03-01

    Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time-series of land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs and Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE) from 10-fold cross-validation. The best schemes produced R2 of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R2 of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
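
    A minimal sketch of the random forest regression workflow the abstract describes, using synthetic stand-in predictors (eight LST values plus seven auxiliary variables) and 10-fold cross-validation; it is not the authors' code, data or tuning.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import KFold, cross_val_predict

        rng = np.random.default_rng(1)

        # Synthetic predictor matrix: 8 time-series LST values + 7 auxiliary variables
        # (elevation, solar radiation, NDVI, latitude, longitude, aspect, impervious fraction).
        n_samples = 200
        X = rng.standard_normal((n_samples, 15))
        t_max = X[:, :8].mean(axis=1) * 3.0 + 25.0 + rng.standard_normal(n_samples)  # synthetic Tmax

        model = RandomForestRegressor(n_estimators=500, random_state=0)
        pred = cross_val_predict(model, X, t_max, cv=KFold(n_splits=10, shuffle=True, random_state=0))

        rmse = np.sqrt(np.mean((pred - t_max) ** 2))
        r2 = 1.0 - np.sum((pred - t_max) ** 2) / np.sum((t_max - t_max.mean()) ** 2)
        print(rmse, r2)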

  7. Subsurface chlorophyll maxima in the north-western Bay of Bengal

    Digital Repository Service at National Institute of Oceanography (India)

    Sarma, V.V.; Aswanikumar, V.

    of thermocline suggests that the formation of the subsurface maximum is influenced by the presence of the seasonal thermocline. Further, the subsurface chlorophyll maximum is noticed within the depth ranges of the ammonium maximum and nitracline, suggesting...

  8. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    Science.gov (United States)

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  9. Modelling shallow landslide susceptibility by means of a subsurface flow path connectivity index and estimates of soil depth spatial distribution

    Directory of Open Access Journals (Sweden)

    C. Lanni

    2012-11-01

    Full Text Available Topographic index-based hydrological models have gained wide use to describe the hydrological control on the triggering of rainfall-induced shallow landslides at the catchment scale. A common assumption in these models is that a spatially continuous water table occurs simultaneously across the catchment. However, during a rainfall event isolated patches of subsurface saturation form above an impeding layer and their hydrological connectivity is a necessary condition for lateral flow initiation at a point on the hillslope.

    Here, a new hydrological model is presented, which allows us to account for the concept of hydrological connectivity while keeping the simplicity of the topographic index approach. A dynamic topographic index is used to describe the transient lateral flow that is established at a hillslope element when the rainfall amount exceeds a threshold value allowing for (a) development of a perched water table above an impeding layer, and (b) hydrological connectivity between the hillslope element and its own upslope contributing area. A spatially variable soil depth is the main control of hydrological connectivity in the model. The hydrological model is coupled with the infinite slope stability model and with a scaling model for the rainfall frequency–duration relationship to determine the return period of the critical rainfall needed to cause instability in three catchments located in the Italian Alps, where a survey of soil depth spatial distribution is available. The model is compared with a quasi-dynamic model in which the dynamic nature of the hydrological connectivity is neglected. The results show a better performance of the new model in predicting observed shallow landslides, implying that soil depth spatial variability and connectivity bear a significant control on shallow landsliding.
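
    For orientation, the infinite slope stability model mentioned above can be written in its common textbook form; the Python sketch below evaluates the factor of safety for illustrative parameter values, not the soils or calibration of the three Alpine catchments.

        import math

        def infinite_slope_fs(c_eff, phi_eff_deg, gamma, gamma_w, z, h_w, beta_deg):
            """Factor of safety of an infinite slope with a perched water table.

            c_eff: effective cohesion (kPa); phi_eff_deg: effective friction angle (deg);
            gamma, gamma_w: soil and water unit weights (kN/m^3); z: failure-plane depth (m);
            h_w: water table height above the failure plane (m); beta_deg: slope angle (deg).
            """
            beta = math.radians(beta_deg)
            phi = math.radians(phi_eff_deg)
            pore_pressure = gamma_w * h_w * math.cos(beta) ** 2
            resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - pore_pressure) * math.tan(phi)
            driving = gamma * z * math.sin(beta) * math.cos(beta)
            return resisting / driving

        # Example: 1 m deep soil, fully saturated, on a 35 degree slope (illustrative values).
        print(infinite_slope_fs(c_eff=2.0, phi_eff_deg=35.0, gamma=18.0, gamma_w=9.81,
                                z=1.0, h_w=1.0, beta_deg=35.0))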

  10. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Directory of Open Access Journals (Sweden)

    Junguo Hu

    Full Text Available Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  11. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Science.gov (United States)

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  12. MLE [Maximum Likelihood Estimator] reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    International Nuclear Information System (INIS)

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier, which allows the user to stop the iterative process before the images begin to deteriorate, is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.
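
    For readers unfamiliar with the iterative MLE (MLEM/EM) update used in such PET reconstructions, a minimal Python sketch on a toy system matrix follows; the matrix and counts are synthetic, and neither the Monte Carlo physics nor the statistical stopping rule of the paper is modelled.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy system: A is the (n_detectors x n_pixels) transition matrix, y the measured counts.
        n_det, n_pix = 60, 30
        A = rng.random((n_det, n_pix))
        x_true = rng.random(n_pix) * 10.0
        y = rng.poisson(A @ x_true)

        # MLEM iterations for the Poisson likelihood; the sensitivity image normalises each update.
        x = np.ones(n_pix)
        sensitivity = A.sum(axis=0)
        for _ in range(50):
            projection = A @ x
            x *= (A.T @ (y / np.maximum(projection, 1e-12))) / sensitivity

        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative reconstruction error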

  13. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  14. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.

  15. ROC [Receiver Operating Characteristics] study of maximum likelihood estimator human brain image reconstructions in PET [Positron Emission Tomography] clinical practice

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab

  16. Subsurface Facility System Description Document

    International Nuclear Information System (INIS)

    Eric Loros

    2001-01-01

    The Subsurface Facility System encompasses the location, arrangement, size, and spacing of the underground openings. This subsurface system includes accesses, alcoves, and drifts. This system provides access to the underground, provides for the emplacement of waste packages, provides openings to allow safe and secure work conditions, and interfaces with the natural barrier. This system includes what is now the Exploratory Studies Facility. The Subsurface Facility System physical location and general arrangement help support the long-term waste isolation objectives of the repository. The Subsurface Facility System locates the repository openings away from main traces of major faults, away from exposure to erosion, above the probable maximum flood elevation, and above the water table. The general arrangement, size, and spacing of the emplacement drifts support disposal of the entire inventory of waste packages based on the emplacement strategy. The Subsurface Facility System provides access ramps to safely facilitate development and emplacement operations. The Subsurface Facility System supports the development and emplacement operations by providing subsurface space for such systems as ventilation, utilities, safety, monitoring, and transportation

  17. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    Science.gov (United States)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.

  18. A smoothed maximum score estimator for the binary choice panel data model with individual fixed effects and applications to labour force participation

    NARCIS (Netherlands)

    Charlier, G.W.P.

    1994-01-01

    In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator, based on a discontinuous objective function, and proved its consistency under weak distributional assumptions. However, the rate of convergence of this estimator is low (of the order of the cube root of N).

  19. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  20. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    Directory of Open Access Journals (Sweden)

    Wang Huai-Chun

    2009-09-01

    Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.

  1. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Xiaokang Kou

    2016-01-01

    Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs inverted from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further studies.

  2. Estimation of flashover voltage probability of overhead line insulators under industrial pollution, based on maximum likelihood method

    International Nuclear Information System (INIS)

    Arab, M.N.; Ayaz, M.

    2004-01-01

    The performance of transmission line insulators is greatly affected by dust, fumes from industrial areas and saline deposits near the coast. Such pollutants, in the presence of moisture, form a coating on the surface of the insulator, which in turn allows the passage of leakage current. This leakage builds up to a point where flashover develops. The flashover is often followed by permanent failure of insulation resulting in prolonged outages. With the increase in system voltage owing to the greater demand for electrical energy over the past few decades, the importance of flashover due to pollution has received special attention. The objective of the present work was to study the performance of overhead line insulators in the presence of contaminants such as induced salts. A detailed review of the literature and the mechanisms of insulator flashover due to pollution are presented. Experimental investigations on the behavior of overhead line insulators under industrial salt contamination were carried out. A special fog chamber was designed in which the contamination testing of insulators was carried out. Flashover behavior was studied under various degrees of contamination of insulators with the most common industrial fume components, such as nitrate and sulphate compounds. A statistical method is developed by substituting the normal distribution parameters, estimated by maximum likelihood, into the probability distribution function. The method gives a high accuracy in the estimation of the 50% flashover voltage, which is then used to evaluate the critical flashover index at various contamination levels. The critical flashover index is a valuable parameter in insulation design for numerous applications. (author)
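
    A minimal sketch of the statistical idea, fitting a normal flashover-probability curve by maximum likelihood and reading off the 50% flashover voltage; the voltage levels and outcomes below are hypothetical and do not reproduce the fog-chamber data.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Hypothetical test series: applied voltage (kV) and flashover outcome (1 = flashover).
        volts = np.array([30., 32., 34., 36., 38., 40., 42., 44.])
        flash = np.array([0,   0,   0,   1,   0,   1,   1,   1])

        def neg_log_lik(params):
            mu, sigma = params
            if sigma <= 0:
                return 1e12
            p = np.clip(norm.cdf(volts, loc=mu, scale=sigma), 1e-12, 1 - 1e-12)
            return -np.sum(flash * np.log(p) + (1 - flash) * np.log(1 - p))

        res = minimize(neg_log_lik, x0=[37.0, 2.0], method="Nelder-Mead")
        mu_hat, sigma_hat = res.x
        print(mu_hat)   # the fitted mean is the 50% flashover voltage estimate (V50)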

  3. Enhancing resolution and contrast in second-harmonic generation microscopy using an advanced maximum likelihood estimation restoration method

    Science.gov (United States)

    Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.

    2017-02-01

    Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in an extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Because SHG is a two-photon scattering process, it is challenging to define a point spread function (PSF) for this imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to enhance the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact diagnosis and treatment of human diseases.

  4. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Ronald Herrera

    2017-12-01

    Full Text Available In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children’s respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on asthma or allergic rhinoconjunctivitis risk burden in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children’s home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: −4.7; 95% confidence interval (95% CI): −8.4; −0.11) and by 4.2 percentage points (CAR: −4.2; 95% CI: −7.9; −0.05) for both outcomes combined. Overall, our results suggest that a hypothetical intervention intended to increase the distance between the place of residence of the highest exposed children and the mines would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.

  5. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation.

    Science.gov (United States)

    Herrera, Ronald; Berger, Ursula; von Ehrenstein, Ondine S; Díaz, Iván; Huber, Stella; Moraga Muñoz, Daniel; Radon, Katja

    2017-12-27

    In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children's respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on asthma or allergic rhinoconjunctivitis risk burden in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children's home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: -4.7; 95% confidence interval (95% CI): -8.4; -0.11) and by 4.2 percentage points (CAR: -4.2; 95% CI: -7.9; -0.05) for both outcomes combined. Overall, our results suggest that a hypothetical intervention intended to increase the distance between the place of residence of the highest exposed children and the mines would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.

  6. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
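
    The successive-approximations procedures discussed above are close relatives of the now-standard EM algorithm for normal mixtures; a minimal EM sketch for a two-component mixture on synthetic data (not the report's partially identified samples) is given below.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic sample from a two-component normal mixture.
        x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

        w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
        for _ in range(200):
            # E-step: posterior responsibility of each component for each observation.
            dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted maximum-likelihood updates of the mixture parameters.
            n_k = resp.sum(axis=0)
            w = n_k / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / n_k
            sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

        print(w, mu, sig)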

  7. Subsurface contaminants focus area

    International Nuclear Information System (INIS)

    1996-08-01

    The US Department of Energy (DOE) Subsurface Contaminants Focus Area is developing technologies to address environmental problems associated with hazardous and radioactive contaminants in soil and groundwater that exist throughout the DOE complex, including radionuclides, heavy metals, and dense non-aqueous phase liquids (DNAPLs). More than 5,700 known DOE groundwater plumes have contaminated over 600 billion gallons of water and 200 million cubic meters of soil. Migration of these plumes threatens local and regional water sources, and in some cases has already adversely impacted off-site resources. In addition, the Subsurface Contaminants Focus Area is responsible for supplying technologies for the remediation of numerous landfills at DOE facilities. These landfills are estimated to contain over 3 million cubic meters of radioactive and hazardous buried waste. Technology developed within this specialty area will provide effective methods to contain contaminant plumes and new or alternative in situ technologies to minimize waste disposal costs and potential worker exposure by treating plumes in place. While addressing contaminant plumes emanating from DOE landfills, the Subsurface Contaminants Focus Area is also working to develop new or alternative technologies for the in situ stabilization and nonintrusive characterization of these disposal sites.

  8. Subsurface contaminants focus area

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

    The US Department of Energy (DOE) Subsurface Contaminants Focus Area is developing technologies to address environmental problems associated with hazardous and radioactive contaminants in soil and groundwater that exist throughout the DOE complex, including radionuclides, heavy metals, and dense non-aqueous phase liquids (DNAPLs). More than 5,700 known DOE groundwater plumes have contaminated over 600 billion gallons of water and 200 million cubic meters of soil. Migration of these plumes threatens local and regional water sources, and in some cases has already adversely impacted off-site resources. In addition, the Subsurface Contaminants Focus Area is responsible for supplying technologies for the remediation of numerous landfills at DOE facilities. These landfills are estimated to contain over 3 million cubic meters of radioactive and hazardous buried waste. Technology developed within this specialty area will provide effective methods to contain contaminant plumes and new or alternative in situ technologies to minimize waste disposal costs and potential worker exposure by treating plumes in place. While addressing contaminant plumes emanating from DOE landfills, the Subsurface Contaminants Focus Area is also working to develop new or alternative technologies for the in situ stabilization and nonintrusive characterization of these disposal sites.

  9. Estimating the spatial distribution of soil moisture based on Bayesian maximum entropy method with auxiliary data from remote sensing

    Science.gov (United States)

    Gao, Shengguo; Zhu, Zhongli; Liu, Shaomin; Jin, Rui; Yang, Guangchao; Tan, Lei

    2014-10-01

    Soil moisture (SM) plays a fundamental role in the land-atmosphere exchange process. Spatial estimation based on multiple in situ (network) data is a critical way to understand the spatial structure and variation of land surface soil moisture. Theoretically, integrating densely sampled auxiliary data spatially correlated with soil moisture into the procedure of spatial estimation can improve its accuracy. In this study, we present a novel approach to estimate the spatial pattern of soil moisture by using the BME method based on wireless sensor network data and auxiliary information from ASTER (Terra) land surface temperature measurements. For comparison, three traditional geostatistical methods were also applied: ordinary kriging (OK), which used the wireless sensor network data only, and regression kriging (RK) and ordinary co-kriging (Co-OK), which both integrated the ASTER land surface temperature as a covariate. In Co-OK, LST was linearly included in the estimator; in RK, the estimator is expressed as the sum of the regression estimate and the kriged estimate of the spatially correlated residual; in BME, the ASTER land surface temperature was first converted to soil moisture based on the linear regression, and then the t-distributed prediction interval (PI) of soil moisture was estimated and used as soft data in probability form. The results indicate that all three methods provide reasonable estimations. Compared to OK, Co-OK, RK and BME can provide a more accurate spatial estimation by integrating the auxiliary information. RK and BME show more obvious improvement compared to Co-OK, and BME can even perform slightly better than RK. The inherent issue of spatial estimation (overestimation in the range of low values and underestimation in the range of high values) can also be further improved in both RK and BME. We can conclude that integrating auxiliary data into spatial estimation can indeed improve the accuracy; BME and RK take better advantage of the auxiliary information.

  10. The maximum ground level concentration of air pollutant and the effect of plume rise on concentration estimates

    International Nuclear Information System (INIS)

    Mayhoub, A.B.; Azzam, A.

    1991-01-01

    The emission of an air pollutant from an elevated point source according to the Gaussian plume model is presented. An elementary theoretical treatment is constructed for both the highest possible ground-level concentration and the downwind distance at which this maximum occurs, for different stability classes. The modification of the effective release height due to plume rise is taken into consideration. An illustrative case study, namely the emission from the research reactor at Inchas, has been studied. The results of these analytical treatments and of the derived semi-empirical formulae are discussed and presented in a few illustrative diagrams.
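
    As an illustration of the quantities involved, the ground-level centreline concentration from an elevated point source under the Gaussian plume model is C(x) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2)); the Python sketch below scans downwind distance for the maximum, using placeholder power-law dispersion coefficients rather than the stability-class values used in the paper.

        import numpy as np

        def ground_level_conc(x, Q, u, H, a_y=0.22, b_y=0.90, a_z=0.20, b_z=0.85):
            """Gaussian plume ground-level centreline concentration at downwind distance x (m).

            Q: emission rate (g/s); u: wind speed (m/s); H: effective release height (m).
            The sigma_y, sigma_z power-law coefficients here are illustrative placeholders;
            real applications take them from stability-class tables.
            """
            sigma_y = a_y * x ** b_y
            sigma_z = a_z * x ** b_z
            return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H ** 2 / (2.0 * sigma_z ** 2))

        x = np.linspace(50.0, 20000.0, 4000)
        c = ground_level_conc(x, Q=10.0, u=4.0, H=60.0)
        i_max = np.argmax(c)
        print(x[i_max], c[i_max])   # distance of the maximum and the maximum concentration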

  11. Estimated radiological doses to the maximally exposed individual and downstream populations from releases of tritium, strontium-90, ruthenium-106, and cesium-137 from White Oak Dam

    International Nuclear Information System (INIS)

    Little, C.A.; Cotter, S.J.

    1980-01-01

    Concentrations of tritium, 90Sr, 106Ru, and 137Cs in the Clinch River for 1978 were estimated by using the known 1978 releases of these nuclides from the White Oak Dam and diluting them by the integrated annual flow rate of the Clinch River. Estimates of the 50-year dose commitment to a maximally exposed individual were calculated for both aquatic and terrestrial pathways of exposure. The maximally exposed individual was assumed to reside at the mouth of White Oak Creek where it enters the Clinch River and to obtain all foodstuffs and drinking water at that location. The estimated total-body dose from all pathways to the maximally exposed individual as a result of 1978 releases was less than 1% of the dose expected from natural background. Using appropriate concentrations of the subject radionuclides diluted downstream, the doses to populations residing at Harriman, Kingston, Rockwood, Spring City, Soddy-Daisy, and Chattanooga were calculated for aquatic exposure pathways. The total-body dose estimated for aquatic pathways for the six cities was about 0.0002 times the expected dose from natural background. For the pathways considered in this report, the nuclide which contributed the largest fraction of the dose was 90Sr. The largest dose delivered by 90Sr was to the bone of the subject individual or community.

  12. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short memory parameters.

  13. The Maximum Number of Parameters for the Hausman Test when the Estimators are from Different Sets of Equations

    NARCIS (Netherlands)

    K. Nawata (Kazumitsu); M.J. McAleer (Michael)

    2013-01-01

    Hausman (1978) developed a widely-used model specification test that has passed the test of time. The test is based on two estimators, one being consistent under the null hypothesis but inconsistent under the alternative, and the other being consistent under both the null and the alternative.

  14. The Maximum Number of Parameters for the Hausman Test when the Estimators are from Different Sets of Equations

    NARCIS (Netherlands)

    K. Nawata (Kazumitsu); M.J. McAleer (Michael)

    2013-01-01

    Hausman (1978) developed a widely-used model specification test that has passed the test of time. The test is based on two estimators, one being consistent under the null hypothesis but inconsistent under the alternative, and the other being consistent under both the null and the alternative.

  15. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
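
    For context, the single-point bearing defect frequencies on which such spectral searches are anchored follow the classical kinematic formulas; the Python sketch below computes them for a hypothetical bearing geometry (the dimensions are illustrative, not taken from the paper).

        import math

        def bearing_fault_frequencies(f_rot, n_balls, d_ball, d_pitch, contact_deg=0.0):
            """Classical single-point bearing defect frequencies (Hz) for shaft speed f_rot (Hz)."""
            ratio = d_ball / d_pitch * math.cos(math.radians(contact_deg))
            bpfo = n_balls / 2.0 * f_rot * (1.0 - ratio)                  # outer-race defect
            bpfi = n_balls / 2.0 * f_rot * (1.0 + ratio)                  # inner-race defect
            ftf = f_rot / 2.0 * (1.0 - ratio)                             # cage (fundamental train)
            bsf = d_pitch / (2.0 * d_ball) * f_rot * (1.0 - ratio ** 2)   # ball spin
            return bpfo, bpfi, ftf, bsf

        # Hypothetical geometry: 9 balls, 7.94 mm ball diameter, 39.04 mm pitch diameter, 29.95 Hz shaft speed.
        print(bearing_fault_frequencies(f_rot=29.95, n_balls=9, d_ball=7.94, d_pitch=39.04))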

  16. Estimation of fracture conditions of ceramics by thermal shock with laser beams based on the maximum compressive stress criterion

    International Nuclear Information System (INIS)

    Akiyama, Shigeru; Amada, Shigeyasu.

    1992-01-01

    Structural ceramics are attracting attention in the development of space planes, aircraft and nuclear fusion reactors because they have excellent wear-resistant and heat-resistant characteristics. However, in some applications it is anticipated that they will be exposed to very-high-temperature environments of the order of thousands of degrees. Therefore, it is very important to investigate their thermal shock characteristics. In this report, the distributions of temperatures and thermal stresses of cylindrically shaped ceramics under irradiation by laser beams are discussed using the finite-element computer code (MARC) with arbitrary quadrilateral axisymmetric ring elements. The relationships between spot diameters of laser beams and maximum values of compressive thermal stresses are derived for various power densities. From these relationships, a critical fracture curve is obtained, and it is compared with the experimental results. (author)

  17. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
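A hedged sketch of the comparison described above: fitting a measured spectrum as a non-negative combination of two library ("standard") spectra, once by least squares and once by Poisson maximum likelihood. The spectrum shapes, channel count and activities are placeholders, not the SDAT data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
channels = np.arange(128)
std_a = np.exp(-0.5 * ((channels - 40) / 6.0) ** 2)    # stand-in "Xe-131m" shape
std_b = np.exp(-0.5 * ((channels - 70) / 10.0) ** 2)   # stand-in "Xe-133" shape
S = np.column_stack([std_a, std_b])                    # standard-spectrum matrix

true_act = np.array([5.0, 60.0])                       # true component activities
counts = rng.poisson(S @ true_act)                     # low-statistics measurement

# Least-squares estimate (clipped to be non-negative)
ls_est = np.clip(np.linalg.lstsq(S, counts, rcond=None)[0], 0, None)

# Poisson maximum-likelihood estimate: minimize the negative log-likelihood
def neg_loglike(a):
    lam = S @ a + 1e-12                # expected counts per channel
    return np.sum(lam - counts * np.log(lam))

ml_est = minimize(neg_loglike, x0=ls_est + 1.0, bounds=[(0, None)] * 2).x
print("least squares:", ls_est, " Poisson ML:", ml_est)
```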

  18. Estimation of the players maximum heart rate in real game situations in team sports: a practical propose

    Directory of Open Access Journals (Sweden)

    Jorge Cuadrado Reyes

    2011-05-01

    This research developed an algorithm for calculating the maximum heart rate (max HR) of players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3) belonging to a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was performed to derive a max HR prediction equation from the max HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure max HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
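A minimal sketch of the kind of linear-regression step described above, relating the maximum HR reached in the hardest training sessions to a laboratory reference value; all numbers are invented and the fitted coefficients are not those of the study:

```python
import numpy as np

navette_max_hr = np.array([192, 188, 201, 196, 185, 199, 190, 194])   # lab test [bpm]
session_max_hr = np.array([189, 186, 198, 193, 183, 196, 188, 191])   # hardest sessions [bpm]

# Fit session_max_hr = a * navette_max_hr + b
a, b = np.polyfit(navette_max_hr, session_max_hr, deg=1)
r = np.corrcoef(navette_max_hr, session_max_hr)[0, 1]
print(f"predicted game max HR = {a:.3f} * lab max HR + {b:.1f}  (r = {r:.2f})")
```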

  19. Application of asymptotic expansions for maximum likelihood estimators' errors to gravitational waves from inspiraling binary systems: The network case

    International Nuclear Information System (INIS)

    Vitale, Salvatore; Zanolin, Michele

    2011-01-01

    This paper describes the most accurate analytical frequentist assessment to date of the uncertainties in the estimation of physical parameters from gravitational waves generated by nonspinning binary systems and Earth-based networks of laser interferometers. The paper quantifies how the accuracy in estimating the intrinsic parameters mostly depends on the network signal to noise ratio (SNR), but the resolution in the direction of arrival also strongly depends on the network geometry. We compare results for six different existing and possible global networks and two different choices of the parameter space. We show how the fraction of the sky where the one sigma angular resolution is below 2 square degrees increases about 3 times when transitioning from the Hanford (USA), Livingston (USA) and Cascina (Italy) network to a network made of five interferometers (while keeping the network SNR fixed). The technique adopted here is an asymptotic expansion of the uncertainties in inverse powers of the SNR where the first order is the inverse Fisher information matrix. We show that the commonly employed approach of using a simplified parameter spaces and only the Fisher information matrix can largely underestimate the uncertainties (the combined effect would lead to a factor 7 for the one sigma sky uncertainty in square degrees at a network SNR of 15).
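For orientation, the first-order term of the expansion discussed above is the inverse Fisher information matrix. The sketch below computes that lower bound numerically for a toy damped-sinusoid model in white Gaussian noise; the model, noise level and parameters are assumptions for illustration only:

```python
import numpy as np

# For a signal s(t; theta) in white Gaussian noise with std sigma:
# F_ij = (1 / sigma^2) * sum_t ds/dtheta_i * ds/dtheta_j
def signal(t, theta):
    amp, freq = theta
    return amp * np.exp(-t) * np.sin(2 * np.pi * freq * t)

t = np.linspace(0.0, 2.0, 400)
theta0 = np.array([1.0, 5.0])        # amplitude, frequency
sigma = 0.1                          # noise standard deviation (assumed)

# Central-difference derivatives of the signal with respect to each parameter
eps = 1e-6
derivs = []
for i in range(theta0.size):
    dtheta = np.zeros_like(theta0)
    dtheta[i] = eps
    derivs.append((signal(t, theta0 + dtheta) - signal(t, theta0 - dtheta)) / (2 * eps))
D = np.array(derivs)

fisher = (D @ D.T) / sigma**2        # Fisher information matrix
cov = np.linalg.inv(fisher)          # Cramer-Rao bound on the covariance
print("1-sigma lower bounds:", np.sqrt(np.diag(cov)))
```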

  20. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  1. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    Science.gov (United States)

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
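A simplified sketch of the constrained convex problem described above: estimating non-negative, sum-to-one mixture weights that best reproduce a measured profile from a set of basis conformer profiles. A Gaussian-noise least-squares objective stands in for the paper's full MLE model, and the synthetic profiles are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
q = np.linspace(0.01, 0.5, 200)                      # scattering vector grid
basis = np.column_stack([np.exp(-(q * s) ** 2) for s in (8.0, 14.0, 22.0)])
true_w = np.array([0.6, 0.3, 0.1])
measured = basis @ true_w + 0.002 * rng.standard_normal(q.size)

def objective(w):
    # Sum of squared residuals between modeled and measured intensities
    return np.sum((basis @ w - measured) ** 2)

res = minimize(objective, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
print("estimated relative abundances:", np.round(res.x, 3))
```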

  2. Subsurface Biogeochemistry of Actinides

    Energy Technology Data Exchange (ETDEWEB)

    Kersting, Annie B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Univ. Relations and Science Education; Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Glenn T. Seaborg Inst.

    2016-06-29

    A major scientific challenge in environmental sciences is to identify the dominant processes controlling actinide transport in the environment. It is estimated that currently, over 2200 metric tons of plutonium (Pu) have been deposited in the subsurface worldwide, a number that increases yearly with additional spent nuclear fuel (Ewing et al., 2010). Plutonium has been shown to migrate on the scale of kilometers, giving way to a critical concern that the fundamental biogeochemical processes that control its behavior in the subsurface are not well understood (Kersting et al., 1999; Novikov et al., 2006; Santschi et al., 2002). Neptunium (Np) is less prevalent in the environment; however, it is predicted to be a significant long-term dose contributor in high-level nuclear waste. Our focus on Np chemistry in this Science Plan is intended to help formulate a better understanding of Pu redox transformations in the environment and clarify the differences between the two long-lived actinides. The research approach of our Science Plan combines (1) Fundamental Mechanistic Studies that identify and quantify biogeochemical processes that control actinide behavior in solution and on solids, (2) Field Integration Studies that investigate the transport characteristics of Pu and test our conceptual understanding of actinide transport, and (3) Actinide Research Capabilities that allow us to achieve the objectives of this Scientific Focus Area (SFA) and provide new opportunities for advancing actinide environmental chemistry. These three Research Thrusts form the basis of our SFA Science Program (Figure 1).

  3. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    Science.gov (United States)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until they fail by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  4. A new method for estimating the probable maximum hail loss of a building portfolio based on hailfall intensity determined by radar measurements

    Science.gov (United States)

    Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.

    2003-04-01

    Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available from some events while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damages are different to footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated with footprints derived from historical damages being moved around. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceeding frequency curve, which can be used to derive the PML.
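An illustrative sketch (with made-up loss and frequency numbers) of the final step described above: combining a scenario loss set with event return periods into an exceedance-frequency curve and reading off the loss at the 1-in-250-year return period:

```python
import numpy as np

scenario_loss = np.array([12, 35, 60, 48, 150, 220, 90, 300, 75, 400.0])     # loss per scenario [MCHF]
annual_freq = np.array([0.5, 0.2, 0.1, 0.12, 0.02, 0.01, 0.05, 0.005, 0.08, 0.004])

order = np.argsort(scenario_loss)[::-1]              # sort losses descending
loss_sorted = scenario_loss[order]
exceed_freq = np.cumsum(annual_freq[order])          # annual frequency of exceeding each loss

target_freq = 1.0 / 250.0                            # PML return period
pml = np.interp(target_freq, exceed_freq, loss_sorted)
print(f"estimated PML (250-year loss): {pml:.0f} MCHF")
```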

  5. Absorption and scattering coefficients estimation in two-dimensional participating media using the generalized maximum entropy and Levenberg-Marquardt methods

    International Nuclear Information System (INIS)

    Berrocal T, Mariella J.; Roberty, Nilson C.; Silva Neto, Antonio J.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    The solution of inverse problems in participating media, where there is emission, absorption and scattering of radiation, has several applications in engineering and medicine. The objective of this work is to estimate the absorption and scattering coefficients in two-dimensional heterogeneous participating media, using the Generalized Maximum Entropy and Levenberg-Marquardt methods independently. Both methods are based on the solution of the direct problem, which is modeled by the Boltzmann equation in Cartesian geometry. Some test cases are presented. (author)

  6. User's guide: Nimbus-7 Earth radiation budget narrow-field-of-view products. Scene radiance tape products, sorting into angular bins products, and maximum likelihood cloud estimation products

    Science.gov (United States)

    Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard

    1990-01-01

    The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB-produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.

  7. Towards the prediction of pre-mining stresses in the European continent. [Estimates of vertical and probable maximum lateral stress in Europe]

    Energy Technology Data Exchange (ETDEWEB)

    Blackwood, R. L.

    1980-05-15

    There are now available sufficient data from in-situ, pre-mining stress measurements to allow a first attempt at predicting the maximum stress magnitudes likely to occur in a given mining context. The sub-horizontal (lateral) stress generally dominates the stress field, becoming critical to stope stability in many cases. For cut-and-fill mining in particular, where developed fill pressures are influenced by lateral displacement of pillars or stope backs, extraction maximization planning by mathematical modelling techniques demands the best available estimate of pre-mining stresses. While field measurements are still essential for this purpose, in the present paper it is suggested that the worst stress case can be predicted for preliminary design or feasibility study purposes. In the European continent the vertical component of pre-mining stress may be estimated by adding 2 MPa to the pressure due to overburden weight. The maximum lateral stress likely to be encountered is about 57 MPa at depths of some 800 m to 1000 m below the surface.
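A minimal sketch of the rule of thumb quoted above for preliminary design, with the average overburden density below being an assumed value:

```python
# Vertical pre-mining stress ~ overburden pressure + 2 MPa (European continent,
# per the abstract); maximum lateral stress reported near ~57 MPa at 800-1000 m.
RHO = 2700.0      # assumed average overburden density [kg/m^3]
G = 9.81          # gravitational acceleration [m/s^2]

def vertical_stress_mpa(depth_m: float) -> float:
    """Estimated vertical pre-mining stress [MPa] at a given depth."""
    return RHO * G * depth_m / 1e6 + 2.0

for depth in (400, 800, 1000):
    print(f"{depth:>5} m: sigma_v ~ {vertical_stress_mpa(depth):.1f} MPa")
```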

  8. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  9. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  10. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    Science.gov (United States)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time for several individuals; they are said to be incomplete if the individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
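Evaluating the fitted model reported above is straightforward; in the sketch below, Ŷ* is the (transformed) response of the random effects model and the SO4 and NO3 input values are illustrative only:

```python
# Apply the reported model: Y* = 0.41276446 - 0.00107302 * X1 + 0.00215470 * X2
# where X1 is the SO4 level and X2 is the NO3 level (units as in the study).
def y_star(x1_so4: float, x2_no3: float) -> float:
    return 0.41276446 - 0.00107302 * x1_so4 + 0.00215470 * x2_no3

print(y_star(x1_so4=10.0, x2_no3=5.0))   # illustrative inputs
```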

  11. Estimating Daily Maximum and Minimum Land Air Surface Temperature Using MODIS Land Surface Temperature Data and Ground Truth Data in Northern Vietnam

    Directory of Open Access Journals (Sweden)

    Phan Thanh Noi

    2016-12-01

    This study aims to evaluate quantitatively the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous studies for Vietnam have integrated both TERRA and AQUA daytime and nighttime LST for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective in describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared, and principal component analysis (PCA) of 14 variables (including the four LST variables, NDVI, elevation, latitude, longitude, day length in hours, Julian day, and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations depend on time and regional topography. The best results for Ta-max and Ta-min estimation were achieved when we combined both daytime and nighttime LST of TERRA and AQUA and data from the topography analysis.
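A hypothetical sketch of one of the regression models mentioned above: ordinary least squares relating daily maximum air temperature to the four MODIS LST variables plus elevation. The data are random placeholders standing in for station/LST pairs, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
X = np.column_stack([
    rng.uniform(5, 40, n),     # TERRA daytime LST [degC]
    rng.uniform(0, 25, n),     # TERRA nighttime LST [degC]
    rng.uniform(5, 40, n),     # AQUA daytime LST [degC]
    rng.uniform(0, 25, n),     # AQUA nighttime LST [degC]
    rng.uniform(100, 1500, n), # elevation [m]
])
# Synthetic "ground truth" Ta-max with noise (illustrative relation only)
ta_max = 0.6 * X[:, 0] + 0.2 * X[:, 2] - 0.003 * X[:, 4] + 2.0 + rng.normal(0, 1.5, n)

A = np.column_stack([X, np.ones(n)])        # add intercept column
coef, *_ = np.linalg.lstsq(A, ta_max, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - ta_max) ** 2))
print("coefficients:", np.round(coef, 3), " RMSE:", round(rmse, 2), "degC")
```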

  12. Methodology to estimate the cost of the severe accidents risk / maximum benefit; Metodologia para estimar el costo del riesgo de accidentes severos / beneficio maximo

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, G.; Flores, R. M.; Vega, E., E-mail: gozalo.mendoza@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    For programs and activities to manage aging effects, any changes to plant operations, inspections, maintenance activities, systems and administrative control procedures during the renewal period that are designed to manage the effects of aging as required by 10 CFR Part 54, and that could impact the environment, should be characterized. Environmental impacts significantly different from those described in the final environmental statement for the current operating license should be described in detail. When complying with the requirements of a license renewal application, the Severe Accident Mitigation Alternatives (SAMA) analysis is contained in a supplement to the environmental report of the plant that meets the requirements of 10 CFR Part 51. In this paper, the methodology for estimating the cost of severe accident risk is established and discussed; it is then used to identify and select alternatives for severe accident mitigation, which are analyzed to estimate the maximum benefit that an alternative could achieve if it eliminated all risk. The cost of severe accident risk is estimated using the regulatory analysis techniques of the US Nuclear Regulatory Commission (NRC). The ultimate goal of implementing the methodology is to identify SAMA candidates that have the potential to reduce severe accident risk and to determine whether the implementation of each candidate is cost-effective. (Author)

  13. Project W-026, Waste Receiving and Processing (WRAP) Facility Module 1: Maximum possible fire loss (MPFL) decontamination and cleanup estimates. Revision 1

    International Nuclear Information System (INIS)

    Hinkle, A.W.; Jacobsen, P.H.; Lucas, D.R.

    1994-01-01

    Project W-026, Waste Receiving and Processing (WRAP) Facility Module 1, a 1991 Line Item, is planned for completion and start of operations in the spring of 1997. WRAP Module 1 will have the capability to characterize and repackage newly generated, retrieved and stored transuranic (TRU), TRU mixed, and suspect TRU waste for shipment to the Waste Isolation Pilot Plant (WIPP). In addition, the WRAP Facility Module 1 will have the capability to characterize low-level mixed waste for treatment in WRAP Module 2A. This report documents the assumptions and cost estimates for decontamination and clean-up of a maximum possible fire loss (MPFL) as defined by DOE Order 5480.7A, FIRE PROTECTION. The Order defines MPFL as the value of property, excluding land, within a fire area, unless a fire hazards analysis demonstrates a lesser (or greater) loss potential. This assumes failure of both automatic fire suppression systems and manual fire fighting efforts. Estimates were developed for demolition, disposal, decontamination, and rebuilding. Total costs were estimated to be approximately $98M.

  14. Numerical estimates of the maximum sustainable pore pressure in anticline formations using the tensor based concept of pore pressure-stress coupling

    Directory of Open Access Journals (Sweden)

    Andreas Eckert

    2015-02-01

    The advanced tensor based concept of pore pressure-stress coupling is used to provide pre-injection analytical estimates of the maximum sustainable pore pressure change, ΔPc, for fluid injection scenarios into generic anticline geometries. The heterogeneous stress distribution for different prevailing stress regimes, in combination with the Young's modulus (E) contrast between the injection layer and the cap rock and the interbedding friction coefficient, μ, may result in large spatial and directional differences of ΔPc. A single value characterizing the cap rock, as for horizontally layered injection scenarios, is not obtained. It is observed that a higher Young's modulus in the cap rock and/or a weak mechanical coupling between layers amplifies the maximum and minimum ΔPc values in the valley and limb, respectively. These differences in ΔPc imposed by E and μ are further amplified by different stress regimes. The more compressional the stress regime is, the larger the differences between the maximum and minimum ΔPc values become. The results of this study show that, in general, compressional stress regimes yield the largest magnitudes of ΔPc and extensional stress regimes provide the lowest values of ΔPc for anticline formations. Yet this conclusion has to be considered with care when folded anticline layers are characterized by flexural slip and the friction coefficient between layers is low, i.e. μ = 0.1. For such cases of weak mechanical coupling, ΔPc magnitudes may range from 0 MPa to 27 MPa, indicating imminent risk of fault reactivation in the cap rock.

  15. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared

  16. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
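A simplified illustration of supervised material decomposition as described above: given a calibrated basis matrix of per-material attenuation in each energy bin, solve for material concentrations. A plain non-negative least-squares step is used here instead of the paper's maximum a posteriori estimator, and all attenuation values are invented:

```python
import numpy as np
from scipy.optimize import nnls

n_bins = 5
basis = np.array([            # rows: energy bins; columns: Gd, Ca, water
    [2.10, 0.90, 0.25],
    [1.60, 0.70, 0.23],
    [3.10, 0.55, 0.21],       # bin above the Gd k-edge (illustrative jump)
    [2.60, 0.45, 0.20],
    [2.20, 0.40, 0.19],
])
true_conc = np.array([0.03, 0.10, 0.87])          # Gd, Ca, water fractions
measured = basis @ true_conc + 0.01 * np.random.default_rng(4).standard_normal(n_bins)

conc, _ = nnls(basis, measured)                   # non-negative least squares
print("decomposed concentrations (Gd, Ca, water):", np.round(conc, 3))
```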

  17. Quantifying the Strength of General Factors in Psychopathology: A Comparison of CFA with Maximum Likelihood Estimation, BSEM, and ESEM/EFA Bifactor Approaches.

    Science.gov (United States)

    Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis

    2018-05-22

    Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
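For reference, the two p-factor strength indices compared above can be computed directly from a standardized bifactor loading matrix, as in the sketch below (the loadings are hypothetical):

```python
import numpy as np

general = np.array([0.55, 0.60, 0.50, 0.45, 0.65, 0.58])   # general-factor loadings
group = np.array([                                          # group-factor loadings
    [0.40, 0.35, 0.30, 0.00, 0.00, 0.00],                   # e.g. internalizing
    [0.00, 0.00, 0.00, 0.45, 0.30, 0.35],                   # e.g. externalizing
])
uniqueness = 1.0 - general**2 - np.sum(group**2, axis=0)    # residual variances

# Omega hierarchical: general-factor variance over total variance
total_var = np.sum(general) ** 2 + np.sum(np.sum(group, axis=1) ** 2) + np.sum(uniqueness)
omega_h = np.sum(general) ** 2 / total_var

# Explained common variance: general-factor share of the common variance
ecv = np.sum(general**2) / (np.sum(general**2) + np.sum(group**2))
print(f"omega hierarchical = {omega_h:.3f}, ECV = {ecv:.3f}")
```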

  18. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    Directory of Open Access Journals (Sweden)

    Edwin J. Niklitschek

    2016-10-01

    Background: Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures, may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods: We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011) from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0–4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios that simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results: Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain, than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0
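A simplified sketch of the ML-MM idea discussed above: with multivariate-normal baseline signatures assumed known for each sampled nursery, the mixing proportions are estimated by maximizing the mixture likelihood of the mixed-stock signatures. The data are simulated placeholders, and the full model in the paper additionally handles unknown baselines and unsampled sources:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
means = np.array([[1.0, 0.5], [3.0, 2.0], [0.0, 3.5]])   # baseline centroids (3 nurseries)
cov = np.eye(2) * 0.4                                     # common covariance (assumed)
true_p = np.array([0.5, 0.3, 0.2])

# Simulate a mixed stock of 300 fish drawn from the three nurseries
labels = rng.choice(3, size=300, p=true_p)
mixed = means[labels] + rng.multivariate_normal(np.zeros(2), cov, size=300)

# Baseline density of each fish under each nursery signature
dens = np.column_stack([multivariate_normal(m, cov).pdf(mixed) for m in means])

def neg_loglike(p):
    return -np.sum(np.log(dens @ p + 1e-300))

res = minimize(neg_loglike, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print("estimated mixing proportions:", np.round(res.x, 3))
```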

  19. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    Science.gov (United States)

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats. (Mediterranean lagoons) Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI nursery signatures improved reliability

  20. Subsurface temperature estimation from climatology and satellite SST for the sea around the Korean Peninsula

    Science.gov (United States)

    Kim, Bong-Guk; Cho, Yang-Ki; Kim, Bong-Gwan; Kim, Young-Gi; Jung, Ji-Hoon

    2015-04-01

    Subsurface temperature plays an important role in determining heat content in the upper ocean, which is crucial for long-term and short-term weather systems. Furthermore, subsurface temperature significantly affects ocean ecology. In this study, a simple and practical algorithm has been proposed. Assuming that subsurface temperature changes are proportional to surface heating or cooling, the subsurface temperature at each depth (Sub_temp) can be estimated as Sub_temp(i) = Clm_temp(i) + ratio(i) × dif0, where i is the depth index, Clm_temp is the temperature from climatology, dif0 is the temperature difference between satellite and climatology at the surface, and ratio is the ratio of the temperature variability at each depth to the surface temperature variability. Subsurface temperatures using this algorithm were calculated from climatology (WOA2013) and satellite SST (OSTIA) in the sea around the Korean Peninsula. Validation results against in-situ observation data show good agreement in the upper 50 m layer, with RMSE (root mean square error) less than 2 K. The RMSE is smallest, less than 1 K, in winter when the surface mixed layer is thick, and largest, about 2-3 K, in summer when the surface mixed layer is shallow. The strong thermocline and large variability of the mixed layer depth might result in the large RMSE in summer. Applying mixed layer depth information in the algorithm may improve subsurface temperature estimation in summer. Spatial-temporal details on the improvement and its causes will be discussed.
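A direct sketch of the algorithm above: the climatological profile is corrected by the surface satellite-minus-climatology difference, scaled by the depth-dependent variability ratio. All profile values below are illustrative:

```python
import numpy as np

depths = np.array([0, 10, 20, 30, 50])                 # [m]
clm_temp = np.array([18.0, 17.5, 16.8, 15.9, 14.0])    # climatology profile [degC]
ratio = np.array([1.0, 0.9, 0.7, 0.5, 0.2])            # variability ratio per depth
sst_satellite = 20.1                                    # satellite SST [degC]

dif0 = sst_satellite - clm_temp[0]                      # surface difference
sub_temp = clm_temp + ratio * dif0                      # Sub_temp(i) = Clm_temp(i) + ratio(i) * dif0
for z, t in zip(depths, sub_temp):
    print(f"{z:>3} m: {t:.2f} degC")
```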

  1. Electrical Subsurface Grounding Analysis

    International Nuclear Information System (INIS)

    J.M. Calle

    2000-01-01

    The purpose and objective of this analysis is to determine the present grounding requirements of the Exploratory Studies Facility (ESF) subsurface electrical system and to verify that the actual grounding system and devices satisfy the requirements

  2. Evaluation of Maximum a Posteriori Estimation as Data Assimilation Method for Forecasting Infiltration-Inflow Affected Urban Runoff with Radar Rainfall Input

    Directory of Open Access Journals (Sweden)

    Jonas W. Pedersen

    2016-09-01

    High quality on-line flow forecasts are useful for real-time operation of urban drainage systems and wastewater treatment plants. This requires computationally efficient models, which are continuously updated with observed data to provide good initial conditions for the forecasts. This paper presents a way of updating conceptual rainfall-runoff models using Maximum a Posteriori estimation to determine the most likely parameter constellation at the current point in time. This is done by combining information from prior parameter distributions and the model goodness of fit over a predefined period of time that precedes the forecast. The method is illustrated for an urban catchment, where flow forecasts of 0–4 h are generated by applying a lumped linear reservoir model with three cascading reservoirs. Radar rainfall observations are used as input to the model. The effects of different prior standard deviations and lengths of the auto-calibration period on the resulting flow forecast performance are evaluated. We were able to demonstrate that, if properly tuned, the method leads to a significant increase in forecasting performance compared to a model without continuous auto-calibration. Delayed responses and erratic behaviour in the parameter variations are, however, observed and the choice of prior distributions and length of auto-calibration period is not straightforward.
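A conceptual sketch of the updating scheme described above: the parameters minimizing (negative log-likelihood over a recent auto-calibration window) + (negative log of a Gaussian prior) give the MAP estimate used to initialize the forecast. A single linear reservoir stands in for the paper's three cascading reservoirs, and all data, priors and noise scales are invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
dt = 10.0 / 60.0                                   # time step [h]
rain = rng.gamma(0.3, 2.0, size=144)               # radar rainfall over the window [mm/h]

def simulate_flow(k, rain, dt):
    """Single linear reservoir: dS/dt = P - S/k, Q = S/k (explicit Euler)."""
    s, q = 0.0, []
    for p in rain:
        s += (p - s / k) * dt
        q.append(s / k)
    return np.array(q)

obs_flow = simulate_flow(3.0, rain, dt) + rng.normal(0, 0.05, rain.size)

prior_mean, prior_sd, obs_sd = 2.5, 1.0, 0.05      # assumed prior and noise scales

def neg_log_posterior(theta):
    k = theta[0]
    resid = obs_flow - simulate_flow(k, rain, dt)
    return 0.5 * np.sum(resid**2) / obs_sd**2 + 0.5 * ((k - prior_mean) / prior_sd) ** 2

res = minimize(neg_log_posterior, x0=[prior_mean], bounds=[(0.1, 20.0)])
print("MAP reservoir constant k:", round(res.x[0], 2), "h")
```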

  3. Site Recommendation Subsurface Layout

    International Nuclear Information System (INIS)

    C.L. Linden

    2000-01-01

    The purpose of this analysis is to develop a Subsurface Facility layout that is capable of accommodating the statutory capacity of 70,000 metric tons of uranium (MTU), as well as an option to expand the inventory capacity, if authorized, to 97,000 MTU. The layout configuration also requires a degree of flexibility to accommodate potential changes in site conditions or program requirements. The objective of this analysis is to provide a conceptual design of the Subsurface Facility sufficient to support the development of the Subsurface Facility System Description Document (CRWMS M&O 2000e) and the "Emplacement Drift System Description Document" (CRWMS M&O 2000i). As well, this analysis provides input to the Site Recommendation Consideration Report. The scope of this analysis includes: (1) Evaluation of the existing facilities and their integration into the Subsurface Facility design. (2) Identification and incorporation of factors influencing Subsurface Facility design, such as geological constraints, thermal loading, constructibility, subsurface ventilation, drainage control, radiological considerations, and the Test and Evaluation Facilities. (3) Development of a layout showing an available area in the primary area sufficient to support both the waste inventories and individual layouts showing the emplacement area required for 70,000 MTU and, if authorized, 97,000 MTU.

  4. Applying tracer techniques to NPP liquid effluents for estimating the maximum concentration of soluble pollutants in a man-made canal

    International Nuclear Information System (INIS)

    Varlam, Carmen; Stefanescu, Ioan; Varlam, Mihai; Raceanu, Mircea; Enache, Adrian; Faurescu, Ionut; Patrascu, Vasile; Bucur, Cristina

    2006-01-01

    Full text: The possibility of a contamination agent being accidentally or intentionally spilled upstream from a water supply is a constant concern to those diverting and using water from a channel. A method of rapidly estimating the travel time or dispersion is needed for pollution control or warning systems on channels where data are scarce. Travel time and mixing of water within a stream are basic streamflow characteristics needed in order to predict the rate of movement and dilution of pollutants that could be introduced into the stream. In this study we propose using tritiated liquid effluents from a CANDU-type nuclear power plant as a tracer to study hydrodynamics in the Danube-Black Sea Canal. This canal is ideal for this kind of study because wastewater evacuations occur occasionally due to technical operations of the nuclear power plant. Tritiated water can be used to simulate the transport and dispersion of solutes in the Danube-Black Sea Canal because it has the same physical characteristics as the water. Measured tracer-response curves produced from injection of a known amount of soluble tracer provide an efficient method of obtaining the necessary data. This method can estimate: (1) the rate of movement of a solute through the canal reach; (2) the rate of attenuation of the peak concentration of a conservative solute in time; and (3) the length of time required for the solute plume to pass a point in the canal. This paper presents the mixing length calculation for particular conditions (lateral branch of the canal, and lateral injection of wastewater from the nuclear power plant). A study of published experimentally obtained formulas was used to determine the proper mixing length. Simultaneous measurements in different locations of the canal confirm the beginning of the experiment. Another result used in a further experiment concerns the tritium level along the Danube-Black Sea Canal. We measured tritium activity concentration in water sampled along the Canal between July

  5. The Serpentinite Subsurface Microbiome

    Science.gov (United States)

    Schrenk, M. O.; Nelson, B. Y.; Brazelton, W. J.

    2011-12-01

    Microbial habitats hosted in ultramafic rocks constitute substantial, globally-distributed portions of the subsurface biosphere, occurring both on the continents and beneath the seafloor. The aqueous alteration of ultramafics, in a process known as serpentinization, creates energy rich, high pH conditions, with low concentrations of inorganic carbon which place fundamental constraints upon microbial metabolism and physiology. Despite their importance, very few studies have attempted to directly access and quantify microbial activities and distributions in the serpentinite subsurface microbiome. We have initiated microbiological studies of subsurface seeps and rocks at three separate continental sites of serpentinization in Newfoundland, Italy, and California and compared these results to previous analyses of the Lost City field, near the Mid-Atlantic Ridge. In all cases, microbial cell densities in seep fluids are extremely low, ranging from approximately 100,000 to less than 1,000 cells per milliliter. Culture-independent analyses of 16S rRNA genes revealed low-diversity microbial communities related to Gram-positive Firmicutes and hydrogen-oxidizing bacteria. Interestingly, unlike Lost City, there has been little evidence for significant archaeal populations in the continental subsurface to date. Culturing studies at the sites yielded numerous alkaliphilic isolates on nutrient-rich agar and putative iron-reducing bacteria in anaerobic incubations, many of which are related to known alkaliphilic and subsurface isolates. Finally, metagenomic data reinforce the culturing results, indicating the presence of genes associated with organotrophy, hydrogen oxidation, and iron reduction in seep fluid samples. Our data provide insight into the lifestyles of serpentinite subsurface microbial populations and targets for future quantitative exploration using both biochemical and geochemical approaches.

  6. Estimating the Reactivation Potential of Pre-Existing Fractures in Subsurface Granitoids from Outcrop Analogues and in-Situ Stress Modeling: Implications for EGS Reservoir Stimulation with an Example from Thuringia (Central Germany)

    Science.gov (United States)

    Kasch, N.; Ustaszewski, K. M.; Siegburg, M.; Navabpour, P.; Hesse, G.

    2014-12-01

    The Mid-German Crystalline Rise (MGCR) in Thuringia (central Germany) is part of the European Variscan orogen and hosts large extents of Visean granites (c. 350 Ma), locally overlain by up to 3 km of Early Permian to Mid-Triassic volcanic and sedimentary rocks. A geothermal gradient of 36°C km-1 suggests that such subsurface granites form an economically viable hot dry rock reservoir at > 4 km depth. In order to assess the likelihood of reactivating any pre-existing fractures during hydraulic reservoir stimulation, slip and dilation tendency analyses (Morris et al. 1996) were carried out. For this purpose, we determined orientations of pre-existing fractures in 14 granite exposures along the southern border fault of an MGCR basement high. Additionally, the strike of 192 Permian magmatic dikes affecting the granite was considered. This analysis revealed a prevalence of NW-SE-striking fractures (mainly joints, extension veins, dikes and subordinately brittle faults) with a maximum at 030/70 (dip azimuth/dip). Borehole data and earthquake focal mechanisms reveal a maximum horizontal stress SHmax trending N150°E and a strike-slip regime. Effective in-situ stress magnitudes at 4.5 km depth, assuming hydrostatic conditions and frictional equilibrium along pre-existing fractures with a friction coefficient of 0.85 yielded 230 and 110 MPa for SHmax and Shmin, respectively. In this stress field, fractures with the prevailing orientations show a high tendency of becoming reactivated as dextral strike-slip faults if stimulated hydraulically. To ensure that a stimulation well creates fluid connectivity on a reservoir volume as large as possible rather than dissipating fluids along existing fractures, it should follow a trajectory at the highest possible angle to the orientation of prevailing fractures, i.e. subhorizontal and NE-SW-oriented. References: Morris, A., D. A. Ferrill, and D. B. Henderson (1996), Slip-tendency analysis and fault reactivation, Geology, 24, 275-278.
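A sketch of the slip and dilation tendency computation cited above (Morris et al. 1996): resolve the stress tensor onto a fracture plane and form Ts = τ/σn and Td = (σ1 − σn)/(σ1 − σ3). The SHmax and Shmin magnitudes follow the abstract; the intermediate stress and the example plane orientation are assumptions:

```python
import numpy as np

s1, s2, s3 = 230.0, 170.0, 110.0          # effective principal stresses [MPa] (s2 assumed)
sigma = np.diag([s1, s2, s3])             # stress tensor in principal axes

def tendencies(normal):
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    traction = sigma @ n
    sigma_n = n @ traction                                            # normal stress on the plane
    tau = np.sqrt(np.maximum(traction @ traction - sigma_n**2, 0.0))  # resolved shear stress
    ts = tau / sigma_n                                                # slip tendency
    td = (s1 - sigma_n) / (s1 - s3)                                   # dilation tendency
    return ts, td

# Example plane whose normal lies 60 degrees from the SHmax (s1) axis
ang = np.radians(60.0)
ts, td = tendencies([np.cos(ang), 0.0, np.sin(ang)])
print(f"slip tendency = {ts:.2f}, dilation tendency = {td:.2f}")
```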

  7. Regional Inversion of the Maximum Carboxylation Rate (Vcmax) through the Sunlit Light Use Efficiency Estimated Using the Corrected Photochemical Reflectance Ratio Derived from MODIS Data

    Science.gov (United States)

    Zheng, T.; Chen, J. M.

    2016-12-01

    The maximum carboxylation rate (Vcmax), despite its importance in terrestrial carbon cycle modelling, remains challenging to obtain at large scales. In this study, an attempt has been made to invert Vcmax using the gross primary productivity from sunlit leaves (GPPsun), with the physiological basis that the photosynthesis rate of leaves exposed to high solar radiation is mainly determined by Vcmax. Since GPPsun can be calculated through the sunlit light use efficiency (ɛsun), the main focus becomes the acquisition of ɛsun. Previous studies using site-level reflectance observations have shown the ability of the photochemical reflectance ratio (PRR, defined as the ratio between the reflectance from an effective band centered around 531 nm and a reference band) to track the variation of ɛsun for an evergreen coniferous stand and a deciduous broadleaf stand separately, and the potential of an NDVI-corrected PRR (NPRR, defined as the product of NDVI and PRR) to produce a general expression describing the NPRR-ɛsun relationship across different plant functional types. In this study, a significant correlation (R2 = 0.67, p<0.001) between the MODIS-derived NPRR and the site-level ɛsun calculated using flux data for four Canadian flux sites has been found for the year 2010. For validation purposes, the ɛsun in 2009 for the same sites was calculated using the MODIS NPRR and the expression from 2010. The MODIS-derived ɛsun matches well with the flux-calculated ɛsun (R2 = 0.57, p<0.001). The same expression has then been applied over a 217 × 193 km area in Saskatchewan, Canada to obtain ɛsun and thus GPPsun for the region during the growing season in 2008 (day 150 to day 260). The Vcmax for the region is inverted using the GPPsun and the result is validated at three flux sites inside the area. The results show that the approach is able to obtain good estimates of Vcmax values with R2 = 0.68 and RMSE = 8.8 μmol m-2 s-1.
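A minimal sketch of the empirical step described above: fit a linear ɛsun–NPRR relation for one year and apply it to NPRR values from another year. All values are illustrative placeholders, not the MODIS or flux-tower data:

```python
import numpy as np

nprr_2010 = np.array([0.22, 0.30, 0.41, 0.35, 0.27, 0.45, 0.38, 0.31])
eps_sun_2010 = np.array([0.55, 0.75, 1.05, 0.90, 0.68, 1.15, 0.95, 0.80])   # sunlit LUE [gC/MJ]

a, b = np.polyfit(nprr_2010, eps_sun_2010, 1)      # eps_sun ~ a * NPRR + b

nprr_2009 = np.array([0.25, 0.36, 0.42])
eps_sun_2009 = a * nprr_2009 + b                   # predicted sunlit LUE for the other year
print(np.round(eps_sun_2009, 2))
```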

  8. SUBSURFACE EMPLACEMENT TRANSPORTATION SYSTEM

    International Nuclear Information System (INIS)

    Wilson, T.; Novotny, R.

    1999-01-01

    The objective of this analysis is to identify issues and criteria that apply to the design of the Subsurface Emplacement Transportation System (SET). The SET consists of the track used by the waste package handling equipment, the conductors and related equipment used to supply electrical power to that equipment, and the instrumentation and controls used to monitor and operate those track and power supply systems. Major considerations of this analysis include: (1) Operational life of the SET; (2) Geometric constraints on the track layout; (3) Operating loads on the track; (4) Environmentally induced loads on the track; (5) Power supply (electrification) requirements; and (6) Instrumentation and control requirements. This analysis will provide the basis for development of the system description document (SDD) for the SET. This analysis also defines the interfaces that need to be considered in the design of the SET. These interfaces include, but are not limited to, the following: (1) Waste handling building; (2) Monitored Geologic Repository (MGR) surface site layout; (3) Waste Emplacement System (WES); (4) Waste Retrieval System (WRS); (5) Ground Control System (GCS); (6) Ex-Container System (XCS); (7) Subsurface Electrical Distribution System (SED); (8) MGR Operations Monitoring and Control System (OMC); (9) Subsurface Facility System (SFS); (10) Subsurface Fire Protection System (SFR); (11) Performance Confirmation Emplacement Drift Monitoring System (PCM); and (12) Backfill Emplacement System (BES)

  9. Ocean (de)oxygenation from the Last Glacial Maximum to the twenty-first century: insights from Earth System models

    Science.gov (United States)

    Bopp, L.; Resplandy, L.; Untersee, A.; Le Mezo, P.; Kageyama, M.

    2017-08-01

    All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.

  10. Ocean (de)oxygenation from the Last Glacial Maximum to the twenty-first century: insights from Earth System models.

    Science.gov (United States)

    Bopp, L; Resplandy, L; Untersee, A; Le Mezo, P; Kageyama, M

    2017-09-13

    All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'. © 2017 The Author(s).

  11. THE ESTIMATION OF SOME CHANGES OF SOIL PHYSICAL STATE UNDER THE EFFECT OF LAND RECLAMATION TECHNOLOGIES, IN THE CONDITIONS OF SUBSURFACE DRAINAGE IN THE BAIA-MOLDOVA DEPRESSION

    Directory of Open Access Journals (Sweden)

    V. Moca

    2006-10-01

    Full Text Available In the pedo-climatic conditions of Suceava County, which extends over a total surface of 855,300 ha, the share of agricultural land affected by temporary or permanent excess humidity varies from south to north and from east to west between 30% and 40%, which means almost 100,000 ha. On these soils with groundwater or pluvial excess, hydro-ameliorative drainage systems have been installed, associated with complex agro-ameliorative works. To estimate the long-term effect of the subsurface drainage associated with the agro-pedo-ameliorative works on some physical and hydrophysical characteristics, the soil and environmental conditions of the Baia field were analyzed. For this reason, we analyzed the agrophysical conditions of the albic pseudogleyed luvisol (SRCS-1980), respectively the albic stagnic-glossic luvosol (SRTS-2003), drained and cultivated after a period of 28 years (1978-2006) of use. The data obtained regarding the water balance and the evolution of the major physical properties of the soil, under the influence of the drainage and amelioration works, showed in the first stage (1978-1986) a general improvement of the aero-hydric state and of the physical-chemical condition. In the next two experimental cycles of 10 years each, an increase in the degree of compaction of the drained and cultivated soil was noticed over the 0-30 cm depth, from weakly loose to moderately compacted, depending on the persistence of the reclamation technologies.

  12. SOLID OXYGEN SOURCE FOR BIOREMEDIATION IN SUBSURFACE SOILS

    Science.gov (United States)

    Sodium percarbonate was encapsulated in poly(vinylidene chloride) to determine its potential as a slow-release oxygen source for biodegradation of contaminants in subsurface soils. In laboratory studies under aqueous conditions, the encapsulated sodium percarbonate was estimate...

  13. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers.

    NARCIS (Netherlands)

    Zhao, W.; Cella, M.; Pasqua, O. Della; Burger, D.M.; Jacqz-Aigrain, E.

    2012-01-01

    WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in

  14. Subsurface quality assurance practices

    International Nuclear Information System (INIS)

    1987-08-01

    This report addresses only the concept of applying Nuclear Quality Assurance (NQA) practices to repository shaft and subsurface design and construction; how NQA will be applied; and the level of detail required in the documentation for construction of a shaft and subsurface repository in contrast to the level of detail required in the documentation for construction of a traditional mine. This study determined that NQA practices are viable, attainable, as well as required. The study identified the appropriate NQA criteria and the repository's major structures, systems, items, and activities to which the criteria are applicable. A QA plan, for design and construction, and a list of documentation, for construction, are presented. 7 refs., 1 fig., 18 tabs

  15. A comparison of maximum entropy and maximum likelihood estimation

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.

    1999-01-01

    Data on entrepreneurship on Dutch arable farms were processed using two estimation approaches, which were compared with each other in terms of predictive accuracy and price elasticity.

  16. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    The objective is to minimize the processing time and computer memory required to carry out the inversion. The study area is connected to the mainland by two long bridges. In this approach, the model converges when the squared sum of the differences is minimized.

  17. Method of estimating maximum VOC concentration in void volume of vented waste drums using limited sampling data: Application in transuranic waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Connolly, M.J.

    1995-01-01

    A test program has been conducted at the Idaho National Engineering Laboratory to demonstrate that the concentration of volatile organic compounds (VOCs) within the innermost layer of confinement in a vented waste drum can be estimated using a model incorporating diffusion and permeation transport principles as well as limited waste drum sampling data. The model consists of a series of material balance equations describing steady-state VOC transport from each distinct void volume in the drum. The primary model input is the measured drum headspace VOC concentration. Model parameters are determined or estimated based on available process knowledge. The model effectiveness in estimating VOC concentration in the headspace of the innermost layer of confinement was examined for vented waste drums containing different waste types and configurations. This paper summarizes the experimental measurements and model predictions in vented transuranic waste drums containing solidified sludges and solid waste
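    As a hedged illustration of the steady-state material-balance idea (a series-resistance analogy, not the report's actual equations), the innermost-layer concentration can be back-calculated from the measured headspace concentration when the same VOC molar flow crosses every layer of confinement; all names and values below are hypothetical.

```python
def innermost_concentration(c_headspace, molar_flow, layer_conductances):
    """Back-calculate the VOC concentration in the innermost confinement layer.

    c_headspace        -- measured VOC concentration in the drum headspace (ppmv)
    molar_flow         -- steady-state VOC molar flow escaping the drum (mol/s)
    layer_conductances -- effective conductance of each layer between the
                          innermost void volume and the headspace, mol/(s*ppmv),
                          combining diffusion and permeation pathways
    """
    c = c_headspace
    for g in layer_conductances:
        c += molar_flow / g  # each layer adds a concentration drop of N / G_i
    return c

# Hypothetical numbers purely for illustration.
c_inner = innermost_concentration(
    c_headspace=120.0,                       # ppmv in drum headspace
    molar_flow=2.0e-9,                       # mol/s vented through the drum filter
    layer_conductances=[4.0e-11, 8.0e-11],   # two bag layers
)
print(f"Estimated innermost-layer concentration: {c_inner:.0f} ppmv")
```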

  18. Lower limb muscle volume estimation from maximum cross-sectional area and muscle length in cerebral palsy and typically developing individuals.

    Science.gov (United States)

    Vanmechelen, Inti M; Shortland, Adam P; Noble, Jonathan J

    2018-01-01

    Deficits in muscle volume may be a significant contributor to physical disability in young people with cerebral palsy. However, 3D measurements of muscle volume using MRI or 3D ultrasound may be difficult to make routinely in the clinic. We wished to establish whether accurate estimates of muscle volume could be made from a combination of anatomical cross-sectional area and length measurements in samples of typically developing young people and young people with bilateral cerebral palsy. Lower limb MRI scans were obtained from the lower limbs of 21 individuals with cerebral palsy (14.7 ± 3 years, 17 male) and 23 typically developing individuals (16.8 ± 3.3 years, 16 male). The volume, length and anatomical cross-sectional area were estimated for six muscles of the left lower limb. Analysis of covariance demonstrated that the relationship between the length × cross-sectional area product and volume did not differ significantly between subject groups. Linear regression analysis demonstrated that the product of anatomical cross-sectional area and length bore a strong and significant relationship to the measured muscle volume (R2 values between 0.955 and 0.988) with low standard errors of the estimate of 4.8 to 8.9%. This study demonstrates that muscle volume may be estimated accurately in typically developing individuals and individuals with cerebral palsy by a combination of anatomical cross-sectional area and muscle length. 2D ultrasound may be a convenient method of making these measurements routinely in the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.
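    A minimal sketch of the reported regression approach, fitting muscle volume to the product of anatomical cross-sectional area and length; the measurements below are hypothetical and the resulting coefficients are not those of the paper.

```python
import numpy as np

# Hypothetical per-muscle measurements.
acsa = np.array([12.0, 18.5, 25.0, 9.5, 30.2])           # anatomical CSA, cm^2
length = np.array([22.0, 25.5, 30.0, 18.0, 33.5])        # muscle length, cm
volume = np.array([180.0, 320.0, 520.0, 115.0, 690.0])   # measured volume, cm^3

x = acsa * length  # predictor: ACSA x length

# Ordinary least squares: volume ~ b1 * (ACSA x length) + b0
b1, b0 = np.polyfit(x, volume, 1)
pred = b1 * x + b0

# Coefficient of determination and standard error of the estimate (as % of mean).
ss_res = np.sum((volume - pred) ** 2)
ss_tot = np.sum((volume - volume.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
see_pct = 100.0 * np.sqrt(ss_res / (len(volume) - 2)) / volume.mean()

print(f"volume ~ {b1:.3f} * ACSA*length + {b0:.1f}  (R2 = {r2:.3f}, SEE = {see_pct:.1f}%)")
```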

  19. Subsurface structures of buried features in the lunar Procellarum region

    Science.gov (United States)

    Wang, Wenrui; Heki, Kosuke

    2017-07-01

    The Gravity Recovery and Interior Laboratory (GRAIL) mission revealed a number of features showing strong gravity anomalies without prominent topographic signatures in the lunar Procellarum region. These features, located in different geologic units, are considered to have complex subsurface structures reflecting different evolution processes. By using the GRAIL level-1 data, we estimated the free-air and Bouguer gravity anomalies in several selected regions including such intriguing features. With the three-dimensional inversion technique, we recovered subsurface density structures in these regions.

  20. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    Science.gov (United States)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Exposure to 222Rn is likely the second leading cause of lung cancer after cigarette smoking via inhalation; however, exposure through untreated groundwater is also a contributing factor to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation such that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different geological spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater 222Rn results in a leave-one-out cross-validation R2 of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled results of 222Rn concentrations show variability among Intrusive Felsic geological formations, likely due to average bedrock 238U being defined on the basis of overlying stream-sediment 238U concentrations, which are widely distributed and consistently analyzed point-source data.
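    One way to picture the anisotropic buffers described above is an ellipse oriented along a chosen azimuth that collects a geological explanatory variable around each observation. The sketch below is only a geometric illustration with hypothetical coordinates and buffer dimensions, not the authors' GIS workflow.

```python
import math

def in_elliptical_buffer(px, py, cx, cy, semi_major, semi_minor, azimuth_deg):
    """True if point (px, py) lies inside an ellipse centred at (cx, cy) whose
    major axis points along azimuth_deg (degrees clockwise from north).
    Coordinates are (easting, northing) in metres."""
    theta = math.radians(90.0 - azimuth_deg)  # azimuth -> math angle (CCW from east)
    dx, dy = px - cx, py - cy
    # Rotate the offset into the ellipse-aligned frame.
    u = dx * math.cos(theta) + dy * math.sin(theta)
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0

# Fraction of hypothetical mapped geological points captured by a
# 5 km x 2 km buffer oriented N45E around a well at the origin.
points = [(1200.0, 900.0), (-3000.0, 2500.0), (400.0, -300.0)]
inside = [in_elliptical_buffer(x, y, 0.0, 0.0, 5000.0, 2000.0, 45.0) for x, y in points]
print(sum(inside) / len(points))
```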

  1. Mechanistic assessment of hillslope transpiration controls of diel subsurface flow: a steady-state irrigation approach

    Science.gov (United States)

    H.R. Barnard; C.B. Graham; W.J. van Verseveld; J.R. Brooks; B.J. Bond; J.J. McDonnell

    2010-01-01

    Mechanistic assessment of how transpiration influences subsurface flow is necessary to advance understanding of catchment hydrology. We conducted a 24-day, steady-state irrigation experiment to quantify the relationships among soil moisture, transpiration and hillslope subsurface flow. Our objectives were to: (1) examine the time lag between maximum transpiration and...

  2. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics.

    Science.gov (United States)

    Helaers, Raphaël; Milinkovitch, Michel C

    2010-07-15

    The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA) together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2.0 gives access both to high
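    The automated model selection mentioned above relies on standard information criteria; for a candidate substitution model with k free parameters and maximized log-likelihood ln L, with n commonly taken as the number of alignment sites, the usual forms are:

```latex
\mathrm{AIC} = 2k - 2\ln L,
\qquad
\mathrm{BIC} = k\ln n - 2\ln L
```

    Lower values indicate a better penalized fit; BIC penalizes additional parameters more strongly as n grows.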

  3. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics

    Directory of Open Access Journals (Sweden)

    Milinkovitch Michel C

    2010-07-01

    Full Text Available Abstract Background The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA) together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these

  4. Subsurface Ventilation System Description Document

    Energy Technology Data Exchange (ETDEWEB)

    Eric Loros

    2001-07-25

    The Subsurface Ventilation System supports the construction and operation of the subsurface repository by providing air for personnel and equipment and temperature control for the underground areas. Although the system is located underground, some equipment and features may be housed or located above ground. The system ventilates the underground by providing ambient air from the surface throughout the subsurface development and emplacement areas. The system provides fresh air for a safe work environment and supports potential retrieval operations by ventilating and cooling emplacement drifts. The system maintains compliance within the limits established for approved air quality standards. The system maintains separate ventilation between the development and waste emplacement areas. The system shall remove a portion of the heat generated by the waste packages during preclosure to support thermal goals. The system provides temperature control by reducing drift temperature to support potential retrieval operations. The ventilation system has the capability to ventilate selected drifts during emplacement and retrieval operations. The Subsurface Facility System is the main interface with the Subsurface Ventilation System. The location of the ducting, seals, filters, fans, emplacement doors, regulators, and electronic controls are within the envelope created by the Ground Control System in the Subsurface Facility System. The Subsurface Ventilation System also interfaces with the Subsurface Electrical System for power, the Monitored Geologic Repository Operations Monitoring and Control System to ensure proper and safe operation, the Safeguards and Security System for access to the emplacement drifts, the Subsurface Fire Protection System for fire safety, the Emplacement Drift System for repository performance, and the Backfill Emplacement and Subsurface Excavation Systems to support ventilation needs.

  5. Subsurface Ventilation System Description Document

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-10-12

    The Subsurface Ventilation System supports the construction and operation of the subsurface repository by providing air for personnel and equipment and temperature control for the underground areas. Although the system is located underground, some equipment and features may be housed or located above ground. The system ventilates the underground by providing ambient air from the surface throughout the subsurface development and emplacement areas. The system provides fresh air for a safe work environment and supports potential retrieval operations by ventilating and cooling emplacement drifts. The system maintains compliance within the limits established for approved air quality standards. The system maintains separate ventilation between the development and waste emplacement areas. The system shall remove a portion of the heat generated by the waste packages during preclosure to support thermal goals. The system provides temperature control by reducing drift temperature to support potential retrieval operations. The ventilation system has the capability to ventilate selected drifts during emplacement and retrieval operations. The Subsurface Facility System is the main interface with the Subsurface Ventilation System. The location of the ducting, seals, filters, fans, emplacement doors, regulators, and electronic controls are within the envelope created by the Ground Control System in the Subsurface Facility System. The Subsurface Ventilation System also interfaces with the Subsurface Electrical System for power, the Monitored Geologic Repository Operations Monitoring and Control System to ensure proper and safe operation, the Safeguards and Security System for access to the emplacement drifts, the Subsurface Fire Protection System for fire safety, the Emplacement Drift System for repository performance, and the Backfill Emplacement and Subsurface Excavation Systems to support ventilation needs.

  6. Yucca Mountain Project Subsurface Facilities Design

    International Nuclear Information System (INIS)

    Linden, A.; Saunders, R.S.; Boutin, R.J.; Harrington, P.G.; Lachman, K.D.; Trautner, L.J.

    2002-01-01

    Four units of the Topopah Spring formation (volcanic tuff) are considered for the proposed repository: the upper lithophysal, the middle non-lithophysal, the lower lithophysal, and the lower non-lithophysal. Yucca Mountain was recently designated the site for a proposed repository to dispose of spent nuclear fuel and high-level radioactive waste. Work is proceeding to advance the design of subsurface facilities to accommodate emplacing waste packages in the proposed repository. This paper summarizes recent progress in the design of the subsurface layout of the proposed repository. The original Site Recommendation (SR) concept for the subsurface design located the repository largely (approximately 73%) within the lower lithophysal zone of the Topopah Spring formation. The area characterized as suitable for emplacement in the Site Recommendation consisted of the primary upper block, the lower block, and the southern upper block extension. The primary upper block accommodated the mandated 70,000 metric tons of heavy metal (MTHM) at a 1.45 kW/m linear heat load. Based on further study of the Site Recommendation concept, the proposed repository siting area footprint was modified to make maximum use of available site characterization data and thus reduce uncertainties associated with performance assessment. As a result of this study, a modified repository footprint has been proposed and is presently being reviewed for acceptance by the DOE. A panel design concept was developed to reduce overall costs and shorten the overall emplacement schedule. This concept provides flexibility to adjust the proposed repository subsurface layout with time, as it makes it unnecessary to ''commit'' to development of a large single panel at the earliest stages of construction. A description of the underground layout configuration and the influencing factors that affect it are discussed in the report.

  7. Biogenic Carbon on Mars: A Subsurface Chauvinistic Viewpoint

    Science.gov (United States)

    Onstott, T. C.; Lau, C. Y. M.; Magnabosco, C.; Harris, R.; Chen, Y.; Slater, G.; Sherwood Lollar, B.; Kieft, T. L.; van Heerden, E.; Borgonie, G.; Dong, H.

    2015-12-01

    A review of 150 publications on the subsurface microbiology of the continental subsurface provides ~1,400 measurements of cellular abundances down to 4,800 m depth. These data suggest that the continental subsurface biomass is comprised of ~10^16-10^17 grams of carbon, which is higher than the most recent estimates of ~10^15 grams of carbon (1 Gt) for the marine deep biosphere. If life developed early in Martian history and Mars sustained an active hydrological cycle during its first 500 million years, then is it possible that Mars could have developed a subsurface biomass of comparable size to that of Earth? Such a biomass would comprise a much larger fraction of the total known Martian carbon budget than does the subsurface biomass on Earth. More importantly could a remnant of this subsurface biosphere survive to the present day? To determine how sustainable subsurface life could be in isolation from the surface we have been studying subsurface fracture fluids from the Precambrian Shields in South Africa and Canada. In these environments the energetically efficient and deeply rooted acetyl-CoA pathway for carbon fixation plays a central role for chemolithoautotrophic primary producers that form the base of the biomass pyramid. These primary producers appear to be sustained indefinitely by H2 generated through serpentinization and radiolytic reactions. Carbon isotope data suggest that in some subsurface locations a much larger population of secondary consumers are sustained by the primary production of biogenic CH4 from a much smaller population of methanogens. These inverted biomass and energy pyramids sustained by the cycling of CH4 could have been and could still be active on Mars. The C and H isotopic signatures of Martian CH4 remain key tools in identifying potential signatures of an extant Martian biosphere. Based upon our results to date cavity ring-down spectroscopic technologies provide an option for making these measurements on future rover missions.

  8. Subsurface remote sensing

    International Nuclear Information System (INIS)

    Schweitzer, Jeffrey S.; Groves, Joel L.

    2002-01-01

    Subsurface remote sensing measurements are widely used for oil and gas exploration, for oil and gas production monitoring, and for basic studies in the earth sciences. Radiation sensors, often including small accelerator sources, are used to obtain bulk properties of the surrounding strata as well as to provide detailed elemental analyses of the rocks and fluids in rock pores. Typically, instrument packages are lowered into a borehole at the end of a long cable, that may be as long as 10 km, and two-way data and instruction telemetry allows a single radiation instrument to operate in different modes and to send the data to a surface computer. Because these boreholes are often in remote locations throughout the world, the data are frequently transmitted by satellite to various locations around the world for almost real-time analysis and incorporation with other data. The complete system approach that permits rapid and reliable data acquisition, remote analysis and transmission to those making decisions is described

  9. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  10. The Mojave vadose zone: a subsurface biosphere analogue for Mars.

    Science.gov (United States)

    Abbey, William; Salas, Everett; Bhartia, Rohit; Beegle, Luther W

    2013-07-01

    If life ever evolved on the surface of Mars, it is unlikely that it would still survive there today, but as Mars evolved from a wet planet to an arid one, the subsurface environment may have presented a refuge from increasingly hostile surface conditions. Since the last glacial maximum, the Mojave Desert has experienced a similar shift from a wet to a dry environment, giving us the opportunity to study here on Earth how subsurface ecosystems in an arid environment adapt to increasingly barren surface conditions. In this paper, we advocate studying the vadose zone ecosystem of the Mojave Desert as an analogue for possible subsurface biospheres on Mars. We also describe several examples of Mars-like terrain found in the Mojave region and discuss ecological insights that might be gained by a thorough examination of the vadose zone in these specific terrains. Examples described include distributary fans (deltas, alluvial fans, etc.), paleosols overlain by basaltic lava flows, and evaporite deposits.

  11. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  12. Subsurface Geotechnical Parameters Report

    International Nuclear Information System (INIS)

    Rigby, D.; Mrugala, M.; Shideler, G.; Davidsavor, T.; Leem, J.; Buesch, D.; Sun, Y.; Potyondy, D.; Christianson, M.

    2003-01-01

    The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, including activities ranging from laboratory and site investigations, to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities towards the License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into sets of data needed to support the design, and to fulfill the licensing requirements and the repository design activities. This document responds to the program need to address specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain Site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues, in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment. In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce

  13. Subsurface Geotechnical Parameters Report

    Energy Technology Data Exchange (ETDEWEB)

    D. Rigby; M. Mrugala; G. Shideler; T. Davidsavor; J. Leem; D. Buesch; Y. Sun; D. Potyondy; M. Christianson

    2003-12-17

    The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, including activities ranging from laboratory and site investigations, to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities towards the License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into sets of data needed to support the design, and to fulfill the licensing requirements and the repository design activities. This document responds to the program need to address specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain Site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues, in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment. In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce

  14. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  15. SUBSURFACE CONSTRUCTION AND DEVELOPMENT ANALYSIS

    International Nuclear Information System (INIS)

    N.E. Kramer

    1998-01-01

    The purpose of this analysis is to identify appropriate construction methods and develop a feasible approach for construction and development of the repository subsurface facilities. The objective of this analysis is to support development of the subsurface repository layout for License Application (LA) design. The scope of the analysis for construction and development of the subsurface Repository facilities covers: (1) Excavation methods, including application of knowledge gained from construction of the Exploratory Studies Facility (ESF). (2) Muck removal from excavation headings to the surface. This task will examine ways of preventing interference with other subsurface construction activities. (3) The logistics and equipment for the construction and development rail haulage systems. (4) Impact of ground support installation on excavation and other construction activities. (5) Examination of how drift mapping will be accomplished. (6) Men and materials handling. (7) Installation and removal of construction utilities and ventilation systems. (8) Equipping and finishing of the emplacement drift mains and access ramps to fulfill waste emplacement operational needs. (9) Emplacement drift and access mains and ramps commissioning prior to handover for emplacement operations. (10) Examination of ways to structure the contracts for construction of the repository. (11) Discussion of different construction schemes and how to minimize the schedule risks implicit in those schemes. (12) Surface facilities needed for subsurface construction activities

  16. Program overview: Subsurface science program

    International Nuclear Information System (INIS)

    1994-03-01

    The OHER Subsurface Science Program is DOE's core basic research program concerned with subsoils and groundwater. Past waste disposal practices at DOE sites have resulted in contamination by mixtures of organic chemicals, inorganic chemicals, and radionuclides. A primary long-term goal is to provide a foundation of knowledge that will lead to the reduction of environmental risks and to cost-effective cleanup strategies. Since the Program was initiated in 1985, a substantial amount of research in hydrogeology, subsurface microbiology, and the geochemistry of organically complexed radionuclides has been completed, leading to a better understanding of contaminant transport in groundwater and to new insights into microbial distribution and function in subsurface environments. The Subsurface Science Program focuses on achieving long-term scientific advances that will assist DOE in the following key areas: providing the scientific basis for innovative in situ remediation technologies that are based on a concept of decontamination through benign manipulation of natural systems; understanding the complex mechanisms and process interactions that occur in the subsurface; determining the influence of chemical and geochemical-microbial processes on co-contaminant mobility to reduce environmental risks; improving predictions of contaminant transport that draw on fundamental knowledge of contaminant behavior in the presence of physical and chemical heterogeneities to improve cleanup effectiveness and to predict environmental risks

  17. Subsurface microbial habitats on Mars

    Science.gov (United States)

    Boston, P. J.; Mckay, C. P.

    1991-01-01

    We developed scenarios for shallow and deep subsurface cryptic niches for microbial life on Mars. Such habitats could have considerably prolonged the persistence of life on Mars as surface conditions became increasingly inhospitable. The scenarios rely on geothermal hot spots existing below the near or deep subsurface of Mars. Recent advances in the comparatively new field of deep subsurface microbiology have revealed previously unsuspected rich aerobic and anaerobic microbial communities far below the surface of the Earth. Such habitats, protected from the grim surface conditions on Mars, could receive warmth from below and maintain water in its liquid state. In addition, geothermally or volcanically reduced gases percolating from below through a microbiologically active zone could provide the reducing power needed for a closed or semi-closed microbial ecosystem to thrive.

  18. Subsurface Fire Hazards Technical Report

    International Nuclear Information System (INIS)

    Logan, R.C.

    1999-01-01

    The results from this report are preliminary and cannot be used as input into documents supporting procurement, fabrication, or construction. This technical report identifies fire hazards and proposes their mitigation for the subsurface repository fire protection system. The proposed mitigation establishes the minimum level of fire protection needed to meet NRC regulations and DOE fire protection orders, ensure fire containment, provide adequate life safety provisions, and minimize property loss. Equipment requiring automatic fire suppression systems is identified. The subsurface fire hazards that are identified can be adequately mitigated

  19. Strategic planning features of subsurface management in Kemerovo Oblast

    Science.gov (United States)

    Romanyuk, V.; Grinkevich, A.; Akhmadeev, K.; Pozdeeva, G.

    2016-09-01

    The article discusses strategic planning features of regional development based on production and subsurface management in Kemerovo Oblast. A modern approach, SWOT analysis, was applied to assess the regional development strategy, and implementation of the regional development plan was assessed for the foreseeable future.

  20. Using Muons to Image the Subsurface.

    Energy Technology Data Exchange (ETDEWEB)

    Bonal, Nedra [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cashion, Avery Ted [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cieslewski, Grzegorz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dorsey, Daniel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Foris, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miller, Timothy J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roberts, Barry L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Su, Jiann-Cherng [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dreesen, Wendi [NSTec, Livermore, CA (United States); Green, J. Andrew [NSTec, Livermore, CA (United States); Schwellenbach, David [NSTec, Livermore, CA (United States)

    2016-11-01

    Muons are subatomic particles that can penetrate the Earth's crust several kilometers and may be useful for subsurface characterization. The absorption rate of muons depends on the density of the materials through which they pass. Muons are more sensitive to density variation than other phenomena, including gravity, making them beneficial for subsurface investigation. Measurements of muon flux rate at differing directions provide density variations of the materials between the muon source (cosmic rays and neutrino interactions) and the detector, much like a CAT scan. Currently, muon tomography can resolve features at the sub-meter scale. This work consists of three parts addressing the use of muons for subsurface characterization: (1) assess the use of muon scattering for estimating density differences of common rock types, (2) use muon flux to detect a void in rock, and (3) measure muon direction by designing a new detector. Results from this project lay the groundwork for future directions in this field. Low-density objects can be detected by muons even when enclosed in high-density material like lead, and even small changes in density (e.g., changes due to fracturing of material) can be detected. Rock density has a linear relationship with muon scattering density per rock volume when this ratio is greater than 0.10. Limitations on using muon scattering to assess density changes among common rock types have been identified; however, other analysis methods may show improved results for these relatively low-density materials. Simulations show that muons can be used to image void space (e.g., tunnels) within rock, but experimental results have been ambiguous. Improvements are suggested for imaging voids such as tunnels through rock. Finally, a muon detector has been designed and tested to measure muon direction, which will improve the signal-to-noise ratio and help address fundamental questions about the source of upgoing muons.

  1. Feasibility of a subsurface storage

    International Nuclear Information System (INIS)

    1998-11-01

    This report analyses the notion of subsurface storage in its scientific, technical and legal aspects. This reflection belongs to the studies on long-duration storage carried out in the framework of axis 3 of the December 30, 1991 law. The report comprises 3 parts. The first part is a synthesis of the complete subsurface storage study: definitions, aim of the report, very long duration storage paradigm, description files of concepts, thematic synthesis (legal aspects, safety, monitoring, sites, seismicity, heat transfers, corrosion, concretes, R and works, handling, tailings and dismantlement, economy...), multi-criteria/multi-concept cross-analysis. The second part deals with the technical aspects of the subsurface storage: safety approach (long duration impact, radiation protection, mastery of effluents), monitoring strategy, macroscopic inventory of B-type waste packages, inventory of spent fuels, glasses, hulls and nozzles, geological contexts in the French territory (sites selection and characterization), on-site activities, hydrogeological and geochemical aspects, geo-technical works and infrastructures organization, subsurface seismic effects, cooling modes (ventilation, heat transfer with the geologic environment), heat transfer research programs (convection, poly-phase cooling in porous media), handling constraints, concretes (use, behaviour, durability), corrosion of metallic materials, technical-economical analysis, international context (experience feedback from Sweden (CLAB) and the USA (Yucca Mountain), other European and French facilities). The last part of the report is a graphical appendix with 3-D views and schemes of the different concepts. (J.S.)

  2. Safety analysis in subsurface repositories

    International Nuclear Information System (INIS)

    1985-06-01

    The development of mathematical models to represent the repository-geosphere-biosphere system, and the development of a structure for data acquisition, processing, and use to analyse the safety of subsurface repositories, are presented. To study the behavior of radionuclides in the geosphere, a laboratory to determine the hydrodynamic dispersion coefficient was constructed. (M.C.K.) [pt

  3. SUBSURFACE VISUAL ALARM SYSTEM ANALYSIS

    International Nuclear Information System (INIS)

    D.W. Markman

    2001-01-01

    The ''Subsurface Fire Hazard Analysis'' (CRWMS M&O 1998, page 61) and the document ''Title III Evaluation Report for the Surface and Subsurface Communication System'' (CRWMS M&O 1999a, pages 21 and 23) both indicate the installed communication system is adequate to support Exploratory Studies Facility (ESF) activities, with the exception of the mine phone system for emergency notification purposes. They recommend the installation of a visual alarm system to supplement the page/party phone system. The purpose of this analysis is to identify data communication highway design approaches and provide justification for the selected or recommended alternatives for the data communication of the subsurface visual alarm system. This analysis is being prepared to document a basis for the design selection of the data communication method. This analysis will briefly describe existing data or voice communication or monitoring systems within the ESF, and look at how these may be revised or adapted to support the needed data highway of the subsurface visual alarm system. The existing PLC communication system installed in the subsurface provides data communication for the alcove No. 5 ventilation fans, south portal ventilation fans, bulkhead doors, and generator monitoring system. It is given that the data communication of the subsurface visual alarm system will be a digital-based system. It is also given that it is most feasible to take advantage of existing systems and equipment rather than consider an entirely new data communication system design and installation. The scope and primary objectives of this analysis are to: (1) Briefly review and describe existing available data communication highways or systems within the ESF. (2) Examine the technical characteristics of existing systems; disqualifying a design alternative early is paramount in minimizing the number and depth of system reviews. (3) Apply general engineering design practices or criteria such as relative cost, and degree

  4. Maximum Norm Estimates for Finite Volume Element Method for Non-selfadjoint and Indefinite Elliptic Problems

    Institute of Scientific and Technical Information of China (English)

    毕春加

    2005-01-01

    In this paper, we establish the maximum norm estimates of the solutions of the finite volume element method (FVE) based on the P1 conforming element for the non-selfadjoint and indefinite elliptic problems.

  5. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    Science.gov (United States)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of subsurface anomalies using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield a finite depth resolution. Spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution to axially resolve the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide an estimate closest to the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
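    A small numerical illustration of the spectral-zooming idea, using a synthetic single-component response (not the authors' data or processing chain): a plain FFT quantizes the peak location to its bin spacing, whereas a chirp-z-style zoom DFT evaluates the spectrum on an arbitrarily fine grid within the band of interest.

```python
import numpy as np

fs, n = 50.0, 1024                          # sampling rate (Hz) and record length
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 0.413 * t)           # synthetic thermal response component

# Plain FFT: frequency grid is quantized to fs/n ~ 0.0488 Hz.
freqs = np.fft.rfftfreq(n, d=1 / fs)
fft_peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]

# Zoom DFT (chirp-z style): evaluate the spectrum on a fine grid inside the band.
f_zoom = np.linspace(0.35, 0.48, 1301)      # 0.1 mHz grid spacing
kernel = np.exp(-2j * np.pi * np.outer(f_zoom, t))
zoom_peak = f_zoom[np.argmax(np.abs(kernel @ x))]

print(f"true 0.413 Hz | FFT peak {fft_peak:.4f} Hz | zoomed peak {zoom_peak:.4f} Hz")
```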

  6. UNSAT-H infiltration model calibration at the Subsurface Disposal Area, Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Martian, P.

    1995-10-01

    Soil moisture monitoring data from the expanded neutron probe monitoring network located at the Subsurface Disposal Area (SDA) of the Idaho National Engineering Laboratory (INEL) were used to calibrate numerical infiltration models for 15 locations within and near the SDA. These calibrated models were then used to simulate infiltration into the SDA surficial sediments and underlying basalts for the entire operational period of the SDA (1952--1995). The purpose of performing the simulations was to obtain a time variant infiltration source term for future subsurface pathway modeling efforts as part of baseline risk assessment or performance assessments. The simulation results also provided estimates of the average recharge rate for the simulation period and insight into infiltration patterns at the SDA. These results suggest that the average aquifer recharge rate below the SDA may be at least 8 cm/yr and may be as high as 12 cm/yr. These values represent 38 and 57% of the average annual precipitation occurring at the INEL, respectively. The simulation results also indicate that the maximum evaporative depth may vary between 28 and 148 cm and is highly dependent on localized lithology within the SDA
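    As a quick consistency check on the figures quoted above (a back-of-the-envelope calculation, not a value stated in the report), both recharge fractions imply an average annual precipitation of roughly 21 cm:

```latex
\frac{8\ \mathrm{cm\,yr^{-1}}}{0.38}
\approx
\frac{12\ \mathrm{cm\,yr^{-1}}}{0.57}
\approx 21\ \mathrm{cm\,yr^{-1}}
```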

  7. Atmospheric energy for subsurface life on Mars?

    Science.gov (United States)

    Weiss, B. P.; Yung, Y. L.; Nealson, K. H.

    2000-01-01

    The location and density of biologically useful energy sources on Mars will limit the biomass, spatial distribution, and organism size of any biota. Subsurface Martian organisms could be supplied with a large energy flux from the oxidation of photochemically produced atmospheric H(2) and CO diffusing into the regolith. However, surface abundance measurements of these gases demonstrate that no more than a few percent of this available flux is actually being consumed, suggesting that biological activity driven by atmospheric H(2) and CO is limited in the top few hundred meters of the subsurface. This is significant because the available but unused energy is extremely large: for organisms at 30-m depth, it is 2,000 times previous estimates of hydrothermal and chemical weathering energy and far exceeds the energy derivable from other atmospheric gases. This also implies that the apparent scarcity of life on Mars is not attributable to lack of energy. Instead, the availability of liquid water may be a more important factor limiting biological activity because the photochemical energy flux can only penetrate to 100- to 1,000-m depth, where most H(2)O is probably frozen. Because both atmospheric and Viking lander soil data provide little evidence for biological activity, the detection of short-lived trace gases will probably be a better indicator of any extant Martian life.

  8. Integrated geomechanical modelling for deep subsurface damage

    NARCIS (Netherlands)

    Wees, J.D. van; Orlic, B.; Zijl, W.; Jongerius, P.; Schreppers, G.J.; Hendriks, M.

    2001-01-01

    Government, E&P and mining industry increasingly demand fundamental insight and accurate predictions on subsurface and surface deformation and damage due to exploitation of subsurface natural resources, and subsurface storage of energy residues (e.g. CO2). At this moment deformation is difficult to

  9. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and eliminates need for retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  10. Modelling of real area of contact between tool and workpiece in metal forming processes including the influence of subsurface deformation

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, Paulo A. F.; Bay, Niels Oluf

    2016-01-01

    New equipment for testing asperity deformation at various normal loads and subsurface elongations is presented. Resulting real contact area ratios increase heavily with increasing subsurface expansion due to lowered yield pressure on the asperities when imposing subsurface normal stress parallel to the surface. Finite element modelling supports the presentation and contributes by extrapolation of results to complete the mapping of contact area as a function of normal pressure and one-directional subsurface strain parallel to the surface. Improved modelling of the real contact area is the basis for estimating friction in the numerical modelling of metal forming processes.

  11. Using a network-based approach and targeted maximum likelihood estimation to evaluate the effect of adding pre-exposure prophylaxis to an ongoing test-and-treat trial.

    Science.gov (United States)

    Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor

    2017-04-01

    Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger

  12. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  13. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  14. Estimation of a subsurface structure by using shallow seismic engineering exploration system with multiple function (SWS); Takino danseiha tansa sochi (SWS) ni yoru senbu chika kozo tansa ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y [Beijing Shuidian Research Institute of Geophysical Surveying, Beijing (China); Ling, S [Nihon Nessui Corp., Tokyo (Japan); Okada, H [Hokkaido University, Sapporo (Japan)

    1997-10-22

    The Beijing Shuidian Research Institute of Geophysical Surveying has performed marine seismic exploration in the area where the Fujian Pingtan bridge was planned to be constructed. The elastic wave exploration device is of a multi-functional type: it acquires, processes and analyzes reflection-method seismic data and can visualize subsurface conditions while the exploration is in progress. The planned bridge site spans a sea area about 3500 m long with water depths from several meters to 30 m. The foundation bed consists of dacite lithologic tuff and granodiorite. The sea level varies from 4.0 m to 4.8 m between high and low tides. According to other measurements, the elastic wave propagation velocity of the sea water is 1475 to 1485 m/s, and the velocity in the surface bed of the sea bottom is 1550 to 1700 m/s. The exploration used a workboat moving at constant speed while keeping the offset between the source and the receiver constant, with source excitation, reception and recording all carried out at sea. The survey revealed neither obstacles such as sunken ships nor any signs of past sea-bottom landslides. 1 ref., 5 figs.
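
    For orientation, the reflection travel times recorded in such a survey can be converted to approximate reflector depths using the velocities quoted above. The snippet below assumes a constant velocity above the reflector and uses made-up two-way times; it is not the survey's actual processing chain.

        # Rough two-way travel time (TWT) to depth conversion, assuming a constant
        # velocity above the reflector (here the quoted sea-water velocity range).
        def depth_from_twt(twt_s, velocity_ms=1480.0):
            """Reflector depth for a given two-way travel time in seconds."""
            return velocity_ms * twt_s / 2.0

        for twt in (0.010, 0.027, 0.040):  # illustrative two-way times in seconds
            print(f"{twt * 1000:5.1f} ms  ->  {depth_from_twt(twt):5.1f} m")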

  15. Martian sub-surface ionising radiation: biosignatures and geology

    Directory of Open Access Journals (Sweden)

    J. M. Ward

    2007-07-01

    The surface of Mars, unshielded by thick atmosphere or global magnetic field, is exposed to high levels of cosmic radiation. This ionising radiation field is deleterious to the survival of dormant cells or spores and the persistence of molecular biomarkers in the subsurface, and so its characterisation is of prime astrobiological interest. Here, we present modelling results of the absorbed radiation dose as a function of depth through the Martian subsurface, suitable for calculation of biomarker persistence. A second major implementation of this dose accumulation rate data is in application of the optically stimulated luminescence technique for dating Martian sediments.

    We present calculations of the dose-depth profile in the Martian subsurface for various scenarios: variations of surface composition (dry regolith, ice, layered permafrost), solar minimum and maximum conditions, locations of different elevation (Olympus Mons, Hellas basin, datum altitude), and increasing atmospheric thickness over geological history. We also model the changing composition of the subsurface radiation field with depth compared between Martian locations with different shielding material, determine the relative dose contributions from primaries of different energies, and discuss particle deflection by the crustal magnetic fields.

  16. Subsurface Noble Gas Sampling Manual

    Energy Technology Data Exchange (ETDEWEB)

    Carrigan, C. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-18

    The intent of this document is to provide information about best available approaches for performing subsurface soil gas sampling during an On Site Inspection or OSI. This information is based on field sampling experiments, computer simulations and data from the NA-22 Noble Gas Signature Experiment Test Bed at the Nevada National Security Site (NNSS). The approaches should optimize the gas concentration from the subsurface cavity or chimney regime while simultaneously minimizing the potential for atmospheric radioxenon and near-surface Argon-37 contamination. Where possible, we quantitatively assess differences in sampling practices for the same sets of environmental conditions. We recognize that not all sampling scenarios can be addressed. However, if this document helps to inform the intuition of the reader about addressing the challenges resulting from the inevitable deviations from the scenario assumed here, it will have achieved its goal.

  17. VISUALIZATION OF REGISTERED SUBSURFACE ANATOMY

    DEFF Research Database (Denmark)

    2010-01-01

    A system and method for visualization of subsurface anatomy includes obtaining a first image from a first camera and a second image from a second camera or a second channel of the first camera, where the first and second images contain shared anatomical structures. ... A visual interface displays the registered visualization of the first and second images. The system and method are particularly useful for imaging during minimally invasive surgery, such as robotic surgery.

  18. Geophysical characterization of subsurface barriers

    International Nuclear Information System (INIS)

    Borns, D.J.

    1995-08-01

    An option for controlling contaminant migration from plumes and buried waste sites is to construct a subsurface barrier of a low-permeability material. The successful application of subsurface barriers requires processes to verify the emplacement and effectiveness of the barrier and to monitor its performance after emplacement. Non-destructive and remote sensing techniques, such as geophysical methods, are possible technologies to address these needs. The changes in mechanical, hydrologic and chemical properties associated with the emplacement of an engineered barrier will affect geophysical properties such as seismic velocity, electrical conductivity, and dielectric constant. Also, the barrier, once emplaced and interacting with the in situ geologic system, may affect the paths along which electrical current flows in the subsurface. These changes in properties and processes facilitate the detection and monitoring of the barrier. The approaches to characterizing and monitoring engineered barriers can be divided between (1) methods that directly image the barrier using the contrasts in physical properties between the barrier and the host soil or rock and (2) methods that reflect flow processes around or through the barrier. For example, seismic methods that delineate the changes in density and stiffness associated with the barrier represent a direct imaging method. Electrical self-potential methods and flow probes based on heat flow methods represent techniques that can delineate the flow path or flow processes around and through a barrier.

  19. Subsurface transport program: Research summary

    International Nuclear Information System (INIS)

    1987-01-01

    DOE's research program in subsurface transport is designed to provide a base of fundamental scientific information so that the geochemical, hydrological, and biological mechanisms that contribute to the transport and long term fate of energy related contaminants in subsurface ecosystems can be understood. Understanding the physical and chemical mechanisms that control the transport of single and co-contaminants is the underlying concern of the program. Particular attention is given to interdisciplinary research and to geosphere-biosphere interactions. The scientific results of the program will contribute to resolving Departmental questions related to the disposal of energy-producing and defense wastes. The background papers prepared in support of this document contain additional information on the relevance of the research in the long term to energy-producing technologies. Detailed scientific plans and other research documents are available for high priority research areas, for example, in subsurface transport of organic chemicals and mixtures and in the microbiology of deep aquifers. 5 figs., 1 tab

  20. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  1. Urban heat islands in the subsurface of German cities

    Science.gov (United States)

    Menberg, K.; Blum, P.; Zhu, K.; Bayer, P.

    2012-04-01

    In the subsurface of many cities there are widespread and persistent thermal anomalies (subsurface urban heat islands) that result in a warming of urban aquifers. The reasons for this heating are manifold. Possible heat sources are basements of buildings, leakage of sewage systems, buried district heating networks, re-injection of cooling water and solar irradiation on paved surfaces. In the current study, the reported groundwater temperatures in several German cities, such as Berlin, Munich, Cologne and Karlsruhe, are compared. Available data sets are supplemented by temperature measurements and depth profiles in observation wells. Trend analyses are conducted with time series of groundwater temperatures, and three-dimensional groundwater temperature maps are provided. In all investigated cities, pronounced positive temperature anomalies are present. The distribution of groundwater temperatures appears to be spatially and temporally highly variable. Apparently, the increased heat input into the urban subsurface is controlled by very local and site-specific parameters. In the long run, the superposition of various heat sources results in an extensive temperature increase. In many cases, the maximum temperature elevation is found close to the city centre. Regional groundwater temperature differences between the city centre and the rural background are up to 5 °C, with local hot spots of even more pronounced anomalies. Particular heat sources, like cooling water injections or case-specific underground constructions, can cause local temperatures > 20 °C in the subsurface. Examination of the long-term variations in isotherm maps shows that temperatures have increased by about 1 °C in the city, as well as in the rural background areas, over the last decades. This increase could be reproduced with trend analysis of temperature data gathered from several groundwater wells. Comparison between groundwater and air temperatures in Karlsruhe, for example, also indicates a

  2. Microbial activity in the terrestrial subsurface

    International Nuclear Information System (INIS)

    Kaiser, J.P.; Bollag, J.M.

    1990-01-01

    Little is known about the layers under the earth's crust. Only in recent years have techniques for sampling the deeper subsurface been developed to permit investigation of the subsurface environment. Prevailing conditions in the subsurface habitat such as nutrient availability, soil composition, redox potential, permeability and a variety of other factors can influence the microflora that flourish in a given environment. Microbial diversity varies between geological formations, but in general sandy soils support growth better than soils rich in clay. Bacteria predominate in subsurface sediments, while eukaryotes constitute only 1-2% of the microorganisms. Recent investigations revealed that most uncontaminated subsurface soils support the growth of aerobic heteroorganotrophic bacteria, but obviously anaerobic microorganisms also exist in the deeper subsurface habitat. The microorganisms residing below the surface of the earth are capable of degrading both natural and xenobiotic contaminants and can thereby adapt to growth under polluted conditions. (author) 4 tabs, 77 refs

  3. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
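
    The hill-climbing (perturb-and-observe) logic mentioned above can be sketched in a few lines. The toy power-voltage curve and step size below are placeholders, not the converter described in the paper.

        def mppt_step(v, p, prev_v, prev_p, step=0.1):
            """One perturb-and-observe (hill-climbing) update of the voltage reference."""
            if p >= prev_p:
                direction = 1.0 if v >= prev_v else -1.0   # power rose: keep going the same way
            else:
                direction = -1.0 if v >= prev_v else 1.0   # power fell: reverse the perturbation
            return v + direction * step

        power = lambda v: 100.0 - (v - 17.0) ** 2          # toy P-V curve with its maximum at 17 V

        v, prev_v, prev_p = 10.0, 9.9, power(9.9)
        for _ in range(200):
            p = power(v)
            v, prev_v, prev_p = mppt_step(v, p, prev_v, prev_p), v, p
        print(f"operating point settles near {prev_v:.1f} V")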

  4. Estimation of subsurface structures in a Minami Noshiro 3D seismic survey region by seismic-array observations of microtremors; Minami Noshiro sanjigen jishin tansa kuikinai no hyoso kozo ni tsuite. Bido no array kansoku ni yoru suitei

    Energy Technology Data Exchange (ETDEWEB)

    Okada, H; Ling, S; Ishikawa, K [Hokkaido University, Sapporo (Japan); Tsuburaya, Y; Minegishi, M [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    The Japan National Oil Corporation Technology Research Center has carried out experiments on the three-dimensional seismic survey method, which is regarded as an effective means for petroleum exploration. The experiments were conducted in the Minami Noshiro area of Akita Prefecture. Seismometer arrays with radii of 30 to 300 m were deployed at seven points in the three-dimensional seismic exploration region to observe microtremors. The purpose is to estimate S-wave velocities from the ground surface down to the foundation by using the surface waves contained in microtremors; estimation of the surface bed structure is also part of the purpose, since this is indispensable in seismic exploration using the reflection method. This paper reports the results of the microtremor observations and of the S-wave velocity estimation (microtremor exploration). One or two arrays of different sizes, each composed of seven observation points, were deployed per area to observe microtremors independently. The key result of the present experiments is that a low-velocity bed suggesting the existence of faults was inferred. Further experiments and observations will be needed to verify whether the microtremor exploration method really has this level of exploration capability. For the time being, however, interest is directed to comparison with the results of the 3D reflection experiments. 4 refs., 7 figs.

  5. Does Aspartic Acid Racemization Constrain the Depth Limit of the Subsurface Biosphere?

    Science.gov (United States)

    Onstott, T C.; Magnabosco, C.; Aubrey, A. D.; Burton, A. S.; Dworkin, J. P.; Elsila, J. E.; Grunsfeld, S.; Cao, B. H.; Hein, J. E.; Glavin, D. P.; hide

    2013-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of approximately 89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.
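
    A simplified steady-state reading of the turnover calculation is sketched below: if racemization raises the cellular D/L ratio at rate k_rac and protein turnover resets it, the implied turnover time is roughly (D/L)/k_rac. The kinetic model, rate constants and D/L value here are illustrative assumptions, not the study's measured values or its exact formulation.

        import math

        def turnover_time_years(d_over_l, k_rac_per_year):
            """Turnover time implied by a measured cellular aspartic-acid D/L ratio,
            assuming d(D/L)/dt = k_rac - k_turn * (D/L) at steady state, so that
            the turnover time 1/k_turn equals (D/L) / k_rac."""
            return d_over_l / k_rac_per_year

        def k_rac_arrhenius(temp_c, a_per_year, ea_kj_per_mol):
            """Arrhenius-type temperature dependence of the racemization rate;
            the pre-exponential factor and activation energy are placeholders."""
            r = 8.314e-3  # gas constant in kJ mol^-1 K^-1
            return a_per_year * math.exp(-ea_kj_per_mol / (r * (temp_c + 273.15)))

        # Illustrative numbers only: D/L = 0.02 and k_rac = 2e-4 per year give ~100 years.
        print(turnover_time_years(0.02, 2e-4))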

  6. Surface and subsurface cracks characteristics of single crystal SiC wafer in surface machining

    Energy Technology Data Exchange (ETDEWEB)

    Qiusheng, Y., E-mail: qsyan@gdut.edu.cn; Senkai, C., E-mail: senkite@sina.com; Jisheng, P., E-mail: panjisheng@gdut.edu.cn [School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou, 510006 (China)

    2015-03-30

    Different machining processes were used in single crystal SiC wafer machining. SEM was used to observe the surface morphology, and a cross-sectional cleavage microscopy method was used for subsurface crack detection. Surface and subsurface crack characteristics of single crystal SiC wafers in abrasive machining were analysed. The results show that the surface and subsurface crack system of a single crystal SiC wafer in abrasive machining includes radial cracks, lateral cracks and median cracks. In the lapping process, material removal is dominated by brittle removal, and many chipping pits were found on the lapped surface. As the particle size becomes smaller, the surface roughness and subsurface crack depth decrease. When the particle size was reduced to 1.5 µm, the surface roughness Ra was reduced to 24.0 nm and the maximum subsurface crack depth was 1.2 µm. The efficiency of grinding is higher than that of lapping, and plastic removal can be achieved by changing the process parameters. Material removal was mostly by brittle fracture when grinding with a 325# diamond wheel; plow scratches and chipping pits were found on the ground surface, the surface roughness Ra was 17.7 nm and the maximum subsurface crack depth was 5.8 µm. When grinding with an 8000# diamond wheel, material removal was by plastic flow; plastic scratches were found on the surface, and a smooth surface of roughness Ra 2.5 nm without any subsurface cracks was obtained. Atomic-scale removal was possible in cluster magnetorheological finishing with a diamond abrasive size of 0.5 µm, and a super-smooth surface was eventually obtained with a roughness of Ra 0.4 nm without any subsurface crack.

  7. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  8. Cultivating the Deep Subsurface Microbiome

    Science.gov (United States)

    Casar, C. P.; Osburn, M. R.; Flynn, T. M.; Masterson, A.; Kruger, B.

    2017-12-01

    Subterranean ecosystems are poorly understood because many microbes detected in metagenomic surveys are only distantly related to characterized isolates. Cultivating microorganisms from the deep subsurface is challenging due to its inaccessibility and potential for contamination. The Deep Mine Microbial Observatory (DeMMO) in Lead, SD however, offers access to deep microbial life via pristine fracture fluids in bedrock to a depth of 1478 m. The metabolic landscape of DeMMO was previously characterized via thermodynamic modeling coupled with genomic data, illustrating the potential for microbial inhabitants of DeMMO to utilize mineral substrates as energy sources. Here, we employ field and lab based cultivation approaches with pure minerals to link phylogeny to metabolism at DeMMO. Fracture fluids were directed through reactors filled with Fe3O4, Fe2O3, FeS2, MnO2, and FeCO3 at two sites (610 m and 1478 m) for 2 months prior to harvesting for subsequent analyses. We examined mineralogical, geochemical, and microbiological composition of the reactors via DNA sequencing, microscopy, lipid biomarker characterization, and bulk C and N isotope ratios to determine the influence of mineralogy on biofilm community development. Pre-characterized mineral chips were imaged via SEM to assay microbial growth; preliminary results suggest MnO2, Fe3O4, and Fe2O3 were most conducive to colonization. Solid materials from reactors were used as inoculum for batch cultivation experiments. Media designed to mimic fracture fluid chemistry was supplemented with mineral substrates targeting metal reducers. DNA sequences and microscopy of iron oxide-rich biofilms and fracture fluids suggest iron oxidation is a major energy source at redox transition zones where anaerobic fluids meet more oxidizing conditions. We utilized these biofilms and fluids as inoculum in gradient cultivation experiments targeting microaerophilic iron oxidizers. Cultivation of microbes endemic to DeMMO, a system

  9. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  10. Surface Modification and Surface - Subsurface Exchange Processes on Europa

    Science.gov (United States)

    Phillips, C. B.; Molaro, J.; Hand, K. P.

    2017-12-01

    The surface of Jupiter's moon Europa is modified by exogenic processes such as sputtering, gardening, radiolysis, sulfur ion implantation, and thermal processing, as well as endogenic processes including tidal shaking, mass wasting, and the effects of subsurface tectonic and perhaps cryovolcanic activity. New materials are created or deposited on the surface (radiolysis, micrometeorite impacts, sulfur ion implantation, cryovolcanic plume deposits), modified in place (thermal segregation, sintering), transported either vertically or horizontally (sputtering, gardening, mass wasting, tectonic and cryovolcanic activity), or lost from Europa completely (sputtering, plumes, larger impacts). Some of these processes vary spatially, as visible in Europa's leading-trailing hemisphere brightness asymmetry. Endogenic geologic processes also vary spatially, depending on terrain type. The surface can be classified into general landform categories that include tectonic features (ridges, bands, cracks); disrupted "chaos-type" terrain (chaos blocks, matrix, domes, pits, spots); and impact craters (simple, complex, multi-ring). The spatial distribution of these terrain types is relatively random, with some differences in apex-antiapex cratering rates and latitudinal variation in chaos vs. tectonic features. In this work, we extrapolate surface processes and rates from the top meter of the surface in conjunction with global estimates of transport and resurfacing rates. We combine near-surface modification with an estimate of surface-subsurface (and vice versa) transport rates for various geologic terrains based on an average of proposed formation mechanisms, and a spatial distribution of each landform type over Europa's surface area. Understanding the rates and mass balance for each of these processes, as well as their spatial and temporal variability, allows us to estimate surface-subsurface exchange rates over the average surface age (~50 Myr) of Europa. Quantifying the timescale

  11. Sub-surface defect detection using transient thermography

    International Nuclear Information System (INIS)

    Mohd Zaki Umar; Huda Abdullah; Abdul Razak Hamzah; Wan Saffiey Wan Abdullah; Ibrahim Ahmad; Vavilov, Vladimir

    2009-04-01

    Experimental research was carried out to study the potential of transient thermography for detecting sub-surface defects in non-metallic material. In this research, eight pieces of bakelite were used as samples, each containing a circular sub-surface defect of different diameter and depth. The experiment was conducted using a one-sided pulsed thermal technique. The samples were heated with a 30 kW adjustable quartz lamp, while infrared (IR) images of the samples were recorded using a THV 550 IR camera. These IR images were then analysed with Thermo Fit Pro software to obtain the maximum absolute differential temperature signal, ΔTmax, and the time of its appearance, τmax(ΔT). The results showed that all defects could be detected, even the smallest and deepest one (diameter = 5 mm, depth = 4 mm). However, the highest differential temperature signal ΔTmax was obtained for the defect with the largest diameter, 20 mm, at the shallowest depth, 1 mm. In conclusion, for defects at the same depth the sensitivity of the pulsed thermography technique is proportional to the defect diameter, whereas for defects of similar diameter it is inversely related to the defect depth. (author)
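
    The quantities ΔTmax and τmax(ΔT) are simply the peak of the differential cooling curve and the time at which it occurs. The sketch below computes them from synthetic cooling curves; the curves are invented for illustration and do not reproduce the experiment.

        import numpy as np

        def differential_signal(t, T_defect, T_sound):
            """Maximum absolute differential temperature signal and its time of
            appearance, from cooling curves over a defect and over a sound area."""
            dT = np.asarray(T_defect) - np.asarray(T_sound)
            k = int(np.argmax(np.abs(dT)))
            return abs(dT[k]), t[k]                       # (delta_T_max, tau_max)

        t = np.linspace(0.0, 10.0, 501)                   # seconds after the heat pulse
        T_sound = 40.0 * np.exp(-t / 2.0)                 # synthetic sound-area cooling curve
        T_defect = T_sound + 1.5 * t * np.exp(-t / 1.5)   # heat trapped above a void
        dT_max, tau_max = differential_signal(t, T_defect, T_sound)
        print(f"dT_max = {dT_max:.2f} K at t = {tau_max:.2f} s")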

  12. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
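
    The Mean Energy Model mentioned above is easy to demonstrate numerically: under a fixed mean "energy", the maximum-entropy distribution over discrete states has the Gibbs form p_i ∝ exp(-βE_i), with β fixed by the constraint. The following sketch uses a small set of illustrative energy levels; it is not taken from the paper.

        import numpy as np
        from scipy.optimize import brentq

        def maxent_distribution(energies, mean_energy):
            """Maximum-entropy distribution subject to a fixed mean energy <E>.
            The solution is p_i ~ exp(-beta * E_i); beta is found by root-finding
            on the moment constraint."""
            E = np.asarray(energies, dtype=float)

            def avg_energy(beta):
                w = np.exp(-beta * (E - E.min()))          # shifted for numerical stability
                return float(np.sum(E * w) / np.sum(w))

            beta = brentq(lambda b: avg_energy(b) - mean_energy, -20.0, 20.0)
            w = np.exp(-beta * (E - E.min()))
            return w / w.sum(), beta

        p, beta = maxent_distribution([0.0, 1.0, 2.0, 3.0], mean_energy=1.2)
        print(np.round(p, 3), round(beta, 3))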

  13. Drawing the subsurface : an integrative design approach

    NARCIS (Netherlands)

    Hooimeijer, F.L.; Lafleur, F.; Trinh, T.T.; Gogu, Constantin Radu; Campbell, Diarmad; de Beer, Johannes

    2017-01-01

    The sub-surface, with its man-made and natural components, plays an important, if not crucial, role in the urban climate and global energy transition. On the one hand, the sub-surface is associated with a variety of challenges such as subsidence, pollution, damage to infrastructure and shortages of

  14. Extracting subsurface fingerprints using optical coherence tomography

    CSIR Research Space (South Africa)

    Akhoury, SS

    2015-02-01

    Physiologists have found ... approach to extract the subsurface fingerprint representation using a high-resolution imaging technology known as Optical Coherence Tomography (OCT). ...

  15. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  16. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  17. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  18. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  19. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.
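
    In its simplest discrete form, MAF reduces to a generalized eigenproblem: minimize the variance of lag-one differences relative to the total variance. The sketch below implements that generic version for observations ordered in time or space; it is not the paper's spline-based functional formulation.

        import numpy as np
        from scipy.linalg import eigh

        def maf(X):
            """Maximum autocorrelation factors for ordered observations.
            X has shape (n_ordered_samples, n_variables).  The weight vectors solve
            S_d w = lam * S w, where S_d is the covariance of lag-one differences and
            S the total covariance; factor i then has autocorrelation 1 - lam_i / 2."""
            Xc = X - X.mean(axis=0)
            S = np.cov(Xc, rowvar=False)
            S_d = np.cov(np.diff(Xc, axis=0), rowvar=False)
            lam, W = eigh(S_d, S)                      # ascending eigenvalues
            return Xc @ W, 1.0 - lam / 2.0             # factors, autocorrelations

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 300)
        X = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        X += 0.3 * rng.standard_normal(X.shape)
        factors, rho = maf(X)
        print("autocorrelations:", np.round(rho, 3))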

  20. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  1. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
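
    The correntropy objective at the heart of the two records above replaces the usual squared loss with a Gaussian-kernel similarity, so grossly mislabeled samples contribute little gradient. The fragment below is a simplified gradient-ascent stand-in with made-up data, not the alternating optimization algorithm of the papers.

        import numpy as np

        def fit_mcc_linear(X, y, lam=0.01, sigma=1.0, lr=0.5, iters=500):
            """Linear predictor trained under a regularized maximum-correntropy
            criterion: maximize mean(exp(-(y - Xw)^2 / (2*sigma^2))) - lam*||w||^2."""
            n, d = X.shape
            w = np.zeros(d)
            for _ in range(iters):
                err = y - X @ w
                g = np.exp(-err ** 2 / (2.0 * sigma ** 2)) * err / sigma ** 2
                w += lr * ((X.T @ g) / n - 2.0 * lam * w)   # gradient of the objective
            return w

        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
        y[:10] += 8.0                                       # a few grossly corrupted labels
        print(np.round(fit_mcc_linear(X, y), 2))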

  2. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  3. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
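
    The regularizer in these two records is the mutual information between the classifier's discrete responses and the true labels. A plug-in estimate from the empirical joint distribution (a normalized confusion matrix) is sketched below; the labels are synthetic and the estimator is generic, not the entropy-estimation scheme used in the papers.

        import numpy as np

        def mutual_information(y_true, y_pred, n_classes):
            """Plug-in mutual information (in nats) between true labels and
            discrete classifier responses, from their empirical joint distribution."""
            joint = np.zeros((n_classes, n_classes))
            for t, p in zip(y_true, y_pred):
                joint[t, p] += 1.0
            joint /= joint.sum()
            pt = joint.sum(axis=1, keepdims=True)          # marginal of true labels
            pp = joint.sum(axis=0, keepdims=True)          # marginal of predictions
            nz = joint > 0
            return float(np.sum(joint[nz] * np.log(joint[nz] / (pt @ pp)[nz])))

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, size=1000)
        good = np.where(rng.random(1000) < 0.9, y, 1 - y)  # informative predictor (90% agreement)
        rand = rng.integers(0, 2, size=1000)               # uninformative predictor
        print(mutual_information(y, good, 2), mutual_information(y, rand, 2))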

  4. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to extend this knowledge to other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from the SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments.

  5. Effects of rainfall patterns and land cover on the subsurface flow generation of sloping Ferralsols in southern China.

    Directory of Open Access Journals (Sweden)

    Jian Duan

    Rainfall patterns and land cover are two important factors that affect the runoff generation process. To determine the surface and subsurface flows associated with different rainfall patterns on sloping Ferralsols under different land cover types, observational data related to surface and subsurface flows from 5 m × 15 m plots were collected from 2010 to 2012. The experiment was conducted to assess three land cover types (grass, litter cover and bare land) in the Jiangxi Provincial Soil and Water Conservation Ecological Park. During the study period, 114 natural rainfall events produced subsurface flow and were divided into four groups using k-means clustering according to rainfall duration, rainfall depth and maximum 30-min rainfall intensity. The results showed that the total runoff and surface flow values were highest for bare land under all four rainfall patterns and lowest for the covered plots. However, covered plots generated higher subsurface flow values than bare land. Moreover, the surface and subsurface flows associated with the three land cover types differed significantly under different rainfall patterns. Rainfall patterns with low intensities and long durations created more subsurface flow in the grass and litter cover types, whereas rainfall patterns with high intensities and short durations resulted in greater surface flow over bare land. Rainfall pattern I had the highest surface and subsurface flow values for the grass cover and litter cover types. The highest surface flow value and lowest subsurface flow value for bare land occurred under rainfall pattern IV. Rainfall pattern II generated the highest subsurface flow value for bare land. Therefore, grass or litter cover is able to convert more surface flow into subsurface flow under different rainfall patterns. The rainfall patterns studied had greater effects on subsurface flow than on total runoff and surface flow for covered surfaces, as well as a greater effect on surface
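
    The grouping of rainfall events into four patterns is a standard k-means step on event duration, depth and maximum 30-min intensity. The sketch below reproduces that step on synthetic events; the feature values are placeholders, not the 114 observed events.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        events = np.column_stack([
            rng.gamma(2.0, 4.0, 114),    # duration (h)
            rng.gamma(2.0, 10.0, 114),   # rainfall depth (mm)
            rng.gamma(2.0, 6.0, 114),    # maximum 30-min intensity (mm/h)
        ])

        X = StandardScaler().fit_transform(events)         # put features on a common scale
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        for k in range(4):
            sel = labels == k
            print(f"pattern {k + 1}: {sel.sum():3d} events, mean depth {events[sel, 1].mean():5.1f} mm")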

  6. Microbial Diversity in Hydrothermal Surface to Sub-surface Environment of Suiyo Seamount

    Science.gov (United States)

    Higashi, Y.; Sunamura, M.; Kitamura, K.; Kurusu, Y.; Nakamura, K.; Maruyama, A.

    2002-12-01

    After excavation trials into a hydrothermal subsurface biosphere of the Suiyo Seamount, Izu-Bonin Arc, microbial diversity was examined using samples collected from drilled boreholes and natural vents with a catheter-type in situ microbial entrapment/incubator. This instrument consists of a heat-tolerant cylindrical pipe with a titanium-mesh capsule, containing sterilized inorganic porous grains, mounted on the tip. After 3-10 day deployments in venting fluids with maximum temperatures from 156 to 305 °C, microbial DNA was extracted from the grains and a 16S rDNA region was amplified and sequenced. Through the phylogenetic analysis of 72 Bacteria and 30 Archaea clones in total, we found three novel phylogenetic groups in this hydrothermal surface-to-subsurface biosphere. Some new clades within the epsilon-Proteobacteria, which seemed to be microaerophilic, moderately thermophilic and/or sulfur oxidizing, were detected. Clones related to moderately thermophilic and photosynthetic microbes were found in grain-attached samples at collapsed borehole and natural vent sites. We also detected a new clade closely related to a hyperthermophilic archaeon, Methanococcus jannaschii, which is capable of growing autotrophically on hydrogen and producing methane. However, the latter two phylogroups were estimated to be below the detection limit of microscopic cell counting, i.e., fluorescence in situ hybridization and direct counting. Most of the microbes in venting fluids were assigned to Bacteria but were difficult to identify with any known probes. The environment is thus notable as a microbial and genetic resource, while the ecosystem seems to be supported mainly by chemosynthetic production through microbial sulfur oxidation, as in most deep-sea hydrothermal systems.

  7. Modeling subsurface contamination at Fernald

    International Nuclear Information System (INIS)

    Jones, B.W.; Flinn, J.C.; Ruwe, P.R.

    1994-01-01

    The Department of Energy's Fernald site is located about 20 miles northwest of Cincinnati. Fernald produced refined uranium metal products from ores between 1953 and 1989. The pure uranium was sent to other DOE sites in South Carolina, Tennessee, Colorado, and Washington in support of the nation's strategic defense programs. Over the years of large-scale uranium production, contamination of the site's soil and groundwater occurred. The contamination is of particular concern because the Fernald site is located over the Great Miami Aquifer, a designated sole-source drinking water aquifer. Contamination of the aquifer with uranium was found beneath the site, and migration of the contamination had occurred well beyond the site's southern boundary. As a result, Fernald was placed on the National Priorities (CERCLA/Superfund) List in 1989. Uranium production at the site ended in 1989, and Fernald's mission has been changed to one of environmental restoration. This paper presents information about computerized modeling of subsurface contamination used for the environmental restoration project at Fernald.

  8. Modeling Subsurface Hydrology in Floodplains

    Science.gov (United States)

    Evans, Cristina M.; Dritschel, David G.; Singer, Michael B.

    2018-03-01

    Soil-moisture patterns in floodplains are highly dynamic, owing to the complex relationships between soil properties, climatic conditions at the surface, and the position of the water table. Given this complexity, along with climate change scenarios in many regions, there is a need for a model to investigate the implications of different conditions on water availability to riparian vegetation. We present a model, HaughFlow, which is able to predict coupled water movement in the vadose and phreatic zones of hydraulically connected floodplains. Model output was calibrated and evaluated at six sites in Australia to identify key patterns in subsurface hydrology. This study identifies the importance of the capillary fringe in vadose zone hydrology due to its water storage capacity and creation of conductive pathways. Following peaks in water table elevation, water can be stored in the capillary fringe for up to months (depending on the soil properties). This water can provide a critical resource for vegetation that is unable to access the water table. When water table peaks coincide with heavy rainfall events, the capillary fringe can support saturation of the entire soil profile. HaughFlow is used to investigate the water availability to riparian vegetation, producing daily output of water content in the soil over decadal time periods within different depth ranges. These outputs can be summarized to support scientific investigations of plant-water relations, as well as in management applications.

  9. Introduction: energy and the subsurface

    Science.gov (United States)

    Viswanathan, Hari S.

    2016-01-01

    This theme issue covers topics at the forefront of scientific research on energy and the subsurface, ranging from carbon dioxide (CO2) sequestration to the recovery of unconventional shale oil and gas resources through hydraulic fracturing. As such, the goal of this theme issue is to have an impact on the scientific community, broadly, by providing a self-contained collection of articles contributing to and reviewing the state-of-the-art of the field. This collection of articles could be used, for example, to set the next generation of research directions, while also being useful as a self-study guide for those interested in entering the field. Review articles are included on the topics of hydraulic fracturing as a multiscale problem, numerical modelling of hydraulic fracture propagation, the role of computational sciences in the upstream oil and gas industry and chemohydrodynamic patterns in porous media. Complementing the reviews is a set of original research papers covering growth models for branched hydraulic crack systems, fluid-driven crack propagation in elastic matrices, elastic and inelastic deformation of fluid-saturated rock, reaction front propagation in fracture matrices, the effects of rock mineralogy and pore structure on stress-dependent permeability of shales, topographic viscous fingering and plume dynamics in porous media convection. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597784

  10. Association between mean and interannual equatorial Indian Ocean subsurface temperature bias in a coupled model

    Science.gov (United States)

    Srinivas, G.; Chowdary, Jasti S.; Gnanaseelan, C.; Prasad, K. V. S. R.; Karmakar, Ananya; Parekh, Anant

    2018-03-01

    In the present study the association between mean and interannual subsurface temperature bias over the equatorial Indian Ocean (EIO) is investigated during boreal summer (June through September; JJAS) in the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFSv2) hindcast. Anomalously high subsurface warm bias (greater than 3 °C) over the eastern EIO (EEIO) region is noted in CFSv2 during summer, which is higher compared to other parts of the tropical Indian Ocean. Prominent eastward current bias in the upper 100 m over the EIO region induced by anomalous westerly winds is primarily responsible for subsurface temperature bias. The eastward currents transport warm water to the EEIO and is pushed down to subsurface due to downwelling. Thus biases in both horizontal and vertical currents over the EIO region support subsurface warm bias. The evolution of systematic subsurface warm bias in the model shows strong interannual variability. These maximum subsurface warming episodes over the EEIO are mainly associated with La Niña like forcing. Strong convergence of low level winds over the EEIO and Maritime continent enhanced the westerly wind bias over the EIO during maximum warming years. This low level convergence of wind is induced by the bias in the gradient in the mean sea level pressure with positive bias over western EIO and negative bias over EEIO and parts of western Pacific. Consequently, changes in the atmospheric circulation associated with La Niña like conditions affected the ocean dynamics by modulating the current bias thereby enhancing the subsurface warm bias over the EEIO. It is identified that EEIO subsurface warming is stronger when La Niña co-occurred with negative Indian Ocean Dipole events as compared to La Niña only years in the model. Ocean general circulation model (OGCM) experiments forced with CFSv2 winds clearly support our hypothesis that ocean dynamics influenced by westerly winds bias is primarily

  11. Data inversion in coupled subsurface flow and geomechanics models

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2012-01-01

    We present an inverse modeling approach to estimate petrophysical and elastic properties of the subsurface. The aim is to use the fully coupled geomechanics-flow model of Girault et al (2011 Math. Models Methods Appl. Sci. 21 169–213) to jointly invert surface deformation and pressure data from wells. We use a functional-analytic framework to construct a forward operator (parameter-to-output map) that arises from the geomechanics-flow model of Girault et al. Then, we follow a deterministic approach to pose the inverse problem of finding parameter estimates from measurements of the output of the forward operator. We prove that this inverse problem is ill-posed in the sense of stability. The inverse problem is then regularized with the implementation of the Newton-conjugate gradient (CG) algorithm of Hanke (1997 Numer. Funct. Anal. Optim. 18 971–993). For a consistent application of the Newton-CG scheme, we establish the differentiability of the forward map and characterize the adjoint of its linearization. We provide assumptions under which the theory of Hanke ensures convergence and regularizing properties of the Newton-CG scheme. These properties are verified in our numerical experiments. In addition, our synthetic experiments display the capabilities of the proposed inverse approach to estimate parameters of the subsurface by means of data inversion. In particular, the added value of measurements of surface deformation in the estimation of absolute permeability is quantified with respect to the standard history matching approach of inverting production data with flow models. The proposed methodology can be potentially used to invert satellite geodetic data (e.g. InSAR and GPS) in combination with production data for optimal monitoring and characterization of the subsurface. (paper)
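
    The regularizing Newton-CG iteration referred to above can be outlined generically: each outer step solves the linearized normal equations approximately by conjugate gradients, and the outer loop stops once the data misfit reaches the noise level (discrepancy principle). The toy forward map, noise level and iteration counts below are illustrative assumptions, not the coupled geomechanics-flow implementation of the paper.

        import numpy as np
        from scipy.sparse.linalg import cg

        def newton_cg_inversion(forward, jacobian, d_obs, m0, noise_level,
                                outer_iters=20, tau=1.1):
            """Schematic regularizing Newton-CG: truncated inner CG on the
            linearized normal equations, outer stop by the discrepancy principle."""
            m = m0.copy()
            for _ in range(outer_iters):
                r = d_obs - forward(m)
                if np.linalg.norm(r) <= tau * noise_level:
                    break
                J = jacobian(m)
                # Truncated inner CG solve of J^T J dm = J^T r (matrix-free in practice).
                dm, _ = cg(J.T @ J, J.T @ r, maxiter=10)
                m = m + dm
            return m

        # Toy nonlinear forward map and its Jacobian (illustrative only).
        A = np.array([[1.0, 0.4], [0.2, 1.5], [0.7, 0.3]])
        forward = lambda m: A @ m + 0.05 * (A @ m) ** 2
        jacobian = lambda m: A * (1.0 + 0.1 * (A @ m))[:, None]
        m_true = np.array([2.0, -1.0])
        d_obs = forward(m_true) + 0.01 * np.random.default_rng(0).standard_normal(3)
        print(newton_cg_inversion(forward, jacobian, d_obs, np.zeros(2), noise_level=0.02))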

  12. SUBSURFACE REPOSITORY INTEGRATED CONTROL SYSTEM DESIGN

    International Nuclear Information System (INIS)

    Randle, D.C.

    2000-01-01

    The primary purpose of this document is to develop a preliminary high-level functional and physical control system architecture for the potential repository at Yucca Mountain. This document outlines an overall control system concept that encompasses and integrates the many diverse process and communication systems being developed for the subsurface repository design. This document presents integrated design concepts for monitoring and controlling the diverse set of subsurface operations. The Subsurface Repository Integrated Control System design will be composed of a series of diverse process systems and communication networks. The subsurface repository design contains many systems related to instrumentation and control (I&C) for both repository development and waste emplacement operations. These systems include waste emplacement, waste retrieval, ventilation, radiological and air monitoring, rail transportation, construction development, utility systems (electrical, lighting, water, compressed air, etc.), fire protection, backfill emplacement, and performance confirmation. Each of these systems involves some level of I&C and will typically be integrated over a data communications network throughout the subsurface facility. The subsurface I&C systems will also interface with multiple surface-based systems such as site operations, rail transportation, security and safeguards, and electrical/piped utilities. In addition to the I&C systems, the subsurface repository design also contains systems related to voice and video communications. The components for each of these systems will be distributed and linked over voice and video communication networks throughout the subsurface facility. The scope and primary objectives of this design analysis are to: (1) Identify preliminary system-level functions and interfaces (Section 6.2). (2) Examine the overall system complexity and determine how and on what levels the engineered process systems will be monitored

  13. DOE UST interim subsurface barrier technologies workshop

    International Nuclear Information System (INIS)

    1992-09-01

    This document contains information which was presented at a workshop regarding interim subsurface barrier technologies that could be used for underground storage tanks, particularly the tank 241-C-106 at the Hanford Reservation

  14. Design and maintenance of subsurface gravel wetlands.

    Science.gov (United States)

    2015-02-01

    This report summarizes the University of New Hampshire Stormwater Center (UNHSC) evaluation of a review of Subsurface Gravel Wetlands design and specifications used by the New Hampshire Department of Transportation (NHDOT or Department). Subsur...

  15. Component-based framework for subsurface simulations

    International Nuclear Information System (INIS)

    Palmer, B J; Fang, Yilin; Hammond, Glenn; Gurumoorthi, Vidhya

    2007-01-01

    Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges both from the algorithmic and computer science perspective. This paper will describe the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks, one for performing smooth particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for doing pore scale simulations; the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the frameworks together to perform multiscale, multiphysics simulations of reactive subsurface flow.

  16. Subsurface Prospecting by Planetary Drones, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed program innovates subsurface prospecting by planetary drones to seek a solution to the difficulty of robotic prospecting, sample acquisition, and sample...

  17. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  18. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  19. Joint inversion of geophysical and hydrological data for improved subsurface characterization

    International Nuclear Information System (INIS)

    Kowalsky, Michael B.; Chen, Jinsong; Hubbard, Susan S.

    2006-01-01

    Understanding fluid distribution and movement in the subsurface is critical for a variety of subsurface applications, such as remediation of environmental contaminants, sequestration of nuclear waste and CO2, intrusion of saline water into fresh water aquifers, and the production of oil and gas. It is well recognized that characterizing the properties that control fluids in the subsurface with the accuracy and spatial coverage needed to parameterize flow and transport models is challenging using conventional borehole data alone. Integration of conventional borehole data with more spatially extensive geophysical data (obtained from the surface, between boreholes, and from surface to boreholes) shows promise for providing quantitative information about subsurface properties and processes. Typically, estimation of subsurface properties involves a two-step procedure in which geophysical data are first inverted and then integrated with direct measurements and petrophysical relationship information to estimate hydrological parameters. However, errors inherent to geophysical data acquisition and inversion approaches and errors associated with petrophysical relationships can decrease the value of geophysical data in the estimation procedure. In this paper, we illustrate using two examples how joint inversion approaches, or simultaneous inversion of geophysical and hydrological data, offer great potential for overcoming some of these limitations

  20. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
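
    As a highly simplified illustration of the estimator-correlator idea (this is not the patented MAP decoder itself; the signal set and noise model below are invented for the example), the final decision step amounts to computing a statistic for each hypothesized transmission and picking the largest:

        import numpy as np

        def decode_by_correlation(received, hypotheses):
            # Correlate the received complex baseband samples against each hypothesized
            # signal and return the index with the largest correlation magnitude
            # (a stand-in for the MAP likelihood statistic of the record).
            stats = [abs(np.vdot(h, received)) for h in hypotheses]
            return int(np.argmax(stats)), stats

        # toy usage: two random binary phase codes, the second transmitted with small
        # random phase perturbations plus additive complex noise
        rng = np.random.default_rng(0)
        codes = [np.exp(1j * np.pi * rng.integers(0, 2, 64)) for _ in range(2)]
        rx = (codes[1] * np.exp(1j * 0.1 * rng.standard_normal(64))
              + 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)))
        best, _ = decode_by_correlation(rx, codes)   # best is expected to be 1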

  1. Pedological criteria for estimating the importance of subsurface ...

    African Journals Online (AJOL)

    2012-10-31

    Oct 31, 2012 ... strongly expressed in the profiles of the 'high' and 'low' SLFI groups, respectively. This concept ... ii) surface soil is permeable, iii) a water-impeding layer is ... to Fe2+ and Mn2+. The latter are soluble resulting in leaching of.

  2. Subsurface Sampling and Sensing Using Burrowing Moles

    Science.gov (United States)

    Stoker, C. R.; Richter, L.; Smith, W. H.

    2004-01-01

    Finding evidence for life on Mars will likely require accessing the subsurface since the Martian surface is both hostile to life and to preservation of biosignatures due to the cold dry conditions, the strong UV environment, and the presence of strong oxidants. Systems are needed to probe beneath the sun and oxidant baked surface of Mars and return samples to the surface for analysis or to bring the instrument sensing underground. Recognizing this need, the European Space Agency incorporated a small subsurface penetrometer or Mole onto the Beagle 2 Mars lander. Had the 2003 landing been successful, the Mole would have collected samples from 1-1.5 m depth and delivered them to an organic analysis instrument on the surface. The device, called the Planetary Underground Tool (PLUTO), also measured soil mechanical and thermophysical properties. Constrained by the small mass and volume allowance of the Beagle lander, the PLUTO mole was a slender cylinder only 2 cm diameter and 28 cm long equipped with a small sampling device designed to collect samples and bring them to the surface for analysis by other instruments. The mass of the entire system including deployment mechanism and tether was 1/2 kg. An alternative to returning samples is to bring a sensor package underground to make in situ measurements. The Mars Underground Mole (MUM) is a larger Mole based on the PLUTO design but incorporating light collection optics that interface to a fiber optic cable in the tether that transmits light to a combined stimulated emission Raman Spectrometer and Short Wave Infrared (SWIR) reflectance Spectrometer with sensitivity from 0.7 to 2.5 micrometers. This instrument is called the Dual Spectral Sensor and uses a Digital Array Scanning Interferometer as the sensor technology, a type of Fourier transform interferometer that uses fixed element prisms and thus is highly rugged compared to a Michelson interferometer. Due to the size limitations of an on-Mole instrument compartment, and the availability of a tether, the sensor head

  3. Review of Constructed Subsurface Flow vs. Surface Flow Wetlands

    International Nuclear Information System (INIS)

    HALVERSON, NANCY

    2004-01-01

    The purpose of this document is to use existing documentation to review the effectiveness of subsurface flow and surface flow constructed wetlands in treating wastewater and to demonstrate the viability of treating effluent from Savannah River Site outfalls H-02 and H-04 with a subsurface flow constructed wetland to lower copper, lead and zinc concentrations to within National Pollutant Discharge Elimination System (NPDES) Permit limits. Constructed treatment wetlands are engineered systems that have been designed and constructed to use the natural functions of wetlands for wastewater treatment. Constructed wetlands have significantly lower total lifetime costs and often lower capital costs than conventional treatment systems. The two main types of constructed wetlands are surface flow and subsurface flow. In surface flow constructed wetlands, water flows above ground. Subsurface flow constructed wetlands are designed to keep the water level below the top of the rock or gravel media, thus minimizing human and ecological exposure. Subsurface flow wetlands demonstrate higher rates of contaminant removal per unit of land than surface flow (free water surface) wetlands, therefore subsurface flow wetlands can be smaller while achieving the same level of contaminant removal. Wetlands remove metals using a variety of processes including filtration of solids, sorption onto organic matter, oxidation and hydrolysis, formation of carbonates, formation of insoluble sulfides, binding to iron and manganese oxides, reduction to immobile forms by bacterial activity, and uptake by plants and bacteria. Metal removal rates in both subsurface flow and surface flow wetlands can be high, but can vary greatly depending upon the influent concentrations and the mass loading rate. Removal rates of greater than 90 per cent for copper, lead and zinc have been demonstrated in operating surface flow and subsurface flow wetlands. The constituents that exceed NPDES limits at outfalls H-02 and H

  4. Intelligent SUBsurface Quality : Intelligent use of subsurface infrastructure for surface quality

    NARCIS (Netherlands)

    Hooimeijer, F.L.; Kuzniecow Bacchin, T.; Lafleur, F.; van de Ven, F.H.M.; Clemens, F.H.L.R.; Broere, W.; Laumann, S.J.; Klaassen, R.G.; Marinetti, C.

    2016-01-01

    This project focuses on the urban renewal of (delta) metropolises and concentrates on the question how to design resilient, durable (subsurface) infrastructure in urban renewal projects using parameters of the natural system – linking in an efficient way (a) water cycle, (b) soil and subsurface

  5. Developing a trend prediction model of subsurface damage for fixed-abrasive grinding of optics by cup wheels.

    Science.gov (United States)

    Dong, Zhichao; Cheng, Haobo

    2016-11-10

    Fixed-abrasive grinding by cup wheels plays an important role in the production of precision optics. During cup wheel grinding, we strive for a large removal rate while maintaining fine integrity on the surface and subsurface layers (academically recognized as surface roughness and subsurface damage, respectively). This study develops a theoretical model used to predict the trend of subsurface damage of optics (with respect to various grinding parameters) in fixed-abrasive grinding by cup wheels. It is derived from the maximum undeformed chip thickness model, and it successfully correlates the pivotal parameters of cup wheel grinding with the subsurface damage depth. The efficiency of this model is then demonstrated by a set of experiments performed on a cup wheel grinding machine. In these experiments, the characteristics of subsurface damage are inspected by a wedge-polishing plus microscopic inspection method, revealing that the subsurface damage induced in cup wheel grinding is composed of craterlike morphologies and slender cracks, with depth ranging from ∼6.2 to ∼13.2  μm under the specified grinding parameters. With the help of the proposed model, an optimized grinding strategy is suggested for realizing fine subsurface integrity as well as high removal rate, which can alleviate the workload of subsequent lapping and polishing.
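
    For background, the maximum undeformed chip thickness invoked above is usually quoted, for conventional grinding kinematics, in a form similar to the following (this textbook relation and its symbols are given here for context and are not taken from the paper, whose cup-wheel adaptation differs in detail):

        \[ h_{\max} \approx \left[ \frac{4\,v_w}{C\,r\,v_s} \sqrt{\frac{a_e}{d_s}} \right]^{1/2} \]

    where v_w is the workpiece feed speed, v_s the wheel speed, C the number of active cutting grains per unit area, r the chip width-to-thickness ratio, a_e the depth of cut and d_s the wheel diameter; a larger h_max generally goes with deeper subsurface damage, which is the trend such models exploit.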

  6. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  7. SUBSURFACE REPOSITORY INTEGRATED CONTROL SYSTEM DESIGN

    International Nuclear Information System (INIS)

    C.J. Fernado

    1998-01-01

    The purpose of this document is to develop preliminary high-level functional and physical control system architectures for the proposed subsurface repository at Yucca Mountain. This document outlines overall control system concepts that encompass and integrate the many diverse systems being considered for use within the subsurface repository. This document presents integrated design concepts for monitoring and controlling the diverse set of subsurface operations. The subsurface repository design will be composed of a series of diverse systems that will be integrated to accomplish a set of overall functions and objectives. The subsurface repository contains several Instrumentation and Control (I&C) related systems including: waste emplacement systems, ventilation systems, communication systems, radiation monitoring systems, rail transportation systems, ground control monitoring systems, utility monitoring systems (electrical, lighting, water, compressed air, etc.), fire detection and protection systems, retrieval systems, and performance confirmation systems. Each of these systems involves some level of I&C and will typically be integrated over a data communication network. The subsurface I&C systems will also integrate with multiple surface-based site-wide systems such as emergency response, health physics, security and safeguards, communications, utilities and others. The scope and primary objectives of this analysis are to: (1) Identify preliminary system level functions and interface needs (Presented in the functional diagrams in Section 7.2). (2) Examine the overall system complexity and determine how and on what levels these control systems will be controlled and integrated (Presented in Section 7.2). (3) Develop a preliminary subsurface facility-wide design for an overall control system architecture, and depict this design by a series of control system functional block diagrams (Presented in Section 7.2). (4) Develop a series of physical architectures

  8. SUBSURFACE REPOSITORY INTEGRATED CONTROL SYSTEM DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    C.J. Fernado

    1998-09-17

    The purpose of this document is to develop preliminary high-level functional and physical control system architectures for the proposed subsurface repository at Yucca Mountain. This document outlines overall control system concepts that encompass and integrate the many diverse systems being considered for use within the subsurface repository. This document presents integrated design concepts for monitoring and controlling the diverse set of subsurface operations. The subsurface repository design will be composed of a series of diverse systems that will be integrated to accomplish a set of overall functions and objectives. The subsurface repository contains several Instrumentation and Control (I&C) related systems including: waste emplacement systems, ventilation systems, communication systems, radiation monitoring systems, rail transportation systems, ground control monitoring systems, utility monitoring systems (electrical, lighting, water, compressed air, etc.), fire detection and protection systems, retrieval systems, and performance confirmation systems. Each of these systems involves some level of I&C and will typically be integrated over a data communication network. The subsurface I&C systems will also integrate with multiple surface-based site-wide systems such as emergency response, health physics, security and safeguards, communications, utilities and others. The scope and primary objectives of this analysis are to: (1) Identify preliminary system level functions and interface needs (Presented in the functional diagrams in Section 7.2). (2) Examine the overall system complexity and determine how and on what levels these control systems will be controlled and integrated (Presented in Section 7.2). (3) Develop a preliminary subsurface facility-wide design for an overall control system architecture, and depict this design by a series of control system functional block diagrams (Presented in Section 7.2). (4) Develop a series of physical architectures that

  9. Shuttle Imaging Radar - Physical controls on signal penetration and subsurface scattering in the Eastern Sahara

    Science.gov (United States)

    Schaber, G. G.; Mccauley, J. F.; Breed, C. S.; Olhoeft, G. R.

    1986-01-01

    Interpretation of Shuttle Imaging Radar-A (SIR-A) images by McCauley et al. (1982) dramatically changed previous concepts of the role that fluvial processes have played over the past 10,000 to 30 million years in shaping this now extremely flat, featureless, and hyperarid landscape. In the present paper, the near-surface stratigraphy, the electrical properties of materials, and the types of radar interfaces found to be responsible for different classes of SIR-A tonal response are summarized. The dominant factors related to efficient microwave signal penetration into the sediment blanket include (1) favorable distribution of particle sizes, (2) extremely low moisture content and (3) reduced geometric scattering at the SIR-A frequency (1.3 GHz). The depth of signal penetration that results in a recorded backscatter, here called 'radar imaging depth', was documented in the field to be a maximum of 1.5 m, or 0.25 of the calculated 'skin depth', for the sediment blanket. Radar imaging depth is estimated to be between 2 and 3 m for active sand dune materials. Diverse permittivity interfaces and volume scatterers within the shallow subsurface are responsible for most of the observed backscatter not directly attributable to grazing outcrops. Calcium carbonate nodules and rhizoliths concentrated in sandy alluvium of Pleistocene age south of Safsaf oasis in south Egypt provide effective contrast in permittivity and thus act as volume scatterers that enhance SIR-A portrayal of younger inset stream channels.

  10. SHUTTLE IMAGING RADAR: PHYSICAL CONTROLS ON SIGNAL PENETRATION AND SUBSURFACE SCATTERING IN THE EASTERN SAHARA.

    Science.gov (United States)

    Schaber, Gerald G.; McCauley, John F.; Breed, Carol S.; Olhoeft, Gary R.

    1986-01-01

    It is found that the Shuttle Imaging Radar A (SIR-A) signal penetration and subsurface backscatter within the upper meter or so of the sediment blanket in the Eastern Sahara of southern Egypt and northern Sudan are enhanced both by radar sensor parameters and by the physical and chemical characteristics of eolian and alluvial materials. The near-surface stratigraphy, the electrical properties of materials, and the types of radar interfaces found to be responsible for different classes of SIR-A tonal response are summarized. The dominant factors related to efficient microwave signal penetration into the sediment blanket include 1) favorable distribution of particle sizes, 2) extremely low moisture content and 3) reduced geometric scattering at the SIR-A frequency (1.3 GHz). The depth of signal penetration that results in a recorded backscatter, called radar imaging depth, was documented in the field to be a maximum of 1.5 m, or 0.25 times the calculated skin depth, for the sediment blanket. The radar imaging depth is estimated to be between 2 and 3 m for active sand dune materials.
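
    For reference, the skin depth mentioned in both of these records is, for a low-loss dielectric such as dry desert sediment, commonly approximated by the standard relation (quoted here for context, not taken from the records):

        \[ \delta_s \approx \frac{\lambda_0}{\pi}\,\frac{\sqrt{\varepsilon'_r}}{\varepsilon''_r} = \frac{\lambda_0}{\pi \sqrt{\varepsilon'_r}\,\tan\delta} \]

    where \lambda_0 is the free-space wavelength, \varepsilon'_r and \varepsilon''_r are the real and imaginary parts of the relative permittivity, and \tan\delta is the loss tangent; an imaging depth of roughly a quarter of this value is consistent with the additional two-way path loss and detectability requirements of an imaging radar.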

  11. Subsurface barrier verification technologies, informal report

    International Nuclear Information System (INIS)

    Heiser, J.H.

    1994-06-01

    One of the more promising remediation options available to the DOE waste management community is subsurface barriers. Some of the uses of subsurface barriers include surrounding and/or containing buried waste, as secondary confinement of underground storage tanks, to direct or contain subsurface contaminant plumes and to restrict remediation methods, such as vacuum extraction, to a limited area. To be most effective, the barriers should be continuous and, depending on use, have few or no breaches. A breach may be formed through numerous pathways, including discontinuous grout application, joints between panels, and cracking due to grout curing or wet-dry cycling. The ability to verify barrier integrity is valuable to the DOE, EPA, and commercial sector and will be required to gain full public acceptance of subsurface barriers as either primary or secondary confinement at waste sites. It is recognized that no suitable method exists for the verification of an emplaced barrier's integrity. The large size and deep placement of subsurface barriers make detection of leaks challenging. This becomes magnified if the permissible leakage from the site is low. Detection of small cracks (fractions of an inch) at depths of 100 feet or more has not been possible using existing surface geophysical techniques. Compounding the problem of locating flaws in a barrier is the fact that no placement technology can guarantee the completeness or integrity of the emplaced barrier. This report summarizes several commonly used or promising technologies that have been or may be applied to in-situ barrier continuity verification

  12. Subsurface Science Program Bibliography, 1985--1992

    International Nuclear Information System (INIS)

    1992-08-01

    The Subsurface Science Program sponsors long-term basic research on (1) the fundamental physical, chemical, and biological mechanisms that control the reactivity, mobilization, stability, and transport of chemical mixtures in subsoils and ground water; (2) hydrogeology, including the hydraulic, microbiological, and geochemical properties of the vadose and saturated zones that control contaminant mobility and stability, including predictive modeling of coupled hydraulic-geochemical-microbial processes; and (3) the microbiology of deep sediments and ground water. This research, focused as it is on the natural subsurface environments that are most significantly affected by the more than 40 years of waste generation and disposal at DOE sites, is making important contributions to cleanup of DOE sites. Past DOE waste-disposal practices have resulted in subsurface contamination at DOE sites by unique combinations of radioactive materials and organic and inorganic chemicals (including heavy metals), which make site cleanup particularly difficult. The long-term (10- to 30-year) goal of the Subsurface Science Program is to provide a foundation of fundamental knowledge that can be used to reduce environmental risks and to provide a sound scientific basis for cost-effective cleanup strategies. The Subsurface Science Program is organized into nine interdisciplinary subprograms, or areas of basic research emphasis. The subprograms currently cover the areas of Co-Contaminant Chemistry, Colloids/Biocolloids, Multiphase Fluid Flow, Biodegradation/Microbial Physiology, Deep Microbiology, Coupled Processes, Field-Scale (Natural Heterogeneity and Scale), and Environmental Science Research Center

  13. Subsurface Shielding Source Term Specification Calculation

    International Nuclear Information System (INIS)

    S.Su

    2001-01-01

    The purpose of this calculation is to establish appropriate and defensible waste-package radiation source terms for use in repository subsurface shielding design. This calculation supports the shielding design for the waste emplacement and retrieval system, and subsurface facility system. The objective is to identify the limiting waste package and specify its associated source terms including source strengths and energy spectra. Consistent with the Technical Work Plan for Subsurface Design Section FY 01 Work Activities (CRWMS M&O 2001, p. 15), the scope of work includes the following: (1) Review source terms generated by the Waste Package Department (WPD) for various waste forms and waste package types, and compile them for shielding-specific applications. (2) Determine acceptable waste package specific source terms for use in subsurface shielding design, using a reasonable and defensible methodology that is not unduly conservative. This calculation is associated with the engineering and design activity for the waste emplacement and retrieval system, and subsurface facility system. The technical work plan for this calculation is provided in CRWMS M&O 2001. Development and performance of this calculation conforms to the procedure, AP-3.12Q, Calculations

  14. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
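
    As a minimal sketch of the kind of estimation described in this chapter (not its actual example; the reaction, data and numbers below are invented), a first-order rate constant can be recovered by coupling a dynamic solution of the model with least squares in Python:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        # synthetic concentration data for a first-order reaction A -> B, true k = 0.35
        t_obs = np.linspace(0.0, 10.0, 11)
        c_obs = np.exp(-0.35 * t_obs) + 0.01 * np.random.default_rng(1).standard_normal(t_obs.size)

        def residuals(k):
            # dynamic solution of the underlying model: integrate dC/dt = -k*C
            sol = solve_ivp(lambda t, c: -k[0] * c, (0.0, 10.0), [1.0], t_eval=t_obs)
            return sol.y[0] - c_obs

        fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
        k_hat = fit.x[0]   # should land near the true value of 0.35

    Under independent Gaussian measurement errors this least-squares fit coincides with the maximum likelihood estimate mentioned in the record.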

  15. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  16. Subsurface Contaminants Focus Area annual report 1997

    International Nuclear Information System (INIS)

    1997-01-01

    In support of its vision for technological excellence, the Subsurface Contaminants Focus Area (SCFA) has identified three strategic goals. The three goals of the SCFA are: Contain and/or stabilize contamination sources that pose an imminent threat to surface and ground waters; Delineate DNAPL contamination in the subsurface and remediate DNAPL-contaminated soils and ground water; and Remove a full range of metal and radionuclide contamination in soils and ground water. To meet the challenges of remediating subsurface contaminants in soils and ground water, SCFA funded more than 40 technologies in fiscal year 1997. These technologies are grouped according to the following product lines: Dense Nonaqueous-Phase Liquids; Metals and Radionuclides; Source Term Containment; and Source Term Remediation. This report briefly describes the SCFA 1997 technologies and showcases a few key technologies in each product line

  17. Complete Subsurface Elemental Composition Measurements With PING

    Science.gov (United States)

    Parsons, A. M.

    2012-01-01

    The Probing In situ with Neutrons and Gamma rays (PING) instrument will measure the complete bulk elemental composition of the subsurface of Mars as well as any other solid planetary body. PING can thus be a highly effective tool for both detailed local geochemistry science investigations and precision measurements of Mars subsurface resources in preparation for future human exploration. As such, PING is thus fully capable of meeting a majority of both near and far term elements in Challenge #1 presented for this conference. Measuring the near subsurface composition of Mars will enable many of the MEPAG science goals and will be key to filling an important Strategic Knowledge Gap with regard to In situ Resources Utilization (ISRU) needs for human exploration. [1, 2] PING will thus fill an important niche in the Mars Exploration Program.

  18. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
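
    The standard formulation mentioned above (entropy maximization subject to origin and destination constraints) is usually solved by iterative balancing; a minimal sketch with made-up zone totals and travel costs follows in Python:

        import numpy as np

        def entropy_trip_distribution(O, D, cost, beta, iters=200):
            # Doubly constrained entropy-maximizing model:
            #   T_ij = A_i * B_j * O_i * D_j * exp(-beta * c_ij)
            # with balancing factors A, B adjusted until row sums match O and column sums match D.
            f = np.exp(-beta * cost)              # deterrence function
            A = np.ones(len(O))
            B = np.ones(len(D))
            for _ in range(iters):
                A = 1.0 / (f @ (B * D))
                B = 1.0 / (f.T @ (A * O))
            return np.outer(A * O, B * D) * f

        # toy example: three origins and three destinations with equal grand totals
        O = np.array([100.0, 200.0, 50.0])
        D = np.array([150.0, 120.0, 80.0])
        cost = np.array([[1.0, 2.0, 3.0],
                         [2.0, 1.0, 2.0],
                         [3.0, 2.0, 1.0]])
        T = entropy_trip_distribution(O, D, cost, beta=0.5)   # T.sum(axis=1) ~ O, T.sum(axis=0) ~ D

    The record's dependence formulation removes these explicit constraints, but, as the abstract notes, the two formulations are demonstrated to be equivalent.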

  19. Improving the biodegradative capacity of subsurface bacteria

    International Nuclear Information System (INIS)

    Romine, M.F.; Brockman, F.J.

    1993-04-01

    The continual release of large volumes of synthetic materials into the environment by agricultural and industrial sources over the last few decades has resulted in pollution of the subsurface environment. Cleanup has been difficult because of the relative inaccessibility of the contaminants caused by their wide dispersal in the deep subsurface, often at low concentrations and in large volumes. As a possible solution for these problems, interest in the introduction of biodegradative bacteria for in situ remediation of these sites has increased greatly in recent years (Timmis et al. 1988). Selection of biodegradative microbes to apply in such cleanup is limited to those strains that can survive among the native bacterial and predator community members at the particular pH, temperature, and moisture status of the site (Alexander, 1984). The use of microorganisms isolated from subsurface environments would be advantageous because the organisms are already adapted to the subsurface conditions. The options are further narrowed to strains that are able to degrade the contaminant rapidly, even in the presence of highly recalcitrant anthropogenic waste mixtures, and in conditions that do not require addition of further toxic compounds for the expression of the biodegradative capacity (Sayler et al. 1990). These obstacles can be overcome by placing the genes of well-characterized biodegradative enzymes under the control of promoters that can be regulated by inexpensive and nontoxic external factors and then moving the new genetic constructs into diverse groups of subsurface microbes. The objective of this research is to test this hypothesis by comparing expression of two different toluene biodegradative enzymatic pathways from two different regulatable promoters in a variety of subsurface isolates

  20. MSTS - Multiphase Subsurface Transport Simulator theory manual

    International Nuclear Information System (INIS)

    White, M.D.; Nichols, W.E.

    1993-05-01

    The US Department of Energy, through the Yucca Mountain Site Characterization Project Office, has designated the Yucca Mountain site in Nevada for detailed study as the candidate US geologic repository for spent nuclear fuel and high-level radioactive waste. Site characterization will determine the suitability of the Yucca Mountain site for the potential waste repository. If the site is determined suitable, subsequent studies and characterization will be conducted to obtain authorization from the Nuclear Regulatory Commission to construct the potential waste repository. A principal component of the characterization and licensing processes involves numerically predicting the thermal and hydrologic response of the subsurface environment of the Yucca Mountain site to the potential repository over a 10,000-year period. The thermal and hydrologic response of the subsurface environment to the repository is anticipated to include complex processes of countercurrent vapor and liquid migration, multiple-phase heat transfer, multiple-phase transport, and geochemical reactions. Numerical simulators based on mathematical descriptions of these subsurface phenomena are required to make numerical predictions of the thermal and hydrologic response of the Yucca Mountain subsurface environment. The engineering simulator called the Multiphase Subsurface Transport Simulator (MSTS) was developed at the request of the Yucca Mountain Site Characterization Project Office to produce numerical predictions of subsurface flow and transport phenomena at the potential Yucca Mountain site. This document delineates the design architecture and describes the specific computational algorithms that compose MSTS. Details for using MSTS and sample problems are given in the "User's Guide and Reference" companion document

  1. Heating systems for heating subsurface formations

    Science.gov (United States)

    Nguyen, Scott Vinh [Houston, TX]; Vinegar, Harold J [Bellaire, TX]

    2011-04-26

    Methods and systems for heating a subsurface formation are described herein. A heating system for a subsurface formation includes a sealed conduit positioned in an opening in the formation and a heat source. The sealed conduit includes a heat transfer fluid. The heat source provides heat to a portion of the sealed conduit to change phase of the heat transfer fluid from a liquid to a vapor. The vapor in the sealed conduit rises in the sealed conduit, condenses to transfer heat to the formation and returns to the conduit portion as a liquid.

  2. Cotton, tomato, corn, and onion production with subsurface drip irrigation – a review

    Science.gov (United States)

    The usage of subsurface drip irrigation (SDI) has increased by 89% in the USA during the last ten years according to USDA NASS estimates and over 93% of the SDI land area is located in just ten states. Combining public entity and private industry perceptions of SDI in these ten states, the major cro...

  3. Subsurface clade of Geobacteraceae that predominates in a diversity of Fe(III)-reducing subsurface environments

    Science.gov (United States)

    Holmes, Dawn E.; O'Neil, Regina A.; Vrionis, Helen A.; N'Guessan, Lucie A.; Ortiz-Bernad, Irene; Larrahondo, Maria J.; Adams, Lorrie A.; Ward, Joy A.; Nicoll, Julie S.; Nevin, Kelly P.; Chavan, Milind A.; Johnson, Jessica P.; Long, Philip E.; Lovely, Derek R.

    2007-01-01

    There are distinct differences in the physiology of Geobacter species available in pure culture. Therefore, to understand the ecology of Geobacter species in subsurface environments, it is important to know which species predominate. Clone libraries were assembled with 16S rRNA genes and transcripts amplified from three subsurface environments in which Geobacter species are known to be important members of the microbial community: (1) a uranium-contaminated aquifer located in Rifle, CO, USA undergoing in situ bioremediation; (2) an acetate-impacted aquifer that serves as an analog for the long-term acetate amendments proposed for in situ uranium bioremediation and (3) a petroleum-contaminated aquifer in which Geobacter species play a role in the oxidation of aromatic hydrocarbons coupled with the reduction of Fe(III). The majority of Geobacteraceae 16S rRNA sequences found in these environments clustered in a phylogenetically coherent subsurface clade, which also contains a number of Geobacter species isolated from subsurface environments. Concatamers constructed with 43 Geobacter genes amplified from these sites also clustered within this subsurface clade. 16S rRNA transcript and gene sequences in the sediments and groundwater at the Rifle site were highly similar, suggesting that sampling groundwater via monitoring wells can recover the most active Geobacter species. These results suggest that further study of Geobacter species in the subsurface clade is necessary to accurately model the behavior of Geobacter species during subsurface bioremediation of metal and organic contaminants.

  4. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
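
    The core of the problem is a linear mixing model (measured spectrum ≈ cracking-pattern matrix times concentrations); a plain non-negative least-squares fit illustrates this un-regularized core (this is not the GME method itself, and the cracking patterns and spectrum below are invented):

        import numpy as np
        from scipy.optimize import nnls

        # rows = mass channels, columns = candidate gases (invented cracking patterns)
        A = np.array([[0.70, 0.10],
                      [0.20, 0.50],
                      [0.10, 0.40]])
        rng = np.random.default_rng(2)
        true_conc = np.array([0.3, 0.7])
        y = A @ true_conc + 0.01 * rng.standard_normal(3)   # noisy measured spectrum

        conc, resid = nnls(A, y)   # non-negative concentrations minimizing ||A c - y||

    The GME approach of the record goes further by maximizing the joint entropy of concentration, cracking and noise probabilities subject to the data, which stabilizes the estimates when the problem is underdetermined.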

  5. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
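
    The best-known ancestral-state method under Maximum Parsimony is Fitch's algorithm; a minimal single-site sketch on a small hypothetical rooted binary tree follows (tree, taxa and states are invented for illustration):

        def fitch_state_set(node, leaf_states):
            # Bottom-up (Fitch) pass: a leaf contributes its observed state; an internal
            # node takes the intersection of its children's sets, or their union when the
            # intersection is empty (each union event implies one substitution).
            if isinstance(node, str):
                return {leaf_states[node]}
            left = fitch_state_set(node[0], leaf_states)
            right = fitch_state_set(node[1], leaf_states)
            inter = left & right
            return inter if inter else (left | right)

        # toy example: tree ((A,B),(C,D)) with one DNA site per taxon
        tree = (("A", "B"), ("C", "D"))
        states = {"A": "G", "B": "G", "C": "T", "D": "G"}
        root_states = fitch_state_set(tree, states)   # {'G'}: most parsimonious root state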

  6. Modelling Nitrogen Transformation in Horizontal Subsurface Flow ...

    African Journals Online (AJOL)

    A mathematical model was developed to permit dynamic simulation of nitrogen interaction in a pilot horizontal subsurface flow constructed wetland receiving effluents from primary facultative pond. The system was planted with Phragmites mauritianus, which was provided with root zone depth of 75 cm. The root zone was ...

  7. Electrical resistivity determination of subsurface layers, subsoil ...

    African Journals Online (AJOL)

    Electrical resistivity determination of subsurface layers, subsoil competence and soil corrosivity at an engineering site location in Akungba-Akoko, ... The study concluded that the characteristics of the earth materials in the site would be favourable to normal engineering structures/materials that may be located on it.

  8. Simulation for ground penetrating radar (GPR) study of the subsurface structure of the Moon

    Science.gov (United States)

    Fa, Wenzhe

    2013-12-01

    Ground penetrating radar (GPR) is currently within the scope of China's Chang-E 3 lunar mission to study the shallow subsurface of the Moon. In this study, key factors that could affect lunar GPR performance, such as frequency, range resolution, and antenna directivity, are discussed first. Geometrical optics and ray tracing techniques are used to model GPR echoes, considering the transmission, attenuation, reflection, geometrical spreading of radar waves, and the antenna directivity. The influence of surface and subsurface roughness, dielectric loss of the lunar regolith, radar frequency and bandwidth, and the distance between the transmit and receive antennas on A-scope GPR echoes and on the simulated radargrams for the Sinus Iridum region is discussed. Finally, potential scientific return about lunar subsurface properties from GPR echoes is also discussed. Simulation results suggest that subsurface structure from several to hundreds of meters can be studied from GPR echoes at P and VHF bands, and information about dielectric permittivity and thickness of subsurface layers can be estimated from GPR echoes in combination with regolith composition data.
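
    The basic relations behind such echo modelling are worth stating (these are standard textbook forms with symbols defined here, not taken from the paper): for a non-magnetic, low-loss layer of real relative permittivity \varepsilon_r, a reflector at depth z returns an echo after the two-way travel time

        \[ t = \frac{2 z \sqrt{\varepsilon_r}}{c}, \qquad \text{so} \qquad z = \frac{c\,t}{2\sqrt{\varepsilon_r}}, \]

    and the normal-incidence reflection coefficient at a boundary between permittivities \varepsilon_1 and \varepsilon_2 is

        \[ R = \frac{\sqrt{\varepsilon_1} - \sqrt{\varepsilon_2}}{\sqrt{\varepsilon_1} + \sqrt{\varepsilon_2}}, \]

    which is how layer thicknesses and permittivities can, in principle, be disentangled from echo delays and amplitudes once regolith composition data constrain some of the unknowns.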

  9. An Evaluation of Subsurface Microbial Activity Conditional to Subsurface Temperature, Porosity, and Permeability at North American Carbon Sequestration Sites

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, B. [Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States); National Energy Technology Lab. (NETL), Albany, OR (United States); Mordensky, S. [Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States); National Energy Technology Lab. (NETL), Albany, OR (United States); Verba, Circe [National Energy Technology Lab. (NETL), Albany, OR (United States); Rabjohns, K. [Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States); National Energy Technology Lab. (NETL), Albany, OR (United States); Colwell, F. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oregon State Univ., Corvallis, OR (United States). College of Earth, Ocean, and Atmospheric Sciences

    2016-06-21

    Several nations, including the United States, recognize global climate change as a force transforming the global ecosphere. Carbon dioxide (CO2) is a greenhouse gas that contributes to the evolving climate. Reduction of atmospheric CO2 levels is a goal for many nations and carbon sequestration which traps CO2 in the Earth’s subsurface is one method to reduce atmospheric CO2 levels. Among the variables that must be considered in developing this technology to a national scale is microbial activity. Microbial activity or biomass can change rock permeability, alter artificial seals around boreholes, and play a key role in biogeochemistry and accordingly may determine how CO2 is sequestered underground. Certain physical parameters of a reservoir found in literature (e.g., temperature, porosity, and permeability) may indicate whether a reservoir can host microbial communities. In order to estimate which subsurface formations may host microbes, this report examines the subsurface temperature, porosity, and permeability of underground rock formations that have high potential to be targeted for CO2 sequestration. Of the 268 North American wellbore locations from the National Carbon Sequestration Database (NATCARB; National Energy and Technology Laboratory, 2015) and 35 sites from Nelson and Kibler (2003), 96 sequestration sites contain temperature data. Of these 96 sites, 36 sites have temperatures that would be favorable for microbial survival, 48 sites have mixed conditions for supporting microbial populations, and 11 sites would appear to be unfavorable to support microbial populations. Future studies of microbe viability would benefit from a larger database with more formation parameters (e.g. mineralogy, structure, and groundwater chemistry), which would help to increase understanding of where CO2 sequestration could be most efficiently implemented.

  10. Absorption and scattering coefficients estimation in two-dimensional participating media using the generalized maximum entropy and Levenberg-Marquardt methods; Estimacion del coeficiente de absorcion y dispersion en medios participantes bidimensionales utilizando el metodo de maxima entropia generalizada y el metodo Levenberg-Marquardt

    Energy Technology Data Exchange (ETDEWEB)

    Berrocal T, Mariella J. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]|[Universidad Nacional de Ingenieria, Lima (Peru); Roberty, Nilson C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; Silva Neto, Antonio J. [Universidade do Estado, Nova Friburgo, RJ (Brazil). Instituto Politecnico. Dept. de Engenharia Mecanica e Energia]|[Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear

    2002-07-01

    The solution of inverse problems in participating media where there is emission, absorption and scattering of the radiation has several applications in engineering and medicine. The objective of this work is to estimate the absorption and scattering coefficients in two-dimensional heterogeneous participating media, using, independently, the Generalized Maximum Entropy and Levenberg-Marquardt methods. Both methods are based on the solution of the direct problem that is modeled by the Boltzmann equation in Cartesian geometry. Some test cases are presented. (author)

  11. Development of subsurface characterization method for decommissioning site remediation

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sang Bum; Nam, Jong Soo; Choi, Yong Suk; Seo, Bum Kyoung; Moon, Jei Kwon; Choi, Jong Won [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In situ measurement using the peak-to-valley method, based on the ratio of the counting rates in the full-energy peak and the Compton region, was applied to identify the depth distribution of 137Cs. The in situ measurement and sampling results were used to evaluate the residual radioactivity before and after remediation at the decommissioning KRR site. Spatial analysis based on the Geostatistics method provides a reliable estimate of the volume of contaminated soil together with a graphical analysis, and was applied to site characterization at the decommissioning KRR site. The in situ measurement and spatial analysis results for characterization of subsurface contamination are presented. The objective of a remedial action is to reduce risks to human health to acceptable levels by removing the source of contamination. Site characterization of the subsurface contamination is an important factor for planning and implementation of site remediation. Radiological survey and evaluation technology are required to ensure the reliability of the results, and the process must be easily applied during field measurements. In situ gamma-ray spectrometry is a powerful method for site characterization that can be used to identify the depth distribution and quantify radionuclides directly at the measurement site. The in situ measurement and Geostatistics methods were applied to site characterization for remediation and the final status survey at the decommissioning KRR site.

  12. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  13. A new model of equilibrium subsurface hydration on Mars

    Science.gov (United States)

    Hecht, M. H.

    2011-12-01

    One of the surprises of the Odyssey mission was the discovery by the Gamma Ray Spectrometer (GRS) suite of large concentrations of water-equivalent hydrogen (WEH) in the shallow subsurface at low latitudes, consistent with 5-7% regolith water content by weight (Mitrofanov et al. Science 297, p. 78, 2002; Feldman et al. Science 297, p. 75, 2002). Water at low latitudes on Mars is generally believed to be sequestered in the form of hydrated minerals. Numerous attempts have been made to relate the global map of WEH to specific mineralogy. For example Feldman et al. (Geophys. Res. Lett., 31, L16702, 2004) associated an estimated 10% sulfate content of the soil with epsomite (51% water), hexahydrite (46% water) and kieserite (13% water). In such studies, stability maps have been created by assuming equilibration of the subsurface water vapor density with a global mean annual column mass vapor density. Here it is argued that this value significantly understates the subsurface humidity. Results from the Phoenix mission are used to suggest that the midday vapor pressure measured just above the surface is a better proxy for the saturation vapor pressure of subsurface hydrous minerals. The measured frostpoint at the Phoenix site was found to be equal to the surface temperature by night and the modeled temperature at the top of the ice table by day (Zent et al. J. Geophys. Res., 115, E00E14, 2010). It was proposed by Hecht (41st LPSC abstract #1533, 2010) that this phenomenon results from water vapor trapping at the coldest nearby surface. At night, the surface is colder than the surface of the ice table; by day it is warmer. Thus, at night, the subsurface is bounded by a fully saturated layer of cold water frost or adsorbed water at the surface, not by the dry boundary layer itself. This argument is not strongly dependent on the particular saturation vapor pressure (SVP) of ice or other subsurface material, only on the thickness of the dry layer. Specifically, the diurnal

  14. Integrating non-colocated well and geophysical data to capture subsurface heterogeneity at an aquifer recharge and recovery site

    Science.gov (United States)

    Gottschalk, Ian P.; Hermans, Thomas; Knight, Rosemary; Caers, Jef; Cameron, David A.; Regnery, Julia; McCray, John E.

    2017-12-01

    Geophysical data have proven to be very useful for lithological characterization. However, quantitatively integrating the information gained from acquiring geophysical data generally requires colocated lithological and geophysical data for constructing a rock-physics relationship. In this contribution, the issue of integrating noncolocated geophysical and lithological data is addressed, and the results are applied to simulate groundwater flow in a heterogeneous aquifer in the Prairie Waters Project North Campus aquifer recharge site, Colorado. Two methods of constructing a rock-physics transform between electrical resistivity tomography (ERT) data and lithology measurements are assessed. In the first approach, a maximum likelihood estimation (MLE) is used to fit a bimodal lognormal distribution to horizontal cross-sections of the ERT resistivity histogram. In the second approach, a spatial bootstrap is applied to approximate the rock-physics relationship. The rock-physics transforms provide soft data for multiple point statistics (MPS) simulations. Subsurface models are used to run groundwater flow and tracer test simulations. Each model's uncalibrated, predicted breakthrough time is evaluated based on its agreement with measured subsurface travel time values from infiltration basins to selected groundwater recovery wells. We find that incorporating geophysical information into uncalibrated flow models reduces the difference with observed values, as compared to flow models without geophysical information incorporated. The integration of geophysical data also narrows the variance of predicted tracer breakthrough times substantially. Accuracy is highest and variance is lowest in breakthrough predictions generated by the MLE-based rock-physics transform. Calibrating the ensemble of geophysically constrained models would help produce a suite of realistic flow models for predictive purposes at the site. We find that the success of breakthrough predictions is highly
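
    A minimal sketch of the first approach described above, fitting a bimodal (two-component) lognormal distribution to resistivity values by maximum likelihood, is shown below with invented data; in the actual workflow this would be done per horizontal cross-section of the ERT model and the component memberships mapped to lithology probabilities:

        import numpy as np
        from scipy.stats import lognorm
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        # invented resistivity sample (ohm-m): a conductive and a resistive population
        res = np.concatenate([rng.lognormal(np.log(20.0), 0.3, 300),
                              rng.lognormal(np.log(200.0), 0.4, 200)])

        def neg_log_lik(p):
            w, mu1, s1, mu2, s2 = p
            pdf = (w * lognorm.pdf(res, s1, scale=np.exp(mu1)) +
                   (1.0 - w) * lognorm.pdf(res, s2, scale=np.exp(mu2)))
            return -np.sum(np.log(pdf + 1e-300))

        x0 = [0.5, np.log(30.0), 0.5, np.log(150.0), 0.5]
        bounds = [(0.01, 0.99), (None, None), (0.05, 3.0), (None, None), (0.05, 3.0)]
        fit = minimize(neg_log_lik, x0, method="L-BFGS-B", bounds=bounds)
        w, mu1, s1, mu2, s2 = fit.x   # mixture weight and log-space parameters of the two modes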

  15. Whole-stream metabolism of a perennial spring-fed aufeis field in Alaska, with coincident surface and subsurface flow

    Science.gov (United States)

    Hendrickson, P. J.; Gooseff, M. N.; Huryn, A. D.

    2017-12-01

    Aufeis (icings or naleds) are seasonal arctic and sub-arctic features that accumulate through repeated overflow and freeze events of river or spring discharge. Aufeis fields, defined as the substrate on which aufeis form and the overlying ice, have been studied to mitigate impacts on engineering structures; however, ecological characteristics and functions of aufeis fields are poorly understood. The perennial springs that supply warm water to aufeis fields create unique fluvial habitats, and are thought to act as winter and summer oases for biota. To investigate ecosystem function, we measured whole-stream metabolism at the Kuparuk River Aufeis (North Slope, AK), a large (5 km2) field composed of cobble substrate and predominantly subsurface flow dynamics. The single-station open channel diel oxygen method was utilized at several dissolved oxygen (DO) stations located within and downstream of the aufeis field. DO loggers were installed in August 2016, and data downloaded summer 2017. Daily ecosystem respiration (ER), gross primary production (GPP) and reaeration rates were modeled using BASE, a package freely available in the open-source software R. Preliminary results support net heterotrophy during a two-week period of DO measurements in the fall season when minimum ice extent is observed. GPP, ER, and net metabolism are greater at the upstream reach near the spring source (P/R = 0.53), and decrease as flow moves downstream. As flow exits the aufeis field, surface and subsurface flow are incorporated into the metabolism model, and indicate the stream system becomes dependent on autochthonous production (P/R = 0.91). Current work is directed towards spring and summer discharge and metabolic parameter estimation, which is associated with maximum ice extent and rapid melting of the aufeis feature.

  16. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  17. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
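
    As a small illustration of the principle itself (not of any specific drug-discovery application in the article), the least-biased distribution consistent with a single mean constraint can be found numerically; the six discrete "activity" states and the target mean below are hypothetical.

```python
import numpy as np
from scipy import optimize

# Least-biased probability assignment over 6 discrete states, given only a known mean.
states = np.arange(1, 7)
target_mean = 4.2

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.dot(p, states) - target_mean},
]
p0 = np.full(len(states), 1.0 / len(states))
res = optimize.minimize(neg_entropy, p0, bounds=[(0, 1)] * len(states),
                        constraints=constraints, method="SLSQP")
p_maxent = res.x   # numerically recovers the exponential (Gibbs) form Jaynes' principle predicts
```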

  18. Detecting a Subsurface Ocean From Periodic Orbits at Enceladus

    Science.gov (United States)

    Casotto, S.; Padovan, S.; Russell, R. P.; Lara, M.

    2008-12-01

    Enceladus is a small icy satellite of Saturn which has been observed by the Cassini orbiter to eject plumes mainly consisting of water vapor from the "tiger stripes" located near its south pole. While tidal heating has been ruled out as inadequate to drive these eruptions, tidally induced shear stress both along and across the stripes appears to be sufficiently powerful. The internal constitution of Enceladus that fits this model is likely to entail a thin crust and a subcrustal water layer above an undifferentiated interior. Apart from the lack of a core/mantle boundary, the situation is similar to the current hypothetical models of Europa's interior. The determination of the existence of a subsurface fluid layer can therefore be pursued with similar methods, including the study of the gravitational perturbations of tidal origin on an Enceladus orbiter, and the use of altimeter measurements to the tidally deformed surface. The dynamical environment of an Enceladus orbiter is made very unstable by the overwhelming presence of nearby Saturn. The Enceladus sphere of influence is roughly twice its radius. This makes it considerably more difficult to orbit than Europa, whose sphere of influence is ~six times its radius. While low-altitude, near-polar Enceladus orbits suffer extreme instability, recent work has extended the inclination envelope for long-term stable orbits at Enceladus. Several independent methods suggest that ~65 degrees inclination is the maximum attainable for stable, perturbed Keplerian motion. These orbits are non-circular and exist with altitude variations from ~200 to ~300 km. We propose a nominal reference orbit that enjoys long-term stability and is favorable for long-term mapping and other scientific experiments. A brief excursion to a lower-altitude, slightly more inclined, yet highly unstable orbit is proposed to improve gravity signatures and enable high resolution, nadir-pointing experiments on the geysers emanating

  19. Spatial Estimation of Losses Attributable to Meteorological Disasters in a Specific Area (105.0°E–115.0°E, 25°N–35°N) Using Bayesian Maximum Entropy and Partial Least Squares Regression

    Directory of Open Access Journals (Sweden)

    F. S. Zhang

    2016-01-01

    Full Text Available The spatial mapping of losses attributable to such disasters is now well established as a means of describing the spatial patterns of disaster risk, and it has been shown to be suitable for many types of major meteorological disasters. However, few studies have developed a regression model to estimate the effects of the spatial distribution of meteorological factors on losses associated with meteorological disasters. In this study, the proposed approach is capable of the following: (a) estimating the spatial distributions of seven meteorological factors using Bayesian maximum entropy, (b) identifying which of the four mapping methods used in this research performs best based on cross-validation, and (c) establishing a fitted model between the PLS components and disaster loss information using partial least squares regression within a specific research area. The results showed the following: (a) the best mapping results were produced by multivariate Bayesian maximum entropy with probabilistic soft data; (b) the regression model using three PLS components, extracted from the seven meteorological factors by the PLS method, was the most predictive according to the PRESS/SS test; (c) northern Hunan Province sustained the most damage, while southeastern Gansu Province and western Guizhou Province sustained the least.
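
    The PLS regression step can be sketched with scikit-learn; the data below are synthetic stand-ins for the interpolated meteorological factors and loss records, and the PRESS/SS check is computed from cross-validated predictions rather than the study's own procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical data: 7 meteorological factors per location (X) and disaster losses (y)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 7))
y = X @ np.array([0.8, -0.3, 0.5, 0.0, 0.2, 0.0, 0.1]) + rng.normal(scale=0.5, size=200)

pls = PLSRegression(n_components=3)                    # three PLS components, as in the record
y_hat = cross_val_predict(pls, X, y, cv=10).ravel()    # held-out predictions for a PRESS-style check
press = np.sum((y - y_hat) ** 2)
ss_total = np.sum((y - y.mean()) ** 2)
print(f"PRESS/SS = {press / ss_total:.3f}")            # values well below 1 indicate predictive components
```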

  20. Geophysical data fusion for subsurface imaging

    International Nuclear Information System (INIS)

    Blohm, M.; Hatch, W.E.; Hoekstra, P.; Porter, D.W.

    1994-01-01

    Effective site characterization requires that many relevant geologic, hydrogeologic and biological properties of the subsurface be evaluated. A parameter that often directly influences chemical processes, ground water flow, contaminant transport, and biological activities is the lateral and vertical distribution of clays. The objective of the research and development under this contract is to improve non-invasive methods for detecting clay lenses. The percentage of clays in soils influences most physical properties that have an impact on environmental restoration and waste management. For example, the percentage of clays determines hydraulic permeability and the rate of contaminant migration, absorption of radioactive elements, and interaction with organic compounds. Therefore, improvements in non-invasive mapping of clays in the subsurface will result in better characterization of contaminated sites, prediction of pathways of contaminant migration, assessment of risk of contaminants to public health if contaminants reach water supplies, design of remedial action and evaluation of alternative action

  1. Exact Maximum-Entropy Estimation with Feynman Diagrams

    Science.gov (United States)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  2. Using Elementary Mechanics to Estimate the Maximum Range of ICBMs

    Science.gov (United States)

    Amato, Joseph

    2018-01-01

    North Korea's development of nuclear weapons and, more recently, intercontinental ballistic missiles (ICBMs) has added a grave threat to world order. The threat presented by these weapons depends critically on missile range, i.e., the ability to reach North America or Europe while carrying a nuclear warhead. Using the limited information available…

  3. Estimating minimum and maximum air temperature using MODIS ...

    Indian Academy of Sciences (India)

    in a wide range of applications in areas of ecology, hydrology ... stations, thus attracting researchers to make use ... simpler because of the lack of solar radiation effect .... water from the snow packed Himalayan region to ... tribution System (LAADS) webdata archive cen- ..... ing due to greenhouse gases is different for the air.

  4. Network Inference and Maximum Entropy Estimation on Information Diagrams

    Czech Academy of Sciences Publication Activity Database

    Martin, E.A.; Hlinka, Jaroslav; Meinke, A.; Děchtěrenko, Filip; Tintěra, J.; Oliver, I.; Davidsen, J.

    2017-01-01

    Roč. 7, č. 1 (2017), č. článku 7062. ISSN 2045-2322 R&D Projects: GA ČR GA13-23940S; GA MZd(CZ) NV15-29835A Grant - others:GA MŠk(CZ) LO1611 Institutional support: RVO:67985807 Keywords : complex networks * mutual information * entropy maximization * fMRI Subject RIV: BD - Theory of Information OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 4.259, year: 2016

  5. Network Inference and Maximum Entropy Estimation on Information Diagrams

    Czech Academy of Sciences Publication Activity Database

    Martin, E.A.; Hlinka, J.; Meinke, A.; Děchtěrenko, Filip; Tintěra, J.; Oliver, I.; Davidsen, J.

    2017-01-01

    Roč. 7, č. 1 (2017), s. 1-15, č. článku 7062. ISSN 2045-2322 R&D Projects: GA ČR GA13-23940S Institutional support: RVO:68081740 Keywords : complex networks * mutual information * entropy maximization * fMRI Subject RIV: AN - Psychology OBOR OECD: Cognitive sciences Impact factor: 4.259, year: 2016

  6. Revision of regional maximum flood (RMF) estimation in Namibia

    African Journals Online (AJOL)

    2013-11-26

    Nov 26, 2013 ... sediment deposits, also known as slackwater flood deposits, are stage indicators of ..... of these stations has been operational for 33 years. This corresponds to ..... Management, University of Haifa, Haifa, Israel. GRODEK T ...

  7. Water Table Recession in Subsurface Drained Soils

    OpenAIRE

    Moustafa, Mahmoud Mohamed; Yomota, Atsushi

    1999-01-01

    Theoretical drainage equations have been tested intensively in many humid and arid regions and are commonly used in drainage design. In Japan, however, drainage design is still based exclusively on local experience and empirical rules, which remains a concern. There is therefore a need to evaluate the theoretical drainage equations under Japanese field conditions in order to recommend equations for the design of subsurface drainage systems. This was the main motivation for this study. While drain...

  8. CLASSIFICATION OF THE MGR SUBSURFACE VENTILATION SYSTEM

    International Nuclear Information System (INIS)

    R.J. Garrett

    1999-01-01

    The purpose of this analysis is to document the Quality Assurance (QA) classification of the Monitored Geologic Repository (MGR) subsurface ventilation system structures, systems and components (SSCs) performed by the MGR Safety Assurance Department. This analysis also provides the basis for revision of YMP/90-55Q, Q-List (YMP 1998). The Q-List identifies those MGR SSCs subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (QARD) (DOE 1998)

  9. Cultivation Of Deep Subsurface Microbial Communities

    Science.gov (United States)

    Obrzut, Natalia; Casar, Caitlin; Osburn, Magdalena R.

    2018-01-01

    The potential habitability of surface environments on other planets in our solar system is limited by exposure to extreme radiation and desiccation. In contrast, subsurface environments may offer protection from these stressors and are potential reservoirs for liquid water and energy that support microbial life (Michalski et al., 2013) and are thus of interest to the astrobiology community. The samples used in this project were extracted from the Deep Mine Microbial Observatory (DeMMO) in the former Homestake Mine at depths of 800 to 2000 feet underground (Osburn et al., 2014). Phylogenetic data from these sites indicates the lack of cultured representatives within the community. We used geochemical data to guide media design to cultivate and isolate organisms from the DeMMO communities. Media used for cultivation varied from heterotrophic with oxygen, nitrate or sulfate to autotrophic media with ammonia or ferrous iron. Environmental fluid was used as inoculum in batch cultivation and strains were isolated via serial transfers or dilution to extinction. These methods resulted in isolating aerobic heterotrophs, nitrate reducers, sulfate reducers, ammonia oxidizers, and ferric iron reducers. DNA sequencing of these strains is underway to confirm which species they belong to. This project is part of the NASA Astrobiology Institute Life Underground initiative to detect and characterize subsurface microbial life; by characterizing the intraterrestrials, the life living deep within Earth’s crust, we aim to understand the controls on how and where life survives in subsurface settings. Cultivation of terrestrial deep subsurface microbes will provide insight into the survival mechanisms of intraterrestrials guiding the search for these life forms on other planets.

  10. CLASSIFICATION OF THE MGR SUBSURFACE EXCAVATION SYSTEM

    International Nuclear Information System (INIS)

    R. Garrett

    1999-01-01

    The purpose of this analysis is to document the Quality Assurance (QA) classification of the Monitored Geologic Repository (MGR) subsurface excavation system structures, systems and components (SSCs) performed by the MGR Safety Assurance Department. This analysis also provides the basis for revision of YMP/90-55Q, Q-List (YMP 1998). The Q-List identifies those MGR SSCs subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (QARD) (DOE 1998)

  11. Molecular analysis of deep subsurface bacteria

    International Nuclear Information System (INIS)

    Jimenez Baez, L.E.

    1989-09-01

    Deep sediment samples from site C10a, in Appleton, and sites P24, P28, and P29 at the Savannah River Site (SRS), near Aiken, South Carolina, were studied to determine their microbial community composition, DNA homology and mol %G+C. Different geological formations with great variability in hydrogeological parameters were found across the depth profile. Phenotypic identification of deep subsurface bacteria underestimated the bacterial diversity at the three SRS sites, since bacteria with the same phenotype have different DNA composition and less than 70% DNA homology. Total DNA hybridization and mol %G+C analysis of deep sediment bacterial isolates suggested that each formation comprises different microbial communities. Depositional environment was more important than site or geological formation in determining the DNA relatedness of deep subsurface bacteria, since more than 70% of bacteria with 20% or more DNA homology came from the same depositional environments. Based on phenotypic and genotypic tests, Pseudomonas spp. and Acinetobacter spp.-like bacteria were identified in 85-million-year-old sediments. This suggests that these microbial communities might have adapted over a long period of time to the environmental conditions of the deep subsurface

  12. Linking Chaotic Advection with Subsurface Biogeochemical Processes

    Science.gov (United States)

    Mays, D. C.; Freedman, V. L.; White, S. K.; Fang, Y.; Neupauer, R.

    2017-12-01

    This work investigates the extent to which groundwater flow kinematics drive subsurface biogeochemical processes. In terms of groundwater flow kinematics, we consider chaotic advection, whose essential ingredient is stretching and folding of plumes. Chaotic advection is appealing within the context of groundwater remediation because it has been shown to optimize plume spreading in the laminar flows characteristic of aquifers. In terms of subsurface biogeochemical processes, we consider an existing model for microbially-mediated reduction of relatively mobile uranium(VI) to relatively immobile uranium(IV) following injection of acetate into a floodplain aquifer beneath a former uranium mill in Rifle, Colorado. This model has been implemented in the reactive transport code eSTOMP, the massively parallel version of STOMP (Subsurface Transport Over Multiple Phases). This presentation will report preliminary numerical simulations in which the hydraulic boundary conditions in the eSTOMP model are manipulated to simulate chaotic advection resulting from engineered injection and extraction of water through a manifold of wells surrounding the plume of injected acetate. This approach provides an avenue to simulate the impact of chaotic advection within the existing framework of the eSTOMP code.

  13. Subsurface urban heat islands in German cities.

    Science.gov (United States)

    Menberg, Kathrin; Bayer, Peter; Zosseder, Kai; Rumohr, Sven; Blum, Philipp

    2013-01-01

    Little is known about the intensity and extent of subsurface urban heat islands (UHI), and the individual role of the driving factors has not been revealed either. In this study, we compare groundwater temperatures in shallow aquifers beneath six German cities of different size (Berlin, Munich, Cologne, Frankfurt, Karlsruhe and Darmstadt). It is revealed that hotspots of up to +20 K often exist, which stem from very local heat sources, such as insufficiently insulated power plants, landfills or open geothermal systems. When visualizing the regional conditions in isotherm maps, mostly a concentric picture is found with the highest temperatures in the city centers. This reflects the long-term accumulation of thermal energy over several centuries and the interplay of various factors, particularly heat loss from basements, elevated ground surface temperatures (GST), and subsurface infrastructure. As a primary indicator to quantify and compare large-scale UHI intensity, the 10-90% quantile range UHII(10-90) of the temperature distribution is introduced. The latter reveals, in comparison to annual atmospheric UHI intensities, an even more pronounced heating of the shallow subsurface. Copyright © 2012 Elsevier B.V. All rights reserved.
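
    The UHII(10-90) indicator is simply the spread between the 10% and 90% quantiles of a city's groundwater temperature distribution; a tiny sketch with invented well temperatures:

```python
import numpy as np

# Hypothetical groundwater temperatures (deg C) from wells across one city
gw_temp = np.array([11.2, 11.8, 12.1, 12.4, 12.9, 13.3, 13.8, 14.6, 15.2, 16.9])
q10, q90 = np.percentile(gw_temp, [10, 90])
uhii_10_90 = q90 - q10   # 10-90 % quantile range as a large-scale UHI intensity indicator
print(f"UHII(10-90) = {uhii_10_90:.1f} K")
```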

  14. Hydrogen utilization potential in subsurface sediments

    Directory of Open Access Journals (Sweden)

    Rishi Ram Adhikari

    2016-01-01

    Full Text Available Subsurface microbial communities undertake many terminal electron-accepting processes, often simultaneously. Using a tritium-based assay, we measured the potential hydrogen oxidation catalyzed by hydrogenase enzymes in several subsurface sedimentary environments (Lake Van, Barents Sea, Equatorial Pacific and Gulf of Mexico) with different predominant electron-acceptors. Hydrogenases constitute a diverse family of enzymes expressed by microorganisms that utilize molecular hydrogen as a metabolic substrate, product or intermediate. The assay reveals the potential for utilizing molecular hydrogen and allows qualitative detection of microbial activity irrespective of the predominant electron-accepting process. Because the method only requires samples frozen immediately after recovery, the assay can be used for identifying microbial activity in subsurface ecosystems without the need to preserve live material. We measured potential hydrogen oxidation rates in all samples from multiple depths at several sites that collectively span a wide range of environmental conditions and biogeochemical zones. Potential activity normalized to total cell abundance ranges over five orders of magnitude and varies, dependent upon the predominant terminal electron acceptor. Lowest per-cell potential rates characterize the zone of nitrate reduction and highest per-cell potential rates occur in the methanogenic zone. Possible reasons for this relationship to predominant electron acceptor include (i) increasing importance of fermentation in successively deeper biogeochemical zones and (ii) adaptation of H2ases to successively higher concentrations of H2 in successively deeper zones.

  15. Coordenadas geográficas na estimativa das temperaturas máxima e média decendiais do ar no Estado do Rio Grande do Sul Geographic coordinates in the ten-day maximum and mean air temperature estimation in the State of Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Alberto Cargnelutti Filho

    2008-12-01

    Full Text Available A partir dos dados referentes à temperatura máxima média decendial (Tx) e à temperatura média decendial (Tm) do ar de 41 municípios do Estado do Rio Grande do Sul, de 1945 a 1974, este trabalho teve como objetivo verificar se a Tx e a Tm podem ser estimadas em função da altitude, latitude e longitude. Para cada um dos 36 decêndios do ano, realizou-se análise de correlação e estimaram-se os parâmetros do modelo das equações de regressão linear múltipla, considerando Tx e Tm como variável dependente e altitude, latitude e longitude como variáveis independentes. Na validação dos modelos de estimativa da Tx e Tm, usou-se o coeficiente de correlação linear de Pearson, entre a Tx e a Tm estimada e a Tx e a Tm observada em dez municípios do Estado, com dados da série de observações meteorológicas de 1975 a 2004. A temperatura máxima média decendial e a temperatura média decendial podem ser estimadas por meio da altitude, latitude e longitude, em qualquer local e decêndio, no Estado do Rio Grande do Sul. The objective of this research was to estimate ten-day maximum (Tx) and mean (Tm) air temperature using altitude and the geographic coordinates latitude and longitude for the Rio Grande do Sul State, Brazil. Normal ten-day maximum and mean air temperatures of 41 counties in the State of Rio Grande do Sul, from 1945 to 1974, were used. Correlation analysis and parameter estimation of multiple linear regression equations were performed using Tx and Tm as dependent variables and altitude, latitude and longitude as independent variables, for the 36 ten-day periods of the year. For validation, Pearson's linear correlation coefficient was computed between estimated and observed Tx and Tm at ten counties of the State, using the independent data set of meteorological observations from 1975 to 2004. The ten-day maximum and mean air temperature may be estimated from the altitude and the geographic coordinates latitude and longitude, at any location and in any ten-day period, in the State of Rio Grande do Sul.
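
    The estimation procedure amounts to a multiple linear regression of temperature on altitude, latitude and longitude, validated by the correlation between estimated and observed values at independent stations; a sketch with invented station values:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stations: altitude (m), latitude and longitude (decimal degrees)
# as predictors of a ten-day mean air temperature (deg C); all values invented.
X_fit = np.array([[100, -30.0, -51.2], [450, -29.7, -53.8], [800, -28.5, -52.0],
                  [50, -32.0, -52.1], [600, -27.8, -54.2], [250, -29.1, -50.7]])
t_fit = np.array([24.1, 22.0, 19.5, 23.8, 21.0, 23.2])

model = LinearRegression().fit(X_fit, t_fit)

# Validation in the spirit of the study: Pearson correlation between estimated
# and observed temperatures at stations not used in the fit.
X_val = np.array([[300, -30.5, -53.0], [700, -28.0, -51.5],
                  [150, -31.2, -52.6], [500, -29.4, -54.0]])
t_val = np.array([22.8, 20.2, 23.5, 21.6])
r = np.corrcoef(model.predict(X_val), t_val)[0, 1]
print(f"Pearson r between estimated and observed: {r:.2f}")
```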

  16. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F. W. Giacobbe, Chicago Research Center/American Air Liquide. … iron core compression due to the weight of non-ferrous matter overlying the iron cores within large … thermal equilibrium velocities will tend to be non-relativistic.

  17. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  18. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  19. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is signalled by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system [fr]

  20. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm for solving the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  1. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls in market prices that are outliers with respect to the bulk of the drawdown distribution of price movements. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed, and it forms the core of the risk measure estimated here.
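
    The drawdown and maximum drawdown quantities examined here have a simple definition, the largest peak-to-trough decline relative to the running peak; a sketch with an invented price path:

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)
    drawdowns = (running_peak - prices) / running_peak
    return drawdowns.max()

# Hypothetical index path rising into a bubble and then crashing
prices = np.array([100, 110, 130, 170, 220, 200, 150, 120, 140, 160])
print(max_drawdown(prices))   # 0.4545...: a 45% fall from the 220 peak to 120
```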

  2. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly ...

  3. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or when pumps fail suddenly. Determining the maximum water hammer is one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  4. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.
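
    The "most frequent gene tree" idea can be illustrated by counting topologies across genes; the Newick strings below are hypothetical and assumed to already be in a canonical form, so that identical topologies compare equal as strings.

```python
from collections import Counter

# Hypothetical gene-tree topologies (canonical Newick strings), one per orthologous gene
gene_trees = [
    "((A,B),(C,D));", "((A,B),(C,D));", "((A,C),(B,D));",
    "((A,B),(C,D));", "((A,D),(B,C));", "((A,B),(C,D));",
]
counts = Counter(gene_trees)
mgs_tree, support = counts.most_common(1)[0]   # the maximum gene-support (MGS) tree
print(mgs_tree, f"supported by {support}/{len(gene_trees)} genes")
```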

  5. Enhanced recovery of subsurface geological structures using compressed sensing and the Ensemble Kalman filter

    KAUST Repository

    Sana, Furrukh

    2015-07-26

    Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to low permeability rock formations in their surroundings, enabling formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.
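
    For orientation, a minimal stochastic EnKF analysis step is sketched below; the sparse-domain (CS) transform proposed in the record is not included, and all sizes, observation locations, and values are hypothetical.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_err_std, rng):
    """One stochastic EnKF update of an ensemble of state vectors (columns)."""
    n_state, n_ens = ensemble.shape
    X_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - X_mean                        # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)     # observation-space anomalies
    R = (obs_err_std ** 2) * np.eye(len(obs))
    P_hh = HA @ HA.T / (n_ens - 1) + R           # innovation covariance
    P_xh = A @ HA.T / (n_ens - 1)                # state-observation cross covariance
    K = P_xh @ np.linalg.inv(P_hh)               # Kalman gain
    perturbed_obs = obs[:, None] + rng.normal(scale=obs_err_std, size=(len(obs), n_ens))
    return ensemble + K @ (perturbed_obs - HX)

# Hypothetical toy problem: 50 permeability cells, 5 observed, 100 ensemble members
rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 100))
H = np.zeros((5, 50)); H[np.arange(5), [3, 12, 25, 37, 44]] = 1.0
obs = np.array([1.2, -0.4, 0.8, 0.1, -1.0])
updated = enkf_analysis(ens, obs, H, obs_err_std=0.2, rng=rng)
```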

  6. Enhanced recovery of subsurface geological structures using compressed sensing and the Ensemble Kalman filter

    KAUST Repository

    Sana, Furrukh; Katterbauer, Klemens; Al-Naffouri, Tareq Y.; Hoteit, Ibrahim

    2015-01-01

    Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to low permeability rock formations in their surroundings, enabling formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.

  7. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  8. Some Ecological Mechanisms to Generate Habitability in Planetary Subsurface Areas by Chemolithotrophic Communities: The Río Tinto Subsurface Ecosystem as a Model System

    Science.gov (United States)

    Fernández-Remolar, David C.; Gómez, Felipe; Prieto-Ballesteros, Olga; Schelble, Rachel T.; Rodríguez, Nuria; Amils, Ricardo

    2008-02-01

    Chemolithotrophic communities that colonize subsurface habitats have great relevance for the astrobiological exploration of our Solar System. We hypothesize that the chemical and thermal stabilization of an environment through microbial activity could make a given planetary region habitable. The MARTE project ground-truth drilling campaigns that sampled cryptic subsurface microbial communities in the basement of the Río Tinto headwaters have shown that acidic surficial habitats are the result of the microbial oxidation of pyritic ores. The oxidation process is exothermic and releases heat under both aerobic and anaerobic conditions. These microbial communities can maintain the subsurface habitat temperature through storage heat if the subsurface temperature does not exceed their maximum growth temperature. In the acidic solutions of the Río Tinto, ferric iron acts as an effective buffer for controlling water pH. Under anaerobic conditions, ferric iron is the oxidant used by microbes to decompose pyrite through the production of sulfate, ferrous iron, and protons. The integration between the physical and chemical processes mediated by microorganisms with those driven by the local geology and hydrology have led us to hypothesize that thermal and chemical regulation mechanisms exist in this environment and that these homeostatic mechanisms could play an essential role in creating habitable areas for other types of microorganisms. Therefore, searching for the physicochemical expression of extinct and extant homeostatic mechanisms through physical and chemical anomalies in the Mars crust (i.e., local thermal gradient or high concentration of unusual products such as ferric sulfates precipitated out from acidic solutions produced by hypothetical microbial communities) could be a first step in the search for biological traces of a putative extant or extinct Mars biosphere.

  9. Some ecological mechanisms to generate habitability in planetary subsurface areas by chemolithotrophic communities: the Río Tinto subsurface ecosystem as a model system.

    Science.gov (United States)

    Fernández-Remolar, David C; Gómez, Felipe; Prieto-Ballesteros, Olga; Schelble, Rachel T; Rodríguez, Nuria; Amils, Ricardo

    2008-02-01

    Chemolithotrophic communities that colonize subsurface habitats have great relevance for the astrobiological exploration of our Solar System. We hypothesize that the chemical and thermal stabilization of an environment through microbial activity could make a given planetary region habitable. The MARTE project ground-truth drilling campaigns that sampled cryptic subsurface microbial communities in the basement of the Río Tinto headwaters have shown that acidic surficial habitats are the result of the microbial oxidation of pyritic ores. The oxidation process is exothermic and releases heat under both aerobic and anaerobic conditions. These microbial communities can maintain the subsurface habitat temperature through storage heat if the subsurface temperature does not exceed their maximum growth temperature. In the acidic solutions of the Río Tinto, ferric iron acts as an effective buffer for controlling water pH. Under anaerobic conditions, ferric iron is the oxidant used by microbes to decompose pyrite through the production of sulfate, ferrous iron, and protons. The integration between the physical and chemical processes mediated by microorganisms with those driven by the local geology and hydrology have led us to hypothesize that thermal and chemical regulation mechanisms exist in this environment and that these homeostatic mechanisms could play an essential role in creating habitable areas for other types of microorganisms. Therefore, searching for the physicochemical expression of extinct and extant homeostatic mechanisms through physical and chemical anomalies in the Mars crust (i.e., local thermal gradient or high concentration of unusual products such as ferric sulfates precipitated out from acidic solutions produced by hypothetical microbial communities) could be a first step in the search for biological traces of a putative extant or extinct Mars biosphere.

  10. Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models

    KAUST Repository

    El Gharamti, Mohamad

    2010-12-01

    Understanding the geology and the hydrology of the subsurface is important to model the fluid flow and the behavior of the contaminant. It is essential to have an accurate knowledge of the movement of the contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied on a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainties, the deterministic model by itself cannot give exact estimations for the future contaminant state. Incorporating observations in the model can guide it to the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost required by the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF) approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large dimensional contaminant problems while the usage of the KF is not possible. Experimental results show that with perfect and imperfect models, the low rank filters can provide as much accurate estimates as the full KF but at much less computational cost. Localization can help the filter analysis as long as there are enough neighborhood data to the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.
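
    The computational saving of the low-rank filters comes from never forming the full state covariance; a sketch of the reduced-rank analysis algebra that SEKF-type filters exploit is shown below. The evolution of the low-rank basis performed by the singular evolutive filters is omitted, and all shapes and values are hypothetical.

```python
import numpy as np

def low_rank_kf_update(x, L, U, y, H, R):
    """Kalman analysis step with covariance approximated as P ~ L U L^T,
    where L (n x r) holds r dominant directions and U (r x r) is small."""
    HL = H @ L                                        # (m x r)
    S = HL @ U @ HL.T + R                             # innovation covariance (m x m)
    S_inv = np.linalg.inv(S)
    K = L @ U @ HL.T @ S_inv                          # gain, without forming the full P
    x_a = x + K @ (y - H @ x)                         # analysis state
    U_a = U - U @ HL.T @ S_inv @ HL @ U               # reduced-rank covariance update
    return x_a, U_a

# Hypothetical toy sizes: 10,000-cell concentration field, rank-20 covariance, 15 observations
n, r, m = 10_000, 20, 15
rng = np.random.default_rng(0)
x = rng.normal(size=n)
L = np.linalg.qr(rng.normal(size=(n, r)))[0]
U = np.diag(rng.uniform(0.5, 2.0, size=r))
H = np.zeros((m, n)); H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
y = rng.normal(size=m)
R = 0.1 * np.eye(m)
x_a, U_a = low_rank_kf_update(x, L, U, y, H, R)
```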

  11. Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models

    KAUST Repository

    El Gharamti, Mohamad

    2010-01-01

    Understanding the geology and the hydrology of the subsurface is important to model the fluid flow and the behavior of the contaminant. It is essential to have an accurate knowledge of the movement of the contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied on a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainties, the deterministic model by itself cannot give exact estimations for the future contaminant state. Incorporating observations in the model can guide it to the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost required by the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF) approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large dimensional contaminant problems while the usage of the KF is not possible. Experimental results show that with perfect and imperfect models, the low rank filters can provide as much accurate estimates as the full KF but at much less computational cost. Localization can help the filter analysis as long as there are enough neighborhood data to the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.

  12. A Review of distribution and quantity of lingering subsurface oil from the Exxon Valdez Oil Spill

    Science.gov (United States)

    Nixon, Zachary; Michel, Jacqueline

    2018-01-01

    Remaining lingering subsurface oil residues from the Exxon Valdez oil spill (EVOS) are, at present, patchily distributed across the geologically complex and spatially extensive shorelines of Prince William Sound and the Gulf of Alaska. We review and synthesize previous literature describing the causal geomorphic and physical mechanisms for persistence of oil in the intertidal subsurface sediments of these areas. We also summarize previous sampling and modeling efforts, and refine previously presented models with additional data to characterize the present-day linear and areal spatial extent, and quantity of lingering subsurface oil. In the weeks after the spill in March of 1989, approximately 17,750 t of oil were stranded along impacted shorelines, and by October of 1992, only 2% of the mass of spilled oil was estimated to remain in intertidal areas. We estimate that lingering subsurface residues, generally between 5 and 20 cm thick and sequestered below 10-20 cm of clean sediment, are present over 30 ha of intertidal area, along 11.4 km of shoreline, and represent approximately 227 t or 0.6% of the total mass of spilled oil. These residues are typically located in finer-grained sand and gravel sediments, often under an armor of cobble- or boulder-sized clasts, in areas with limited groundwater flow and porosity. Persistence of these residues is correlated with heavy initial oil loading together with localized sheltering from physical disturbance such as wave energy within the beach face. While no longer generally bioavailable and increasingly chemically weathered, present removal rates for these remaining subsurface oil residues have slowed to nearly zero. The only remaining plausible removal mechanisms will operate over time scales of decades.

  13. A new high-resolution electromagnetic method for subsurface imaging

    Science.gov (United States)

    Feng, Wanjie

    For most electromagnetic (EM) geophysical systems, contamination of the secondary fields by the primary fields ultimately limits the capability of controlled-source EM methods. Null coupling techniques were proposed to solve this problem. However, small orientation errors in null coupling systems greatly restrict the applications of these systems. Another problem encountered by most EM systems is surface interference and geologic noise, which sometimes make a geophysical survey impossible to carry out. In order to solve these problems, the alternating target antenna coupling (ATAC) method was introduced, which greatly reduced the influence of the primary field and the surface interference. But this system has limitations on the maximum transmitter moment that can be used. The differential target antenna coupling (DTAC) method was proposed to allow much larger transmitter moments and at the same time maintain the advantages of the ATAC method. In this dissertation, first, the theoretical DTAC calculations were derived mathematically using Born and Wolf's complex magnetic vector. 1D layered and 2D blocked earth models were used to demonstrate that the DTAC method has no responses for 1D and 2D structures. Analytical studies of the plate model influenced by conductive and resistive backgrounds were presented to explain the physical phenomenology behind the DTAC method, namely that the magnetic fields of the subsurface targets are required to be frequency dependent. Then, the advantages of the DTAC method, e.g., high resolution, reduced geologic noise, and insensitivity to surface interference, were analyzed using surface and subsurface numerical examples in the EMGIMA software. Next, the theoretical advantages, such as high resolution and insensitivity to surface interference, were verified by designing and developing a low-power (moment of 50 Am2) vertical-array DTAC system and testing it on controlled targets and scaled target coils. Finally, a

  14. Low temperature monitoring system for subsurface barriers

    Science.gov (United States)

    Vinegar, Harold J [Bellaire, TX; McKinzie, II Billy John [Houston, TX

    2009-08-18

    A system for monitoring temperature of a subsurface low temperature zone is described. The system includes a plurality of freeze wells configured to form the low temperature zone, one or more lasers, and a fiber optic cable coupled to at least one laser. A portion of the fiber optic cable is positioned in at least one freeze well. At least one laser is configured to transmit light pulses into a first end of the fiber optic cable. An analyzer is coupled to the fiber optic cable. The analyzer is configured to receive return signals from the light pulses.

  15. Physiologically anaerobic microorganisms of the deep subsurface

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, S.E. Jr.; Chung, K.T.

    1991-06-01

    This study seeks to determine numbers, diversity, and morphology of anaerobic microorganisms in 15 samples of subsurface material from the Idaho National Engineering Laboratory, in 18 samples from the Hanford Reservation, and in 1 rock sample from the Nevada Test Site; set up long-term experiments on the chemical activities of anaerobic microorganisms based on these same samples; work to improve methods for the micro-scale determination of in situ anaerobic microbial activity; and to begin to isolate anaerobes from these samples into axenic culture with identification of the axenic isolates.

  16. Instrumented Moles for Planetary Subsurface Regolith Studies

    Science.gov (United States)

    Richter, L. O.; Coste, P. A.; Grzesik, A.; Knollenberg, J.; Magnani, P.; Nadalini, R.; Re, E.; Romstedt, J.; Sohl, F.; Spohn, T.

    2006-12-01

    Soil-like materials, or regolith, on solar system objects provide a record of physical and/or chemical weathering processes on the object in question and as such possess significant scientific relevance for study by landed planetary missions. In the case of Mars, a complex interplay has been at work between impact gardening, aeolian processes, and possibly fluvial processes. This has resulted in regolith that is texturally as well as compositionally layered, as hinted at by results from the Mars Exploration Rover (MER) missions, which are capable of accessing shallow subsurface soils by wheel trenching. Significant subsurface soil access on Mars, i.e. to depths of a meter or more, remains to be accomplished on future missions. This was one of the objectives of the unsuccessful Beagle 2 landed element of the ESA Mars Express mission, which was equipped with the Planetary Underground Tool (PLUTO), a subsurface soil sampling Mole system capable of self-penetration into regolith by means of an internal electro-mechanical hammering mechanism. This lightweight device of less than 900 g mass was designed to repeatedly obtain and deliver to the lander regolith samples from depths down to 2 m, which would have been analysed for organic matter and, specifically, organic carbon from potential extinct microbial activity. With funding from the ESA technology programme, an evolved Mole system - the Instrumented Mole System (IMS) - has now been developed to a readiness level of TRL 6. The IMS is to serve as a carrier for in situ instruments for measurements in planetary subsurface soils. This could complement or even eliminate the need to recover samples to the surface. The Engineering Model hardware developed within this effort is designed to accommodate a geophysical instrument package (Heat Flow and Physical Properties Package, HP3) that would be capable of measuring regolith physical properties and planetary heat flow. The chosen design encompasses a two-body Mole

  17. Microbiological Transformations of Radionuclides in the Subsurface

    International Nuclear Information System (INIS)

    Marshall, Matthew J.; Beliaev, Alex S.; Fredrickson, Jim K.

    2010-01-01

    Microorganisms are ubiquitous in subsurface environments, although their population sizes and metabolic activities can vary considerably depending on energy and nutrient inputs. As a result of their metabolic activities and the chemical properties of their cell surfaces and the exopolymers they produce, microorganisms can directly or indirectly facilitate the biotransformation of radionuclides, thus altering their solubility and overall fate and transport in the environment. Although biosorption to cell surfaces and exopolymers can be an important factor modifying the solubility of some radionuclides under specific conditions, oxidation state is often considered the single most important factor controlling their speciation and, therefore, environmental behavior.

  18. Directional Dipole Model for Subsurface Scattering

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Hachisuka, Toshiya; Kjeldsen, Thomas Kim

    2014-01-01

    Rendering translucent materials using Monte Carlo ray tracing is computationally expensive due to a large number of subsurface scattering events. Faster approaches are based on analytical models derived from diffusion theory. While such analytical models are efficient, they miss out on some...... point source diffusion. A ray source corresponds better to the light that refracts through the surface of a translucent material. Using this ray source, we are able to take the direction of the incident light ray and the direction toward the point of emergence into account. We use a dipole construction...

  19. In-situ Planetary Subsurface Imaging System

    Science.gov (United States)

    Song, W.; Weber, R. C.; Dimech, J. L.; Kedar, S.; Neal, C. R.; Siegler, M.

    2017-12-01

    Geophysical and seismic instruments are considered the most effective tools for studying the detailed global structures of planetary interiors. A planet's interior bears the geochemical markers of its evolutionary history, as well as its present state of activity, which has direct implications to habitability. On Earth, subsurface imaging often involves massive data collection from hundreds to thousands of geophysical sensors (seismic, acoustic, etc) followed by transfer by hard links or wirelessly to a central location for post processing and computing, which will not be possible in planetary environments due to imposed mission constraints on mass, power, and bandwidth. Emerging opportunities for geophysical exploration of the solar system from Venus to the icy Ocean Worlds of Jupiter and Saturn dictate that subsurface imaging of the deep interior will require substantial data reduction and processing in-situ. The Real-time In-situ Subsurface Imaging (RISI) technology is a mesh network that senses and processes geophysical signals. Instead of data collection then post processing, the mesh network performs the distributed data processing and computing in-situ, and generates an evolving 3D subsurface image in real-time that can be transmitted under bandwidth and resource constraints. Seismic imaging algorithms (including traveltime tomography, ambient noise imaging, and microseismic imaging) have been successfully developed and validated using both synthetic and real-world terrestrial seismic data sets. The prototype hardware system has been implemented and can be extended as a general field instrumentation platform tailored specifically for a wide variety of planetary uses, including crustal mapping, ice and ocean structure, and geothermal systems. The team is applying the RISI technology to real off-world seismic datasets. For example, the Lunar Seismic Profiling Experiment (LSPE) deployed during the Apollo 17 Moon mission consisted of four geophone instruments

  20. Prediction of future subsurface temperatures in Korea

    Science.gov (United States)

    Lee, Y.; Kim, S. K.; Jeong, J.; SHIN, E.

    2017-12-01

    The importance of climate change has been increasingly recognized because of its enormous social, economic, and environmental impacts. For this reason, paleoclimate change has been studied intensively using different geological tools, including borehole temperatures, and future surface air temperatures (SATs) have been predicted for local areas and for the globe. Future subsurface temperatures can also have an enormous impact on various areas, and they can be predicted by analytical methods or numerical simulation using measured and predicted SATs together with thermal diffusivity data of rocks. SATs have been measured at 73 meteorological observatories in Korea since 1907 and predicted at the same locations up to the year 2100. Measured SATs at the Seoul meteorological observatory increased by about 3.0 K from 1907 to the present. Predicted SATs follow four different scenarios depending mainly on future CO2 concentrations and national action plans on climate change. The hottest scenario shows that SATs in Korea will increase by about 5.0 K from the present to the year 2100. In addition, thermal diffusivity values have been measured on 2,903 rock samples collected across Korea. Data pretreatment based on autocorrelation analysis was conducted to control high-frequency noise in the thermal diffusivity data. Finally, future subsurface temperatures in Korea were predicted up to the year 2100 with an FEM simulation code (COMSOL Multiphysics) using the measured and predicted SATs and the thermal diffusivity data. At Seoul, the predictions show that subsurface temperatures will increase by about 5.4 K, 3.0 K, 1.5 K, and 0.2 K from the present to 2050, and then by about 7.9 K, 4.8 K, 2.5 K, and 0.5 K to 2100, at depths of 10 m, 50 m, 100 m, and 200 m, respectively. We are now carrying out numerical simulations of subsurface temperature predictions for the 73 locations in Korea.
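
    A minimal sketch of the kind of conductive forward model involved, using an explicit 1D finite-difference scheme rather than the FEM code named in the record; the diffusivity, warming rate, grid, initial condition, and boundary conditions are all assumed for illustration and the geothermal gradient is ignored.

```python
import numpy as np

# Propagate a prescribed surface warming trend into the ground by 1D conduction.
kappa = 1.0e-6             # thermal diffusivity, m^2/s (typical crystalline rock, assumed)
dz, nz = 2.0, 151          # 2 m grid spacing down to 300 m
dt = 0.25 * dz**2 / kappa  # explicit scheme; stability requires kappa*dt/dz^2 <= 0.5
years = 83                 # e.g. present to 2100
n_steps = int(years * 365.25 * 86400 / dt)

T = np.full(nz, 12.0)                      # initial uniform ground temperature, deg C (assumed)
for step in range(n_steps):
    t_years = step * dt / (365.25 * 86400)
    T[0] = 12.0 + 5.0 * t_years / years    # linear surface warming of 5 K by the end (scenario-like)
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                          # zero-flux bottom boundary

print(f"warming at 50 m after {years} yr: {T[int(50 / dz)] - 12.0:.2f} K")
```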