WorldWideScience

Sample records for efficient interval estimation

  1. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...

  2. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…
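
    A hedged sketch of how such calibration can be checked in practice: the observed hit rate is simply the fraction of stated X% intervals that actually contain the true answer. The data values below are illustrative, not from the study.

        # Illustrative only: measuring calibration of X% interval estimates.
        # The interval data are made up; the study's actual items differ.
        intervals = [  # (low, high, true_answer)
            (1700, 1800, 1783),   # first hot air balloon flight: captured
            (1850, 1900, 1783),   # missed
            (1900, 1950, 1783),   # missed
        ]
        stated_confidence = 0.80  # judges said they were 80% sure

        hits = sum(low <= truth <= high for low, high, truth in intervals)
        hit_rate = hits / len(intervals)
        print(f"stated: {stated_confidence:.0%}, observed hit rate: {hit_rate:.0%}")
        # Overconfidence is indicated when the observed hit rate falls well below X%.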

  3. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    Science.gov (United States)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured with attitude-scale questionnaires yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients (the parameter estimators) are calculated from scale data obtained with the method of successive interval (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better: the transformation method whose scaled data produce path coefficients (parameter estimators) with smaller variance is more efficient, and hence better. The analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. For simulation data with high correlation between items (0.7-0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.
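
    The relative efficiency criterion used here can be written as a ratio of estimated path-coefficient variances; a hedged rendering in LaTeX, with notation assumed rather than taken from the paper, under the convention that ER > 1 favours MSI:

        % Relative efficiency of two estimators of the same path coefficient
        % (illustrative notation; the paper's exact convention may differ).
        \mathrm{ER} = \frac{\widehat{\operatorname{Var}}(\hat\beta_{\mathrm{SRS}})}
                           {\widehat{\operatorname{Var}}(\hat\beta_{\mathrm{MSI}})}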

  4. Optimal Data Interval for Estimating Advertising Response

    OpenAIRE

    Gerard J. Tellis; Philip Hans Franses

    2006-01-01

    The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...

  5. DEVELOPMENT MANAGEMENT TRANSFER PRICING BY APPLICATION OF THE INTERVAL ESTIMATES

    Directory of Open Access Journals (Sweden)

    Elena B. Shuvalova

    2013-01-01

    The article discusses the application of the method of interval estimation to assessing the conformity of a transaction price with the market price. A comparative analysis of interval and point estimates is presented, and the positive and negative effects of using interval estimation are identified.

  6. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
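
    A minimal sketch of what "coverage" and "interval bias" mean operationally, not the study's code and not an ordinal CFA: simulate many interval estimates of a known parameter, then record how often the interval captures it and on which side the misses fall.

        import numpy as np

        # Illustrative coverage/bias check for a nominal 95% CI of a mean.
        rng = np.random.default_rng(0)
        true_mu, n, reps, z = 0.5, 20, 5000, 1.96
        covered = misses_above = 0
        for _ in range(reps):
            x = rng.normal(true_mu, 1.0, n)
            half = z * x.std(ddof=1) / np.sqrt(n)
            lo, hi = x.mean() - half, x.mean() + half
            if lo <= true_mu <= hi:
                covered += 1
            elif lo > true_mu:        # whole interval lies above the true value
                misses_above += 1
        print("coverage:", covered / reps)
        print("share of misses above truth:", misses_above / max(reps - covered, 1))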

  7. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.

  8. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov Theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.

  9. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    The Monte Carlo code MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.
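
    A hedged sketch of the Gauss-Markov idea behind a combined estimator: given three correlated estimates of the same quantity with a known (or estimated) covariance matrix, the minimum-variance unbiased linear combination weights them by the inverse covariance. This is generic least-squares combination, not MCNP's actual implementation, and all numbers are made up.

        import numpy as np

        # Generic Gauss-Markov combination of correlated estimators (illustrative).
        k_hat = np.array([0.9975, 0.9981, 0.9978])   # collision, absorption, track length (made up)
        cov = np.array([[4.0, 2.5, 2.0],
                        [2.5, 5.0, 2.2],
                        [2.0, 2.2, 3.5]]) * 1e-7      # assumed covariance matrix of the three estimates

        ones = np.ones(3)
        w = np.linalg.solve(cov, ones)
        w /= ones @ w                                  # weights sum to one
        k_combined = w @ k_hat
        var_combined = 1.0 / (ones @ np.linalg.solve(cov, ones))
        print(k_combined, np.sqrt(var_combined))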

  10. INTERVAL STATE ESTIMATION FOR SINGULAR DIFFERENTIAL EQUATION SYSTEMS WITH DELAYS

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2016-07-01

    The paper deals with linear differential equation systems with algebraic restrictions (singular systems) and a method of interval observer design for this kind of systems. The systems contain constant time delay, measurement noise and disturbances. Interval observer synthesis is based on monotone and cooperative systems technique, linear matrix inequalities, Lyapunov function theory and interval arithmetic. The set of conditions that gives the possibility for interval observer synthesis is proposed. Results of the synthesized observer's operation are shown on the example of a dynamical interindustry balance model. The advantage of the proposed method is that it is adapted to observer design for uncertain systems, where the intervals of admissible values for uncertain parameters are given. The designed observer is capable of providing asymptotically definite limits on the estimation accuracy, since the interval of admissible values for the object state is defined at every instant. The obtained result provides an opportunity to develop the interval estimation theory for complex systems that contain parametric uncertainty, varying delay and nonlinear elements. Interval observers increasingly find applications in economics, electrical engineering, mechanical systems with constraints and optimal flow control.

  11. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
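
    As a hedged illustration of the kind of computation the report describes: the sample mean of interval-valued data is itself an interval, obtained by averaging the lower and upper endpoints separately; the exact range of other statistics, such as the sample variance, over all point values consistent with the intervals is computationally hard in general, one of the computability issues the report summarizes. The data below are made up.

        # Descriptive statistics for interval-valued measurements (illustrative).
        data = [(2.1, 2.4), (1.8, 2.6), (2.0, 2.2), (2.3, 2.9)]  # [lower, upper] per measurement

        mean_lower = sum(lo for lo, _ in data) / len(data)
        mean_upper = sum(hi for _, hi in data) / len(data)
        print(f"sample mean lies in [{mean_lower:.3f}, {mean_upper:.3f}]")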

  12. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at an unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...

  13. Efficient, Differentially Private Point Estimators

    OpenAIRE

    Smith, Adam

    2008-01-01

    Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...

  14. The lower limit of interval efficiency in Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Bijan Rahmani Parchikolaei

    2015-05-01

    In the data envelopment analysis technique, the relative efficiency of homogeneous decision making units is calculated. These calculations are based on classical linear programming models such as CCR, BCC, and others. Because the weighted sum of outputs relative to inputs of a unit is maximized under certain conditions, the efficiency obtained in all of these models is the upper limit of the exact relative efficiency; in other words, the efficiency is calculated from an optimistic viewpoint. As for the lower limit of efficiency, i.e. the efficiency obtained from a pessimistic viewpoint for certain weights, the existing models cannot calculate the exact lower limit, and in some cases there exist models that show an incorrect lower limit. Through the model introduced in the present study, we can calculate the exact lower limit of the interval efficiency. The designed model is obtained by minimizing the ratio of the weighted sum of outputs to that of inputs for every unit under certain conditions. The exact lower limit can be calculated in all states through our proposed model.
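
    A hedged sketch of the pessimistic (lower-limit) idea in standard DEA ratio notation, not necessarily the authors' exact formulation: for the unit under evaluation, the ratio of weighted outputs to weighted inputs is minimized while the same ratio is constrained to be at least one for every unit.

        % Pessimistic ratio model for unit o (illustrative notation only)
        \theta_o^{\min} = \min_{u,v}
          \frac{\sum_r u_r\, y_{ro}}{\sum_i v_i\, x_{io}}
        \quad \text{s.t.} \quad
          \frac{\sum_r u_r\, y_{rj}}{\sum_i v_i\, x_{ij}} \ge 1 \;\; \forall j,
        \qquad u_r,\, v_i \ge \varepsilon > 0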

  15. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is often used in practice, but it has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the method extends the number of samples by numerical simulation without changing the circumstances of the original small sample. The new method can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, and the effectiveness and practicability of the Improved-Bootstrap method are demonstrated.
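
    For context, a hedged sketch of the baseline the article builds on, a plain percentile-bootstrap interval for a small sample; the article's Improved-Bootstrap method further augments the sample, which is not reproduced here. All values are made up.

        import numpy as np

        # Baseline percentile-bootstrap interval for the mean of a small sample (illustrative).
        rng = np.random.default_rng(1)
        sample = np.array([9.8, 10.4, 10.1, 9.6, 10.9])   # made-up small sample

        boot_means = np.array([
            rng.choice(sample, size=sample.size, replace=True).mean()
            for _ in range(10_000)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"95% bootstrap interval for the mean: [{lo:.2f}, {hi:.2f}]")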

  16. Estimating the NIH efficient frontier.

    Directory of Open Access Journals (Sweden)

    Dimitrios Bisias

    BACKGROUND: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  17. Estimating the NIH efficient frontier.

    Science.gov (United States)

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of

  18. Estimating the NIH Efficient Frontier

    Science.gov (United States)

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
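
    A hedged sketch of the underlying mean-variance machinery, with toy numbers rather than the NIH data: for candidate allocations across funding groups, expected "return" (e.g., reduction in years of life lost) and volatility follow from historical estimates, and the efficient frontier traces the best return attainable at each risk level.

        import numpy as np

        # Toy mean-variance frontier over 3 hypothetical funding groups (not NIH data).
        mu = np.array([0.04, 0.07, 0.11])           # assumed expected returns
        cov = np.array([[0.010, 0.002, 0.001],
                        [0.002, 0.030, 0.004],
                        [0.001, 0.004, 0.060]])      # assumed covariance of returns

        rng = np.random.default_rng(2)
        best = {}                                    # risk bucket -> best expected return
        for _ in range(50_000):                      # random long-only allocations summing to 1
            w = rng.dirichlet(np.ones(3))
            ret, risk = w @ mu, float(np.sqrt(w @ cov @ w))
            bucket = round(risk, 3)
            best[bucket] = max(best.get(bucket, -np.inf), ret)
        frontier = sorted(best.items())              # approximate efficient frontier (risk, return)
        print(frontier[:5])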

  19. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...

  20. Estimating reliable paediatric reference intervals in clinical chemistry and haematology.

    Science.gov (United States)

    Ridefelt, Peter; Hellberg, Dan; Aldrimer, Mattias; Gustafsson, Jan

    2014-01-01

    Very few high-quality studies on paediatric reference intervals for general clinical chemistry and haematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The present review summarises current reference interval studies for common clinical chemistry and haematology analyses. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
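
    For context, a conventional reference interval is the central 95% of values observed in a healthy reference population, typically reported per age and sex partition; a hedged non-parametric sketch with made-up values, not taken from any of the cited studies:

        import numpy as np

        # Non-parametric 2.5th-97.5th percentile reference interval (illustrative).
        rng = np.random.default_rng(3)
        healthy_values = rng.normal(loc=5.0, scale=0.6, size=240)   # e.g., a serum analyte, made up
        lower, upper = np.percentile(healthy_values, [2.5, 97.5])
        print(f"reference interval: {lower:.2f} - {upper:.2f}")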

  1. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    Science.gov (United States)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, which typically produces two types of outputs: economically desirable as well as environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an actual estimate of the firm's efficiency. There are numerous approaches in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack based DDF DEA approaches consider the output shortfalls and input excesses in determining efficiency. In situations when data uncertainty is present, a deterministic DEA model is not suitable, as the effects of uncertain data will not be considered. In this case, the interval data approach has been found suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding the underlying data distribution and membership function. The proposed model uses an enhanced DEA model which is based on the DDF approach and incorporates a slack based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were later compared to the results obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It

  2. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    Science.gov (United States)

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  3. Feature-Based Correlation and Topological Similarity for Interbeat Interval Estimation Using Ultrawideband Radar.

    Science.gov (United States)

    Sakamoto, Takuya; Imasaka, Ryohei; Taki, Hirofumi; Sato, Toru; Yoshioka, Mototaka; Inoue, Kenichi; Fukuda, Takeshi; Sakai, Hiroyuki

    2016-04-01

    The objectives of this paper are to propose a method that can accurately estimate the human heart rate (HR) using an ultrawideband (UWB) radar system, and to determine the performance of the proposed method through measurements. The proposed method uses the feature points of a radar signal to estimate the HR efficiently and accurately. Fourier- and periodicity-based methods are inappropriate for estimation of instantaneous HRs in real time because heartbeat waveforms are highly variable, even within the beat-to-beat interval. We define six radar waveform features that enable correlation processing to be performed quickly and accurately. In addition, we propose a feature topology signal that is generated from a feature sequence without using amplitude information. This feature topology signal is used to find unreliable feature points, and thus, to suppress inaccurate HR estimates. Measurements were taken using UWB radar, while simultaneously performing electrocardiography measurements in an experiment that was conducted on nine participants. The proposed method achieved an average root-mean-square error in the interbeat interval of 7.17 ms for the nine participants. The results demonstrate the effectiveness and accuracy of the proposed method. The significance of this study for biomedical research is that the proposed method will be useful in the realization of a remote vital signs monitoring system that enables accurate estimation of HR variability, which has been used in various clinical settings for the treatment of conditions such as diabetes and arterial hypertension.

  4. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in cellular control: allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite being difficult to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

  5. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  6. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  7. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  8. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  9. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
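
    A hedged sketch of one of the approaches evaluated here, the percentile-bootstrap interval for an indirect effect a*b, using ordinary least squares on made-up data; the article's own simulations cover many more methods and conditions.

        import numpy as np

        # Percentile bootstrap CI for the indirect effect a*b in X -> M -> Y (illustrative).
        rng = np.random.default_rng(4)
        n = 100
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(size=n)            # true a = 0.5
        y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # true b = 0.4

        def indirect(idx):
            a = np.polyfit(x[idx], m[idx], 1)[0]                      # slope of M on X
            b = np.linalg.lstsq(np.column_stack([m[idx], x[idx], np.ones(len(idx))]),
                                y[idx], rcond=None)[0][0]             # slope of Y on M given X
            return a * b

        boot = [indirect(rng.integers(0, n, n)) for _ in range(5000)]
        print(np.percentile(boot, [2.5, 97.5]))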

  10. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    The report explains how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
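
    A hedged sketch of the usual workflow for power-function allometries: fit on the log-log scale by ordinary least squares, form a prediction interval for a new stem diameter, and back-transform. The data are made up, and bias-correction details covered in the report are not shown.

        import numpy as np
        from scipy import stats

        # Prediction interval for tree biomass from stem diameter via log-log OLS (illustrative).
        diameter = np.array([5., 8., 12., 15., 20., 25., 30., 35.])      # cm, made up
        biomass  = np.array([3., 10., 30., 55., 120., 210., 340., 500.])  # kg, made up

        X = np.column_stack([np.ones_like(diameter), np.log(diameter)])
        y = np.log(biomass)
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        n, p = X.shape
        s2 = res[0] / (n - p)                        # residual variance on the log scale

        d_new = 18.0
        x0 = np.array([1.0, np.log(d_new)])
        pred = x0 @ beta
        se_pred = np.sqrt(s2 * (1 + x0 @ np.linalg.inv(X.T @ X) @ x0))
        t = stats.t.ppf(0.975, df=n - p)
        lo, hi = np.exp(pred - t * se_pred), np.exp(pred + t * se_pred)
        print(f"predicted biomass ~ {np.exp(pred):.0f} kg, 95% PI [{lo:.0f}, {hi:.0f}] kg")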

  11. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  12. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

    NARCIS (Netherlands)

    Klaassen, C.; Venetiaan, S.

    2010-01-01

    Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

  13. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  14. Using Estimated On-Site Ambient Temperature Has Uncertain Benefit When Estimating Postmortem Interval

    Directory of Open Access Journals (Sweden)

    Laurent Dourel

    2010-01-01

    The forensic entomologist uses weather station data as part of the calculation when estimating the postmortem interval (PMI). To reduce the potential inaccuracies of this method caused by the distance between the crime scene and the meteorological station, temperature correlation data from the site of the corpse may be used. This experiment simulated the impact of retrospective weather data correction using linear regression between seven stations and sites in three climatic exposure groups during three different seasons as part of the accumulated degree days calculation for three necrophagous species (Diptera: Calliphoridae). No consistent benefit in the use of correlation or the original data from the meteorological stations was observed. In nine cases out of 12, the data from the weather station network limited the risk of a deviation from reality. The forensic entomologist should be cautious when using this correlation model.

  15. An Interval Estimation Method of Patent Keyword Data for Sustainable Technology Forecasting

    Directory of Open Access Journals (Sweden)

    Daiho Uhm

    2017-11-01

    Technology forecasting (TF) is forecasting the future state of a technology. It is exciting to know the future of technologies, because technology changes the way we live and enhances the quality of our lives. In particular, TF is an important area in the management of technology (MOT) for R&D strategy and new product development. Consequently, there are many studies on TF. Patent analysis is one method of TF because patents contain substantial information regarding developed technology. The conventional methods of patent analysis are based on quantitative approaches such as statistics and machine learning. Most traditional TF methods based on patent analysis share a common problem: the sparsity of the patent keyword data structured from collected patent documents. After preprocessing with text mining techniques, most frequencies of technological keywords in patent data have values of zero. This problem is a disadvantage for the performance of TF and makes patent keyword data difficult to analyze. To solve this problem, we propose an interval estimation method (IEM). Using an adjusted Wald confidence interval called the Agresti–Coull confidence interval, we construct our IEM for efficient TF. In addition, we apply the proposed method to forecast the technology of an innovative company. To show how our work can be applied in the real domain, we conduct a case study using Apple technology.
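
    The Agresti–Coull adjustment referenced here has a simple closed form; a hedged sketch of the generic interval for a binomial proportion, without reproducing how the authors apply it to keyword frequencies (the example counts are invented).

        import math

        def agresti_coull(successes, n, z=1.96):
            """Adjusted Wald (Agresti-Coull) interval for a binomial proportion."""
            n_tilde = n + z**2
            p_tilde = (successes + z**2 / 2) / n_tilde
            half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
            return max(0.0, p_tilde - half), min(1.0, p_tilde + half)

        # e.g., a keyword appearing in 3 of 40 patent documents (illustrative numbers)
        print(agresti_coull(3, 40))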

  16. Application of Fourier transform infrared spectroscopy with chemometrics on postmortem interval estimation based on pericardial fluids.

    Science.gov (United States)

    Zhang, Ji; Li, Bing; Wang, Qi; Wei, Xin; Feng, Weibo; Chen, Yijiu; Huang, Ping; Wang, Zhenyuan

    2017-12-21

    Postmortem interval (PMI) evaluation remains a challenge in the forensic community due to the lack of efficient methods. Studies have focused on chemical analysis of biofluids for PMI estimation; however, no reports using spectroscopic methods in pericardial fluid (PF) are available. In this study, Fourier transform infrared (FTIR) spectroscopy with attenuated total reflectance (ATR) accessory was applied to collect comprehensive biochemical information from rabbit PF at different PMIs. The PMI-dependent spectral signature was determined by two-dimensional (2D) correlation analysis. The partial least square (PLS) and nu-support vector machine (nu-SVM) models were then established based on the acquired spectral dataset. Spectral variables associated with amide I, amide II, COO - , C-H bending, and C-O or C-OH vibrations arising from proteins, polypeptides, amino acids and carbohydrates, respectively, were susceptible to PMI in 2D correlation analysis. Moreover, the nu-SVM model appeared to achieve a more satisfactory prediction than the PLS model in calibration; the reliability of both models was determined in an external validation set. The study shows the possibility of application of ATR-FTIR methods in postmortem interval estimation using PF samples.

  17. Efficient bootstrap estimates for tail statistics

    Science.gov (United States)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
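
    A hedged sketch of the computational point being made: bootstrap confidence intervals for a high quantile depend almost entirely on the largest order statistics, so resampling only the top of the sorted sample can approximate resampling the whole sample at a fraction of the cost. This is illustrative code, not the authors', and the subset quantile mapping is a simple approximation.

        import numpy as np

        # Bootstrap CI for a high quantile using only the top-k order statistics (illustrative).
        rng = np.random.default_rng(5)
        sample = rng.gumbel(loc=10.0, scale=2.0, size=200_000)  # stand-in for a long model series

        q = 99.9                                  # target quantile of the full sample (%)
        k = 2_000                                 # number of highest entries kept
        top = np.sort(sample)[-k:]
        q_sub = 100 * (1 - sample.size * (1 - q / 100) / k)     # matching quantile within the subset

        boot = [np.percentile(rng.choice(top, k, replace=True), q_sub) for _ in range(2000)]
        print("approx. 95% CI for the 99.9th percentile:", np.percentile(boot, [2.5, 97.5]))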

  18. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...... a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions...... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating...

  19. [Research Progress of Vitreous Humor Detection Technique on Estimation of Postmortem Interval].

    Science.gov (United States)

    Duan, W C; Lan, L M; Guo, Y D; Zha, L; Yan, J; Ding, Y J; Cai, J F

    2018-02-01

    Estimation of the postmortem interval (PMI) plays a crucial role in forensic study and identification work. Because of its unique anatomical location, vitreous humor is considered a candidate for estimating PMI, which has aroused interest among scholars, and some research has been carried out. Detection techniques for vitreous humor are constantly being developed and improved and have gradually been applied in forensic science; meanwhile, the study of PMI estimation using vitreous humor is advancing rapidly. This paper reviews various techniques and instruments applied to vitreous humor detection, such as ion selective electrodes, capillary ion analysis, spectroscopy, chromatography, nano-sensing technology, automatic biochemical analysers and flow cytometers, as well as the related research progress on PMI estimation in recent years. In order to provide a research direction for scholars and promote a more accurate and efficient application of vitreous humor analysis in PMI estimation, some remaining problems are also analysed in this paper. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  20. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.

  1. Efficient Frontier - Comparing Different Volatility Estimators

    OpenAIRE

    Tea Poklepović; Zdravka Aljinović; Mario Matković

    2015-01-01

    Modern Portfolio Theory (MPT) according to Markowitz states that investors form mean-variance efficient portfolios which maximizes their utility. Markowitz proposed the standard deviation as a simple measure for portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator based on intraday data and compares three efficient frontiers on the Croatian Stock Market. The results show that ra...

  2. Inferring uncertainty from interval estimates: Effects of alpha level and numeracy

    Directory of Open Access Journals (Sweden)

    Luke F. Rinne

    2013-05-01

    Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels; e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.
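
    A hedged sketch of the normative inference participants were asked to approximate: a (1-alpha) prediction interval for a normal quantity implies a standard deviation, from which capture probabilities for other margins follow. This is a generic calculation with invented numbers, not the experiment's materials.

        from scipy import stats

        # From a stated prediction interval, back out sigma and get capture probabilities (illustrative).
        low, high, alpha = 30.0, 50.0, 0.05      # e.g., a 95% interval for annual rainfall (made up)
        z = stats.norm.ppf(1 - alpha / 2)
        mu = (low + high) / 2
        sigma = (high - low) / (2 * z)

        for margin in (2.0, 5.0, 10.0):          # probability the total falls within mu +/- margin
            p = stats.norm.cdf(margin / sigma) - stats.norm.cdf(-margin / sigma)
            print(f"P(|X - mean| <= {margin}) = {p:.2f}")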

  3. An efficient estimator for Gibbs random fields

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin

    2014-01-01

    Vol. 50, No. 6 (2014), pp. 883-895. ISSN 0023-5954. R&D Projects: GA ČR(CZ) GBP402/12/G097. Institutional support: RVO:67985556. Keywords: Gibbs random field; efficient estimator; empirical estimator. Subject RIV: BA - General Mathematics. Impact factor: 0.541, year: 2014. http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf

  4. Piecewise Loglinear Estimation of Efficient Production Surfaces

    OpenAIRE

    Rajiv D. Banker; Ajay Maindiratta

    1986-01-01

    Linear programming formulations for piecewise loglinear estimation of efficient production surfaces are derived from a set of basic properties postulated for the underlying production possibility sets. Unlike the piecewise linear model of Banker, Charnes, and Cooper (Banker R. D., A. Charnes, W. W. Cooper. 1984. Models for the estimation of technical and scale inefficiencies in data envelopment analysis. Management Sci. 30 (September) 1078--1092.), this approach permits the identification of ...

  5. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor...... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cram\\'{e}r-Rao lower...

  6. Efficient Methods of Estimating Switchgrass Biomass Supplies

    Science.gov (United States)

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  7. MILITARY MISSION COMBAT EFFICIENCY ESTIMATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Ighoyota B. AJENAGHUGHRURE

    2017-04-01

    Military infantry recruits, although trained, lack experience in real-time combat operations, despite combat simulation training. Therefore, the choice of including them in military operations is a thorough and careful process. This has left top military commanders with the tough task of deciding the best blend of inexperienced and experienced infantry soldiers for any military operation, based on available information on enemy strength and capability. This research project delves into the design of a mission combat efficiency estimator (MCEE). It is a decision support system that aids top military commanders in estimating the best combination of soldiers suitable for different military operations, based on available information on the enemy's combat experience. Hence, its advantages consist of reducing casualties and other risks that compromise the overall success of the operation, and also boosting the morale of soldiers in an operation with information such as an estimate of the combat efficiency of their enemies. The system was developed using Microsoft Asp.Net and a Sql Server backend. A case study conducted with the MCEE system clearly reveals that it is an efficient tool for military mission planning in terms of team selection. Hence, when the MCEE system is fully deployed it will aid military commanders in deciding on team member combinations for any given operation, based on enemy personnel information that is known beforehand. Further work on the MCEE will explore fire power types and their impact in mission combat efficiency estimation.

  8. Channel Equalization and Phase Estimation for Reduced-Guard-Interval CO-OFDM Systems

    Science.gov (United States)

    Zhuge, Qunbi

    Reduced-guard-interval (RGI) coherent optical (CO) orthogonal frequency-division multiplexing (OFDM) is a potential candidate for next-generation beyond-100G optical transports, owing to its advantages such as high spectral efficiency and high tolerance to optical channel impairments. First of all, we review coherent optical systems with an emphasis on CO-OFDM systems, as well as the optical channel impairments and the general digital signal processing techniques to combat them. This work focuses on the channel equalization and phase estimation of RGI CO-OFDM systems. We first propose a novel equalization scheme based on the equalization structure of RGI CO-OFDM to reduce the cyclic prefix overhead to zero. Then we show that intra-channel nonlinearities should be considered when designing the training symbols for channel estimation. Afterwards, we propose and analyze the phenomenon of dispersion-enhanced phase noise (DEPN), caused by the interaction between the laser phase noise and the chromatic dispersion in RGI CO-OFDM transmissions. DEPN induces a non-negligible performance degradation and limits the tolerable laser linewidth. However, it can be compensated by the grouped maximum-likelihood phase estimation proposed in this work.

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA-Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  10. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  11. Efficient multidimensional regularization for Volterra series estimation

    Science.gov (United States)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, nonparametric Volterra models of the cascaded water tanks benchmark are presented. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
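
    As a rough illustration of the regularization idea underlying this record, restricted to the first-order (FIR) part of a Volterra series, the following Python sketch estimates an impulse response by regularized least squares with a stable-spline/TC prior encoding exponential decay. All signals, hyperparameter values and function names are hypothetical and are not taken from the paper.

      import numpy as np

      def tc_kernel(n_taps, c=1.0, lam=0.9):
          """Stable-spline/TC prior covariance: encodes exponential decay of the impulse response."""
          k = np.arange(n_taps)
          return c * lam ** np.maximum.outer(k, k)

      def regularized_fir(u, y, n_taps, noise_var=1e-2):
          """Regularized least squares estimate of y ~ Phi @ g with prior g ~ N(0, P)."""
          N = len(y)
          Phi = np.zeros((N, n_taps))
          for i in range(n_taps):
              Phi[i:, i] = u[:N - i]        # zero initial conditions assumed
          P = tc_kernel(n_taps)
          A = Phi.T @ Phi + noise_var * np.linalg.inv(P)
          return np.linalg.solve(A, Phi.T @ y)

      # Hypothetical usage on a simulated first-order system.
      rng = np.random.default_rng(0)
      u = rng.standard_normal(500)
      g_true = 0.8 ** np.arange(40)
      y = np.convolve(u, g_true)[:500] + 0.1 * rng.standard_normal(500)
      g_hat = regularized_fir(u, y, n_taps=40)
      print(np.round(g_hat[:5], 3))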

  12. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  13. [Reflection of estimating postmortem interval in forensic entomology and the Daubert standard].

    Science.gov (United States)

    Xie, Dan; Peng, Yu-Long; Guo, Ya-Dong; Cai, Ji-Feng

    2013-08-01

    Estimating the postmortem interval (PMI) is always an emphasis and a difficulty in forensic practice, and forensic entomology plays a significant, indispensable role. Recently, the theories and technologies of forensic entomology have become increasingly rich, but many problems remain in research and practice. With the introduction of the Daubert standard, higher demands are placed on the reliability and accuracy of PMI estimation by forensic entomology. This review summarizes the application of the Daubert standard to several aspects of ecology, quantitative genetics, population genetics, molecular biology, and microbiology in the practice of forensic entomology. It builds a bridge between basic research and forensic practice to provide higher accuracy for estimating the postmortem interval by forensic entomology.

  14. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    Science.gov (United States)

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central-tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between the duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to

  15. Multi-Level Interval Estimation for Locating damage in Structures by Using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Pan Danguang; Gao Yanhua; Song Junlei

    2010-01-01

    A new analysis technique, called the multi-level interval estimation method, is developed for locating damage in structures. In this method, artificial neural network (ANN) analysis is combined with statistical theory to estimate the range of the damage location. The ANN is a multilayer perceptron trained by back-propagation. Natural frequencies and mode shapes at a few selected points are used as input to identify the location and severity of damage. For large-scale structures with many elements, the multi-level interval estimation method is developed to reduce the estimation range of the damage location step by step. At every step, an estimation range for the damage location is obtained from the output of the ANN using interval estimation. The training cases for the next ANN are selected from this estimation range after a linear transform, and the output of the new ANN yields a reduced estimation range for the damage location. Two numerical example analyses, on a 10-bar truss and a 100-bar truss, are presented to demonstrate the effectiveness of the proposed method.

  16. Engineering estimates versus impact evaluation of energy efficiency projects: Regression discontinuity evidence from a case study

    International Nuclear Information System (INIS)

    Lang, Corey; Siler, Matthew

    2013-01-01

    Energy efficiency upgrades have been gaining widespread attention across global channels as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load

  17. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculation of the intervals for the sample mean is carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
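
    A minimal Python sketch of two of the interval methods compared in this record, the classical t-interval and the percentile bootstrap, applied to a small sample; the data values and the number of resamples are hypothetical.

      import numpy as np
      from scipy import stats

      def classical_interval(x, conf=0.95):
          """Classical t-based confidence interval for the mean."""
          n, m, s = len(x), np.mean(x), np.std(x, ddof=1)
          t = stats.t.ppf(0.5 + conf / 2, df=n - 1)
          return m - t * s / np.sqrt(n), m + t * s / np.sqrt(n)

      def bootstrap_interval(x, conf=0.95, n_boot=10000, seed=0):
          """Percentile bootstrap confidence interval for the mean."""
          rng = np.random.default_rng(seed)
          means = np.array([np.mean(rng.choice(x, size=len(x), replace=True))
                            for _ in range(n_boot)])
          return tuple(np.percentile(means, [100 * (1 - conf) / 2, 100 * (0.5 + conf / 2)]))

      x = np.array([4.1, 5.3, 4.8, 6.0, 5.1])   # hypothetical small sample, n = 5
      print(classical_interval(x))
      print(bootstrap_interval(x))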

  18. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    Science.gov (United States)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been devoted to monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse-interval sampling and examines differences in estimates between sampling at 1, 5, 10 and 15 minute intervals. The analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. Proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n: each trial was sampled at n = 100, repeated 10,000 times and averaged, and all trials were then averaged to obtain an estimate for each sample interval.

  19. Efficient estimation of diffusion during dendritic solidification

    Science.gov (United States)

    Yeum, K. S.; Poirier, D. R.; Laxmanan, V.

    1989-01-01

    A very efficient finite difference method has been developed to estimate the solute redistribution during solidification with diffusion in the solid. This method is validated by comparing the computed results with the results of an analytical solution derived by Kobayashi (1988) for the assumptions of a constant diffusion coefficient, a constant equilibrium partition ratio, and a parabolic rate of the advancement of the solid/liquid interface. The flexibility of the method is demonstrated by applying it to the dendritic solidification of a Pb-15 wt pct Sn alloy, for which the equilibrium partition ratio and diffusion coefficient vary substantially during solidification. The fraction eutectic at the end of solidification is also obtained by estimating the fraction solid, in greater resolution, where the concentration of solute in the interdendritic liquid reaches the eutectic composition of the alloy.

  20. Estimation of sojourn time in chronic disease screening without data on interval cases.

    Science.gov (United States)

    Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W

    2000-03-01

    Estimation of the sojourn time in the preclinical detectable period in disease screening, or of transition rates for the natural history of chronic disease, usually relies on interval cases (cases diagnosed between screens). However, ascertaining such cases can be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumour size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 years (95% CI: 1.18-4.86) for this high-risk group. Validation of these models against data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. Good internal and external validation demonstrates the feasibility of using these models to estimate parameters that previously required interval cancers. This method can be applied to other screening data in which interval cases are not available.

  1. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid it is important to predict solar radiation accurately, as forecast errors can lead to significant costs. Recently, the growing number of statistical approaches that address this problem has produced a prolific literature. In general terms, the main research discussion is centred on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about forecast variability in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models and combinations of both in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate the methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
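
    As a rough sketch of the combination idea described above, the following Python code pairs a simple EWMA volatility forecast (a stand-in for the GARCH-type models mentioned) with a kernel density estimate of standardized forecast errors to form a prediction interval. The data, smoothing constant and function names are assumptions, not the paper's implementation.

      import numpy as np
      from scipy.stats import gaussian_kde

      def prediction_interval(point_forecast, errors, conf=0.90, lam=0.94):
          """Combine an EWMA volatility forecast with a KDE of standardized forecast errors."""
          e = np.asarray(errors, dtype=float)
          var = np.var(e)
          for err in e:                          # RiskMetrics-style EWMA recursion
              var = lam * var + (1 - lam) * err ** 2
          sigma_next = np.sqrt(var)
          s = np.std(e) if np.std(e) > 0 else 1.0
          draws = gaussian_kde(e / s).resample(20000).ravel()   # KDE quantiles via resampling
          q_lo, q_hi = np.percentile(draws, [100 * (1 - conf) / 2, 100 * (1 + conf) / 2])
          return point_forecast + sigma_next * q_lo, point_forecast + sigma_next * q_hi

      # Hypothetical irradiance forecast errors (W/m2) and a next-hour point forecast.
      rng = np.random.default_rng(3)
      past_errors = rng.normal(0.0, 40.0, size=300)
      print(prediction_interval(point_forecast=520.0, errors=past_errors))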

  2. Rational Choice of the Investment Project Using Interval Estimates of the Initial Parameters

    Directory of Open Access Journals (Sweden)

    Kotsyuba Oleksiy S.

    2016-11-01

    Full Text Available The article is dedicated to the development of instruments to support decision-making on the problem of choosing the best investment project when the initial quantitative parameters of the investment alternatives under consideration are described by interval estimates. In terms of managing the risk caused by interval uncertainty of the initial data, the study is limited to the component (aspect) of risk measured as the degree of possibility of a discrepancy between the resulting economic indicator (criterion) and its normative level (the norm). An important hypothesis underlying the formalization of the problem proposed in this work is the presence, for some or all of the projects from which the choice is made, of a risk of poor rate of return in terms of net present (current) value. Based on relevant developments within the framework of fuzzy-set methodology and interval analysis, a model is formulated for choosing an optimal investment project from a set of alternatives in the interval formulation of the problem. It is assumed that the indicators of economic attractiveness (performance) of the compared directions of real investment are described either by interval estimates or by possibility distribution functions. The proposed model is tested on a conditional numerical example, which demonstrates its practical viability.

  3. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

    OpenAIRE

    Koike, Ken-ichi

    2007-01-01

    For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

  4. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    Science.gov (United States)

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  5. An Integrated Theory of Prospective Time Interval Estimation: The Role of Cognition, Attention, and Learning

    Science.gov (United States)

    Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John

    2007-01-01

    A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…

  6. Nonparametric estimation in an "illness-death" model when all transition times are interval censored

    DEFF Research Database (Denmark)

    Frydman, Halina; Gerds, Thomas; Grøn, Randi

    2013-01-01

    We develop nonparametric maximum likelihood estimation for the parameters of an irreversible Markov chain on states {0,1,2} from the observations with interval censored times of 0 → 1, 0 → 2 and 1 → 2 transitions. The distinguishing aspect of the data is that, in addition to all transition times ...

  7. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which is often impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and from large subsets of these data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, the mean ± 2 SD of native and Box-Cox-transformed values, the 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The estimates closest to the 1439-result reference interval for 27-result subsamples were obtained by both the parametric and robust methods after Box-Cox transformation, but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
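
    To make the comparison above concrete, here is a small Python sketch computing three of the simple small-sample reference-limit estimates mentioned (min-max, mean ± 2 SD on native values, and mean ± 2 SD after Box-Cox transformation with back-transformation); the robust and CDF-diagram methods are omitted, and the sample is simulated rather than the canine data from the study.

      import numpy as np
      from scipy import stats, special

      def reference_limits(values):
          """Compare simple reference-interval estimates from a small sample."""
          x = np.asarray(values, dtype=float)
          limits = {"min-max": (x.min(), x.max()),
                    "mean +/- 2 SD (native)": (x.mean() - 2 * x.std(ddof=1),
                                               x.mean() + 2 * x.std(ddof=1))}
          # Box-Cox transform (values must be positive); limits are back-transformed.
          y, lam = stats.boxcox(x)
          lo, hi = y.mean() - 2 * y.std(ddof=1), y.mean() + 2 * y.std(ddof=1)
          limits["mean +/- 2 SD (Box-Cox)"] = (float(special.inv_boxcox(lo, lam)),
                                               float(special.inv_boxcox(hi, lam)))
          return limits

      # Hypothetical plasma creatinine values (umol/L) from 27 healthy dogs.
      rng = np.random.default_rng(7)
      sample = rng.lognormal(mean=4.4, sigma=0.25, size=27)
      for method, (lo, hi) in reference_limits(sample).items():
          print(f"{method:28s} {lo:6.1f} - {hi:6.1f}")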

  8. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    Science.gov (United States)

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759

  9. Estimation of energy efficiency of residential buildings

    Directory of Open Access Journals (Sweden)

    Glushkov Sergey

    2017-01-01

    Full Text Available Increasing the energy performance of residential buildings by reducing heat consumption for heating and ventilation is the last segment in the system of energy resource saving. The first segments in the energy-saving process are heat production and transportation over main lines and outside distribution networks. In the period from 2006 to 2013, through optimization of heat-supply schemes and modernization of heating systems using expensive (200-300 US$ per 1 m) though highly effective pre-coated pipes, savings of 2.7 million tons of fuel equivalent were achieved. Considering the multi-stage and multifactorial nature (electricity, heat and water supply) of energy saving in the residential sector, a reasonable estimate of the efficiency of residential building energy saving should be expressed in tons of fuel equivalent per unit of time.

  10. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  11. Postmortem interval estimation: a novel approach utilizing gas chromatography/mass spectrometry-based biochemical profiling.

    Science.gov (United States)

    Kaszynski, Richard H; Nishiumi, Shin; Azuma, Takeshi; Yoshida, Masaru; Kondo, Takeshi; Takahashi, Motonori; Asano, Migiwa; Ueno, Yasuhiro

    2016-05-01

    While the molecular mechanisms underlying postmortem change have been exhaustively investigated, the establishment of an objective and reliable means for estimating postmortem interval (PMI) remains an elusive feat. In the present study, we exploit low molecular weight metabolites to estimate postmortem interval in mice. After sacrifice, serum and muscle samples were procured from C57BL/6J mice (n = 52) at seven predetermined postmortem intervals (0, 1, 3, 6, 12, 24, and 48 h). After extraction and isolation, low molecular weight metabolites were measured via gas chromatography/mass spectrometry (GC/MS) and examined via semi-quantification studies. Then, PMI prediction models were generated for each of the 175 and 163 metabolites identified in muscle and serum, respectively, using a non-linear least squares curve fitting program. A PMI estimation panel for muscle and serum was then erected which consisted of 17 (9.7%) and 14 (8.5%) of the best PMI biomarkers identified in muscle and serum profiles demonstrating statistically significant correlations between metabolite quantity and PMI. Using a single-blinded assessment, we carried out validation studies on the PMI estimation panels. Mean ± standard deviation for accuracy of muscle and serum PMI prediction panels was -0.27 ± 2.88 and -0.89 ± 2.31 h, respectively. Ultimately, these studies elucidate the utility of metabolomic profiling in PMI estimation and pave the path toward biochemical profiling studies involving human samples.

  12. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi-square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
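
    The record does not give the exact adjustment formulas, so the following Python sketch only illustrates the general idea of a growth-style interval, a normal-style confidence interval computed after removing a small, predetermined number of zeros from a highly dispersed sample, alongside the standard normal approximation. The removal rule and data are hypothetical, not the authors' estimators.

      import numpy as np
      from scipy import stats

      def normal_interval(x, conf=0.95):
          """Standard normal-approximation confidence interval for the mean."""
          n = len(x)
          z = stats.norm.ppf(0.5 + conf / 2)
          half = z * np.std(x, ddof=1) / np.sqrt(n)
          return np.mean(x) - half, np.mean(x) + half

      def growth_style_interval(x, n_zeros_removed=1, conf=0.95):
          """Illustrative only: drop a small fixed number of zeros, then form a normal-style CI."""
          x = np.sort(np.asarray(x, dtype=float))
          keep = x[n_zeros_removed:] if np.sum(x == 0) >= n_zeros_removed else x
          return normal_interval(keep, conf)

      # Hypothetical highly dispersed negative binomial sample (many zeros).
      rng = np.random.default_rng(11)
      sample = rng.negative_binomial(n=0.3, p=0.05, size=40)
      print(normal_interval(sample))
      print(growth_style_interval(sample))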

  13. Various methods for the estimation of the post mortem interval from Calliphoridae: A review

    Directory of Open Access Journals (Sweden)

    Ruchi Sharma

    2015-03-01

    Forensic entomology is recognized in many countries as an important tool for legal investigations. Unfortunately, it has not received much attention in India as an investigative tool. The maggots of flies crawling on dead bodies are widely considered just another disgusting element of decay and are not collected at the time of autopsy, yet they can aid in death investigations (time since death, manner of death, etc.). This paper reviews the various methods of post mortem interval estimation using Calliphoridae, to make investigators, law personnel and researchers aware of the importance of entomology in criminal investigations. The various problems confronting forensic entomologists in estimating the time since death are also discussed, and there is a need for further research in the field as well as the laboratory. Correct estimation of the post mortem interval is one of the most important aspects of legal medicine.

  14. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. To solve this problem, we propose a nonparametric method for estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared with the average interpolation and nearest-neighbour interpolation methods, the method proposed in this paper replaces the right-censored data with interval-censored data and greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of the patients, and provides some help for medical survival data analysis.

  15. Obtaining appropriate interval estimates for age when multiple indicators are used

    DEFF Research Database (Denmark)

    Fieuws, Steffen; Willems, Guy; Larsen, Sara Tangmose

    2016-01-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple … the need for interval estimation. To illustrate and evaluate the method, Köhler et al. (1995) third molar scores are used to estimate the age in a dataset of 3200 male subjects in the juvenile age range.

  16. Ranking DMUs by Comparing DEA Cross-Efficiency Intervals Using Entropy Measures

    Directory of Open Access Journals (Sweden)

    Tim Lu

    2016-12-01

    Full Text Available Cross-efficiency evaluation, an extension of data envelopment analysis (DEA), can eliminate unrealistic weighting schemes and provide a ranking for decision making units (DMUs). In the literature, the unique determination of input and output weights has received most of the attention. However, the problem of choosing between the aggressive (minimal) and benevolent (maximal) formulations for decision-making may still remain. In this paper, we develop a procedure to perform cross-efficiency evaluation without the need to make any specific choice of DEA weights. The proposed procedure takes the aggressive and benevolent formulations into account at the same time, so the choice of DEA weights can be avoided. Consequently, a set of cross-efficiency intervals is obtained for each DMU. Entropy, which is based on information theory, is an effective tool to measure uncertainty. We therefore utilize entropy to construct a numerical index for DMUs with cross-efficiency intervals. A mathematical program is proposed to find the optimal entropy values of DMUs for comparison. With the derived entropy values, we can rank DMUs accordingly. Two examples are presented to show the effectiveness of the idea proposed in this paper.

  17. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    Science.gov (United States)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
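
    A brief sketch of the kind of calculation described above, assuming a power-law (Crow-AMSAA) form for the Weibull intensity: maximum likelihood estimates of the shape and scale parameters from failure times observed up to time T, and the expected number of failures in a future usage interval obtained by integrating the fitted ROCOF. The failure times are hypothetical, and the Monte Carlo cost analysis is not reproduced.

      import numpy as np

      def powerlaw_nhpp_mle(failure_times, t_end):
          """MLEs for the power-law intensity u(t) = lam * beta * t**(beta - 1) (time-truncated data)."""
          t = np.asarray(failure_times, dtype=float)
          n = len(t)
          beta = n / np.sum(np.log(t_end / t))
          lam = n / t_end ** beta
          return lam, beta

      def expected_failures(lam, beta, t1, t2):
          """Integral of the ROCOF over [t1, t2], i.e. the expected number of failures in that interval."""
          return lam * (t2 ** beta - t1 ** beta)

      # Hypothetical failure times (operating hours) observed over 5000 h.
      times = [420.0, 960.0, 1805.0, 2210.0, 3340.0, 3710.0, 4460.0, 4920.0]
      lam, beta = powerlaw_nhpp_mle(times, t_end=5000.0)
      print(f"beta = {beta:.2f}, lambda = {lam:.2e}")
      print("expected failures in next 1000 h:", round(expected_failures(lam, beta, 5000.0, 6000.0), 2))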

  18. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in studying contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table units, based on their mathematical expressions, restricts the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method (information such as confidence interval boundaries, percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of expressions of two binomial variables, for which the variation domain of one variable depends on the other, was a real problem because most software uses interpolation in graphical representation and the surface maps are quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
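
    As a concrete illustration of binomial confidence interval computation of the kind discussed above, here is a short Python sketch contrasting the normal-approximation (Wald) interval with the Wilson score interval; this is generic textbook material, not the original PHP implementation described in the record.

      from statistics import NormalDist

      def wald_interval(x, n, conf=0.95):
          """Normal-approximation (Wald) interval for a binomial proportion x/n."""
          z = NormalDist().inv_cdf(0.5 + conf / 2)
          p = x / n
          half = z * (p * (1 - p) / n) ** 0.5
          return max(0.0, p - half), min(1.0, p + half)

      def wilson_interval(x, n, conf=0.95):
          """Wilson score interval, better behaved for small n or extreme proportions."""
          z = NormalDist().inv_cdf(0.5 + conf / 2)
          p = x / n
          denom = 1 + z ** 2 / n
          centre = (p + z ** 2 / (2 * n)) / denom
          half = z * ((p * (1 - p) + z ** 2 / (4 * n)) / n) ** 0.5 / denom
          return centre - half, centre + half

      # Hypothetical 2x2-table cell: 3 events out of 15 trials.
      print(wald_interval(3, 15))
      print(wilson_interval(3, 15))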

  19. H∞ state estimation of generalised neural networks with interval time-varying delays

    Science.gov (United States)

    Saravanakumar, R.; Syed Ali, M.; Cao, Jinde; Huang, He

    2016-12-01

    This paper focuses on the H∞ state estimation of generalised neural networks with interval time-varying delays. The integral terms in the time derivative of the Lyapunov-Krasovskii functional are handled by Jensen's inequality, the reciprocally convex combination approach and a new Wirtinger-based double integral inequality. A delay-dependent criterion is derived under which the estimation error system is globally asymptotically stable with H∞ performance. The proposed conditions are expressed as linear matrix inequalities. Optimal H∞ norm bounds are obtained easily by solving convex problems in terms of linear matrix inequalities. The advantage of employing the proposed inequalities is illustrated by numerical examples.

  20. Intrapuparial Development of Sarconesia Chlorogaster (Diptera: Calliphoridae) for Postmortem Interval Estimation (PMI).

    Science.gov (United States)

    Flissak, J C; Moura, M O

    2018-02-28

    Sarconesia chlorogaster (Wiedemann) (Diptera: Calliphoridae) is a blow fly species endemic to South America and of forensic importance, whose pupal development accounts for about 70% of the total immature development time. Morphological changes during this stage, if refined, may therefore provide greater accuracy and reliability in the calculation of the minimum postmortem interval. Considering the importance of this species, the main objective of this work was to identify and describe temporal intrapuparial morphological changes of S. chlorogaster. The development of S. chlorogaster reared on an artificial diet and at two constant temperatures (20 and 25ºC) was monitored. Every 8 h until the end of the pupal stage, 10 pupae were killed, fixed, and had their external morphology described and photographed. Of the 29 morphological characteristics described, 13 are potentially informative for estimating the age of S. chlorogaster. In general, body shape (presence or absence of tagmatization), general coloration, visible presence of the mouth hook (portion of the mandible), thoracic appendages, change in eye color, and bristle formation are the most useful characteristics for determining specific age. The results presented here make it possible to use intrapuparial morphological characters in estimating the postmortem interval of a corpse, expanding the tools available for postmortem interval estimation.

  1. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

    Institute of Scientific and Technical Information of China (English)

    Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

    2013-01-01

    The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutionary processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining this temperature thermodynamically as the temperature at which the probability of an enzyme being in its active state reaches a maximum. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate the model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provide a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence interval method. For this purpose, it was necessary to develop a new program for faster estimation of the parameters in the SSI model, which we have also done.

  2. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    Science.gov (United States)

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

    The intracluster correlation coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data and suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moment-based estimation, direct probabilistic methods, correlation-based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data with an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the output. The R package ICCbin presents a very flexible and easy-to-use way to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. The Total Deviation Index estimated by Tolerance Intervals to evaluate the concordance of measurement devices

    Directory of Open Access Journals (Sweden)

    Ascaso Carlos

    2010-04-01

    Full Text Available Abstract Background In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices and, thereafter, derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results The approach is illustrated with a real example in which the agreement between two instruments, a handheld mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice with each device. A simulation study is implemented to evaluate and compare the accuracy of the approach with two established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable on its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
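
    The paper's exact tolerance-interval construction is not reproduced here; the following sketch only illustrates the quantity of interest, the total deviation index as a quantile of the absolute paired differences, together with a bootstrap upper confidence bound standing in for the inferential step. The data are simulated, not the blood pressure measurements from the study.

      import numpy as np

      def tdi_estimate(d, p=0.90):
          """Nonparametric total deviation index: the p-quantile of |difference|."""
          return np.percentile(np.abs(d), 100 * p)

      def tdi_upper_bound(d, p=0.90, conf=0.95, n_boot=5000, seed=0):
          """Bootstrap upper confidence bound on the TDI (the inferential step)."""
          rng = np.random.default_rng(seed)
          d = np.asarray(d, dtype=float)
          boots = [tdi_estimate(rng.choice(d, size=len(d), replace=True), p)
                   for _ in range(n_boot)]
          return np.percentile(boots, 100 * conf)

      # Hypothetical paired systolic blood pressure differences (mmHg) between two devices.
      rng = np.random.default_rng(5)
      diffs = rng.normal(1.5, 6.0, size=120)
      print("TDI_0.90 estimate:", round(tdi_estimate(diffs), 1), "mmHg")
      print("95% upper bound  :", round(tdi_upper_bound(diffs), 1), "mmHg")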

  4. Study on the pupal morphogenesis of Chrysomya rufifacies (Macquart) (Diptera: Calliphoridae) for postmortem interval estimation.

    Science.gov (United States)

    Ma, Ting; Huang, Jia; Wang, Jiang-Feng

    2015-08-01

    Chrysomya rufifacies (Macquart) is one of the most common species of blow flies found at death scenes in Southern China. Pupae are useful in postmortem interval (PMI) estimation due to their sedentary nature and longer duration of association with the corpse. However, determining the age of a pupa is more difficult than that of a larva, because morphological changes are rarely visible during pupal development. In this study, eggs of C. rufifacies were reared in climatic chambers under four different constant temperatures (20, 24, 28 and 32°C, each ±1°C) with the same rearing conditions (foodstuff, substrate, photoperiod and relative humidity). Ten duplicate pupae were sampled at 8-h intervals from prepupa to emergence at each constant temperature. The pupae were killed, fixed and dissected, and, with the puparium removed, the external morphological changes were observed, recorded and photographed. The morphological characters of C. rufifacies pupae are described. Based on the visible external morphological characters during pupal morphogenesis at 28±1°C, the development of C. rufifacies was divided into nine stages, which are described in detail. From these nine developmental stages, visible external morphological characters were selected as indicators of developmental stage; these indicators, mapped to the 8-h sampling intervals at the four constant temperatures, are also described. The results demonstrate that, in general, the duration of each developmental stage of C. rufifacies pupae is inversely related to developmental temperature. This study provides relatively systematic pupal developmental data of C. rufifacies for the estimation of PMI. In addition, further work may improve the method by focusing on other environmental factors, histological analysis, and more thorough external examination by shortening sampling

  5. Forensic Entomology: Evaluating Uncertainty Associated With Postmortem Interval (PMI) Estimates With Ecological Models.

    Science.gov (United States)

    Faris, A M; Wang, H-H; Tarone, A M; Grant, W E

    2016-05-31

    Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are abstract, mathematical representations of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for the stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    Full Text Available The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.

  7. Profiling of RNA degradation for estimation of post mortem [corrected] interval.

    Directory of Open Access Journals (Sweden)

    Fernanda Sampaio-Silva

    Full Text Available An estimation of the post mortem interval (PMI) is frequently touted as the Holy Grail of forensic pathology. During the first hours after death, PMI estimation depends on the rate of physically observable modifications, including algor, rigor and livor mortis. However, these assessment methods are still largely unreliable and inaccurate. Alternatively, RNA has been put forward as a valuable tool in forensic pathology, namely to identify body fluids, estimate the age of biological stains and study the mechanism of death. Nevertheless, attempts to find a correlation between RNA degradation and PMI have been unsuccessful. The aim of this study was to characterize RNA degradation in different post mortem tissues in order to develop a mathematical model that can be used as a coadjuvant method for more accurate PMI determination. For this purpose, we performed an eleven-hour kinetic analysis of total RNA extracted from murine visceral and muscle tissues. The degradation profile of total RNA and the expression levels of several reference genes were analyzed by quantitative real-time PCR. A quantitative analysis of normalized transcript levels in these tissues allowed the identification of four quadriceps muscle genes (Actb, Gapdh, Ppia and Srp72) that were found to significantly correlate with PMI. These results allowed us to develop a mathematical model with predictive value for estimation of the PMI (confidence interval of ±51 minutes at 95%) that can become an important complementary tool for traditional methods.

  8. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

    Science.gov (United States)

    Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

    2008-10-30

    Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided. Copyright 2008 John Wiley & Sons, Ltd.
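
    The record compares linear regression, GAMs and SVMs for predicting PMI from vitreous humour chemistry. The following scikit-learn sketch mimics that comparison on synthetic data; the data-generating model, parameter settings, and the spline pipeline used as a crude GAM stand-in (requiring scikit-learn >= 1.0) are assumptions, not the authors' implementation, which they report in R.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import SplineTransformer, StandardScaler
      from sklearn.svm import SVR

      # Hypothetical training data: vitreous humour [K+] (mmol/L), [Hx] (umol/L), known PMI (h).
      rng = np.random.default_rng(42)
      pmi = rng.uniform(2, 60, size=200)
      k_conc = 5.5 + 0.17 * pmi + rng.normal(0, 1.0, size=200)
      hx_conc = 20 + 6.0 * pmi + rng.normal(0, 25.0, size=200)
      X = np.column_stack([k_conc, hx_conc])

      models = {
          "linear regression": LinearRegression(),
          "additive splines (GAM-like)": make_pipeline(SplineTransformer(), LinearRegression()),
          "SVM regression": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0)),
      }
      for name, model in models.items():
          model.fit(X, pmi)
          resid = pmi - model.predict(X)
          # In-sample RMSE only; a real comparison would use cross-validation.
          print(f"{name:28s} RMSE = {np.sqrt(np.mean(resid ** 2)):.1f} h")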

  9. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...... a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online....
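
    A minimal sketch of the classical autocorrelation-based estimator that papers like this one use as the fast baseline is shown below; the test signal is synthetic and the function is not the authors' proposed algorithm.

      # Autocorrelation-based fundamental frequency estimation on a synthetic
      # harmonic signal (f0 = 155 Hz). Illustrative only.
      import numpy as np

      def f0_autocorrelation(x, fs, f_min=50.0, f_max=500.0):
          """Return a fundamental-frequency estimate (Hz) from the autocorrelation peak."""
          x = x - x.mean()
          r = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
          lag_min = int(fs / f_max)
          lag_max = int(fs / f_min)
          best_lag = lag_min + np.argmax(r[lag_min:lag_max])
          return fs / best_lag

      fs = 8000.0
      t = np.arange(0, 0.1, 1 / fs)
      x = sum(np.sin(2 * np.pi * 155.0 * k * t) / k for k in range(1, 5))
      print(f"estimated f0: {f0_autocorrelation(x, fs):.1f} Hz")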

  10. Detection of heart beats in multimodal data: a robust beat-to-beat interval estimation approach.

    Science.gov (United States)

    Antink, Christoph Hoog; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    The heart rate and its variability play a vital role in the continuous monitoring of patients, especially in the critical care unit. They are commonly derived automatically from the electrocardiogram as the interval between consecutive heart beats. While their identification by QRS-complexes is straightforward under ideal conditions, the exact localization can be a challenging task if the signal is severely contaminated with noise and artifacts. At the same time, other signals directly related to cardiac activity are often available. In this multi-sensor scenario, methods of multimodal sensor-fusion allow the exploitation of redundancies to increase the accuracy and robustness of beat detection. In this paper, an algorithm for the robust detection of heart beats in multimodal data is presented. Classic peak-detection is augmented by robust multi-channel, multimodal interval estimation to eliminate false detections and insert missing beats. This approach yielded a score of 90.70 and was thus ranked third place in the PhysioNet/Computing in Cardiology Challenge 2014: Robust Detection of Heart Beats in Multimodal Data follow-up analysis. In the future, the robust beat-to-beat interval estimator may directly be used for the automated processing of multimodal patient data for applications such as diagnosis support and intelligent alarming.

  11. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  12. Diagnostic Efficiency of MR Imaging of the Knee. Relationship to time Interval between MR and Arthroscopy

    International Nuclear Information System (INIS)

    Barrera, M. C.; Recondo, J. A.; Aperribay, M.; Gervas, C.; Fernandez, E.; Alustiza, J. M.

    2003-01-01

    To evaluate the efficiency of magnetic resonance (MR) in the diagnosis of knee lesions and how the results are influenced by the time interval between MR and arthroscopy. A total of 248 knees studied by MR and subsequently examined by arthroscopy were retrospectively analyzed. Arthroscopy was considered to be the gold standard, and MR diagnostic capacity was evaluated for both meniscal and cruciate ligament lesions. Sensitivity, specificity and Kappa index were calculated for the set of all knees included in the study (248), for those in which the time between MR and arthroscopy was less than or equal to three months (134) and for those in which the time between both procedures was less than or equal to one month. Sensitivity, specificity and Kappa index of MR had global values of 96.5%, 70% and 71%, respectively. When the interval between MR and arthroscopy was less than or equal to three months, sensitivity, specificity and Kappa index were 95.5%, 75% and 72%, respectively. When it was less than or equal to one month, sensitivity was 100%, specificity was 87.5% and the Kappa index was 91%. MR is an excellent tool for the diagnosis of knee lesions. Higher MR values of sensitivity, specificity and Kappa index are obtained when the time interval between both procedures is kept to a minimum. (Author) 11 refs

  13. Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).

    Science.gov (United States)

    Suzutani, T; Ishibashi, H; Takatori, T

    1978-11-01

    The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.

  14. A new perspective in the estimation of postmortem interval (PMI) based on vitreous.

    Science.gov (United States)

    Muñoz, J I; Suárez-Peñaranda, J M; Otero, X L; Rodríguez-Calvo, M S; Costas, E; Miguéns, X; Concheiro, L

    2001-03-01

    The relation between the potassium concentration in the vitreous humor, [K+], and the postmortem interval has been studied by several authors. Many formulae are available, and they are based on a correlation test and linear regression using the PMI as the independent variable and [K+] as the dependent variable. The estimation of the confidence interval is based on this formulation. However, in forensic work, it is necessary to use [K+] as the independent variable to estimate the PMI. Although all authors have obtained the PMI by direct use of these formulae, it is, nevertheless, an inexact approach, which leads to false estimations. What is required is to exchange the variables, obtaining a new equation in which [K+] is considered as the independent variable and the PMI as the dependent. The regression line obtained from our data is [K+] = 5.35 + 0.22 PMI; after exchanging the variables we get PMI = 2.58[K+] - 9.30. When only nonhospital deaths are considered, the results are considerably improved. In this case, we get [K+] = 5.60 + 0.17 PMI and, consequently, PMI = 3.92[K+] - 19.04.
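
    The distinction the abstract draws, between inverting a [K+]-on-PMI calibration line and regressing PMI on [K+] directly, can be illustrated with simulated numbers; the coefficients below are invented for the sketch and are not those of the study.

      # Calibration line inverted algebraically versus direct inverse regression.
      import numpy as np

      rng = np.random.default_rng(1)
      pmi = rng.uniform(2, 40, 300)                       # hours (simulated)
      k = 5.4 + 0.20 * pmi + rng.normal(0, 1.0, 300)      # vitreous [K+] in mmol/L

      # Classical calibration: [K+] = a + b*PMI, then solve for PMI
      b, a = np.polyfit(pmi, k, 1)
      print(f"inverted calibration: PMI = ({1/b:.2f})*[K+] + ({-a/b:.2f})")

      # Inverse regression advocated in the abstract: PMI = c + d*[K+]
      d, c = np.polyfit(k, pmi, 1)
      print(f"direct regression:    PMI = ({d:.2f})*[K+] + ({c:.2f})")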

  15. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
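
    The same resampling logic can be expressed outside Excel; the sketch below, assuming a modest reference sample of about 40 simulated values, draws 1000 bootstrap resamples and averages the 2.5th and 97.5th percentile limits.

      # Percentile-bootstrap estimate of reference-interval limits from a small,
      # non-Gaussian reference sample. Values are simulated for illustration.
      import numpy as np

      rng = np.random.default_rng(2)
      reference_sample = rng.lognormal(mean=1.0, sigma=0.4, size=40)

      n_boot = 1000
      limits = np.empty((n_boot, 2))
      for i in range(n_boot):
          resample = rng.choice(reference_sample, size=reference_sample.size, replace=True)
          limits[i] = np.percentile(resample, [2.5, 97.5])

      lower, upper = limits.mean(axis=0)   # bootstrap point estimates of the limits
      print(f"bootstrap reference interval: {lower:.2f} - {upper:.2f}")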

  16. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over

  17. A Comparative Test of the Interval-Scale Properties of Magnitude Estimation and Case III Scaling and Recommendations for Equal-Interval Frequency Response Anchors.

    Science.gov (United States)

    Schriesheim, Chester A.; Novelli, Luke, Jr.

    1989-01-01

    Differences between recommended sets of equal-interval response anchors derived from scaling techniques using magnitude estimations and Thurstone Case III pair-comparison treatment of complete ranks were compared. Differences in results for 205 undergraduates reflected differences in the samples as well as in the tasks and computational…

  18. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  19. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-01

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed

  20. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

    Full Text Available Abstract Background The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches and a health utilization data set is used for illustration. Results Both the simulation results as well as the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When incidence of outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small sample situations. Conclusion LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using methods presented in this paper will make the measure particularly attractive for policy and decision makers.
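
    As a hedged illustration of the delta-method approach mentioned in the abstract, the sketch below treats a location quotient as a ratio of two independent Poisson rates and builds an approximate 95% interval on the log scale; the counts and populations are invented, and the Fieller and profile-likelihood variants are not shown.

      # Delta-method CI for a location quotient, treated as a ratio of two
      # independent Poisson rates (local cases/population over reference
      # cases/population). Numbers are illustrative only.
      import numpy as np
      from scipy import stats

      local_cases, local_pop = 48, 12000
      ref_cases, ref_pop = 1500, 500000

      lq = (local_cases / local_pop) / (ref_cases / ref_pop)

      # Delta method on the log scale: Var(log LQ) ~ 1/local_cases + 1/ref_cases
      se_log = np.sqrt(1 / local_cases + 1 / ref_cases)
      z = stats.norm.ppf(0.975)
      ci = np.exp(np.log(lq) + np.array([-z, z]) * se_log)
      print(f"LQ = {lq:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")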

  1. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
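
    A minimal sketch of the resample-and-project idea, assuming a simplified two-stage matrix model and invented fox-like vital rates with assumed uncertainties (not the authors' likelihood-based values), might look like:

      # Draw vital rates, build a two-stage projection matrix, and summarise the
      # distribution of the dominant eigenvalue (the population growth rate).
      import numpy as np

      rng = np.random.default_rng(7)
      n_sim = 10_000
      lambdas = np.empty(n_sim)
      for i in range(n_sim):
          fecundity = rng.normal(2.0, 0.3)   # female cubs per adult female (assumed)
          s_juv = rng.beta(30, 20)           # juvenile survival (assumed uncertainty)
          s_adult = rng.beta(45, 15)         # adult survival (assumed uncertainty)
          L = np.array([[0.0, fecundity],
                        [s_juv, s_adult]])
          lambdas[i] = np.max(np.real(np.linalg.eigvals(L)))

      lo, hi = np.percentile(lambdas, [2.5, 97.5])
      print(f"lambda ~ {lambdas.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")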

  2. Estimated Aerobic Capacity Changes in Adolescents with Obesity Following High Intensity Interval Exercise

    Directory of Open Access Journals (Sweden)

    Brooke E. Starkoff

    2014-07-01

    Full Text Available Vigorous aerobic exercise may improve aerobic capacity (VO2max) and cardiometabolic profiles in adolescents with obesity, independent of changes to weight. Our aim was to assess changes in estimated VO2max in obese adolescents following a 6-week exercise program of varying intensities. Adolescents with obesity were recruited from an American mid-west children’s hospital and randomized into moderate exercise (MOD) or high intensity interval exercise (HIIE) groups for a 6-week exercise intervention, consisting of cycle ergometry for 40 minutes, 3 days per week. Heart rate was measured every two minutes during each exercise session. Estimated VO2max measured via the Åstrand cycle test, body composition, and physical activity (PA) enjoyment evaluated via questionnaire were assessed pre/post-intervention. Twenty-seven adolescents (age 14.7±1.5; 17 female, 10 male) completed the intervention. Estimated VO2max increased only in the HIIE group (20.0±5.7 to 22.7±6.5 ml/kg/min, p=0.015). The HIIE group also demonstrated increased PA enjoyment, which was correlated with average heart rate achieved during the intervention (r=0.55; p=0.043). Six weeks of HIIE elicited improvements to estimated VO2max in adolescents with obesity. Furthermore, those exercising at higher heart rates demonstrated greater PA enjoyment, implicating enjoyment as an important determinant of VO2max, specifically following higher intensity activities.

  3. Estimation of gas turbine blades cooling efficiency

    NARCIS (Netherlands)

    Moskalenko, A.B.; Kozhevnikov, A.

    2016-01-01

    This paper outlines the results of evaluating the cooling efficiency of the most thermally stressed gas turbine elements, the first-stage power turbine blades. The calculations were implemented using a numerical simulation based on the Finite Element Method. The volume average temperature of the blade

  4. [Studies of marker screening efficiency and corresponding influencing factors in QTL composite interval mapping].

    Science.gov (United States)

    Gao, Yong-Ming; Wan, Ping

    2002-06-01

    Screening markers efficiently is the foundation of mapping QTLs by composite interval mapping. Once main-effect and interaction markers are distinguished, they can be used not only as background control for genetic variation but also to construct intervals for a two-way search when mapping QTLs with epistasis, which saves considerable computation time. The efficiency of marker screening therefore affects the power and precision of QTL mapping. A doubled haploid population with 200 individuals and 5 chromosomes was constructed, with 50 markers evenly distributed at 10 cM spacing. Among a total of 6 QTLs, one was placed on chromosome I, two linked on chromosome II, and the other three linked on chromosome IV. The QTL setting included additive effects and additive x additive epistatic effects; the corresponding QTL x environment interaction effects were set when data were collected under multiple environments. The heritability was assumed to be 0.5 unless stated otherwise. The power of marker screening by stepwise regression, forward regression, and three methods of random effect prediction, e.g. best linear unbiased prediction (BLUP), linear unbiased prediction (LUP) and adjusted unbiased prediction (AUP), was studied and compared through 100 Monte Carlo simulations. The results indicated that the power of marker screening by stepwise regression at the 0.1, 0.05 and 0.01 significance levels ranged from 2% to 68%, and the power ranged from 2% to 72% for forward regression. The larger the QTL effects, the higher the marker screening power. In contrast, the power of marker screening by the three random effect prediction methods was very low, with a maximum of only 13%. This suggested that the regression methods were much better than the random effect prediction approaches for identifying markers flanking QTLs, and that forward selection was simpler and more efficient. The simulation study on heritability showed that heightening of both general heritability and interaction heritability of genotype x

  5. The use of Leptodyctium riparium (Hedw.) Warnst in the estimation of minimum postmortem interval.

    Science.gov (United States)

    Lancia, Massimo; Conforti, Federica; Aleffi, Michele; Caccianiga, Marco; Bacci, Mauro; Rossi, Riccardo

    2013-01-01

    The estimation of the postmortem interval (PMI) is still one of the most challenging issues in forensic investigations, especially in cases in which advanced transformative phenomena have taken place. The dating of skeletal remains is even more difficult, and sometimes only a rough determination of the PMI is possible. Recent studies suggest that plant analysis can provide a reliable estimation for dating skeletal remains when traditional techniques are not applicable. Forensic Botany is a relatively recent discipline that includes many subdisciplines such as Palynology, Anatomy, Dendrochronology, Limnology, Systematics, Ecology, and Molecular Biology. In a recent study, Cardoso et al. (Int J Legal Med 2010;124:451) used botanical evidence for the first time to establish the PMI of human skeletal remains found in a forested area of northern Portugal from the growth rate of mosses and shrub roots. The present paper deals with a case in which the study of the growth rate of the bryophyte Leptodyctium riparium (Hedw.) Warnst was used in estimating the PMI of some human skeletal remains that were found in a wooded area near Perugia, in Central Italy. © 2012 American Academy of Forensic Sciences.

  6. Point and interval estimation of pollinator importance: a study using pollination data of Silene caroliniana.

    Science.gov (United States)

    Reynolds, Richard J; Fenster, Charles B

    2008-05-01

    Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval when the sample sizes of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than that of the clearwing hawkmoth, which in turn was significantly greater than that of the beefly. The methods could be used to statistically quantify temporal and spatial variation in the pollinator importance of particular visitor species. The approaches may be extended to estimate the variance of a product of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
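
    The simulation side of the approach is straightforward to sketch: draw visitation rate and per-visit effectiveness from their assumed sampling distributions, multiply, and read off percentile limits. The means and standard errors below are invented, not the Silene caroliniana estimates.

      # Monte Carlo interval for the product of two independently estimated
      # quantities (visitation rate x per-visit effectiveness).
      import numpy as np

      rng = np.random.default_rng(3)
      n_sim = 100_000

      visitation = rng.normal(loc=3.2, scale=0.4, size=n_sim)       # visits per hour
      effectiveness = rng.normal(loc=0.55, scale=0.08, size=n_sim)  # seeds per visit
      importance = visitation * effectiveness

      lo, hi = np.percentile(importance, [2.5, 97.5])
      print(f"importance ~ {importance.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")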

  7. Quantifying human decomposition in an indoor setting and implications for postmortem interval estimation.

    Science.gov (United States)

    Ceciliason, Ann-Sofie; Andersson, M Gunnar; Lindström, Anders; Sandler, Håkan

    2018-02-01

    This study's objective is to obtain accuracy and precision in estimating the postmortem interval (PMI) for decomposing human remains discovered in indoor settings. Data were collected prospectively from 140 forensic cases with a known date of death, scored according to the Total Body Score (TBS) scale at the post-mortem examination. In our model setting, it is estimated that, in cases with or without the presence of blowfly larvae, approximately 45% or 66%, respectively, of the variance in TBS can be derived from Accumulated Degree-Days (ADD). The precision in estimating ADD/PMI from TBS is, in our setting, moderate to low. However, dividing the cases into defined subgroups suggests the possibility of increasing the precision of the model. Our findings also suggest a significant seasonal difference, with a concomitant influence on TBS in the complete data set, possibly initiated by the presence of insect activity mainly during summer. PMI may be underestimated in cases with desiccation. Likewise, there is a need to evaluate the effect of insect activity, to avoid overestimating the PMI. Our data sample indicates that the scoring method might need to be slightly modified to better reflect indoor decomposition, especially in cases with insect infestations or/and extensive desiccation. When applying TBS in an indoor setting, the model requires distinct inclusion criteria and a defined population. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Daily thanatomicrobiome changes in soil as an approach of postmortem interval estimation: An ecological perspective.

    Science.gov (United States)

    Adserias-Garriga, Joe; Hernández, Marta; Quijada, Narciso M; Rodríguez Lázaro, David; Steadman, Dawnie; Garcia-Gil, Jesús

    2017-09-01

    Understanding human decomposition is critical for its use in postmortem interval (PMI) estimation, having a significant impact on forensic investigations. In recognition of the need to establish the scientific basis for PMI estimation, several studies on decomposition have been carried out in recent years. The aims of the present study were: (i) to identify soil microbiota communities involved in human decomposition through high-throughput sequencing (HTS) of DNA sequences from the different bacteria, (ii) to monitor quantitatively and qualitatively the decay of such signature species, and (iii) to describe successional changes in bacterial populations from the early putrefaction state until skeletonization. Three individuals donated to the University of Tennessee FAC were studied. Soil samples around the body were taken from the placement of the donor until the advanced decay/dry remains stage. Bacterial DNA extracts were obtained from the samples, HTS techniques were applied and bioinformatic data analysis was performed. The three cadavers showed similar overall successional changes. At the beginning of the decomposition process the soil microbiome consisted of diverse indigenous soil bacterial communities. As decomposition advanced, Firmicutes community abundance increased in the soil during the bloat stage. The growth curve of Firmicutes from human remains can be used to estimate time since death under Tennessee summer conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
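
    For readers unfamiliar with the percentile bootstrap compared here, a stripped-down version with observed (not latent) variables is sketched below; the simulated paths roughly match the α = β = .39 condition, and no structural-equation machinery is involved.

      # Percentile-bootstrap interval for an indirect effect a*b in a simple
      # observed-variable mediation model. Data are simulated for illustration.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 200
      x = rng.normal(size=n)
      m = 0.39 * x + rng.normal(size=n)          # a-path
      y = 0.39 * m + rng.normal(size=n)          # b-path

      def indirect(x, m, y):
          a = np.polyfit(x, m, 1)[0]                      # slope of m ~ x
          X = np.column_stack([np.ones_like(x), m, x])    # y ~ m + x
          b = np.linalg.lstsq(X, y, rcond=None)[0][1]     # coefficient on m
          return a * b

      boot = np.empty(2000)
      for i in range(2000):
          idx = rng.integers(0, n, n)
          boot[i] = indirect(x[idx], m[idx], y[idx])

      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect effect = {indirect(x, m, y):.3f}, 95% PC bootstrap CI ({lo:.3f}, {hi:.3f})")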

  10. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    Science.gov (United States)

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernably decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.
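
    A rough sense of why such widths exceed 1 can be reproduced with the common large-sample variance approximation for d (this is an approximation, not the survey's computation); with two groups of 20, a d of 0.5 already yields an interval roughly 1.3 units wide.

      # Approximate 95% CI width for Cohen's d using the standard large-sample
      # variance approximation; group sizes are hypothetical.
      import numpy as np
      from scipy import stats

      def d_ci_width(d, n1, n2, level=0.95):
          se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
          z = stats.norm.ppf(0.5 + level / 2)
          return 2 * z * se

      print(f"width for d = 0.5, n1 = n2 = 20: {d_ci_width(0.5, 20, 20):.2f}")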

  11. [Research Progress of Carrion-breeding Phorid Flies for Post-mortem Interval Estimation in Forensic Medicine].

    Science.gov (United States)

    Li, L; Feng, D X; Wu, J

    2016-10-01

    Accurately estimating the post-mortem interval is a difficult problem in forensic medicine. The entomological approach has been regarded as an effective way to estimate the post-mortem interval. The developmental biology of carrion-breeding flies plays an important role in post-mortem interval estimation. Phorid flies are tiny and occur as the main, or even the only, insect evidence in relatively enclosed environments. This paper reviews the research progress on carrion-breeding phorid flies for estimating the post-mortem interval in forensic medicine, including their roles, species identification and age determination of immatures. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  12. Estimation of Postmortem Interval Using the Radiological Techniques, Computed Tomography: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiulin Wang

    2017-01-01

    Full Text Available Estimation of the postmortem interval (PMI) has been an important and difficult subject in forensic research. It is a primary task of forensic work, and it can help guide field investigations. With the development of computed tomography (CT) technology, CT imaging techniques are now being more frequently applied to the field of forensic medicine. This study used CT imaging techniques to observe area changes in different tissues and organs of rabbits after death and the changing pattern of the average CT values in the organs. The study analyzed the relationship between the CT values of different organs and PMI with the imaging software Max Viewer, obtained multiparameter nonlinear regression equations for the different organs, and thereby provided an objective and accurate method and reference information for the estimation of PMI in forensic medicine. In forensic science, PMI refers to the time interval between the discovery or inspection of the corpse and the time of death. CT, magnetic resonance imaging, and other imaging techniques have become important means of clinical examination over the years. Although some scholars in our country have used modern radiological techniques in various fields of forensic science, such as estimation of injury time, personal identification of bodies, analysis of the cause of death, determination of the causes of injury, and identification of foreign substances in bodies, there are only a few studies on the estimation of time of death. We examined the process of subtle changes in adult rabbits after death, the shape and size of tissues and organs, and the relationship between adjacent organs in three-dimensional space in an effort to develop a new method for the estimation of PMI. The bodies of the dead rabbits were stored at a room temperature of 20°C, under sealed conditions, and protected from exposure to flesh flies. The dead rabbits were randomly divided into a comparison group and an experimental group. The whole

  13. An evaluation of soil chemistry in human cadaver decomposition islands: Potential for estimating postmortem interval (PMI).

    Science.gov (United States)

    Fancher, J P; Aitkenhead-Peterson, J A; Farris, T; Mix, K; Schwab, A P; Wescott, D J; Hamilton, M D

    2017-10-01

    Soil samples from the Forensic Anthropology Research Facility (FARF) at Texas State University, San Marcos, TX, were analyzed for multiple soil characteristics from cadaver decomposition islands to a depth of 5 centimeters (cm) from 63 human decomposition sites, as well as at depths up to 15 cm in a subset of 11 of the cadaver decomposition islands plus control soils. The postmortem interval (PMI) of the cadaver decomposition islands ranged from 6 to 1752 days. Some soil chemistry, including nitrate-N (NO3-N), ammonium-N (NH4-N), and dissolved inorganic carbon (DIC), peaked at early PMI values, and their concentrations at 0-5 cm returned to near control values over time, likely due to translocation down the soil profile. Other soil chemistry, including dissolved organic carbon (DOC), dissolved organic nitrogen (DON), orthophosphate-P (PO4-P), sodium (Na+), and potassium (K+), remained higher than in the control soil up to a PMI of 1752 days postmortem. The body mass index (BMI) of the cadaver appeared to have some effect on the cadaver decomposition island chemistry. To estimate PMI using soil chemistry, backward stepwise multiple regression analysis was used with PMI as the dependent variable and soil chemistry, body mass index (BMI) and physical soil characteristics such as saturated hydraulic conductivity as independent variables. Measures of soil parameters derived from predator- and microbially-mediated decomposition of human remains show promise in estimating PMI to within 365 days for a period up to nearly five years. This persistent change in soil chemistry extends the ability to estimate PMI beyond the traditionally utilized methods of entomology and taphonomy in support of medical-legal investigations, humanitarian recovery efforts, and criminal and civil cases. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.
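
    Only the regression half of the hybrid method lends itself to a short sketch; below, ordinary least squares on simulated data yields both a confidence interval for the fitted mean and a wider prediction interval for a new observation (the statsmodels package is assumed to be available, and the neural-network components are not reproduced).

      # Confidence and prediction intervals from an OLS response-surface fit.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      x = rng.uniform(-2, 2, 60)
      y = 1.0 + 0.8 * x - 0.3 * x**2 + rng.normal(0, 0.2, 60)

      X = sm.add_constant(np.column_stack([x, x**2]))   # quadratic response surface
      fit = sm.OLS(y, X).fit()

      x_new = np.array([[1.0, 0.5, 0.25]])              # intercept, x, x^2 at x = 0.5
      pred = fit.get_prediction(x_new)
      # mean_ci_* columns give the confidence interval; obs_ci_* the prediction interval
      print(pred.summary_frame(alpha=0.05))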

  15. Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.

    Science.gov (United States)

    Cockle, Diane L; Bell, Lynne S

    2015-08-01

    Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community has long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden or gone unnoticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that, for bodies exposed to warm temperatures, Formula I consistently overestimated the known PMI by a large and inconsistent margin, while for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The value of 4.6 used in Formula II to represent the standard ratio by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition both on, and under, the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in

  16. Cardiac parasympathetic outflow during dynamic exercise in humans estimated from power spectral analysis of P-P interval variability.

    Science.gov (United States)

    Takahashi, Makoto; Nakamoto, Tomoko; Matsukawa, Kanji; Ishii, Kei; Watanabe, Tae; Sekikawa, Kiyokazu; Hamada, Hironobu

    2016-03-01

    What is the central question of this study? Should we use the high-frequency (HF) component of P-P interval as an index of cardiac parasympathetic nerve activity during moderate exercise? What is the main finding and its importance? The HF component of P-P interval variability remained even at a heart rate of 120-140 beats min(-1) and was further reduced by atropine, indicating incomplete cardiac vagal withdrawal during moderate exercise. The HF component of R-R interval variability is invalid as an estimate of cardiac parasympathetic outflow during moderate exercise; instead, the HF component of P-P interval variability should be used. The high-frequency (HF) component of R-R interval variability has been widely used as an indirect estimate of cardiac parasympathetic (vagal) outflow to the sino-atrial node of the heart. However, we have recently found that the variability of the R-R interval becomes much smaller during dynamic exercise than that of the P-P interval above a heart rate (HR) of ∼100 beats min(-1). We hypothesized that cardiac parasympathetic outflow during dynamic exercise with a higher intensity may be better estimated using the HF component of P-P interval variability. To test this hypothesis, the HF components of both P-P and R-R interval variability were analysed using a Wavelet transform during dynamic exercise. Twelve subjects performed ergometer exercise to increase HR from the baseline of 69 ± 3 beats min(-1) to three different levels of 100, 120 and 140 beats min(-1). We also examined the effect of atropine sulfate on the HF components in eight of the 12 subjects during exercise at an HR of 140 beats min(-1). The HF component of P-P interval variability was significantly greater than that of R-R interval variability during exercise, especially at the HRs of 120 and 140 beats min(-1). The HF component of P-P interval variability was more reduced by atropine than that of R-R interval variability. We conclude that cardiac parasympathetic outflow to the

  17. PFP total operating efficiency calculation and basis of estimate

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The purpose of the Plutonium Finishing Plant (PFP) Total Operating Efficiency Calculation and Basis of Estimate document is to provide the calculated value and basis of estimate for the Total Operating Efficiency (TOE) for the material stabilization operations to be conducted in 234-52 Building. This information will be used to support both the planning and execution of the Plutonium Finishing Plant (PFP) Stabilization and Deactivation Project's (hereafter called the Project) resource-loaded, integrated schedule

  18. A Machine Learning Approach for Using the Postmortem Skin Microbiome to Estimate the Postmortem Interval.

    Directory of Open Access Journals (Sweden)

    Hunter R Johnson

    Full Text Available Research on the human microbiome, the microbiota that live in, on, and around the human person, has revolutionized our understanding of the complex interactions between microbial life and human health and disease. The microbiome may also provide a valuable tool in forensic death investigations by helping to reveal the postmortem interval (PMI) of a decedent that is discovered after an unknown amount of time since death. Current methods of estimating PMI for cadavers discovered in uncontrolled, unstudied environments have substantial limitations, some of which may be overcome through the use of microbial indicators. In this project, we sampled the microbiomes of decomposing human cadavers, focusing on the skin microbiota found in the nasal and ear canals. We then developed several models of statistical regression to establish an algorithm for predicting the PMI of microbial samples. We found that the complete data set, rather than a curated list of indicator species, was preferred for training the regressor. We further found that genus and family, rather than species, are the most informative taxonomic levels. Finally, we developed a k-nearest-neighbor regressor, tuned with the entire data set from all nasal and ear samples, that predicts the PMI of unknown samples with an average error of ±55 accumulated degree days (ADD). This study outlines a machine learning approach for the use of necrobiome data in the prediction of the PMI and thereby provides a successful proof-of-concept that skin microbiota is a promising tool in forensic death investigations.
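
    The modelling step described above can be sketched with scikit-learn; the feature matrix below is random stand-in data with a weak injected signal, not the study's sequence data, and the chosen k is arbitrary.

      # k-nearest-neighbour regression from relative taxon abundances to
      # accumulated degree days (ADD), evaluated by cross-validation.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(6)
      n_samples, n_taxa = 120, 50
      X = rng.dirichlet(np.ones(n_taxa), size=n_samples)   # relative abundances
      add = rng.uniform(0, 1000, n_samples)                 # accumulated degree days
      X[:, 0] += add / 2000                                 # inject a weak signal
      X = X / X.sum(axis=1, keepdims=True)

      knn = KNeighborsRegressor(n_neighbors=5)
      pred = cross_val_predict(knn, X, add, cv=5)
      print(f"mean absolute error: {np.abs(pred - add).mean():.0f} ADD")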

  19. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    Science.gov (United States)

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  1. Analysis of Low Frequency Oscillation Using the Multi-Interval Parameter Estimation Method on a Rolling Blackout in the KEPCO System

    Directory of Open Access Journals (Sweden)

    Kwan-Shik Shim

    2017-04-01

    Full Text Available This paper describes a multiple time interval (“multi-interval”) parameter estimation method. The multi-interval parameter estimation method estimates a parameter from a new multi-interval prediction error polynomial that can simultaneously consider multiple time intervals. The root of the multi-interval prediction error polynomial includes the effect on each time interval, and the important mode can be estimated by solving one polynomial for multiple time intervals or signals. The algorithm of the multi-interval parameter estimation method proposed in this paper is applied to the test function and the data measured from a PMU (phasor measurement unit) installed in the KEPCO (Korea Electric Power Corporation) system. The results confirm that the proposed multi-interval parameter estimation method accurately and reliably estimates important parameters.

  2. Estimation of farm level technical efficiency and its determinants ...

    African Journals Online (AJOL)

    With the difficulties encountered by the farmers in adopting improved technologies, increasing resource use efficiency has become a very significant factor in increasing productivity. Therefore, this study was designed to estimate the farm level technical efficiency and its determinants among male and female sweet potato ...

  3. Estimating Production Technical Efficiency of Irvingia Seed (Ogbono ...

    African Journals Online (AJOL)

    This study estimated the production technical efficiency of irvingia seed (Ogbono) farmers in Nsukka agricultural zone in Enugu State, Nigeria. This is against the backdrop of the importance of efficiency as a factor of productivity in a growing economy like Nigeria where resources are scarce and opportunities for new ...

  4. THE MISSING EARTHQUAKES OF HUMBOLDT COUNTY: RECONCILING RECURRENCE INTERVAL ESTIMATES, SOUTHERN CASCADIA SUBDUCTION ZONE

    Science.gov (United States)

    Patton, J. R.; Leroy, T. H.

    2009-12-01

    Earthquake and tsunami hazard for northwestern California and southern Oregon is predominantly based on estimates of recurrence for earthquakes on the Cascadia subduction zone and upper plate thrust faults, each with unique deformation and recurrence histories. Coastal northern California is uniquely located to enable us to distinguish these different sources of seismic hazard as the accretionary prism extends on land in this region. This region experiences ground deformation from rupture of upper plate thrust faults like the Little Salmon fault. Most of this region is thought to be above the locked zone of the megathrust, so is subject to vertical deformation during the earthquake cycle. Secondary evidence of earthquake history is found here in the form of marsh soils that coseismically subside and commonly are overlain by estuarine mud and rarely tsunami sand. It is not currently known what the source of the subsidence is for this region; it may be due to upper plate rupture, megathrust rupture, or a combination of the two. Given that many earlier investigations utilized bulk peat for 14C age determinations and that these early studies were largely reconnaissance work, these studies need to be reevaluated. Recurrence interval estimates are inconsistent when comparing terrestrial (~500 years) and marine (~220 years) data sets. This inconsistency may be due to 1) different sources of archival bias in marine and terrestrial data sets and/or 2) different sources of deformation. Factors controlling successful archiving of paleoseismic data are considered as this relates to geologic setting and how that might change through time. We compile, evaluate, and rank existing paleoseismic data in order to prioritize future paleoseismic investigations. 14C ages are recalibrated and quality assessments are made for each age determination. We then evaluate geologic setting and prioritize important research locations and goals based on these existing data. Terrestrial core

  5. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  6. Highly Efficient Estimators of Multivariate Location with High Breakdown Point

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1991-01-01

    We propose an affine equivariant estimator of multivariate location that combines a high breakdown point and a bounded influence function with high asymptotic efficiency. This proposal is basically a location M-estimator based on the observations obtained after scaling with an affine equivariant

  7. [Estimation of the atrioventricular time interval by pulse Doppler in the normal fetal heart].

    Science.gov (United States)

    Hamela-Olkowska, Anita; Dangel, Joanna

    2009-08-01

    To assess normative values of the fetal atrioventricular (AV) time interval by pulse-wave Doppler methods on the 5-chamber view. Fetal echocardiography exams were performed using an Acuson Sequoia 512 in 140 singleton fetuses at 18 to 40 weeks of gestation with sinus rhythm and normal cardiac and extracardiac anatomy. Pulsed Doppler derived AV intervals were measured from the left ventricular inflow/outflow view using a transabdominal convex 3.5-6 MHz probe. The values of the AV time interval ranged from 100 to 150 ms (mean 123 +/- 11.2). The AV interval was negatively correlated with the heart rhythm and with the age of gestation (p=0.007). However, in the same subgroup of fetal heart rate there was no relation between AV intervals and gestational age. Therefore, the AV intervals showed only a heart rate dependence. The 95th percentiles of AV intervals according to FHR ranged from 135 to 148 ms. 1. The AV interval duration was negatively correlated with the heart rhythm. 2. Measurement of the AV time interval is easy to perform and has good reproducibility. It may be used for fetal heart block screening in anti-Ro and anti-La positive pregnancies. 3. Normative values established in the study may help obstetricians in assessing fetal abnormalities of AV conduction.

  8. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimation one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.

  9. Estimating the generation interval of influenza A (H1N1) in a range of social settings.

    Science.gov (United States)

    te Beest, Dennis E; Wallinga, Jacco; Donker, Tjibbe; van Boven, Michiel

    2013-03-01

    A proper understanding of the infection dynamics of influenza A viruses hinges on the availability of reliable estimates of key epidemiologic parameters such as the reproduction number, intrinsic growth rate, and generation interval. Often the generation interval is assumed to be similar in different settings although there is little evidence justifying this. Here we estimate the generation interval for stratifications based on age, cluster size, and social setting (camp, school, workplace, household) using data from 16 clusters of Novel Influenza A (H1N1) in the Netherlands. Our analyses are based on a Bayesian inferential framework, enabling flexible handling of both missing infection links and missing times of symptoms onset. The analysis indicates that a stratification that allows the generation interval to differ by social setting fits the data best. Specifically, the estimated generation interval was shorter in households (2.1 days [95% credible interval = 1.6-2.9]) and camps (2.3 days [1.4-3.4]) than in workplaces (2.7 days [1.9-3.7]) and schools (3.4 days [2.5-4.5]). Our findings could be the result of differences in the number of contacts between settings, differences in prophylactic use of antivirals between settings, and differences in underreporting.

  10. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε0 of the base sample of volume V0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases above V0, and vice versa. Extrapolation of high and low efficiency estimates εh and εL provides an average estimate of ε = 1/2[εh + εL] ± 1/2[εh - εL] (general), where the uncertainty Δε = 1/2[εh - εL] brackets the limits for the maximum possible error. Both εh and εL diverge from ε0 as V deviates from V0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε
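
    A tiny numerical illustration of the bracketing rule quoted above; the high and low extrapolated efficiencies used here are invented values, not measured ones.

      # Central estimate and maximum possible error from high/low extrapolated
      # efficiencies, per eps = 1/2(eh + el) +/- 1/2(eh - el).
      def bracketed_efficiency(eff_high, eff_low):
          return 0.5 * (eff_high + eff_low), 0.5 * (eff_high - eff_low)

      eff, max_err = bracketed_efficiency(eff_high=0.052, eff_low=0.041)
      print(f"efficiency ~ {eff:.4f} +/- {max_err:.4f}")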

  11. Interval estimation of the overall treatment effect in a meta-analysis of a few small studies with zero events.

    Science.gov (United States)

    Pateras, Konstantinos; Nikolakopoulos, Stavros; Mavridis, Dimitris; Roes, Kit C B

    2018-03-01

    When a meta-analysis consists of a few small trials that report zero events, accounting for heterogeneity in the (interval) estimation of the overall effect is challenging. Typically, we predefine the meta-analytical methods to be employed. In practice, data pose restrictions that lead to deviations from the pre-planned analysis, such as the presence of zero events in at least one study arm. We aim to explore the behaviour of heterogeneity estimators in estimating the overall effect across different levels of sparsity of events. We performed a simulation study that consists of two evaluations. We considered an overall comparison of estimators unconditional on the number of observed zero cells and an additional one conditioning on the number of observed zero cells. Estimators that performed moderately robustly when (interval) estimating the overall treatment effect across a range of heterogeneity assumptions were the Sidik-Jonkman, Hartung-Makambi and improved Paule-Mandel estimators. The relative performance of estimators did not materially differ between making a predefined or data-driven choice. Our investigations confirmed that heterogeneity in such settings cannot be estimated reliably. Estimators whose performance depends strongly on the presence of heterogeneity should be avoided. The choice of estimator does not need to depend on whether or not zero cells are observed.

  12. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.

  13. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
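
    The model-based bootstrap idea (resampling from the fitted model for the surveillance process rather than from the raw data) can be illustrated with a deliberately simplified toy in Python: recency testing is treated as a single binomial, the estimator is a crude recent-fraction-over-window rule, and all counts and the window length are invented; this is not the Karon et al. model.

        import numpy as np

        rng = np.random.default_rng(1)

        def estimate_incidence(n_recent, n_tested, window_years=0.5):
            """Crude incidence estimator: recent fraction scaled by the recency window."""
            return (n_recent / n_tested) / window_years

        # Hypothetical observed surveillance data.
        n_tested, n_recent = 2000, 60
        point = estimate_incidence(n_recent, n_tested)

        # Model-based (parametric) bootstrap: redraw recent counts from the fitted binomial.
        p_hat = n_recent / n_tested
        boot = [estimate_incidence(rng.binomial(n_tested, p_hat), n_tested)
                for _ in range(5000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"incidence = {point:.4f} per person-year, 95% CI ({lo:.4f}, {hi:.4f})")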

  14. Phytoremediation: realistic estimation of modern efficiency and future possibility

    International Nuclear Information System (INIS)

    Kravets, A.; Pavlenko, Y.; Kusmenko, L.; Ermak, M.

    1996-01-01

    Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. General requirements for, and future possibilities of, phytoremediation biotechnology have been outlined. (author)

  15. Phytoremediation: realistic estimation of modern efficiency and future possibility

    Energy Technology Data Exchange (ETDEWEB)

    Kravets, A; Pavlenko, Y [Institute of Cell Biology and Genetic Engineering NAS, Kiev (Ukraine); Kusmenko, L; Ermak, M [Institute of Plant Physiology and Genetic NAS, Vasilkovsky, Kiev (Ukraine)

    1996-11-01

    Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. General requirements for, and future possibilities of, phytoremediation biotechnology have been outlined. (author)

  16. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods compared with other methods.

  17. Interval estimation of the overall treatment effect in a meta-analysis of a few small studies with zero events

    NARCIS (Netherlands)

    Pateras, Konstantinos; Nikolakopoulos, Stavros; Mavridis, Dimitris; Roes, Kit C.B.

    2018-01-01

    When a meta-analysis consists of a few small trials that report zero events, accounting for heterogeneity in the (interval) estimation of the overall effect is challenging. Typically, we predefine meta-analytical methods to be employed. In practice, data poses restrictions that lead to deviations

  18. An axiomatic approach to the estimation of interval-valued preferences in multi-criteria decision modeling

    DEFF Research Database (Denmark)

    Franco de los Ríos, Camilo; Hougaard, Jens Leth; Nielsen, Kurt

    In this paper we explore multi-dimensional preference estimation from imprecise (interval) data. Focusing on different multi-criteria decision models, such as PROMETHEE, ELECTRE, TOPSIS or VIKOR, and their extensions dealing with imprecise data, preference modeling is examined with respect...

  19. Estimating the postmortem interval (PMI) using accumulated degree-days (ADD) in a temperate region of South Africa.

    Science.gov (United States)

    Myburgh, Jolandie; L'Abbé, Ericka N; Steyn, Maryna; Becker, Piet J

    2013-06-10

    The validity of the method in which total body score (TBS) and accumulated degree-days (ADD) are used to estimate the postmortem interval (PMI) is examined. TBS and ADD were recorded for 232 days in northern South Africa, which has temperatures between 17 and 28 °C in summer and 6 and 20 °C in winter. Winter temperatures rarely go below 0 °C. Thirty pig carcasses, which weighed between 38 and 91 kg, were used. TBS was scored using the modified method of Megyesi et al. [1]. Temperature was acquired from an on-site data logger and the weather bureau station; differences between these two sources were not statistically significant. Using loglinear random-effects maximum likelihood regression, an r² value for ADD (0.6227) was produced, and linear regression formulae to estimate PMI from ADD with a 95% prediction interval were developed. Data from 16 additional pigs placed a year later were then used to validate the accuracy of the method. The actual PMI and ADD were compared to the estimated PMI and ADD produced by the developed formulae, as well as to the estimated PMIs within the 95% prediction interval. The validation produced poor results, as only one of the 16 pigs fell within the 95% interval when using the formulae, showing that ADD has limited use in the prediction of PMI in a South African setting. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
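
    The workflow of regressing PMI on ADD and reporting a 95% prediction interval for a new carcass can be sketched with ordinary least squares; the ADD/PMI pairs below are synthetic, and the published formulae used a log-linear random-effects model rather than the plain OLS shown here.

        import numpy as np
        from scipy import stats

        # Synthetic ADD (x) and observed PMI in days (y); not the study's data.
        x = np.array([50.0, 120, 200, 310, 420, 560, 700, 850])
        y = np.array([3.0, 7, 11, 18, 24, 31, 40, 48])

        n = len(x)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (intercept + slope * x)
        s = np.sqrt(np.sum(resid**2) / (n - 2))      # residual standard error
        sxx = np.sum((x - x.mean())**2)

        def predict_pmi(x0, level=0.95):
            """Point prediction and prediction interval for a new ADD value."""
            t = stats.t.ppf(0.5 + level / 2, df=n - 2)
            y0 = intercept + slope * x0
            half = t * s * np.sqrt(1 + 1 / n + (x0 - x.mean())**2 / sxx)
            return y0, y0 - half, y0 + half

        print(predict_pmi(400.0))   # estimated PMI and its 95% prediction interval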

  20. Estimation of surface water quality in a Yazoo River tributary using the duration curve and recurrence interval approach

    Science.gov (United States)

    Ying Ouyang; Prem B. Parajuli; Daniel A. Marion

    2013-01-01

    Pollution of surface water with harmful chemicals and eutrophication of rivers and lakes with excess nutrients are serious environmental concerns. This study estimated surface water quality in a stream within the Yazoo River Basin (YRB), Mississippi, USA, using the duration curve and recurrence interval analysis techniques. Data from the US Geological Survey (USGS)...

  1. The relative efficiency of three methods of estimating herbage mass ...

    African Journals Online (AJOL)

    The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...

  2. Efficient estimation for high similarities using odd sketches

    DEFF Research Database (Denmark)

    Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang

    2014-01-01

    . This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector...... and web duplicate detection tasks....

  3. Estimating length of avian incubation and nestling stages in afrotropical forest birds from interval-censored nest records

    Science.gov (United States)

    Stanley, T.R.; Newmark, W.D.

    2010-01-01

    In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the length of the incubation and nestling stages and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.

  4. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial involves interpreting some key parameters, such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
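
    For orientation, the asymptotic (Wald) interval that the paper criticizes as potentially inadequate is the simplest to write down; the event counts below are invented.

        import math

        def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
            """Absolute risk reduction with its asymptotic (Wald) confidence interval."""
            p_c = events_ctrl / n_ctrl
            p_t = events_trt / n_trt
            arr = p_c - p_t
            se = math.sqrt(p_c * (1 - p_c) / n_ctrl + p_t * (1 - p_t) / n_trt)
            return arr, arr - z * se, arr + z * se

        # Hypothetical trial: 30/100 events under control, 18/100 under treatment.
        arr, lo, hi = arr_wald_ci(30, 100, 18, 100)
        print(f"ARR = {arr:.3f}, 95% CI ({lo:.3f}, {hi:.3f}); NNT ~ {1 / arr:.1f}")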

  5. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
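
    The contrast between a symmetric normal-theory interval and one based on the distribution of the product can be illustrated by Monte Carlo: simulate the two coefficient estimates as independent normals and take percentiles of their product. The path coefficients and standard errors below are invented.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical path estimates (a: X -> M, b: M -> Y) with their standard errors.
        a, se_a = 0.40, 0.15
        b, se_b = 0.35, 0.12
        ab = a * b

        # Normal-theory (Sobel-type) interval: symmetric around a*b.
        se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
        sobel = (ab - 1.96 * se_ab, ab + 1.96 * se_ab)

        # Distribution-of-the-product interval via simulation: typically asymmetric.
        draws = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
        product_ci = tuple(np.percentile(draws, [2.5, 97.5]))

        print("normal-theory CI:", sobel)
        print("product-distribution CI:", product_ci)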

  6. An integrated theory of prospective time interval estimation : The role of cognition, attention, and learning

    NARCIS (Netherlands)

    Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John

    A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval

  7. Stoichiometric estimates of the biochemical conversion efficiencies in tsetse metabolism

    Directory of Open Access Journals (Sweden)

    Custer Adrian V

    2005-08-01

    Full Text Available Abstract Background The time varying flows of biomass and energy in tsetse (Glossina can be examined through the construction of a dynamic mass-energy budget specific to these flies but such a budget depends on efficiencies of metabolic conversion which are unknown. These efficiencies of conversion determine the overall yields when food or storage tissue is converted into body tissue or into metabolic energy. A biochemical approach to the estimation of these efficiencies uses stoichiometry and a simplified description of tsetse metabolism to derive estimates of the yields, for a given amount of each substrate, of conversion product, by-products, and exchanged gases. This biochemical approach improves on estimates obtained through calorimetry because the stoichiometric calculations explicitly include the inefficiencies and costs of the reactions of conversion. However, the biochemical approach still overestimates the actual conversion efficiency because the approach ignores all the biological inefficiencies and costs such as the inefficiencies of leaky membranes and the costs of molecular transport, enzyme production, and cell growth. Results This paper presents estimates of the net amounts of ATP, fat, or protein obtained by tsetse from a starting milligram of blood, and provides estimates of the net amounts of ATP formed from the catabolism of a milligram of fat along two separate pathways, one used for resting metabolism and one for flight. These estimates are derived from stoichiometric calculations constructed based on a detailed quantification of the composition of food and body tissue and on a description of the major metabolic pathways in tsetse simplified to single reaction sequences between substrates and products. The estimates include the expected amounts of uric acid formed, oxygen required, and carbon dioxide released during each conversion. The calculated estimates of uric acid egestion and of oxygen use compare favorably to

  8. An experimental evaluation of electrical skin conductivity changes in postmortem interval and its assessment for time of death estimation.

    Science.gov (United States)

    Cantürk, İsmail; Karabiber, Fethullah; Çelik, Safa; Şahin, M Feyzi; Yağmur, Fatih; Kara, Sadık

    2016-02-01

    In forensic medicine, estimation of the time of death (ToD) is one of the most important and challenging medico-legal problems. Despite the partial accomplishments in ToD estimations to date, the error margin of ToD estimation is still too large. In this study, electrical conductivity changes were experimentally investigated in the postmortem interval in human cases. Electrical conductivity measurements give some promising clues about the postmortem interval. A living human has a natural electrical conductivity; in the postmortem interval, intracellular fluids gradually leak out of cells. These leaked fluids combine with extra-cellular fluids in tissues and since both fluids are electrolytic, intracellular fluids help increase conductivity. Thus, the level of electrical conductivity is expected to increase with increased time after death. In this study, electrical conductivity tests were applied for six hours. The electrical conductivity of the cases exponentially increased during the tested time period, indicating a positive relationship between electrical conductivity and the postmortem interval. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
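
    The Fisher r -> Z transform mentioned above yields a confidence interval for r in a few lines; this sketch assumes a plain bivariate sample of size n and is not the authors' simulation code.

        import numpy as np

        def fisher_ci(r, n, z_crit=1.96):
            """Confidence interval for a correlation coefficient via the Fisher transform."""
            z = np.arctanh(r)                  # r -> Z
            half = z_crit / np.sqrt(n - 3)
            return float(np.tanh(z - half)), float(np.tanh(z + half))

        print(fisher_ci(r=0.72, n=40))         # hypothetical r and sample size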

  10. Theoretical implications of quantitative properties of interval timing and probability estimation in mouse and rat.

    Science.gov (United States)

    Kheifets, Aaron; Freestone, David; Gallistel, C R

    2017-07-01

    In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) Rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts). 2) The variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies. 3) The adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. The three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory. © 2017 Society for the Experimental Analysis of Behavior.

  11. Statistical estimate of mercury removal efficiencies for air pollution control devices of municipal solid waste incinerators.

    Science.gov (United States)

    Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki

    2010-10-15

    Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing more reliable atmospheric emission inventories of mercury, they are still uncertain because they depend sensitively on many factors, such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. A total of 534 data points on mercury removal efficiencies for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorber (FA), wet scrubber (WS), electrostatic precipitator (ESP), and fabric filter (FF), and their hybrid systems. The data series of all APCD types showed Gaussian log-normality. The average removal efficiency with a 95% confidence interval for each APCD was estimated. The FA, WS, and FF with carbon and/or dry sorbent injection systems had average removal efficiencies of 75% to 82%. On the other hand, the ESP with/without dry sorbent injection had lower removal efficiencies of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, made no more than a 1% difference to the removal efficiency. The injection of activated carbon and carbon-containing fly ash in the FF system made less than a 3% difference. Estimation errors of removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003. This resulted in an additional reduction of about 0.86 Mg of emissions in 2003. Further studies applying the methodology of this study to other important emission sources, such as coal-fired power plants, will contribute to better emission inventories. Copyright © 2010 Elsevier B.V. All rights
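
    Given the reported log-normality of the removal-efficiency data, one simple way to summarize a series with an interval is a t-based interval on the log scale, back-transformed to the geometric mean; the efficiencies below are invented and this is a sketch, not the authors' exact estimator.

        import numpy as np
        from scipy import stats

        # Hypothetical removal efficiencies (%) for one APCD type.
        eff = np.array([62.0, 71.0, 85.0, 90.0, 78.0, 55.0, 94.0, 80.0, 68.0, 73.0])

        log_eff = np.log(eff)
        mean, sem = log_eff.mean(), stats.sem(log_eff)
        lo, hi = stats.t.interval(0.95, df=len(eff) - 1, loc=mean, scale=sem)

        # Back-transforming gives an interval for the geometric mean efficiency.
        print(f"geometric mean = {np.exp(mean):.1f}%, 95% CI ({np.exp(lo):.1f}%, {np.exp(hi):.1f}%)")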

  12. Estimating the Post-Mortem Interval of skeletonized remains: The use of Infrared spectroscopy and Raman spectro-microscopy

    Science.gov (United States)

    Creagh, Dudley; Cameron, Alyce

    2017-08-01

    When skeletonized remains are found, it becomes a police task to identify the body and establish the cause of death. It assists investigators if the Post-Mortem Interval (PMI) can be established. Hitherto no reliable qualitative method of estimating the PMI has been found. A quantitative method has yet to be developed. This paper shows that IR spectroscopy and Raman microscopy have the potential to form the basis of a quantitative method.

  13. Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces

    Science.gov (United States)

    Sellers, Eric W.; Wang, Xingyu

    2013-01-01

    Longer target-to-target intervals (TTI) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between flashes of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between successive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331

  14. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

  15. Is high-intensity interval training a time-efficient exercise strategy to improve health and fitness?

    Science.gov (United States)

    Gillen, Jenna B; Gibala, Martin J

    2014-03-01

    Growing research suggests that high-intensity interval training (HIIT) is a time-efficient exercise strategy to improve cardiorespiratory and metabolic health. "All out" HIIT models such as Wingate-type exercise are particularly effective, but this type of training may not be safe, tolerable or practical for many individuals. Recent studies, however, have revealed the potential for other models of HIIT, which may be more feasible but are still time-efficient, to stimulate adaptations similar to more demanding low-volume HIIT models and high-volume endurance-type training. As little as 3 HIIT sessions per week, involving ≤10 min of intense exercise within a time commitment of ≤30 min per session, including warm-up, recovery between intervals and cool down, has been shown to improve aerobic capacity, skeletal muscle oxidative capacity, exercise tolerance and markers of disease risk after only a few weeks in both healthy individuals and people with cardiometabolic disorders. Additional research is warranted, as studies conducted have been relatively short-term, with a limited number of measurements performed on small groups of subjects. However, given that "lack of time" remains one of the most commonly cited barriers to regular exercise participation, low-volume HIIT is a time-efficient exercise strategy that warrants consideration by health practitioners and fitness professionals.

  16. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  17. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying

    2014-11-07

    For Gaussian process models, likelihood-based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n³) operations and O(n²) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  18. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2014-01-01

    For Gaussian process models, likelihood-based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n³) operations and O(n²) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  19. Prognostic Estimation of Advanced Heart Failure With Low Left Ventricular Ejection Fraction and Wide QRS Interval.

    Science.gov (United States)

    Oh, Changmyung; Chang, Hyuk-Jae; Sung, Ji Min; Kim, Ji Ye; Yang, Wooin; Shim, Jiyoung; Kang, Seok-Min; Ha, Jongwon; Rim, Se-Joong; Chung, Namsik

    2012-10-01

    Cardiac resynchronization therapy (CRT) has been known to improve the outcome of advanced heart failure (HF) but is still underutilized in clinical practice. We investigated the prognosis of patients with advanced HF who were suitable for CRT but were treated with conventional strategies. We also developed a risk model to predict mortality to improve the facilitation of CRT. Patients with symptomatic HF with left ventricular ejection fraction ≤35% and QRS interval >120 ms were consecutively enrolled at a cardiovascular hospital. After excluding those patients who had received device therapy, 239 patients (160 males, mean 67±11 years) were eventually recruited. During a follow-up of 308±236 days, 56 (23%) patients died. Prior stroke, heart rate >90 bpm, serum Na ≤135 mEq/L, and serum creatinine ≥1.5 mg/dL were identified as independent factors using Cox proportional hazards regression. Based on the risk model, points were assigned to each of the risk factors proportional to the regression coefficients, and patients were stratified into three risk groups: low (0 points), intermediate (1-5 points), and high risk (>5 points). The 2-year mortality rates of the risk groups were 5, 31, and 64 percent, respectively. The C statistic of the risk model was 0.78, and the model was validated in a cohort from a different institution where the C statistic was 0.80. The mortality of patients with advanced HF who were managed conventionally was effectively stratified using a risk model. It may be useful for clinicians to be more proactive about adopting CRT to improve patient prognosis.

  20. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  1. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed by eliminating irrelevant features early.
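
    The joint optimization over the regression coefficients and the noise level can be sketched as a naive alternating scheme: fix the noise level, solve a Lasso whose penalty is scaled by it, then update the noise level from the residuals. This ignores the smoothing and safe-screening machinery of the paper and is only a conceptual illustration; the data and the base regularization level are invented.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 100, 200
        X = rng.standard_normal((n, p))
        beta_true = np.zeros(p)
        beta_true[:5] = 2.0
        y = X @ beta_true + rng.standard_normal(n)

        lam = 0.2                      # base regularization (illustrative choice)
        sigma = float(np.std(y))       # initial noise-level guess
        for _ in range(10):            # alternate between coefficient and noise updates
            model = Lasso(alpha=lam * sigma, fit_intercept=False, max_iter=10000).fit(X, y)
            resid = y - X @ model.coef_
            sigma = max(float(np.linalg.norm(resid)) / np.sqrt(n), 1e-8)

        print("estimated noise level:", round(sigma, 3))
        print("number of nonzero coefficients:", int(np.flatnonzero(model.coef_).size))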

  2. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacugi, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO₂-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculation for each of a variety of different feedbacks is impractical.

  3. Efficient AM Algorithms for Stochastic ML Estimation of DOA

    Directory of Open Access Journals (Sweden)

    Haihua Chen

    2016-01-01

    Full Text Available The estimation of the direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem. As a result, its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM can greatly reduce the computational complexity of SML estimation, with IAM being the best. Another advantage of IAM is that it can avoid the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.

  4. Experimental study on source efficiencies for estimating surface contamination level

    International Nuclear Information System (INIS)

    Ichiji, Takeshi; Ogino, Haruyuki

    2008-01-01

    Source efficiency was measured experimentally for various materials, such as metals, nonmetals, flooring materials, sheet materials and other materials, contaminated by alpha- and beta-emitting radionuclides. Five nuclides, 147Pm, 60Co, 137Cs, 204Tl and 90Sr-90Y, were used as beta emitters, and one nuclide, 241Am, was used as the alpha emitter. The test samples were prepared by placing drops of the radioactive standardized solutions uniformly on the various materials using an automatic quantitative dispenser system from Musashi Engineering, Inc. After placing drops of the radioactive standardized solutions, the test materials were allowed to dry for more than 12 hours in a draft chamber with a hood. The radioactivity of each test material was about 30 Bq. Beta rays or alpha rays from the test materials were measured with a 2-pi gas flow proportional counter from Aloka Co., Ltd. The source efficiencies of the metals, nonmetals and sheet materials were higher than 0.5 in the case of contamination by the 137Cs, 204Tl and 90Sr-90Y radioactive standardized solutions, higher than 0.4 in the case of contamination by the 60Co radioactive standardized solution, and higher than 0.25 in the case of contamination by the 241Am radioactive standardized solution (the alpha emitter). These values were higher than those given in Japanese Industrial Standards (JIS) documents. In contrast, the source efficiencies of some permeable materials were lower than those given in JIS documents, because source efficiency varies depending on whether the materials or radioactive sources are wet or dry. This study provides basic data on source efficiency, which is useful for estimating the surface contamination level of materials. (author)

  5. Commercial Discount Rate Estimation for Efficiency Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-04-13

    Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels.1 Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels.2 This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
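
    The role of the commercial discount rate in the LCC calculation reduces to a present-value sum over annual operating-cost savings; the savings stream, rates and lifetime below are invented for illustration.

        def present_value(annual_savings, discount_rate, lifetime_years):
            """Discounted value of a constant annual energy-cost saving."""
            return sum(annual_savings / (1 + discount_rate) ** t
                       for t in range(1, lifetime_years + 1))

        # Hypothetical: $120/year savings over 15 years at two candidate discount rates.
        for r in (0.03, 0.07):
            print(f"discount rate {r:.0%}: present value = ${present_value(120.0, r, 15):,.0f}")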

  6. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    Science.gov (United States)

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomics features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent loss of information. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of the genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  7. Estimation of Maize photosynthesis Efficiency Under Deficit Irrigation and Mulch

    International Nuclear Information System (INIS)

    Al-Hadithi, S.

    2004-01-01

    This research aims at estimating maize photosynthesis efficiency under deficit irrigation and soil mulching. A split-split plot design experiment with three replicates was conducted during the fall season of 2000 and the spring season of 2001 at the Experimental Station of the Soil Department, Iraq Atomic Energy Commission. The main plots were assigned to the full irrigation (control, C) and deficit irrigation treatments. The deficit irrigation treatments consisted of the omission of one irrigation at the establishment (S1, 15 days), vegetative (S2, 35 days), flowering (S3, 40 days) or yield formation (S4, 30 days) stage. The sub-plots were allocated to the two varieties, Synthetic 5012 (V1) and Hybrid 2052 (V2). The sub-sub-plots were assigned to mulch with wheat straw (M1) and no mulch (M0). Results showed that deficit irrigation did not affect photosynthesis efficiency in either season, which ranged between 1.90 and 2.15% in the fall season and between 1.18 and 1.45% in the spring season. The hybrid variety was superior to the synthetic variety by 9.39 and 9.15% in the fall and spring seasons, respectively. Deficit irrigation, variety and mulch had no significant effects on the harvest index in either season. This indicates that the two varieties were stable in their efficiency of partitioning nutrient matter between plant organs and grains under the conditions of this experiment. (Author) 21 refs., 3 figs., 6 tabs

  8. Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.

    Science.gov (United States)

    Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro

    2015-08-21

    Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences and its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
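
    The timing-estimation idea of cross-correlating the received stream against a known Zadoff-Chu preamble and taking the peak position can be sketched as follows; the sequence length, root index, offset and noise level are illustrative and do not reproduce the paper's Wavelet-OFDM setup.

        import numpy as np

        def zadoff_chu(u, n_zc):
            """Zadoff-Chu sequence of odd length n_zc with root index u."""
            n = np.arange(n_zc)
            return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

        rng = np.random.default_rng(3)
        preamble = zadoff_chu(u=7, n_zc=63)

        # Received stream: noise, then the preamble at an unknown offset, then noise.
        true_offset = 158
        rx = (rng.standard_normal(512) + 1j * rng.standard_normal(512)) * 0.1
        rx[true_offset:true_offset + len(preamble)] += preamble

        # Cross-correlate with the known preamble and pick the peak.
        corr = np.abs(np.correlate(rx, preamble, mode="valid"))
        print("estimated symbol start:", int(np.argmax(corr)), "(true:", true_offset, ")")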

  9. Efficient Implementation of a Symbol Timing Estimator for Broadband PLC

    Directory of Open Access Journals (Sweden)

    Francisco Nombela

    2015-08-01

    Full Text Available Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences and its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.

  10. Rates of development of immatures of three species of Chrysomya (Diptera: Calliphoridae) reared in different types of animal tissues: implications for estimating the postmortem interval.

    Science.gov (United States)

    Thyssen, Patricia Jacqueline; de Souza, Carina Mara; Shimamoto, Paula Midori; Salewski, Thais de Britto; Moretti, Thiago Carvalho

    2014-09-01

    Blowflies have major medical and sanitary importance because they can be vectors of viruses, bacteria, and helminths and are also causative agents of myiasis. Also, these flies, especially those belonging to the genus Chrysomya, are among the first insects to arrive at carcasses and are therefore valuable in providing data for the estimation of the minimum postmortem interval (PMImin). The PMImin can be calculated by assessing the weight, length, or development stage of blowfly larvae. Lack of information on the variables that might affect these parameters in different fly species can generate inaccuracies in estimating the PMImin. This study evaluated the effects of different types of bovine tissues (the liver, muscle, tongue, and stomach) and chicken heart on the development rates of larvae of Chrysomya albiceps Wiedemann, Chrysomya megacephala Fabricius, and Chrysomya putoria Wiedemann (Diptera: Calliphoridae). The efficiency of each rearing substrate was assessed by maggot weight gain (mg), larval development time (h), larval and pupal survival (%), and emergence interval (h). The development rates of larvae of all blowfly species studied here were directly influenced by the type of food substrate. Tissues that have high contents of protein and fat (muscle and heart) allowed the highest larval weight gain. For bovine liver, all Chrysomya species showed slower growth, by as much as 48 h, compared to the other tissues. Different rates of development are probably associated with specific energy requirements of calliphorids and the nutritional composition of each type of food.

  11. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)

  12. Efficient Spectral Power Estimation on an Arbitrary Frequency Scale

    Directory of Open Access Journals (Sweden)

    F. Zaplata

    2015-04-01

    Full Text Available The Fast Fourier Transform is a very efficient algorithm for Fourier spectrum estimation, but it is limited to a linear frequency scale, which may not be suitable for every system. For example, audio and speech analysis needs a logarithmic frequency scale due to the characteristics of the human ear. The Fast Fourier Transform algorithms are not able to give the desired results efficiently, and modified techniques have to be used in this case. In the following text, a simple technique using the Goertzel algorithm that allows the evaluation of power spectra on an arbitrary frequency scale is introduced. Due to its simplicity, the algorithm suffers from imperfections, which are discussed and partially solved in this paper. The implementation in real systems and the impact of quantization errors appeared to be critical and have to be dealt with in special cases. A simple method for dealing with the quantization error is also introduced. Finally, the proposed method is compared to other methods in terms of its computational demands and potential speed.
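
    A minimal Goertzel evaluation of signal power at a single, arbitrarily chosen frequency is sketched below; the sampling rate, test signal and probe frequencies are illustrative, and no quantization effects are modelled.

        import numpy as np

        def goertzel_power(x, freq, fs):
            """Goertzel evaluation of power at one arbitrary (non-bin-aligned) frequency."""
            w = 2.0 * np.pi * freq / fs
            coeff = 2.0 * np.cos(w)
            s_prev, s_prev2 = 0.0, 0.0
            for sample in x:
                s = sample + coeff * s_prev - s_prev2
                s_prev2, s_prev = s_prev, s
            return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

        fs = 8000.0
        t = np.arange(2048) / fs
        signal = np.sin(2 * np.pi * 440.0 * t) + 0.3 * np.sin(2 * np.pi * 1000.0 * t)

        # Evaluate only the frequencies of interest, e.g. a logarithmically spaced set.
        for f in (220.0, 440.0, 880.0, 1000.0):
            print(f"{f:7.1f} Hz -> {goertzel_power(signal, f, fs):.1f}")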

  13. The estimation of energy efficiency for hybrid refrigeration system

    International Nuclear Information System (INIS)

    Gazda, Wiesław; Kozioł, Joachim

    2013-01-01

    Highlights: ► We present the experimental setup and the model of the hybrid cooling system. ► We examine impact of the operating parameters of the hybrid cooling system on the energy efficiency indicators. ► A comparison of the final and the primary energy use for a combination of the cooling systems is carried out. ► We explain the relationship between the COP and PER values for the analysed cooling systems. -- Abstract: The concept of the air blast-cryogenic freezing method (ABCF) is based on an innovative hybrid refrigeration system with one common cooling space. The hybrid cooling system consists of a vapor compression refrigeration system and a cryogenic refrigeration system. The prototype experimental setup for this method on the laboratory scale is discussed. The application of the results of experimental investigations and the theoretical–empirical model makes it possible to calculate the cooling capacity as well as the final and primary energy use in the hybrid system. The energetic analysis has been carried out for the operating modes of the refrigerating systems for the required temperatures inside the cooling chamber of −5 °C, −10 °C and −15 °C. For the estimation of the energy efficiency the coefficient of performance COP and the primary energy ratio PER for the hybrid refrigeration system are proposed. A comparison of these coefficients for the vapor compression refrigeration and the cryogenic refrigeration system has also been presented.

  14. ESTIMATION OF EFFICIENCY PARTNERSHIP LARGE AND SMALL BUSINESS

    Directory of Open Access Journals (Sweden)

    Олег Васильевич Чабанюк

    2014-05-01

    Full Text Available In this article, based on the definition of key factors and their components, an algorithm is developed for the consistent, logically connected stages of transition from a traditional enterprise to an innovation-type enterprise based on intrapreneurship. The analysis of the economic efficiency of an innovative business idea is based on expert determination of the importance of the model parameters that ensure the effectiveness of intrapreneurship; using qualimetric modelling of the expert estimates, an “efficiency of intrapreneurship” score is calculated. According to the author's projections, the optimum level of this indicator should exceed 0.5, although it should be noted that this level is typically attainable only in the second or third year of an intrapreneurial structure's existence. The proposed method was tested in practice and can be used for the formation of intrapreneurship in large and medium-sized enterprises as one of the methods of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50

  15. Public-Private Investment Partnerships: Efficiency Estimation Methods

    Directory of Open Access Journals (Sweden)

    Aleksandr Valeryevich Trynov

    2016-06-01

    Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to the more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based on a synthesis of approaches to the evaluation of projects implemented in the private and public sectors and, in contrast to existing methods, allows the indirect (multiplicative) effects arising during project implementation to be taken into account. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed. The matrix is based on data for the Sverdlovsk region for 2013. The article presents the genesis of balance models of economic systems and traces the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now. It is shown that the SAM is widely used around the world for a wide range of applications, primarily to assess the impact of various exogenous factors on the regional economy. In order to refine the estimates of the multiplicative effects, the “industry” account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step allows the particular characteristics of the industry of the evaluated investment project to be taken into account. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
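
    The multiplier logic behind a social accounting matrix can be illustrated with the standard accounting-multiplier formula, total effects = (I - A)^-1 * injection, where A holds the column-normalized coefficients of the endogenous accounts; the 3-account matrix and the injection below are invented and bear no relation to the Sverdlovsk data.

        import numpy as np

        # Hypothetical coefficient matrix A for three endogenous accounts
        # (e.g. activities, households, other institutions), column-normalized shares.
        A = np.array([
            [0.20, 0.30, 0.10],
            [0.35, 0.05, 0.25],
            [0.10, 0.40, 0.05],
        ])

        # Accounting multiplier matrix M = (I - A)^-1.
        M = np.linalg.inv(np.eye(3) - A)

        # Exogenous injection (e.g. project investment) into the first account.
        injection = np.array([100.0, 0.0, 0.0])
        total_effect = M @ injection

        print("multiplier matrix:\n", np.round(M, 2))
        print("total (direct + indirect) effect:", np.round(total_effect, 1))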

  16. Estimating survival of dental fillings on the basis of interval-censored data and multi-state models

    DEFF Research Database (Denmark)

    Joly, Pierre; Gerds, Thomas A; Qvist, Vibeke

    2012-01-01

    We aim to compare the life expectancy of a filling in a primary tooth between two types of treatments. We define the probabilities that a dental filling survives without complication until the permanent tooth erupts from beneath (exfoliation). We relate the time to exfoliation of the tooth...... with all these particularities, we propose to use a parametric four-state model with three random effects to take into account the hierarchical cluster structure. For inference, right and interval censoring as well as left truncation have to be dealt with. With the proposed approach, we can conclude...... that the estimated probability that a filling survives without complication until exfoliation is larger for one treatment than for the other, for all ages of the child at the time of treatment....

  17. Using bacterial and necrophagous insect dynamics for post-mortem interval estimation during cold season: Novel case study in Romania.

    Science.gov (United States)

    Iancu, Lavinia; Carter, David O; Junkins, Emily N; Purcarea, Cristina

    2015-09-01

    Considering the biogeographical characteristics of forensic entomology, and the recent development of forensic microbiology as a complementary approach for post-mortem interval estimation, the current study focused on characterizing the succession of necrophagous insect species and bacterial communities inhabiting the rectum and mouth cavities of swine (Sus scrofa domesticus) carcasses during a cold season outdoor experiment in an urban natural environment of Bucharest, Romania. We monitored the decomposition process of three swine carcasses during a 7 month period (November 2012-May 2013) corresponding to winter and spring periods of a temperate climate region. The carcasses, protected by wire cages, were placed on the ground in a park type environment, while the meteorological parameters were constantly recorded. The succession of necrophagous Diptera and Coleoptera taxa was monitored weekly, in both the adult and larval stages, and the species were identified by both morphological and genetic characterization. The structure of bacterial communities from swine rectum and mouth tissues was characterized during the same time intervals by denaturing gradient gel electrophoresis (DGGE) and sequencing of 16S rRNA gene fragments. We observed a shift in the structure of both insect and bacterial communities, primarily due to seasonal effects and the depletion of the carcass. A total of 14 Diptera and 6 Coleoptera species were recorded on the swine carcasses, of which Calliphora vomitoria and C. vicina (Diptera: Calliphoridae), Necrobia violacea (Coleoptera: Cleridae) and Thanatophilus rugosus (Coleoptera: Silphidae) were observed as predominant species. The first colonizing wave, primarily Calliphoridae, was observed after 15 weeks when the temperature increased to 13°C. This was followed by Muscidae, Fanniidae, Anthomyiidae, Sepsidae and Piophilidae. Families belonging to the order Coleoptera were observed at week 18, when temperatures rose above 18°C, starting with

  18. Virtual forensic entomology: improving estimates of minimum post-mortem interval with 3D micro-computed tomography.

    Science.gov (United States)

    Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina

    2012-07-10

    We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarter of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. In order to find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5M aqueous iodine for 1 day; two for 7 days; three was tagged with a radiopaque dye; four was left unstained (control). Pupae stained for 7d in iodine resulted in the best contrast micro-CT scans. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than using external morphological characters alone. Age specific developmental characters are described. The technique could be used as a measure to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stages of insect evidence collected. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  19. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.

  20. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  1. Dynamics of Necrophagous Insect and Tissue Bacteria for Postmortem Interval Estimation During the Warm Season in Romania.

    Science.gov (United States)

    Iancu, Lavinia; Sahlean, Tiberiu; Purcarea, Cristina

    2016-01-01

    The estimation of postmortem interval (PMI) is affected by several factors including the cause of death, the place where the body lay after death, and the weather conditions during decomposition. Given the climatic differences among biogeographic locations, the understanding of necrophagous insect species biology and ecology is required when estimating PMI. The current experimental model was developed in Romania during the warm season in an outdoor location. The aim of the study was to identify the necrophagous insect species diversity and dynamics, and to detect the bacterial species present during decomposition in order to determine if their presence or incidence timing could be useful to estimate PMI. The decomposition process of domestic swine carcasses was monitored throughout a 14-wk period (10 July-10 October 2013), along with a daily record of meteorological parameters. The chronological succession of necrophagous entomofauna comprised nine Diptera species, with the dominant presence of Chrysomya albiceps (Wiedemann 1819) (Calliphoridae), while only two Coleoptera species were identified, Dermestes undulatus (L. 1758) and Creophilus maxillosus Brahm 1970. The bacterial diversity and dynamics from the mouth and rectum tissues, and third-instar dipteran larvae were identified using denaturing gradient gel electrophoresis analysis and sequencing of bacterial 16S rRNA gene fragments. Throughout the decomposition process, two main bacterial chronological groups were differentiated, represented by Firmicutes and Gammaproteobacteria. Twenty-six taxa from the rectal cavity and 22 from the mouth cavity were identified, with the dominant phylum in both these cavities corresponding to Firmicutes. The present data strengthen the postmortem entomological and microbial information for the warm season in this temperate-continental area, as well as the role of microbes in carcass decomposition. © The Authors 2015. Published by Oxford University Press on behalf of

  2. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
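
    As a rough illustration of the most-flat-interval idea, the Python sketch below embeds a scalar time series, computes a plain Grassberger–Procaccia correlation integral, and returns the mean local slope over the flattest window of the log-log derivative. The embedding parameters, quantile limits on the correlation length, and window width are illustrative choices, not the authors' exact procedure, and the reliability indices of the paper are not reproduced.

```python
import numpy as np

def correlation_dimension_mfi(x, m=5, tau=1, n_r=40, win=8):
    """Estimate the correlation dimension by locating the most flat interval
    (MFI) in the local slope of the log-log correlation integral."""
    # Delay embedding of the scalar series into an m-dimensional space.
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

    # Pairwise distances (max norm) between embedded points.
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    d = d[np.triu_indices(n, k=1)]

    # Correlation integral C(r) on a log grid of correlation lengths,
    # restricted to a range that avoids the noise floor and saturation.
    r = np.logspace(np.log10(np.quantile(d, 0.01)),
                    np.log10(np.quantile(d, 0.5)), n_r)
    c = np.array([np.mean(d < ri) for ri in r])

    # Local slope of the log-log plot, d log C / d log r.
    slope = np.gradient(np.log(c), np.log(r))

    # Most flat interval: the window of `win` points where the slope varies least.
    spread = [slope[i:i + win].std() for i in range(len(slope) - win + 1)]
    i0 = int(np.argmin(spread))
    return slope[i0:i0 + win].mean()

# Example: a lightly noisy sine series should give a dimension close to 1.
t = np.linspace(0, 40 * np.pi, 800)
series = np.sin(t) + 0.005 * np.random.default_rng(0).normal(size=t.size)
print("estimated dimension:", round(correlation_dimension_mfi(series), 2))
```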

  3. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    Directory of Open Access Journals (Sweden)

    Anthony Chan

    2008-01-01

    Full Text Available A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the tracefiles at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
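
    The on-disk layout of the hierarchical format is not reproduced here; the sketch below is only a simplified in-memory analogue of the idea, a centered interval tree over (start, end, payload) states, so that fetching everything overlapping a query window costs roughly the logarithm of the number of events plus the number of events returned. The event generator and window are made up for the example.

```python
import random

class IntervalNode:
    """One node of a centered interval tree over time-stamped states,
    a simplified in-memory analogue of a hierarchical trace index."""
    def __init__(self, events):
        starts = sorted(e[0] for e in events)
        self.center = starts[len(starts) // 2]            # split time
        self.here = [e for e in events if e[0] <= self.center <= e[1]]
        left = [e for e in events if e[1] < self.center]
        right = [e for e in events if e[0] > self.center]
        self.left = IntervalNode(left) if left else None
        self.right = IntervalNode(right) if right else None

    def query(self, a, b, out):
        """Collect every state overlapping the time window [a, b]."""
        out.extend(e for e in self.here if e[0] <= b and e[1] >= a)
        if self.left and a < self.center:
            self.left.query(a, b, out)
        if self.right and b > self.center:
            self.right.query(a, b, out)

# 100,000 synthetic states (start, end, label), then one window query.
rng = random.Random(0)
events = [(s, s + rng.uniform(0.01, 2.0), f"state{i}")
          for i, s in enumerate(rng.uniform(0, 1e4) for _ in range(100_000))]
tree = IntervalNode(events)
hits = []
tree.query(500.0, 501.0, hits)
print(len(hits), "states overlap the window [500, 501]")
```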

  4. Alteration in cardiac uncoupling proteins and eNOS gene expression following high-intensity interval training in favor of increasing mechanical efficiency

    OpenAIRE

    Fallahi, Ali Asghar; Shekarfroush, Shahnaz; Rahimi, Mostafa; Jalali, Amirhossain; Khoshbaten, Ali

    2016-01-01

    Objective(s): High-intensity interval training (HIIT) increases energy expenditure and mechanical energy efficiency. Although both uncoupling proteins (UCPs) and endothelial nitric oxide synthase (eNOS) affect the mechanical efficiency and antioxidant capacity, their effects are inverse. The aim of this study was to determine whether the alterations of cardiac UCP2, UCP3, and eNOS mRNA expression following HIIT are in favor of increased mechanical efficiency or decreased oxidative stress. Mat...

  5. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods......

  6. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  7. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  8. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).

  9. Global CO2 efficiency: Country-wise estimates using a stochastic cost frontier

    International Nuclear Information System (INIS)

    Herrala, Risto; Goel, Rajeev K.

    2012-01-01

    This paper examines global carbon dioxide (CO2) efficiency by employing a stochastic cost frontier analysis of about 170 countries in 1997 and 2007. The main contribution lies in providing a new approach to environmental efficiency estimation, in which the efficiency estimates quantify the distance from the policy objective of minimum emissions. We are able to examine a very large pool of nations and provide country-wise efficiency estimates. We estimate three econometric models, corresponding with alternative interpretations of the Cancun vision (Conference of the Parties 2011). The models reveal progress in global environmental efficiency during the preceding decade. The estimates indicate vast differences in efficiency levels, and efficiency changes across countries. The highest efficiency levels are observed in Africa and Europe, while the lowest are clustered around China. The largest efficiency gains were observed in central and eastern Europe. CO2 efficiency also improved in the US and China, the two largest emitters, but their ranking in terms of CO2 efficiency deteriorated. Policy implications are discussed. - Highlights: ► We estimate global environmental efficiency in line with the Cancun vision, using a stochastic cost frontier. ► The study covers 170 countries during a 10-year period, ending in 2007. ► The biggest improvements occurred in Europe, and efficiency fell in South America. ► The efficiency ranking of the US and China, the largest emitters, deteriorated. ► In 2007, the highest efficiency was observed in Africa and Europe, and the lowest around China.

  10. Efficient collaborative sparse channel estimation in massive MIMO

    KAUST Repository

    Masood, Mudassir; Afify, Laila H.; Al-Naffouri, Tareq Y.

    2015-01-01

    We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.

  11. Efficient collaborative sparse channel estimation in massive MIMO

    KAUST Repository

    Masood, Mudassir

    2015-08-12

    We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.

  12. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  13. Energy-Efficient Channel Estimation in MIMO Systems

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends has created additional cost in terms of hardware and energy consumption. For coherent detection as well as to do optimization such as water filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and receiver, the channel estimation problem is more complicated and costly compared to a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase which includes transmission of training symbols, storage of those symbols at the receiver, and also channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.

  14. Efficient estimates of cochlear hearing loss parameters in individual listeners

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

    2013-01-01

    It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment Plack et al. (2004). According...... to Jepsen and Dau (2011) IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001...... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed...

  15. Chrysomya albiceps (Wiedemann) and Hemilucilia segmentaria (Fabricius) (Diptera, Calliphoridae) used to estimate the postmortem interval in a forensic case in Minas Gerais, Brazil

    Directory of Open Access Journals (Sweden)

    Cecília Kosmann

    2011-12-01

    Full Text Available ABSTRACT. Chrysomya albiceps (Wiedemann) and Hemilucilia segmentaria (Fabricius) (Diptera, Calliphoridae) used to estimate the postmortem interval in a forensic case in Minas Gerais, Brazil. The corpse of a man was found in a Brazilian highland savanna (cerrado) in the state of Minas Gerais. Fly larvae were collected at the crime scene and arrived at the laboratory three days afterwards. From the eight pre-pupae, seven adults of Chrysomya albiceps (Wiedemann, 1819) emerged and, from the two larvae, two adults of Hemilucilia segmentaria (Fabricius, 1805) were obtained. As necrophagous insects use corpses as a feeding resource, their development rate can be used as a tool to estimate the postmortem interval. The post-embryonic development stage of the immatures collected on the body was estimated as the difference between the total development time and the time required for them to become adults in the lab. The estimated age of the maggots from both species and the minimum postmortem interval were four days. This is the first time that H. segmentaria has been used to estimate the postmortem interval in a forensic case.

  16. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
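
    The WLS and Mantel-Haenszel estimators for the three-period crossover design are not reproduced here; purely as a simpler illustration of interval estimation for a ratio of Poisson mean frequencies, a two-treatment log-scale Wald interval can be sketched in Python as below. The counts and exposure times are hypothetical.

```python
import math
from statistics import NormalDist

def poisson_rate_ratio_ci(x1, t1, x2, t2, alpha=0.05):
    """Log-scale Wald interval for the ratio of two Poisson event rates,
    where x events were observed over an exposure of t (counts > 0)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ratio = (x1 / t1) / (x2 / t2)
    se_log = math.sqrt(1.0 / x1 + 1.0 / x2)   # SE of the log rate ratio
    return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

# Hypothetical exacerbation counts: 18 events over 40 patient-periods on an
# active treatment versus 30 events over 42 patient-periods on placebo.
estimate, lower, upper = poisson_rate_ratio_ci(18, 40, 30, 42)
print(f"rate ratio {estimate:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```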

  17. Agent-based Security and Efficiency Estimation in Airport Terminals

    NARCIS (Netherlands)

    Janssen, S.A.M.

    We investigate the use of an Agent-based framework to identify and quantify the relationship between security and efficiency within airport terminals. In this framework, we define a novel Security Risk Assessment methodology that explicitly models attacker and defender behavior in a security

  18. Thermodynamic framework for estimating the efficiencies of alkaline batteries

    Energy Technology Data Exchange (ETDEWEB)

    Pound, B G; Singh, R P; MacDonald, D D

    1986-06-01

    A thermodynamic framework has been developed to evaluate the efficiencies of alkaline battery systems for electrolyte (MOH) concentrations from 1 to 8 mol kg⁻¹ and over the temperature range -10 to 120 °C. An analysis of the thermodynamic properties of concentrated LiOH, NaOH, and KOH solutions was carried out to provide data for the activity of water, the activity coefficient of the hydroxide ion, and the pH of the electrolyte. Potential-pH relations were then derived for various equilibrium phenomena for the metals Li, Al, Fe, Ni, and Zn in aqueous solutions and, using the data for the alkali metal hydroxides, equilibrium potentials were computed as a function of composition and temperature. These data were then used to calculate reversible cell voltages for a number of battery systems, assuming a knowledge of the cell reactions. Finally, some of the calculated cell voltages were compared with observed cell voltages to compute voltage efficiencies for various alkaline batteries. The voltage efficiencies of H₂/Ni, Fe/Ni, and Zn/Ni test cells were found to be between 90 and 100%, implying that, at least at open circuit, there is little, if any, contribution from parasitic redox couples to the cell potentials for these systems. The efficiency of an Fe/air test cell was relatively low (72%). This is probably due to the less-than-theoretical voltage of the air electrode.

  19. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  20. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2001-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, as indicated by de Jong (2000), the EKF in this

  1. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2002-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, the EKF in this situation leads to inconsistent

  2. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    Science.gov (United States)

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably produces events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
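
    The quoted numbers imply a simple time-to-recharge calculation: divide the moment of a characteristic, 2008-class event by the interseismic moment accumulation rate. The Python sketch below reproduces that arithmetic using the standard Hanks-Kanamori magnitude-to-moment conversion; it gives roughly 2,900 yr, whereas the published 3900 ± 400 yr figure rests on the paper's own geodetically modeled coseismic moment, so the sketch should be read as the pattern of the calculation rather than a reproduction of the result.

```python
def hanks_kanamori_moment(mw):
    """Seismic moment (N·m) from moment magnitude, M0 = 10**(1.5*Mw + 9.05)."""
    return 10 ** (1.5 * mw + 9.05)

moment_rate = 2.7e17            # interseismic accumulation rate, N·m per year
rate_err = 0.3e17               # its reported uncertainty

# Moment of a characteristic, 2008-class (Mw 7.9) earthquake (conversion-based,
# not the paper's geodetic moment).
m0 = hanks_kanamori_moment(7.9)

# Time to re-accumulate that moment; slower accumulation -> longer interval.
t_mid = m0 / moment_rate
t_lo = m0 / (moment_rate + rate_err)
t_hi = m0 / (moment_rate - rate_err)
print(f"M0 ≈ {m0:.2e} N·m, recurrence ≈ {t_mid:.0f} yr (range {t_lo:.0f}-{t_hi:.0f} yr)")
```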

  3. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    Directory of Open Access Journals (Sweden)

    Junjie Ren

    2013-01-01

    Full Text Available The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably produces events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  4. SCoPE: an efficient method of Cosmological Parameter Estimation

    International Nuclear Information System (INIS)

    Das, Santanu; Souradeep, Tarun

    2014-01-01

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named the Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out some cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis, on the one hand, help us to understand the workability of SCoPE better; on the other hand, they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data
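
    SCoPE's delayed rejection, pre-fetching, and inter-chain covariance updates are not reproduced here; the Python sketch below only illustrates the basic ingredient such samplers build on, a random-walk Metropolis chain whose proposal covariance is periodically re-estimated from the chain history, applied to a toy two-parameter Gaussian posterior standing in for a real likelihood.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_steps=20000, adapt_every=500):
    """Random-walk Metropolis with a proposal covariance re-estimated
    from the chain history (toy analogue of adaptive MCMC samplers)."""
    rng = np.random.default_rng(0)
    dim = len(x0)
    cov = np.eye(dim) * 0.1
    chain = np.empty((n_steps, dim))
    x, lp = np.asarray(x0, float), log_post(x0)
    accepted = 0
    for i in range(n_steps):
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
            x, lp = prop, lp_prop
            accepted += 1
        chain[i] = x
        # Periodically update the proposal covariance from the samples so far.
        if i > 0 and i % adapt_every == 0:
            cov = np.cov(chain[:i + 1].T) * 2.38 ** 2 / dim + 1e-8 * np.eye(dim)
    return chain, accepted / n_steps

# Toy posterior: a correlated 2-D Gaussian.
prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda p: -0.5 * np.asarray(p) @ prec @ np.asarray(p)
samples, acc = adaptive_metropolis(log_post, [3.0, -3.0])
print("acceptance rate:", round(acc, 2), "posterior mean:", samples[5000:].mean(axis=0))
```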

  5. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  6. Robust efficient estimation of heart rate pulse from video

    Science.gov (United States)

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination condition, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294
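
    A minimal Python sketch of the log-space idea is given below, assuming the video has already been reduced to one mean skin-pixel value per frame; the detrending, band limits, and the synthetic test signal are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np

def pulse_rate_bpm(mean_pixel_per_frame, fps, band=(0.7, 4.0)):
    """Estimate heart rate from a per-frame mean skin-pixel intensity.
    Taking the log turns multiplicative absorption changes (arterial
    pulsation) into an additive signal, as in quotient-in-log-space ideas."""
    s = np.log(np.asarray(mean_pixel_per_frame, float))
    s = s - np.convolve(s, np.ones(int(fps)) / int(fps), mode="same")  # detrend
    spec = np.abs(np.fft.rfft(s * np.hanning(len(s)))) ** 2
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])   # plausible pulse range
    return 60.0 * freqs[in_band][np.argmax(spec[in_band])]

# Synthetic check: a 72 bpm (1.2 Hz) pulsation riding on a slow illumination drift.
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
frames = 120 * (1 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
                  + 0.05 * np.sin(2 * np.pi * 0.05 * t))
print("estimated pulse:", pulse_rate_bpm(frames, fps), "bpm")
```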

  7. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state-of-art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  8. Testing equality and interval estimation in binary responses when high dose cannot be used first under a three-period crossover design.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    When comparing two doses of a new drug with a placebo, we may consider using a crossover design subject to the condition that the high dose cannot be administered before the low dose. Under a random-effects logistic regression model, we focus our attention on dichotomous responses when the high dose cannot be used first under a three-period crossover trial. We derive asymptotic test procedures for testing equality between treatments. We further derive interval estimators to assess the magnitude of the relative treatment effects. We employ Monte Carlo simulation to evaluate the performance of these test procedures and interval estimators in a variety of situations. We use the data taken as a part of trial comparing two different doses of an analgesic with a placebo for the relief of primary dysmenorrhea to illustrate the use of the proposed test procedures and estimators.

  9. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  10. Estimation of Economic Efficiency of Regional Touristic Complex

    Directory of Open Access Journals (Sweden)

    Kurchenkov Vladimir Viktorovich

    2015-09-01

    Full Text Available The article describes the features of the development of the regional touristic complex in modern conditions and determines the direction of realizing the potential of the regional market of tourist services. The authors reveal the multiplicative interrelation for analyzing the interaction of the primary and secondary sectors of the regional market of tourist services. The key indicators of efficiency are outlined, and the extent of their relevance for assessing the potential of international tourism in the region is revealed. The authors calculate the relative indicators reflecting the dynamics of incomes from inbound, outbound and domestic tourism in relation to the total income from tourism activities in the region during the reporting period, usually for one calendar year. On the basis of these parameters, the classification of the regions of the Southern Federal District in terms of tourist attraction is carried out. The authors determine the reasons for the low tourist attractiveness of the Volgograd region in comparison with other regions of the Southern Federal District. It is substantiated that the potential of expanding tourism activity is not fully realized today in the Volgograd region. A technique for analyzing and evaluating the effectiveness of the regional touristic complex on the basis of a cluster approach is suggested. For analyzing the effectiveness of the regional tourism cluster, the authors propose to use indicators that reflect the overall performance of the regional tourism cluster, characterizing the impact of cluster development on the area, or the regional market, as well as evaluating the performance of each of the companies cooperating in the framework of the cluster. The article contains recommendations to the regional authorities on improving the efficiency of the regional touristic complex in the short and long term.

  11. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.

  12. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  13. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute the variance-based global sensitivity analysis, the law of total variance in the successive intervals without overlapping is proved at first, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently by one group of sample points. In addition, there is no need for optimizing the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables in the proposed approach, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation on the thought of partition is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
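
    A minimal Python sketch of the space-partition idea for first-order (main-effect) indices follows, assuming independent uniform inputs and using the Ishigami function as a stand-in model. The number of partitions is an illustrative choice, and the paper's refinements (successive non-overlapping intervals tied to the sample size, convergence control) are not reproduced.

```python
import numpy as np

def main_effect_indices(x, y, n_bins=50):
    """First-order Sobol indices from one sample set by partitioning each
    input into successive equal-size intervals. By the law of total variance,
    Var(Y) = E[Var(Y|X_i)] + Var(E[Y|X_i]); the second term is estimated from
    the within-interval means of the output."""
    n, d = x.shape
    var_y = y.var()
    indices = []
    for j in range(d):
        order = np.argsort(x[:, j])
        groups = np.array_split(y[order], n_bins)     # successive intervals of x_j
        means = np.array([g.mean() for g in groups])
        weights = np.array([len(g) for g in groups]) / n
        var_cond_mean = np.sum(weights * (means - y.mean()) ** 2)
        indices.append(var_cond_mean / var_y)
    return np.array(indices)

# Ishigami test function with its usual constants a = 7, b = 0.1.
rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, size=(100_000, 3))
y = np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])
print(main_effect_indices(x, y))   # analytic values are roughly 0.31, 0.44, 0.00
```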

  14. Assessing Interval Estimation Methods for Hill Model Parameters in a High-Throughput Screening Context (IVIVE meeting)

    Science.gov (United States)

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maxi...

  15. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    Science.gov (United States)

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.
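
    PMICALC itself is an R-based package; purely as an illustration of the support-vector-regression variant on the same kind of predictors, a hedged Python sketch might look like the following. The training data are synthetic stand-ins (vitreous [K+], [Hx] and [U] drifting upwards with PMI plus noise), not the case material behind the published models, and the residual-based interval is a crude placeholder for the program's confidence intervals.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training set: PMI in hours and noisy vitreous-humour chemistry
# ([K+] mmol/L, [Hx] umol/L, [U] mg/dL) rising roughly linearly with PMI.
pmi = rng.uniform(2, 96, 400)
X = np.column_stack([
    5.5 + 0.17 * pmi + rng.normal(0, 1.0, pmi.size),   # potassium
    20 + 3.0 * pmi + rng.normal(0, 25.0, pmi.size),    # hypoxanthine
    25 + 0.10 * pmi + rng.normal(0, 6.0, pmi.size),    # urea
])

# Support vector regression of PMI on the three analytes.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X, pmi)

# Predict PMI for a new (hypothetical) case and attach a rough residual band.
case = np.array([[12.0, 140.0, 30.0]])
resid_sd = np.std(pmi - model.predict(X))
est = model.predict(case)[0]
print(f"estimated PMI ~ {est:.1f} h (+/- {1.96 * resid_sd:.1f} h, rough)")
```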

  16. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    Science.gov (United States)

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
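
    The program implements both classical and Bayesian interval methods; as a rough Python analogue of the Bayesian option, one can simulate the posterior of the normative mean and variance under a standard noninformative prior and read off an interval for the percentile rank. This is one simple treatment, not necessarily the exact algorithm behind the program, and the normative values below are placeholders rather than the published norms.

```python
import numpy as np
from scipy import stats

def percentile_rank_interval(score, norm_mean, norm_sd, norm_n,
                             n_sim=100_000, level=0.95, seed=0):
    """Point and Bayesian interval estimate of the percentile rank of a raw
    score against normative data summarised by mean, SD and sample size."""
    rng = np.random.default_rng(seed)
    # Posterior draws for the normative variance and mean (noninformative prior).
    var = (norm_n - 1) * norm_sd ** 2 / rng.chisquare(norm_n - 1, n_sim)
    mu = rng.normal(norm_mean, np.sqrt(var / norm_n))
    pr = 100 * stats.norm.cdf((score - mu) / np.sqrt(var))
    point = 100 * stats.norm.cdf((score - norm_mean) / norm_sd)
    lo, hi = np.percentile(pr, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return point, lo, hi

# Placeholder norms (not the published ones): mean 10.6, SD 8.1, n = 1771.
print(percentile_rank_interval(score=21, norm_mean=10.6, norm_sd=8.1, norm_n=1771))
```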

  17. The effect of the number of transferred embryos, the interval between nuclear transfer and embryo transfer, and the transfer pattern on pig cloning efficiency.

    Science.gov (United States)

    Rim, Chol Ho; Fu, Zhixin; Bao, Lei; Chen, Haide; Zhang, Dan; Luo, Qiong; Ri, Hak Chol; Huang, Hefeng; Luan, Zhidong; Zhang, Yan; Cui, Chun; Xiao, Lei; Jong, Ui Myong

    2013-12-01

    To improve the efficiency of producing cloned pigs, we investigated the influence of the number of transferred embryos, the culturing interval between nuclear transfer (NT) and embryo transfer, and the transfer pattern (single oviduct or double oviduct) on cloning efficiency. The results demonstrated that transfer of either 150-200 or more than 200 NT embryos, compared to transfer of 100-150 embryos, resulted in a significantly higher pregnancy rate (48 ± 16 and 50 ± 16 vs. 29 ± 5%, p < 0.05). Overall, higher cloning efficiency is achieved by adjusting the number and in vitro culture time of reconstructed embryos as well as the embryo transfer pattern. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  19. Operator Bias in the Estimation of Arc Efficiency in Gas Tungsten Arc Welding

    Directory of Open Access Journals (Sweden)

    Fredrik Sikström

    2015-03-01

    Full Text Available In this paper the operator bias in the measurement process of arc efficiency in stationary direct-current electrode-negative gas tungsten arc welding is discussed. An experimental study involving 15 operators (enough to reach statistical significance) has been carried out with the purpose of estimating the arc efficiency from a specific procedure for calorimetric experiments. The measurement procedure consists of three manual operations which introduce operator bias into the measurement process. An additional relevant experiment highlights the consequences of estimating the arc voltage by measuring the potential between the terminals of the welding power source instead of measuring the potential between the electrode contact tube and the workpiece. The result of the study is a statistical evaluation of the influence of operator bias on the estimate, showing that operator bias is negligible in the estimate considered here. In contrast, neglecting the voltage drop across the welding leads results in a significant underestimation of the arc efficiency.
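
    As a numerical illustration of why the voltage measurement point matters, the Python sketch below computes the arc efficiency as calorimetric heat divided by electrical energy, once with the arc voltage proper and once with a terminal voltage inflated by a hypothetical lead voltage drop; all numbers are made up for the example.

```python
def arc_efficiency(q_calorimeter_j, voltage_v, current_a, arc_time_s):
    """Arc efficiency = heat absorbed by the calorimeter / electrical energy."""
    return q_calorimeter_j / (voltage_v * current_a * arc_time_s)

current, arc_time = 100.0, 30.0   # A, s (hypothetical)
q_cal = 32_000.0                  # J measured calorimetrically (hypothetical)
u_arc = 14.0                      # V between contact tube and workpiece
u_terminals = 15.5                # V at the power source, incl. lead drop

print("with arc voltage:     ", round(arc_efficiency(q_cal, u_arc, current, arc_time), 3))
print("with terminal voltage:", round(arc_efficiency(q_cal, u_terminals, current, arc_time), 3))
# The larger denominator from the lead voltage drop under-estimates the efficiency.
```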

  20. Estimating shadow prices and efficiency analysis of productive inputs and pesticide use of vegetable production

    NARCIS (Netherlands)

    Singbo, Alphonse G.; Lansink, Alfons Oude; Emvalomatis, Grigorios

    2015-01-01

    This paper analyzes technical efficiency and the value of the marginal product of productive inputs vis-a-vis pesticide use to measure allocative efficiency of pesticide use along productive inputs. We employ the data envelopment analysis framework and marginal cost techniques to estimate

  1. Environmental efficiency with multiple environmentally detrimental variables : estimated with SFA and DEA

    NARCIS (Netherlands)

    Reinhard, S.; Lovell, C.A.K.; Thijssen, G.J.

    2000-01-01

    The objective of this paper is to estimate comprehensive environmental efficiency measures for Dutch dairy farms. The environmental efficiency scores are based on the nitrogen surplus, phosphate surplus and the total (direct and indirect) energy use of an unbalanced panel of dairy farms. We define

  2. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  3. The role of efficiency estimates in regulatory price reviews: Ofgem's approach to benchmarking electricity networks

    International Nuclear Information System (INIS)

    Pollitt, Michael

    2005-01-01

    Electricity regulators around the world make use of efficiency analysis (or benchmarking) to produce estimates of the likely amount of cost reduction which regulated electric utilities can achieve. This short paper examines the use of such efficiency estimates by the UK electricity regulator (Ofgem) within electricity distribution and transmission price reviews. It highlights the place of efficiency analysis within the calculation of X factors. We suggest a number of problems with the current approach and make suggestions for the future development of X factor setting. (author)

  4. An organic group contribution approach to radiative efficiency estimation of organic working fluid

    International Nuclear Information System (INIS)

    Zhang, Xinxin; Kobayashi, Noriyuki; He, Maogang; Wang, Jingfu

    2016-01-01

    Highlights: • We use a group contribution method to estimate radiative efficiency. • CFCs, HCFCs, HFCs, HFEs, and PFCs were estimated using this method. • In most cases, the estimated value has good precision. • The method is reliable for the estimation of molecules with a symmetric structure. • This estimation method can offer a good reference for working fluid development. - Abstract: The ratification of the Montreal Protocol in 1987 and the Kyoto Protocol in 1997 marked an environmental protection era in the development of organic working fluids. Ozone depletion potential (ODP) and global warming potential (GWP) are the two most important indices for the quantitative comparison of organic working fluids. Nowadays, more and more attention has been paid to GWP. The calculation of GWP is an extremely complicated process which involves interactions between the surface and the atmosphere, such as atmospheric radiative transfer and atmospheric chemical reactions. The GWP of a substance is related to its atmospheric abundance and is itself a variable. However, radiative efficiency is an intermediate parameter for GWP calculation and is a constant value that describes an inherent property of a substance. In this paper, the group contribution method was adopted to estimate the radiative efficiency of organic substances containing more than one carbon atom. In most cases, the estimated value and the standard value are in good agreement. The biggest estimation error occurs in the estimation of the radiative efficiency of fluorinated ethers, due to their many structural groups and more complicated structure compared with hydrocarbons. This estimation method can be used to predict the radiative efficiency of newly developed organic working fluids.

  5. Impact of reduced marker set estimation of genomic relationship matrices on genomic selection for feed efficiency in Angus cattle

    Directory of Open Access Journals (Sweden)

    Northcutt Sally L

    2010-04-01

    Full Text Available Abstract Background Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
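
    One common construction of such a matrix is VanRaden's first method, sketched below in Python for a tiny random genotype matrix; whether the study used exactly this form is not stated in the abstract, so treat it as a generic illustration rather than the authors' implementation.

```python
import numpy as np

def vanraden_grm(genotypes):
    """Genomic relationship matrix G = Z Z' / (2 * sum p(1-p)), where the
    genotype matrix holds 0/1/2 allele counts (animals x SNPs) and Z is the
    genotype matrix centred by twice the allele frequencies (VanRaden, 2008)."""
    m = np.asarray(genotypes, float)
    p = m.mean(axis=0) / 2.0                 # allele frequency per SNP
    z = m - 2.0 * p                          # centring
    return z @ z.T / (2.0 * np.sum(p * (1.0 - p)))

# Toy example: 5 animals x 1,000 SNPs with random allele counts.
rng = np.random.default_rng(42)
geno = rng.binomial(2, rng.uniform(0.1, 0.9, 1000), size=(5, 1000))
G = vanraden_grm(geno)
print(np.round(G, 2))   # diagonal near 1 for non-inbred, unrelated animals
```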

  6. Demonstration of an efficient interpolation technique of inverse time and distance for Oceansat-2 wind measurements at 6-hourly intervals

    Directory of Open Access Journals (Sweden)

    J Swain

    2017-12-01

    Full Text Available The Indian Space Research Organization launched Oceansat-2 on 23 September 2009, and the scatterometer onboard was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. The observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and useful for driving the state-of-the-art numerical models for simulating ocean state if assimilated/blended with weather prediction model products. In this study, an efficient interpolation technique of inverse distance and time is demonstrated using the Oceansat-2 wind measurements alone for the selected month of June 2010 to generate gridded outputs. As the data are available only along the satellite tracks and there are obvious data gaps due to various other reasons, Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hour global wind fields for the global oceans were generated at 1 × 1 degree grid resolution. Such interpolated wind fields can be used to drive the state-of-the-art numerical models to predict/hindcast ocean state so as to experiment and test the utility/performance of satellite measurements alone in the absence of blended fields. The technique can be tested for other satellites that provide wind speed as well as direction data. However, the accuracy of the input winds is obviously expected to have a perceptible influence on the predicted ocean-state parameters. Here, some attempts are also made to compare the interpolated Oceansat-2 winds with available buoy measurements, and it was found that they are in reasonably good agreement, with a correlation coefficient of R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
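
    A minimal Python sketch of inverse distance-and-time weighting onto a regular grid is given below, assuming the along-track observations have already been read into arrays of longitude, latitude, time and wind speed. The weighting exponents, search radii and the synthetic data are illustrative, not the exact scheme used for the Oceansat-2 fields.

```python
import numpy as np

def idw_space_time(obs_lon, obs_lat, obs_t, obs_val,
                   grid_lon, grid_lat, t_analysis,
                   max_dist_deg=5.0, max_dt_hr=12.0, p=2.0):
    """Interpolate scattered (lon, lat, time) observations to grid points at
    one analysis time, weighting inversely by spatial distance and by the
    time offset from the analysis time."""
    out = np.full((grid_lat.size, grid_lon.size), np.nan)
    for i, glat in enumerate(grid_lat):
        for j, glon in enumerate(grid_lon):
            d = np.hypot(obs_lon - glon, obs_lat - glat)        # degrees
            dt = np.abs(obs_t - t_analysis)                     # hours
            use = (d < max_dist_deg) & (dt < max_dt_hr)
            if use.any():
                w = 1.0 / ((d[use] + 1e-6) ** p * (dt[use] + 1e-6))
                out[i, j] = np.sum(w * obs_val[use]) / np.sum(w)
    return out

# Synthetic along-track winds and a small 1-degree analysis grid at t = 6 h.
rng = np.random.default_rng(3)
lon, lat = rng.uniform(60, 70, 5000), rng.uniform(0, 10, 5000)
t = rng.uniform(0, 12, 5000)
wind = 6 + 0.3 * lat + rng.normal(0, 0.5, 5000)
grid = idw_space_time(lon, lat, t, wind,
                      np.arange(60.5, 70, 1.0), np.arange(0.5, 10, 1.0), 6.0)
print(grid.shape, np.nanmean(grid))
```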

  7. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions

    Directory of Open Access Journals (Sweden)

    Rik Crutzen

    2017-07-01

    Full Text Available When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

  8. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

    Science.gov (United States)

    Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

    2017-01-01

    When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.
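
    The core quantities visualized in this approach are, for every candidate determinant, a confidence interval for its mean and a confidence interval for its correlation with the target behavior. The authors provide their own tool for these analyses; the minimal sketch below is not taken from that tool and merely shows how the two intervals can be computed (t-based CI for the mean, Fisher-z CI for the Pearson correlation).

```python
import numpy as np
from scipy import stats

def mean_ci(x, conf=0.95):
    """Confidence interval for the mean score of one determinant (t-based)."""
    x = np.asarray(x, dtype=float)
    m, se = x.mean(), stats.sem(x)
    h = se * stats.t.ppf((1 + conf) / 2, len(x) - 1)
    return m - h, m, m + h

def correlation_ci(x, y, conf=0.95):
    """Confidence interval for the Pearson correlation between a determinant
    and the target behavior, via the Fisher z transform."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    z, se = np.arctanh(r), 1.0 / np.sqrt(len(x) - 3)
    zcrit = stats.norm.ppf((1 + conf) / 2)
    return np.tanh(z - zcrit * se), r, np.tanh(z + zcrit * se)

# For each candidate determinant one would plot mean_ci(scores) alongside
# correlation_ci(scores, behavior), then select determinants that leave room
# for improvement and show a clear association with the behavior.
```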

  9. Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation

    KAUST Repository

    Amin, Osama

    2015-06-08

    Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Unlike existing research, which assumes the availability of perfect channel state information (CSI) at the cooperative communication nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are considered: disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, in which the relay is not equipped with a channel estimator and only forwards the received pilot(s) so that the destination can estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and by accounting for the energy consumed during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.

  10. Alteration in cardiac uncoupling proteins and eNOS gene expression following high-intensity interval training in favor of increasing mechanical efficiency.

    Science.gov (United States)

    Fallahi, Ali Asghar; Shekarfroush, Shahnaz; Rahimi, Mostafa; Jalali, Amirhossain; Khoshbaten, Ali

    2016-03-01

    High-intensity interval training (HIIT) increases energy expenditure and mechanical energy efficiency. Although both uncoupling proteins (UCPs) and endothelial nitric oxide synthase (eNOS) affect the mechanical efficiency and antioxidant capacity, their effects are inverse. The aim of this study was to determine whether the alterations of cardiac UCP2, UCP3, and eNOS mRNA expression following HIIT are in favor of increased mechanical efficiency or decreased oxidative stress. Wistar rats were divided into five groups: control group (n=12), HIIT for an acute bout (AT1), short-term HIIT for 3 and 5 sessions (ST3 and ST5), long-term training for 8 weeks (LT) (6 in each group). The rats of the training groups were made to run on a treadmill for 60 min in three stages: 6 min running for warm-up, 7 intervals of 7 min running on the treadmill with a slope of 5° to 20° (4 min with an intensity of 80-110% VO2max and 3 min at 50-60% VO2max), and 5 min running for cool-down. The control group did not participate in any exercise program. Rats were sacrificed and the hearts were extracted to analyze the levels of UCP2, UCP3 and eNOS mRNA by RT-PCR. UCP3 expression was increased significantly following an acute training bout. Repeated HIIT for 8 weeks resulted in a significant decrease in UCP mRNA and a significant increase in eNOS expression in cardiac muscle. This study indicates that long-term HIIT, through decreasing UCP mRNA and increasing eNOS mRNA expression, may enhance energy efficiency and physical performance.

  11. RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION

    Directory of Open Access Journals (Sweden)

    Archana V

    2011-04-01

    Full Text Available The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those estimators are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in finite populations using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real-life dataset. The ratio estimator using information on the population C.V. of the auxiliary variable emerges as the best estimator.

  12. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive model for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  13. Fourier Spot Volatility Estimator: Asymptotic Normality and Efficiency with Liquid and Illiquid High-Frequency Data

    Science.gov (United States)

    2015-01-01

    The recent availability of high frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high frequency data are generally affected by noise-microstructure effects. We address this issue by using the Fourier estimator of instantaneous volatility introduced in Malliavin and Mancino 2002. We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis on high frequency data (U.S. S&P500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617

  14. Estimate of Technical Potential for Minimum Efficiency Performance Standards in 13 Major World Economies

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.

  15. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.
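
    As a concrete illustration of combining least-squares calibration with a regularization term, here is a minimal sketch (not the authors' global-optimization method) that fits a toy two-state dynamic model with a Tikhonov penalty pulling the parameters toward a reference guess; the model, parameter names and penalty weight are all hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, x, k1, k2):
    """Toy two-state dynamic model (hypothetical, for illustration only)."""
    s, p = x
    return [-k1 * s, k1 * s - k2 * p]

def simulate(theta, t_obs):
    """Simulate the observed state; parameters are estimated in log space."""
    k1, k2 = np.exp(theta)
    sol = solve_ivp(model, (0, t_obs[-1]), [1.0, 0.0], t_eval=t_obs,
                    args=(k1, k2), rtol=1e-8)
    return sol.y[1]

def residuals(theta, t_obs, y_obs, theta_ref, lam):
    """Data misfit plus a Tikhonov penalty toward a reference parameter guess."""
    return np.concatenate([simulate(theta, t_obs) - y_obs,
                           np.sqrt(lam) * (theta - theta_ref)])

rng = np.random.default_rng(1)
t_obs = np.linspace(0.5, 10, 20)
y_obs = simulate(np.log([0.8, 0.3]), t_obs) + rng.normal(0, 0.02, t_obs.size)
fit = least_squares(residuals, x0=np.log([0.5, 0.5]),
                    args=(t_obs, y_obs, np.log([0.5, 0.5]), 1e-2))
print("estimated rates:", np.exp(fit.x))
```

    The penalty weight plays the role of the regularization parameter whose tuning the paper discusses; setting it too high biases the estimates, setting it to zero reverts to plain (possibly overfitted) least squares.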

  16. AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN

    Directory of Open Access Journals (Sweden)

    Nabeel Hussain

    2014-04-01

    Full Text Available This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. Maximum likelihood estimation was used to estimate a Cobb-Douglas frontier production function. The analysis revealed that the estimated mean technical efficiency was 77 percent, indicating that total output can be further increased with efficient use of resources and technology. The estimated gamma value was 0.93, which indicates that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, age having a positive and education a negative relationship with the output of garlic. This study suggests that, in order to increase garlic production by taking advantage of the farmers' high efficiency level, the government should invest in research and development for introducing good-quality seed to increase garlic productivity and should organize training programs to educate farmers about garlic production.

  17. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    Science.gov (United States)

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using (1)H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.
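
    The paper's eSVR builds a boosting-type ensemble around SVR to obtain confidence intervals; as a simplified stand-in, the sketch below uses a plain bootstrap (bagging) ensemble of scikit-learn SVR models and reports the ensemble mean together with an empirical percentile interval. The hyperparameters and the interval construction are illustrative choices, not those of the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def ensemble_svr_predict(X_train, y_train, X_new, n_models=50,
                         conf=0.95, seed=0):
    """Bootstrap ensemble of SVR models: returns the mean prediction and an
    empirical (percentile) interval for each new sample.
    X_train, y_train, X_new are NumPy arrays."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)                 # bootstrap resample
        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
        model.fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_new))
    preds = np.vstack(preds)
    alpha = (1.0 - conf) / 2.0
    return (preds.mean(axis=0),
            np.quantile(preds, alpha, axis=0),
            np.quantile(preds, 1.0 - alpha, axis=0))
```

    Samples whose observed values repeatedly fall outside their ensemble interval are natural outlier candidates, which is how such an ensemble can also support the outlier screening mentioned in the abstract.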

  18. A new optimization tool path planning for 3-axis end milling of free-form surfaces based on efficient machining intervals

    Science.gov (United States)

    Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter

    2018-05-01

    A large number of studies, based on 3-axis end milling of free-form surfaces, seek to optimize tool path planning. Approaches try to optimize the machining time by reducing the total tool path length while respecting the criterion of the maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is the largest. The free-form surface is often considered as a single machining area. Therefore, the optimization on the entire surface is limited. Indeed, it is difficult to define tool trajectories with optimal feed directions which generate the largest machined widths. Another point that limits the ability of previous approaches to effectively reduce machining time is the inadequate choice of tool. Researchers generally use a spherical tool on the entire surface. However, the gains proposed by these different methods developed with these tools lead to relatively small time savings. Therefore, this study proposes a new method, using toroidal milling tools, for generating toolpaths in different regions on the machining surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows one to mill the entire surface with greater efficiency than with the use of a spherical tool. The proposed method is calculated and modeled using Maple software to find optimal regions and feed directions in each region. This new method is tested on a free-form surface. A comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also done. The results show the

  19. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used as a vehicle for fitting the conditional Poisson regressions, given a latent process of serially correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  20. OPTIMIZATION OF THE CRITERION FOR ESTIMATING THE TECHNOLOGY EFFICIENCY OF PACKING-CASE-PIECE LOADS DELIVERY

    OpenAIRE

    O. Severyn; O. Shulika

    2017-01-01

    The results of optimizing the weighting coefficients for the indexes included in the integral criterion for estimating the efficiency of transport-technological schemes of cargo delivery are presented. The values of the weighting coefficients are determined on the basis of two methods of experimental research: a questionnaire survey of specialists in motor transport operations and simulation modelling.

  1. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau

    2009-03-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.

  2. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau; Chatterjee, Nilanjan; Carroll, Raymond J.

    2009-01-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.

  3. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will

  4. Post-Colonization Interval Estimates Using Multi-Species Calliphoridae Larval Masses and Spatially Distinct Temperature Data Sets: A Case Study

    Science.gov (United States)

    Weatherbee, Courtney R.; Pechal, Jennifer L.; Stamper, Trevor; Benbow, M. Eric

    2017-01-01

    Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates. PMID:28375172

  5. Post-Colonization Interval Estimates Using Multi-Species Calliphoridae Larval Masses and Spatially Distinct Temperature Data Sets: A Case Study

    Directory of Open Access Journals (Sweden)

    Courtney R. Weatherbee

    2017-04-01

    Full Text Available Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates.
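
    Accumulated degree hours (ADH) underpin the comparison in this record: temperatures above a developmental base temperature are summed hour by hour and checked against published expected ADH for a life stage. A minimal sketch follows; the base temperature and target ADH are placeholders that would come from species-specific developmental data sets, not values from this study.

```python
import numpy as np

def accumulated_degree_hours(temps_c, base_temp_c=10.0):
    """Sum of hourly degrees above a developmental base temperature.
    temps_c: hourly ambient temperatures (deg C); base_temp_c is a placeholder,
    the real value depends on the blow fly species and the published data set."""
    temps_c = np.asarray(temps_c, dtype=float)
    return np.sum(np.clip(temps_c - base_temp_c, 0.0, None))

def hours_to_reach(temps_c, target_adh, base_temp_c=10.0):
    """First hour index at which the running ADH reaches a published expected
    ADH (e.g., for third-instar presence); None if it is never reached."""
    running = np.cumsum(np.clip(np.asarray(temps_c, float) - base_temp_c, 0.0, None))
    hit = int(np.argmax(running >= target_adh))
    return hit if running[hit] >= target_adh else None
```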

  6. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang

    2016-01-01

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
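
    The two ingredients described here — kernel density values maintained on a grid and updated incrementally, with queries answered by interpolation — can be illustrated with a very small one-dimensional sketch. This is not the KDE-Track implementation (in particular, the adaptive bandwidth selection is replaced by a fixed constant); it only shows the linear-time update-and-interpolate idea.

```python
import numpy as np

class GridKDE1D:
    """Incremental kernel density estimate maintained on a fixed grid
    (a simplified stand-in for the KDE-Track idea; the paper's adaptive
    bandwidth selection is replaced here by a fixed constant)."""

    def __init__(self, lo, hi, n_points=256, bandwidth=0.1):
        self.grid = np.linspace(lo, hi, n_points)
        self.density = np.zeros(n_points)
        self.h = bandwidth
        self.n = 0

    def update(self, x):
        """Fold one new sample into the running average of kernel values."""
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))
        self.n += 1
        self.density += (k - self.density) / self.n     # incremental mean

    def evaluate(self, x):
        """Density at arbitrary points by linear interpolation on the grid."""
        return np.interp(x, self.grid, self.density)

# usage: stream samples one by one
kde = GridKDE1D(-5, 5, bandwidth=0.3)
for value in np.random.default_rng(0).normal(0, 1, 10_000):
    kde.update(value)
print(kde.evaluate([0.0, 1.0, 2.0]))
```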

  7. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-11-08

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.

  8. A novel method for coil efficiency estimation: Validation with a 13C birdcage

    DEFF Research Database (Denmark)

    Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina

    2012-01-01

    Coil efficiency, defined as the B1 magnetic field induced at a given point per square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...

  9. Estimating the magnitude of annual peak discharges with recurrence intervals between 1.1 and 3.0 years for rural, unregulated streams in West Virginia

    Science.gov (United States)

    Wiley, Jeffrey B.; Atkins, John T.; Newell, Dawn A.

    2002-01-01

    Multiple and simple least-squares regression models for the log10-transformed 1.5- and 2-year recurrence intervals of peak discharges with independent variables describing the basin characteristics (log10-transformed and untransformed) for 236 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions in West Virginia designated as East, North, and South. Regional equations for the 1.1-, 1.2-, 1.3-, 1.4-, 1.5-, 1.6-, 1.7-, 1.8-, 1.9-, 2.0-, 2.5-, and 3-year recurrence intervals of peak discharges were determined by generalized least-squares regression. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracies of estimating equations are quantified by measuring the average prediction error (from 27.4 to 52.4 percent) and equivalent years of record (from 1.1 to 3.4 years).
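
    The regional equations take the form log10(Q_T) = a + b·log10(drainage area), fitted in the study by generalized least squares. The sketch below fits the same model form by ordinary least squares to made-up station data, purely to show how such an equation is obtained and applied; the numbers are hypothetical and are not from the report.

```python
import numpy as np

# Hypothetical station data: drainage area (mi^2) and 2-year peak discharge (ft^3/s).
# The study itself used generalized least squares over 236 gaged sites; this is an
# ordinary least-squares illustration of the same model form.
drainage_area = np.array([12.0, 45.0, 88.0, 150.0, 320.0, 610.0])
q2 = np.array([380.0, 1100.0, 1900.0, 2800.0, 5200.0, 8600.0])

b, a = np.polyfit(np.log10(drainage_area), np.log10(q2), deg=1)   # slope, intercept
print(f"log10(Q2) = {a:.3f} + {b:.3f} * log10(DA)")

def predict_q2(da_mi2):
    """Estimated 2-year peak discharge for a rural, unregulated basin (sketch only)."""
    return 10 ** (a + b * np.log10(da_mi2))

print(predict_q2(200.0))   # estimate for a 200 mi^2 basin
```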

  10. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series.

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self similarity over long time scales by a power law in the frequency spectrum, S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis as most prevailing method in the literature exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
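
    The paper favours the averaged wavelet coefficient method; as a compact stand-in, the sketch below shows the most basic spectral approach, estimating β from the slope of a log–log fit to the Welch periodogram, which is one of the techniques such comparisons typically include. The frequency limits and segment length are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def spectral_exponent(x, fs=1.0, fmin=None, fmax=None):
    """Estimate beta in S(f) ~ 1/f**beta from the slope of log S versus log f.
    This is the plain PSD-regression estimator, not the averaged wavelet
    coefficient method recommended in the paper."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    keep = f > 0
    if fmin is not None:
        keep &= f >= fmin
    if fmax is not None:
        keep &= f <= fmax
    slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
    return -slope   # S(f) ~ f**(-beta)

# quick sanity check on white noise (expected beta close to 0)
rng = np.random.default_rng(2)
print(spectral_exponent(rng.normal(size=4096)))
```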

  11. Estimation of electric fields and currents from International Magnetospheric Study magnetometer data for the CDAW 6 intervals: Implications for substorm dynamics

    International Nuclear Information System (INIS)

    Kamide, Y.; Baumjohann, W.

    1985-01-01

    Using a recently developed numerical scheme combined with International Magnetospheric Study magnetometer data and the Rice University ionospheric conductivity model as input, the global distribution of the key ionospheric parameters is estimated for the Coordinated Data Analysis Workshop (CDAW) 6 intervals. These outputs include ionospheric electric fields and currents, field-aligned currents and Joule heat production rate at high latitudes, and are compiled in the form of a color movie film, which demonstrates dynamics of substorm changes of the three-dimensional current system as well as of the associated potential pattern. The present paper gives, on the basis of the space-time distribution of the key parameters, the substorm time frame that can be referenced to in terms of the substorm phases when discussing some other magnetospheric and ionospheric records. The distinction between "substorm expansion" and "enhanced convection" current systems is presented on the basis of the conventional equivalent current and potential patterns and "true" ionospheric currents. Although the auroral electrojets flow rather contiguously throughout the dark sector, there are several separate source regions of Joule heating from the electrojet currents. This indicates that the relative importance of the ionospheric conductivity and the electric field in the ionospheric currents varies considerably depending upon latitude and local time. A possible difference in the generation mechanisms of isolated and continuous substorm activity is also discussed to some extent in the light of the two CDAW 6 intervals.

  12. GC-MS analysis of cuticular lipids in recent and older scavenger insect puparia. An approach to estimate the postmortem interval (PMI).

    Science.gov (United States)

    Frere, B; Suchaud, F; Bernier, G; Cottin, F; Vincent, B; Dourel, L; Lelong, A; Arpino, P

    2014-02-01

    An analytical method was developed to characterize puparia cuticular lipids (hydrocarbons, waxes) and to compare the molecular distribution patterns in the extracts from either recent or older puparia. Acid-catalyzed transesterification and solvent extraction and purification, followed by combined gas chromatography coupled to mass spectrometry, were optimized for the determination of hydrocarbons and fatty acid ethyl esters from transesterified waxes, extracted from a single species of fly scavenger (Hydrotaea aenescens Wiedemann, 1830). Comparison between recent (2012) and older (1997) puparia contents highlighted significant composition differences: in particular, a general decrease of the chain length in the n-alkane distribution pattern and, conversely, an increase of the ester chain length. Both extracts contain traces of three hopane hydrocarbon congeners. Preliminary results evidence the change in puparia lipid composition over time, thus potentially providing new indices for estimating the postmortem interval.

  13. Effects of malathion on the insect succession and the development of Chrysomya megacephala (Diptera: Calliphoridae) in the field and implications for estimating postmortem interval.

    Science.gov (United States)

    Yan-Wei, Shi; Xiao-Shan, Liu; Hai-Yang, Wang; Run-Jie, Zhang

    2010-03-01

    A field study was conducted on the effects of malathion on insect succession and the development of carrion flies on corpses, and on its quantitative determination from the larvae on decomposing rabbit carrion. The rabbits were treated with malathion at concentrations of lethal, half-lethal and fourth-lethal doses. Malathion altered decomposition rates and species diversity: Chrysomya megacephala (Diptera: Calliphoridae) was the most abundant adult species in all the experiments; third instar larvae of Chrysomya rufifacies (Diptera: Calliphoridae) were not found on the toxic carcasses but were collected from the control; and the appearance of beetles on the treated carcasses was 1 to 3 days later than on the control carcass. The development rate of larvae and pupae of the dominant species, C. megacephala, was observed. Stepwise increases in the period of larval development, the maximum length of larvae, and the weight of pupae were observed with increasing malathion concentrations. However, there was no significant difference in the duration of the pupal stage. The differences in development rate were sufficient to alter postmortem interval estimates based on larval development by 12 to 36 hours. The time of finding fresh pupae from the fourth-lethal carcass was 12 hours later than for the control. Accumulations of the pesticide in larvae were observed, but no correlations were found between larval concentrations and the initial quantity administered to rabbits. In conclusion, it is necessary to consider the effects of malathion present in decomposing bodies when estimating the postmortem interval based on entomological evidence. The results of this study have particularly practical implications for forensic investigations because the experiments were conducted under natural conditions.

  14. LocExpress: a web server for efficiently estimating expression of novel transcripts.

    Science.gov (United States)

    Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge

    2016-12-22

    The temporally and spatially specific expression pattern of a transcript in multiple tissues and cell types can provide key clues about its function. While several gene atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e., novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper to an RNA-Seq quantification algorithm, LocExpress efficiently reduces the time cost by making abundance estimation calls increasingly within the minimum spanning bundle region of input transcripts. For a given novel gene model, such a local context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn .

  15. Efficient estimation of dynamic density functions with an application to outlier detection

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin

    2012-01-01

    In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on a conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method Cluster-Kernels on estimation accuracy of the complex density structures in data streams, computing time and memory usage. KDE-Track is also demonstrated on timely catching the dynamic density of synthetic and real-world data. In addition, KDE-Track is used to accurately detect outliers in sensor data and compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.

  16. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  17. Efficiency of the estimators of multivariate distribution parameters from the one-dimensional observed frequencies

    International Nuclear Information System (INIS)

    Chernov, N.I.; Kurbatov, V.S.; Ososkov, G.A.

    1988-01-01

    Parameter estimation for multivariate probability distributions is studied in experiments where data are presented as one-dimensional histograms. For this model, a statistic defined as a quadratic form of the observed frequencies, which has a limiting χ²-distribution, is proposed. The efficiency of the estimator minimizing the value of that statistic is proved within the class of all unbiased estimators obtained via minimization of quadratic forms of observed frequencies. The elaborated method was applied to the physical problem of analysing the secondary pion energy distribution in the isobar model of pion-nucleon interactions with the production of an additional pion. The numerical experiments showed that the accuracy of estimation is twice as high as that of conventional methods.
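
    The familiar special case of minimizing a quadratic form of observed frequencies is the minimum Pearson chi-square fit of a parametric model to one-dimensional histogram counts. The sketch below shows that special case for a normal model; it is only an illustration of the general idea, not the generalized quadratic-form statistic proposed in the record.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def chi2_statistic(params, bin_edges, counts):
    """Pearson chi-square between observed bin counts and the counts expected
    under a normal model -- the simplest quadratic form of observed frequencies."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    probs = np.diff(norm.cdf(bin_edges, loc=mu, scale=sigma))
    expected = np.maximum(counts.sum() * probs, 1e-9)   # guard against empty bins
    return np.sum((counts - expected) ** 2 / expected)

rng = np.random.default_rng(3)
data = rng.normal(5.0, 2.0, size=2000)
counts, edges = np.histogram(data, bins=30)
fit = minimize(chi2_statistic, x0=[np.mean(data), np.std(data)],
               args=(edges, counts), method="Nelder-Mead")
print("estimated mu, sigma:", fit.x)
```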

  18. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    Science.gov (United States)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

    Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using Biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76; Mean RMSE plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite based LUE models.
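
    The underlying MOD17-style model computes GPP as a maximum light use efficiency attenuated by minimum-temperature and VPD ramp scalars and multiplied by FPAR and PAR. The sketch below shows that model form with placeholder ramp limits (MOD17 tabulates them per biome); the study's point is that the lue_opt input should come from a spatially varying LUEopt surface rather than a per-biome constant.

```python
import numpy as np

def ramp(x, x_lo, x_hi):
    """Linear ramp scalar: 0 at or beyond x_lo, 1 at or beyond x_hi.
    Passing x_lo > x_hi reverses the ramp (used for the VPD scalar)."""
    return np.clip((x - x_lo) / (x_hi - x_lo), 0.0, 1.0)

def gpp_lue(par, fpar, tmin_c, vpd_pa, lue_opt,
            tmin_limits=(-8.0, 10.0), vpd_limits=(3500.0, 650.0)):
    """MOD17-style light use efficiency GPP (gC m-2 d-1 for PAR in MJ m-2 d-1
    and lue_opt in gC per MJ). The ramp limits are illustrative placeholders;
    MOD17 tabulates them per biome, and lue_opt here would be drawn from the
    spatially interpolated LUEopt map described in the abstract."""
    t_scalar = ramp(tmin_c, *tmin_limits)   # cold limitation: 0 when very cold
    v_scalar = ramp(vpd_pa, *vpd_limits)    # water limitation: 0 at high VPD
    return lue_opt * t_scalar * v_scalar * fpar * par

print(gpp_lue(par=8.0, fpar=0.7, tmin_c=6.0, vpd_pa=1200.0, lue_opt=1.1))
```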

  19. HIF1α protein and mRNA expression as a new marker for post mortem interval estimation in human gingival tissue.

    Science.gov (United States)

    Fais, Paolo; Mazzotti, Maria Carla; Teti, Gabriella; Boscolo-Berto, Rafael; Pelotti, Susi; Falconi, Mirella

    2018-06-01

    Estimating the post mortem interval (PMI) is still a crucial step in Forensic Pathology. Although several methods are available for assessing the PMI, a precise estimation is still quite unreliable and can be inaccurate. The present study aimed to investigate the immunohistochemical distribution and mRNA expression of hypoxia inducible factor (HIF-1α) in post mortem gingival tissues to establish a correlation between the presence of HIF-1α and the time since death, with the final goal of achieving a more accurate PMI estimation. Samples of gingival tissues were obtained from 10 cadavers at different PMIs (1-3 days, 4-5 days and 8-9 days), and were processed for immunohistochemistry and quantitative reverse transcription-polymerase chain reaction. The results showed a time-dependent correlation of HIF-1α protein and its mRNA with different times since death, which suggests that HIF-1α is a potential marker for PMI estimation. The results showed a high HIF-1α protein signal that was mainly localized in the stratum basale of the oral mucosa in samples collected at a short PMI (1-3 days). It gradually decreased in samples collected at a medium PMI (4-5 days), but it was not detected in samples collected at a long PMI (8-9 days). These results are in agreement with the mRNA data. These data indicate an interesting potential utility of Forensic Anatomy-based techniques, such as immunohistochemistry, as important complementary tools to be used in forensic investigations. © 2018 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.

  20. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRT can be used for increasing the relative
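
    The second VRT described here — following only a fraction of the generated optical photons while rescaling their statistical weight so the expected tally is unchanged — can be shown with a toy tally. The sketch below is not the fastdetect2 transport code; detection is reduced to a single Bernoulli step purely to illustrate mean-preserving weighting and the reduction in photons actually tracked.

```python
import numpy as np

def detected_signal(n_photons, detect_prob, follow_fraction=1.0, rng=None):
    """Toy optical-transport tally. Each generated photon is followed only with
    probability follow_fraction; followed photons carry weight 1/follow_fraction
    so the expected tally matches the analog (follow_fraction = 1) case."""
    rng = rng or np.random.default_rng()
    followed = rng.random(n_photons) < follow_fraction
    n_followed = int(followed.sum())
    weight = 1.0 / follow_fraction
    detected = rng.random(n_followed) < detect_prob      # stand-in 'transport' step
    return weight * detected.sum(), n_followed           # weighted tally, photons tracked

rng = np.random.default_rng(4)
analog = detected_signal(100_000, 0.3, 1.0, rng)
vrt = detected_signal(100_000, 0.3, 0.1, rng)
print("analog tally:", analog[0], "photons tracked:", analog[1])
print("VRT tally:   ", vrt[0], "photons tracked:", vrt[1])
```

    Both runs have the same expected tally, but the VRT run tracks roughly ten times fewer photons; the trade-off, as the abstract notes, is the extra variance introduced by the weighting, which is what the efficiency comparison quantifies.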

  1. Efficient optimal joint channel estimation and data detection for massive MIMO systems

    KAUST Repository

    Alshamary, Haider Ali Jasim

    2016-08-15

    In this paper, we propose an efficient optimal joint channel estimation and data detection algorithm for massive MIMO wireless systems. Our algorithm is optimal in terms of the generalized likelihood ratio test (GLRT). For massive MIMO systems, we show that the expected complexity of our algorithm grows polynomially in the channel coherence time. Simulation results demonstrate significant performance gains of our algorithm compared with suboptimal non-coherent detection algorithms. To the best of our knowledge, this is the first algorithm which efficiently achieves GLRT-optimal non-coherent detections for massive MIMO systems with general constellations.

  2. Program Potential: Estimates of Federal Energy Cost Savings from Energy Efficient Procurement

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Margaret [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-09-17

    In 2011, energy used by federal buildings cost approximately $7 billion. Reducing federal energy use could help address several important national policy goals, including: (1) increased energy security; (2) lowered emissions of greenhouse gases and other air pollutants; (3) increased return on taxpayer dollars; and (4) increased private sector innovation in energy efficient technologies. This report estimates the impact of efficient product procurement on reducing the amount of wasted energy (and, therefore, wasted money) associated with federal buildings, as well as on reducing the needless greenhouse gas emissions associated with these buildings.

  3. Estimation of Gasoline Price Elasticities of Demand for Automobile Fuel Efficiency in Korea: A Hedonic Approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sung Tae [Sungkyunkwan University, Seoul (Korea); Lee, Myunghun [Keimyung University, Taegu (Korea)

    2001-03-01

    This paper estimates the gasoline price elasticities of demand for automobile fuel efficiency in Korea to examine indirectly whether the government policy of raising fuel prices is effective in inducing less consumption of fuel, relying on a hedonic technique developed by Atkinson and Halvorsen (1984). One of the advantages of this technique is that data for a single year, without involving variation in the price of gasoline, are sufficient for implementing the study. Moreover, this technique enables us to circumvent the multicollinearity problem, which had reduced the reliability of the results in previous hedonic studies. The estimated elasticity of demand for fuel efficiency with respect to the price of gasoline is, on average, 0.42. (author). 30 refs., 3 tabs.

  4. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, gyro, magnetometer, sun sensor and star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, an inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation. Therefore, an inverse of a 3n×3n matrix is replaced by an inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drifts over time can reduce the pointing accuracy. Therefore, a calibration algorithm is utilized for estimation of the main gyro parameters.
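
    The computational point of Murrell's version is that measurements are folded in one sensor at a time, so each gain computation inverts a 3×3 innovation covariance instead of a single 3n×3n matrix. The generic sketch below shows that sequential update (with a Joseph-form covariance update); it is not the paper's full quaternion attitude filter, and the measurement tuple layout is an assumption made for illustration.

```python
import numpy as np

def sequential_measurement_update(x, P, measurements):
    """Murrell-style sequential EKF measurement update (generic sketch).

    measurements: iterable of (z, h_fun, H, R) with z a 3-vector observation,
    h_fun(x) the predicted measurement, H the 3xN Jacobian and R the 3x3
    measurement noise covariance. Each step inverts only a 3x3 matrix,
    instead of one 3n x 3n inverse for n stacked sensors."""
    I = np.eye(len(x))
    for z, h_fun, H, R in measurements:
        S = H @ P @ H.T + R                      # 3x3 innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # gain for this sensor only
        x = x + K @ (z - h_fun(x))
        P = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T   # Joseph form
    return x, P
```

    Sequential processing like this assumes the noise of the individual sensors is mutually uncorrelated, which is the usual setting for magnetometer, sun sensor and star tracker measurements.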

  5. Development and validation of a new technique for estimating a minimum postmortem interval using adult blow fly (Diptera: Calliphoridae) carcass attendance.

    Science.gov (United States)

    Mohr, Rachel M; Tomberlin, Jeffery K

    2015-07-01

    Understanding the onset and duration of adult blow fly activity is critical to accurately estimating the period of insect activity or minimum postmortem interval (minPMI). Few, if any, reliable techniques have been developed and consequently validated for using adult fly activity to determine a minPMI. In this study, adult blow flies (Diptera: Calliphoridae) of Cochliomyia macellaria and Chrysomya rufifacies were collected from swine carcasses in rural central Texas, USA, during summer 2008, and Phormia regina and Calliphora vicina in the winters of 2009 and 2010. Carcass attendance patterns of blow flies were related to species, sex, and oocyte development. Summer-active flies were found to arrive 4-12 h after initial carcass exposure, with both C. macellaria and C. rufifacies arriving within 2 h of one another. Winter-active flies arrived within 48 h of one another. There was a significant difference in the degree of oocyte development on each of the first 3 days postmortem. These frequency differences allowed a minPMI to be calculated using a binomial analysis. When validated with seven tests using domestic and feral swine and human remains, the technique correctly estimated the time of placement in six trials.

  6. Comparative analysis of bones, mites, soil chemistry, nematodes and soil micro-eukaryotes from a suspected homicide to estimate the post-mortem interval.

    Science.gov (United States)

    Szelecz, Ildikó; Lösch, Sandra; Seppey, Christophe V W; Lara, Enrique; Singer, David; Sorge, Franziska; Tschui, Joelle; Perotti, M Alejandra; Mitchell, Edward A D

    2018-01-08

    Criminal investigations of suspected murder cases require estimating the post-mortem interval (PMI, or time after death), which is challenging for long PMIs. Here we present the case of human remains found in a Swiss forest. We used a multidisciplinary approach involving the analysis of bones and of soil samples collected beneath the remains of the head, upper and lower body, and "control" samples taken a few meters away. We analysed soil chemical characteristics, mites and nematodes (by microscopy) and micro-eukaryotes (by Illumina high-throughput sequencing). The PMI estimate based on hair ¹⁴C data (bomb-peak radiocarbon dating) gave a time range of 1 to 3 years before the discovery of the remains. Cluster analyses of soil chemical constituents, nematodes, mites and micro-eukaryotes revealed two clusters: (1) head and upper body and (2) lower body and controls. From the mite evidence, we conclude that the body was probably brought to the site after death. However, chemical analyses, nematode community analyses and the analyses of micro-eukaryotes indicate that decomposition took place at least partly on site. This study illustrates the usefulness of combining several lines of evidence in the study of homicide cases to better calibrate PMI inference tools.

  7. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited by the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  8. Estimation of absorbed photosynthetically active radiation and vegetation net production efficiency using satellite data

    International Nuclear Information System (INIS)

    Hanan, N.P.; Prince, S.D.; Begue, A.

    1995-01-01

    The amount of photosynthetically active radiation (PAR) absorbed by green vegetation is an important determinant of photosynthesis and growth. Methods for the estimation of fractional absorption of PAR (f_PAR) for areas greater than 1 km² using satellite data are discussed, and are applied to sites in the Sahel that have a sparse herb layer and tree cover of less than 5%. Using harvest measurements of seasonal net production, net production efficiencies are calculated. Variation in estimates of seasonal PAR absorption (APAR) caused by the atmospheric correction method and by the relationship between surface reflectances and f_PAR is considered. The use of maximum value composites of satellite NDVI to reduce the effect of the atmosphere is shown to produce inaccurate APAR estimates. In this data set, however, atmospheric correction using average optical depths was found to give good approximations of the fully corrected data. A simulation of canopy radiative transfer using the SAIL model was used to derive a relationship between canopy NDVI and f_PAR. Seasonal APAR estimates assuming a 1:1 relationship between f_PAR and NDVI overestimated the SAIL-modeled results by up to 260%. The use of a modified 1:1 relationship, where f_PAR was assumed to be linearly related to NDVI scaled between minimum (soil) and maximum (infinite canopy) values, underestimated the SAIL-modeled results by up to 35%. Estimated net production efficiencies (ε_n, dry matter per unit APAR) fell in the range 0.12–1.61 g MJ⁻¹ for above-ground production, and in the range 0.16–1.88 g MJ⁻¹ for total production. Sites with lower rainfall had reduced efficiencies, probably caused by physiological constraints on photosynthesis during dry conditions. (author)
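
    As a minimal numerical sketch of the scaled-NDVI approach described above, the code below maps NDVI to f_PAR linearly between an assumed bare-soil value and an assumed infinite-canopy value, accumulates APAR over a season, and converts it to dry-matter production with an assumed efficiency ε_n; all numbers are illustrative, not values from the study.

        import numpy as np

        def fpar_from_ndvi(ndvi, ndvi_soil=0.15, ndvi_inf=0.85):
            """Scaled 1:1 relationship: f_PAR linear in NDVI between soil and
            infinite-canopy end members, clipped to the physical range [0, 1]."""
            return np.clip((ndvi - ndvi_soil) / (ndvi_inf - ndvi_soil), 0.0, 1.0)

        # Illustrative 10-day composites over a growing season.
        ndvi = np.array([0.18, 0.25, 0.40, 0.55, 0.60, 0.50, 0.35, 0.22])
        par = np.full_like(ndvi, 90.0)              # incident PAR per composite, MJ m-2

        apar = np.sum(fpar_from_ndvi(ndvi) * par)   # seasonal APAR, MJ m-2
        eps_n = 1.0                                 # assumed efficiency, g MJ-1
        production = eps_n * apar                   # dry matter, g m-2
        print(round(apar, 1), round(production, 1))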

  9. A possible approach to estimating the operational efficiency of multiprocessor systems

    International Nuclear Information System (INIS)

    Kuznetsov, N.Y.; Gorlach, S.P.; Sumskaya, A.A.

    1984-01-01

    This article presents a mathematical model that constructs upper and lower estimates of the efficiency with which a multiprocessor system of a specific architecture solves a large class of problems. Efficiency depends on the system's architecture (e.g., the number of processors, memory volume, the number of communication links, commutation speed) and the types of problems it is intended to solve. The behavior of the model is considered in a stationary mode. The model is used to evaluate the efficiency of a particular algorithm implemented in a multiprocessor system. It is concluded that the model is flexible and enables the investigation of a broad class of problems in computational mathematics, including linear algebra and boundary-value problems of mathematical physics.

  10. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
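
    The distinction the note draws between supported and unsupported Pareto-efficient solutions can be illustrated with a small sketch: a weighted-sum scalarization can only ever select non-dominated points that lie on the convex hull of the objective space, whereas a direct dominance filter recovers the full Pareto set. The toy objective vectors below are hypothetical and unrelated to the matrix-permutation application.

        import numpy as np

        def pareto_front(points):
            """Return the non-dominated points (both objectives to be minimized)."""
            pts = np.asarray(points, dtype=float)
            keep = []
            for i, p in enumerate(pts):
                dominated = any(np.all(q <= p) and np.any(q < p) for q in pts)
                if not dominated:
                    keep.append(i)
            return pts[keep]

        def weighted_sum_choice(points, w):
            """Best point under a single weighted-sum objective w . f."""
            pts = np.asarray(points, dtype=float)
            return pts[np.argmin(pts @ np.asarray(w))]

        # Toy bi-objective values: (1,3) and (3,1) are supported; (2.4, 2.4) is
        # Pareto-efficient but unsupported (it lies inside the convex hull), so no
        # choice of weights in the weighted-sum approach will ever select it.
        pts = [(1.0, 3.0), (3.0, 1.0), (2.4, 2.4), (3.0, 3.0)]
        print(pareto_front(pts))
        for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
            print(w, weighted_sum_choice(pts, w))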

  11. A virtually blind spectrum efficient channel estimation technique for mimo-ofdm system

    International Nuclear Information System (INIS)

    Ullah, M.O.

    2015-01-01

    Multiple-Input Multiple-Output antennas in conjunction with Orthogonal Frequency-Division Multiplexing form a dominant air interface for 4G and 5G cellular communication systems. Additionally, the MIMO-OFDM based air interface is the foundation for the latest wireless Local Area Networks, wireless Personal Area Networks, and digital multimedia broadcasting. Whether in a single-antenna or a multi-antenna OFDM system, accurate channel estimation is required for coherent reception. Training-based channel estimation methods require multiple pilot symbols and therefore waste a significant portion of the channel bandwidth. This paper describes a virtually blind, spectrum-efficient channel estimation scheme for MIMO-OFDM systems which operates well below the Nyquist criterion. (author)

  12. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    Directory of Open Access Journals (Sweden)

    Junpeng Shi

    2017-02-01

    Full Text Available In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional (2D) direction of arrival (DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices and the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise and also incur little information loss. Simulation results show that, in LGA, compared to other methods, the proposed method can achieve performance improvements under white or colored noise conditions.

  13. Efficiency Optimization Control of IPM Synchronous Motor Drives with Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Sadegh Vaez-Zadeh

    2011-04-01

    Full Text Available This paper describes an efficiency optimization control method for high performance interior permanent magnet synchronous motor drives with online estimation of motor parameters. The control system is based on an input-output feedback linearization method which provides high performance control and simultaneously ensures the minimization of the motor losses. The controllable electrical loss can be minimized by optimal control of the armature current vector. It is shown that parameter variations, except near nominal conditions, have an undesirable effect on the controller performance. Therefore, a parameter estimation method based on the second method of Lyapunov is presented, which guarantees the stability and convergence of the estimation. Extensive simulation results show the feasibility of the proposed controller and observer and their desirable performance.

  14. Effects of High Intensity Interval versus Moderate Continuous Training on Markers of Ventilatory and Cardiac Efficiency in Coronary Heart Disease Patients

    Directory of Open Access Journals (Sweden)

    Gustavo G. Cardozo

    2015-01-01

    Full Text Available Background. We tested the hypothesis that high intensity interval training (HIIT) would be more effective than moderate intensity continuous training (MIT) to improve newly emerged markers of cardiorespiratory fitness in coronary heart disease (CHD) patients, namely the relationship between ventilation and carbon dioxide production (VE/VCO2 slope), the oxygen uptake efficiency slope (OUES), and the oxygen pulse (O2P). Methods. Seventy-one patients with optimized treatment were randomly assigned into HIIT (n=23, age = 56 ± 12 years), MIT (n=24, age = 62 ± 12 years), or a nonexercise control group (CG) (n=24, age = 64 ± 12 years). MIT performed 30 min of continuous aerobic exercise at 70–75% of maximal heart rate (HRmax), and HIIT performed 30 min sessions split into 2 min alternate bouts at 60%/90% HRmax (3 times/week for 16 weeks). Results. No differences among groups (before versus after) were found for VE/VCO2 slope or OUES (P>0.05). After training, the O2P slope increased in HIIT (22%, P<0.05), while it decreased in CG (−20%, P<0.05), becoming lower than in HIIT (P=0.03). Conclusion. HIIT was more effective than MIT for improving the O2P slope in CHD patients, while the VE/VCO2 slope and OUES were similarly improved by both aerobic training regimens versus controls.

  15. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
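
    The study favors the averaged wavelet coefficient method; purely as a minimal baseline illustration of what a spectral exponent estimator does, the sketch below fits the slope of the log-log periodogram of a series to obtain β in S(f) ∝ 1/f^β. The synthetic test signals are assumptions of this sketch, not data from the study.

        import numpy as np

        def estimate_beta(x, fs=1.0):
            """Estimate the spectral exponent beta in S(f) ~ 1/f**beta from the
            slope of the log-log periodogram (a simple baseline, not the averaged
            wavelet coefficient method recommended by the study)."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)[1:]        # drop f = 0
            psd = np.abs(np.fft.rfft(x))[1:] ** 2 / len(x)
            slope, _ = np.polyfit(np.log10(freqs), np.log10(psd), 1)
            return -slope

        # Synthetic check: white noise should give beta close to 0,
        # a random walk (integrated white noise) close to 2.
        rng = np.random.default_rng(42)
        white = rng.standard_normal(4096)
        walk = np.cumsum(white)
        print(round(estimate_beta(white), 2), round(estimate_beta(walk), 2))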

  16. The efficiency of different estimation methods of hydro-physical limits

    Directory of Open Access Journals (Sweden)

    Emma María Martínez

    2012-12-01

    Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM) and indirect estimation methods (pedotransfer functions - PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.

  17. Estimating returns to scale and scale efficiency for energy consuming appliances

    Energy Technology Data Exchange (ETDEWEB)

    Blum, Helcio [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group; Okwelum, Edson O. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group

    2018-01-18

    Energy consuming appliances accounted for over 40% of the energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforward to determine. The efficient allocation of capital and energy to provide an energy service has been approached previously, and solved with Data Envelopment Analysis (DEA) under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the economically efficient models of an appliance available in the market when constant returns to scale do not hold. We expand on that approach to estimate returns to scale for energy-using appliances. We further calculate DEA scale efficiency scores for the technically efficient models that comprise the economically efficient frontier of the energy service delivered, under different assumptions about returns to scale. We then apply this approach to evaluate dishwashers available on the market in the U.S. Our results show that (a) for the case of dishwashers scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method contributes to increasing consumers' choice of appliances.
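
    A minimal sketch of the underlying DEA calculation is given below: an input-oriented efficiency score is obtained by linear programming under constant returns to scale (CRS) and again under variable returns to scale (VRS), and the ratio of the two gives the scale efficiency. The two-input/one-output appliance data are made up for illustration and do not come from the report.

        import numpy as np
        from scipy.optimize import linprog

        def dea_input_efficiency(X, Y, j0, vrs=False):
            """Input-oriented DEA efficiency of unit j0.

            X: (m, n) inputs, Y: (s, n) outputs for n units. Solves
            min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0,
            with sum(lam) == 1 added under variable returns to scale (VRS)."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                       # minimize theta
            A_ub = np.zeros((m + s, n + 1))
            A_ub[:m, 0] = -X[:, j0]                           # X lam - theta x0 <= 0
            A_ub[:m, 1:] = X
            A_ub[m:, 1:] = -Y                                 # -Y lam <= -y0
            b_ub = np.r_[np.zeros(m), -Y[:, j0]]
            A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
            b_eq = [1.0] if vrs else None
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (n + 1))
            return res.x[0]

        # Illustrative data: inputs = (purchase price, annual kWh), output = capacity.
        X = np.array([[300, 450, 500, 700.0],
                      [270, 240, 300, 220.0]])
        Y = np.array([[10, 12, 14, 16.0]])
        for j in range(X.shape[1]):
            theta_crs = dea_input_efficiency(X, Y, j, vrs=False)
            theta_vrs = dea_input_efficiency(X, Y, j, vrs=True)
            # Scale efficiency = CRS efficiency / VRS efficiency (<= 1).
            print(j, round(theta_crs, 3), round(theta_vrs, 3), round(theta_crs / theta_vrs, 3))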

  18. Estimation efficiency of usage satellite derived and modelled biophysical products for yield forecasting

    Science.gov (United States)

    Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara

    2015-04-01

    Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the data available. In our current work, the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and products modelled with the WOFOST model is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.

  19. Improving Global Gross Primary Productivity Estimates by Computing Optimum Light Use Efficiencies Using Flux Tower Data

    Science.gov (United States)

    Madani, Nima; Kimball, John S.; Running, Steven W.

    2017-11-01

    In the light use efficiency (LUE) approach of estimating the gross primary productivity (GPP), plant productivity is linearly related to absorbed photosynthetically active radiation assuming that plants absorb and convert solar energy into biomass within a maximum LUE (LUEmax) rate, which is assumed to vary conservatively within a given biome type. However, it has been shown that photosynthetic efficiency can vary within biomes. In this study, we used 149 global CO2 flux towers to derive the optimum LUE (LUEopt) under prevailing climate conditions for each tower location, stratified according to model training and test sites. Unlike LUEmax, LUEopt varies according to heterogeneous landscape characteristics and species traits. The LUEopt data showed large spatial variability within and between biome types, so that a simple biome classification explained only 29% of LUEopt variability over 95 global tower training sites. The use of explanatory variables in a mixed effect regression model explained 62.2% of the spatial variability in tower LUEopt data. The resulting regression model was used for global extrapolation of the LUEopt data and GPP estimation. The GPP estimated using the new LUEopt map showed significant improvement relative to global tower data, including a 15% R2 increase and 34% root-mean-square error reduction relative to baseline GPP calculations derived from biome-specific LUEmax constants. The new global LUEopt map is expected to improve the performance of LUE-based GPP algorithms for better assessment and monitoring of global terrestrial productivity and carbon dynamics.

  20. Estimation of combustion flue gas acid dew point during heat recovery and efficiency gain

    Energy Technology Data Exchange (ETDEWEB)

    Bahadori, A. [Curtin University of Technology, Perth, WA (Australia)

    2011-06-15

    When cooling combustion flue gas for heat recovery and efficiency gain, the temperature must not be allowed to drop below the sulfur trioxide dew point. Below the SO₃ dew point, very corrosive sulfuric acid forms and leads to operational hazards on metal surfaces. In the present work, a simple-to-use predictive tool, less complicated than existing approaches and requiring fewer computations, is formulated to arrive at an appropriate estimate of the acid dew point during combustion flue gas cooling, which depends on fuel type, sulfur content in the fuel, and excess air level. The resulting information can then be applied to estimate the acid dew point for sulfur in various fuels up to a 0.10 volume fraction in gas (0.10 mass fraction in liquid), excess air fractions up to 0.25, and elemental concentrations of carbon up to 3. The proposed predictive tool shows very good agreement with the reported data, with an average absolute deviation of around 3.18%. This approach can be of immense practical value for engineers and scientists for a quick estimation of the acid dew point during combustion flue gas cooling for heat recovery and efficiency gain over a wide range of operating conditions, without the necessity of any pilot plant setup or tedious experimental trials. In particular, process and combustion engineers would find the tool user friendly, involving transparent calculations with no complex expressions for their applications.

  1. Analytical estimates and proof of the scale-free character of efficiency and improvement in Barabasi-Albert trees

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Bermejo, B. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)], E-mail: benito.hernandez@urjc.es; Marco-Blanco, J. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain); Romance, M. [Departamento de Matematica Aplicada, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)

    2009-02-23

    Estimates for the efficiency of a tree are derived, leading to new analytical expressions for Barabasi-Albert trees efficiency. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that the preferential attachment leads to an asymptotic conservation of efficiency as the Barabasi-Albert trees grow.

  2. Analytical estimates and proof of the scale-free character of efficiency and improvement in Barabasi-Albert trees

    International Nuclear Information System (INIS)

    Hernandez-Bermejo, B.; Marco-Blanco, J.; Romance, M.

    2009-01-01

    Estimates for the efficiency of a tree are derived, leading to new analytical expressions for Barabasi-Albert trees efficiency. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that the preferential attachment leads to an asymptotic conservation of efficiency as the Barabasi-Albert trees grow

  3. Estimating the energy and exergy utilization efficiencies for the residential-commercial sector: an application

    International Nuclear Information System (INIS)

    Utlu, Zafer; Hepbasli, Arif

    2006-01-01

    The main objectives in carrying out the present study are twofold, namely to estimate the energy and exergy utilization efficiencies for the residential-commercial sector and to compare those of various countries with each other. In this regard, Turkey is given as an illustrative example with its latest figures in 2002 since the data related to the following years are still being processed. Total energy and exergy inputs in this year are calculated to be 3257.20 and 3212.42 PJ, respectively. Annual fuel consumptions in space heating, water heating and cooking activities as well as electrical energy uses by appliances are also determined. The energy and exergy utilization efficiency values for the Turkish residential-commercial sector are obtained to be 55.58% and 9.33%, respectively. Besides this, Turkey's overall energy and exergy utilization efficiencies are found to be 46.02% and 24.99%, respectively. The present study clearly indicates the necessity of the planned studies toward increasing exergy utilization efficiencies in the sector studied

  4. Oviposition preferences of two forensically important blow fly species, Chrysomya megacephala and C. rufifacies (Diptera: Calliphoridae), and implications for postmortem interval estimation.

    Science.gov (United States)

    Yang, Shih-Tsai; Shiao, Shiuh-Feng

    2012-03-01

    Necrophagous blow fly species (Diptera: Calliphoridae) are the most important agents for estimating the postmortem interval (PMI) in forensic entomology. Nevertheless, the oviposition preferences of blow flies may cause a bias of PMI estimations because of a delay or acceleration of egg laying. Chrysomya megacephala (F.) and C. rufifacies (Macquart) are two predominant necrophagous blow fly species in Taiwan. Their larvae undergo rather intense competition, and the latter one can prey on the former under natural conditions. To understand the oviposition preferences of these two species, a dual-choice device was used to test the choice of oviposition sites by females. Results showed when pork liver with and without larvae of C. rufifacies was provided, C. megacephala preferred to lay eggs on the liver without larvae. However, C. megacephala showed no preference when pork liver with and without conspecific larvae or larvae of Hemipyrellia ligurriens (Wiedemann) was provided. These results indicate that females of C. megacephala try to avoid laying eggs around larvae of facultatively predaceous species of C. rufifacies. However, C. rufifacies showed significant oviposition preference for pork liver with larvae of C. megacephala or conspecific ones when compared with pork liver with no larvae. These results probably imply that conspecific larvae or larvae of C. megacephala may potentially be alternative food resources for C. rufifacies, so that its females prefer to lay eggs in their presence. When considering the size of the oviposition media, pork livers of a relatively small size were obviously unfavorable to both species. This may be because females need to find sufficient resources to meet the food demands of their larvae. In another experiment, neither blow fly species showed an oviposition preference for pork livers of different stages of decay. In addition, the oviposition preferences of both species to those media with larvae were greatly disturbed in a dark

  5. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low variance that is constant in time, and are consequently suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  6. ESTIMATION OF EFFICIENCY OF OPERATING SYSTEM OF TAX PLANNING IN THE COMMERCIAL ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Evgeniy A. Samsonov

    2014-01-01

    Full Text Available This article is devoted to the scientific assessment and estimation of the efficiency of the stimulating mechanisms (tools) involved in applying a tax planning system in commercial organizations, which make it possible to evaluate the multidirectional influence of taxes on the organization's final financial result and to predict changes in the organization's business activity depending on the tax burden. Considerable attention is given to the complicated questions of how taxation is managed and how the facts of economic activity arising between the state, on the one hand, and the economic entities - the commercial organizations - on the other, are reflected in tax accounting.

  7. Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer, who provides the correct count. Even though the image analysis … cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling …

  8. Investigating time-efficiency of forward masking paradigms for estimating basilar membrane input-output characteristics

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

    2017-01-01

    … input-output (I/O) function have been proposed. However, such measures are very time consuming. The present study investigated possible modifications of the temporal masking curve (TMC) paradigm to improve time and measurement efficiency. In experiment 1, estimates of the knee point (KP) and compression ratio (CR) …”, was tested. In contrast to the standard TMC paradigm, the masker level was kept fixed and the “gap threshold” was obtained, such that the masker just masks a low-level (12 dB sensation level) signal. It is argued that this modification allows for better control of the tested stimulus level range, which …

  9. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    Science.gov (United States)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way. Some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as a relation between yield and seasonal evapotranspiration (ET). In the case of high irrigation water use efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change of soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. In the other case of low irrigation water use efficiency, the volume of water applied exceeds PET; the excess of the volume of water applied over PET must then go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season soil moisture) or to runoff or/and deep percolation beyond the root zone. In the presented contribution, some results of a case study on the estimation of biomass and leaf area index (LAI) for irrigated alfalfa by the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing the ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the vegetation period of 2012, from April till September. All the operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded to the Fieldlook web …

  10. Energy efficiency estimation of a steam powered LNG tanker using normal operating data

    Directory of Open Access Journals (Sweden)

    Sinha Rajendra Prasad

    2016-01-01

    Full Text Available A ship's energy efficiency performance is generally estimated by conducting special sea trials of a few hours under very controlled environmental conditions of calm sea, standard draft and optimum trim. This indicator is then used as the benchmark for future reference of the ship's Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions which are far removed from the original sea trial conditions, and therefore comparing energy performance with the benchmark performance indicator is not truly valid. In such situations a higher fuel consumption reading from the ship's fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low load operations or incorrect trim [1]. Thus a better and more accurate approach to determining the energy efficiency of the ship attributable only to main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship's fuel consumption [2]. The author in this paper identifies the parameters of a suitable filter to be applied to the daily report data of a typical LNG tanker of 33000 kW shaft power in order to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data have then been used to estimate the actual fuel efficiency of the ship, which is compared with the sea trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than those obtained from the monthly mini trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship, thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is …

  11. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    Science.gov (United States)

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
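
    The record does not give implementation details of the movement-based estimator, so the sketch below shows only the simpler building block: a plain 3D Gaussian kernel density estimate of space use from GPS fixes using scipy, evaluated on a coarse grid. The coordinates and grid resolution are illustrative assumptions.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Illustrative GPS fixes: easting, northing, altitude (metres).
        rng = np.random.default_rng(7)
        fixes = np.column_stack([
            rng.normal(500_000, 200, 500),    # x
            rng.normal(4_200_000, 150, 500),  # y
            rng.normal(120, 30, 500),         # z (vertical dimension)
        ])

        # gaussian_kde expects shape (n_dims, n_points).
        kde = gaussian_kde(fixes.T)

        # Evaluate the utilisation density on a small 3D grid.
        gx, gy, gz = np.meshgrid(
            np.linspace(fixes[:, 0].min(), fixes[:, 0].max(), 20),
            np.linspace(fixes[:, 1].min(), fixes[:, 1].max(), 20),
            np.linspace(fixes[:, 2].min(), fixes[:, 2].max(), 10),
            indexing="ij",
        )
        grid = np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])
        density = kde(grid).reshape(gx.shape)
        print(density.shape, density.max())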

  12. Avoided cost estimation and post-reform funding allocation for California's energy efficiency programs

    International Nuclear Information System (INIS)

    Baskette, C.; Horii, B.; Price, S.; Kollman, E.

    2006-01-01

    This paper summarizes the first comprehensive estimation of California's electricity avoided costs since the state reformed its electricity market. It describes avoided cost estimates that vary by time and location, thus facilitating targeted design, funding, and marketing of demand-side management (DSM) and energy efficiency (EE) programs that could not have occurred under the previous methodology of system average cost estimation. The approach, data, and results reflect two important market structure changes: (a) wholesale spot and forward markets now supply electricity commodities to load serving entities; and (b) the evolution of an emissions market that internalizes and prices some of the externalities of electricity generation. The paper also introduces the multiplier effect of a price reduction due to DSM/EE implementation on electricity bills of all consumers. It affirms that area- and time-specific avoided cost estimates can improve the allocation of the state's public funding for DSM/EE programs, a finding that could benefit other parts of North America (e.g. Ontario and New York), which have undergone electricity deregulation. (author)

  13. Computationally Efficient 2D DOA Estimation for L-Shaped Array with Unknown Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Yang-Yang Dong

    2018-01-01

    Full Text Available Although an L-shaped array can provide good angle estimation performance and is easy to implement, its two-dimensional (2D) direction-of-arrival (DOA) performance degrades greatly in the presence of mutual coupling. To deal with the mutual coupling effect, a novel 2D DOA estimation method for the L-shaped array with low computational complexity is developed in this paper. First, we generalize the conventional mutual coupling model for the L-shaped array and compensate the mutual coupling blindly via sacrificing a few sensors as auxiliary elements. Then we apply the propagator method twice to mitigate the effect of strong source signal correlation. Finally, the estimates of azimuth and elevation angles are obtained simultaneously without pair matching via the complex eigenvalue technique. Compared with the existing methods, the proposed method is computationally efficient without spectrum search or polynomial rooting and also has fine angle estimation performance for highly correlated source signals. Theoretical analysis and simulation results have demonstrated the effectiveness of the proposed method.

  14. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    Science.gov (United States)

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation so can be expensive in models with a large computational cost.

  15. The Use of 32P and 15N to Estimate Fertilizer Efficiency in Oil Palm

    International Nuclear Information System (INIS)

    Sisworo, Elsje L; Sisworo, Widjang H; Havid-Rasjid; Haryanto; Syamsul-Rizal; Poeloengan, Z; Kusnu-Martoyo

    2004-01-01

    Oil palm has become an important commodity for Indonesia, reaching an area of 2.6 million ha at the end of 1998. It is mostly cultivated in highly weathered acid soils, usually Ultisols and Oxisols, which are known for their low fertility with respect to major nutrients such as N and P. This study was conducted to search for the most active root-zone of oil palm in such soils and to apply urea fertilizer there to obtain high N-efficiency. A carrier-free KH₂³²PO₄ solution was used to determine the active root-zone of oil palm by applying ³²P around the plant in twenty holes. After the most active root-zone had been determined, urea in one, two and three splits, respectively, was applied at this zone. To estimate the N-fertilizer efficiency of urea, ¹⁵N-labelled ammonium sulphate was used, added at the same amount of 16 g ¹⁵N plant⁻¹. This study showed that the most active root-zone was found at a 1.5 m distance from the plant stem and at 5 cm soil depth. For urea, the highest N-efficiency was obtained by applying it in two splits. The use of ³²P was able to distinguish several root zones: 1.5 m - 2.5 m from the plant stem at 5 cm and 15 cm soil depth. Urea placed at the most active root-zone, at a 1.5 m distance from the plant stem and 5 cm depth, in one, two, and three splits respectively showed different N-efficiencies. The highest N-efficiency of urea was obtained when applying it in two splits at the most active root-zone. (author)

  16. A simple and efficient algorithm to estimate daily global solar radiation from geostationary satellite data

    International Nuclear Information System (INIS)

    Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin

    2011-01-01

    Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.
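
    The record describes a three-layer feed-forward ANN trained on ground GSR measurements against satellite channel signals (after singular value decomposition) plus ancillary information such as elevation. As a minimal, hedged sketch of that kind of regression, the code below trains a small multilayer perceptron with scikit-learn on made-up features; the feature set, network size, and data are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Hypothetical predictors: a few principal components of the satellite
        # channels plus site elevation; target is daily GSR (MJ m-2 day-1).
        n = 2000
        X = np.column_stack([
            rng.normal(size=n),        # satellite PC 1
            rng.normal(size=n),        # satellite PC 2
            rng.normal(size=n),        # satellite PC 3
            rng.uniform(0, 4000, n),   # elevation (m)
        ])
        y = 15 + 4 * X[:, 0] - 2 * X[:, 1] + 0.001 * X[:, 3] + rng.normal(0, 1, n)

        # One hidden layer -> "three-layer" (input, hidden, output) feed-forward net.
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        )
        model.fit(X[:1500], y[:1500])
        print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))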

  17. Age-specific interval breast cancers in Japan. Estimation of the proper sensitivity of screening using a population-based cancer registry

    International Nuclear Information System (INIS)

    Suzuki, Akihiko; Kuriyama, Shinichi; Kawai, Masaaki

    2008-01-01

    The age-specific sensitivity of a screening program was investigated using a population-based cancer registry as a source of false-negative cancer cases. A population-based screening program for breast cancer was run using either clinical breast examination (CBE) alone or mammography combined with CBE in the Miyagi Prefecture from 1997 to 2002. Interval cancers were newly identified by linking the screening records to the population-based cancer registry to estimate the number of false-negative cases of the screening program. Among 112071 women screened by mammography combined with CBE, the number of detected cancers, the number of false-negative cases and the sensitivity were 289, 22 and 92.9%, respectively, based on the reports from participating municipalities. The number of newly found false-negative cases and the corrected sensitivity when using the registry were 34 and 83.8%, respectively. Among detected cancers, the sensitivity of screening by mammography combined with CBE in women 40 to 49 years of age based on the population-based cancer registry was much lower than that in women 50-59 and 60-69 years of age (40-49: 18, 71.4%; 50-59: 19, 85.8%; 60-69: 19, 87.2%). These data suggest that an accurate evaluation of breast cancer screening must include the use of a population-based cancer registry for detecting false-negative cases. Screening by mammography combined with CBE may therefore not be sufficiently sensitive for women 40 to 49 years of age. (author)
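
    The reported figures are consistent with the usual definition sensitivity = screen-detected cancers / (screen-detected cancers + interval cancers), with the registry adding further interval (false-negative) cases to the denominator; the few lines below simply reproduce that arithmetic.

        detected = 289          # screen-detected cancers
        fn_reported = 22        # false negatives reported by municipalities
        fn_registry = 34        # additional false negatives found via the registry

        sens_reported = detected / (detected + fn_reported)
        sens_corrected = detected / (detected + fn_reported + fn_registry)

        print(f"reported sensitivity:  {100 * sens_reported:.1f}%")   # ~92.9%
        print(f"corrected sensitivity: {100 * sens_corrected:.1f}%")  # ~83.8%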

  18. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
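
    As a small sketch of the "shortest 95% confidence interval from a bootstrap distribution" step, the function below takes bootstrap replicates of an efficiency-gain estimate, sorts them, and returns the narrowest window containing the requested coverage. The lognormal replicates used for the demonstration are an assumption standing in for the study's actual bootstrap output.

        import numpy as np

        def shortest_interval(samples, coverage=0.95):
            """Shortest interval containing `coverage` of the bootstrap replicates."""
            s = np.sort(np.asarray(samples, dtype=float))
            n = len(s)
            k = int(np.ceil(coverage * n))           # number of points per window
            widths = s[k - 1:] - s[: n - k + 1]      # width of every length-k window
            i = int(np.argmin(widths))
            return s[i], s[i + k - 1]

        # Stand-in for bootstrap replicates of the efficiency gain (skewed, non-normal).
        rng = np.random.default_rng(3)
        gain_replicates = np.exp(rng.normal(np.log(5.0), 0.25, size=5000))

        lo, hi = shortest_interval(gain_replicates, 0.95)
        print(f"shortest 95% CI for the gain: [{lo:.2f}, {hi:.2f}]")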

  19. Estimating the Efficiency and Impacts of Petroleum Product Pricing Reforms in China

    Directory of Open Access Journals (Sweden)

    Chuxiong Deng

    2018-04-01

    Full Text Available The efficiency and effects analysis of a new pricing mechanism would have significant policy implications for the further design of pricing mechanisms in an emerging market. Unlike most of the existing literature, which focuses on the impacts on the macro-economy, this paper first uses an econometric model to discuss the efficiency of the new pricing mechanism, and then establishes an augmented Phillips curve to estimate the impact of pricing reform on inflation in China. The results show that: (1) the new pricing mechanism would strengthen the linkage between Chinese oil prices and international oil prices; (2) oil price adjustments are still inadequate in China; (3) the lag in inflation is the most important factor that affects inflation, while the impact of the Chinese government's price adjustments on inflation is limited and insignificant. In order to improve the efficiency of the petroleum products pricing mechanism and shorten lags, government should shorten the adjustment period and diminish the fluctuation threshold.

  20. An Efficient Estimation Method for Reducing the Axial Intensity Drop in Circular Cone-Beam CT

    Directory of Open Access Journals (Sweden)

    Lei Zhu

    2008-01-01

    Full Text Available Reconstruction algorithms for circular cone-beam (CB) scans have been extensively studied in the literature. Since insufficient data are measured, an exact reconstruction is impossible for such a geometry. If the reconstruction algorithm assumes zeros for the missing data, as the standard FDK algorithm does, a major type of resulting CB artifact is the intensity drop along the axial direction. Many algorithms have been proposed to improve image quality when faced with this problem of missing data; however, the development of an effective and computationally efficient algorithm remains a major challenge. In this work, we propose a novel method for estimating the unmeasured data and reducing the intensity drop artifacts. Each CB projection is analyzed in the Radon space via Grangeat's first derivative. Assuming the CB projection is taken from a parallel beam geometry, we extract those data that reside in the unmeasured region of the Radon space. These data are then used as in a parallel beam geometry to calculate a correction term, which is added together with Hu's correction term to the FDK result to form a final reconstruction. More approximations are then made in the calculation of the additional term, and the final formula is implemented very efficiently. The algorithm performance is evaluated using computer simulations on analytical phantoms. The comparison with reconstructions obtained using other existing algorithms shows that the proposed algorithm achieves superior performance on the reduction of axial intensity drop artifacts with high computational efficiency.

  1. Ultrasound elastography: efficient estimation of tissue displacement using an affine transformation model

    Science.gov (United States)

    Hashemi, Hoda Sadat; Boily, Mathieu; Martineau, Paul A.; Rivaz, Hassan

    2017-03-01

    Ultrasound elastography entails imaging the mechanical properties of tissue and is therefore of significant clinical importance. In elastography, two frames of radio-frequency (RF) ultrasound data are obtained while the tissue is undergoing deformation, and the time-delay estimate (TDE) between the two frames is used to infer mechanical properties of the tissue. TDE is a critical step in elastography, and is challenging due to noise and signal decorrelation. This paper presents a novel and robust technique for TDE that uses all samples of the RF data simultaneously. We assume tissue deformation can be approximated by an affine transformation, and hence call our method ATME (Affine Transformation Model Elastography). The affine transformation model is utilized to obtain initial estimates of the axial and lateral displacement fields. The affine transformation has only six degrees of freedom (DOF), and as such can be efficiently estimated. A nonlinear cost function that incorporates similarity of RF data intensity and prior information of displacement continuity is formulated to fine-tune the initial affine deformation field. Optimization of this function involves searching for the TDE of all samples of the RF data. The optimization problem is converted to a sparse linear system of equations, which can be solved in real time. Results on simulation are presented for validation. We further collect RF data from in-vivo patellar tendon and medial collateral ligament (MCL), and show that ATME can be used to accurately track tissue displacement.
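
    To make the six-DOF affine model concrete, the sketch below builds the axial and lateral displacement fields implied by a 2D affine transformation u(x) = (A − I)x + t over a grid of RF sample coordinates; the parameter values are invented for illustration and are not taken from the paper.

        import numpy as np

        def affine_displacement(A, t, axial_coords, lateral_coords):
            """Displacement field of the affine map x -> A x + t (6 parameters).

            Returns (u_axial, u_lateral), each of shape (n_axial, n_lateral)."""
            ax, lat = np.meshgrid(axial_coords, lateral_coords, indexing="ij")
            pts = np.stack([ax.ravel(), lat.ravel()])            # 2 x N coordinates
            disp = (A - np.eye(2)) @ pts + t[:, None]             # u(x) = (A - I)x + t
            return disp[0].reshape(ax.shape), disp[1].reshape(ax.shape)

        # Illustrative parameters: ~1% axial compression, slight lateral expansion,
        # a small shear, and a rigid shift (units: RF samples / RF lines).
        A = np.array([[0.99, 0.002],
                      [0.001, 1.003]])
        t = np.array([0.5, -0.2])

        axial = np.arange(0, 2000, 10)     # RF sample indices along the beam
        lateral = np.arange(0, 128)        # RF line indices
        u_ax, u_lat = affine_displacement(A, t, axial, lateral)
        print(u_ax.shape, u_ax.min(), u_ax.max())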

  2. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    Science.gov (United States)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements of solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  3. A Practical and Time-Efficient High-Intensity Interval Training Program Modifies Cardio-Metabolic Risk Factors in Adults with Risk Factors for Type II Diabetes

    Directory of Open Access Journals (Sweden)

    Bethan E. Phillips

    2017-09-01

    Full Text Available Introduction: Regular physical activity (PA) can reduce the risk of developing type 2 diabetes, but adherence to time-orientated (150 min week−1 or more) PA guidelines is very poor. A practical and time-efficient PA regime that was equally efficacious at controlling risk factors for cardio-metabolic disease is one solution to this problem. Herein, we evaluate a new time-efficient and genuinely practical high-intensity interval training (HIT) protocol in men and women with pre-existing risk factors for type 2 diabetes. Materials and methods: One hundred eighty-nine sedentary women (n = 101) and men (n = 88) with impaired glucose tolerance and/or a body mass index >27 kg m−2 [mean (range) age: 36 (18–53) years] participated in this multi-center study. Each completed a fully supervised 6-week HIT protocol at work-loads equivalent to ~100 or ~125% V˙O2 max. Change in V˙O2 max was used to monitor protocol efficacy, while Actiheart™ monitors were used to determine PA during four weeklong periods. Mean arterial blood pressure (MAP) and fasting insulin resistance [homeostatic model assessment (HOMA-IR)] represent key health biomarker outcomes. Results: The higher intensity bouts (~125% V˙O2 max) used during a 5-by-1 min HIT protocol resulted in a robust increase in V˙O2 max (136 participants, +10.0%, p < 0.001; large size effect). 5-by-1 HIT reduced MAP (~3%; p < 0.001) and HOMA-IR (~16%; p < 0.01). Physiological responses were similar in men and women, while a sizeable proportion of the training-induced changes in V˙O2 max, MAP, and HOMA-IR was retained 3 weeks after cessation of training. The supervised HIT sessions accounted for the entire quantifiable increase in PA, and this equated to 400 metabolic equivalent (MET) min week−1. Meta-analysis indicated that 5-by-1 HIT matched the efficacy and variability of a time-consuming 30-week PA program on V˙O2 max, MAP, and HOMA-IR. Conclusion: With a total time-commitment of

  4. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments.

    Science.gov (United States)

    Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi

    2014-06-30

    Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.
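    The key ingredient, replacing the Gaussian assumption by a nonparanormal one, can be sketched as a rank-based Gaussianization of each variable followed by the standard partial-correlation (Fisher z) conditional-independence test used inside the PC-algorithm. The snippet below is a hedged illustration of that idea on synthetic data, not the NPN-IDA implementation; all function names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nonparanormal_transform(X):
    """Rank-based Gaussianization (a simple nonparanormal estimator):
    map each column to normal scores via its empirical CDF."""
    n = X.shape[0]
    U = (rankdata(X, axis=0) - 0.5) / n        # empirical CDF values in (0, 1)
    return norm.ppf(U)

def fisher_z_ci_test(Z, i, j, cond, alpha=0.05):
    """Test X_i independent of X_j given X_cond with the partial-correlation /
    Fisher z test (as used in the PC-algorithm) on already Gaussianized data Z."""
    idx = [i, j] + list(cond)
    S = np.corrcoef(Z[:, idx], rowvar=False)
    P = np.linalg.inv(S)                        # precision matrix of the selected block
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])   # partial correlation
    n, k = Z.shape[0], len(cond)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_value > alpha                      # True -> independence not rejected

# Usage sketch: Gaussianize once, then feed the test into a PC-style search.
rng = np.random.default_rng(1)
X = np.exp(rng.multivariate_normal(np.zeros(3), np.eye(3) + 0.5, size=500))  # non-Gaussian margins
Z = nonparanormal_transform(X)
independent = fisher_z_ci_test(Z, 0, 1, cond=[2])
```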

  5. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable to compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
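    As a rough illustration of estimating a first-order index directly from input–output samples, the sketch below uses a simple binning estimator of Var(E[Y|X_i])/Var(Y). This is not the authors' modularized algorithm, but it shows the sample-only setting the paper addresses; the toy model and bin count are assumptions.

```python
import numpy as np

def first_order_sobol_from_samples(x, y, n_bins=20):
    """Crude first-order Sobol' index S_i = Var(E[Y|X_i]) / Var(Y), estimated by
    binning existing (x, y) samples on x. Only input-output samples are needed;
    the underlying model is never evaluated."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    overall_mean = y.mean()
    var_of_means = 0.0
    for b in range(n_bins):
        yb = y[bins == b]
        if yb.size:
            var_of_means += yb.size * (yb.mean() - overall_mean) ** 2
    var_of_means /= y.size
    return var_of_means / y.var()

# Usage on a toy model y = sin(x1) + 7 sin(x2)^2, seen only through its samples.
rng = np.random.default_rng(2)
X = rng.uniform(-np.pi, np.pi, size=(20000, 2))
y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
S1 = first_order_sobol_from_samples(X[:, 0], y)
S2 = first_order_sobol_from_samples(X[:, 1], y)
```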

  6. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amount of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over the time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in forms of condensed representation, e.g., Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of the data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling where more/less resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide up-to-date model of the data stream. Comparing with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York city. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow on real time without extra overhead and provides insight analysis of the pick up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
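    A minimal sketch of the resampling-point idea is given below: density values are maintained at fixed grid points, updated online with an exponential forgetting factor as stream samples arrive, and interpolated for arbitrary queries. The adaptive placement of resampling points, which is central to KDE-Track, is deliberately omitted; the class and parameter names are illustrative, not from the thesis.

```python
import numpy as np

class OnlineDensityTracker:
    """Toy KDE-Track-style estimator: density values are maintained only at fixed
    resampling points and updated online with exponential forgetting.
    (The real KDE-Track also adapts the resampling points; omitted here.)"""

    def __init__(self, grid, bandwidth=0.2, forgetting=0.995):
        self.grid = np.asarray(grid, dtype=float)
        self.h = bandwidth
        self.lam = forgetting
        self.density = np.zeros_like(self.grid)

    def update(self, x):
        # Gaussian kernel contribution of the new sample at every grid point.
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))
        # Exponentially weighted moving average keeps the model up to date
        # as the stream's distribution drifts.
        self.density = self.lam * self.density + (1.0 - self.lam) * k

    def pdf(self, x):
        # Linear interpolation between resampling points.
        return np.interp(x, self.grid, self.density)

# Usage sketch: a slowly drifting Gaussian stream.
rng = np.random.default_rng(3)
tracker = OnlineDensityTracker(grid=np.linspace(-6, 6, 121))
for t in range(20000):
    mean = -2.0 + 4.0 * t / 20000          # slow concept drift
    tracker.update(rng.normal(mean, 1.0))
estimate_at_2 = tracker.pdf(2.0)
```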

  7. Towards physiologically meaningful water-use efficiency estimates from eddy covariance data.

    Science.gov (United States)

    Knauer, Jürgen; Zaehle, Sönke; Medlyn, Belinda E; Reichstein, Markus; Williams, Christopher A; Migliavacca, Mirco; De Kauwe, Martin G; Werner, Christiane; Keitel, Claudia; Kolari, Pasi; Limousin, Jean-Marc; Linderson, Maj-Lena

    2018-02-01

    Intrinsic water-use efficiency (iWUE) characterizes the physiological control on the simultaneous exchange of water and carbon dioxide in terrestrial ecosystems. Knowledge of iWUE is commonly gained from leaf-level gas exchange measurements, which are inevitably restricted in their spatial and temporal coverage. Flux measurements based on the eddy covariance (EC) technique can overcome these limitations, as they provide continuous and long-term records of carbon and water fluxes at the ecosystem scale. However, vegetation gas exchange parameters derived from EC data are subject to scale-dependent and method-specific uncertainties that compromise their ecophysiological interpretation as well as their comparability among ecosystems and across spatial scales. Here, we use estimates of canopy conductance and gross primary productivity (GPP) derived from EC data to calculate a measure of iWUE (G 1 , "stomatal slope") at the ecosystem level at six sites comprising tropical, Mediterranean, temperate, and boreal forests. We assess the following six mechanisms potentially causing discrepancies between leaf and ecosystem-level estimates of G 1 : (i) non-transpirational water fluxes; (ii) aerodynamic conductance; (iii) meteorological deviations between measurement height and canopy surface; (iv) energy balance non-closure; (v) uncertainties in net ecosystem exchange partitioning; and (vi) physiological within-canopy gradients. Our results demonstrate that an unclosed energy balance caused the largest uncertainties, in particular if it was associated with erroneous latent heat flux estimates. The effect of aerodynamic conductance on G 1 was sufficiently captured with a simple representation. G 1 was found to be less sensitive to meteorological deviations between canopy surface and measurement height and, given that data are appropriately filtered, to non-transpirational water fluxes. Uncertainties in the derived GPP and physiological within-canopy gradients and their

  8. Impact of energy policy instruments on the estimated level of underlying energy efficiency in the EU residential sector

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.; Zorić, Jelena

    2014-01-01

    The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis in order to econometrically estimate the level of energy efficiency for the residential sector in the EU-27 member states for the period 1996 to 2009. The estimates for the energy efficiency confirm that the EU residential sector indeed holds a relatively high potential for energy savings from reduced inefficiency. Therefore, despite the common objective to decrease ‘wasteful’ energy consumption, considerable variation in energy efficiency between the EU member states is established. Furthermore, an attempt is made to evaluate the impact of energy-efficiency measures undertaken in the EU residential sector by introducing an additional set of variables into the model and the results suggest that financial incentives and energy performance standards play an important role in promoting energy efficiency improvements, whereas informative measures do not have a significant impact. - Highlights: • The level of energy efficiency of the EU residential sector is estimated. • Considerable potential for energy savings from reduced inefficiency is established. • The impact of introduced energy-efficiency policy measures is also evaluated. • Financial incentives are found to promote energy efficiency improvements. • Energy performance standards also play an important role

  9. Paper-based microfluidic devices on the crime scene: A simple tool for rapid estimation of post-mortem interval using vitreous humour.

    Science.gov (United States)

    Garcia, Paulo T; Gabriel, Ellen F M; Pessôa, Gustavo S; Santos Júnior, Júlio C; Mollo Filho, Pedro C; Guidugli, Ruggero B F; Höehr, Nelci F; Arruda, Marco A Z; Coltro, Wendell K T

    2017-06-29

    This paper describes for the first time the use of paper-based analytical devices at crime scenes to estimate the post-mortem interval (PMI), based on the colorimetric determination of Fe2+ in vitreous humour (VH) samples. Experimental parameters such as the paper substrate, the microzone diameter, the sample volume and the 1,10-phenanthroline (o-phen) concentration were optimised in order to ensure the best analytical performance. Grade 1 CHR paper, a microzone with a diameter of 5 mm, a sample volume of 4 μL and an o-phen concentration of 0.05 mol/L were chosen as the optimum experimental conditions. A good linear response was observed for a concentration range of Fe2+ between 2 and 10 mg/L and the calculated values for the limit of detection (LOD) and limit of quantification (LOQ) were 0.3 and 0.9 mg/L, respectively. The specificity of the Fe2+ colorimetric response was tested in the presence of the main interfering agents and no significant differences were found. After selecting the ideal experimental conditions, four VH samples were investigated on paper-based devices. The concentration levels of Fe2+ achieved for samples #1, #2, #3 and #4 were 0.5 ± 0.1, 0.7 ± 0.1, 1.2 ± 0.1 and 15.1 ± 0.1 mg/L, respectively. These values are in good agreement with those calculated by ICP-MS. It is important to note that the concentration levels measured using both techniques are proportional to the PMI. The limitation of the proposed analytical device is that it is restricted to a PMI greater than 1 day. The capability of providing an immediate answer about the PMI at the crime scene without any sophisticated instrumentation is a great achievement for modern forensic chemistry. The strategy proposed in this study could be helpful in many criminal investigations. Copyright © 2017 Elsevier B.V. All rights reserved.
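    The calibration arithmetic behind such a device can be sketched as follows: fit the linear response over the 2–10 mg/L range and derive the LOD and LOQ from the residual standard deviation and the slope (LOD ≈ 3.3σ/slope, LOQ ≈ 10σ/slope). The numbers below are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

# Illustrative calibration data (NOT the paper's measurements): colorimetric
# intensity versus Fe2+ standards in the 2-10 mg/L linear range.
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # mg/L
signal = np.array([0.11, 0.21, 0.30, 0.41, 0.50])    # arbitrary intensity units

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                         # residual SD of the fit

lod = 3.3 * sigma / slope                             # limit of detection, mg/L
loq = 10.0 * sigma / slope                            # limit of quantification, mg/L

def fe_concentration(sample_signal):
    """Invert the calibration line for an unknown vitreous humour sample."""
    return (sample_signal - intercept) / slope

print(f"slope={slope:.4f}, LOD={lod:.2f} mg/L, LOQ={loq:.2f} mg/L")
print(f"sample at signal 0.35 -> {fe_concentration(0.35):.2f} mg/L Fe2+")
```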

  10. Efficient Bayesian estimates for discrimination among topologically different systems biology models.

    Science.gov (United States)

    Hagen, David R; Tidor, Bruce

    2015-02-01

    A major effort in systems biology is the development of mathematical models that describe complex biological systems at multiple scales and levels of abstraction. Determining the topology-the set of interactions-of a biological system from observations of the system's behavior is an important and difficult problem. Here we present and demonstrate new methodology for efficiently computing the probability distribution over a set of topologies based on consistency with existing measurements. Key features of the new approach include derivation in a Bayesian framework, incorporation of prior probability distributions of topologies and parameters, and use of an analytically integrable linearization based on the Fisher information matrix that is responsible for large gains in efficiency. The new method was demonstrated on a collection of four biological topologies representing a kinase and phosphatase that operate in opposition to each other with either processive or distributive kinetics, giving 8-12 parameters for each topology. The linearization produced an approximate result very rapidly (CPU minutes) that was highly accurate on its own, as compared to a Monte Carlo method guaranteed to converge to the correct answer but at greater cost (CPU weeks). The Monte Carlo method developed and applied here used the linearization method as a starting point and importance sampling to approach the Bayesian answer in acceptable time. Other inexpensive methods to estimate probabilities produced poor approximations for this system, with likelihood estimation showing its well-known bias toward topologies with more parameters and the Akaike and Schwarz Information Criteria showing a strong bias toward topologies with fewer parameters. These results suggest that this linear approximation may be an effective compromise, providing an answer whose accuracy is near the true Bayesian answer, but at a cost near the common heuristics.
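    The linearized evidence computation described above is in the spirit of a Laplace (Gaussian) approximation around the MAP estimate. The sketch below shows a generic version of that approximation for comparing two toy "topologies"; it uses the BFGS inverse-Hessian as a stand-in for Fisher-information-based curvature and is not the authors' code, and the toy models are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_evidence(neg_log_post, theta0):
    """Laplace approximation to the log marginal likelihood of one model:
    log Z ~= -neg_log_post(theta_MAP) + (d/2) log(2*pi) - 0.5 log det H,
    with H the Hessian of the negative log posterior at the MAP."""
    res = minimize(neg_log_post, theta0, method="BFGS")
    d = res.x.size
    H = np.linalg.inv(res.hess_inv)            # BFGS returns an inverse-Hessian estimate
    sign, logdet = np.linalg.slogdet(H)
    return -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

def topology_probabilities(neg_log_posts, starts):
    """Posterior probability of each candidate topology (equal topology priors)."""
    log_z = np.array([laplace_log_evidence(f, t0) for f, t0 in zip(neg_log_posts, starts)])
    w = np.exp(log_z - log_z.max())
    return w / w.sum()

# Toy usage: two 'topologies' = two parametric models of the same data.
rng = np.random.default_rng(4)
data = rng.normal(1.0, 1.0, size=50)
x = np.linspace(0.0, 1.0, data.size)
nlp_a = lambda th: 0.5 * np.sum((data - th[0]) ** 2) + 0.5 * th[0] ** 2 / 10.0
nlp_b = lambda th: 0.5 * np.sum((data - th[0] - th[1] * x) ** 2) + 0.5 * np.sum(th ** 2) / 10.0
probs = topology_probabilities([nlp_a, nlp_b], [np.zeros(1), np.zeros(2)])
```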

  11. Estimating Origin-Destination Matrices Using AN Efficient Moth Flame-Based Spatial Clustering Approach

    Science.gov (United States)

    Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali

    2017-09-01

    Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyse and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.

  12. ESTIMATING ORIGIN-DESTINATION MATRICES USING AN EFFICIENT MOTH FLAME-BASED SPATIAL CLUSTERING APPROACH

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2017-09-01

    Full Text Available Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyse and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.

  13. A harmonized calculation model for transforming EU bottom-up energy efficiency indicators into empirical estimates of policy impacts

    International Nuclear Information System (INIS)

    Horowitz, Marvin J.; Bertoldi, Paolo

    2015-01-01

    This study is an impact analysis of European Union (EU) energy efficiency policy that employs both top-down energy consumption data and bottom-up energy efficiency statistics or indicators. As such, it may be considered a contribution to the effort called for in the EU's 2006 Energy Services Directive (ESD) to develop a harmonized calculation model. Although this study does not estimate the realized savings from individual policy measures, it does provide estimates of realized energy savings for energy efficiency policy measures in aggregate. Using fixed effects panel models, the annual cumulative savings in 2011 of combined household and manufacturing sector electricity and natural gas usage attributed to EU energy efficiency policies since 2000 is estimated to be 1136 PJ; the savings attributed to energy efficiency policies since 2006 is estimated to be 807 PJ, or the equivalent of 5.6% of 2011 EU energy consumption. As well as its contribution to energy efficiency policy analysis, this study adds to the development of methods that can improve the quality of information provided by standardized energy efficiency and sustainable resource indexes. - Highlights: • Impact analysis of European Union energy efficiency policy. • Harmonization of top-down energy consumption and bottom-up energy efficiency indicators. • Fixed effects models for Member States for household and manufacturing sectors and combined electricity and natural gas usage. • EU energy efficiency policies since 2000 are estimated to have saved 1136 Petajoules. • Energy savings attributed to energy efficiency policies since 2006 are 5.6 percent of 2011 combined electricity and natural gas usage.

  14. Estimation of postmortem interval (PMI) based on empty puparia of Phormia regina (Meigen) (Diptera: Calliphoridae) and third larval stage of Necrodes littoralis (L.) (Coleoptera: Silphidae) - Advantages of using different PMI indicators.

    Science.gov (United States)

    Bajerlein, D; Taberski, D; Matuszewski, S

    2018-04-01

    On 16 July 2015, a body of a 64-year-old man in advanced decomposition was found in an open area of the suburb of Śrem (western Poland). Postmortem interval (PMI) was estimated by forensic pathologist for 3-6 weeks. Insects were sampled from the cadaver and the soil from below the cadaver. Empty puparia of Phormia regina were the most developmentally advanced specimens of blowflies. Moreover, third instar larva of Necrodes littoralis was collected directly from the cadaver. For the estimation of minimum PMI from puparia of P. regina, thermal summation method was used to estimate the total immature development interval of this species. In the case of larval N. littoralis, the pre-appearance interval (PAI) was estimated using temperature method and the development interval (DI) using thermal summation method. Average daily temperatures from the nearby weather station were used, as well as the weather station temperatures corrected by 1 °C and 2 °C. The estimates were as follows: 36-38 days using empty puparia of P. regina and 37-40 days using larva of N. littoralis (for the uncorrected temperatures), 31-34 days using both P. regina and N. littoralis (temperatures corrected by +1 °C), 24-27 days using P. regina and 28-29 days using N. littoralis (temperatures corrected by +2 °C). It was concluded that death occurred 24-40 days before the body was found and most probably 24-34 days before the body was found. This is the first report when PMI was approximated by the age estimates combined with the PAI estimates. Moreover, the case demonstrates the advantages of using different entomological indicators and an urgent need for the more robust developmental model for N. littoralis, as it proved to be highly useful for the estimation of PMI. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
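    The thermal summation step used for the development interval can be sketched as an accumulated-degree-day calculation: daily mean temperatures above a base temperature are summed until a species-specific thermal constant K is reached. The base temperature and K below are placeholders for illustration, not the validated constants for P. regina or N. littoralis.

```python
import numpy as np

def development_interval(daily_mean_temps, base_temp, thermal_constant_K):
    """Thermal summation: count the days needed for accumulated degree-days (ADD)
    above `base_temp` to reach the thermal constant K.
    Returns the number of days, or None if K is not reached."""
    add = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        add += max(temp - base_temp, 0.0)
        if add >= thermal_constant_K:
            return day
    return None

# Placeholder values for illustration only (not validated species constants).
weather_station_temps = np.array([19.5, 21.0, 22.3, 20.1, 18.7, 19.9, 21.4] * 6)
interval_uncorrected = development_interval(weather_station_temps, base_temp=10.0,
                                            thermal_constant_K=260.0)
# The case report also repeats the estimate with station temperatures corrected
# upwards to approximate conditions at the death scene.
interval_plus1 = development_interval(weather_station_temps + 1.0, 10.0, 260.0)
interval_plus2 = development_interval(weather_station_temps + 2.0, 10.0, 260.0)
```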

  15. An improved routine for the fast estimate of ion cyclotron heating efficiency in tokamak plasmas

    International Nuclear Information System (INIS)

    Brambilla, M.

    1992-02-01

    The subroutine ICEVAL for the rapid simulation of Ion Cyclotron Heating in tokamak plasmas is based on analytic estimates of the wave behaviour near resonances, and on drastic but reasonable simplifications of the real geometry. The subroutine has been rewritten to improve the model and to facilitate its use as input in transport codes. In the new version the influence of quasilinear minority heating on the damping efficiency is taken into account using the well-known Stix analytic approximation. Among other improvements are: a) the possibility of considering plasmas with more than two ion species; b) inclusion of Landau, Transit Time and collisional damping on the electrons non localised at resonances; c) better models for the antenna spectrum and for the construction of the power deposition profiles. The results of ICEVAL are compared in detail with those of the full-wave code FELICE for the case of Hydrogen minority heating in a Deuterium plasma; except for details which depend on the excitation of global eigenmodes, agreement is excellent. ICEVAL is also used to investigate the enhancement of the absorption efficiency due to quasilinear heating of the minority ions. The effect is a strongly non-linear function of the available power, and decreases rapidly with increasing concentration. For parameters typical of Asdex Upgrade plasmas, about 4 MW are required to produce a significant increase of the single-pass absorption at concentrations between 10 and 20%. (orig.)

  16. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
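    A rough Python analogue of the "smoothing and thresholding" global update (the authors provide Matlab code in their paper) alternates between smoothing each label's data term, assigning labels by a pixelwise argmin, and re-estimating the region means. The sketch below illustrates that scheme on a synthetic image; it is not the published implementation, and all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multilabel_chan_vese(image, n_labels=3, n_iters=20, smooth_sigma=2.0):
    """Alternate between (a) a fast approximate multi-label update that smooths
    each label's data term and takes a pixelwise argmin (the 'smoothing and
    thresholding' step), and (b) re-estimating the per-region mean intensities."""
    # Initialise region means from intensity quantiles.
    means = np.quantile(image, np.linspace(0.1, 0.9, n_labels))
    labels = np.zeros(image.shape, dtype=int)
    for _ in range(n_iters):
        # Data term of each label, spatially smoothed to act as regularisation.
        costs = np.stack([gaussian_filter((image - m) ** 2, smooth_sigma) for m in means])
        labels = np.argmin(costs, axis=0)
        # Parameter update: mean intensity of each region.
        for k in range(n_labels):
            mask = labels == k
            if mask.any():
                means[k] = image[mask].mean()
    return labels, means

# Usage sketch on a synthetic piecewise-constant image with noise.
rng = np.random.default_rng(5)
img = np.zeros((128, 128))
img[:, 40:85] = 0.5
img[30:90, 90:120] = 1.0
img += 0.15 * rng.standard_normal(img.shape)
labels, means = multilabel_chan_vese(img, n_labels=3)
```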

  17. Feed Forward Artificial Neural Network Model to Estimate the TPH Removal Efficiency in Soil Washing Process

    Directory of Open Access Journals (Sweden)

    Hossein Jafari Mansoorian

    2017-01-01

    Full Text Available Background & Aims of the Study: A feed forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors including pH, shaking speed, surfactant concentration and contact time were used to describe the removal of TPH as a dependent variable in an FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for model testing. The performance of the model was compared with linear regression and assessed using the Root Mean Square Error (RMSE) as a goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, an FFANN model with a 4-3-1 layer structure and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were found to be 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, focusing on the optimization of the soil washing process with regard to shaking speed, contact time, surfactant concentration and pH can improve the TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN for the assessment of the soil washing process and the control of petroleum hydrocarbon emission into the environment.
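    The modelling setup (four regressors predicting removal efficiency with a small feed-forward network, an approximately 85/15 train–test split, and RMSE as the goodness-of-fit measure) can be reproduced in outline as below. The data are synthetic placeholders and scikit-learn's MLPRegressor merely stands in for whatever toolkit the authors used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic placeholder data: pH, shaking speed, surfactant concentration,
# contact time -> TPH removal efficiency (%). Not the study's measurements.
rng = np.random.default_rng(6)
X = rng.uniform([4, 100, 0.5, 10], [9, 300, 5.0, 120], size=(200, 4))
y = (30 + 4 * X[:, 0] + 0.05 * X[:, 1] + 5 * X[:, 2] + 0.1 * X[:, 3]
     + rng.normal(0, 3, 200)).clip(0, 100)

# ~85% of observations for training, ~15% for testing, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

# A single hidden layer of 3 neurons approximates the reported 4-3-1 structure.
model = MLPRegressor(hidden_layer_sizes=(3,), learning_rate_init=0.01,
                     max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)

rmse_train = mean_squared_error(y_tr, model.predict(X_tr)) ** 0.5
rmse_test = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
```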

  18. Towards the Estimation of an Efficient Benchmark Portfolio: The Case of Croatian Emerging Market

    Directory of Open Access Journals (Sweden)

    Dolinar Denis

    2017-04-01

    Full Text Available The fact that cap-weighted indices provide an inefficient risk-return trade-off is well known today. Various research approaches have evolved suggesting alternatives to cap-weighting in an effort to come up with a more efficient market index benchmark. In this paper we aim to use such an approach and focus on the Croatian capital market. We apply the statistical shrinkage method suggested by Ledoit and Wolf (2004) to estimate the covariance matrix and follow the work of Amenc et al. (2011) to obtain estimates of expected returns that rely on the risk-return trade-off. Empirical findings for the proposed portfolio optimization include out-of-sample and robustness testing. This way we compare the performance of the capital-weighted benchmark to the alternative and ensure that consistency is achieved in different volatility environments. Research findings do not seem to support relevant research results for the developed markets but rather complement earlier research (Zoričić et al., 2014).
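    The two estimation ingredients named above can be combined in a short sketch: a Ledoit–Wolf shrinkage estimate of the covariance matrix feeding a simple efficient-portfolio construction. The expected-return proxy below is a naive placeholder rather than the Amenc et al. risk-based estimate, and the return data are simulated.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns):
    """Minimum-variance weights w ~ Sigma^-1 1, using a Ledoit-Wolf shrinkage
    estimate of the covariance matrix (no long-only constraint imposed)."""
    sigma = LedoitWolf().fit(returns).covariance_
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

def tangency_weights(returns, expected_returns):
    """Maximum-Sharpe (tangency) weights w ~ Sigma^-1 mu, with mu supplied
    externally, e.g. from a risk-based proxy rather than historical means."""
    sigma = LedoitWolf().fit(returns).covariance_
    w = np.linalg.solve(sigma, expected_returns)
    return w / w.sum()

# Usage sketch with simulated monthly returns for 15 stocks.
rng = np.random.default_rng(7)
R = rng.normal(0.005, 0.06, size=(120, 15))
w_mv = min_variance_weights(R)
mu_proxy = R.std(axis=0)               # naive placeholder for a risk-based return proxy
w_tan = tangency_weights(R, mu_proxy)
```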

  19. Estimation of Radiative Efficiency of Chemicals with Potentially Significant Global Warming Potential.

    Science.gov (United States)

    Betowski, Don; Bevington, Charles; Allison, Thomas C

    2016-01-19

    Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.

  20. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T 1 and T 2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T 1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T 2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

  1. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon 41 an air activation product and fission product noble gas xenon 133. For estimating the concentration, the experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made using theoretical approach to validate the experimental method of efficiency estimation. Two analytical models named relative flux model and absolute activity model were developed independently of each other. Attention is focussed on the efficiencies for 41 Ar and 133 Xe. Results show that the present method of sampling and analysis using 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  2. The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

    Directory of Open Access Journals (Sweden)

    Müller Kai F

    2005-10-01

    Full Text Available Abstract. Background: For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results: For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per

  3. Setting thresholds to varying blood pressure monitoring intervals differentially affects risk estimates associated with white-coat and masked hypertension in the population

    DEFF Research Database (Denmark)

    Asayama, Kei; Thijs, Lutgarde; Li, Yan

    2014-01-01

    Outcome-driven recommendations about time intervals during which ambulatory blood pressure should be measured to diagnose white-coat or masked hypertension are lacking. We cross-classified 8237 untreated participants (mean age, 50.7 years; 48.4% women) enrolled in 12 population studies, using ≥14...

  4. Setting thresholds to varying blood pressure monitoring intervals differentially affects risk estimates associated with white-coat and masked hypertension in the population.

    Science.gov (United States)

    Asayama, Kei; Thijs, Lutgarde; Li, Yan; Gu, Yu-Mei; Hara, Azusa; Liu, Yan-Ping; Zhang, Zhenyu; Wei, Fang-Fei; Lujambio, Inés; Mena, Luis J; Boggia, José; Hansen, Tine W; Björklund-Bodegård, Kristina; Nomura, Kyoko; Ohkubo, Takayoshi; Jeppesen, Jørgen; Torp-Pedersen, Christian; Dolan, Eamon; Stolarz-Skrzypek, Katarzyna; Malyutina, Sofia; Casiglia, Edoardo; Nikitin, Yuri; Lind, Lars; Luzardo, Leonella; Kawecka-Jaszcz, Kalina; Sandoya, Edgardo; Filipovský, Jan; Maestre, Gladys E; Wang, Jiguang; Imai, Yutaka; Franklin, Stanley S; O'Brien, Eoin; Staessen, Jan A

    2014-11-01

    Outcome-driven recommendations about time intervals during which ambulatory blood pressure should be measured to diagnose white-coat or masked hypertension are lacking. We cross-classified 8237 untreated participants (mean age, 50.7 years; 48.4% women) enrolled in 12 population studies, using ≥140/≥90, ≥130/≥80, ≥135/≥85, and ≥120/≥70 mm Hg as hypertension thresholds for conventional, 24-hour, daytime, and nighttime blood pressure. White-coat hypertension was hypertension on conventional measurement with ambulatory normotension, the opposite condition being masked hypertension. Intervals used for classification of participants were daytime, nighttime, and 24 hours, first considered separately, and next combined as 24 hours plus daytime or plus nighttime, or plus both. Depending on time intervals chosen, white-coat and masked hypertension frequencies ranged from 6.3% to 12.5% and from 9.7% to 19.6%, respectively. During 91 046 person-years, 729 participants experienced a cardiovascular event. In multivariable analyses with normotension during all intervals of the day as reference, hazard ratios associated with white-coat hypertension progressively weakened considering daytime only (1.38; P=0.033), nighttime only (1.43; P=0.0074), 24 hours only (1.21; P=0.20), 24 hours plus daytime (1.24; P=0.18), 24 hours plus nighttime (1.15; P=0.39), and 24 hours plus daytime and nighttime (1.16; P=0.41). The hazard ratios comparing masked hypertension with normotension were all significant. In conclusion, diagnosis of white-coat hypertension requires setting thresholds simultaneously to 24 hours, daytime, and nighttime blood pressure. Although any time interval suffices to diagnose masked hypertension, as proposed in current guidelines, full 24-hour recordings remain standard in clinical practice. © 2014 American Heart Association, Inc.

  5. Setting Thresholds to Varying Blood Pressure Monitoring Intervals Differentially Affects Risk Estimates Associated With White-Coat and Masked Hypertension in the Population

    Science.gov (United States)

    Asayama, Kei; Thijs, Lutgarde; Li, Yan; Gu, Yu-Mei; Hara, Azusa; Liu, Yan-Ping; Zhang, Zhenyu; Wei, Fang-Fei; Lujambio, Inés; Mena, Luis J.; Boggia, José; Hansen, Tine W.; Björklund-Bodegård, Kristina; Nomura, Kyoko; Ohkubo, Takayoshi; Jeppesen, Jørgen; Torp-Pedersen, Christian; Dolan, Eamon; Stolarz-Skrzypek, Katarzyna; Malyutina, Sofia; Casiglia, Edoardo; Nikitin, Yuri; Lind, Lars; Luzardo, Leonella; Kawecka-Jaszcz, Kalina; Sandoya, Edgardo; Filipovský, Jan; Maestre, Gladys E.; Wang, Jiguang; Imai, Yutaka; Franklin, Stanley S.; O’Brien, Eoin; Staessen, Jan A.

    2015-01-01

    Outcome-driven recommendations about time intervals during which ambulatory blood pressure should be measured to diagnose white-coat or masked hypertension are lacking. We cross-classified 8237 untreated participants (mean age, 50.7 years; 48.4% women) enrolled in 12 population studies, using ≥140/≥90, ≥130/≥80, ≥135/≥85, and ≥120/≥70 mm Hg as hypertension thresholds for conventional, 24-hour, daytime, and nighttime blood pressure. White-coat hypertension was hypertension on conventional measurement with ambulatory normotension, the opposite condition being masked hypertension. Intervals used for classification of participants were daytime, nighttime, and 24 hours, first considered separately, and next combined as 24 hours plus daytime or plus nighttime, or plus both. Depending on time intervals chosen, white-coat and masked hypertension frequencies ranged from 6.3% to 12.5% and from 9.7% to 19.6%, respectively. During 91 046 person-years, 729 participants experienced a cardiovascular event. In multivariable analyses with normotension during all intervals of the day as reference, hazard ratios associated with white-coat hypertension progressively weakened considering daytime only (1.38; P=0.033), nighttime only (1.43; P=0.0074), 24 hours only (1.21; P=0.20), 24 hours plus daytime (1.24; P=0.18), 24 hours plus nighttime (1.15; P=0.39), and 24 hours plus daytime and nighttime (1.16; P=0.41). The hazard ratios comparing masked hypertension with normotension were all significant. In conclusion, diagnosis of white-coat hypertension requires setting thresholds simultaneously to 24 hours, daytime, and nighttime blood pressure. Although any time interval suffices to diagnose masked hypertension, as proposed in current guidelines, full 24-hour recordings remain standard in clinical practice. PMID:25135185

  6. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  7. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
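    The Poisson special case can be illustrated with a short simulation: fit a main-terms Poisson working model to randomized-trial data and read the treatment coefficient as an estimate of the marginal log rate ratio. In this toy example the working model happens to be correctly specified; the paper's point is that the estimator remains asymptotically unbiased even when it is not. statsmodels is assumed, and all variable names and parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5000

# Randomized treatment, one baseline covariate that affects the count outcome.
treatment = rng.integers(0, 2, size=n)
baseline = rng.normal(size=n)
rate = np.exp(0.2 + 0.5 * treatment + 0.8 * baseline)   # true conditional rates
y = rng.poisson(rate)

# Main-terms Poisson working model (possibly misspecified in general).
X = sm.add_constant(np.column_stack([treatment, baseline]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
log_rr_model = fit.params[1]            # coefficient on treatment

# Unadjusted marginal log rate ratio for comparison.
log_rr_marginal = np.log(y[treatment == 1].mean() / y[treatment == 0].mean())

print(f"model-based estimate: {log_rr_model:.3f}, crude marginal: {log_rr_marginal:.3f}")
```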

  8. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    Science.gov (United States)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.

  9. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    suitable GOF metric with strong correlation with the actual error of the scatter fit, S{sub F}. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
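    The fitting step, representing a noisy Monte Carlo scatter estimate by a limited sum of sine and cosine terms via a low-pass filtered FFT, can be sketched in one dimension as below (the actual projections are two-dimensional). This is an illustration of the idea only, not the CMCF code; the profile, noise model and mode count are synthetic assumptions.

```python
import numpy as np

def lowpass_fourier_fit(noisy_profile, n_modes=8):
    """Fit a noisy 1D Monte Carlo scatter profile with a limited sum of sine and
    cosine terms by zeroing all but the lowest `n_modes` Fourier coefficients."""
    spectrum = np.fft.rfft(noisy_profile)
    spectrum[n_modes:] = 0.0                 # keep only low-frequency content
    return np.fft.irfft(spectrum, n=noisy_profile.size)

# Usage sketch: smooth, slowly varying 'true' scatter plus Poisson-like MC noise.
rng = np.random.default_rng(9)
x = np.linspace(0, 1, 512)
true_scatter = 200 * np.exp(-((x - 0.5) ** 2) / 0.08) + 50
mc_estimate = rng.poisson(true_scatter).astype(float)   # few-history, noisy estimate
fitted = lowpass_fourier_fit(mc_estimate, n_modes=8)
rel_error = np.abs(fitted - true_scatter).mean() / true_scatter.mean()
```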

  10. On the estimation of the steam generator maintenance efficiency by the means of probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Cizelj, L.

    1994-10-01

    In this report, an original probabilistic model aimed to assess the efficiency of particular maintenance strategy in terms of tube failure probability is proposed. The model concentrates on axial through wall cracks in the residual stress dominated tube expansion transition zone. It is based on the recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to model the uncertainties connected to non-destructive examination technique (e.g., measurement errors, non-detection probability). First and second order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that those methods are applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together. Also, the model accounts for scatter in residual and operational stresses due to the random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs load data, the non-destructive examination records of the crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL) [de

  11. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  12. A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity

    Directory of Open Access Journals (Sweden)

    Marcelo E. Cassiolato

    2002-06-01

    Full Text Available Water-soluble plant organic compounds have been proposed to be efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts to neutralize soil acidity. Plant samples were dried at 65ºC for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionized water, shake for 8 h at 175 rpm and filter. Three laboratory methods were evaluated: sigma (Ca+Mg+K) of the plant extracts; electrical conductivity of the plant extracts; and titration of plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related with soil reaction. Increasing sigma (Ca+Mg+K), electrical conductivity and the volume of NaOH solution spent to neutralize the H+ ion of the plant extracts were correlated with the effect of plant extract on increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extract to neutralize soil acidity because it is easily adapted for routine analysis and uses simple instrumentation and materials.

  13. Practical Considerations about Expected A Posteriori Estimation in Adaptive Testing: Adaptive A Priori, Adaptive Correction for Bias, and Adaptive Integration Interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
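    For context, the basic (non-adaptive) EAP computation that Raiche's adaptive a priori, adaptive bias-correction and adaptive integration-interval variants modify can be sketched as a grid integration of the 2PL likelihood against a normal prior. The item parameters and response pattern below are hypothetical.

```python
import numpy as np

def eap_estimate(responses, a, b, prior_mean=0.0, prior_sd=1.0, n_points=81):
    """Expected a posteriori (EAP) proficiency estimate for 2PL items,
    computed by numerical integration on a fixed theta grid."""
    theta = np.linspace(prior_mean - 4 * prior_sd, prior_mean + 4 * prior_sd, n_points)
    logits = a[None, :] * (theta[:, None] - b[None, :])
    p = 1.0 / (1.0 + np.exp(-logits))                      # 2PL item response function
    like = np.prod(p ** responses * (1 - p) ** (1 - responses), axis=1)
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    post = like * prior
    post /= post.sum()
    eap = np.sum(theta * post)
    psd = np.sqrt(np.sum((theta - eap) ** 2 * post))       # posterior standard deviation
    return eap, psd

# Usage sketch: five administered items and a hypothetical response pattern.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # difficulties
x = np.array([1, 1, 1, 0, 0])
theta_hat, theta_sd = eap_estimate(x, a, b)
```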

  14. Estimation of technical efficiency and it's determinants in the hybrid maize production in district chiniot: a cobb douglas model approach

    International Nuclear Information System (INIS)

    Naqvi, S.A.A.; Ashfaq, M.

    2014-01-01

    High yielding crop like maize is very important for countries like Pakistan, which is third cereal crop after wheat and rice. Maize accounts for 4.8 percent of the total cropped area and 4.82 percent of the value of agricultural production. It is grown all over the country but major areas are Sahiwal, Okara and Faisalabad. Chiniot is one of the distinct agroecological domains of central Punjab for the maize cultivation, that's why this district was selected for the study and the technical efficiency of hybrid maize farmers was estimated. The primary data of 120 farmers, 40 farmers from each of the three tehsils of Chiniot were collected in the year 2011. Causes of low yields for various farmers than the others, while using the same input bundle were estimated. The managerial factors causing the inefficiency of production were also measured. The average technical efficiency was estimated to be 91 percent, while it was found to be 94.8, 92.7 and 90.8 for large, medium and small farmers, respectively. Stochastic frontier production model was used to measure technical efficiency. Statistical software Frontier 4.1 was used to analyse the data to generate inferences because the estimates of efficiency were produced as a direct output from package. It was concluded that the efficiency can be enhanced by covering the inefficiency from the environmental variables, farmers personal characteristics and farming conditions. (author)

  15. Estimating photosynthetic radiation use efficiency using incident light and photosynthesis of individual leaves.

    Science.gov (United States)

    Rosati, A; Dejong, T M

    2003-06-01

    It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, 'daily' photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthesis was estimated from the photosynthetic responses to photosynthetically active radiation (PAR) and from the incident PAR measured on individual leaves during clear and overcast days. Plants were grown with either abundant or scarce N fertilization. Both net and gross daily photosynthesis of leaves were linearly related to daily incident PAR exposure of individual leaves, which implies constant PhRUE over a day throughout the canopy. The slope of these relationships (i.e. PhRUE) increased with N fertilization. When the relationship was calculated for hourly instead of daily periods, the regressions were curvilinear, implying that PhRUE changed with time of the day and incident radiation. Thus, linearity (i.e. constant PhRUE) was achieved only when data were integrated over the entire day. Using average PAR in place of instantaneous incident PAR increased the slope of the relationship between daily photosynthesis and incident PAR of individual leaves, and the regression became curvilinear. The slope of the relationship between daily gross photosynthesis and incident PAR of individual leaves increased for an overcast compared with a clear day, but the slope remained constant for net photosynthesis. This suggests that net PhRUE of all leaves (and thus of the whole canopy) may be constant when integrated over a day, not only when the incident PAR changes with depth in the canopy, but also when it varies on the same leaf owing to changes in daily incident PAR above the canopy. The

  16. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
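
    A minimal sketch of the estimator the abstract advocates: the Poisson negative log-likelihood for histogram counts is minimized directly. This is not the authors' modified Levenberg-Marquardt routine; a general-purpose optimizer and a hypothetical single-exponential-plus-background decay model are assumed purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_nll(params, t, counts):
    """Negative log-likelihood for Poisson-distributed bin counts
    under a hypothetical single-exponential decay model."""
    amplitude, rate, background = params
    mu = amplitude * np.exp(-rate * t) + background   # expected counts per bin
    mu = np.clip(mu, 1e-12, None)                     # keep log() finite
    # Up to a model-independent constant, the Poisson NLL is:
    return np.sum(mu - counts * np.log(mu))

# Simulated fluorescence-decay histogram (illustration only)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 100)                       # bin centres (e.g. ns)
counts = rng.poisson(50.0 * np.exp(-0.8 * t) + 2.0)

fit = minimize(poisson_nll, x0=[30.0, 0.5, 1.0],
               args=(t, counts), method="Nelder-Mead")
print("amplitude, rate, background =", fit.x)
```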

  17. The efficiency of modified jackknife and ridge type regression estimators: a comparison

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2008-09-01

    Full Text Available A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well known estimation procedures are often suggested in the literature. They are Generalized Ridge Regression (GRR) estimation suggested by Hoerl and Kennard and the Jackknifed Ridge Regression (JRR) estimation suggested by Singh et al. The GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression Estimator (MJR). It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and it is superior to both GRR and JRR estimators, using the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
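
    For orientation, the shrinkage mechanism that GRR, JRR, and MJR all build on can be shown with plain ridge regression (a single constant k rather than the generalized or jackknifed variants discussed above); the closed form below is the standard textbook estimator, not the MJR proposed in the paper.

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Ordinary ridge regression: beta = (X'X + k I)^{-1} X'y.
    A single shrinkage constant k; GRR/JRR/MJR generalize this idea."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Illustrative use on nearly collinear predictors
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
print(ridge_estimator(X, y, k=0.0))   # unstable least squares solution
print(ridge_estimator(X, y, k=1.0))   # shrunken, more stable solution
```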

  18. Efficient estimation of the robustness region of biological models with oscillatory behavior.

    Directory of Open Access Journals (Sweden)

    Mochamad Apri

    Full Text Available Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. In particular, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed, such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may also be time consuming. In the literature, several methods are proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find a good agreement with the existing knowledge about this model. When we apply the new method to the high dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was realized until now.

  19. Effect of LET on the efficiency of dose re-estimation in LiF using uv photo-transfer

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, J A; Baker, D M; Marshall, M; Budd, T [UKAEA Atomic Energy Research Establishment, Harwell. Environmental and Medical Sciences Div.

    1980-09-01

    Glow curves from TLD600 and TLD700 extruded rods exposed to gamma-, X- and neutron radiations have been compared before and after uv photo-transfer. Re-estimation efficiency increases with LET by an amount which varies from batch to batch.

  20. Oracle Efficient Estimation and Forecasting with the Adaptive LASSO and the Adaptive Group LASSO in Vector Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Callot, Laurent

    We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...

  1. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

    Institute of Scientific and Technical Information of China (English)

    叶宝娟; 温忠麟

    2012-01-01

    Reliability is very important in evaluating the quality of a test. Based on the confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. In a simulation study, it was found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
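
    A hedged sketch of the unidimensional case: composite reliability ω = (Σλ)² / ((Σλ)² + Σθ) and a generic delta-method standard error SE = √(gᵀVg), where g is the gradient of ω with respect to the loadings and error variances. The parameter covariance matrix V would come from the fitted CFA; the values below are placeholders, and the multidimensional formula deduced in the paper is not reproduced here.

```python
import numpy as np

def composite_reliability(loadings, error_vars):
    """omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    s = np.sum(loadings)
    return s**2 / (s**2 + np.sum(error_vars))

def delta_se(loadings, error_vars, cov):
    """Delta-method SE of omega; cov is the covariance matrix of
    (loadings..., error_vars...) from the fitted CFA (placeholder here)."""
    lam, theta = np.asarray(loadings), np.asarray(error_vars)
    s, t = lam.sum(), theta.sum()
    d_lam = 2 * s * t / (s**2 + t) ** 2 * np.ones_like(lam)    # d omega / d lambda_i
    d_theta = -(s**2) / (s**2 + t) ** 2 * np.ones_like(theta)  # d omega / d theta_j
    g = np.concatenate([d_lam, d_theta])
    return float(np.sqrt(g @ cov @ g))

lam = np.array([0.7, 0.8, 0.6])
theta = np.array([0.51, 0.36, 0.64])
cov = 0.001 * np.eye(6)                       # placeholder parameter covariance
omega = composite_reliability(lam, theta)
se = delta_se(lam, theta, cov)
print(omega, (omega - 1.96 * se, omega + 1.96 * se))  # 95% confidence interval
```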

  2. Estimation of the reproductive number and the serial interval in early phase of the 2009 influenza A/H1N1 pandemic in the USA

    NARCIS (Netherlands)

    White, Laura Forsberg; Wallinga, Jacco; Finelli, Lyn; Reed, Carrie; Riley, Steven; Lipsitch, Marc; Pagano, Marcello

    2009-01-01

    Background: The United States was the second country to have a major outbreak of novel influenza A/H1N1 in what has become a new pandemic. Appropriate public health responses to this pandemic depend in part on early estimates of key epidemiological parameters of the virus in defined populations.

  3. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

    Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimation of the optimal STI and can be calculated analytically by differentiating the equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code, and are based on system unavailability at the system level and on core damage frequency at the plant level.
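
    The component-level optimum mentioned above can be illustrated with the textbook single-component approximation of mean unavailability, q(T) ≈ λT/2 + t_r/T, where λ is the failure rate, T the surveillance test interval and t_r the outage time per test; setting dq/dT = 0 gives T* = √(2 t_r/λ). This is a generic illustration, not the system- or plant-level PRA models used in the paper.

```python
import math

def optimal_test_interval(failure_rate, outage_time):
    """Minimize q(T) = failure_rate*T/2 + outage_time/T, giving T* = sqrt(2*outage_time/failure_rate)."""
    return math.sqrt(2.0 * outage_time / failure_rate)

lam = 1.0e-5          # failures per hour (illustrative value)
t_r = 4.0             # hours of unavailability per test (illustrative value)
T_opt = optimal_test_interval(lam, t_r)
q_opt = lam * T_opt / 2.0 + t_r / T_opt
print(f"optimal STI ~ {T_opt:.0f} h, mean unavailability ~ {q_opt:.2e}")
```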

  4. On the Estimation of Efficient Usage of Organic Fuel in the Cycle of Steam Power Installations

    Directory of Open Access Journals (Sweden)

    A. P. Nesenchuk

    2013-01-01

    Full Text Available Tendencies of power engineering development in the world are shown in this article. A thermodynamic analysis of the efficient usage of different types of fuel was carried out. The obtained result shows that, from the thermodynamic point of view, low-calorie fuel is more efficient to use at steam power stations than high-energy fuel.

  5. An Integrated Approach for Estimating the Energy Efficiency of Seventeen Countries

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2017-10-01

    Full Text Available Increased energy efficiency is one of the most effective ways to achieve climate change mitigation. This study aims to evaluate the energy efficiency of seventeen countries. The evaluation is based on an integrated method that combines the super slack-based (super SBM) model and the Malmquist productivity index (MPI) to investigate the energy efficiency of seventeen countries during the period of 2010–2015. The results in this study are that the United States, Colombia, Japan, China, and Saudi Arabia perform the best in energy efficiency, whereas Brazil, Russia, Indonesia, and India perform the worst during the entire sample period. The energy efficiency of these countries derived mainly from technological improvement. The study provides suggestions for the seventeen countries’ governments to control energy consumption and contribute to environmental protection.

  6. Estimating the Acquisition Price of Enshi Yulu Young Tea Shoots Using Near-Infrared Spectroscopy by the Back Propagation Artificial Neural Network Model in Conjunction with Backward Interval Partial Least Squares Algorithm

    Science.gov (United States)

    Wang, Sh.-P.; Gong, Z.-M.; Su, X.-Zh.; Liao, J.-Zh.

    2017-09-01

    Near infrared spectroscopy and the back propagation artificial neural network model in conjunction with the backward interval partial least squares algorithm were used to estimate the purchasing price of Enshi yulu young tea shoots. The near-infrared spectra regions most relevant to the tea shoots price model (5700.5-5935.8, 7613.6-7848.9, 8091.8-8327.1, 8331-8566.2, 9287.5-9522.5, and 9526.6-9761.9 cm-1) were selected using the backward interval partial least squares algorithm. The first five principal components that explained 99.96% of the variability in those selected spectral data were then used to calibrate the back propagation artificial neural network model of the tea shoots purchasing price. The performance of this model (coefficient of determination for prediction 0.9724; root-mean-square error of prediction 4.727) was superior to those of the back propagation artificial neural network model (coefficient of determination for prediction 0.8653, root-mean-square error of prediction 5.125) and the backward interval partial least squares model (coefficient of determination for prediction 0.5932, root-mean-square error of prediction 25.125). The acquisition price model with the combined backward interval partial least squares-back propagation artificial neural network algorithms can evaluate the price of Enshi yulu tea shoots accurately, quickly and objectively.

  7. Validation of an efficient visual method for estimating leaf area index ...

    African Journals Online (AJOL)

    This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...

  8. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  9. Application of independent component analysis for speech-music separation using an efficient score function estimation

    Science.gov (United States)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, score function estimation from samples of the observation signals (a combination of speech and music) is needed. The accuracy and the speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian mixture based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared to a separating algorithm based on the Minimum Mean Square Error estimator, indicate better performance and less processing time.

  10. A probabilistic approach for representation of interval uncertainty

    International Nuclear Information System (INIS)

    Zaman, Kais; Rangavajhala, Sirisha; McDonald, Mark P.; Mahadevan, Sankaran

    2011-01-01

    In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data where single estimates for the moments of data can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has been generally considered an NP-hard problem because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms are scalable in polynomial time with respect to increasing number of intervals. Using the bounds on moments computed using the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed as a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of empirical p-box in the literature. Several sets of interval data with different numbers of intervals and type of overlap are presented to demonstrate the proposed methods. As against the computationally expensive nested analysis that is typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.
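
    The flavor of the bounding problem can be conveyed with a small sketch: bounds on the mean of interval data follow directly from the endpoints, while bounds on the variance already require a search over the box of admissible point values. The optimizer below is a generic local scipy routine (so the upper variance bound it returns may be conservative), not the scalable algorithms proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval data: each row is [lower, upper]
intervals = np.array([[1.0, 2.0], [1.5, 3.0], [2.5, 4.0], [0.5, 1.0]])
lo, hi = intervals[:, 0], intervals[:, 1]

# First moment: bounds follow directly from the interval endpoints.
mean_bounds = (lo.mean(), hi.mean())

# Second moment (variance): search over point configurations inside the box.
x0 = (lo + hi) / 2
bounds = list(zip(lo, hi))
var_min = minimize(lambda x: np.var(x), x0, bounds=bounds).fun      # convex: global
var_max = -minimize(lambda x: -np.var(x), x0, bounds=bounds).fun    # local solution only
print("mean bounds:", mean_bounds, "variance bounds:", (var_min, var_max))
```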

  11. Efficient Semiparametric Marginal Estimation for the Partially Linear Additive Model for Longitudinal/Clustered Data

    KAUST Repository

    Carroll, Raymond; Maity, Arnab; Mammen, Enno; Yu, Kyusang

    2009-01-01

    We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.

  12. Efficient Semiparametric Marginal Estimation for the Partially Linear Additive Model for Longitudinal/Clustered Data

    KAUST Repository

    Carroll, Raymond

    2009-04-23

    We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.

  13. The Approach to an Estimation of a Local Area Network Functioning Efficiency

    Directory of Open Access Journals (Sweden)

    M. M. Taraskin

    2010-09-01

    Full Text Available In this article, the authors call attention to the choice of a system of metrics that permits a qualitative assessment of local area network functioning efficiency under conditions of computer attacks.

  14. An estimation of the energy and exergy efficiencies for the energy resources consumption in the transportation sector in Malaysia

    International Nuclear Information System (INIS)

    Saidur, R.; Sattar, M.A.; Masjuki, H.H.; Ahmed, S.; Hashim, U.

    2007-01-01

    The purpose of this work is to apply useful energy and exergy analysis models to different modes of transport in Malaysia and to compare the results with those of a few other countries. In this paper, energy and exergy efficiencies of the various sub-sectors are presented by considering the energy and exergy flows from 1995 to 2003. The respective flow diagrams used to find the overall energy and exergy efficiencies of the Malaysian transportation sector are also presented. The estimated overall energy efficiency ranges from 22.74% (1999) to 22.98% (1998) with a mean of 22.82±0.06%, and the overall exergy efficiency ranges from 22.44% (2000) to 22.82% (1998) with a mean of 22.55±0.12%. The results are compared with respect to the present energy and exergy efficiencies in each sub-sector. The transportation sector used about 40% of the total energy consumed in 2002; therefore, it is important to identify the energy and exergy flows and the pertinent losses. The road sub-sector appeared to be the most efficient compared to the air and marine sub-sectors. It was also found that the energy and exergy efficiencies of the Malaysian transportation sector are lower than those of Turkey but higher than those of Norway.

  15. DEREGULATION, FINANCIAL CRISIS, AND BANK EFFICIENCY IN TAIWAN: AN ESTIMATION OF UNDESIRABLE OUTPUTS

    OpenAIRE

    Liao, Chang-Sheng

    2018-01-01

    Purpose- This study investigates the undesirable impacts of outputs on bank efficiency and contributes to the literature by assessing how regulation policies and other events impact bank efficiency in Taiwan in regards to deregulation, financial crisis, and financial reform from 1993 to 2011. Methodology- In order to effectively deal with both undesirable and desirable outputs, this study follows Seiford and Zhu (2002), who recommend using the standard data envelopment analysis model to measure per...

  16. An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels

    Directory of Open Access Journals (Sweden)

    Jian Li

    2005-04-01

    Full Text Available We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA) system. We propose a modified large-sample maximum-likelihood (MLSML) estimator that can be used, in closed form, for code-timing estimation for DS-CDMA systems over resolvable multipath channels. Simulation results show that MLSML can be used to provide a high correct acquisition probability and a high estimation accuracy. Simulation results also show that MLSML can have very good near-far resistant capability due to employing a data model similar to that for adaptive array processing where strong interferences can be suppressed.

  17. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-01-01

    application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The third application

  18. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    Science.gov (United States)

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (Φ CO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of Φ CO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of Φ CO2LL. A model method accounting for this error was developed, and was used to estimate Φ CO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method under-estimated Φ CO2LL by ca. 10-15%. Differences in the estimated Φ CO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.

  19. Diagnostic Efficiency of MR Imaging of the Knee. Relationship to Time Interval between MR and Arthroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, M. C.; Recondo, J. A.; Aperribay, M.; Gervas, C.; Fernandez, E.; Alustiza, J. M.

    2003-07-01

    To evaluate the efficiency of magnetic resonance (MR) in the diagnosis of knee lesions and how the results are influenced by the time interval between MR and arthroscopy. 248 knees studied by MR were retrospectively analyzed, as well as those which also underwent arthroscopy. Arthroscopy was considered to be the gold standard, MR diagnostic capacity was evaluated for both meniscal and cruciate ligament lesions. Sensitivity, specificity and Kappa index were calculated for the set of all knees included in the study (248), for those in which the time between MR and arthroscopy was less than or equal to three months (134) and for those in which the time between both procedures was less than or equal to one month. Sensitivity, specificity and Kappa index of the MR had global values of 96.5%, 70% and 71%, respectively. When the interval between MR and arthroscopy was less than or equal to three months, sensitivity, specificity and Kappa index were 95.5%, 75% and 72%, respectively. When it was less than or equal to one month, sensitivity was 100%, specificity was 87.5% and Kappa index was 91%. MR is an excellent tool for the diagnosis of knee lesions. Higher MR values of sensitivity, specificity and Kappa index are obtained when the time interval between both procedures is kept to a minimum. (Author) 11 refs.

  20. Bandwidth efficient channel estimation method for airborne hyperspectral data transmission in sparse doubly selective communication channels

    Science.gov (United States)

    Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.

    2017-10-01

    A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) the priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimate of the complex amplitudes and Doppler shifts of the channel using the enhanced received pilot data applying a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data via the communication channel and assessing their fidelity for the automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results exhibit up to 8-dB figure of merit in the bit error rate and 50% improvement in the hyperspectral image classification accuracy.

  1. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    Science.gov (United States)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.

  2. Estimation of Resource Productivity and Efficiency: An Extended Evaluation of Sustainability Related to Material Flow

    Directory of Open Access Journals (Sweden)

    Pin-Chih Wang

    2014-09-01

    Full Text Available This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, Netherlands, United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.

  3. Econometric estimation of investment utilization, adjustment costs, and technical efficiency in Danish pig farms using hyperbolic distance functions

    DEFF Research Database (Denmark)

    Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund

    2014-01-01

    Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...... of investment activities by the maximum likelihood method so that we can estimate the adjustment costs that occur in the year of the investment and the three following years. Our results show that investments are associated with significant adjustment costs, especially in the year in which the investment...

  4. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.

  5. Estimating the cost of saving electricity through U.S. utility customer-funded energy efficiency programs

    International Nuclear Information System (INIS)

    Hoffman, Ian M.; Goldman, Charles A.; Rybka, Gregory; Leventis, Greg; Schwartz, Lisa; Sanstad, Alan H.; Schiller, Steven

    2017-01-01

    The program administrator and total cost of saved energy allow comparison of the cost of efficiency across utilities, states, and program types, and can identify potential performance improvements. Comparing program administrator cost with the total cost of saved energy can indicate the degree to which programs leverage investment by participants. Based on reported total costs and savings information for U.S. utility efficiency programs from 2009 to 2013, we estimate the savings-weighted average total cost of saved electricity across 20 states at $0.046 per kilowatt-hour (kW h), comparing favorably with energy supply costs and retail rates. Programs targeted on the residential market averaged $0.030 per kW h compared to $0.053 per kW h for non-residential programs. Lighting programs, with an average total cost of $0.018 per kW h, drove lower savings costs in the residential market. We provide estimates for the most common program types and find that program administrators and participants on average are splitting the costs of efficiency in half. More consistent, standardized and complete reporting on efficiency programs is needed. Differing definitions and quantification of costs, savings and savings lifetimes pose challenges for comparing program results. Reducing these uncertainties could increase confidence in efficiency as a resource among planners and policymakers. - Highlights: • The cost of saved energy allows comparisons among energy resource investments. • Findings from the most expansive collection yet of total energy efficiency program costs. • The weighted average total cost of saved electricity was $0.046 for 20 states in 2009–2013. • Averages in the residential and non-residential sectors were $0.030 and $0.053 per kW h, respectively. • Results strongly indicate need for more consistent, reliable and complete reporting on efficiency programs.
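
    A simplified version of the levelized metric compared in this study: annualize the program cost with a capital recovery factor and divide by annual savings. The inputs below are invented, and the published estimates rest on much more detailed cost, savings, and lifetime accounting.

```python
def cost_of_saved_energy(total_cost, annual_kwh_saved, lifetime_years, discount_rate):
    """Simplified levelized cost of saved electricity ($/kWh):
    annualize the upfront program cost with a capital recovery factor."""
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    return total_cost * crf / annual_kwh_saved

# Hypothetical program: $10M total cost, 40 GWh/yr savings, 12-year measure life, 6% discount rate
print(round(cost_of_saved_energy(10e6, 40e6, 12, 0.06), 3), "$/kWh")
```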

  6. Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    International Nuclear Information System (INIS)

    Klasson, Niklas; Olsson, Erik; Rudemo, Mats; Eckerström, Carl; Malmgren, Helge; Wallin, Anders

    2015-01-01

    Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to only segment a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T, T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), orientation of the intracranial areas and interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
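
    The interpolation step can be sketched with scipy: fit a cubic spline to (slice position, intracranial area) pairs measured on a few linearly spaced slices and integrate it over the segmented extent. Slice positions and areas below are invented for illustration and do not correspond to the study data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def icv_from_sparse_areas(positions_mm, areas_mm2):
    """Estimate intracranial volume (mm^3) by integrating a cubic spline
    fitted to sparse, linearly spaced cross-sectional areas."""
    spline = CubicSpline(positions_mm, areas_mm2)
    return spline.integrate(positions_mm[0], positions_mm[-1])

# Hypothetical coronal slice positions (mm) and intracranial areas (mm^2)
positions = np.arange(0, 181, 15)                       # 15 mm spacing
areas = 14000 * np.sin(np.pi * positions / 180.0) ** 0.8 + 500
print(f"estimated ICV ~ {icv_from_sparse_areas(positions, areas) / 1e3:.0f} cm^3")
```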

  7. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of 14C-samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters, were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)

  8. Review of Evaluation, Measurement and Verification Approaches Used to Estimate the Load Impacts and Effectiveness of Energy Efficiency Programs

    Energy Technology Data Exchange (ETDEWEB)

    Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.

    2010-04-14

    Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and 12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy

  9. Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation

    Directory of Open Access Journals (Sweden)

    M. Agatonović

    2012-12-01

    Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for the networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.

  10. Bias and efficiency loss in regression estimates due to duplicated observations: a Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Francesco Sarracino

    2017-04-01

    Full Text Available Recent studies documented that survey data contain duplicate records. We assess how duplicate records affect regression estimates, and we evaluate the effectiveness of solutions to deal with duplicate records. Results show that the chances of obtaining unbiased estimates when data contain 40 doublets (about 5% of the sample) range between 3.5% and 11.5% depending on the distribution of duplicates. If 7 quintuplets are present in the data (2% of the sample), then the probability of obtaining biased estimates ranges between 11% and 20%. Weighting the duplicate records by the inverse of their multiplicity, or dropping superfluous duplicates, outperform other solutions in all considered scenarios. Our results illustrate the risk of using data in the presence of duplicate records and call for further research on strategies to analyze affected data.
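
    One of the remedies evaluated, weighting each duplicated record by the inverse of its multiplicity, is easy to express as weighted least squares. The sketch below derives the weights from counts of identical rows; it is a generic illustration of the idea, not the simulation design of the study.

```python
import numpy as np
import pandas as pd

def wls(X, y, w):
    """Weighted least squares via the normal equations."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical survey data in which some records are exact duplicates
df = pd.DataFrame({"x": [1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0],
                   "y": [1.1, 2.1, 2.1, 2.9, 2.9, 2.9, 4.2]})
multiplicity = df.groupby(list(df.columns))["x"].transform("size")
weights = 1.0 / multiplicity                      # inverse-multiplicity weights

X = np.column_stack([np.ones(len(df)), df["x"]])
print("unweighted:", wls(X, df["y"].to_numpy(), np.ones(len(df))))
print("weighted  :", wls(X, df["y"].to_numpy(), weights.to_numpy()))
```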

  11. Efficient spectral estimation by MUSIC and ESPRIT with application to sparse FFT

    Directory of Open Access Journals (Sweden)

    Daniel ePotts

    2016-02-01

    Full Text Available In spectral estimation, one has to determine all parameters of an exponential sum for finitely many (noisy) sampled data of this exponential sum. Frequently used methods for spectral estimation are MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique). For a trigonometric polynomial of large sparsity, we present a new sparse fast Fourier transform by shifted sampling and using MUSIC resp. ESPRIT, where the ESPRIT based method has lower computational cost. Later this technique is extended to a new reconstruction of a multivariate trigonometric polynomial of large sparsity for given (noisy) values sampled on a reconstructing rank-1 lattice. Numerical experiments illustrate the high performance of these procedures.
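
    A minimal ESPRIT sketch conveys the subspace idea referred to here: stack the samples into a Hankel matrix, take the dominant left singular vectors as the signal subspace, and read the frequencies off the eigenvalues of the shift-invariance relation. This is the plain textbook estimator, not the shifted-sampling sparse FFT construction of the paper.

```python
import numpy as np

def esprit_frequencies(y, num_freqs, window=None):
    """Estimate frequencies (in cycles/sample) of a sum of complex exponentials."""
    n = len(y)
    L = window or n // 2                                    # Hankel window length
    H = np.array([y[i:i + L] for i in range(n - L + 1)]).T  # L x (n-L+1) Hankel matrix
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :num_freqs]                                   # signal subspace
    Phi = np.linalg.pinv(Us[:-1, :]) @ Us[1:, :]            # rotational invariance relation
    return np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi)

rng = np.random.default_rng(0)
t = np.arange(256)
y = (np.exp(2j * np.pi * 0.12 * t) + 0.5 * np.exp(2j * np.pi * 0.31 * t)
     + 0.05 * (rng.normal(size=256) + 1j * rng.normal(size=256)))
print(np.sort(esprit_frequencies(y, num_freqs=2)))         # approximately [0.12, 0.31]
```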

  12. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  13. Estimating the Efficiency of Therapy Groups in a College Counseling Center

    Science.gov (United States)

    Weatherford, Ryan D.

    2017-01-01

    College counseling centers are facing rapidly increasing demands for services and are tasked to find efficient ways of providing adequate services while managing limited space. The use of therapy groups has been proposed as a method of managing demand. This brief report examines the clinical time savings of a traditional group therapy program in a…

  14. Monitoring energy efficiency of condensing boilers via hybrid first-principle modelling and estimation

    NARCIS (Netherlands)

    Satyavada, Harish; Baldi, S.

    2018-01-01

    The operating principle of condensing boilers is based on exploiting heat from flue gases to pre-heat cold water at the inlet of the boiler: by condensing into liquid form, flue gases recover their latent heat of vaporization, leading to 10–12% increased efficiency with respect to traditional

  15. Efficient Estimation of Sensitivities for Counterparty Credit Risk with the Finite Difference Monte Carlo Method

    NARCIS (Netherlands)

    de Graaf, C.S.L.; Kandhai, D.; Sloot, P.M.A.

    According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method for the

  16. Efficient estimation of sensitivities for counterparty credit risk with the finite difference Monte Carlo method

    NARCIS (Netherlands)

    C.S.L. de Graaf (Kees); B.D. Kandhai; P.M.A. Sloot

    2017-01-01

    According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method

  17. Estimating the Efficiency of Michigan's Rural and Urban Public School Districts

    Science.gov (United States)

    Maranowski, Rita

    2012-01-01

    This study examined student achievement in Michigan public school districts to determine if rural school districts are demonstrating greater financial efficiency by producing higher levels of student achievement than school districts in other geographic locations with similar socioeconomics. Three models were developed using multiple regression…

  18. Estimation of efficiency of dust suppressing works at 30-km zone near the Chernobyl' NPP

    International Nuclear Information System (INIS)

    Bakin, R.I.; Tkachenko, A.V.; Sukhoruchkin, A.K.

    1989-01-01

    Data on the efficiency of dust suppression works in the 30-km zone near the NPP are analyzed. To reduce the radionuclide content in the air, it is necessary: in the spring, when the weather is dry, to conduct dust suppression works on roads and on sections of surface with unfixed ground; in the summer, to wash roads every day. 3 figs

  19. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh; Hong, Byungwoo

    2014-01-01

    that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper

  20. Estimation of the economical and ecological efficiency of the solar heat supply in Russia

    International Nuclear Information System (INIS)

    Marchenko, O.V.; Solomin, S.V.

    2001-01-01

    A numerical study was carried out of the efficiency of using solar heat supply systems in the climatic conditions of Russia, with regard to their economic competitiveness with conventional heat sources fired by organic fuel and their role in the reduction of greenhouse gas releases. The regions were defined where (under certain conditions) the application of solar energy to generate low-potential heat may be reasonable.

  1. A simple method for estimation of coagulation efficiency in mixed aerosols. [environmental pollution control

    Science.gov (United States)

    Dimmick, R. L.; Boyd, A.; Wolochow, H.

    1975-01-01

    Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.

  2. ESTIMATION OF EFFICIENCY OF MACHINERY FOR PRE-SOWING TREATMENT OF RADISH SEEDS FOR SEED PRODUCTION

    Directory of Open Access Journals (Sweden)

    S. M. Sirota

    2016-01-01

    Full Text Available The results of pre-sowing treatment of radish seeds aimed at increasing seed production, yield and productivity in a protected area are presented. Density fractionation of radish seeds by a gravity separator is recommended for improving planting material quality and increasing the utilization efficiency of the frame area.

  3. Estimating the Value of Price Risk Reduction in Energy Efficiency Investments in Buildings

    Directory of Open Access Journals (Sweden)

    Pekka Tuominen

    2017-10-01

    This paper presents a method for calculating the value of the price risk reduction that a consumer can achieve with investments in energy efficiency. The value of price risk reduction is discussed at some length in general terms in the literature reviewed but, so far, no methodology for calculating this value has been presented. Here we suggest such a method. The problem of valuating price risk reduction is approached using a variation of the Black–Scholes model, by considering a hypothetical financial instrument that a consumer would purchase to insure herself against unexpected price hikes. This hypothetical instrument is then compared with an actual energy efficiency investment that reaches the same level of price risk reduction. To demonstrate the usability of the method, case examples are calculated for typical single-family houses in Finland. The results show that the price risk entailed in household energy consumption can be reduced by a meaningful amount with energy efficiency investments, and that the monetary value of this reduction can be calculated. It is argued that this often-overlooked benefit of energy efficiency investments merits more consideration in future studies.
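    As a rough sketch of the idea (with made-up numbers, not those of the paper), the hypothetical insurance against price hikes can be valued like a European call that pays the excess of the energy price above a cap K:

        import math
        from scipy.stats import norm

        def black_scholes_call(S0, K, T, r, sigma):
            # Black-Scholes value of a European call, read here as the value of a
            # stylised 'price cap' paying max(price - K, 0) at maturity.
            d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
            d2 = d1 - sigma * math.sqrt(T)
            return S0 * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

        # illustrative inputs: current energy price 100 EUR/MWh, cap at 110, 1 year,
        # 2% risk-free rate, 25% annual volatility
        print(round(black_scholes_call(100.0, 110.0, 1.0, 0.02, 0.25), 2))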

  4. Comparison of different approaches of radiation use efficiency of biomass formation estimation in Mountain Norway spruce

    Czech Academy of Sciences Publication Activity Database

    Krupková, Lenka; Marková, I.; Havránková, Kateřina; Pokorný, Radek; Urban, Otmar; Šigut, Ladislav; Pavelka, Marian; Cienciala, E.; Marek, Michal V.

    2017-01-01

    Roč. 31, č. 1 (2017), s. 325-337 ISSN 0931-1890 R&D Projects: GA MŠk(CZ) LO1415; GA MŠk(CZ) LM2015061 Institutional support: RVO:67179843 Keywords : Solar radiation * Biomass increment * Carbon flux * light use efficiency Subject RIV: GK - Forestry OBOR OECD: Forestry Impact factor: 1.842, year: 2016

  5. Estimation of Transpiration and Water Use Efficiency Using Satellite and Field Observations

    Science.gov (United States)

    Choudhury, Bhaskar J.; Quick, B. E.

    2003-01-01

    Structure and function of terrestrial plant communities bring about intimate relations between water, energy, and carbon exchange between land surface and atmosphere. Total evaporation, which is the sum of transpiration, soil evaporation and evaporation of intercepted water, couples water and energy balance equations. The rate of transpiration, which is the major fraction of total evaporation over most of the terrestrial land surface, is linked to the rate of carbon accumulation because functioning of stomata is optimized by both of these processes. Thus, quantifying the spatial and temporal variations of the transpiration efficiency (which is defined as the ratio of the rate of carbon accumulation and transpiration), and water use efficiency (defined as the ratio of the rate of carbon accumulation and total evaporation), and evaluation of modeling results against observations, are of significant importance in developing a better understanding of land surface processes. An approach has been developed for quantifying spatial and temporal variations of transpiration, and water-use efficiency based on biophysical process-based models, satellite and field observations. Calculations have been done using concurrent meteorological data derived from satellite observations and four dimensional data assimilation for four consecutive years (1987-1990) over an agricultural area in the Northern Great Plains of the US, and compared with field observations within and outside the study area. The paper provides substantive new information about interannual variation, particularly the effect of drought, on the efficiency values at a regional scale.

  6. Drainage estimation to aquifer and water use irrigation efficiency in semi-arid zone for a long period of time

    Science.gov (United States)

    Jiménez-Martínez, J.; Molinero-Huguet, J.; Candela, L.

    2009-04-01

    Water requirements for different crop types according to soil type and climate conditions play not only an important role in agricultural efficiency production, though also for water resources management and control of pollutants in drainage water. The key issue to attain these objectives is the irrigation efficiency. Application of computer codes for irrigation simulation constitutes a fast and inexpensive approach to study optimal agricultural management practices. To simulate daily water balance in the soil, vadose zone and aquifer the VisualBALAN V. 2.0 code was applied to an experimental area under irrigation characterized by its aridity. The test was carried out in three experimental plots for annual row crops (lettuce and melon), perennial vegetables (artichoke), and fruit trees (citrus) under common agricultural practices in open air for October 1999-September 2008. Drip irrigation was applied to crops production due to the scarcity of water resources and the need for water conservation. Water level change was monitored in the top unconfined aquifer for each experimental plot. Results of water balance modelling show a good agreement between observed and estimated water level values. For the study period, mean drainage obtained values were 343 mm, 261 mm and 205 mm for lettuce and melon, artichoke and citrus respectively. Assessment of water use efficiency was based on the IE indicator proposed by the ASCE Task Committee. For the modelled period, water use efficiency was estimated as 73, 71 and 78 % of the applied dose (irrigation + precipitation) for lettuce and melon, artichoke and citrus, respectively.

  7. Estimation of hospital efficiency--do different definitions and casemix measures for hospital output affect the results?

    Science.gov (United States)

    Vitikainen, Kirsi; Street, Andrew; Linna, Miika

    2009-02-01

    Hospital efficiency has been the subject of numerous health economics studies, but there is little evidence on how the chosen output and casemix measures affect the efficiency results. The aim of this study is to examine the robustness of efficiency results due to these factors. Comparison is made between activities and episode output measures, and two different output grouping systems (Classic and FullDRG). Non-parametric data envelopment analysis is used as an analysis technique. The data consist of all public acute care hospitals in Finland in 2005 (n=40). Efficiency estimates were not found to be highly sensitive to the choice between episode and activity descriptions of output, but more so to the choice of DRG grouping system. Estimates are most sensitive to scale assumptions, with evidence of decreasing returns to scale in larger hospitals. Episode measures are generally to be preferred to activity measures because these better capture the patient pathway, while FullDRGs are preferred to Classic DRGs particularly because of the better description of outpatient output in the former grouping system. Attention should be paid to reducing the extent of scale inefficiency in Finland.
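    For readers unfamiliar with the technique, the sketch below solves a plain input-oriented, constant-returns-to-scale DEA envelopment model by linear programming; the hospital data are invented, and the abstract's variable-returns and casemix details are not reproduced.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_input_efficiency(X, Y):
            # Input-oriented CCR (constant returns to scale) DEA scores.
            # X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
            n, m = X.shape
            _, s = Y.shape
            scores = np.empty(n)
            for o in range(n):
                # decision variables: [theta, lambda_1 ... lambda_n]
                c = np.zeros(1 + n); c[0] = 1.0                      # minimise theta
                A_ub, b_ub = [], []
                for i in range(m):                                   # sum_j lam_j x_ij <= theta * x_io
                    A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
                    b_ub.append(0.0)
                for r in range(s):                                   # sum_j lam_j y_rj >= y_ro
                    A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                    b_ub.append(-Y[o, r])
                bounds = [(None, None)] + [(0, None)] * n
                res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                              bounds=bounds, method="highs")
                scores[o] = res.x[0]
            return scores

        # toy example: 5 hospitals, inputs = (beds, staff), output = episodes
        X = np.array([[100, 300], [120, 280], [90, 350], [200, 500], [150, 400]], dtype=float)
        Y = np.array([[5000], [5200], [4100], [7000], [6900]], dtype=float)
        print(np.round(ccr_input_efficiency(X, Y), 3))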

  8. Estimation on separation efficiency of aluminum from base-cap of spent fluorescent lamp in hammer crusher unit.

    Science.gov (United States)

    Rhee, Seung-Whee

    2017-09-01

    In order to separate aluminum from the base-cap of spent fluorescent lamp (SFL), the separation efficiency of hammer crusher unit is estimated by introducing a binary separation theory. The base-cap of SFL is composed by glass fragment, binder, ferrous metal, copper and aluminum. The hammer crusher unit to recover aluminum from the base-cap consists of 3stages of hammer crusher, magnetic separator and vibrating screen. The optimal conditions of rotating speed and operating time in the hammer crusher unit are decided at each stage. At the optimal conditions, the aluminum yield and the separation efficiency of hammer crusher unit are estimated by applying a sequential binary separation theory at each stage. And the separation efficiency between hammer crusher unit and roll crush system is compared to show the performance of aluminum recovery from the base-cap of SFL. Since the separation efficiency can be increased to 99% at stage 3, from the experimental results, it is found that aluminum from the base-cap can be sufficiently recovered by the hammer crusher unit. Copyright © 2017. Published by Elsevier Ltd.

  9. Technical note: Instantaneous sampling intervals validated from continuous video observation for behavioral recording of feedlot lambs.

    Science.gov (United States)

    Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L

    2017-11-01

    When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) r² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
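    The validation logic described above (regressing scan-sample estimates on continuous estimates and requiring r² ≥ 0.90 with slope near 1 and intercept near 0) can be sketched as follows; the behaviour record is simulated rather than taken from the study, and the sample sizes are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_animals, n_min = 18, 840                 # 18 lambs observed for 14 h at 1-min resolution
        p_lying = rng.uniform(0.3, 0.7, n_animals)
        behaviour = rng.random((n_animals, n_min)) < p_lying[:, None]   # 1 = lying in that minute

        def check_interval(k):
            true = behaviour.mean(axis=1)                    # continuous estimate per animal
            scan = behaviour[:, ::k].mean(axis=1)            # instantaneous scans every k minutes
            fit = stats.linregress(true, scan)
            df = n_animals - 2
            p_slope1 = 2 * stats.t.sf(abs((fit.slope - 1) / fit.stderr), df)
            p_int0 = 2 * stats.t.sf(abs(fit.intercept / fit.intercept_stderr), df)
            ok = fit.rvalue**2 >= 0.90 and p_slope1 > 0.05 and p_int0 > 0.05
            return round(fit.rvalue**2, 3), round(p_slope1, 3), round(p_int0, 3), ok

        for k in (5, 10, 15, 20):
            print(k, check_interval(k))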

  10. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  11. Economical efficiency estimation of the power system with an accelerator breeder

    International Nuclear Information System (INIS)

    Rublev, O.V.; Komin, A.V.

    1990-01-01

    The review deals with the economic indices of a nuclear power system with an accelerator breeder producing secondary nuclear fuel. The electric power cost was estimated by the discounted cost method. A power system with an accelerator breeder compares unfavourably with traditional nuclear power systems with respect to its capitalized cost

  12. Pricing stock options under stochastic volatility and interest rates with efficient method of moments estimation

    NARCIS (Netherlands)

    Jiang, George J.; Sluis, Pieter J. van der

    1999-01-01

    While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using

  13. An integrative modeling approach for the efficient estimation of cross sectional tibial stresses during locomotion.

    Science.gov (United States)

    Derrick, Timothy R; Edwards, W Brent; Fellin, Rebecca E; Seay, Joseph F

    2016-02-08

    The purpose of this research was to utilize a series of models to estimate the stress in a cross section of the tibia, located 62% from the proximal end, during walking. Twenty-eight male, active duty soldiers walked on an instrumented treadmill while external force data and kinematics were recorded. A rigid body model was used to estimate joint moments and reaction forces. A musculoskeletal model was used to gather muscle length, muscle velocity, moment arm and orientation information. Optimization procedures were used to estimate muscle forces, and finally internal bone forces and moments were applied to an inhomogeneous, subject-specific bone model obtained from CT scans to estimate stress in the bone cross section. Validity was assessed by comparison to stresses calculated from strain gage data in the literature, and sensitivity was investigated using two simplified versions of the bone model: a homogeneous model and an ellipse approximation. Peak compressive stress occurred on the posterior aspect of the cross section (-47.5 ± 14.9 MPa). Peak tensile stress occurred on the anterior aspect (27.0 ± 11.7 MPa), while the location of peak shear was variable between subjects (7.2 ± 2.4 MPa). Peak compressive, tensile and shear stresses were within 0.52 MPa, 0.36 MPa and 3.02 MPa, respectively, of those calculated from the converted strain gage data. Peak values from the inhomogeneous model of the bone correlated well with the homogeneous model (normal: 0.99; shear: 0.94), as did the normal stresses of the ellipse model (r=0.89-0.96). However, the relationship between shear stress in the inhomogeneous model and the ellipse model was less accurate (r=0.64). The procedures detailed in this paper provide a non-invasive and relatively quick method of estimating cross sectional stress that holds promise for assessing injury and osteogenic stimulus in bone during normal physical activity.
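    The final step of such a pipeline reduces, for a homogeneous section, to classical beam theory: the normal stress is the sum of an axial term and bending terms. A minimal sketch with invented section properties and loads is shown below; the paper's inhomogeneous, CT-based model is far more detailed.

        def normal_stress(N, Mx, My, A, Ix, Iy, x, y):
            # Axial + bending normal stress (Pa) at point (x, y) of a homogeneous
            # cross section; N in newtons, moments in N*m, A in m^2, I in m^4.
            return N / A + Mx * y / Ix - My * x / Iy

        # illustrative tibial section: A = 4e-4 m^2, Ix = Iy = 2e-8 m^4,
        # 2 kN compressive axial force and a 40 N*m bending moment about x
        sigma_post = normal_stress(-2000.0, 40.0, 0.0, 4e-4, 2e-8, 2e-8, 0.0, -0.012)
        print(round(sigma_post / 1e6, 1), "MPa")   # negative = compression on the posterior aspect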

  14. SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI

    Energy Technology Data Exchange (ETDEWEB)

    Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welf, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainty about the tagging efficiency in pCASL remains an issue. This study aimed to estimate the tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and to compare the resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. A DeltaM map was calculated by averaging the subtraction of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL to that from FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. The feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable DeltaM value. Setting the DeltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
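    The calibration itself is a simple ratio; a hedged sketch with invented gray-matter CBF values (ml/100 g/min) is shown below.

        import numpy as np

        cbf_pcasl = np.array([38.0, 41.5, 36.2, 44.0])   # pCASL GM CBF quantified with nominal efficiency
        cbf_fair  = np.array([45.1, 48.9, 43.0, 52.3])   # FAIR-QUIPSSII reference GM CBF

        alpha = (cbf_pcasl / cbf_fair).mean()            # estimated pCASL tagging efficiency
        cbf_calibrated = cbf_pcasl / alpha               # recalibrated pCASL CBF
        print(round(alpha, 3))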

  15. A Methodology for the Estimation of the Wind Generator Economic Efficiency

    Science.gov (United States)

    Zaleskis, G.

    2017-12-01

    Integration of renewable energy sources and the improvement of the technological base may not only reduce the consumption of fossil fuel and environmental load, but also ensure the power supply in regions with difficult fuel delivery or power failures. The main goal of the research is to develop the methodology of evaluation of the wind turbine economic efficiency. The research has demonstrated that the electricity produced from renewable sources may be much more expensive than the electricity purchased from the conventional grid.

  16. Efficient quality-factor estimation of a vertical cavity employing a high-contrast grating

    DEFF Research Database (Denmark)

    Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug

    2017-01-01

    Hybrid vertical cavity lasers employing high-contrast grating reflectors are attractive for Si-integrated light source applications. Here, a method for reducing a three-dimensional (3D) optical simulation of this laser structure to lower-dimensional simulations is suggested, which allows for very fast and approximate analysis of the quality factor of the 3D cavity. This approach enables us to efficiently optimize the laser cavity design without performing cumbersome 3D simulations.

  17. Estimating Photosynthetic Radiation Use Efficiency Using Incident Light and Photosynthesis of Individual Leaves

    OpenAIRE

    ROSATI, A.; DEJONG, T. M.

    2003-01-01

    It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, ‘daily’ photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthes...

  18. Estimation of the drift eliminator efficiency using numerical and experimental methods

    Directory of Open Access Journals (Sweden)

    Stodůlka Jiří

    2016-01-01

    The purpose of drift eliminators is to prevent water from escaping the cooling tower in significant amounts. They are designed to catch the droplets dragged by the tower draft, and the efficiency determined by the shape of the eliminator is the main evaluation criterion. The ability to eliminate escaping water droplets is studied using CFD and the experimental IPI method.

  19. Estimating the power efficiency of the thermal power plant modernization by using combined-cycle technologies

    International Nuclear Information System (INIS)

    Hovhannisyan, L.S.; Harutyunyan, N.R.

    2013-01-01

    The power efficiency of thermal power plant (TPP) modernization by using combined-cycle technologies is introduced. It is shown that the greatest decrease in specific fuel consumption can be achieved by modernizing TPPs through the introduction of advanced electric power generation technologies: for gas-fired TPPs, these are combined-cycle plants, gas-turbine superstructures of steam-power plants and gas turbines with heat utilization

  20. Thermal efficiency and particulate pollution estimation of four biomass fuels grown on wasteland

    Energy Technology Data Exchange (ETDEWEB)

    Kandpal, J.B.; Madan, M. [Indian Inst. of Tech., New Delhi (India). Centre for Rural Development and Technology

    1996-10-01

    The thermal performance and the concentration of suspended particulate matter were studied for 1-hour combustion of four biomass fuels, namely Acacia nilotica, Leucaena leucocephala, Jatropha curcas, and Morus alba, grown on wasteland. Among the four biomass fuels, the highest thermal efficiency was achieved with Acacia nilotica. The suspended particulate matter concentration for 1-hour combustion of the four biomass fuels ranged between 850 and 2,360 µg/m³.

  1. An Efficient Power Estimation Methodology for Complex RISC Processor-based Platforms

    OpenAIRE

    Rethinagiri, Santhosh Kumar; Ben Atitallah, Rabie; Dekeyser, Jean-Luc; Niar, Smail; Senn, Eric

    2012-01-01

    In this contribution, we propose an efficient power estimation methodology for complex RISC processor-based platforms. In this methodology, the Functional Level Power Analysis (FLPA) is used to set up generic power models for the different parts of the system. Then, a simulation framework based on a virtual platform is developed to accurately evaluate the activities used in the related power models. The combination of the two parts above leads to a heterogeneous...

  2. Estimating the impact of transport efficiency on trade costs: Evidence from Chinese agricultural traders

    OpenAIRE

    Li, Zhigang; Yu, Xiaohua; Zeng, Yinchu

    2011-01-01

    Using a unique survey data on agricultural traders in China in 2004, this study provides direct evidence on significance of interregional transport costs and their key determinants. Our major findings are as follows: (1) the trade barriers within China are dominated by transport-related costs but not artificial barriers, approximated by tolls and fines; (2) Labor and fuels costs are the most significant component of transport costs; (3) road quality is very important for transportation effici...

  3. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    Science.gov (United States)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and hence wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often carried out by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the Joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, and a model propagation step is performed later. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile data (down to 2000 m depth) and surface partial CO2 measurements from the Mike weather station (66° N, 2° E) to estimate
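    For orientation, the sketch below implements one analysis step of the standard joint (stochastic) EnKF on an augmented state-parameter ensemble, i.e. the baseline the abstract compares against, not the proposed one-step-ahead smoothing scheme; dimensions and data are toy values.

        import numpy as np

        def enkf_joint_update(Z, y, H, R, rng):
            # One stochastic EnKF analysis step on an augmented ensemble.
            # Z: (n_aug, n_ens) ensemble of [state; parameters]
            # y: (n_obs,) observations, H: (n_obs, n_aug) observation operator,
            # R: (n_obs, n_obs) observation error covariance.
            n_obs, n_ens = y.size, Z.shape[1]
            A = Z - Z.mean(axis=1, keepdims=True)                 # ensemble anomalies
            HZ = H @ Z
            HA = HZ - HZ.mean(axis=1, keepdims=True)
            Pyy = HA @ HA.T / (n_ens - 1) + R
            Pzy = A @ HA.T / (n_ens - 1)
            K = Pzy @ np.linalg.inv(Pyy)                          # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
            return Z + K @ (Y - HZ)

        # toy example: 3 state variables + 1 unknown parameter, 50 members, 2 observations
        rng = np.random.default_rng(0)
        Z = rng.normal(size=(4, 50))
        H = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float)
        R = 0.1 * np.eye(2)
        y = np.array([0.5, -0.2])
        Za = enkf_joint_update(Z, y, H, R, rng)
        print(Za.mean(axis=1))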

  4. Estimating cost efficiency of Turkish commercial banks under unobserved heterogeneity with stochastic frontier models

    Directory of Open Access Journals (Sweden)

    Hakan Gunes

    2016-12-01

    This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ a true fixed effects model, where the unobserved bank heterogeneity is integrated in the inefficiency distribution at the mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize the model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that the cost efficiencies of Turkish banks have improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also the impacts of the crises on those scores appear to vary with bank size and ownership structure, in accordance with much of the existing literature.

  5. How Long Is Long Enough? Estimation of Slip-Rate and Earthquake Recurrence Interval on a Simple Plate-Boundary Fault Using 3D Paleoseismic Trenching

    Science.gov (United States)

    Wechsler, N.; Rockwell, T. K.; Klinger, Y.; Agnon, A.; Marco, S.

    2012-12-01

    Models used to forecast future seismicity make fundamental assumptions about the behavior of faults and fault systems in the long term, but in many cases this long-term behavior is assumed using short-term and perhaps non-representative observations. The question arises: how long a record is long enough to represent actual fault behavior, both in terms of recurrence of earthquakes and of moment release (aka slip rate)? We test earthquake recurrence and slip models via high-resolution three-dimensional trenching of the Beteiha (Bet-Zayda) site on the Dead Sea Transform (DST) in northern Israel. We extend the earthquake history of this simple plate boundary fault to establish the slip rate for the past 3-4 kyr, to determine the amount of slip per event and to study the fundamental behavior, thereby testing competing rupture models (characteristic, slip-patch, slip-loading, and Gutenberg-Richter type distribution). To this end we opened more than 900 m of trenches, mapped 8 buried channels and dated more than 80 radiocarbon samples. By mapping buried channels offset by the DST on both sides of the fault, we obtained an estimate of displacement for each. Coupled with fault-crossing trenches to determine the event history, we construct an earthquake and slip history for the fault for the past 2 kyr. We observe evidence for a total of 9-10 surface-rupturing earthquakes with varying offset amounts. 6-7 events occurred in the 1st millennium, compared to just 2-3 in the 2nd millennium CE. From our observations it is clear that the fault is not behaving in a periodic fashion. A 4 kyr old buried channel yields a slip rate of 3.5-4 mm/yr, consistent with GPS rates for this segment. Yet in spite of the apparent agreement between the GPS rate, the Pleistocene-to-present slip rate, and the lifetime rate of the DST, the past 800-1000 year period appears deficient in strain release. Thus, in terms of moment release, most of the fault has remained locked and is accumulating elastic strain. In contrast, the
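    In its simplest form, the slip-rate and recurrence arithmetic behind such a study looks like the sketch below; the offsets, ages and event count are invented, not the Beteiha data.

        import numpy as np

        offset_m = np.array([3.0, 7.5, 14.0])          # offsets of buried channels (m), illustrative
        age_yr   = np.array([800.0, 2000.0, 4000.0])   # channel ages (years BP), illustrative

        slip_rate_mm_yr = np.polyfit(age_yr, offset_m, 1)[0] * 1e3   # least-squares slope, mm/yr
        n_events, record_yr = 10, 2000.0
        mean_recurrence_yr = record_yr / n_events                    # average interval between ruptures
        print(round(slip_rate_mm_yr, 2), mean_recurrence_yr)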

  6. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...

  7. Estimation of the energy efficiency of cryogenic filled tank use in different systems and devices

    International Nuclear Information System (INIS)

    Blagin, E.V.; Dovgyallo, A.I.; Nekrasova, S.O.; Sarmin, D.V.; Uglanov, D.A.

    2016-01-01

    Highlights: • The cryogenic fueling tank is a device for storage and gasification of the working fluid. • The potential energy of pressure can be converted to electricity by a circuit of turbines. • It is possible to compensate up to 8% of the energy consumed for liquefaction. - Abstract: This article presents a device for the storage and gasification of a cryogenic working fluid, called a cryogenic fueling tank. The working fluid pressure increases during gasification, and the potential energy of this pressure can be used in different ways. The ways of integrating the cryogenic fueling tank into existing energy plants are described in this article. The application of the cryogenic fueling tank in a gasification facility as well as in an onboard power system was estimated. This estimation shows that the application of such a tank together with a circuit of turbines allows generating nearly 8% of the energy consumed during gas liquefaction. The value of the additionally generated electric energy was also estimated for each of the cases.

  8. An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter

    Directory of Open Access Journals (Sweden)

    Róbert Sedmák

    2015-07-01

    The main objective of this study was to explore the accuracy of Weise's rule of thumb applied to the estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We proposed several modifications of the original Weise's rule based on the measurement and averaging of two different ranks centered on a target rank. In accordance with the original formulation of the empirical rule, we recommend measuring the 6th stem in rank, corresponding to the 55% sample percentile of the diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that appropriate two-measurement modifications of Weise's method, the 4th and 8th ranks or the 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by improved accuracy (about 25%) without statistically significant bias and with measurement costs comparable to the classic Weise method.
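    A small simulation sketch of the classic rule and its two-measurement modification is given below; the diameter distribution is an arbitrary gamma model and random subsamples stand in for the 10 stems nearest a sampling point, so the numbers are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        d_stand = rng.gamma(shape=9.0, scale=3.5, size=5000)     # simulated beech diameters (cm)
        qmd_true = np.sqrt(np.mean(d_stand**2))                  # stand quadratic mean diameter

        def weise_estimate(sample10, ranks=(6,)):
            # Weise-type estimate from the 10 stems nearest a sample point:
            # average the diameters at the given rank(s), rank 1 = thinnest.
            d = np.sort(sample10)
            return np.mean([d[r - 1] for r in ranks])

        est_classic, est_mod = [], []
        for _ in range(200):                                     # 200 simulated sample points
            s = rng.choice(d_stand, size=10, replace=False)
            est_classic.append(weise_estimate(s, ranks=(6,)))
            est_mod.append(weise_estimate(s, ranks=(4, 8)))      # two-measurement modification
        print(round(qmd_true, 2), round(np.mean(est_classic), 2), round(np.mean(est_mod), 2))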

  9. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  10. Efficiency estimation of using phased program of caries prevention in children domiciled in Transcarpathian region

    Directory of Open Access Journals (Sweden)

    Klitynska Oksana V.

    2016-01-01

    Background: Caries is a pathological process that occurs in the hard tissues of the teeth after eruption and reduces quality of life due to significant complications, especially in children. The extremely high incidence of dental caries among children living permanently in the Transcarpathian region requires a comprehensive prevention program. The aim of this study was to determine the efficiency of a complex caries prevention program among children permanently living in an area of biogeochemical fluorine deficiency. Aim of the study: To evaluate the efficiency of a phased program of caries prevention among children of different age groups domiciled in the Transcarpathian region. Material and Methods: Based on the examination of 346 children aged 3-8 years, of whom 163 (46.9%) were boys and 183 (53.1%) girls, a phased program of complex prophylaxis was created, covering the basic dental diseases in children living permanently under deficiency conditions. The program included hygienic education of preschool children and their parents; exogenous medicament prevention; early identification and treatment of caries using conventional methods according to treatment protocols; and endogenous non-medicament prevention with nutrition correction, which proved its effectiveness. Results: The caries prevention efficiency of the proposed scheme is 69.5% for the 5-7 age group (3-5 years) and 66.9% for the 8-10 age group (6-8 years). Conclusion: The main strategy of pediatric dental services in Ukraine should be built around national and regional programs for the primary prevention of the main dental diseases in the child population (up to 18 years of age), with adequate financing sufficient to preserve the nation's dental health for the next 20 years.

  11. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Optimum usage of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. The development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of indicators and the method of their calculation in relation to the transport subsystem of airport infrastructure. The work also evaluates the possibilities of algorithmic computational mechanisms for improving the tools of public administration of transport subsystems.

  12. Feasibility study and energy efficiency estimation of geothermal power station based on medium enthalpy water

    Directory of Open Access Journals (Sweden)

    Borsukiewicz-Gozdur Aleksandra

    2007-01-01

    The work presents the results of investigations regarding the operating effectiveness of a power plant fed by geothermal water with flow rates of 100, 150, and 200 m³/h and temperatures of 70, 80, and 90 °C, i.e. geothermal water with the parameters available in some towns of the West Pomeranian region, including Stargard Szczecinski (86.4 °C), Poland. The calculations concern a geothermal power plant system with the possibility of utilizing heat for technological purposes. Possibilities of applying different working fluids are analyzed with respect to the most efficient utilization of geothermal energy.

  13. Estimating front-wave velocity of infectious diseases: a simple, efficient method applied to bluetongue.

    Science.gov (United States)

    Pioz, Maryline; Guis, Hélène; Calavas, Didier; Durand, Benoît; Abrial, David; Ducrot, Christian

    2011-04-20

    Understanding the spatial dynamics of an infectious disease is critical when attempting to predict where and how fast the disease will spread. We illustrate an approach using a trend-surface analysis (TSA) model combined with a spatial error simultaneous autoregressive model (SAR(err) model) to estimate the speed of diffusion of bluetongue (BT), an infectious disease of ruminants caused by bluetongue virus (BTV) and transmitted by Culicoides. In a first step to gain further insight into the spatial transmission characteristics of BTV serotype 8, we used 2007-2008 clinical case reports in France and TSA modelling to identify the major directions and speed of disease diffusion. We accounted for spatial autocorrelation by combining TSA with a SAR(err) model, which led to a trend SAR(err) model. Overall, BT spread from north-eastern to south-western France. The average trend SAR(err)-estimated velocity across the country was 5.6 km/day. However, velocities differed between areas and time periods, varying between 2.1 and 9.3 km/day. For more than 83% of the contaminated municipalities, the trend SAR(err)-estimated velocity was less than 7 km/day. Our study was a first step in describing the diffusion process for BT in France. To our knowledge, it is the first to show that BT spread in France was primarily local and consistent with the active flight of Culicoides and local movements of farm animals. Models such as the trend SAR(err) models are powerful tools to provide information on direction and speed of disease diffusion when the only data available are date and location of cases.
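    The core of a trend-surface analysis is a polynomial fit of the date of first local occurrence to spatial coordinates, from which the front-wave speed is the reciprocal of the gradient magnitude. A first-order sketch with invented municipality data is shown below; the study adds a SAR error structure on top of this.

        import numpy as np

        # hypothetical municipalities: easting/northing (km) and date of first case (days)
        x = np.array([0, 50, 100, 150, 200, 250], dtype=float)
        y = np.array([0, 10, 30, 20, 40, 60], dtype=float)
        t = np.array([5, 14, 23, 31, 41, 50], dtype=float)

        # first-order trend surface t = b0 + b1*x + b2*y fitted by least squares
        A = np.column_stack([np.ones_like(x), x, y])
        b0, b1, b2 = np.linalg.lstsq(A, t, rcond=None)[0]

        speed = 1.0 / np.hypot(b1, b2)   # km/day; spread direction follows the gradient of t
        print(round(speed, 2))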

  14. Efficient three-dimensional reconstruction of aquatic vegetation geometry: Estimating morphological parameters influencing hydrodynamic drag

    Science.gov (United States)

    Liénard, Jean; Lynn, Kendra; Strigul, Nikolay; Norris, Benjamin K.; Gatziolis, Demetrios; Mullarney, Julia C.; Bryan, Karin R.; Henderson, Stephen M.

    2016-09-01

    Aquatic vegetation can shelter coastlines from energetic waves and tidal currents, sometimes enabling accretion of fine sediments. Simulation of flow and sediment transport within submerged canopies requires quantification of vegetation geometry. However, field surveys used to determine vegetation geometry can be limited by the time required to obtain conventional caliper and ruler measurements. Building on recent progress in photogrammetry and computer vision, we present a method for reconstructing three-dimensional canopy geometry. The method was used to survey a dense canopy of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Photogrammetric estimation of geometry required 1) taking numerous photographs at low tide from multiple viewpoints around 1 m² quadrats, 2) computing relative camera locations and orientations by triangulation of key features present in multiple images and reconstructing a dense 3D point cloud, and 3) extracting pneumatophore locations and diameters from the point cloud data. Step 3) was accomplished by a new 'sector-slice' algorithm, yielding geometric parameters every 5 mm along a vertical profile. Photogrammetric analysis was compared with manual caliper measurements. In all 5 quadrats considered, agreement was found between manual and photogrammetric estimates of stem number, and of number × mean diameter, which is a key parameter appearing in hydrodynamic models. In two quadrats, pneumatophores were encrusted with numerous barnacles, generating a complex geometry not resolved by hand measurements. In the remaining cases, moderate agreement between manual and photogrammetric estimates of stem diameter and solid volume fraction was found. By substantially reducing measurement time in the field while capturing the 3D structure in greater detail, photogrammetry has potential to improve input to hydrodynamic models, particularly for simulations of flow through large-scale, heterogeneous canopies.

  15. Estimating the changes in the distribution of energy efficiency in the U.S. automobile assembly industry

    International Nuclear Information System (INIS)

    Boyd, Gale A.

    2014-01-01

    This paper describes the EPA's voluntary ENERGY STAR program and the results of the automobile manufacturing industry's efforts to advance energy management as measured by the updated ENERGY STAR Energy Performance Indicator (EPI). A stochastic single-factor input frontier estimation using the gamma error distribution is applied to separately estimate the distribution of the electricity and fossil fuel efficiency of assembly plants using data from 2003 to 2005 and then compared to model results from a prior analysis conducted for the 1997–2000 time period. This comparison provides an assessment of how the industry has changed over time. The frontier analysis shows a modest improvement (reduction) in “best practice” for electricity use and a larger one for fossil fuels. This is accompanied by a large reduction in the variance of fossil fuel efficiency distribution. The results provide evidence of a shift in the frontier, in addition to some “catching up” of poor performing plants over time. - Highlights: • A non-public dataset of U.S. auto manufacturing plants is compiled. • A stochastic frontier with a gamma distribution is applied to plant level data. • Electricity and fuel use are modeled separately. • Comparison to prior analysis reveals a shift in the frontier and “catching up”. • Results are used by ENERGY STAR to award energy efficiency plant certifications

  16. Estimating the efficiency from Brazilian banks: a bootstrapped Data Envelopment Analysis (DEA

    Directory of Open Access Journals (Sweden)

    Ana Elisa Périco

    2016-01-01

    Abstract The Brazilian banking sector went through several changes in its structure over the past few years. Such changes are related to mergers and acquisitions, as well as a greater opening of the market to foreign banks. The objective of this paper is to analyze, by applying bootstrapped DEA, the efficiency of banks in Brazil in 2010-2013. The methodology was applied to the 30 largest banking organizations under a financial intermediation approach. In that model, the resources entering a bank in the form of deposits and total assets are classified as inputs, and in addition manual labor is considered a resource capable of generating results. For the output variable, credit operations represent the most appropriate alternative, considering the role of the bank as a financial intermediary. In this work, the question of the best ranking among retail banks and banks specialized in credit has little relevance, since the segments were analyzed separately. The results presented here point to an average level of efficiency for the large Brazilian banks in the period, a scenario that requires efforts to reduce expenses but also to increase revenues.
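    As a reminder of the resampling idea only: the sketch below is a naive percentile bootstrap on an invented, illustrative efficiency statistic, not the smoothed DEA bootstrap (Simar-Wilson type) commonly used in such studies.

        import numpy as np

        def percentile_bootstrap_ci(scores_fn, data, n_boot=2000, alpha=0.05, seed=0):
            # Generic percentile bootstrap for a statistic computed on bank-level data.
            rng = np.random.default_rng(seed)
            n = len(data)
            stats = np.array([scores_fn(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
            return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

        # illustrative statistic: mean credit operations per unit of deposits, columns = [credit, deposits]
        banks = np.array([[120, 200], [90, 150], [60, 130], [45, 60], [30, 80]], dtype=float)
        mean_ratio = lambda d: np.mean(d[:, 0] / d[:, 1])
        print(mean_ratio(banks), percentile_bootstrap_ci(mean_ratio, banks))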

  17. SASKTRAN: A spherical geometry radiative transfer code for efficient estimation of limb scattered sunlight

    International Nuclear Information System (INIS)

    Bourassa, A.E.; Degenstein, D.A.; Llewellyn, E.J.

    2008-01-01

    The inversion of satellite-based observations of limb scattered sunlight for the retrieval of constituent species requires an efficient and accurate modelling of the measurement. We present the development of the SASKTRAN radiative transfer model for the prediction of limb scatter measurements at optical wavelengths by method of successive orders along rays traced in a spherical atmosphere. The component of the signal due to the first two scattering events of the solar beam is accounted for directly along rays traced in the three-dimensional geometry. Simplifying assumptions in successive scattering orders provide computational optimizations without severely compromising the accuracy of the solution. SASKTRAN is designed for the analysis of measurements from the OSIRIS instrument and the implementation of the algorithm is efficient such that the code is suitable for the inversion of OSIRIS profiles on desktop computers. SASKTRAN total limb radiance profiles generally compare better with Monte-Carlo reference models over a large range of solar conditions than the approximate spherical and plane-parallel models typically used for inversions

  18. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    The estimation of normal vectors for a large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for an LSSPC struggles to keep up with the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for normal vector estimation on an LSSPC. We divide the point set into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a bi-linear interpolation of the normal vectors of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors of, and calculate normal vectors for, the interpolation nodes, which are usually far fewer than the points in the cloud. The experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.

  19. Unsupervised Learning for Efficient Texture Estimation From Limited Discrete Orientation Data

    Science.gov (United States)

    Niezgoda, Stephen R.; Glover, Jared

    2013-11-01

    The estimation of orientation distribution functions (ODFs) from discrete orientation data, as produced by electron backscatter diffraction or crystal plasticity micromechanical simulations, is typically achieved via techniques such as the Williams-Imhof-Matthies-Vinel (WIMV) algorithm or generalized spherical harmonic expansions, which were originally developed for computing an ODF from pole figures measured by X-ray or neutron diffraction. These techniques rely on ad-hoc methods for choosing parameters, such as smoothing half-width and bandwidth, and for enforcing positivity constraints and appropriate normalization. In general, such approaches provide little or no information-theoretic guarantees as to their optimality in describing the given dataset. In the current study, an unsupervised learning algorithm is proposed which uses a finite mixture of Bingham distributions for the estimation of ODFs from discrete orientation data. The Bingham distribution is an antipodally-symmetric, max-entropy distribution on the unit quaternion hypersphere. The proposed algorithm also introduces a minimum message length criterion, a common tool in information theory for balancing data likelihood with model complexity, to determine the number of components in the Bingham mixture. This criterion leads to ODFs which are less likely to overfit (or underfit) the data, eliminating the need for a priori parameter choices.

  20. [Macroscopical estimation of the post mortem interval (PMI) and exclusion of the forensically relevant resting period--a comparison of data presented in the literature with recent osteological findings].

    Science.gov (United States)

    Holley, Stephanie; Fiedler, Sabine; Graw, Matthias

    2008-01-01

    The aim of the present study was to determine to what extent macroscopical parameters mentioned in the literature are suitable for the estimation of the post mortem interval (PMI) and particularly for the exclusion of the forensically relevant resting period for recent bone material. The macroscopical examination of recent bone material with a known PMI showed that only one published parameter (relics of adipocere in the cross section of the compacta) was consistent with our findings for this particular resting period (27-28 years). Other macroscopical parameters presented in the literature were contradictory to the results observed in this study. Among those are the rigidity of bones, the adhesion of soft tissue, the filling of the marrow cavity, and the permeation of the epiphyses with adipocere. Concerning the exclusion of the forensically relevant resting period, a similar result was observed. This study identified some diagnostic findings in bones with a resting period of less than 50 years which according to the literature should only be present after a resting period of more than 50 years. These features included the lack of macroscopical traces of adipocere, degradation of the compacta surface, detachment of the cortical substance, the ability of bone to be broken with bare hands, and superficial usures. Moreover, in one-third of our cases we identified some intra-individual differences not previously described in the literature. In addition to the other results, those intra-individual differences make an estimation of the PMI more difficult. However it should be noted that those published parameters were collected from bone material which was stored in a "relatively arid sand-grit-clay soil of the broken stone layer of Munich". The bones in the present study were stored in acidic and clayey-loamy soil, partly with lateral water flow. In conclusion, the present study demonstrates that one should use caution estimating the post mortem interval and excluding

  1. Efficiency estimation method of three-wired AC to DC line transfer

    Science.gov (United States)

    Solovev, S. V.; Bardanov, A. I.

    2018-05-01

    The development of power semiconductor converter technology expands the scope of its application to medium voltage distribution networks (6-35 kV). In particular, rectifiers and inverters of appropriate power capacity complement the topology of networks at this voltage level with DC links and lines. The article presents a coefficient that allows taking into account the increase in transmission line capacity depending on its parameters. The application of the coefficient is illustrated by the example of transferring a three-wire AC line to DC by various methods. Dependences of the change in capacity on the load power factor of the line and on the reactive component of the transmission line resistance are obtained. Conclusions are drawn about the most efficient ways of converting a three-wire AC line to direct current.

  2. Evaluation of OiW Measurement Technologies for Deoiling Hydrocyclone Efficiency Estimation and Control

    DEFF Research Database (Denmark)

    Løhndorf, Petar Durdevic; Pedersen, Simon; Yang, Zhenyu

    2016-01-01

    The offshore oil and gas industry has been active in the North Sea for more than half a century, contributing to the economy and facilitating a low oil import rate in the producing countries. The peak production was reached in the early 2000s, and since then the oil production has been decreasing while… to reach the desired oil production capacity, consequently the discharged amount of oil increases. This leads to oceanic pollution, which has been linked to various negative effects in marine life. The current legislation requires a maximum oil discharge of 30 parts per million (PPM). The oil in water… a novel control technology which is based on online and dynamic OiW measurements. This article evaluates some currently available online measuring technologies for OiW, and the possibility of using these techniques for hydrocyclone efficiency evaluation, model development and as a feedback...

  3. Estimating and understanding the efficiency of nanoparticles in enhancing the conductivity of carbon nanotube/polymer composites

    KAUST Repository

    Mora Cordova, Angel

    2018-05-22

    Carbon nanotubes (CNTs) have been widely used to improve the electrical conductivity of polymers. However, not all CNTs actively participate in the conduction of electricity since they have to be close to each other to form a conductive network. The amount of active CNTs is rarely discussed as it is not captured by percolation theory. However, this amount is very important information that could be used in a definition of loading efficiency for CNTs (and, in general, for any nanofiller). Thus, we develop a computational tool to quantify the amount of CNTs that actively participates in the conductive network. We then use this quantity to propose a definition of loading efficiency. We compare our results with an expression presented in the literature for the fraction of percolated CNTs (although not presented as a definition of efficiency). We found that this expression underestimates the fraction of percolated CNTs. We thus propose an improved estimation. We also study how efficiency changes with CNT loading and the CNT aspect ratio. We use this concept to study the size of the representative volume element (RVE) for polymers loaded with CNTs, which has received little attention in the past. Here, we find the size of the RVE based on both loading efficiency and electrical conductivity such that the scales of “morphological” and “functional” RVEs can be compared. Additionally, we study the relations between particle and network properties (such as efficiency, CNT conductivity and junction resistance) and the conductivity of CNT/polymer composites. We present a series of recommendations to improve the conductivity of a composite based on our simulation results.
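    The notion of loading efficiency can be illustrated with a much-simplified percolation sketch: CNTs are reduced to centre points that connect within a cut-off distance, clusters are found with a connected-components pass, and the fraction of fillers belonging to a cluster bridging the two electrodes is reported. All geometry and parameters here are invented and far cruder than the authors' tool.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)
        n_cnt, box, r_conn = 400, 1.0, 0.07                 # fillers, box size, connection cut-off
        pts = rng.random((n_cnt, 2)) * box                  # CNTs reduced to centre points (soft-core)

        pairs = cKDTree(pts).query_pairs(r_conn, output_type="ndarray")
        adj = np.zeros((n_cnt, n_cnt), dtype=bool)
        adj[pairs[:, 0], pairs[:, 1]] = adj[pairs[:, 1], pairs[:, 0]] = True
        n_comp, labels = connected_components(csr_matrix(adj), directed=False)

        # a cluster conducts only if it touches both the left and right electrode strips
        strip = r_conn
        bridging = [c for c in range(n_comp)
                    if (pts[labels == c, 0] < strip).any() and (pts[labels == c, 0] > box - strip).any()]
        active = np.isin(labels, bridging)
        print("loading efficiency (fraction of fillers in a bridging cluster):", round(active.mean(), 3))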

  5. An efficient method for estimating bioavailability of arsenic in soils: a comparison with acid leachates

    Energy Technology Data Exchange (ETDEWEB)

    Ng, J.C.; Hertle, A.; Seawright, A.A. [Queensland Univ., Brisbane (Australia). National Research Centre for Environmental Toxicology; Mcdougall, K.W. [Wollongbar Agricultural Institute (Australia)

    1997-12-31

    With a view to estimating the bioavailability of metals from contaminated sites for risk assessment, a rat model is used for a comparative bioavailability test in which groups of rats were given, via the oral route, a slurry of arsenic-contaminated soil, a solution of sodium arsenate or sodium arsenite, or calcium arsenite-spiked wheat flour. Blood samples are collected 96 hours after dosing for arsenic determination. The comparative bioavailability (CBA) is calculated from the ratio of the arsenic results obtained from the soil group and the control group dosed with sodium arsenate or arsenite. CBA results show a good correlation with 0.5 M HCl and 1.0 M HCl acid leachates. The rat model proves to be a sensitive indicator using blood for the study of the bioavailability of arsenic in soils

  6. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference' which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which makes it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert provided information, (2) it allows to distinguishably model both uncertainty and imprecision, and (3) it presents a framework for fusing expert provided information regarding the various inputs of the Bayesian inference algorithm. However an important obstacle in employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely exhaustive and often computationally infeasible. In this paper, a novel approach of accelerating the fuzzy Bayesian inference algorithm is proposed which is based on using approximate posterior distributions derived from surrogate modeling, as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert

  7. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...

  8. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given...... reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...

  9. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyze a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a commonly used structure in CRTs-exchangeable, and derive the simpler formula of RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
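
    A minimal numerical sketch (Python/NumPy) of the kind of relative-efficiency calculation described above, under the commonly used simplification that, with an exchangeable working correlation, a cluster of size n contributes information proportional to n/(1 + (n - 1)ρ) to the treatment-effect estimator; the cluster sizes and ρ values below are illustrative and not taken from the paper.

        import numpy as np

        def cluster_information(n, rho):
            # Effective information contributed by one cluster of size n under an
            # exchangeable (compound-symmetry) working correlation.
            return n / (1.0 + (n - 1.0) * rho)

        def relative_efficiency(cluster_sizes, rho):
            # RE = Var(effect | equal cluster sizes) / Var(effect | unequal sizes),
            # with the total number of observations held fixed. Variance is
            # inversely proportional to information, so RE equals the ratio
            # information(unequal) / information(equal) and is at most 1.
            sizes = np.asarray(cluster_sizes, dtype=float)
            info_unequal = cluster_information(sizes, rho).sum()
            info_equal = sizes.size * cluster_information(sizes.mean(), rho)
            return info_unequal / info_equal

        sizes = [10, 20, 30, 60, 80]          # hypothetical unequal cluster sizes
        for rho in (0.01, 0.05, 0.2):
            print(rho, round(relative_efficiency(sizes, rho), 4))

    An efficiency loss computed in this way is what motivates the adjusted sample size proposed in the abstract.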

  10. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.

  11. Efficient synthesis of tension modulation in strings and membranes based on energy estimation.

    Science.gov (United States)

    Avanzini, Federico; Marogna, Riccardo; Bank, Balázs

    2012-01-01

    String and membrane vibrations cannot be considered as linear above a certain amplitude due to the variation in string or membrane tension. A relevant special case is when the tension is spatially constant and varies in time only in dependence of the overall string length or membrane surface. The most apparent perceptual effect of this tension modulation phenomenon is the exponential decay of pitch in time. Pitch glides due to tension modulation are an important timbral characteristic of several musical instruments, including the electric guitar and tom-tom drum, and many ethnic instruments. This paper presents a unified formulation to the tension modulation problem for one-dimensional (1-D) (string) and two-dimensional (2-D) (membrane) cases. In addition, it shows that the short-time average of the tension variation, which is responsible for pitch glides, is approximately proportional to the system energy. This proportionality allows the efficient physics-based sound synthesis of pitch glides. The proposed models require only slightly more computational resources than linear models as opposed to earlier tension-modulated models of higher complexity. © 2012 Acoustical Society of America.

  12. A rapid, sensitive, and cost-efficient assay to estimate viability of potato cyst nematodes.

    Science.gov (United States)

    van den Elsen, Sven; Ave, Maaike; Schoenmakers, Niels; Landeweert, Renske; Bakker, Jaap; Helder, Johannes

    2012-02-01

    Potato cyst nematodes (PCNs) are quarantine organisms, and they belong to the economically most relevant pathogens of potato worldwide. Methodologies to assess the viability of their cysts, which can contain 200 to 500 eggs protected by the hardened cuticle of a dead female, are either time and labor intensive or lack robustness. We present a robust and cost-efficient viability assay based on loss of membrane integrity upon death. This assay uses trehalose, a disaccharide present at a high concentration in the perivitelline fluid of PCN eggs, as a viability marker. Although this assay can detect a single viable egg, the limit of detection for regular field samples was higher, ≈10 viable eggs, due to background signals produced by other soil components. On the basis of 30 nonviable PCN samples from The Netherlands, a threshold level was defined (ΔA(trehalose) = 0.0094) below which the presence of >10 viable eggs is highly unlikely (true for ≈99.7% of the observations). This assay can easily be combined with a subsequent DNA-based species determination. The presence of trehalose is a general phenomenon among cyst nematodes; therefore, this method can probably be used for (for example) soybean, sugar beet, and cereal cyst nematodes as well.

  13. Estimation of efficiency of hydrotransport pipelines polyurethane coating application in comparison with steel pipelines

    Science.gov (United States)

    Aleksandrov, V. I.; Vasilyeva, M. A.; Pomeranets, I. B.

    2017-10-01

    The paper presents analytical calculations of specific pressure loss in hydraulic transport of the Kachkanarsky GOK iron ore processing tailing slurry. The calculations are based on the results of the experimental studies on the dependence of specific pressure loss upon the hydraulic roughness of the pipeline internal surface lined with polyurethane coating. The experiments proved that the hydraulic roughness of the polyurethane coating is smaller by a factor of four than that of steel pipelines, resulting in a decrease in the hydraulic resistance coefficients entered into the calculating formula of specific pressure loss, the Darcy-Weisbach formula. Relative and equivalent roughness coefficients are calculated for pipelines with polyurethane coating and without it. Comparative calculations show that the application of polyurethane coating to hydrotransport pipelines is conducive to a decrease in specific energy consumption in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry by a factor of 1.5. The experiments were performed on a laboratory hydraulic test rig with a view to estimating the character and rate of physical roughness change in pipe samples with polyurethane coating. The experiments showed that during the following 484 hours of operation, roughness changed in all pipe samples inappreciably. As a result of processing of the experimental data by mathematical statistics methods, an empirical formula was obtained for the calculation of the operating roughness of the polyurethane coating surface, depending on the duration of pipeline operation with iron ore processing tailings slurry.
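
    For orientation, the specific pressure loss discussed above follows the Darcy-Weisbach relation; the Python sketch below is a generic illustration with invented pipe and slurry values and the Swamee-Jain explicit approximation for the friction factor, not the authors' calculation.

        import math

        def friction_factor(reynolds, rel_roughness):
            # Swamee-Jain explicit approximation to the Colebrook equation
            # (turbulent flow, Re above roughly 4000).
            return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / reynolds**0.9) ** 2

        def pressure_loss_per_metre(velocity, diameter, density, viscosity, roughness):
            # Darcy-Weisbach: dp/L = f * (1/D) * rho * v^2 / 2
            re = density * velocity * diameter / viscosity
            f = friction_factor(re, roughness / diameter)
            return f * density * velocity**2 / (2.0 * diameter)

        # Hypothetical numbers: 0.5 m pipe, 3 m/s slurry at 1300 kg/m3, 2 mPa.s
        steel = pressure_loss_per_metre(3.0, 0.5, 1300.0, 0.002, roughness=0.2e-3)
        lined = pressure_loss_per_metre(3.0, 0.5, 1300.0, 0.002, roughness=0.05e-3)
        print(steel, lined, steel / lined)   # lower roughness gives lower specific loss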

  14. A simplified model of natural and mechanical removal to estimate cleanup equipment efficiency

    International Nuclear Information System (INIS)

    Lehr, W.

    2001-01-01

    Oil spill response organizations rely on modelling to make decisions in offshore response operations. Models are used to test different cleanup strategies and to measure the expected cost of cleanup and the reduction in environmental impact. The oil spill response community has traditionally used the concept of worst case scenario in developing contingency plans for spill response. However, there are many drawbacks to this approach. The Hazardous Materials Response Division of the National Oceanic and Atmospheric Administration in Cooperation with the U.S. Navy Supervisor of Salvage and Diving has developed a Trajectory Analysis Planner (TAP) which will give planners the tool to try out different cleanup strategies and equipment configurations based upon historical wind and current conditions instead of worst-case scenarios. The spill trajectory model is a classic example in oil spill modelling that uses advanced non-linear three-dimensional hydrodynamical sub-models to estimate surface currents under conditions where oceanographic initial conditions are not accurately known and forecasts of wind stress are unreliable. In order to get better answers, it is often necessary to refine input values rather than increasing the sophistication of the hydrodynamics. This paper described another spill example where the level of complexity of the algorithms needs to be evaluated with regard to the reliability of the input, the sensitivity of the answers to input and model parameters, and the comparative reliability of other algorithms in the model. 9 refs., 1 fig

  15. On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences

    Directory of Open Access Journals (Sweden)

    Jeyarajan Thiyagalingam

    2011-01-01

    Full Text Available Images are ubiquitous in biomedical applications from basic research to clinical practice. With the rapid increase in resolution, dimensionality of the images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time experience. We also show how architectural peculiarities of these devices can be best exploited in the benefit of algorithms, most specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results.

  16. Low-rank Kalman filtering for efficient state estimation of subsurface advective contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2012-04-01

    Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied on a linear contaminant transport model in the same porous medium. Because of different sources of uncertainties, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear invariants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. Low-rank filters are demonstrated to significantly reduce the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.

  17. Interpretando correctamente en salud pública estimaciones puntuales, intervalos de confianza y contrastes de hipótesis Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    Full Text Available This essay clarifies some statistical concepts frequently used in public health research that are often interpreted incorrectly, namely point estimates, confidence intervals, and hypothesis tests. By drawing a parallel between these three concepts and comparing them from both the classical and the Bayesian perspectives, their most important differences in interpretation become clearer.

  18. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    Science.gov (United States)

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images, hence to date, require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.

  19. A Production Efficiency Model-Based Method for Satellite Estimates of Corn and Soybean Yields in the Midwestern US

    Directory of Open Access Journals (Sweden)

    Andrew E. Suyker

    2013-11-01

    Full Text Available Remote sensing techniques that provide synoptic and repetitive observations over large geographic areas have become increasingly important in studying the role of agriculture in global carbon cycles. However, it is still challenging to model crop yields based on remotely sensed data due to the variation in radiation use efficiency (RUE) across crop types and the effects of spatial heterogeneity. In this paper, we propose a production efficiency model-based method to estimate corn and soybean yields with MODerate Resolution Imaging Spectroradiometer (MODIS) data by explicitly handling the following two issues: (1) field-measured RUE values for corn and soybean are applied to relatively pure pixels instead of the biome-wide RUE value prescribed in the MODIS vegetation productivity product (MOD17); and (2) contributions to productivity from vegetation other than crops in mixed pixels are deducted at the level of MODIS resolution. Our estimated yields statistically correlate with the national survey data for rainfed counties in the Midwestern US with low errors for both corn (R2 = 0.77; RMSE = 0.89 MT/ha) and soybeans (R2 = 0.66; RMSE = 0.38 MT/ha). Because the proposed algorithm does not require any retrospective analysis that constructs empirical relationships between the reported yields and remotely sensed data, it could monitor crop yields over large areas.
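
    A schematic Python sketch of the production-efficiency logic described above: biomass is accumulated as RUE × fPAR × PAR over the season for the crop fraction of each pixel and then converted to yield with a harvest index. The coefficients, the linear NDVI-to-fPAR step and all numbers are placeholders, not the paper's calibration.

        import numpy as np

        def fpar_from_ndvi(ndvi):
            # Crude placeholder: linear ramp between bare soil and full canopy.
            return np.clip(1.24 * ndvi - 0.17, 0.0, 0.95)

        def seasonal_yield(ndvi_series, par_series, crop_fraction,
                           rue=3.0, harvest_index=0.5, moisture=0.155):
            # ndvi_series, par_series: values per compositing period (PAR in MJ/m2)
            fpar = fpar_from_ndvi(np.asarray(ndvi_series, dtype=float))
            apar = fpar * np.asarray(par_series, dtype=float)
            # Deduct the contribution of non-crop vegetation in mixed pixels
            # by scaling the absorbed radiation with the crop fraction.
            biomass = rue * (apar * crop_fraction).sum()      # g dry matter per m2
            grain_dry = harvest_index * biomass
            return grain_dry / (1.0 - moisture) * 0.01        # rough t/ha at standard moisture

        print(seasonal_yield(ndvi_series=[0.3, 0.6, 0.8, 0.7, 0.4],
                             par_series=[90, 110, 120, 100, 80],
                             crop_fraction=0.85))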

  20. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part II. Uncertainty estimation

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

    Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. Uncertainty of the heating value measured by the gas chromatography method on a molar basis was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH4 quantification may be able to reduce the uncertainty of measurement. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard liters per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that electrical efficiency for non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably.
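
    As a rough illustration of how the individual uncertainties quoted above combine, the sketch below propagates relative uncertainties in quadrature for an efficiency of the form η = P / (q · HV), treating the three quoted figures, for simplicity, as independent relative uncertainties at the same level of confidence.

        import math

        # Relative expanded uncertainties (95% level) quoted in the abstract.
        u_heating_value = 0.0012   # +/-0.12 %
        u_flow_rate     = 0.010    # +/-1.0 %
        u_power         = 0.0014   # +/-0.14 %

        # For efficiency eta = P_el / (q_fuel * HV), a product/quotient of
        # independent quantities, relative uncertainties combine in quadrature
        # (assuming comparable coverage factors for all three terms).
        u_eta = math.sqrt(u_heating_value**2 + u_flow_rate**2 + u_power**2)
        print(f"relative uncertainty of efficiency ~ +/-{100 * u_eta:.2f} %")
        # ~ +/-1.02 %, consistent with the +/-1.0 % figure reported above.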

  1. Estimation of Efficiency of the Cooling Channel of the Nozzle Blade of Gas-Turbine Engines

    Science.gov (United States)

    Vikulin, A. V.; Yaroslavtsev, N. L.; Zemlyanaya, V. A.

    2018-02-01

    The main direction of improvement of gas-turbine plants (GTP) and gas-turbine engines (GTE) is increasing the gas temperature at the turbine inlet. For the solution of this problem, promising systems of intensification of heat exchange in cooled turbine blades are developed. With this purpose, studies of the efficiency of the cooling channel of the nozzle blade in the basic modification and of the channel after constructive measures for improvement of the cooling system by the method of calorimetry in a liquid-metal thermostat were conducted. The combined system of heat-exchange intensification with the complicated scheme of branched channels is developed; it consists of a vortex matrix and three rows of inclined intermittent trip strips. The maximum value of hydraulic resistance ξ is observed at the first row of the trip strips, which is connected with the effect of dynamic impact of airflow on the channel walls, its turbulence, and rotation by 117° at the inlet to the channels formed by the trip strips. These factors explain the high value of hydraulic resistance equal to 3.7-3.4 for the first row of the trip strips. The obtained effect was also confirmed by the results of thermal tests, i.e., the unevenness of heat transfer on the back and on the trough of the blade is observed at the first row of the trip strips, which amounts 8-12%. This unevenness has a fading character; at the second row of the trip strips, it amounts to 3-7%, and it is almost absent at the third row. At the area of vortex matrix, the intensity of heat exchange on the blade back is higher as compared to the trough, which is explained by the different height of the matrix ribs on its opposite sides. The design changes in the nozzle blade of basic modification made it possible to increase the intensity of heat exchange by 20-50% in the area of the vortex matrix and by 15-30% on the section of inclined intermittent trip strips. As a result of research, new criteria dependences for the

  2. Estimation of the potential efficiency of a multijunction solar cell at a limit balance of photogenerated currents

    Energy Technology Data Exchange (ETDEWEB)

    Mintairov, M. A., E-mail: mamint@mail.ioffe.ru; Evstropov, V. V.; Mintairov, S. A.; Shvarts, M. Z.; Timoshina, N. Kh.; Kalyuzhnyy, N. A. [Russian Academy of Sciences, Ioffe Physical-Technical Institute (Russian Federation)

    2015-05-15

    A method is proposed for estimating the potential efficiency which can be achieved in an initially unbalanced multijunction solar cell by the mutual convergence of photogenerated currents: to extract this current from a relatively narrow band-gap cell and to add it to a relatively wide-gap cell. It is already known that the properties facilitating relative convergence are inherent to such objects as bound excitons, quantum dots, donor-acceptor pairs, and others located in relatively wide-gap cells. In fact, the proposed method is reduced to the problem of obtaining such a required light current-voltage (I–V) characteristic which corresponds to the equality of all photogenerated short-circuit currents. Two methods for obtaining the required light I–V characteristic are used. The first one is selection of the spectral composition of the radiation incident on the multijunction solar cell from an illuminator. The second method is a double shift of the dark I–V characteristic: a current shift J_g (common set photogenerated current) and a voltage shift (−J_g·R_s), where R_s is the series resistance. For the light and dark I–V characteristics, a general analytical expression is derived, which considers the effect of so-called luminescence coupling in multijunction solar cells. The experimental I–V characteristics are compared with the calculated ones for a three-junction InGaP/GaAs/Ge solar cell with R_s = 0.019 Ω·cm² and a maximum actual efficiency of 36.9%. Its maximum potential efficiency is estimated as 41.2%.
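
    One compact way to write the double shift of the dark I–V characteristic described above (a reading of the abstract with a generator sign convention assumed, not a formula quoted from the paper): each point (V, J) of the dark curve is translated by (−J_g·R_s, −J_g), i.e.

        V_{\mathrm{light}}(J) = V_{\mathrm{dark}}(J + J_g) - J_g R_s ,
        \qquad
        J_{\mathrm{light}}(V) = J_{\mathrm{dark}}(V + J_g R_s) - J_g .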

  3. Spectral and Energy Efficient Low-Overhead Uplink and Downlink Channel Estimation for 5G Massive MIMO Systems

    Directory of Open Access Journals (Sweden)

    Imran Khan

    2018-01-01

    Full Text Available Uplink and Downlink channel estimation in massive Multiple Input Multiple Output (MIMO) systems is an intricate issue because of the increasing channel matrix dimensions. The channel feedback overhead using traditional codebook schemes is very large, which consumes more bandwidth and decreases the overall system efficiency. The purpose of this paper is to decrease the channel estimation overhead by taking the advantage of sparse attributes and also to optimize the Energy Efficiency (EE) of the system. To cope with this issue, we propose a novel approach by using Compressed-Sensing (CS), Block Iterative-Support-Detection (Block-ISD), Angle-of-Departure (AoD) and Structured Compressive Sampling Matching Pursuit (S-CoSaMP) algorithms to reduce the channel estimation overhead and compare them with the traditional algorithms. The CS uses temporal-correlation of time-varying channels to produce Differential-Channel Impulse Response (DCIR) among two CIRs that are adjacent in time-slots. DCIR has greater sparsity than the conventional CIRs as it can be easily compressed. The Block-ISD uses spatial-correlation of the channels to obtain the block-sparsity which results in lower pilot-overhead. AoD quantizes the channels whose path-AoDs variation is slower than path-gains and such information is utilized for reducing the overhead. S-CoSaMP deploys structured-sparsity to obtain reliable Channel-State-Information (CSI). MATLAB simulation results show that the proposed CS based algorithms reduce the feedback and pilot-overhead by a significant percentage and also improve the system capacity as compared with the traditional algorithms. Moreover, the EE level increases with increasing Base Station (BS) density, UE density and lowering hardware impairments level.

  4. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant
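
    The step from an estimated precision matrix to partial correlations is standard and can be sketched in a few lines of Python; GraphicalLasso from scikit-learn is used below only as a readily available sparse precision estimator standing in for CLIME, and the data are synthetic.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        def partial_correlation(precision):
            # rho_ij = -omega_ij / sqrt(omega_ii * omega_jj)
            d = np.sqrt(np.diag(precision))
            pcorr = -precision / np.outer(d, d)
            np.fill_diagonal(pcorr, 1.0)
            return pcorr

        rng = np.random.default_rng(0)
        ts = rng.standard_normal((200, 10))          # 200 time points, 10 nodes (toy data)
        model = GraphicalLasso(alpha=0.1).fit(ts)    # sparse precision matrix estimate
        print(partial_correlation(model.precision_).round(2))

    The tuning parameter alpha plays the role of the sparsity control that the Dens-based method is designed to select.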

  5. Metamodel for Efficient Estimation of Capacity-Fade Uncertainty in Li-Ion Batteries for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jaewook Lee

    2015-06-01

    Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov Chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.

  6. Cilioprotists as biological indicators for estimating the efficiency of using Gravel Bed Hydroponics System in domestic wastewater treatment.

    Science.gov (United States)

    El-Serehy, Hamed A; Bahgat, Magdy M; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham

    2014-07-01

    Interest has increased over the last several years in using different methods for treating sewage. The rapid population growth in developing countries (Egypt, for example, with a population of more than 87 million) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements and using indigenous materials and skills. Gravel Bed Hydroponics (GBH) as a constructed wetland system has proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of species of ciliates (23 species) and these organisms were potentially very useful as biological indicators for various saprobic conditions. Moreover, the ciliates provided excellent means for estimating the efficiency of the system for sewage purification. Results affirmed the ability of this system to produce high quality effluent with sufficient microbial reduction to enable the production of irrigation quality water.

  7. Chapter 21: Estimating Net Savings - Common Practices. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)

    2017-11-02

    This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM and V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares the current industry practices for determining net energy savings but does not prescribe methods.

  8. Efficiency in the Worst Production Situation Using Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Md. Kamrul Hossain

    2013-01-01

    Full Text Available Data envelopment analysis (DEA) measures relative efficiency among decision making units (DMUs) without considering noise in the data. The least efficient DMU indicates that it is in the worst situation. In this paper, we measure the efficiency of an individual DMU whenever it loses the maximum output, while the efficiency of the other DMUs is measured in the observed situation. This efficiency is the minimum efficiency of a DMU. Stochastic data envelopment analysis (SDEA), a DEA method that considers noise in the data, is proposed in this study. Using the bounded Pareto distribution, we estimate the DEA efficiency from the efficiency interval. A small value of the shape parameter allows the efficiency to be estimated more accurately using the Pareto distribution. Rank correlations were estimated between the observed efficiencies and the minimum efficiency, as well as between the observed and estimated efficiencies. The correlations indicate the effectiveness of this SDEA model.
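
    The deterministic efficiency scores that SDEA starts from can be computed with a small linear program; the sketch below (Python with scipy.optimize.linprog, toy data) is a generic input-oriented CCR illustration, not the bounded-Pareto stochastic extension proposed in the paper.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, j0):
            # Input-oriented CCR envelopment model for DMU j0:
            #   min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0
            # X: (m inputs, n DMUs), Y: (s outputs, n DMUs)
            m, n = X.shape
            s = Y.shape[0]
            c = np.concatenate(([1.0], np.zeros(n)))          # variables: [theta, lam]
            A_inputs = np.hstack((-X[:, [j0]], X))            # X lam - theta x0 <= 0
            A_outputs = np.hstack((np.zeros((s, 1)), -Y))     # -Y lam <= -y0
            A_ub = np.vstack((A_inputs, A_outputs))
            b_ub = np.concatenate((np.zeros(m), -Y[:, j0]))
            bounds = [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[0]

        X = np.array([[2.0, 3.0, 6.0, 9.0]])      # one input, four DMUs (toy data)
        Y = np.array([[1.0, 2.0, 3.0, 4.0]])      # one output
        print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])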

  9. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  10. Plant Friendly Input Design for Parameter Estimation in an Inertial System with Respect to D-Efficiency Constraints

    Directory of Open Access Journals (Sweden)

    Wiktor Jakowluk

    2014-11-01

    Full Text Available System identification, in practice, is carried out by perturbing processes or plants under operation. That is why in many industrial applications a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal which is then employed in the identification experiment and to examine the relationships between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the Fisher information matrix determinant (D-optimality), expressed in conventional Bolza form. Since under such conditions of the identification experiment we can only talk about D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
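
    For reference, the D-efficiency measure mentioned above is conventionally defined relative to the D-optimal design; a standard textbook form (notation assumed here, not copied from the paper) is

        \mathrm{eff}_D(\xi) = \left( \frac{\det M(\xi)}{\det M(\xi^{*})} \right)^{1/p},
        \qquad 0 \le \mathrm{eff}_D(\xi) \le 1,

    where M(ξ) is the Fisher information matrix of the design ξ, ξ* is the D-optimal design and p is the number of estimated parameters, so that a constraint such as eff_D ≥ 0.9 bounds the admissible loss of estimation accuracy in exchange for a friendlier input.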

  11. Estimating the Influence of Housing Energy Efficiency and Overheating Adaptations on Heat-Related Mortality in the West Midlands, UK

    Directory of Open Access Journals (Sweden)

    Jonathon Taylor

    2018-05-01

    Full Text Available Mortality rates rise during hot weather in England, and projected future increases in heatwave frequency and intensity require the development of heat protection measures such as the adaptation of housing to reduce indoor overheating. We apply a combined building physics and health model to dwellings in the West Midlands, UK, using an English Housing Survey (EHS)-derived stock model. Regional temperature exposures, heat-related mortality risk, and space heating energy consumption were estimated for 2030s, 2050s, and 2080s medium emissions climates prior to and following heat mitigating, energy-efficiency, and occupant behaviour adaptations. Risk variation across adaptations, dwellings, and occupant types was assessed. Indoor temperatures were greatest in converted flats, while heat mortality rates were highest in bungalows due to the occupant age profiles. Full energy efficiency retrofit reduced regional domestic space heating energy use by 26% but increased summertime heat mortality 3–4%, while reduced façade absorptance decreased heat mortality 12–15% but increased energy consumption by 4%. External shutters provided the largest reduction in heat mortality (37–43%), while closed windows caused a large increase in risk (29–64%). Ensuring adequate post-retrofit ventilation, targeted installation of shutters, and ensuring operable windows in dwellings with heat-vulnerable occupants may save energy and significantly reduce heat-related mortality.

  12. Parameter identification for structural dynamics based on interval analysis algorithm

    Science.gov (United States)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated by using the central difference method and an ARMA system. With the help of the fixed memory least square method and the matrix inverse lemma, a set-membership identification technique is applied to obtain the best estimation of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as we know the bounds of the uncertainties, this algorithm can obtain not only the central estimates of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are applied and evaluated by three identification criteria respectively.

  13. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim

    2014-01-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.

  14. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.

  15. Using the value of Lin's concordance correlation coefficient as a criterion for efficient estimation of areas of leaves of eelgrass from noisy digital images.

    Science.gov (United States)

    Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Castillo, Oscar

    2014-01-01

    Eelgrass is a cosmopolitan seagrass species that provides important ecological services in coastal and near-shore environments. Despite its relevance, loss of eelgrass habitats is noted worldwide. Restoration by replanting plays an important role, and accurate measurements of the standing crop and productivity of transplants are important for evaluating restoration of the ecological functions of natural populations. Traditional assessments are destructive, and although they do not harm natural populations, in transplants the destruction of shoots might cause undesirable alterations. Non-destructive assessments of the aforementioned variables are obtained through allometric proxies expressed in terms of measurements of the lengths or areas of leaves. Digital imagery could produce measurements of leaf attributes without the removal of shoots, but sediment attachments, damage infringed by drag forces or humidity contents induce noise-effects, reducing precision. Available techniques for dealing with noise caused by humidity contents on leaves use the concepts of adjacency, vicinity, connectivity and tolerance of similarity between pixels. Selection of an interval of tolerance of similarity for efficient measurements requires extended computational routines with tied statistical inferences making concomitant tasks complicated and time consuming. The present approach proposes a simplified and cost-effective alternative, and also a general tool aimed to deal with any sort of noise modifying eelgrass leaves images. Moreover, this selection criterion relies only on a single statistics; the calculation of the maximum value of the Concordance Correlation Coefficient for reproducibility of observed areas of leaves through proxies obtained from digital images. Available data reveals that the present method delivers simplified, consistent estimations of areas of eelgrass leaves taken from noisy digital images. Moreover, the proposed procedure is robust because both the optimal
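
    Lin's concordance correlation coefficient used as the selection statistic above has a closed form; the short Python sketch below computes it for a set of observed leaf areas and their image-derived proxies (toy numbers, not the study's data).

        import numpy as np

        def concordance_correlation(x, y):
            # Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.cov(x, y, bias=True)[0, 1]
            return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        observed = [12.1, 15.4, 9.8, 20.3, 17.6]        # leaf areas, e.g. cm^2
        estimated = [11.8, 15.9, 10.4, 19.7, 18.1]      # proxies from processed images
        print(round(concordance_correlation(observed, estimated), 3))

    Maximising this single statistic over candidate tolerance settings is what replaces the extended computational routines mentioned in the abstract.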

  16. Cálcio ionizado no soro: estimativa do intervalo de referência e condições de coleta Serum ionized calcium: reference interval estimation and blood collection condictions

    Directory of Open Access Journals (Sweden)

    Adagmar Andriolo

    2004-04-01

    methodology with a favorable cost/benefit ratio. The use of this methodology requires reference interval estimation. OBJECTIVE: To estimate the reference interval for serum ionized calcium, and to evaluate interference from tourniquet application time and from sample refrigeration before analysis. MATERIAL AND METHOD: to estimate the reference interval we included the results of 11,320 consecutive ionized calcium determinations performed from January 2000 to November 2002; in order to evaluate the effect of sample refrigeration, 16 samples were collected in duplicate, so that one tube was placed in an ice bath and the other was maintained at room temperature. To evaluate the effect of tourniquet application time, we collected blood samples from 6 normal subjects, from one arm immediately after tourniquet application and from the other arm after 3 minutes of tourniquet application. The blood was collected in evacuated tubes with gel separator and centrifuged up to 30 minutes after collection. All determinations were performed up to 4 hours after centrifugation by ion-selective electrode. RESULTS: regarding the central 95% data distribution, the lower and upper limits were, respectively, 1.11 (90% confidence interval: 1.1 to 1.11) and 1.4 mmol/l (90% confidence interval: 1.39 to 1.41). No significant differences were detected between results with and without refrigeration or between samples collected with less than 1 minute and after 3 minutes of tourniquet application.
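
    The nonparametric reference interval and its confidence limits described above can be reproduced schematically in a few lines of Python; bootstrap percentile confidence intervals are shown here as one common choice, since the laboratory's exact procedure is not stated in the abstract.

        import numpy as np

        def reference_interval(values, n_boot=2000, seed=1):
            # Central 95 % reference interval (2.5th and 97.5th percentiles)
            # with 90 % bootstrap confidence intervals for each limit.
            rng = np.random.default_rng(seed)
            values = np.asarray(values, float)
            low, high = np.percentile(values, [2.5, 97.5])
            boots = rng.choice(values, size=(n_boot, values.size), replace=True)
            lows = np.percentile(boots, 2.5, axis=1)
            highs = np.percentile(boots, 97.5, axis=1)
            return (low, np.percentile(lows, [5, 95])), (high, np.percentile(highs, [5, 95]))

        # Toy ionized-calcium results (mmol/l), standing in for the 11,320 measurements.
        sample = np.random.default_rng(0).normal(1.255, 0.075, size=500).round(2)
        print(reference_interval(sample))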

  17. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    Science.gov (United States)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. E.g., the method does not need to estimate tray interfacial area, which can be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any trays in distillation columns. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It is highly emphasized that estimating efficiency of an operating column has to be distinguished from that of a column being designed.

  18. Interval Forecast for Smooth Transition Autoregressive Model ...

    African Journals Online (AJOL)

    In this paper, we propose a simple method for constructing interval forecasts for the smooth transition autoregressive (STAR) model. This interval forecast is based on bootstrapping the residual errors of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) functions. This new ...
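
    A minimal residual-bootstrap prediction interval, shown here in Python for a plain AR(1) model standing in for the fitted STAR model; the AIC-based selection step mentioned in the abstract is omitted.

        import numpy as np

        def bootstrap_interval_forecast(y, horizon=5, n_boot=1000, level=0.95, seed=0):
            # Fit AR(1) by least squares, then bootstrap future paths from residuals.
            rng = np.random.default_rng(seed)
            y = np.asarray(y, float)
            x, z = y[:-1], y[1:]
            phi = np.dot(x - x.mean(), z - z.mean()) / np.dot(x - x.mean(), x - x.mean())
            c = z.mean() - phi * x.mean()
            resid = z - (c + phi * x)
            paths = np.empty((n_boot, horizon))
            for b in range(n_boot):
                last = y[-1]
                for h in range(horizon):
                    last = c + phi * last + rng.choice(resid)   # resample one residual
                    paths[b, h] = last
            alpha = (1.0 - level) / 2.0
            return np.percentile(paths, [100 * alpha, 100 * (1 - alpha)], axis=0)

        series = np.cumsum(np.random.default_rng(1).normal(0, 1, 200)) * 0.1  # toy data
        print(bootstrap_interval_forecast(series).round(2))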

  19. Low-Pass Filtering Approach via Empirical Mode Decomposition Improves Short-Scale Entropy-Based Complexity Estimation of QT Interval Variability in Long QT Syndrome Type 1 Patients

    Directory of Open Access Journals (Sweden)

    Vlasta Bari

    2014-09-01

    Full Text Available Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on the noise and/or action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs) subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of the cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
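
    The complexity index referred to above is sample entropy computed after subtracting the fastest intrinsic mode function. The Python sketch below implements a plain sample entropy and assumes an external EMD routine (the PyEMD package is used here as one possible choice); the signal is synthetic, not Holter data.

        import numpy as np
        from PyEMD import EMD   # assumed available; any EMD implementation would do

        def sample_entropy(x, m=2, r_factor=0.2):
            # SampEn(m, r, N) with tolerance r = r_factor * std(x).
            x = np.asarray(x, float)
            r = r_factor * x.std()
            def match_count(mm):
                templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
                return (np.sum(d <= r) - len(templates)) / 2.0   # exclude self-matches
            b, a = match_count(m), match_count(m + 1)
            return -np.log(a / b)

        rng = np.random.default_rng(0)
        qt = 0.40 + 0.01 * np.sin(np.arange(300) / 10.0) + 0.005 * rng.standard_normal(300)
        imfs = EMD().emd(qt)        # first IMF carries the fastest oscillations
        filtered = qt - imfs[0]     # low-pass filtering by removing that IMF
        print(sample_entropy(qt), sample_entropy(filtered))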

  20. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large

  1. ESTIMATION OF LONG-TERM INVESTMENT PROJECTS WITH ENERGY-EFFICIENT SOLUTIONS BASED ON LIFE CYCLE COSTS INDICATOR

    Directory of Open Access Journals (Sweden)

    Bazhenov Viktor Ivanovich

    2015-09-01

    Full Text Available The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it worthwhile to develop economic methods for comparing technical solutions in the construction field. The article describes an example of practical Life Cycle Cost (LCC) evaluation based on Present Value (PV) determination. This gives the investor a possibility to assess long-term projects (25 years in this case) as commercially profitable, taking into account the inflation rate, the interest rate and the real discount rate (taken as 5%). For the economic analysis, the air-blower station of a WWTP was selected as a significant energy consumer. The technical variants compared are the following blower types: 1 - multistage without control, 2 - multistage with VFD control, 3 - single stage with double vane control. The result of the LCC estimation shows the last variant as the most attractive, or cost-effective, for investment, with savings of 17.2% (variant 1) and 21.0% (variant 2) under the adopted duty conditions and the evaluations of capital costs (Cic + Cin) together with the related annual expenditures (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow. This was the reason for the adopted energy consumptions, kW∙h: 2158 (variant 1), 1743...2201 (variant 2), 1058...1951 (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. An example of evaluations connected with energy-efficient solutions is given, but the approach also applies to cases with other resource savings, such as all types of fuel. In conclusion, the LCC indicator is recommended for use jointly with the method of discounted cash flows, which will satisfy the investor's interest in technical and economic comparisons.
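
    The life-cycle-cost comparison described above reduces to discounting recurring costs to present value; a small illustrative Python sketch with invented figures (not the article's data) follows.

        def life_cycle_cost(capital, annual_cost, years=25, real_discount_rate=0.05):
            # LCC = (Cic + Cin) + sum over t of (Ce + Co + Cm)_t / (1 + r)^t
            pv_recurring = sum(annual_cost / (1.0 + real_discount_rate) ** t
                               for t in range(1, years + 1))
            return capital + pv_recurring

        # Hypothetical blower options: (capital cost, annual energy + O&M cost), EUR
        options = {"multistage, no control":     (180e3, 95e3),
                   "multistage, VFD control":    (210e3, 80e3),
                   "single stage, double vane":  (260e3, 60e3)}
        for name, (cap, annual) in options.items():
            print(name, round(life_cycle_cost(cap, annual) / 1e6, 2), "MEUR")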

  2. Towards Remote Estimation of Radiation Use Efficiency in Maize Using UAV-Based Low-Cost Camera Imagery

    Directory of Open Access Journals (Sweden)

    Andreas Tewes

    2018-02-01

    Full Text Available Radiation Use Efficiency (RUE) defines the productivity with which absorbed photosynthetically active radiation (APAR) is converted to plant biomass. Readily used in crop growth models to predict dry matter accumulation, RUE is commonly determined by elaborate static sensor measurements in the field. Different definitions are used, based on total absorbed PAR (RUEtotal) or PAR absorbed by the photosynthetically active leaf tissue only (RUEgreen). Previous studies have shown that the fraction of PAR absorbed (fAPAR), which supports the assessment of RUE, can be reliably estimated via remote sensing (RS), but unfortunately at spatial resolutions too coarse for experimental agriculture. UAV-based RS offers the possibility to cover plant reflectance at very high spatial and temporal resolution, possibly covering several experimental plots in little time. We investigated if (a) UAV-based low-cost camera imagery allowed estimating RUEs in different experimental plots where maize was cultivated in the growing season of 2016, (b) those values were different from the ones previously reported in literature and (c) there was a difference between RUEtotal and RUEgreen. We determined fractional cover and canopy reflectance based on the RS imagery. Our study found that RUEtotal ranges between 4.05 and 4.59, and RUEgreen between 4.11 and 4.65. These values are higher than those published in other research articles, but not outside the range of plausibility. The difference between RUEtotal and RUEgreen was minimal, possibly due to prolonged canopy greenness induced by the stay-green trait of the cultivar grown. The procedure presented here makes time-consuming APAR measurements for determining RUE, especially in large experiments, superfluous.
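
    The seasonal RUE computation itself is a simple ratio of accumulated dry matter to accumulated APAR. The Python sketch below assumes a synthetic incident-PAR series and an imagery-derived fAPAR trajectory; all numbers are invented and only illustrate the bookkeeping, not the study's data.

        import numpy as np

        rng = np.random.default_rng(3)
        days = 120
        par_incident = np.full(days, 10.0)                       # incident PAR, MJ m-2 d-1 (assumed)
        fapar = np.clip(np.linspace(0.1, 0.9, days), 0.0, 1.0)   # fAPAR series, e.g. from imagery
        apar = par_incident * fapar                              # absorbed PAR, MJ m-2 d-1
        biomass_gain = 2.2 * apar + rng.normal(0.0, 1.0, days)   # dry matter gain, g m-2 d-1 (synthetic)

        rue = biomass_gain.sum() / apar.sum()                    # g dry matter per MJ APAR
        print(f"seasonal RUE = {rue:.2f} g/MJ")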

  3. Laboratory estimation of net trophic transfer efficiencies of PCB congeners to lake trout (Salvelinus namaycush) from its prey

    Science.gov (United States)

    Madenjian, Charles P.; Rediske, Richard R.; O'Keefe, James P.; David, Solomon R.

    2014-01-01

    A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
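
    In its simplest form, the tank-level computation is a mass balance: the congener mass gained by the fish divided by the congener mass consumed in prey. The sketch below uses invented concentrations and masses and ignores details such as growth dilution and start-of-experiment sampling, which the study handles explicitly.

        def net_transfer_efficiency(conc_start, mass_start, conc_end, mass_end,
                                    prey_conc, prey_mass_eaten):
            # gamma = PCB gained by the fish / PCB ingested with prey (simplified mass balance)
            pcb_gained = conc_end * mass_end - conc_start * mass_start   # ng
            pcb_eaten = prey_conc * prey_mass_eaten                      # ng
            return pcb_gained / pcb_eaten

        gamma = net_transfer_efficiency(conc_start=50.0, mass_start=800.0,   # ng/g and g, invented
                                        conc_end=95.0, mass_end=1100.0,
                                        prey_conc=120.0, prey_mass_eaten=900.0)
        print(f"gamma = {gamma:.2f}")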

  4. High-Level Design Space and Flexibility Exploration for Adaptive, Energy-Efficient WCDMA Channel Estimation Architectures

    Directory of Open Access Journals (Sweden)

    Zoltán Endre Rákossy

    2012-01-01

    Full Text Available Due to the fast changing wireless communication standards coupled with strict performance constraints, the demand for flexible yet high-performance architectures is increasing. To tackle the flexibility requirement, software-defined radio (SDR is emerging as an obvious solution, where the underlying hardware implementation is tuned via software layers to the varied standards depending on power-performance and quality requirements leading to adaptable, cognitive radio. In this paper, we conduct a case study for representatives of two complexity classes of WCDMA channel estimation algorithms and explore the effect of flexibility on energy efficiency using different implementation options. Furthermore, we propose new design guidelines for both highly specialized architectures and highly flexible architectures using high-level synthesis, to enable the required performance and flexibility to support multiple applications. Our experiments with various design points show that the resulting architectures meet the performance constraints of WCDMA and a wide range of options are offered for tuning such architectures depending on power/performance/area constraints of SDR.

  5. Subnanosecond fluorescence spectroscopy of human serum albumin as a method to estimate the efficiency of the depression therapy

    Science.gov (United States)

    Syrejshchikova, T. I.; Gryzunov, Yu. A.; Smolina, N. V.; Komar, A. A.; Uzbekov, M. G.; Misionzhnik, E. J.; Maksimova, N. M.

    2010-05-01

    The efficiency of the therapy of psychiatric diseases is estimated using the fluorescence measurements of the conformational changes of human serum albumin in the course of medical treatment. The fluorescence decay curves of the CAPIDAN probe (N-carboxyphenylimide of the dimethylaminonaphthalic acid) in the blood serum are measured. The probe is specifically bound to the albumin drug binding sites and exhibits fluorescence as a reporter ligand. A variation in the conformation of the albumin molecule substantially affects the CAPIDAN fluorescence decay curve on the subnanosecond time scale. A subnanosecond pulsed laser or a Pico-Quant LED excitation source and a fast photon detector with a time resolution of about 50 ps are used for the kinetic measurements. The blood sera of ten patients suffering from depression and treated at the Institute of Psychiatry were preliminary clinically tested. Blood for analysis was taken from each patient prior to the treatment and on the third week of treatment. For ten patients, the analysis of the fluorescence decay curves of the probe in the blood serum using the three-exponential fitting shows that the difference between the amplitudes of the decay function corresponding to the long-lived (9 ns) fluorescence of the probe prior to and after the therapeutic procedure reliably differs from zero at a significance level of 1% ( p < 0.01).

  6. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra

    Science.gov (United States)

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-01

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI) intended for the resolution of the vibrational Schrödinger equation was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated into the configuration interaction (CI) process. Here, we assume that the Hamiltonian writes as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The velocity of the algorithm was increased with the use of a posteriori error estimator (residue) to select the most relevant direction to increase the space. Two examples have been selected for benchmark. In the case of H2CO, we mainly study the performance of A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.

  7. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2014-05-01

    Full Text Available The method of an interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters consists in the following. If there is the uncertainty restraint for the state values of the system, limiting the initial conditions of the system and the set of admissible values for the vector of unknown parameters and inputs, the interval existence condition for the estimations of the system state variables, containing the actual state at a given time, needs to be held valid over the whole considered time segment as well. Conditions of the interval observers design for the considered class of systems are shown. They are: limitation of the input and state, the existence of a majorizing function defining the uncertainty vector for the system, Lipschitz continuity or finiteness of this function, the existence of an observer gain with the suitable Lyapunov matrix. The main condition for design of such a device is cooperativity of the interval estimation error dynamics. An individual observer gain matrix selection problem is considered. In order to ensure the property of cooperativity for interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of the biological reactor. Possible applications of these interval estimation systems are the spheres of robust control, where the presence of various types of uncertainties in the system dynamics is assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.

  8. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: The construction of a frequentist confidence interval may require new theoretical development. Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
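
    A small numerical check of this point, assuming sparse Poisson count data: the exact (Garwood) frequentist confidence interval and a Jeffreys-prior credible interval come out similar, with the frequentist interval somewhat longer. The choice of n = 3 events and the Jeffreys prior are assumptions made for the example.

        from scipy.stats import chi2, gamma

        n = 3   # observed events in one unit of exposure

        # exact frequentist 95% confidence interval for the Poisson mean
        freq = (0.5 * chi2.ppf(0.025, 2 * n), 0.5 * chi2.ppf(0.975, 2 * (n + 1)))

        # Bayesian 95% credible interval with the Jeffreys prior: posterior is Gamma(n + 1/2, rate 1)
        bayes = tuple(gamma.ppf([0.025, 0.975], a=n + 0.5, scale=1.0))

        print(f"frequentist: ({freq[0]:.2f}, {freq[1]:.2f})")
        print(f"Bayesian:    ({bayes[0]:.2f}, {bayes[1]:.2f})")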

  9. Scaling gross ecosystem production at Harvard Forest with remote sensing: a comparison of estimates from a constrained quantum-use efficiency model and eddy correlation

    International Nuclear Information System (INIS)

    Waring, R.H.; Law, B.E.; Goulden, M.L.; Bassow, S.L.; McCreight, R.W.; Wofsy, S.C.; Bazzaz, F.A.

    1995-01-01

    Two independent methods of estimating gross ecosystem production (GEP) were compared over a period of 2 years at monthly integrals for a mixed forest of conifers and deciduous hardwoods at Harvard Forest in central Massachusetts. Continuous eddy flux measurements of net ecosystem exchange (NEE) provided one estimate of GEP by taking day to night temperature differences into account to estimate autotrophic and heterotrophic respiration. GEP was also estimated with a quantum efficiency model based on measurements of maximum quantum efficiency (Qmax), seasonal variation in canopy phenology and chlorophyll content, incident PAR, and the constraints of freezing temperatures and vapour pressure deficits on stomatal conductance. Quantum efficiency model estimates of GEP and those derived from eddy flux measurements compared well at monthly integrals over two consecutive years (R² = 0.98). Remotely sensed data were acquired seasonally with an ultralight aircraft to provide a means of scaling the leaf area and leaf pigmentation changes that affected the light absorption of photosynthetically active radiation to larger areas. A linear correlation between chlorophyll concentrations in the upper canopy leaves of four hardwood species and their quantum efficiencies (R² = 0.99) suggested that seasonal changes in quantum efficiency for the entire canopy can be quantified with remotely sensed indices of chlorophyll. Analysis of video data collected from the ultralight aircraft indicated that the fraction of conifer cover varied from < 7% near the instrument tower to about 25% for a larger sized area. At 25% conifer cover, the quantum efficiency model predicted an increase in the estimate of annual GEP of < 5% because unfavourable environmental conditions limited conifer photosynthesis in much of the non-growing season when hardwoods lacked leaves.

  10. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)

    2017-10-05

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  11. Light- and water-use efficiency model synergy: a revised look at crop yield estimation for agricultural decision-making

    Science.gov (United States)

    Marshall, M.; Tu, K. P.

    2015-12-01

    Large-area crop yield models (LACMs) are commonly employed to address climate-driven changes in crop yield and inform policy makers concerned with climate change adaptation. Production efficiency models (PEMs), a class of LACMs that rely on the conservative response of carbon assimilation to incoming solar radiation absorbed by a crop contingent on environmental conditions, have increasingly been used over large areas with remote sensing spectral information to improve the spatial resolution of crop yield estimates and address important data gaps. Here, we present a new PEM that combines model principles from the remote sensing-based crop yield and evapotranspiration (ET) model literature. One of the major limitations of PEMs is that they are evaluated using data restricted in both space and time. To overcome this obstacle, we first validated the model using 2009-2014 eddy covariance flux tower Gross Primary Production data in a rice field in the Central Valley of California- a critical agro-ecosystem of the United States. This evaluation yielded a Willmot's D and mean absolute error of 0.81 and 5.24 g CO2/d, respectively, using CO2, leaf area, temperature, and moisture constraints from the MOD16 ET model, Priestley-Taylor ET model, and the Global Production Efficiency Model (GLOPEM). A Monte Carlo simulation revealed that the model was most sensitive to the Enhanced Vegetation Index (EVI) input, followed by Photosynthetically Active Radiation, vapor pressure deficit, and air temperature. The model will now be evaluated using 30 x 30m (Landsat resolution) biomass transects developed in 2011 and 2012 from spectroradiometric and other non-destructive in situ metrics for several cotton, maize, and rice fields across the Central Valley. Finally, the model will be driven by Daymet and MODIS data over the entire State of California and compared with county-level crop yield statistics. It is anticipated that the new model will facilitate agro-climatic decision-making in

  12. Estimation of efficiency of new local rehabilitation method at the early post-operative period after dental implantation

    Directory of Open Access Journals (Sweden)

    A. V. Pasechnik

    2017-01-01

      Summary Despite of success of dental implantation, there are often complications at the early post-operative period of implant placing associated with wound damage and aseptic inflammation. Purpose of the work is studying clinical efficiency of combined local application of new mucosal gel “Apior” and magnetotherapy at the early post-operative period after dental implantation. Combined local application of the mucosal gel “Apior” and pulsating low-frequency electromagnetic field in the complex medical treatment of patients after conducting an operation of setting dental implants favourably affects the common state of patients and clinical symptoms of inflammation in the area of operating wound. As compared with patients who had traditional anti-inflammatory therapy, the patients treated with local application of apigel and magnetoterapy had decline of edema incidence, of gingival mucosa hyperemia, of discomfort in the area of conducted operation. There occurred more rapid improvement of inflammation painfulness, which correlated with the improvement of hygienic state of oral cavity and promoted to prevention of bacterial content of damaged mucous surfaces. Estimation of microvasculatory blood stream by the method of ultrasonic doppler flowmetry revealed more rapid normalization of volume and linear high systole speed of blood stream in the periimplant tissues in case of use of new complex local rehabilitation method, that testified to the less pronounced inflammation of oral mucosa after the operation. The authors came to conclusion that the local application of the offered method of medical treatment of early post-operative complications of dental implantation reduces terms of renewal of structural-functional integrity of oral mucosa, helps in preventing development of inflammatory complications and strengthening endosseus implant. The inclusion in the treatment management of a new combined method of application of mucosal gel “Apior” and

  13. Comparison of relative efficiency of genomic SSR and EST-SSR markers in estimating genetic diversity in sugarcane.

    Science.gov (United States)

    Parthiban, S; Govindaraj, P; Senthilkumar, S

    2018-03-01

    Twenty-five primer pairs developed from genomic simple sequence repeats (SSR) were compared with 25 expressed sequence tags (EST) SSRs to evaluate the efficiency of these two sets of primers using 59 sugarcane genetic stocks. The mean polymorphism information content (PIC) of genomic SSR was higher (0.72) compared to the PIC value recorded by EST-SSR marker (0.62). The relatively low level of polymorphism in EST-SSR markers may be due to the location of these markers in more conserved and expressed sequences compared to genomic sequences which are spread throughout the genome. Dendrogram based on the genomic SSR and EST-SSR marker data showed differences in grouping of genotypes. A total of 59 sugarcane accessions were grouped into 6 and 4 clusters using genomic SSR and EST-SSR, respectively. The highly efficient genomic SSR could subcluster the genotypes of some of the clusters formed by EST-SSR markers. The difference in dendrogram observed was probably due to the variation in number of markers produced by genomic SSR and EST-SSR and different portion of genome amplified by both the markers. The combined dendrogram (genomic SSR and EST-SSR) more clearly showed the genetic relationship among the sugarcane genotypes by forming four clusters. The mean genetic similarity (GS) value obtained using EST-SSR among 59 sugarcane accessions was 0.70, whereas the mean GS obtained using genomic SSR was 0.63. Although relatively lower level of polymorphism was displayed by the EST-SSR markers, genetic diversity shown by the EST-SSR was found to be promising as they were functional marker. High level of PIC and low genetic similarity values of genomic SSR may be more useful in DNA fingerprinting, selection of true hybrids, identification of variety specific markers and genetic diversity analysis. Identification of diverse parents based on cluster analysis can be effectively done with EST-SSR as the genetic similarity estimates are based on functional attributes related to

  14. Efficient multiple-trait association and estimation of genetic correlation using the matrix-variate linear mixed model.

    Science.gov (United States)

    Furlotte, Nicholas A; Eskin, Eleazar

    2015-05-01

    Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.

  15. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
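
    The "how high can we raise the efficiency bar" question can be phrased as finding the most stringent efficiency tier whose consumer NPV stays non-negative. The sketch below is only a schematic of that screening step, with invented tiers, prices, savings, lifetime and discount rate; BUENAS itself models stocks, sales and usage in far more detail.

        def npv(incremental_price, annual_savings, life_years=12, discount=0.07):
            # NPV = present value of annual bill savings minus the incremental purchase price
            pv = sum(annual_savings / (1.0 + discount) ** t for t in range(1, life_years + 1))
            return pv - incremental_price

        tiers = [("tier 1", 10.0, 4.0),   # (label, incremental price, annual savings) - invented
                 ("tier 2", 35.0, 8.0),
                 ("tier 3", 90.0, 11.0)]
        cep = None
        for label, price, savings in tiers:          # tiers ordered by increasing stringency
            if npv(price, savings) >= 0:
                cep = label
        print("cost-effective potential level:", cep)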

  16. Construction of Structure of Indicators of Efficiency of Counteraction to Threats of Information Safety in Interests of the Estimation of Security of Information Processes in Computer Systems

    Directory of Open Access Journals (Sweden)

    A. P. Kurilo

    2010-06-01

    Full Text Available A theorem on a system of indicators for estimating the security of information processes in computer systems is formulated and proved. A number of properties are proved that allow the set of indicators of the efficiency of counteraction to information security threats to be regarded as a system.

  17. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal Of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

  18. Interval-based reconstruction for uncertainty quantification in PET

    Science.gov (United States)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM) is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood—expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.

  19. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers’ demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the prices stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared from various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.
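
    For comparison with the two baselines mentioned (the equal-probability and shortest-interval constructions), the sketch below builds both from an empirical error sample; the skewed synthetic errors and the 95 % coverage target are assumptions, and the paper's optimal trade-off interval is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        errors = rng.gamma(shape=2.0, scale=1.5, size=5000) - 3.0   # skewed point-forecast errors
        coverage = 0.95

        equal_tail = np.quantile(errors, [0.025, 0.975])            # equal-probability interval

        e = np.sort(errors)                                         # shortest interval at same coverage
        k = int(np.ceil(coverage * len(e)))
        widths = e[k - 1:] - e[: len(e) - k + 1]
        i = int(np.argmin(widths))
        shortest = (e[i], e[i + k - 1])

        print("equal-tail:", np.round(equal_tail, 2), "width", round(equal_tail[1] - equal_tail[0], 2))
        print("shortest:  ", np.round(shortest, 2), "width", round(shortest[1] - shortest[0], 2))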

  20. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

    Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitats clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extends a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression. The eigenfunction
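
    The global statistic underlying this kind of spatial error analysis is Moran's I of the model residuals. The sketch below computes it for a toy set of four habitats with a row-standardised contiguity weights matrix; the weights and residuals are invented, and the eigenfunction spatial filtering described above is not reproduced.

        import numpy as np

        def morans_i(residuals, w):
            # global Moran's I: (n / S0) * (z' W z) / (z' z), with z the centred residuals
            z = residuals - residuals.mean()
            return (len(residuals) / w.sum()) * (z @ w @ z) / (z @ z)

        w = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        w = w / w.sum(axis=1, keepdims=True)          # row-standardised contiguity weights
        resid = np.array([0.8, 0.6, -0.5, -0.9])      # invented habitat-level residuals
        print(f"Moran's I = {morans_i(resid, w):.3f}")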

  1. Using the confidence interval confidently.

    Science.gov (United States)

    Hazra, Avijit

    2017-10-01

    Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI) which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is how much is the actual change or difference. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
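
    The general form quoted above translates directly into code. The sketch below computes normal-approximation confidence intervals for a mean and for a proportion; the sample values are invented, and for small samples a t critical value would be preferable to z.

        import numpy as np
        from scipy.stats import norm

        def mean_ci(x, confidence=0.95):
            # CI = point estimate +/- z * standard error of the mean
            x = np.asarray(x, dtype=float)
            z = norm.ppf(0.5 + confidence / 2)          # 1.96 for 95%, 2.58 for 99%
            se = x.std(ddof=1) / np.sqrt(len(x))
            return x.mean() - z * se, x.mean() + z * se

        def proportion_ci(successes, n, confidence=0.95):
            p = successes / n
            z = norm.ppf(0.5 + confidence / 2)
            se = np.sqrt(p * (1 - p) / n)
            return p - z * se, p + z * se

        sample = [132, 140, 128, 150, 145, 138, 136, 142]
        print("95% CI for the mean:", mean_ci(sample))
        print("99% CI for the mean:", mean_ci(sample, 0.99))   # wider for the same sample
        print("95% CI for a proportion:", proportion_ci(24, 80))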

  2. Influence of Plot Size on Efficiency of Biomass Estimates in Inventories of Dry Tropical Forests Assisted by Photogrammetric Data from an Unmanned Aircraft System

    Directory of Open Access Journals (Sweden)

    Daud Jones Kachamba

    2017-06-01

    Full Text Available Applications of unmanned aircraft systems (UASs to assist in forest inventories have provided promising results in biomass estimation for different forest types. Recent studies demonstrating use of different types of remotely sensed data to assist in biomass estimation have shown that accuracy and precision of estimates are influenced by the size of field sample plots used to obtain reference values for biomass. The objective of this case study was to assess the influence of sample plot size on efficiency of UAS-assisted biomass estimates in the dry tropical miombo woodlands of Malawi. The results of a design-based field sample inventory assisted by three-dimensional point clouds obtained from aerial imagery acquired with a UAS showed that the root mean square errors as well as the standard error estimates of mean biomass decreased as sample plot sizes increased. Furthermore, relative efficiency values over different sample plot sizes were above 1.0 in a design-based and model-assisted inferential framework, indicating that UAS-assisted inventories were more efficient than purely field-based inventories. The results on relative costs for UAS-assisted and pure field-based sample plot inventories revealed that there is a trade-off between inventory costs and required precision. For example, in our study if a standard error of less than approximately 3 Mg ha−1 was targeted, then a UAS-assisted forest inventory should be applied to ensure more cost effective and precise estimates. Future studies should therefore focus on finding optimum plot sizes for particular applications, like for example in projects under the Reducing Emissions from Deforestation and Forest Degradation, plus forest conservation, sustainable management of forest and enhancement of carbon stocks (REDD+ mechanism with different geographical scales.
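
    Relative efficiency in this design-based, model-assisted setting is simply the ratio of estimator variances, optionally adjusted for survey cost. The short sketch below uses invented standard errors and relative costs purely to show the arithmetic.

        se_field, se_uas = 4.8, 3.1            # Mg ha-1, SEs of mean biomass (invented)
        cost_field, cost_uas = 100.0, 135.0    # relative survey costs (invented)

        re = se_field ** 2 / se_uas ** 2                 # RE > 1 favours the UAS-assisted design
        re_per_cost = re / (cost_uas / cost_field)       # efficiency per unit cost
        print(f"RE = {re:.2f}, cost-adjusted RE = {re_per_cost:.2f}")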

  3. A hydrogen production experiment by the thermo-chemical and electrolytic hybrid hydrogen production in lower temperature range. System viability and preliminary thermal efficiency estimation

    International Nuclear Information System (INIS)

    Takai, Toshihide; Nakagiri, Toshio; Inagaki, Yoshiyuki

    2008-10-01

    A new experimental apparatus for thermo-chemical and electrolytic Hybrid Hydrogen production in the Lower Temperature range (HHLT) was developed, and a hydrogen production experiment was performed to confirm the system operability. Hydrogen production efficiency was estimated and technical problems were clarified through the experimental results. Stable operation of the SO3 electrolysis cell and the sulfur dioxide solution electrolysis cell was confirmed during the experiment, and no damage that would have affected operation was detected in the post-operation inspection. To improve hydrogen production efficiency, it was found that reducing the sulfuric acid circulation and decreasing the cell voltage were the key issues. (author)

  4. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    CERN Document Server

    Rao, D V; Brunetti, A; Gigante, G E; Takeda, T; Itai, Y; Akatsuka, T

    2002-01-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using X-ray tube and secondary target as an excitation source in order to produce the nearly monoenergetic K alpha radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work. (authors)

  5. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    Energy Technology Data Exchange (ETDEWEB)

    Rao, D.V.; Cesareo, R.; Brunetti, A. [Sassari University, Istituto di Matematica e Fisica (Italy); Gigante, G.E. [Roma Universita, Dipt. di Fisica (Italy); Takeda, T.; Itai, Y. [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akatsuka, T. [Yamagata Univ., Yonezawa (Japan). Faculty of Engineering

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using X-ray tube and secondary target as an excitation source in order to produce the nearly monoenergetic K{alpha} radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work. (authors)

  6. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    Science.gov (United States)

    Rao, D. V.; Cesareo, R.; Brunetti, A.; Gigante, G. E.; Takeda, T.; Itai, Y.; Akatsuka, T.

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using X-ray tube and secondary target as an excitation source in order to produce the nearly monoenergetic Kα radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work.
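
    For an on-axis point source viewed through a circular aperture, the textbook solid-angle expression already shows how collimator radius and distance drive the geometrical efficiency. The sketch below uses that standard formula with invented dimensions; it is not the papers' full treatment of extended sources and collimator pairs.

        import math

        def geometrical_efficiency(r_mm, d_mm):
            # Omega = 2*pi*(1 - d / sqrt(d^2 + r^2)); geometrical efficiency = Omega / (4*pi)
            omega = 2.0 * math.pi * (1.0 - d_mm / math.hypot(d_mm, r_mm))
            return omega, omega / (4.0 * math.pi)

        for d in (20.0, 40.0, 80.0):     # move the aperture away from the source
            omega, eff = geometrical_efficiency(r_mm=5.0, d_mm=d)
            print(f"d = {d:5.1f} mm  Omega = {omega:.4f} sr  efficiency = {eff:.4%}")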

  7. Estimation of absolute microglial cell numbers in mouse fascia dentata using unbiased and efficient stereological cell counting principles

    DEFF Research Database (Denmark)

    Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente

    2003-01-01

    Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected...... of microglia, although with this thickness, the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells...... in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV)=0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics...
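
    The optical fractionator estimate itself is a product of the raw count and the reciprocal sampling fractions. In the sketch below the count and fractions are invented (chosen only so the toy result lands near the reported order of magnitude); they are not the study's sampling parameters.

        def optical_fractionator(q_counted, ssf, asf, tsf):
            # N = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf)
            return q_counted / (ssf * asf * tsf)

        n_est = optical_fractionator(
            q_counted=205,   # cells counted in the disectors (invented)
            ssf=1 / 6,       # section sampling fraction: every 6th section
            asf=0.2,         # area sampling fraction: frame area / grid step area
            tsf=0.5,         # thickness sampling fraction: disector height / section thickness
        )
        print(f"estimated total cell number: {n_est:,.0f}")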

  8. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

  9. Estimating Profit Efficiency of Artisanal Fishing in the Pru District of the Brong-Ahafo Region, Ghana

    Directory of Open Access Journals (Sweden)

    Edinam Dope Setsoafia

    2017-01-01

    Full Text Available This study evaluated the profit efficiency of artisanal fishing in the Pru District of Ghana by explicitly computing profit efficiency level, identifying the sources of profit inefficiency, and examining the constraints of artisanal fisheries. Cross-sectional data was obtained from 120 small-scale fishing households using semistructured questionnaire. The stochastic profit frontier model was used to compute profit efficiency level and identify the determinants of profit inefficiency while Garrett ranking technique was used to rank the constraints. The average profit efficiency level was 81.66% which implies that about 82% of the prospective maximum profit was gained due to production efficiency. That is, only 18% of the potential profit was lost due to the fishers’ inefficiency. Also, the age of the household head and household size increase the inefficiency level while experience in artisanal fishing tends to decrease the inefficiency level. From the Garrett ranking, access to credit facility to fully operate the small-scale fishing business was ranked as the most pressing issue followed by unstable prices while perishability was ranked last among the constraints. The study, therefore, recommends that group formation should be encouraged to enable easy access to loans and contract sales to boost profitability.

  10. Rigorous Verification for the Solution of Nonlinear Interval System ...

    African Journals Online (AJOL)

    We survey a general method for solving nonlinear interval systems of equations. In particular, we paid special attention to the computational aspects of linear interval systems since the bulk of computations are done during the stage of computing outer estimation of the including linear interval systems. The height of our ...

  11. Efficient Narrowband Direction of Arrival Estimation Based on a Combination of Uniform Linear/Shirvani-Akbari Arrays

    Directory of Open Access Journals (Sweden)

    Shahriar Shirvani Moghaddam

    2012-01-01

    Full Text Available Uniform linear array (ULA) geometry does not perform well for direction of arrival (DOA) estimation at directions close to the array endfires. Shirvani and Akbari solved this problem by displacing two elements from both ends of the ULA to the top and/or bottom of the array axis. The Shirvani-Akbari array (SAA) presents a considerable improvement in the DOA estimation of narrowband sources arriving at endfire directions in terms of DOA estimation accuracy and angular resolution. In this paper, all new proposed SAA configurations are modelled and also examined numerically. Two well-known DOA estimation algorithms, multiple signal classification (MUSIC) and minimum variance distortionless response (MVDR), are used to evaluate the effectiveness of the proposed arrays using a total root mean square error (RMSE) criterion. In addition, two new scenarios are proposed which divide the angular search into two parts, directions close to the array endfires as well as middle angles. For middle angles, which belong to (−70° ≤ θ ≤ 70°), the ULA is considered, and for endfire angles, which belong to (−90° ≤ θ ≤ −70°) and (70° ≤ θ ≤ 90°), the SAA is considered. Simulation results of the new proposed scenarios for DOA estimation of narrowband signals show better performance with a lower computational load.
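
    A minimal MUSIC spectral search on a plain ULA, sketched below, illustrates the estimator that both array geometries feed into; the SAA only changes the steering-vector model. The element count, spacing, source angles and noise level are assumptions made for the demo.

        import numpy as np

        rng = np.random.default_rng(0)
        m, n_snap, d = 8, 200, 0.5                     # sensors, snapshots, spacing in wavelengths
        true_doas = np.deg2rad([-20.0, 35.0])

        def steering(theta):
            # ULA steering vectors, one column per angle
            return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

        s = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
        noise = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
        x = steering(true_doas) @ s + noise

        r = x @ x.conj().T / n_snap                    # sample covariance matrix
        _, eigvec = np.linalg.eigh(r)                  # eigenvalues in ascending order
        en = eigvec[:, : m - 2]                        # noise subspace (source number assumed known)

        grid = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
        p_music = 1.0 / np.sum(np.abs(en.conj().T @ steering(grid)) ** 2, axis=0)

        interior = (p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:])
        peaks = np.where(interior)[0] + 1              # local maxima of the pseudospectrum
        best = peaks[np.argsort(p_music[peaks])[-2:]]
        print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))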

  12. Multichannel interval timer

    International Nuclear Information System (INIS)

    Turko, B.T.

    1983-10-01

    A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 μs can be preset. Time can be read out in twelve 24-bit words either via CAMAC Crate Controller or an external FIFO register. LSB time calibration is 78.125 ps. An additional word reads out the operational status of twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has an excellent differential linearity, thermal stability and crosstalk free performance

  13. Experimenting with musical intervals

    Science.gov (United States)

    Lo Presto, Michael C.

    2003-07-01

    When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
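
    A quick worked example of the repetition frequency: when both fork frequencies are integer multiples of a common fundamental, that fundamental is their greatest common divisor. The 384 Hz / 256 Hz pair below (a 3:2 perfect fifth) is chosen purely for illustration.

        import math

        f1, f2 = 384, 256                      # Hz
        fundamental = math.gcd(f1, f2)         # 128 Hz; both forks are harmonics of it
        print(f"{f1}:{f2} -> ratio {f1 // fundamental}:{f2 // fundamental}, "
              f"fundamental {fundamental} Hz")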

  14. ESPRIT-like algorithm for computational-efficient angle estimation in bistatic multiple-input multiple-output radar

    Science.gov (United States)

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2016-04-01

    An estimation of signal parameters via a rotational invariance techniques-like (ESPRIT-like) algorithm is proposed to estimate the direction of arrival and direction of departure for bistatic multiple-input multiple-output (MIMO) radar. The properties of a noncircular signal and Euler's formula are first exploited to establish a real-valued bistatic MIMO radar array data, which is composed of sine and cosine data. Then the receiving/transmitting selective matrices are constructed to obtain the receiving/transmitting rotational invariance factors. Since the rotational invariance factor is a cosine function, symmetrical mirror angle ambiguity may occur. Finally, a maximum likelihood function is used to avoid the estimation ambiguities. Compared with the existing ESPRIT, the proposed algorithm can save about 75% of computational load owing to the real-valued ESPRIT algorithm. Simulation results confirm the effectiveness of the ESPRIT-like algorithm.

  15. Methodical Approach to Estimation of Energy Efficiency Parameters of the Economy Under the Structural Changes in the Fuel And Energy Balance (on the Example of Baikal Region

    Directory of Open Access Journals (Sweden)

    Boris Grigorievich Saneev

    2013-12-01

    Full Text Available The authors consider a methodical approach which allows estimating energy efficiency parameters of the region’s economy using a fuel and energy balance (FEB). This approach was tested on the specific case of Baikal region. During the testing process the authors developed ex ante and ex post FEBs and estimated energy efficiency parameters such as the energy, electricity and heat intensity of GRP, coefficients of useful utilization of fuel and energy resources, and a monetary version of the FEB. Forecast estimations are based on the assumptions and limitations of a technologically intensive development scenario for the region. The authors show that the main factor of structural changes in the fuel and energy balance will be the large-scale development of hydrocarbon resources in Baikal region. It will cause structural changes in the composition of both the debit and credit sides of the FEB (namely the structure of export and final consumption of fuel and energy resources). The authors assume that the forecast structural changes of the region’s FEB will significantly improve energy efficiency parameters of the economy: the energy intensity of GRP will decrease by 1.5 times over 2010–2030, electricity and heat intensity by 1.9 times; coefficients of useful utilization of fuel and energy resources will increase by 3–5 p.p. This will save about 20 million tons of fuel equivalent (about 210 billion rubles in 2011 prices) until 2030.

  16. Socioeconomic position and the primary care interval

    DEFF Research Database (Denmark)

    Vedsted, Anders

    2018-01-01

    to the easiness to interpret the symptoms of the underlying cancer. Methods. We conducted a population-based cohort study using survey data on time intervals linked at an individually level to routine collected data on demographics from Danish registries. Using logistic regression we estimated the odds......Introduction. Diagnostic delays affect cancer survival negatively. Thus, the time interval from symptomatic presentation to a GP until referral to secondary care (i.e. primary care interval (PCI)), should be as short as possible. Lower socioeconomic position seems associated with poorer cancer...... younger than 45 years of age and older than 54 years of age had longer primary care interval than patients aged ‘45-54’ years. No other associations for SEP characteristics were observed. The findings may imply that GPs are referring patients regardless of SEP, although some room for improvement prevails...

  17. Modelling the PCR amplification process by a size-dependent branching process and estimation of the efficiency

    NARCIS (Netherlands)

    Lalam, N.; Jacob, C.; Jagers, P.

    2004-01-01

    We propose a stochastic modelling of the PCR amplification process by a size-dependent branching process starting as a supercritical Bienaymé-Galton-Watson transient phase and then having a saturation near-critical size-dependent phase. This model allows us to estimate the probability of replication
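
    A toy simulation in the same spirit, assuming a replication probability that decays as the product accumulates, shows the supercritical-then-saturating behaviour and a per-cycle efficiency estimate; the parameters are invented and this is not the estimator developed in the paper.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_pcr(n0=100, cycles=35, p0=0.95, k=5e7):
            counts = [n0]
            for _ in range(cycles):
                n = counts[-1]
                p = p0 / (1.0 + n / k)                  # replication probability falls near saturation
                counts.append(n + rng.binomial(n, p))
            return np.array(counts, dtype=float)

        counts = simulate_pcr()
        efficiency = counts[1:] / counts[:-1] - 1.0     # per-cycle efficiency estimates
        print("early-cycle efficiency ~", float(efficiency[:5].mean().round(3)))
        print("late-cycle efficiency  ~", float(efficiency[-5:].mean().round(3)))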

  18. Overview of efficient algorithms for super-resolution DOA estimates

    Institute of Scientific and Technical Information of China (English)

    闫锋刚; 沈毅; 刘帅; 金铭; 乔晓林

    2015-01-01

    Computationally efficient methods for super-resolution direction of arrival (DOA) estimation aim to reduce the complexity of conventional techniques, to economize on the costs of systems and to enhance the robustness of DOA estimators against array geometries and other environmental restrictions, which has been an important topic in the field. According to the theory and elements of the multiple signal classification (MUSIC) algorithm and the primary derivations from MUSIC, state-of-the-art efficient super-resolution DOA estimators are classified into five different types. These five types of approaches reduce the complexity by real-valued computation, beam-space transformation, fast subspace estimation, rapid spectral search, and no spectral search, respectively. With such a classification, comprehensive overviews of each kind of efficient method are given and numerical comparisons among these estimators are also conducted to illustrate their advantages. Future development trends of efficient algorithms for super-resolution DOA estimates are finally predicted with the basic requirements of real-world applications.

  19. Towards Remote Estimation of Radiation Use Efficiency in Maize Using UAV-Based Low-Cost Camera Imagery

    OpenAIRE

    Andreas Tewes; Jürgen Schellberg

    2018-01-01

    Radiation Use Efficiency (RUE) defines the productivity with which absorbed photosynthetically active radiation (APAR) is converted to plant biomass. Readily used in crop growth models to predict dry matter accumulation, RUE is commonly determined by elaborate static sensor measurements in the field. Different definitions are used, based on total absorbed PAR (RUEtotal) or PAR absorbed by the photosynthetically active leaf tissue only (RUEgreen). Previous studies have shown that the fraction ...

  20. Quantitative estimations of the efficiency of stabilization and lowering of background in gamma-spectrometry of environment samples

    International Nuclear Information System (INIS)

    Pop, O.M.; Stets, M.V.; Maslyuk, V.T.

    2015-01-01

    We consider the gamma-spectrometric complex of the IEP of the NAS of Ukraine, in which passive multilayer external shielding is used (the complex was built in 1989). We have developed and investigated a system for stabilizing and lowering the background in this gamma-spectrometric complex. The efficiency factors of the shielding are considered as metrological coefficients; their calculation and analysis show that the values differ for different gamma-quantum energies and for different gamma-active nuclides