WorldWideScience

Sample records for estimate monotone normal

  1. Estimation of a monotone percentile residual life function under random censorship.

    Science.gov (United States)

    Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo

    2013-01-01

    In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units that degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its n^(1/2)-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Estimating monotonic rates from biological data using local linear regression.

    Science.gov (United States)

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
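
    The LoLinR package itself is R code; purely to illustrate the underlying idea - scanning contiguous windows of a noisy time series, fitting a local linear regression in each, and reporting the best-supported slope as the rate - here is a minimal Python sketch. The window-scoring rule (smallest standard error of the slope) and the function name are illustrative assumptions, not the package's actual criteria.

        import numpy as np

        def local_linear_rate(t, y, min_width=10):
            # Fit an OLS line in every contiguous window of at least min_width
            # points and keep the slope with the smallest standard error.
            t, y = np.asarray(t, float), np.asarray(y, float)
            best = None
            for i in range(len(t) - min_width + 1):
                for j in range(i + min_width, len(t) + 1):
                    tw, yw = t[i:j], y[i:j]
                    A = np.column_stack([np.ones_like(tw), tw])
                    coef, res, *_ = np.linalg.lstsq(A, yw, rcond=None)
                    rss = res[0] if res.size else float(np.sum((yw - A @ coef) ** 2))
                    se = np.sqrt(rss / (len(tw) - 2) / np.sum((tw - tw.mean()) ** 2))
                    if best is None or se < best[0]:
                        best = (se, coef[1], (i, j))  # (score, rate, window)
            return best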

  3. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
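
    The authors' estimator is a penalized monotone spline; as a rough Python illustration (a stand-in, not a reimplementation) of how monotonicity and the (0,0)-(1,1) boundary constraints can be enforced on a smooth ROC estimate, one can interpolate the empirical operating points with a shape-preserving piecewise cubic:

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        def monotone_roc(controls, cases):
            # Empirical (FPR, TPR) pairs from a high-to-low threshold sweep,
            # with the boundary points (0,0) and (1,1) appended.
            controls, cases = np.asarray(controls), np.asarray(cases)
            thr = np.unique(np.concatenate([controls, cases]))[::-1]
            fpr = np.concatenate([[0.0], [(controls >= c).mean() for c in thr], [1.0]])
            tpr = np.concatenate([[0.0], [(cases >= c).mean() for c in thr], [1.0]])
            fpr, idx = np.unique(fpr, return_index=True)   # strictly increasing x
            tpr = np.maximum.accumulate(tpr[idx])          # enforce monotone y
            return PchipInterpolator(fpr, tpr)             # monotone smooth ROC

        roc = monotone_roc(np.random.randn(200), np.random.randn(200) + 1.0)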

  4. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

    Directory of Open Access Journals (Sweden)

    Xueqin Zhou

    2017-01-01

    This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood, and the density function is estimated by the kernel method. A set of simulations was conducted, which shows that the estimates perform well.

  5. Unordered Monotonicity.

    Science.gov (United States)

    Heckman, James J; Pinto, Rodrigo

    2018-01-01

    This paper defines and analyzes a new monotonicity condition for the identification of counterfactuals and treatment effects in unordered discrete choice models with multiple treatments, heterogeneous agents and discrete-valued instruments. Unordered monotonicity implies and is implied by additive separability of the choice of treatment equations in terms of observed and unobserved variables. These results follow from properties of binary matrices developed in this paper. We investigate conditions under which unordered monotonicity arises as a consequence of choice behavior. We characterize IV estimators of counterfactuals as solutions to discrete mixture problems.

  6. Absolute Monotonicity of Functions Related To Estimates of First Eigenvalue of Laplace Operator on Riemannian Manifolds

    Directory of Open Access Journals (Sweden)

    Feng Qi

    2014-10-01

    The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimating the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.

  7. Asymptotic estimates and exponential stability for higher-order monotone difference equations

    Directory of Open Access Journals (Sweden)

    Pituk Mihály

    2005-01-01

    Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.

  8. Asymptotic estimates and exponential stability for higher-order monotone difference equations

    Directory of Open Access Journals (Sweden)

    Mihály Pituk

    2005-03-01

    Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.

  9. Edit Distance to Monotonicity in Sliding Windows

    DEFF Research Database (Denmark)

    Chan, Ho-Leung; Lam, Tak-Wah; Lee, Lap Kei

    2011-01-01

    Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a ...
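
    For intuition, outside the streaming setting that the paper actually studies: offline, the edit distance to monotonicity of a finite sequence is simply its length minus the length of its longest non-decreasing subsequence, computable in O(n log n). A small Python illustration:

        from bisect import bisect_right

        def edit_distance_to_monotonicity(xs):
            # tails[k] = smallest possible tail of a non-decreasing
            # subsequence of length k + 1 seen so far
            tails = []
            for x in xs:
                k = bisect_right(tails, x)   # bisect_right permits equal values
                if k == len(tails):
                    tails.append(x)
                else:
                    tails[k] = x
            return len(xs) - len(tails)

        assert edit_distance_to_monotonicity([1, 3, 2, 2, 5]) == 1  # remove the 3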

  10. Monotonism.

    Science.gov (United States)

    Franklin, Elda

    1981-01-01

    Reviews studies on the etiology of monotonism, the monotone being that type of uncertain or inaccurate singer who cannot vocally match pitches and who has trouble accurately reproducing even a familiar song. Neurological factors (amusia, right brain abnormalities), age, and sex differences are considered. (Author/SJL)

  11. Bayesian nonparametric estimation of continuous monotone functions with applications to dose-response analysis.

    Science.gov (United States)

    Bornkamp, Björn; Ickstadt, Katja

    2009-03-01

    In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited both from a computational and a mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
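
    To convey the flavor of the model class (a sketch of the construction, not the authors' sampler): any convex mixture of two-sided power (TSP) distribution functions is itself monotone on [0, 1], so a prior on the weights, modes and shapes induces a prior on monotone dose-response curves.

        import numpy as np

        def tsp_cdf(x, m, n):
            # CDF of the two-sided power distribution on [0, 1], mode m, shape n
            x = np.clip(x, 0.0, 1.0)
            lo = m * (x / m) ** n
            hi = 1.0 - (1.0 - m) * ((1.0 - x) / (1.0 - m)) ** n
            return np.where(x <= m, lo, hi)

        def monotone_curve(x, weights, modes, shapes):
            # Convex mixture of TSP CDFs: a monotone function on [0, 1]
            w = np.asarray(weights, float) / np.sum(weights)
            return sum(wi * tsp_cdf(x, m, n) for wi, m, n in zip(w, modes, shapes))

        x = np.linspace(0.0, 1.0, 101)
        y = monotone_curve(x, weights=[0.3, 0.7], modes=[0.2, 0.6], shapes=[2.0, 5.0])
        assert np.all(np.diff(y) >= 0.0)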

  12. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
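
    The 1980 univariate algorithm referenced here is the Fritsch-Carlson scheme; SciPy's PchipInterpolator implements a closely related shape-preserving method, which makes the defining property easy to check in Python:

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y = np.array([0.0, 0.1, 0.2, 3.0, 3.1])   # monotone data with a sharp rise
        f = PchipInterpolator(x, y)               # monotone piecewise cubic
        xs = np.linspace(0.0, 4.0, 401)
        assert np.all(np.diff(f(xs)) >= 0.0)      # no spline-style overshoot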

  13. Monotone Boolean functions

    International Nuclear Information System (INIS)

    Korshunov, A D

    2003-01-01

    Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years

  14. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When one has no knowledge of the functional form of this relationship but expects it to be monotone increasing or decreasing, the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
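
    PAVA itself is compact enough to state in full; a minimal Python version (scikit-learn's IsotonicRegression wraps an equivalent fit) is:

        import numpy as np

        def pava(y, w=None):
            # Weighted least-squares fit that is non-decreasing in the index:
            # merge adjacent blocks while their means violate monotonicity.
            y = np.asarray(y, float)
            w = np.ones_like(y) if w is None else np.asarray(w, float)
            merged = []
            for yi, wi in zip(y, w):
                merged.append([yi, wi, 1])          # [block mean, weight, size]
                while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
                    b2, b1 = merged.pop(), merged.pop()
                    ws = b1[1] + b2[1]
                    merged.append([(b1[0] * b1[1] + b2[0] * b2[1]) / ws, ws, b1[2] + b2[2]])
            return np.concatenate([np.full(size, mean) for mean, _, size in merged])

        print(pava([1.0, 3.0, 2.0, 4.0]))           # [1.  2.5 2.5 4. ]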

  15. Monotonicity and bounds on Bessel functions

    Directory of Open Access Journals (Sweden)

    Larry Landau

    2000-07-01

    I survey my recent results on monotonicity with respect to order of general Bessel functions, which follow from a new identity and lead to best possible uniform bounds. Application may be made to the "spreading of the wave packet" for a free quantum particle on a lattice and to estimates for perturbative expansions.

  16. Strong Convergence of Monotone Hybrid Method for Maximal Monotone Operators and Hemirelatively Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Chakkrid Klin-eam

    2009-01-01

    We prove strong convergence theorems for finding a common element of the zero point set of a maximal monotone operator and the fixed point set of a hemirelatively nonexpansive mapping in a Banach space by using the monotone hybrid iteration method. By using these results, we obtain new convergence results for resolvents of maximal monotone operators and hemirelatively nonexpansive mappings in a Banach space.

  17. Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients

    KAUST Repository

    Matevosyan, Norayr

    2010-10-21

    In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2) 155 (2002)] and Caffarelli and Kenig [Amer. J. Math. 120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u± ≥ 0, Lu± ≥ −1, u+ · u− = 0 in an infinite strip (global version) or a finite parabolic cylinder (localized version), where L is a uniformly parabolic operator Lu = L_{A,b,c}u := div(A(x,s)∇u) + b(x,s)·∇u + c(x,s)u − ∂_s u with double Dini continuous A and uniformly bounded b and c. We also prove the elliptic counterpart of this estimate. This closes the gap between the known conditions in the literature (both in the elliptic and parabolic case) imposed on u± in order to obtain an almost monotonicity estimate. At the end of the paper, we demonstrate how to use this new almost monotonicity formula to prove the optimal C^{1,1}-regularity in a fairly general class of quasi-linear obstacle-type free boundary problems. © 2010 Wiley Periodicals, Inc.

  18. Matching by Monotonic Tone Mapping.

    Science.gov (United States)

    Kovacs, Gyorgy

    2018-06-01

    In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative to conventional measures in problems where the possible tone mappings are close to monotonic.
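
    The paper's efficient algorithms are not reproduced here, but the core quantity has a compact reading: for a non-decreasing tone mapping and squared error, the best attainable match is exactly an isotonic regression of the window intensities on the template intensities. A hedged Python sketch (one mapping direction only; the full MMTM measure is more elaborate):

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        def mmtm_like(template, window):
            # Residual SSD after the best non-decreasing mapping of template
            # intensities onto the window intensities.
            t, w = np.ravel(template), np.ravel(window)
            mapped = IsotonicRegression().fit_transform(t, w)
            return float(np.sum((mapped - w) ** 2))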

  19. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data

  20. Generalized monotone operators in Banach spaces

    International Nuclear Information System (INIS)

    Nanda, S.

    1988-07-01

    The concept of F-monotonicity was first introduced by Kato and this generalizes the notion of monotonicity introduced by Minty. The purpose of this paper is to define various types of F-monotonicities and discuss the relationships among them. (author). 6 refs

  1. Statistical analysis of sediment toxicity by additive monotone regression splines

    NARCIS (Netherlands)

    Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.

    2002-01-01

    Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this approach to sediment toxicity data.

  2. Optimal Monotone Drawings of Trees

    OpenAIRE

    He, Dayu; He, Xin

    2016-01-01

    A monotone drawing of a graph G is a straight-line drawing of G such that, for every pair of vertices u,w in G, there exists a path P_{uw} in G that is monotone in some direction l_{uw}. (Namely, the order of the orthogonal projections of the vertices of P_{uw} on l_{uw} is the same as the order in which they appear in P_{uw}.) The problem of finding monotone drawings for trees has been studied in several recent papers. The main focus is to reduce the size of the drawing. Currently, the smallest drawing...

  3. Multipartite classical and quantum secrecy monotones

    International Nuclear Information System (INIS)

    Cerf, N.J.; Massar, S.; Schneider, S.

    2002-01-01

    In order to study multipartite quantum cryptography, we introduce quantities which vanish on product probability distributions, and which can only decrease if the parties carry out local operations or public classical communication. These 'secrecy monotones' therefore measure how much secret correlation is shared by the parties. In the bipartite case we show that the mutual information is a secrecy monotone. In the multipartite case we describe two different generalizations of the mutual information, both of which are secrecy monotones. The existence of two distinct secrecy monotones allows us to show that in multipartite quantum cryptography the parties must make irreversible choices about which multipartite correlations they want to obtain. Secrecy monotones can be extended to the quantum domain and are then defined on density matrices. We illustrate this generalization by considering tripartite quantum cryptography based on the Greenberger-Horne-Zeilinger state. We show that before carrying out measurements on the state, the parties must make an irreversible decision about what probability distribution they want to obtain

  4. Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients

    KAUST Repository

    Matevosyan, Norayr; Petrosyan, Arshak

    2010-01-01

    In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2) 155 (2002)] and Caffarelli and Kenig [Amer. J. Math. 120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u± ≥ 0, Lu± ≥ −1, u+ · u− = 0 in an infinite strip (global version) or a finite parabolic cylinder (localized version).

  5. Commutative $C^*$-algebras and $\sigma$-normal morphisms

    OpenAIRE

    de Jeu, Marcel

    2003-01-01

    We prove in an elementary fashion that the image of a commutative monotone $\sigma$-complete $C^*$-algebra under a $\sigma$-normal morphism is again monotone $\sigma$-complete and give an application of this result in spectral theory.

  6. The Monotonicity Puzzle: An Experimental Investigation of Incentive Structures

    Directory of Open Access Journals (Sweden)

    Jeannette Brosig

    2010-05-01

    Non-monotone incentive structures, which - according to theory - are able to induce optimal behavior, are often regarded as empirically less relevant for labor relationships. We compare the performance of a theoretically optimal non-monotone contract with a monotone one under controlled laboratory conditions. Implementing some features relevant to real-world employment relationships, our paper demonstrates that, in fact, the frequency of income-maximizing decisions made by agents is higher under the monotone contract. Although this observed behavior does not change the superiority of the non-monotone contract for principals, they do not choose this contract type in a significant way. This is what we call the monotonicity puzzle. Detailed investigations of decisions provide a clue for solving the puzzle and a possible explanation for the popularity of monotone contracts.

  7. Regional trends in short-duration precipitation extremes: a flexible multivariate monotone quantile regression approach

    Science.gov (United States)

    Cannon, Alex

    2017-04-01

    Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. However, it is fundamentally a

  8. A Mathematical Model for Non-monotonic Deposition Profiles in Deep Bed Filtration Systems

    DEFF Research Database (Denmark)

    Yuan, Hao; Shapiro, Alexander

    2011-01-01

    A mathematical model for suspension/colloid flow in porous media and non-monotonic deposition is proposed. It accounts for the migration of particles associated with the pore walls via the second energy minimum (surface associated phase). The surface associated phase migration is characterized by advection and diffusion/dispersion. The proposed model is able to produce a non-monotonic deposition profile. A set of methods for estimating the modeling parameters is provided in the case of minimal particle release. The estimation can be easily performed with available experimental information. The numerical modeling results agree closely with the experimental observations, which demonstrates the ability of the model to capture a non-monotonic deposition profile in practice. An additional equation describing a mobile population behaving differently from the injected population seems to be a sufficient...

  9. Testing Manifest Monotonicity Using Order-Constrained Statistical Inference

    Science.gov (United States)

    Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…

  10. Strong monotonicity in mixed-state entanglement manipulation

    International Nuclear Information System (INIS)

    Ishizaka, Satoshi

    2006-01-01

    A strong entanglement monotone, which never increases under local operations and classical communications (LOCC), restricts quantum entanglement manipulation more strongly than the usual monotone since the usual one does not increase on average under LOCC. We propose strong monotones in mixed-state entanglement manipulation under LOCC. These are related to the decomposability and one-positivity of an operator constructed from a quantum state, and reveal geometrical characteristics of entangled states. These are lower bounded by the negativity or generalized robustness of entanglement

  11. Generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators with applications in non-compact settings and minimization problems

    Directory of Open Access Journals (Sweden)

    Chowdhury, Mohammad S. R.

    2000-01-01

    Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We shall use the concept of escaping sequences introduced by Border (Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we shall obtain existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.

  12. On-line learning of non-monotonic rules by simple perceptron

    OpenAIRE

    Inoue, Jun-ichi; Nishimori, Hidetoshi; Kabashima, Yoshiyuki

    1997-01-01

    We study the generalization ability of a simple perceptron which learns unlearnable rules. The rules are presented by a teacher perceptron with a non-monotonic transfer function. The student is trained in the on-line mode. The asymptotic behaviour of the generalization error is estimated under various conditions. Several learning strategies are proposed and improved to obtain the theoretical lower bound of the generalization error.

  13. Specific non-monotonous interactions increase persistence of ecological networks.

    Science.gov (United States)

    Yan, Chuan; Zhang, Zhibin

    2014-03-22

    The relationship between stability and biodiversity has long been debated in ecology due to opposing empirical observations and theoretical predictions. Species interaction strength is often assumed to be monotonically related to population density, but the effects on stability of ecological networks of non-monotonous interactions that change signs have not been investigated previously. We demonstrate that for four kinds of non-monotonous interactions, shifting signs to negative or neutral interactions at high population density increases persistence (a measure of stability) of ecological networks, while for the other two kinds of non-monotonous interactions shifting signs to positive interactions at high population density decreases persistence of networks. Our results reveal a novel mechanism of network stabilization caused by specific non-monotonous interaction types through either increasing stable equilibrium points or reducing unstable equilibrium points (or both). These specific non-monotonous interactions may be important in maintaining stable and complex ecological networks, as well as other networks such as genes, neurons, the internet and human societies.

  14. On the size of monotone span programs

    NARCIS (Netherlands)

    Nikov, V.S.; Nikova, S.I.; Preneel, B.; Blundo, C.; Cimato, S.

    2005-01-01

    Span programs provide a linear algebraic model of computation. Monotone span programs (MSP) correspond to linear secret sharing schemes. This paper studies the properties of monotone span programs related to their size. Using the results of van Dijk (connecting codes and MSPs) and a construction for

  15. Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods

    KAUST Repository

    Hundsdorfer, W.

    2011-04-29

    In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many linear multistep methods of practical interest are included in the theory. Moreover, it will be shown that for such methods monotonicity can still be valid with suitable Runge-Kutta starting procedures. Restrictions on the stepsizes are derived that are not only sufficient but also necessary for these boundedness and monotonicity properties. © 2011 Springer Science+Business Media, LLC.

  16. Type monotonic allocation schemes for multi-glove games

    OpenAIRE

    Brânzei, R.; Solymosi, T.; Tijs, S.H.

    2007-01-01

    Multiglove markets and corresponding games are considered. For this class of games we introduce the notion of type monotonic allocation scheme. Allocation rules for multiglove markets based on weight systems are introduced and characterized. These allocation rules generate type monotonic allocation schemes for multiglove games and are also helpful in proving that each core element of the corresponding game is extendable to a type monotonic allocation scheme. The T-value turns out to generate a type monotonic allocation scheme...

  17. Moduli and Characteristics of Monotonicity in Some Banach Lattices

    Directory of Open Access Journals (Sweden)

    Miroslav Krbec

    2010-01-01

    First the characteristic of monotonicity of any Banach lattice X is expressed in terms of the left limit of the modulus of monotonicity of X at the point 1. It is also shown that for Köthe spaces the classical characteristic of monotonicity is the same as the characteristic of monotonicity corresponding to another modulus of monotonicity δ^{m,E}. The characteristics of monotonicity of Orlicz function spaces and Orlicz sequence spaces equipped with the Luxemburg norm are calculated. In the first case the characteristic is expressed in terms of the generating Orlicz function only, but in the sequence case the formula is not so direct. Three examples show why such a direct formula is hardly possible in the sequence case. Some other auxiliary and complementary results are also presented. By the results of Betiuk-Pilarska and Prus (2008), which establish that Banach lattices X with ε_{0,m}(X) < 1 and the weak orthogonality property have the weak fixed point property, our results are related to fixed point theory (Kirk and Sims (2001)).

  18. A non-parametric test for partial monotonicity in multiple regression

    NARCIS (Netherlands)

    van Beek, M.; Daniëls, H.A.M.

    Partial positive (negative) monotonicity in a dataset is the property that an increase in an independent variable, ceteris paribus, generates an increase (decrease) in the dependent variable. A test for partial monotonicity in datasets could (1) increase model performance if monotonicity may be

  19. Testing manifest monotonicity using order-constrained statistical inference

    NARCIS (Netherlands)

    Tijmstra, J.; Hessen, D.J.; van der Heijden, P.G.M.; Sijtsma, K.

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores...

  20. Monotonicity-based electrical impedance tomography for lung imaging

    Science.gov (United States)

    Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun

    2018-04-01

    This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to the lung ventilation can be viewed as either semi-positive or semi-negative definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.

  1. Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods

    KAUST Repository

    Hundsdorfer, W.; Mozartova, A.; Spijker, M. N.

    2011-01-01

    In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many linear multistep methods of practical interest are included in the theory...

  2. Normal estimation for pointcloud using GPU based sparse tensor voting

    OpenAIRE

    Liu, Ming; Pomerleau, François; Colas, Francis; Siegwart, Roland

    2012-01-01

    Normal estimation is the basis for most applications using pointcloud, such as segmentation. However, it is still a challenging problem regarding computational complexity and observation noise. In this paper, we propose a normal estimation method for pointcloud using results from tensor voting. Compared with other approaches, we show it has smaller estimation error. Moreover, by varying the voting kernel size, we find it is a flexible approach for structure extraction...

  3. A Survey on Operator Monotonicity, Operator Convexity, and Operator Means

    Directory of Open Access Journals (Sweden)

    Pattrawut Chansangiam

    2015-01-01

    This paper is an exposition devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. Various characterizations for such functions are given from the viewpoint of differential analysis in terms of matrices of divided differences. From the viewpoint of operator inequalities, various characterizations and the relationship between operator monotonicity and operator convexity are given by Hansen and Pedersen. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory asserts the correspondence between operator monotone functions and operator means.

  4. Logarithmically completely monotonic functions involving the Generalized Gamma Function

    OpenAIRE

    Faton Merovci; Valmir Krasniqi

    2010-01-01

    By a simple approach, two classes of functions involving the generalized Euler gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.

  5. Monotonic Loading of Circular Surface Footings on Clay

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Barari, Amin

    2011-01-01

    Appropriate modeling of offshore foundations under monotonic loading is a significant challenge in geotechnical engineering. This paper reports experimental and numerical analyses, specifically investigating the response of circular surface footings during monotonic loading and elastoplastic behavior during reloading. By using the findings presented in this paper, it is possible to extend the model to simulate the vertical-load displacement response of offshore bucket foundations.

  6. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.

  7. Proofs with monotone cuts

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2012-01-01

    Roč. 58, č. 3 (2012), s. 177-187 ISSN 0942-5616 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional support: RVO:67985840 Keywords : proof complexity * monotone sequent calculus Subject RIV: BA - General Mathematics Impact factor: 0.376, year: 2012 http://onlinelibrary.wiley.com/doi/10.1002/malq.201020071/full

  8. Logarithmically completely monotonic functions involving the Generalized Gamma Function

    Directory of Open Access Journals (Sweden)

    Faton Merovci

    2010-12-01

    By a simple approach, two classes of functions involving the generalized Euler gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.

  9. Stability of dynamical systems on the role of monotonic and non-monotonic Lyapunov functions

    CERN Document Server

    Michel, Anthony N; Liu, Derong

    2015-01-01

    The second edition of this textbook provides a single source for the analysis of system models represented by continuous-time and discrete-time, finite-dimensional and infinite-dimensional, and continuous and discontinuous dynamical systems. For these system models, it presents results which comprise the classical Lyapunov stability theory involving monotonic Lyapunov functions, as well as corresponding contemporary stability results involving non-monotonic Lyapunov functions. Specific examples from several diverse areas are given to demonstrate the applicability of the developed theory to many important classes of systems, including digital control systems, nonlinear regulator systems, pulse-width-modulated feedback control systems, and artificial neural networks. The authors cover the following four general topics: representation and modeling of dynamical systems of the types described above; presentation of Lyapunov and Lagrange stability theory for dynamical systems...

  10. Obliquely Propagating Non-Monotonic Double Layer in a Hot Magnetized Plasma

    International Nuclear Information System (INIS)

    Kim, T.H.; Kim, S.S.; Hwang, J.H.; Kim, H.Y.

    2005-01-01

    An obliquely propagating non-monotonic double layer is investigated in a hot magnetized plasma, which consists of a positively charged hot ion fluid and trapped, as well as free, electrons. A model equation (modified Korteweg-de Vries equation) is derived by the usual reductive perturbation method from a set of basic hydrodynamic equations. A time-stationary obliquely propagating non-monotonic double layer solution is obtained in a hot magnetized plasma. This solution is an analytic extension of the monotonic double layer and the solitary hole. The effects of obliqueness, external magnetic field and ion temperature on the properties of the non-monotonic double layer are discussed.

  11. POLARIZED LINE FORMATION IN NON-MONOTONIC VELOCITY FIELDS

    Energy Technology Data Exchange (ETDEWEB)

    Sampoorna, M.; Nagendra, K. N., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, Bengaluru 560034 (India)

    2016-12-10

    For a correct interpretation of the observed spectro-polarimetric data from astrophysical objects such as the Sun, it is necessary to solve the polarized line transfer problems taking into account a realistic temperature structure, the dynamical state of the atmosphere, a realistic scattering mechanism (namely, the partial frequency redistribution—PRD), and the magnetic fields. In a recent paper, we studied the effects of monotonic vertical velocity fields on linearly polarized line profiles formed in isothermal atmospheres with and without magnetic fields. However, in general the velocity fields that prevail in dynamical atmospheres of astrophysical objects are non-monotonic. Stellar atmospheres with shocks, multi-component supernova atmospheres, and various kinds of wave motions in solar and stellar atmospheres are examples of non-monotonic velocity fields. Here we present studies on the effect of non-relativistic non-monotonic vertical velocity fields on the linearly polarized line profiles formed in semi-empirical atmospheres. We consider a two-level atom model and PRD scattering mechanism. We solve the polarized transfer equation in the comoving frame (CMF) of the fluid using a polarized accelerated lambda iteration method that has been appropriately modified for the problem at hand. We present numerical tests to validate the CMF method and also discuss the accuracy and numerical instabilities associated with it.

  12. Monotonicity of social welfare optima

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter Raahave

    2010-01-01

    This paper considers the problem of maximizing social welfare subject to participation constraints. It is shown that for an income allocation method that maximizes a social welfare function there is a monotonic relationship between the incomes allocated to individual agents in a given coalition...

  13. Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks

    Directory of Open Access Journals (Sweden)

    Heng-you Lan

    2013-01-01

    We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operators associated with relatively A-maximal m-relaxed monotone operators and new generalized Yosida approximations based on the relatively A-maximal m-relaxed monotonicity framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions on (relatively) maximal monotone mappings in Hilbert as well as Banach spaces and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.

  14. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply.

  15. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply.

  16. Effect of dynamic monotonic and cyclic loading on fracture behavior for Japanese carbon steel pipe STS410

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki [and others]

    1997-04-01

    The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during a seismic event. The tensile test and the fracture toughness test were conducted for base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch.120, were employed for quasi-static monotonic, dynamic monotonic and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled by applying the fully reversed load (R=-1). The pipe specimens with a circumferential through-wall crack were subjected to four-point bending load at 300°C in air. Japanese STS410 carbon steel pipe material was found to have high toughness under dynamic loading conditions through the CT fracture toughness test. Based on the results of the pipe fracture tests, the maximum moment to pipe fracture under dynamic monotonic and cyclic loading conditions could be estimated by the plastic collapse criterion, and the effect of dynamic monotonic and cyclic loading on the maximum moment to pipe fracture of the STS410 carbon steel pipe was small. The STS410 carbon steel pipe seemed to be less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.

  17. Effect of dynamic monotonic and cyclic loading on fracture behavior for Japanese carbon steel pipe STS410

    International Nuclear Information System (INIS)

    Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki

    1997-01-01

    The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during a seismic event. The tensile test and the fracture toughness test were conducted for base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch.120, were employed for quasi-static monotonic, dynamic monotonic and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled by applying the fully reversed load (R=-1). The pipe specimens with a circumferential through-wall crack were subjected to four-point bending load at 300°C in air. Japanese STS410 carbon steel pipe material was found to have high toughness under dynamic loading conditions through the CT fracture toughness test. Based on the results of the pipe fracture tests, the maximum moment to pipe fracture under dynamic monotonic and cyclic loading conditions could be estimated by the plastic collapse criterion, and the effect of dynamic monotonic and cyclic loading on the maximum moment to pipe fracture of the STS410 carbon steel pipe was small. The STS410 carbon steel pipe seemed to be less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.

  18. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption.

  19. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption.

  20. Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI

    DEFF Research Database (Denmark)

    Nunes, Daniel; Cruz, Tomás L; Jespersen, Sune N

    2017-01-01

    available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures – such as axons and extra-axonal spaces, which we here used in a simple model for the microstructure – and that, for axons parallel to the main magnetic field... When the quantitative results are compared against ground-truth histology, they seem to reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). As well, the extra-axonal fraction can be estimated. The results suggest that our model is oversimplified, yet at the same time evidencing...

  1. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    Science.gov (United States)

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  2. Information flow in layered networks of non-monotonic units

    International Nuclear Information System (INIS)

    Neves, Fabio Schittler; Schubert, Benno Martim; Erichsen, Rubem Jr

    2015-01-01

    Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information. (paper)

  3. Information flow in layered networks of non-monotonic units

    Science.gov (United States)

    Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.

    2015-07-01

    Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.

  4. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.
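
    The consequence described in point (1) can be reproduced with a few lines of simulation (a minimal sketch, assuming a log-normal population and a modest sample size; all values are illustrative):

        # Estimating a 95th percentile under a correct vs. incorrect
        # distributional assumption when the data are log-normal.
        import numpy as np

        rng = np.random.default_rng(1)
        z = 1.645                            # one-sided 95th-percentile multiplier
        true_p95 = np.exp(0.0 + z * 0.5)     # true 95th pct of lognormal(0, 0.5)

        est_norm, est_lognorm = [], []
        for _ in range(2000):
            x = rng.lognormal(mean=0.0, sigma=0.5, size=50)
            est_norm.append(x.mean() + z * x.std(ddof=1))               # wrong model
            lx = np.log(x)
            est_lognorm.append(np.exp(lx.mean() + z * lx.std(ddof=1)))  # right model

        print(f"true 95th percentile: {true_p95:.3f}")
        print(f"assuming normality:    mean estimate {np.mean(est_norm):.3f}")
        print(f"assuming lognormality: mean estimate {np.mean(est_lognorm):.3f}")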

  5. Optimization of nonlinear, non-Gaussian Bayesian filtering for diagnosis and prognosis of monotonic degradation processes

    Science.gov (United States)

    Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.

    2018-05-01

    The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.

  6. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  7. Iterates of piecewise monotone mappings on an interval

    CERN Document Server

    Preston, Chris

    1988-01-01

    Piecewise monotone mappings on an interval provide simple examples of discrete dynamical systems whose behaviour can be very complicated. These notes are concerned with the properties of the iterates of such mappings. The material presented can be understood by anyone who has had a basic course in (one-dimensional) real analysis. The account concentrates on the topological (as opposed to the measure theoretical) aspects of the theory of piecewise monotone mappings. As well as offering an elementary introduction to this theory, these notes also contain a more advanced treatment of the problem of classifying such mappings up to topological conjugacy.

  8. Risk-Sensitive Control with Near Monotone Cost

    International Nuclear Information System (INIS)

    Biswas, Anup; Borkar, V. S.; Suresh Kumar, K.

    2010-01-01

    The infinite horizon risk-sensitive control problem for non-degenerate controlled diffusions is analyzed under a 'near monotonicity' condition on the running cost that penalizes large excursions of the process.

  9. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.

  10. Failure mechanisms of closed-cell aluminum foam under monotonic and cyclic loading

    International Nuclear Information System (INIS)

    Amsterdam, E.; De Hosson, J.Th.M.; Onck, P.R.

    2006-01-01

    This paper concentrates on the differences in failure mechanisms of Alporas closed-cell aluminum foam under either monotonic or cyclic loading. The emphasis lies on aspects of crack nucleation and crack propagation in relation to the microstructure. The cell wall material consists of Al dendrites and an interdendritic network of Al₄Ca and Al₂₂CaTi₂ precipitates. In situ scanning electron microscopy monotonic tensile tests were performed on small samples to study crack nucleation and propagation. Digital image correlation was employed to map the strain in the cell wall on the characteristic microstructural length scale. Monotonic tensile tests and tension-tension fatigue tests were performed on larger samples to observe the overall fracture behavior and crack path in monotonic and cyclic loading. The crack nucleation and propagation path in both loading conditions are revealed and it can be concluded that during monotonic tension cracks nucleate in and propagate partly through the Al₄Ca interdendritic network, whereas under cyclic loading cracks nucleate and propagate through the Al dendrites.

  11. Completely monotonic functions related to logarithmic derivatives of entire functions

    DEFF Research Database (Denmark)

    Pedersen, Henrik Laurberg

    2011-01-01

    The logarithmic derivative l(x) of an entire function of genus p and having only non-positive zeros is represented in terms of a Stieltjes function. As a consequence, $(-1)^p (x^m l(x))^{(m+p)}$ is a completely monotonic function for all m ≥ 0. This generalizes earlier results on complete monotonicity of functions related to Euler's psi-function. Applications to Barnes' multiple gamma functions are given.

  12. Monotone numerical methods for finite-state mean-field games

    KAUST Repository

    Gomes, Diogo A.; Saude, Joao

    2017-01-01

    Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in an MFG modeling the paradigm-shift problem.

  13. Monotone numerical methods for finite-state mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2017-04-29

    Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in an MFG modeling the paradigm-shift problem.

  14. A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data

    Science.gov (United States)

    Sang, Yan-Fang; Sun, Fubao; Singh, Vijay P.; Xie, Ping; Sun, Jian

    2018-01-01

    The hydroclimatic process is changing non-monotonically and identifying its trends is a great challenge. Building on the discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach using two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961 to 2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, and the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than 30 years or so (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann-Kendall test. Comparatively, the DWS approach overcame the problem and detected those significant non-monotonic trends at 380 stations, which helped understand and interpret the spatiotemporal variability in the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the DWS approach proposed has the potential for wide use in the hydrological and climate sciences.
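
    For orientation, the decomposition the DWS approach builds on can be sketched with a generic discrete wavelet transform: dropping the detail scales and reconstructing from the approximation coefficients yields a (possibly non-monotonic) trend component. This is only the decomposition step, not the paper's significance test; the wavelet, level, and synthetic series are arbitrary choices:

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.arange(53)                                    # e.g. years 1961-2013
        x = 0.01 * (t - 30.0) ** 2 / 3.0 + rng.normal(scale=0.5, size=t.size)

        coeffs = pywt.wavedec(x, 'db4', level=3)
        coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]  # discard detail scales
        trend = pywt.waverec(coeffs, 'db4')[:x.size]         # smooth, non-monotonic trend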

  15. In some symmetric spaces monotonicity properties can be reduced to the cone of rearrangements

    Czech Academy of Sciences Publication Activity Database

    Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav

    2016-01-01

    Roč. 90, č. 1 (2016), s. 249-261 ISSN 0001-9054 Institutional support: RVO:67985840 Keywords : symmetric spaces * K-monotone symmetric Banach spaces * strict monotonicity * lower local uniform monotonicity Subject RIV: BA - General Mathematics Impact factor: 0.826, year: 2016 http://link.springer.com/article/10.1007%2Fs00010-015-0379-6

  16. Alternans by non-monotonic conduction velocity restitution, bistability and memory

    International Nuclear Information System (INIS)

    Kim, Tae Yun; Hong, Jin Hee; Heo, Ryoun; Lee, Kyoung J

    2013-01-01

    Conduction velocity (CV) restitution is a key property that characterizes any medium supporting traveling waves. It reflects not only the dynamics of the individual constituents but also the coupling mechanism that mediates their interaction. Recent studies have suggested that cardiac tissues, which have a non-monotonic CV-restitution property, can support alternans, a period-2 oscillatory response of periodically paced cardiac tissue. This study finds that single-hump, non-monotonic, CV-restitution curves are a common feature of in vitro cultures of rat cardiac cells. We also find that the Fenton–Karma model, one of the well-established mathematical models of cardiac tissue, supports a very similar non-monotonic CV restitution in a physiologically relevant parameter regime. Surprisingly, the mathematical model as well as the cell cultures support bistability and show cardiac memory that tends to work against the generation of an alternans. Bistability was realized by adopting two different stimulation protocols, ‘S1S2’, which produces a period-1 wave train, and ‘alternans-pacing’, which favors a concordant alternans. Thus, we conclude that the single-hump non-monotonicity in the CV-restitution curve is not sufficient to guarantee a cardiac alternans, since cardiac memory interferes and the way the system is paced matters. (paper)

  17. Log-supermodularity of weight functions and the loading monotonicity of weighted insurance premiums

    OpenAIRE

    Hristo S. Sendov; Ying Wang; Ricardas Zitikis

    2010-01-01

    The paper is motivated by a problem concerning the monotonicity of insurance premiums with respect to their loading parameter: the larger the parameter, the larger the insurance premium is expected to be. This property, usually called loading monotonicity, is satisfied by premiums that appear in the literature. The increased interest in constructing new insurance premiums has raised a question as to what weight functions would produce loading-monotonic premiums. In this paper we demonstrate a...

  18. Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks

    Science.gov (United States)

    Anseán, David; Otero, José; Couso, Inés

    2017-01-01

    A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open circuit voltage curve. This last model is implemented through monotonic neural networks and estimates the over-potentials arising from the evolution in time of the lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, this being particularly true when the charge or discharge currents are between moderate and high. The accuracy of the neural model has been compared to different alternatives, including data-driven statistical models, first principle-based models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well established first-principle models. The algorithms have been validated with automotive LiFePO4 cells. PMID:29267219

  19. Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks

    Directory of Open Access Journals (Sweden)

    Luciano Sánchez

    2017-12-01

    Full Text Available A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open circuit voltage curve. This last model is implemented through monotonic neural networks and estimates the over-potentials arising from the evolution in time of the lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, this being particularly true when the charge or discharge currents are between moderate and high. The accuracy of the neural model has been compared to different alternatives, including data-driven statistical models, first principle-based models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well established first-principle models. The algorithms have been validated with automotive LiFePO4 cells.

  20. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  1. A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data

    Directory of Open Access Journals (Sweden)

    Y.-F. Sang

    2018-01-01

    Full Text Available The hydroclimatic process is changing non-monotonically and identifying its trends is a great challenge. Building on the discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach using two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961 to 2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, and the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than 30 years or so (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann–Kendall test. Comparatively, the DWS approach overcame the problem and detected those significant non-monotonic trends at 380 stations, which helped understand and interpret the spatiotemporal variability in the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the DWS approach proposed has the potential for wide use in the hydrological and climate sciences.

  2. A System of Generalized Variational Inclusions Involving a New Monotone Mapping in Banach Spaces

    Directory of Open Access Journals (Sweden)

    Jinlin Guan

    2013-01-01

    Full Text Available We introduce a new monotone mapping in Banach spaces, which is an extension of the -monotone mapping studied by Nazemi (2012), and we generalize the variational inclusion involving the -monotone mapping. Based on the new monotone mapping, we propose a new proximal mapping which combines the proximal mapping studied by Nazemi (2012) with the mapping studied by Lan et al. (2011) and show its Lipschitz continuity. Based on the new proximal mapping, we give an iterative algorithm. Furthermore, we prove the convergence of iterative sequences generated by the algorithm under some appropriate conditions. Our results improve and extend corresponding ones announced by many others.

  3. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  4. A simple algorithm for computing positively weighted straight skeletons of monotone polygons☆

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376

  5. A simple algorithm for computing positively weighted straight skeletons of monotone polygons.

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.

  6. Modelling Embedded Systems by Non-Monotonic Refinement

    NARCIS (Netherlands)

    Mader, Angelika H.; Marincic, J.; Wupper, H.

    2008-01-01

    This paper addresses the process of modelling embedded sys- tems for formal verification. We propose a modelling process built on non-monotonic refinement and a number of guidelines. The outcome of the modelling process is a model, together with a correctness argument that justifies our modelling

  7. An analysis of the stability and monotonicity of a kind of control models

    Directory of Open Access Journals (Sweden)

    LU Yifa

    2013-06-01

    Full Text Available The stability and monotonicity of control systems with parameters are considered. By the iterative relationship of the coefficients of characteristic polynomials and the Mathematica software, some sufficient conditions for the monotonicity and stability of systems are given.

  8. CFD simulation of simultaneous monotonic cooling and surface heat transfer coefficient

    International Nuclear Information System (INIS)

    Mihálka, Peter; Matiašovský, Peter

    2016-01-01

    The monotonic heating regime method for the determination of thermal diffusivity is based on the analysis of an unsteady-state (stabilised) thermal process characterised by an independence of the space-time temperature distribution on initial conditions. In the first kind of monotonic regime, a sample of simple geometry is heated or cooled at constant ambient temperature. The determination of thermal diffusivity requires the determination of the rate of temperature change and the simultaneous determination of the first eigenvalue. According to a characteristic equation, the first eigenvalue is a function of the Biot number, defined by the surface heat transfer coefficient and the thermal conductivity of the analysed material. Knowing the surface heat transfer coefficient and the first eigenvalue, the thermal conductivity can be determined. The surface heat transfer coefficient during the monotonic regime can be determined by the continuous measurement of long-wave radiation heat flow and the photoelectric measurement of the air refractive index gradient in a boundary layer. A CFD simulation of the cooling process was carried out to analyse the local convective and radiative heat transfer coefficients in more detail. The influence of ambient air flow was analysed. The obtained eigenvalues and the corresponding surface heat transfer coefficient values make it possible to determine the thermal conductivity of the analysed specimen together with its thermal diffusivity during a monotonic heating regime.
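
    The eigenvalue step described above reduces, for a plane slab of half-thickness L, to solving the classical characteristic equation μ·tan(μ) = Bi for its first root. A sketch with hypothetical material and cooling-rate values follows; the relations are the standard regular-regime ones for a slab, not this paper's CFD model:

        import numpy as np
        from scipy.optimize import brentq

        h, k, L = 12.0, 0.8, 0.02      # surface coeff. W/(m^2 K), conductivity W/(m K), half-thickness m
        Bi = h * L / k                 # Biot number
        mu1 = brentq(lambda mu: mu * np.tan(mu) - Bi, 1e-6, np.pi / 2 - 1e-6)

        m = 2.4e-4                     # measured cooling rate of ln(temperature excess), 1/s (hypothetical)
        alpha = m * L**2 / mu1**2      # thermal diffusivity, m^2/s
        print(f"Bi = {Bi:.3f}, mu1 = {mu1:.4f}, alpha = {alpha:.3e} m^2/s")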

  9. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is solved by penalizing the likelihood function. In the Bayesian framework, it amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test.
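
    A minimal sketch of the resulting penalized EM update for a one-dimensional normal mixture follows. With an inverted-gamma prior IG(a, b) on each component variance, the M-step variance update gains the terms 2b and 2(a+1), which keep every variance strictly positive; the prior hyperparameters here are illustrative, not those tested in the record:

        import numpy as np
        from scipy.stats import norm

        def penalized_em(x, K=2, a=2.0, b=0.5, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            w = np.full(K, 1.0 / K)
            mu = rng.choice(x, K, replace=False)
            var = np.full(K, x.var())
            for _ in range(iters):
                # E-step: responsibilities r[k, i]
                dens = np.array([wk * norm.pdf(x, m, np.sqrt(v))
                                 for wk, m, v in zip(w, mu, var)])
                r = dens / dens.sum(axis=0)
                nk = r.sum(axis=1)
                # M-step: the IG(a, b) prior regularizes the variance update
                w = nk / len(x)
                mu = (r @ x) / nk
                var = np.array([(ri @ (x - m) ** 2 + 2 * b) / (n + 2 * (a + 1))
                                for ri, m, n in zip(r, mu, nk)])
            return w, mu, var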

  10. Effects of Different LiDAR Intensity Normalization Methods on Scotch Pine Forest Leaf Area Index Estimation

    Directory of Open Access Journals (Sweden)

    YOU Haotian

    2018-02-01

    Full Text Available The intensity data of airborne light detection and ranging (LiDAR) are affected by many factors during the acquisition process. Effective quantification and normalization of the effect of each factor is of great significance for the normalization and application of LiDAR intensity data. In this paper, the LiDAR data were normalized for range, for angle of incidence, and for both range and angle of incidence based on the radar equation. Two metrics, the canopy intensity sum and the ratio of intensity, were then extracted and used to estimate forest LAI, with the aim of quantifying the effects of intensity normalization on forest LAI estimation. It was found that range intensity normalization could improve the accuracy of forest LAI estimation, whereas angle-of-incidence normalization did not improve the accuracy and in fact made the results worse. Although normalizing the intensity data for both range and incidence angle could improve the accuracy, the improvement was smaller than that achieved by range normalization alone. Meanwhile, the differences between the forest LAI estimates from raw and normalized intensity data were relatively large for the canopy intensity sum metrics, but relatively small for the ratio-of-intensity metrics. These results demonstrate that the effects of intensity normalization on forest LAI estimation depend on the choice of affecting factor, and that the level of influence is closely related to the characteristics of the metrics used. The appropriate intensity normalization method should therefore be chosen according to the characteristics of the metrics used in future research, which would avoid wasted cost and the reduction of estimation accuracy caused by introducing inappropriate affecting factors into intensity normalization.
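
    The radar-equation corrections referred to above have a simple closed form: received intensity falls off with the squared range and with the cosine of the incidence angle, so normalization multiplies by (R/R_ref)² and divides by cos(θ). A hedged sketch follows; the reference range is a free choice, and the flags mirror the record's finding that the two corrections are better applied selectively:

        import numpy as np

        def normalize_intensity(I, rng_m, inc_rad, r_ref=1000.0,
                                use_range=True, use_angle=True):
            """Range / incidence-angle intensity normalization (radar-equation form)."""
            out = np.asarray(I, dtype=float).copy()
            if use_range:
                out *= (rng_m / r_ref) ** 2      # remove 1/R^2 falloff
            if use_angle:
                out /= np.cos(inc_rad)           # remove incidence-angle effect
            return out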

  11. An Examination of Cooper's Test for Monotonic Trend

    Science.gov (United States)

    Hsu, Louis

    1977-01-01

    A statistic for testing monotonic trend that has been presented in the literature is shown not to be the binomial random variable it is contended to be, but rather it is linearly related to Kendall's tau statistic. (JKS)

  12. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity.

    Science.gov (United States)

    Liu, Feng; Walters, Stephen J; Julious, Steven A

    2017-10-02

    It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can then be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial for GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax model and the NDLM model, and both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose response curves: linear, Emax model, U-shaped model, and flat response. It is shown that the NDLM method is flexible and can handle a wide variety of dose-responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with higher probability of selecting ED90 and smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response follows a placebo-like curve, an Emax-like curve, or log
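
    For reference, the (non-Bayesian) Emax curve underlying the planned analysis is E(d) = E0 + Emax·d/(ED50 + d), for which the dose achieving 90% of the maximum effect is ED90 = 9·ED50. A sketch fitted by ordinary nonlinear least squares to hypothetical dose-group means; the Bayesian machinery and the NDLM alternative are not reproduced here:

        import numpy as np
        from scipy.optimize import curve_fit

        def emax(d, e0, emax_, ed50):
            return e0 + emax_ * d / (ed50 + d)

        dose = np.array([0.0, 5.0, 25.0, 100.0, 300.0])
        resp = np.array([1.1, 2.0, 3.4, 4.1, 4.4])       # hypothetical group means
        (e0, emax_hat, ed50), _ = curve_fit(emax, dose, resp, p0=[1.0, 3.0, 20.0])
        print(f"E0={e0:.2f}, Emax={emax_hat:.2f}, ED50={ed50:.1f}, ED90={9 * ed50:.1f}")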

  13. Using exogenous variables in testing for monotonic trends in hydrologic time series

    Science.gov (United States)

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
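
    The two-stage procedure critiqued above is easy to state in code (a synthetic-data sketch; the improved adjusted variable Kendall test itself is not reproduced here):

        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(0)
        t = np.arange(40)                        # time index, e.g. years
        x = rng.normal(size=40)                  # exogenous variable
        y = 0.8 * x + 0.03 * t + rng.normal(scale=0.5, size=40)

        b1, b0 = np.polyfit(x, y, 1)             # stage 1: regress on the exogenous variable
        resid = y - (b0 + b1 * x)
        tau, p = kendalltau(t, resid)            # stage 2: Kendall test on the residuals
        print(f"tau = {tau:.3f}, p = {p:.4f}")   # trend magnitude tends to be understated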

  14. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos; Ketcheson, David I.

    2014-01-01

    ...Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger.

  15. A note on monotone real circuits

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel; Pudlák, Pavel

    2018-01-01

    Roč. 131, March (2018), s. 15-19 ISSN 0020-0190 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords: computational complexity * monotone real circuit * Karchmer-Wigderson game Subject RIV: BA - General Mathematics OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.748, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020019017301965?via%3Dihub

  17. Interval Routing and Minor-Monotone Graph Parameters

    NARCIS (Netherlands)

    Bakker, E.M.; Bodlaender, H.L.; Tan, R.B.; Leeuwen, J. van

    2006-01-01

    We survey a number of minor-monotone graph parameters and their relationship to the complexity of routing on graphs. In particular we compare the interval routing parameters κslir(G) and κsir(G) with Colin de Verdière's graph invariant μ(G) and its variants λ(G) and κ(G). We show that for all the

  18. A Hybrid Approach to Proving Memory Reference Monotonicity

    KAUST Repository

    Oancea, Cosmin E.

    2013-01-01

    Array references indexed by non-linear expressions or subscript arrays represent a major obstacle to compiler analysis and to automatic parallelization. Most previous proposed solutions either enhance the static analysis repertoire to recognize more patterns, to infer array-value properties, and to refine the mathematical support, or apply expensive run time analysis of memory reference traces to disambiguate these accesses. This paper presents an automated solution based on static construction of access summaries, in which the reference non-linearity problem can be solved for a large number of reference patterns by extracting arbitrarily-shaped predicates that can (in)validate the reference monotonicity property and thus (dis)prove loop independence. Experiments on six benchmarks show that our general technique for dynamic validation of the monotonicity property can cover a large class of codes, incurs minimal run-time overhead and obtains good speedups. © 2013 Springer-Verlag.

  19. Monotonous consumption of fibre-enriched bread at breakfast increases satiety and influences subsequent food intake.

    Science.gov (United States)

    Touyarou, Peio; Sulmont-Rossé, Claire; Gagnaire, Aude; Issanchou, Sylvie; Brondel, Laurent

    2012-04-01

    This study aimed to observe the influence of the monotonous consumption of two types of fibre-enriched bread at breakfast on hedonic liking for the bread, subsequent hunger and energy intake. Two groups of unrestrained normal-weight participants were given either white sandwich bread (WS) or multigrain sandwich bread (MG) at breakfast (the sensory properties of the WS were more similar to the usual bread eaten by the participants than those of the MG). In each group, two 15-day cross-over conditions were set up. During the experimental condition, the usual breakfast of each participant was replaced by an isocaloric portion of plain bread (WS or MG). During the control condition, participants consumed only 10 g of the corresponding bread and completed their breakfast with other foods they wanted. The results showed that bread appreciation did not change over exposure, even in the experimental condition. Hunger was lower in the experimental condition than in the control condition. In the experimental condition, consumption of WS decreased energy intake relative to the corresponding control condition, whereas consumption of MG did not. In conclusion, a monotonous breakfast composed solely of a fibre-enriched bread may decrease subsequent hunger and, when similar to a familiar bread, food intake. Copyright © 2011. Published by Elsevier Ltd.

  20. Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods

    KAUST Repository

    Mozartova, A.; Savostianov, I.; Hundsdorfer, W.

    2015-01-01

    © 2014 Elsevier B.V. All rights reserved. One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.

  1. Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods

    KAUST Repository

    Mozartova, A.

    2015-05-01

    © 2014 Elsevier B.V. All rights reserved. One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.

  2. The regularized monotonicity method: detecting irregular indefinite inclusions

    DEFF Research Database (Denmark)

    Garde, Henrik; Staboulis, Stratos

    2018-01-01

    ...inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...

  3. A note on monotonicity of item response functions for ordered polytomous item response theory models.

    Science.gov (United States)

    Kang, Hyeon-Ah; Su, Ya-Hui; Chang, Hua-Hua

    2018-03-08

    A monotone relationship between a true score (τ) and a latent trait level (θ) has been a key assumption for many psychometric applications. The monotonicity property in dichotomous response models is evident as a result of a transformation via a test characteristic curve. Monotonicity in polytomous models, in contrast, is not immediately obvious because item response functions are determined by a set of response category curves, which are conceivably non-monotonic in θ. The purpose of the present note is to demonstrate strict monotonicity in ordered polytomous item response models. Five models that are widely used in operational assessments are considered for proof: the generalized partial credit model (Muraki, 1992, Applied Psychological Measurement, 16, 159), the nominal model (Bock, 1972, Psychometrika, 37, 29), the partial credit model (Masters, 1982, Psychometrika, 47, 147), the rating scale model (Andrich, 1978, Psychometrika, 43, 561), and the graded response model (Samejima, 1972, A general model for free-response data (Psychometric Monograph no. 18). Psychometric Society, Richmond). The study asserts that the item response functions in these models strictly increase in θ and thus there exists strict monotonicity between τ and θ under certain specified conditions. This conclusion validates the practice of customarily using τ in place of θ in applied settings and provides theoretical grounds for one-to-one transformations between the two scales. © 2018 The British Psychological Society.

  4. Non-monotone positive solutions of second-order linear differential equations: existence, nonexistence and criteria

    Directory of Open Access Journals (Sweden)

    Mervan Pašić

    2016-10-01

    Full Text Available We study non-monotone positive solutions of the second-order linear differential equations $(p(t)x')' + q(t)x = e(t)$, with positive $p(t)$ and $q(t)$. For the first time, some criteria as well as the existence and nonexistence of non-monotone positive solutions are proved in the framework of some properties of solutions $\theta(t)$ of the corresponding integrable linear equation $(p(t)\theta')' = e(t)$. The main results are illustrated by many examples dealing with equations which allow exact non-monotone positive solutions, not necessarily periodic. Finally, we pose some open questions.

  5. New concurrent iterative methods with monotonic convergence

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Qingchuan [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    This paper proposes new concurrent iterative methods, which use no derivatives, for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. The corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application to symmetric eigenproblems.
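
    For context only, the flavor of a derivative-free concurrent iteration can be seen in the classical Durand-Kerner method, which refines all zeros simultaneously without derivatives (a familiar reference point, not the monotonic method proposed in this record):

        import numpy as np

        def durand_kerner(coeffs, iters=100):
            p = np.poly1d(coeffs)
            n = len(coeffs) - 1
            z = np.array([0.4 + 0.9j]) ** np.arange(1, n + 1)   # standard starting points
            for _ in range(iters):
                for i in range(n):
                    others = np.prod(z[i] - np.delete(z, i))
                    z[i] -= p(z[i]) / (coeffs[0] * others)      # no derivatives used
            return z

        print(np.sort_complex(durand_kerner([1, -6, 11, -6])))  # roots of x^3-6x^2+11x-6: 1, 2, 3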

  6. The Bird Core for Minimum Cost Spanning Tree problems Revisited : Monotonicity and Additivity Aspects

    NARCIS (Netherlands)

    Tijs, S.H.; Moretti, S.; Brânzei, R.; Norde, H.W.

    2005-01-01

    A new way is presented to define for minimum cost spanning tree (mcst-) games the irreducible core, which is introduced by Bird in 1976.The Bird core correspondence turns out to have interesting monotonicity and additivity properties and each stable cost monotonic allocation rule for mcst-problems

  7. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos

    2014-05-19

    We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.

  8. Thermal effects on the enhanced ductility in non-monotonic uniaxial tension of DP780 steel sheet

    Science.gov (United States)

    Majidi, Omid; Barlat, Frederic; Korkolis, Yannis P.; Fu, Jiawei; Lee, Myoung-Gyu

    2016-11-01

    To understand the material behavior during non-monotonic loading, uniaxial tension tests were conducted in three modes, namely monotonic loading, loading with periodic relaxation, and periodic loading-unloading-reloading, at different strain rates (0.001/s to 0.01/s). In this study, the temperature gradient developing during each test and its contribution to increasing the apparent ductility of DP780 steel sheets were considered. In order to assess the influence of temperature, isothermal uniaxial tension tests were also performed at three temperatures (298 K, 313 K and 328 K (25 °C, 40 °C and 55 °C)). A digital image correlation system coupled with infrared thermography was used in the experiments. The results show that the non-monotonic loading modes increased the apparent ductility of the specimens. It was observed that, compared with monotonic loading, the temperature gradient became more uniform when a non-monotonic loading was applied.

  9. Reduction theorems for weighted integral inequalities on the cone of monotone functions

    International Nuclear Information System (INIS)

    Gogatishvili, A; Stepanov, V D

    2013-01-01

    This paper surveys results related to the reduction of integral inequalities involving positive operators in weighted Lebesgue spaces on the real semi-axis and valid on the cone of monotone functions, to certain more easily manageable inequalities valid on the cone of non-negative functions. The case of monotone operators is new. As an application, a complete characterization for all possible integrability parameters is obtained for a number of Volterra operators. Bibliography: 118 titles

  10. Partial coherence with application to the monotonicity problem of coherence involving skew information

    Science.gov (United States)

    Luo, Shunlong; Sun, Yuan

    2017-08-01

    Quantifications of coherence have been intensively studied in the context of completely decoherent operations (i.e., von Neumann measurements, or equivalently, orthonormal bases) in recent years. Here we investigate partial coherence (i.e., coherence in the context of partially decoherent operations such as Lüders measurements). A bona fide measure of partial coherence is introduced. As an application, we address the monotonicity problem of K-coherence (a quantifier for coherence in terms of Wigner-Yanase skew information) [Girolami, Phys. Rev. Lett. 113, 170401 (2014), 10.1103/PhysRevLett.113.170401], which was introduced to realize a measure of coherence as axiomatized by Baumgratz, Cramer, and Plenio [Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. Since K-coherence fails to meet the necessary requirement of monotonicity under incoherent operations, it is desirable to remedy this monotonicity problem. We show that if we modify the original measure by taking skew information with respect to the spectral decomposition of an observable, rather than the observable itself, as a measure of coherence, then the problem disappears, and the resultant coherence measure satisfies monotonicity. Some concrete examples are discussed and related open issues are indicated.

  11. Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method

    KAUST Repository

    Higueras, Inmaculada

    2018-02-14

    Perturbed Runge–Kutta methods (also referred to as downwind Runge–Kutta methods) can guarantee monotonicity preservation under larger step sizes relative to their traditional Runge–Kutta counterparts. In this paper we study the question of how to optimally perturb a given method in order to increase the radius of absolute monotonicity (a.m.). We prove that for methods with zero radius of a.m., it is always possible to give a perturbation with positive radius. We first study methods for linear problems and then methods for nonlinear problems. In each case, we prove upper bounds on the radius of a.m., and provide algorithms to compute optimal perturbations. We also provide optimal perturbations for many known methods.

  12. Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method

    KAUST Repository

    Higueras, Inmaculada; Ketcheson, David I.; Kocsis, Tihamé r A.

    2018-01-01

    Perturbed Runge–Kutta methods (also referred to as downwind Runge–Kutta methods) can guarantee monotonicity preservation under larger step sizes relative to their traditional Runge–Kutta counterparts. In this paper we study the question of how to optimally perturb a given method in order to increase the radius of absolute monotonicity (a.m.). We prove that for methods with zero radius of a.m., it is always possible to give a perturbation with positive radius. We first study methods for linear problems and then methods for nonlinear problems. In each case, we prove upper bounds on the radius of a.m., and provide algorithms to compute optimal perturbations. We also provide optimal perturbations for many known methods.

  13. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  14. Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    Directory of Open Access Journals (Sweden)

    Namyong Kim

    2016-06-01

    Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, which is estimated recursively to reduce its computational complexity. The proposed algorithm yields a lower minimum MSE (mean squared error) and faster convergence speed simultaneously than the original MEE algorithm does in the equalization simulation. On the condition of the same convergence speed, its performance enhancement in steady-state MSE is above 3 dB.
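
    The normalization idea can be sketched independently of the entropy machinery: the raw step size is divided by a recursively estimated power term, in the style of NLMS adaptation (the kernel-based MEE gradient itself is omitted; the forgetting factor and floor are illustrative):

        import numpy as np

        def normalized_steps(inputs, mu0=0.5, lam=0.95, eps=1e-6):
            """Yield a power-normalized step size for each input vector."""
            p = 0.0
            for u in inputs:
                p = lam * p + (1.0 - lam) * float(np.dot(u, u))  # recursive power estimate
                yield mu0 / (p + eps)                            # normalized step size

        demo = np.random.default_rng(0).normal(size=(5, 8))
        print(list(normalized_steps(demo)))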

  15. Global Attractivity Results for Mixed-Monotone Mappings in Partially Ordered Complete Metric Spaces

    Directory of Open Access Journals (Sweden)

    Kalabušić S

    2009-01-01

    Full Text Available We prove fixed point theorems for mixed-monotone mappings in partially ordered complete metric spaces which satisfy a weaker contraction condition than the classical Banach contraction condition for all points that are related by given ordering. We also give a global attractivity result for all solutions of the difference equation , where satisfies mixed-monotone conditions with respect to the given ordering.

  16. Asymptotic normality of kernel estimator of $\\psi$-regression function for functional ergodic data

    OpenAIRE

    Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader

    2016-01-01

    In this paper we consider the problem of the estimation of the $\\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.

  17. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...

  18. Pathwise duals of monotone and additive Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    -, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf

  19. The relation between majorization theory and quantum information from entanglement monotones perspective

    Energy Technology Data Exchange (ETDEWEB)

    Erol, V. [Department of Computer Engineering, Institute of Science, Okan University, Istanbul (Turkey); Netas Telecommunication Inc., Istanbul (Turkey)

    2016-04-21

    Entanglement has been studied extensively for understanding the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known monotones for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. The study of these monotones has been a hot topic in quantum information [1-7] in order to understand the role of entanglement in this discipline. It can be observed that from any arbitrary quantum pure state a mixed state can be obtained. A natural generalization of this observation would be to consider local operations and classical communication (LOCC) transformations between general pure states of two parties. Although this question is a little more difficult, a complete solution has been developed using the mathematical framework of majorization theory [8]. In this work, we analyze the relation between the entanglement monotones concurrence and negativity with respect to majorization for general two-level quantum systems of two particles.

  20. Cytotoxicity of binary mixtures of human pharmaceuticals in a fish cell line: approaches for non-monotonic concentration-response relationships.

    Science.gov (United States)

    Bain, Peter A; Kumar, Anupama

    2014-08-01

    Predicting the effects of mixtures of environmental micropollutants is a priority research area. In this study, the cytotoxicity of ten pharmaceuticals to the rainbow trout cell line RTG-2 was determined using the neutral red uptake assay. Fluoxetine (FL), propranolol (PPN), and diclofenac (DCF) were selected for further study as binary mixtures. Biphasic concentration-response relationships were observed in cells exposed to FL and PPN. In the case of PPN, microscopic examination revealed lysosomal swelling indicative of direct uptake and accumulation of the compound. Three equations describing non-monotonic concentration-response relationships were evaluated and one was found to consistently provide more accurate estimates of the median and 10% effect concentrations compared with a sigmoidal concentration-response model. Predictive modeling of the effects of binary mixtures of FL, PPN, and DCF was undertaken using an implementation of the concentration addition (CA) conceptual model incorporating non-monotonic concentration-response relationships. The cytotoxicity of the all three binary combinations could be adequately predicted using CA, suggesting that the toxic mode of action in RTG-2 cells is unrelated to the therapeutic mode of action of these compounds. The approach presented here is widely applicable to the study of mixture toxicity in cases where non-monotonic concentration-response relationships are observed. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  1. Generalized monotonicity from global minimization in fourth-order ODEs

    NARCIS (Netherlands)

    M.A. Peletier (Mark)

    2000-01-01

    We consider solutions of the stationary Extended Fisher-Kolmogorov equation with general potential that are global minimizers of an associated variational problem. We present results that relate the global minimization property to a generalized concept of monotonicity of the solutions.

  2. Monotone methods for solving a boundary value problem of second order discrete system

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Ming

    1999-01-01

    Full Text Available A new concept of a pair of upper and lower solutions is introduced for a boundary value problem of second order discrete system. A comparison result is given. An existence theorem for a solution is established in terms of upper and lower solutions. A monotone iterative scheme is proposed, and the monotone convergence rate of the iteration is compared and analyzed. The numerical results are given.
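
    A toy version of a monotone iteration conveys the idea (a minimal sketch for the discrete two-point problem -u'' = exp(u), u(0) = u(1) = 0, where u = 0 is a lower solution; the record's scheme and comparison results are more general):

        import numpy as np

        n = 50
        h = 1.0 / (n + 1)
        A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1))            # tridiagonal -u'' stencil

        u = np.zeros(n)                                # start from the lower solution
        for k in range(40):
            u_new = np.linalg.solve(A, h**2 * np.exp(u))
            assert np.all(u_new >= u - 1e-12)          # the sweeps increase monotonically
            done = np.max(np.abs(u_new - u)) < 1e-10
            u = u_new
            if done:
                break
        print(f"converged in {k + 1} sweeps, max u = {u.max():.5f}")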

  3. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity

    Directory of Open Access Journals (Sweden)

    Feng Liu

    2017-10-01

    Full Text Available Abstract Background It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can then be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. Methods The planned analysis for the Phase 2a trial for GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax model and the NDLM model, and both models are evaluated using simulation in the context of an adaptive Phase 2a PoC design under a variety of assumed dose response curves: linear, Emax model, U-shaped model, and flat response. Results It is shown that the NDLM method is flexible and can handle a wide variety of dose-responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with higher probability of selecting ED90 and smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response

  4. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. In order to demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, and the statistical relation factor R2 and the average deviation. The results show that the CCSNM was the best of the normalization methods considered for estimating the effect of the trainer.
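    The record does not spell out the CCSNM formula itself, but two of the baseline normalizations it is compared against are standard and easy to state; a minimal sketch (with made-up EMG feature values) is given below.

        import numpy as np

        def min_max_normalize(x, lo=0.0, hi=1.0):
            # Minimum-maximum normalization: linear rescaling into [lo, hi].
            x = np.asarray(x, dtype=float)
            return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

        def z_score_normalize(x):
            # z-score normalization: zero mean, unit sample standard deviation.
            x = np.asarray(x, dtype=float)
            return (x - x.mean()) / x.std(ddof=1)

        emg = np.array([12.1, 15.3, 9.8, 20.4, 11.7])  # hypothetical EMG features
        print(min_max_normalize(emg))
        print(z_score_normalize(emg))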

  5. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.

  6. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model can fit the data well and can perform better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
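    Once a two-component normal mixture has been fitted, the VaR is the relevant quantile of the mixture cdf and the CVaR follows from the closed-form partial expectations of the normal components. A minimal sketch with hypothetical fitted parameters (not the FBMKLCI estimates):

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        def mixture_var_cvar(weights, mus, sigmas, alpha=0.05):
            # Left-tail convention: VaR is the alpha-quantile of the return
            # distribution; CVaR is the conditional mean return below the VaR,
            # using E[X; X <= q] = mu*Phi(z) - sigma*phi(z) per component.
            weights, mus, sigmas = map(np.asarray, (weights, mus, sigmas))
            cdf = lambda x: np.sum(weights * norm.cdf(x, mus, sigmas))
            var = brentq(lambda x: cdf(x) - alpha, -10.0, 10.0)
            z = (var - mus) / sigmas
            cvar = np.sum(weights * (mus * norm.cdf(z) - sigmas * norm.pdf(z))) / alpha
            return var, cvar

        # Hypothetical fit: a calm regime and a turbulent regime
        print(mixture_var_cvar([0.8, 0.2], [0.01, -0.02], [0.03, 0.10], alpha=0.05))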

  7. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    Science.gov (United States)

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π_1(·), …, π_k(·), each known up to a normalizing constant, i.e. for l = 1, …, k, π_l(·) = ν_l(·)/m_l, where ν_l(·) is a known function and m_l is an unknown constant. For each l, we have an iid sample from π_l(·), and the problem is to estimate the ratios m_l/m_s for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the π_l's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
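    The basic identity behind such ratio estimates is that the expectation under π_s of ν_l(X)/ν_s(X) equals m_l/m_s, so averaging the ratio of unnormalized densities over a sample from π_s estimates the ratio of normalizing constants. The regeneration-based standard errors of the record are beyond a short sketch, but the point estimate can be illustrated with a toy pair of densities (iid draws standing in for the Markov chain output):

        import numpy as np

        rng = np.random.default_rng(0)

        # Unnormalized densities: nu1 is an unnormalized N(0, 1),
        # nu2 an unnormalized N(0, 2^2); the true ratio m1/m2 is 1/2.
        nu1 = lambda x: np.exp(-0.5 * x ** 2)
        nu2 = lambda x: np.exp(-0.5 * (x / 2.0) ** 2)

        # Sample from pi2 (sampled directly here; in the paper's setting the
        # draws would come from a Markov chain targeting pi2).
        x = rng.normal(0.0, 2.0, size=100_000)

        # E_{pi2}[ nu1(X) / nu2(X) ] = m1 / m2
        print(np.mean(nu1(x) / nu2(x)))  # should be close to 0.5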

  8. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    Science.gov (United States)

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
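    The pseudolikelihood being maximized is the sum, over samples and units, of the log conditional probability of each unit given the rest; for ±1 spins this reduces to log-sigmoid terms in the local fields. A small sketch of the objective follows (maximizing it, e.g. by a gradient method, gives the MPLE; the weights and data below are made up):

        import numpy as np

        def log_pseudolikelihood(W, b, X):
            # Fully visible Boltzmann machine with spins in {-1, +1}:
            # p(x_j = s | x_-j) = sigmoid(2 s (Wx + b)_j), with W symmetric
            # and zero-diagonal. log sigmoid(z) = -logaddexp(0, -z), for stability.
            A = X @ W + b                      # local fields, shape (n, d)
            return np.sum(-np.logaddexp(0.0, -2.0 * X * A))

        rng = np.random.default_rng(1)
        W = np.array([[0.0, 0.5, -0.3],
                      [0.5, 0.0, 0.2],
                      [-0.3, 0.2, 0.0]])
        b = np.array([0.1, -0.1, 0.0])
        X = rng.choice([-1.0, 1.0], size=(10, 3))  # hypothetical observations
        print(log_pseudolikelihood(W, b, X))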

  9. Theoretical and experimental study of non-monotonous effects

    International Nuclear Information System (INIS)

    Delforge, J.

    1977-01-01

    In recent years, the study of the effects of low dose rates has expanded considerably, especially in connection with current problems concerning the environment and health physics. After a precise definition of the different types of non-monotonous effect which may be encountered, the main known experimental results are indicated for each, as well as the principal consequences which may be expected. One example is the case of radiotherapy, where there is a chance of finding irradiation conditions such that the ratio of destructive action on malignant cells to healthy cells is significantly improved. In the second part of the report, the appearance of these phenomena, especially at low dose rates, is explained. For this purpose, the theory of transformation systems of P. Delattre is used as a theoretical framework. With the help of a specific example, it is shown that non-monotonous effects are frequently encountered, especially when the overall effect observed is actually the sum of several different elementary effects (e.g. in survival curves, where death may be due to several different causes), or when the objects studied possess inherent kinetics not limited to restoration phenomena alone (e.g. the cellular cycle). [fr]

  10. EVALUATION OF METHODS FOR ESTIMATING FATIGUE PROPERTIES APPLIED TO STAINLESS STEELS AND ALUMINUM ALLOYS

    Directory of Open Access Journals (Sweden)

    Taylor Mac Intyer Fonseca Junior

    2013-12-01

    Full Text Available This work evaluates seven estimation methods for fatigue properties applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimations obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior during monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.

  11. A locally adaptive normal distribution

    DEFF Research Database (Denmark)

    Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren

    2016-01-01

    The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to replace this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density… entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models…

  12. How do people learn from negative evidence? Non-monotonic generalizations and sampling assumptions in inductive reasoning.

    Science.gov (United States)

    Voorspoels, Wouter; Navarro, Daniel J; Perfors, Amy; Ransom, Keith; Storms, Gert

    2015-09-01

    A robust finding in category-based induction tasks is for positive observations to raise the willingness to generalize to other categories while negative observations lower the willingness to generalize. This pattern is referred to as monotonic generalization. Across three experiments we find systematic non-monotonicity effects, in which negative observations raise the willingness to generalize. Experiments 1 and 2 show that this effect emerges in hierarchically structured domains when a negative observation from a different category is added to a positive observation. They also demonstrate that this is related to a specific kind of shift in the reasoner's hypothesis space. Experiment 3 shows that the effect depends on the assumptions that the reasoner makes about how inductive arguments are constructed. Non-monotonic reasoning occurs when people believe the facts were put together by a helpful communicator, but monotonicity is restored when they believe the observations were sampled randomly from the environment. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
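    The percentile-estimate method amounts to taking the central portion of the empirical distribution with no distributional assumption; a minimal sketch follows (the hematocrit values are made up, not the trout data):

        import numpy as np

        def percentile_reference_range(values, coverage=0.95):
            # Nonparametric normal range: the central `coverage` fraction of
            # the observed values, with no assumption on the distribution.
            tail = 100.0 * (1.0 - coverage) / 2.0
            return np.percentile(values, [tail, 100.0 - tail])

        hct = np.array([32.1, 35.4, 33.8, 30.9, 36.2, 34.5, 31.7, 33.2, 35.0, 32.8])
        print(percentile_reference_range(hct))  # 2.5th and 97.5th percentiles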

  14. Monte Carlo comparison of four normality tests using different entropy estimates

    Czech Academy of Sciences Publication Activity Database

    Esteban, M. D.; Castellanos, M. E.; Morales, D.; Vajda, Igor

    2001-01-01

    Roč. 30, č. 4 (2001), s. 761-785 ISSN 0361-0918 R&D Projects: GA ČR GA102/99/1137 Institutional research plan: CEZ:AV0Z1075907 Keywords : test of normality * entropy test and entropy estimator * table of critical values Subject RIV: BD - Theory of Information Impact factor: 0.153, year: 2001

  15. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  16. Monotone matrix transformations defined by the group inverse and simultaneous diagonalizability

    International Nuclear Information System (INIS)

    Bogdanov, I I; Guterman, A E

    2007-01-01

    Bijective linear transformations of the matrix algebra over an arbitrary field that preserve simultaneous diagonalizability are characterized. This result is used for the characterization of bijective linear monotone transformations. Bibliography: 28 titles.

  17. Scaling laws for dislocation microstructures in monotonic and cyclic deformation of fcc metals

    International Nuclear Information System (INIS)

    Kubin, L.P.; Sauzay, M.

    2011-01-01

    This work reviews and critically discusses the current understanding of two scaling laws, which are ubiquitous in the modeling of monotonic plastic deformation in face-centered cubic metals. A compilation of the available data allows extending the domain of application of these scaling laws to cyclic deformation. The strengthening relation states that the flow stress is proportional to the square root of the average dislocation density, whereas the similitude relation assumes that the flow stress is inversely proportional to the characteristic wavelength of dislocation patterns. The strengthening relation arises from short-range reactions of non-coplanar segments and applies all through the first three stages of the monotonic stress vs. strain curves. The value of the proportionality coefficient is calculated and simulated in good agreement with the bulk of experimental measurements published since the beginning of the 1960s. The physical origin of what is called similitude is not understood and the related coefficient is not predictable. Its value is determined from a review of the experimental literature. The generalization of these scaling laws to cyclic deformation is carried out on the basis of a large collection of experimental results on single and polycrystals of various materials and on different microstructures. Surprisingly, for persistent slip bands (PSBs), both the strengthening and similitude coefficients appear to be more than two times smaller than the corresponding monotonic values, whereas their ratio is the same as in monotonic deformation. The similitude relation is also checked in cell structures and in labyrinth structures. Under low cyclic stresses, the strengthening coefficient is found to be even lower than in PSBs. A tentative explanation is proposed for the differences observed between cyclic and monotonic deformation. Finally, the influence of cross-slip on the temperature dependence of the saturation stress of PSBs is discussed in some detail.
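    The strengthening relation described above is commonly written τ = α·μ·b·√ρ, with α the proportionality coefficient, μ the shear modulus, b the Burgers vector magnitude and ρ the average dislocation density. A one-line numerical sketch follows; the constants are typical order-of-magnitude values for copper, inserted here as assumptions rather than taken from the record.

        import numpy as np

        def taylor_flow_stress(rho, alpha=0.3, mu=42e9, b=0.256e-9):
            # Strengthening relation: resolved flow stress (Pa) proportional
            # to the square root of the dislocation density rho (m^-2).
            return alpha * mu * b * np.sqrt(rho)

        for rho in (1e12, 1e14, 1e16):
            print(f"rho = {rho:.0e} m^-2 -> tau = {taylor_flow_stress(rho) / 1e6:.1f} MPa")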

  18. Monotonicity properties of keff with shape change and with nesting

    International Nuclear Information System (INIS)

    Arzhanov, V.

    2002-01-01

    It was found that, contrary to expectations based on physical intuition, keff can both increase and decrease when changing the shape of an initially regular critical system, while preserving its volume. Physical intuition would only allow for a decrease of keff when the surface/volume ratio increases. The unexpected behaviour of increasing keff was found through numerical investigation. For a convincing demonstration of the possibility of the non-monotonic behaviour, a simple geometrical proof was constructed. This latter proof, in turn, is based on the assumption that keff can only increase (or stay constant) in the case of nesting, i.e. when adding extra volume to a system. Since we found no formal proof of the nesting theorem for the general case, we close the paper with a simple formal proof of the monotonic behaviour of keff under nesting.

  19. Monotone difference schemes for weakly coupled elliptic and parabolic systems

    NARCIS (Netherlands)

    P. Matus (Piotr); F.J. Gaspar Lorenz (Franscisco); L. M. Hieu (Le Minh); V.T.K. Tuyen (Vo Thi Kim)

    2017-01-01

    The present paper is devoted to the development of the theory of monotone difference schemes, approximating the so-called weakly coupled system of linear elliptic and quasilinear parabolic equations. Similarly to the scalar case, the canonical form of the vector-difference schemes is…

  20. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    Science.gov (United States)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
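    The structure of MBH, and the role played by the hop distribution, can be sketched in a few lines: perturb the incumbent with a heavy-tailed step, run a local optimization, and accept only improvements, so the incumbent objective is monotonically non-increasing. This is a generic sketch on a made-up multimodal test function, not the trajectory-optimization code of the record.

        import numpy as np
        from scipy.optimize import minimize

        def mbh(f, x0, n_hops=200, scale=0.1, rng=None):
            # Monotonic basin hopping: Cauchy perturbations give mostly small
            # steps with occasional long jumps; only improving hops are kept.
            rng = rng or np.random.default_rng()
            best = minimize(f, x0).x
            best_val = f(best)
            for _ in range(n_hops):
                trial = best + scale * rng.standard_cauchy(size=best.shape)
                res = minimize(f, trial)   # local search from the perturbed point
                if res.fun < best_val:
                    best, best_val = res.x, res.fun
            return best, best_val

        f = lambda x: np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
        print(mbh(f, np.array([3.0, -2.0]), rng=np.random.default_rng(42)))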

  1. Monotone Comparative Statics for the Industry Composition

    DEFF Research Database (Denmark)

    Laugesen, Anders Rosenstand; Bache, Peter Arendorf

    2015-01-01

    We let heterogeneous firms face decisions on a number of complementary activities in a monopolistically-competitive industry. The endogenous level of competition and selection regarding entry and exit of firms introduces a wedge between monotone comparative statics (MCS) at the firm level and MCS for the industry composition. The latter phenomenon is defined as first-order stochastic dominance shifts in the equilibrium distributions of all activities across active firms. We provide sufficient conditions for MCS at both levels of analysis and show that we may have either type of MCS without the other…

  2. An electronic implementation for Liao's chaotic delayed neuron model with non-monotonous activation function

    International Nuclear Information System (INIS)

    Duan Shukai; Liao Xiaofeng

    2007-01-01

    A new chaotic delayed neuron model with a non-monotonously increasing transfer function, called the chaotic Liao delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods in circuit design, especially for circuits with a time-delay unit and a non-monotonously increasing activation unit, are also considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.

  3. Sampling from a Discrete Distribution While Preserving Monotonicity.

    Science.gov (United States)

    1982-02-01

    …in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and E[X] comparisons on average, which may prove… limitations that deserve attention: (a) in general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method… uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new, having been…
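    The inverse transform method the record refers to is easy to state: with F the cumulative distribution, set X = min{k : F(k) ≥ U}, which is a monotone map from U to X. A vectorized sketch (the pmf is made up):

        import numpy as np

        def inverse_transform_sample(p, u):
            # X = min{k : F(k) >= U}; unlike the alias method, this map from
            # U to X is monotone non-decreasing.
            cdf = np.cumsum(p)
            return np.searchsorted(cdf, u, side="left")

        rng = np.random.default_rng(7)
        p = np.array([0.2, 0.5, 0.1, 0.2])   # hypothetical pmf on {0, 1, 2, 3}
        print(inverse_transform_sample(p, rng.random(10)))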

  4. Martensitic Transformation in Ultrafine-Grained Stainless Steel AISI 304L Under Monotonic and Cyclic Loading

    Directory of Open Access Journals (Sweden)

    Heinz Werner Höppel

    2012-02-01

    Full Text Available The monotonic and cyclic deformation behavior of ultrafine-grained metastable austenitic steel AISI 304L, produced by severe plastic deformation, was investigated. Under monotonic loading, the martensitic phase transformation in the ultrafine-grained state is strongly favored. Under cyclic loading, the martensitic transformation behavior is similar to the coarse-grained condition, but the cyclic stress response is three times larger for the ultrafine-grained condition.

  5. Existence, uniqueness, monotonicity and asymptotic behaviour of travelling waves for epidemic models

    International Nuclear Information System (INIS)

    Hsu, Cheng-Hsiung; Yang, Tzi-Sheng

    2013-01-01

    The purpose of this work is to investigate the existence, uniqueness, monotonicity and asymptotic behaviour of travelling wave solutions for a general epidemic model arising from the spread of an epidemic by oral–faecal transmission. First, we apply Schauder's fixed point theorem combined with a supersolution and subsolution pair to derive the existence of positive monotone monostable travelling wave solutions. Then, applying Ikehara's theorem, we determine the exponential rates of travelling wave solutions which converge to two different equilibria as the moving coordinate tends to positive infinity and negative infinity, respectively. Finally, using the sliding method, we prove the uniqueness result provided the travelling wave solutions satisfy some boundedness conditions. (paper)

  6. Radiographic heart-volume estimation in normal cats

    International Nuclear Information System (INIS)

    Ahlberg, N.E.; Hansson, K.; Svensson, L.; Iwarsson, K.

    1989-01-01

    Heart volume mensuration on conventional radiographs was evaluated in eight normal cats in different body positions using computed tomography (CT). Heart volumes were calculated from orthogonal thoracic radiographs in ventral and dorsal recumbency and from radiographs exposed with a vertical X-ray beam in dorsal and lateral recumbency using the formula for an ellipsoid body. Heart volumes were also estimated with CT in ventral, dorsal, right lateral and left lateral recumbency. No differences between heart volumes from CT in ventral recumbency and those from CT in right and left lateral recumbency were seen. In dorsal recumbency, however, significantly lower heart volumes were obtained. Heart volumes from CT in ventral recumbency were similar to those from radiographs in ventral and dorsal recumbency and dorsal/left lateral recumbency. Close correlation was also demonstrated between heart volumes from radiographs in dorsal/left lateral recumbency and body weights of the eight cats.

  7. Positivity and monotonicity properties of C0-semigroups. Pt. 1

    International Nuclear Information System (INIS)

    Bratteli, O.; Kishimoto, A.; Robinson, D.W.

    1980-01-01

    If exp(-tH), exp(-tK) are self-adjoint, positivity-preserving contraction semigroups on a Hilbert space H = L^2(X; dμ), we write exp(-tH) >= exp(-tK) >= 0 (*) whenever exp(-tH) - exp(-tK) is positivity preserving for all t >= 0, and we then characterize the class of positive functions for which (*) always implies exp(-tf(H)) >= exp(-tf(K)) >= 0. This class consists of the f ∈ C^∞(0, ∞) with (-1)^n f^(n+1)(x) >= 0 for x ∈ (0, ∞), n = 0, 1, 2, …; in particular it contains the class of monotone operator functions. Furthermore, if exp(-tH) is L^p(X; dμ)-contractive for all p ∈ [1, ∞] and all t > 0 (or, equivalently, for p = ∞ and t > 0), then exp(-tf(H)) has the same property. Various applications to monotonicity properties of Green's functions are given. (orig.)

  8. Non-monotonic effect of growth temperature on carrier collection in SnS solar cells

    International Nuclear Information System (INIS)

    Chakraborty, R.; Steinmann, V.; Mangan, N. M.; Brandt, R. E.; Poindexter, J. R.; Jaramillo, R.; Mailoa, J. P.; Hartman, K.; Polizzotti, A.; Buonassisi, T.; Yang, C.; Gordon, R. G.

    2015-01-01

    We quantify the effects of growth temperature on material and device properties of thermally evaporated SnS thin-films and test structures. Grain size, Hall mobility, and majority-carrier concentration monotonically increase with growth temperature. However, the charge collection as measured by the long-wavelength contribution to short-circuit current exhibits a non-monotonic behavior: the collection decreases with increased growth temperature from 150 °C to 240 °C and then recovers at 285 °C. Fits to the experimental internal quantum efficiency using an opto-electronic model indicate that the non-monotonic behavior of charge-carrier collection can be explained by a transition from drift- to diffusion-assisted components of carrier collection. The results show a promising increase in the extracted minority-carrier diffusion length at the highest growth temperature of 285 °C. These findings illustrate how coupled mechanisms can affect early stage device development, highlighting the critical role of direct materials property measurements and simulation

  9. The effect of the electrical double layer on hydrodynamic lubrication: a non-monotonic trend with increasing zeta potential

    Directory of Open Access Journals (Sweden)

    Dalei Jing

    2017-07-01

    Full Text Available In the present study, a modified Reynolds equation including the electrical double layer (EDL)-induced electroviscous effect of the lubricant is established to investigate the effect of the EDL on the hydrodynamic lubrication of a 1D slider bearing. The theoretical model is based on the nonlinear Poisson–Boltzmann equation without the use of the Debye–Hückel approximation. Furthermore, the variation in the bulk electrical conductivity of the lubricant under the influence of the EDL is also considered during the theoretical analysis of hydrodynamic lubrication. The results show that the EDL can increase the hydrodynamic load capacity of the lubricant in a 1D slider bearing. More importantly, the hydrodynamic load capacity of the lubricant under the influence of the EDL shows a non-monotonic trend, changing from enhancement to attenuation with a gradual increase in the absolute value of the zeta potential. This non-monotonic hydrodynamic lubrication is dependent on the non-monotonic electroviscous effect of the lubricant generated by the EDL, which is dominated by the non-monotonic electrical field strength and non-monotonic electrical body force on the lubricant. The subject of the paper is the theoretical modeling and the corresponding analysis.

  10. Modelling the drained response of bucket foundations for offshore wind turbines under general monotonic and cyclic loading

    DEFF Research Database (Denmark)

    Foglia, Aligi; Gottardi, Guido; Govoni, Laura

    2015-01-01

    The response of bucket foundations on sand subjected to planar monotonic and cyclic loading is investigated in the paper. Thirteen monotonic and cyclic laboratory tests on a skirted footing model having a 0.3 m diameter and embedment ratio equal to 1 are presented. The loading regime reproduces…

  11. Multigenerational contaminant exposures produce non-monotonic, transgenerational responses in Daphnia magna

    International Nuclear Information System (INIS)

    Kimberly, David A.; Salice, Christopher J.

    2015-01-01

    Generally, ecotoxicologists rely on short-term tests that assume populations to be static. Conversely, natural populations may be exposed to the same stressors for many generations, which can alter tolerance to the same (or other) stressors. The objective of this study was to improve our understanding of how multigenerational stressors alter life history traits and stressor tolerance. After continuously exposing Daphnia magna to cadmium for 120 days, we assessed life history traits and conducted a challenge at higher temperature and cadmium concentrations. Predictably, individuals exposed to cadmium showed an overall decrease in reproductive output compared to controls. Interestingly, control D. magna were the most cadmium tolerant to novel cadmium, followed by those exposed to high cadmium. Our data suggest that long-term exposure to cadmium alters tolerance traits in a non-monotonic way. Because we observed effects after one-generation removal from cadmium, transgenerational effects may be possible as a result of multigenerational exposure. - Highlights: • Daphnia magna exposed to cadmium for 120 days. • D. magna exposed to cadmium had decreased reproductive output. • Control D. magna were most cadmium tolerant to novel cadmium stress. • Long-term exposure to cadmium alters tolerance traits in a non-monotonic way. • Transgenerational effects observed as a result of multigenerational exposure. - Adverse effects of long-term cadmium exposure persist into cadmium-free conditions, as seen by non-monotonic responses when exposed to novel stress one generation removed.

  12. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    Full Text Available The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge of the sharp increase of the point cloud, which is mainly attributed to its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for the normal vector estimation of LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. On the premise of calculating the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points in the cloud. The experimental results of several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.
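    The record does not give the interpolation formulas, but the per-node step, estimating a normal from a local neighborhood, is commonly done by taking the least-variance direction of the neighborhood covariance (PCA). The sketch below shows that standard per-node computation, not the authors' bi-linear interpolation scheme:

        import numpy as np

        def pca_normal(points):
            # Normal of a local point neighborhood: the eigenvector of the
            # covariance matrix with the smallest eigenvalue.
            pts = np.asarray(points, dtype=float)
            centered = pts - pts.mean(axis=0)
            cov = centered.T @ centered / len(pts)
            eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
            return eigvecs[:, 0]                    # least-variance direction

        rng = np.random.default_rng(5)
        # Hypothetical noisy patch of the plane z = 0
        patch = np.column_stack([rng.uniform(-1, 1, 50),
                                 rng.uniform(-1, 1, 50),
                                 0.01 * rng.standard_normal(50)])
        print(pca_normal(patch))  # approximately (0, 0, +/-1)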

  13. Experimental quantum control landscapes: Inherent monotonicity and artificial structure

    International Nuclear Information System (INIS)

    Roslund, Jonathan; Rabitz, Herschel

    2009-01-01

    Unconstrained searches over quantum control landscapes are theoretically predicted to generally exhibit trap-free monotonic behavior. This paper makes an explicit experimental demonstration of this intrinsic monotonicity for two controlled quantum systems: frequency unfiltered and filtered second-harmonic generation (SHG). For unfiltered SHG, the landscape is randomly sampled and interpolation of the data is found to be devoid of landscape traps up to the level of data noise. In the case of narrow-band-filtered SHG, trajectories are taken on the landscape to reveal a lack of traps. Although the filtered SHG landscape is trap free, it exhibits a rich local structure. A perturbation analysis around the top of these landscapes provides a basis to understand their topology. Despite the inherent trap-free nature of the landscapes, practical constraints placed on the controls can lead to the appearance of artificial structure arising from the resultant forced sampling of the landscape. This circumstance and the likely lack of knowledge about the detailed local landscape structure in most quantum control applications suggests that the a priori identification of globally successful (un)constrained curvilinear control variables may be a challenging task.

  14. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    …to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented…

  15. Bias in regression coefficient estimates upon different treatments of ...

    African Journals Online (AJOL)

    MS and PW consistently overestimated the population parameter. EM and RI, on the other hand, tended to consistently underestimate the population parameter under a non-monotonic pattern. Keywords: Missing data, bias, regression, percent missing, non-normality, missing pattern

  16. Non-monotonic reasoning in conceptual modeling and ontology design: A proposal

    CSIR Research Space (South Africa)

    Casini, G

    2013-06-01

    Full Text Available 2nd International Workshop on Ontologies and Conceptual Modeling (Onto.Com 2013), Valencia, Spain, 17-21 June 2013. Non-monotonic reasoning in conceptual modeling and ontology design: A proposal, by Giovanni Casini and Alessandro Mosca.

  17. Monotonous braking of high energy hadrons in nuclear matter

    International Nuclear Information System (INIS)

    Strugalski, Z.

    1979-01-01

    Propagation of high energy hadrons in nuclear matter is discussed. The possibility of the existence of monotonous energy losses of hadrons in nuclear matter is considered. Experimental facts in favour of this hypothesis, such as data on pion-nucleus interactions (proton emission spectra, proton multiplicity distributions in these interactions) and other data, are presented. The investigated phenomenon is characterized in more detail within the framework of the hypothesis.

  18. The electronic structure of normal metal-superconductor bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Halterman, Klaus; Elson, J Merle [Sensor and Signal Sciences Division, Naval Air Warfare Center, China Lake, CA 93355 (United States)

    2003-09-03

    We study the electronic properties of ballistic thin normal metal-bulk superconductor heterojunctions by solving the Bogoliubov-de Gennes equations in the quasiclassical and microscopic 'exact' regimes. In particular, the significance of the proximity effect is examined through a series of self-consistent calculations of the space-dependent pair potential Δ(r). It is found that self-consistency cannot be neglected for normal metal layer widths smaller than the superconducting coherence length ξ0, revealing its importance through discernible features in the subgap density of states. Furthermore, the exact self-consistent treatment yields a proximity-induced gap in the normal metal spectrum, which vanishes monotonically when the normal metal length exceeds ξ0. Through a careful analysis of the excitation spectra, we find that quasiparticle trajectories with wavevectors oriented mainly along the interface play a critical role in the destruction of the energy gap.

  19. Quantisation of monotonic twist maps

    International Nuclear Information System (INIS)

    Boasman, P.A.; Smilansky, U.

    1993-08-01

    Using an approach suggested by Moser, classical Hamiltonians are generated that provide an interpolating flow to the stroboscopic motion of maps with a monotonic twist condition. The quantum properties of these Hamiltonians are then studied in analogy with recent work on the semiclassical quantization of systems based on Poincaré surfaces of section. For the generalized standard map, the correspondence with the usual classical and quantum results is shown, and the advantages of the quantum Moser Hamiltonian demonstrated. The same approach is then applied to the free motion of a particle on a 2-torus, and to the circle billiard. A natural quantization condition based on the eigenphases of the unitary time-development operator is applied, leaving the exact eigenvalues of the torus, but only the semiclassical eigenvalues for the billiard; an explanation for this failure is proposed. It is also seen how iterating the classical map commutes with the quantization. (authors)

  20. The influence of gas–solid reaction kinetics in models of thermochemical heat storage under monotonic and cyclic loading

    International Nuclear Information System (INIS)

    Nagel, T.; Shao, H.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O.

    2014-01-01

    Highlights: • Detailed analysis of cyclic and monotonic loading of thermochemical heat stores. • Fully coupled reactive heat and mass transport. • Reaction kinetics can be simplified in systems limited by heat transport. • Operating lines valid during monotonic and cyclic loading. • Local integral degree of conversion to capture heterogeneous material usage. - Abstract: Thermochemical reactions can be employed in heat storage devices. The choice of suitable reactive material pairs involves a thorough kinetic characterisation by, e.g., extensive thermogravimetric measurements. Before testing a material on a reactor level, simulations with models based on the Theory of Porous Media can be used to establish its suitability. The extent to which the accuracy of the kinetic model influences the results of such simulations is unknown yet fundamental to the validity of simulations based on chemical models of differing complexity. In this article we therefore compared simulation results on the reactor level based on an advanced kinetic characterisation of a calcium oxide/hydroxide system to those obtained by a simplified kinetic model. Since energy storage is often used for short term load buffering, the internal reactor behaviour is analysed under cyclic partial loading and unloading in addition to full monotonic charge/discharge operation. It was found that the predictions by both models were very similar qualitatively and quantitatively in terms of thermal power characteristics, conversion profiles, temperature output, reaction duration and pumping powers. Major differences were, however, observed for the reaction rate profiles themselves. We conclude that for systems not limited by kinetics the simplified model seems sufficient to estimate the reactor behaviour. The degree of material usage within the reactor was further shown to strongly vary under cyclic loading conditions and should be considered when designing systems for certain operating regimes

  1. On utilization bounds for a periodic resource under rate monotonic scheduling

    NARCIS (Netherlands)

    Renssen, van A.M.; Geuns, S.J.; Hausmans, J.P.H.M.; Poncin, W.; Bril, R.J.

    2009-01-01

    This paper revisits utilization bounds for a periodic resource under the rate monotonic (RM) scheduling algorithm. We show that the existing utilization bound, as presented in [8, 9], is optimistic. We subsequently show that by viewing the unavailability of the periodic resource as a deferrable…

  2. Monotonous property of non-oscillations of the damped Duffing's equation

    International Nuclear Information System (INIS)

    Feng Zhaosheng

    2006-01-01

    In this paper, we give a qualitative study of the damped Duffing equation by means of the qualitative theory of planar systems. Under certain parametric conditions, the monotonous property of the bounded non-oscillations is obtained. Explicit exact solutions are obtained by a direct method, and an application of this approach to a reaction-diffusion equation is presented.

  3. Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function

    Science.gov (United States)

    Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng

    2008-01-01

    The function 1/x^2 − e^(−x)/(1 − e^(−x))^2 for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) − e^((a−1)t)) for a…

  4. On monotonic solutions of an integral equation of Abel type

    International Nuclear Information System (INIS)

    Darwish, Mohamed Abdalla

    2007-08-01

    We present an existence theorem for monotonic solutions of a quadratic integral equation of Abel type in C[0, 1]. The famous Chandrasekhar integral equation is considered as a special case. The concept of measure of noncompactness and a fixed point theorem due to Darbo are the main tools in carrying out our proof. (author)

  5. Estimation of serum ferritin for normal subject living in Khartoum area

    International Nuclear Information System (INIS)

    Eltayeb, E.A; Khangi, F.A.; Satti, G.M.; Abu Salab, A.

    2003-01-01

    This study was conducted with a main objective: the estimation of the serum ferritin level in normal subjects in the Khartoum area. To fulfil this objective, two hundred and sixty symptom-free subjects were included in the study, 103 of them males, aged 15 to 45 years. Serum ferritin was determined by radioimmunoassay (RIA). It was found that the mean concentration of the males' serum ferritin was much higher than that of the females' (p<0.001). (Author)

  6. Logarithmically complete monotonicity of a function related to the Catalan-Qi function

    Directory of Open Access Journals (Sweden)

    Qi Feng

    2016-08-01

    Full Text Available In the paper, the authors find necessary and sufficient conditions such that a function related to the Catalan-Qi function, which is an alternative generalization of the Catalan numbers, is logarithmically completely monotonic.

  7. A note on profit maximization and monotonicity for inbound call centers

    NARCIS (Netherlands)

    Koole, G.M.; Pot, S.A.

    2011-01-01

    We consider an inbound call center with a fixed reward per call and communication and agent costs. By controlling the number of lines and the number of agents, we can maximize the profit. Abandonments are included in our performance model. Monotonicity results for the maximization problem are…

  8. A New Family of Consistent and Asymptotically-Normal Estimators for the Extremal Index

    Directory of Open Access Journals (Sweden)

    Jose Olmo

    2015-08-01

    Full Text Available The extremal index (θ) is the key parameter for extending extreme value theory results from i.i.d. to stationary sequences. One important property of this parameter is that its inverse determines the degree of clustering in the extremes. This article introduces a novel interpretation of the extremal index as a limiting probability characterized by two Poisson processes, and a simple family of estimators derived from this new characterization. Unlike most estimators for θ in the literature, this estimator is consistent, asymptotically normal and very stable across partitions of the sample. Further, we show in an extensive simulation study that this estimator outperforms in finite samples the logs, blocks and runs estimation methods. Finally, we apply this new estimator to test for clustering of extremes in monthly time series of unemployment growth and inflation rates and conclude that runs of large unemployment rates are more prolonged than periods of high inflation.
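    For comparison, the classical runs estimator mentioned in the simulation study can be sketched in a few lines: the extremal index is estimated as the fraction of threshold exceedances that terminate a cluster, i.e. that are followed by at least r consecutive non-exceedances. The series and tuning choices below are hypothetical, and edge-handling conventions vary between texts.

        import numpy as np

        def runs_estimator(x, u, r=5):
            # Runs estimator: (# exceedances followed by r non-exceedances)
            # divided by (# exceedances), over positions where the run fits.
            x = np.asarray(x)
            exceed = x > u
            n = len(x)
            cluster_ends = sum(
                exceed[i] and not exceed[i + 1:i + 1 + r].any()
                for i in range(n - r)
            )
            return cluster_ends / exceed[:n - r].sum()

        rng = np.random.default_rng(3)
        e = rng.standard_normal(10_000)
        x = np.empty_like(e)                 # hypothetical autocorrelated series
        x[0] = e[0]
        for t in range(1, len(e)):
            x[t] = 0.7 * x[t - 1] + e[t]
        print(runs_estimator(x, u=np.quantile(x, 0.95), r=5))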

  9. Structural analysis of reinforced concrete structures under monotonous and cyclic loadings: numerical aspects

    International Nuclear Information System (INIS)

    Lepretre, C.; Millard, A.; Nahas, G.

    1989-01-01

    The structural analysis of reinforced concrete structures is usually performed either by means of simplified methods of strength-of-materials type, i.e. global methods, or by means of detailed methods of continuum-mechanics type, i.e. local methods. For this second type, some constitutive models are available for concrete and rebars in a certain number of finite element systems. These models are often validated on simple homogeneous tests. Therefore, it is important to appraise the validity of the results when applying them to the analysis of a reinforced concrete structure, in order to be able to make correct predictions of the actual behaviour under normal and faulty conditions. For this purpose, some tests have been performed at I.N.S.A. de Lyon on reinforced concrete beams subjected to monotonous and cyclic loadings, in order to generate reference solutions to be compared with the numerical predictions given by two finite element systems: - CASTEM, developed by C.E.A./D.E.M.T. - ELEFINI, developed by I.N.S.A. de Lyon

  10. Multipartite entangled quantum states: Transformation, Entanglement monotones and Application

    Science.gov (United States)

    Cui, Wei

    Entanglement is one of the fundamental features of quantum information science. Though bipartite entanglement has been analyzed thoroughly in theory and shown to be an important resource in quantum computation and communication protocols, the theory of entanglement shared between more than two parties, which is called multipartite entanglement, is still not complete. Specifically, the classification of multipartite entanglement and the transformation property between different multipartite states by local operators and classical communications (LOCC) are two fundamental questions in the theory of multipartite entanglement. In this thesis, we present results related to the LOCC transformation between multipartite entangled states. Firstly, we investigate the bounds on the LOCC transformation probability between multipartite states, especially the GHZ class states. By analyzing the involvement of 3-tangle and other entanglement measures under weak two-outcome measurement, we derive explicit upper and lower bounds on the transformation probability between GHZ class states. After that, we also analyze the transformation between N-party W type states, which is a special class of multipartite entangled states that has an explicit unique expression and a set of analytical entanglement monotones. We present a necessary and sufficient condition for a known upper bound of transformation probability between two N-party W type states to be achieved. We also further investigate a novel entanglement transformation protocol, the random distillation, which transforms multipartite entanglement into bipartite entanglement shared by a non-deterministic pair of parties. We find upper bounds for the random distillation protocol for general N-party W type states and find the condition for the upper bounds to be achieved. What is surprising is that the upper bounds correspond to entanglement monotones that can be increased by Separable Operators (SEP), which gives the first set of…

  11. Characteristic of monotonicity of Orlicz function spaces equipped with the Orlicz norm

    Czech Academy of Sciences Publication Activity Database

    Foralewski, P.; Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav

    2013-01-01

    Roč. 53, č. 2 (2013), s. 421-432 ISSN 0373-8299 R&D Projects: GA ČR GAP201/10/1920 Institutional support: RVO:67985840 Keywords : Orlicz space * Köthe space * characteristic of monotonicity Subject RIV: BA - General Mathematics

  12. Hybrid Proximal-Point Methods for Zeros of Maximal Monotone Operators, Variational Inequalities and Mixed Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Kriengsak Wattanawitoon

    2011-01-01

    Full Text Available We prove strong and weak convergence theorems of modified hybrid proximal-point algorithms for finding a common element of the set of zeros of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solutions of the variational inequality for an inverse strongly monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced ones by Li and Song (2008) and many authors.

  13. An electronic implementation for Liao's chaotic delayed neuron model with non-monotonous activation function

    Energy Technology Data Exchange (ETDEWEB)

    Duan Shukai [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); School of Electronic and Information Engineering, Southwest University, Chongqing 400715 (China)], E-mail: duansk@swu.edu.cn; Liao Xiaofeng [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)], E-mail: xfliao@cqu.edu.cn

    2007-09-10

    A new chaotic delayed neuron model with a non-monotonously increasing transfer function, called the chaotic Liao delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods in circuit design, especially for circuits with a time-delay unit and a non-monotonously increasing activation unit, are also considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.

  14. Modeling non-monotonic properties under propositional argumentation

    Science.gov (United States)

    Wang, Geng; Lin, Zuoquan

    2013-03-01

    In the field of knowledge representation, argumentation is usually considered as an abstract framework for nonclassical logic. In this paper, however, we present a propositional argumentation framework, which can be used to more closely simulate real-world argumentation. We thereby argue that, under a dialectical argumentation game, we can allow non-monotonic reasoning even under classical logic. We introduce two methods together for gaining nonmonotonicity: one by giving plausibility to arguments, the other by adding "exceptions", which are similar to defaults. Furthermore, we give an alternative definition of propositional argumentation using argumentative models, which is highly related to the previous reasoning method, but with a simple algorithm for calculation.

  15. A Min-max Relation for Monotone Path Systems in Simple Regions

    DEFF Research Database (Denmark)

    Cameron, Kathleen

    1996-01-01

    A monotone path system (MPS) is a finite set of pairwise disjoint paths (polygonal arcs) in the plane such that every horizontal line intersects each of the paths in at most one point. We consider a simple polygon in the xy-plane which bounds the simple polygonal (closed) region D. Let T and B be two…

  16. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  17. The Monotonic Lagrangian Grid for Rapid Air-Traffic Evaluation

    Science.gov (United States)

    Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay

    2010-01-01

    The Air Traffic Monotonic Lagrangian Grid (ATMLG) is presented as a tool to evaluate new air traffic system concepts. The model, based on an algorithm called the Monotonic Lagrangian Grid (MLG), can quickly sort, track, and update positions of many aircraft, both on the ground (at airports) and in the air. The underlying data structure is based on the MLG, which is used for sorting and ordering positions and other data needed to describe N moving bodies and their interactions. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. Recent upgrades to ATMLG include adding blank place-holders within the MLG data structure, which makes it possible to dynamically change the MLG size and also improves the quality of the MLG grid. Additional upgrades include adding FAA flight plan data, such as way-points and arrival and departure times from the Enhanced Traffic Management System (ETMS), and combining the MLG with the state-of-the-art strategic and tactical conflict detection and resolution algorithms from the NASA-developed Stratway software. In this paper, we present results from our early efforts to couple ATMLG with the Stratway software, and we demonstrate that it can be used to quickly simulate air traffic flow for a very large ETMS dataset.

  18. Monotonicity of fitness landscapes and mutation rate control.

    Science.gov (United States)

    Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G

    2016-12-01

    A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on Ronald Fisher's work, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and find that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.

  19. Non-monotonic wetting behavior of chitosan films induced by silver nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Praxedes, A.P.P.; Webler, G.D.; Souza, S.T. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Ribeiro, A.S. [Instituto de Química e Biotecnologia, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Fonseca, E.J.S. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Oliveira, I.N. de, E-mail: italo@fis.ufal.br [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil)

    2016-05-01

    Highlights: • The addition of silver nanoparticles modifies the morphology of chitosan films. • Metallic nanoparticles can be used to control the wetting properties of chitosan films. • The contact angle shows a non-monotonic dependence on the silver concentration. - Abstract: The present work is devoted to the study of the structural and wetting properties of chitosan-based films containing silver nanoparticles. In particular, the effects of silver concentration on the morphology of chitosan films are characterized by different techniques, such as atomic force microscopy (AFM), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). By means of dynamic contact angle measurements, we study the modification of the surface properties of chitosan-based films due to the addition of silver nanoparticles. The results are analyzed in light of the molecular-kinetic theory, which describes wetting phenomena in terms of the statistical dynamics of the displacement of liquid molecules on a solid substrate. Our results show that the wetting properties of chitosan-based films are highly sensitive to the fraction of silver nanoparticles, with the equilibrium contact angle exhibiting a non-monotonic behavior.

  20. Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories

    OpenAIRE

    Wałęga, Przemysław Andrzej; Schultz, Carl; Bhatt, Mehul

    2016-01-01

    The systematic modelling of dynamic spatial systems is a key requirement in a wide range of application areas such as commonsense cognitive robotics, computer-aided architecture design, and dynamic geographic information systems. We present ASPMT(QS), a novel approach and fully-implemented prototype for non-monotonic spatial reasoning (a crucial requirement within dynamic spatial systems) based on Answer Set Programming Modulo Theories (ASPMT). ASPMT(QS) consists of a (qualitative) spatial re...

  1. Slope Estimation during Normal Walking Using a Shank-Mounted Inertial Sensor

    Directory of Open Access Journals (Sweden)

    Juan C. Álvarez

    2012-08-01

    Full Text Available In this paper we propose an approach for estimating the slope of the walking surface during normal walking, using a body-worn sensor composed of a biaxial accelerometer and a uniaxial gyroscope attached to the shank. It builds upon a state-of-the-art technique that was successfully used to estimate walking velocity from walking stride data, but did not work when used to estimate the slope of the walking surface. As claimed by the authors, the reason was that it did not take into account the actual inclination of the shank of the stance leg at the beginning of the stride (mid stance). In this paper, inspired by the biomechanical characteristics of human walking, we propose to solve this issue by using the accelerometer as a tilt sensor, assuming that at mid stance it measures only the gravity acceleration. Results from a set of experiments involving several users walking at different inclinations on a treadmill confirm the feasibility of our approach. A statistical analysis of the slope estimations shows, first, that the technique is capable of distinguishing the different slopes of the walking surface for every subject. It reports a global RMS error (per-unit difference between the actual and estimated inclination of the walking surface, over all strides identified in the experiments) of 0.05, which can be reduced to 0.03 with subject-specific calibration and post-processing by means of averaging techniques.
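
    The tilt-sensing step admits a one-line formula. A minimal sketch under the quasi-static assumption stated above (axis names and sensor layout are ours, purely illustrative):

```python
import numpy as np

def shank_inclination_deg(a_along, a_perp):
    """Shank tilt at mid stance from a biaxial accelerometer, assuming the
    sensor is quasi-static there and measures only gravity (units of g).
    a_along: axis aligned with the shank; a_perp: axis perpendicular to it."""
    return np.degrees(np.arctan2(a_perp, a_along))

print(shank_inclination_deg(0.998, 0.052))  # roughly 3 degrees of shank tilt
```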

  2. On the Monotonicity and Log-Convexity of a Four-Parameter Homogeneous Mean

    Directory of Open Access Journals (Sweden)

    Yang Zhen-Hang

    2008-01-01

    Full Text Available Abstract A four-parameter homogeneous mean is defined by another approach. Criteria for its monotonicity and logarithmic convexity are presented, and three refined chains of inequalities for two-parameter mean values are deduced, which contain many new and classical inequalities for means.

  3. Monotonic Set-Extended Prefix Rewriting and Verification of Recursive Ping-Pong Protocols

    DEFF Research Database (Denmark)

    Delzanno, Giorgio; Esparza, Javier; Srba, Jiri

    2006-01-01

    of messages) some verification problems become decidable. In particular we give an algorithm to decide control state reachability, a problem related to security properties like secrecy and authenticity. The proof is via a reduction to a new prefix rewriting model called Monotonic Set-extended Prefix rewriting...

  4. Necessary and sufficient conditions for a class of functions and their reciprocals to be logarithmically completely monotonic

    OpenAIRE

    Lv Yu-Pei; Sun Tian-Chuan; Chu Yu-Ming

    2011-01-01

    Abstract We prove that the function $F_{\alpha,\beta}(x) = x^{\alpha}\Gamma^{\beta}(x)/\Gamma(\beta x)$ is strictly logarithmically completely monotonic on $(0, \infty)$ if and only if $(\alpha, \beta) \in \{(\alpha, \beta) : \beta > 0, \beta \geq 2\alpha + 1, \beta \geq \alpha + 1\} \setminus \{(\alpha, \beta) : \alpha = 0, \beta = 1\}$ and that $[F_{\alpha,\beta}(x)]^{-1}$ is strictly logarithmically completely monotonic on $(0, \infty)$ if and only if $(\alpha, \beta ...

  5. Non-monotonic relationships between emotional arousal and memory for color and location.

    Science.gov (United States)

    Boywitt, C Dennis

    2015-01-01

    Recent research points to the decreased diagnostic value of subjective retrieval experience for memory accuracy for emotional stimuli. While for neutral stimuli rich recollective experiences are associated with better context memory than merely familiar memories, this association appears questionable for emotional stimuli. The present research tested the implicit assumption that the effect of emotional arousal on memory is monotonic, that is, steadily increasing (or decreasing) with increasing arousal. In two experiments, emotional arousal was manipulated in three steps using emotional pictures, and subjective retrieval experience as well as context memory were assessed. The results show an inverted U-shaped relationship between arousal and recognition memory, but for context memory and retrieval experience the relationship was more complex. For frame colour, context memory decreased linearly, while for spatial location it followed the inverted U-shaped function. The complex, non-monotonic relationships between arousal and memory are discussed as possible explanations for earlier divergent findings.

  6. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Δ_vap H(T_b) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the Δ_vap H(T_b) estimates is 1.16, which shows that the present method offers a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.

  7. An iterative method for nonlinear demiclosed monotone-type operators

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1991-01-01

    It is proved that a well known fixed point iteration scheme which has been used for approximating solutions of certain nonlinear demiclosed monotone-type operator equations in Hilbert spaces remains applicable in real Banach spaces with property (U, α, m+1, m). These Banach spaces include the L^p-spaces, p ∈ [2, ∞]. An application of our results to the approximation of a solution of a certain linear operator equation in this general setting is also given. (author). 19 refs

  8. Generalized convexity, generalized monotonicity recent results

    CERN Document Server

    Martinez-Legaz, Juan-Enrique; Volle, Michel

    1998-01-01

    A function is convex if its epigraph is convex. This geometrical structure has very strong implications in terms of continuity and differentiability. Separation theorems lead to optimality conditions and duality for convex problems. A function is quasiconvex if its lower level sets are convex. Here again, the geometrical structure of the level sets implies some continuity and differentiability properties for quasiconvex functions. Optimality conditions and duality can be derived for optimization problems involving such functions as well. Over a period of about fifty years, quasiconvex and other generalized convex functions have been considered in a variety of fields including economics, management science, engineering, probability and applied sciences in accordance with the need of particular applications. During the last twenty-five years, an increase of research activities in this field has been witnessed. More recently generalized monotonicity of maps has been studied. It relates to generalized conve...

  9. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    In this paper, we present empirical results on the relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions with at most five variables. We use Dagger (a tool for optimization

  10. Monotonic and fatigue deformation of Ni--W directionally solidified eutectic

    International Nuclear Information System (INIS)

    Garmong, G.; Williams, J.C.

    1975-01-01

    Unlike many eutectic composites, the Ni--W eutectic exhibits extensive ductility by slip. Furthermore, its properties may be greatly varied by proper heat treatments. Results of studies of deformation in both monotonic and fatigue loading are reported. During monotonic deformation the fiber/matrix interface acts as a source of dislocations at low strains and an obstacle to matrix slip at higher strains. Deforming the quenched-plus-aged eutectic causes planar matrix slip, with the result that matrix slip bands create stress concentrations in the fibers at low strains. The aged eutectic reaches generally higher stress levels for comparable strains than does the as-quenched eutectic, and the failure strains decrease with increasing aging times. For the composites tested in fatigue, the aged eutectic has better high-stress fatigue resistance than the as-quenched material, but for low-stress, high-cycle fatigue their cycles to failure are nearly the same. However, both crack initiation and crack propagation are different in the two conditions, so the coincidence in high-cycle fatigue is probably fortuitous. The effect of matrix strength on composite performance is not simple, since changes in strength may be accompanied by alterations in slip modes and failure processes. (17 fig) (auth)

  11. Sugarcane leaf area estimate obtained from the corrected Normalized Difference Vegetation Index (NDVI

    Directory of Open Access Journals (Sweden)

    Rodrigo Moura Pereira

    2016-06-01

    Full Text Available Large farmland areas and the knowledge on the interaction between solar radiation and vegetation canopies have increased the use of data from orbital remote sensors in sugarcane monitoring. However, the constituents of the atmosphere affect the reflectance values obtained by imaging sensors. This study aimed at improving a sugarcane Leaf Area Index (LAI) estimation model based on the Normalized Difference Vegetation Index (NDVI) subjected to atmospheric correction. The model generated from the atmospherically corrected NDVI showed the best results (R² = 0.84; d = 0.95; MAE = 0.44; RMSE = 0.55) among the models compared. LAI estimated with this model over the sugarcane crop cycle reached a maximum of 4.8 at the vegetative growth phase and 2.3 at the end of the maturation phase. Thus, the use of atmospheric correction to estimate the sugarcane LAI is recommended, since this procedure increases the correlations between the LAI estimated by image and by plant parameters.
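
    For readers unfamiliar with the index, NDVI is a simple band ratio, and LAI models of the kind described are typically regressions fitted on it. A minimal sketch (the model form and coefficients below are placeholders, not the study's fitted values):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from (ideally atmospherically
    corrected) surface reflectance in the near-infrared and red bands."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def lai_estimate(v, a=6.0, b=-0.5):
    """Hypothetical linear LAI-vs-NDVI model; a and b are placeholder
    coefficients, not the values fitted in the study."""
    return a * v + b

print(lai_estimate(ndvi(0.45, 0.08)))  # one pixel's illustrative LAI estimate
```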

  12. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    Science.gov (United States)

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of normal choriocapillaris imaged with OCT angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density, and we characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT angiography we performed simulated measurements of the subfoveal choriocapillaris density with circular ROIs of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We then characterized the textural features of the choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density respectively. Using the estimated parameters to synthesize new random textures via simulation we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
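
    The ROI-stability simulation is straightforward to reproduce in outline. A minimal sketch on a binary image, working in pixel units (the jitter magnitude is an assumption; the 0.01 tolerance mirrors the threshold above but the exact protocol is not reproduced):

```python
import numpy as np

def density_std(binary_img, center, radius, n_sim=200, jitter=5, rng=None):
    """SD of the density measured in circular ROIs of a given radius, with
    small random displacements of the ROI centre (all in pixels)."""
    rng = np.random.default_rng(rng)
    h, w = binary_img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    densities = []
    for _ in range(n_sim):
        cy, cx = center + rng.integers(-jitter, jitter + 1, size=2)
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        densities.append(binary_img[mask].mean())
    return np.std(densities)

def optimal_radius(binary_img, center, radii, tol=0.01):
    """Smallest radius whose simulated density SD falls below tol."""
    for r in radii:
        if density_std(binary_img, center, r) < tol:
            return r
    return None

img = (np.random.default_rng(0).random((400, 400)) < 0.5).astype(float)
print(optimal_radius(img, center=(200, 200), radii=range(20, 200, 10)))
```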

  13. Multistability and gluing bifurcation to butterflies in coupled networks with non-monotonic feedback

    International Nuclear Information System (INIS)

    Ma Jianfu; Wu Jianhong

    2009-01-01

    Neural networks with a non-monotonic activation function have been proposed to increase their capacity for memory storage and retrieval, but there is still a lack of rigorous mathematical analysis and detailed discussions of the impact of time lag. Here we consider a two-neuron recurrent network. We first show how supercritical pitchfork bifurcations and a saddle-node bifurcation lead to the coexistence of multiple stable equilibria (multistability) in the instantaneous updating network. We then study the effect of time delay on the local stability of these equilibria and show that four equilibria lose their stability at a certain critical value of time delay, and Hopf bifurcations of these equilibria occur simultaneously, leading to multiple coexisting periodic orbits. We apply centre manifold theory and normal form theory to determine the direction of these Hopf bifurcations and the stability of bifurcated periodic orbits. Numerical simulations show very interesting global patterns of periodic solutions as the time delay is varied. In particular, we observe that these four periodic solutions are glued together along the stable and unstable manifolds of saddle points to develop a butterfly structure through a complicated process of gluing bifurcations of periodic solutions

  14. Diagnosis of constant faults in iteration-free circuits over monotone basis

    KAUST Repository

    Alrawaf, Saad Abdullah; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2014-01-01

    We show that for each iteration-free combinatorial circuit S over a basis B containing only monotone Boolean functions with at most five variables, there exists a decision tree for diagnosis of constant faults on inputs of gates with depth at most 7L(S) where L(S) is the number of gates in S. © 2013 Elsevier B.V. All rights reserved.

  15. Diagnosis of constant faults in iteration-free circuits over monotone basis

    KAUST Repository

    Alrawaf, Saad Abdullah

    2014-03-01

    We show that for each iteration-free combinatorial circuit S over a basis B containing only monotone Boolean functions with at most five variables, there exists a decision tree for diagnosis of constant faults on inputs of gates with depth at most 7L(S) where L(S) is the number of gates in S. © 2013 Elsevier B.V. All rights reserved.

  16. Effect of fiber fabric orientation on the flexural monotonic and fatigue behavior of 2D woven ceramic matrix composites

    International Nuclear Information System (INIS)

    Chawla, N.; Liaw, P.K.; Lara-Curzio, E.; Ferber, M.K.; Lowden, R.A.

    2012-01-01

    The effect of fiber fabric orientation, i.e., parallel or perpendicular to the loading axis, on the monotonic and fatigue behavior of plain-weave fiber reinforced SiC matrix laminated composites was investigated. Two composite systems were studied: Nextel 312 (3M Corp.) reinforced SiC and Nicalon (Nippon Carbon Corp.) reinforced SiC, both fabricated by Forced Chemical Vapor Infiltration (FCVI). The behavior of both materials was investigated under monotonic and fatigue loading. Interlaminar and in-plane shear tests were conducted to correlate shear properties with the effect of fabric orientation, relative to the loading axis, on bending behavior. The underlying mechanisms, in monotonic and fatigue loading, were investigated through post-fracture examination using scanning electron microscopy (SEM).

  17. Elucidating the Relations Between Monotonic and Fatigue Properties of Laser Powder Bed Fusion Stainless Steel 316L

    Science.gov (United States)

    Zhang, Meng; Sun, Chen-Nan; Zhang, Xiang; Goh, Phoi Chin; Wei, Jun; Li, Hua; Hardacre, David

    2018-03-01

    The laser powder bed fusion (L-PBF) technique builds parts with higher static strength than conventional manufacturing processes through the formation of ultrafine grains. However, its fatigue endurance strength σ_f does not match the increased monotonic tensile strength σ_b. This work examines the monotonic and fatigue properties of as-built and heat-treated L-PBF stainless steel 316L. It was found that the general linear relation σ_f = m·σ_b for describing conventional ferrous materials is not applicable to L-PBF parts because of the influence of porosity. Instead, the ductility parameter correlated linearly with fatigue strength and was proposed as the new fatigue assessment criterion for porous L-PBF parts. Annealed parts conformed to the strength-ductility trade-off. Fatigue resistance was reduced at short lives, but the effect was partially offset by the higher ductility, such that compared with an as-built part of equivalent monotonic strength, the heat-treated parts were more fatigue resistant.

  18. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2017-01-01

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations are provided that support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.

  19. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations are provided that support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
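
    The RTE itself is computed by a simple fixed-point iteration. A minimal sketch of the standard iteration (our own illustration; variable names are ours, and existence of the fixed point requires ρ in a suitable sub-interval of (0, 1]):

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=50, tol=1e-6):
    """Fixed-point iteration for the regularized Tyler estimator (RTE):
    C = (1 - rho) * (p/n) * sum_i x_i x_i^H / (x_i^H C^{-1} x_i) + rho * I.
    X: (p, n) array of n secondary data samples; rho in (0, 1]."""
    p, n = X.shape
    C = np.eye(p, dtype=X.dtype)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        # Quadratic forms x_i^H C^{-1} x_i for all samples at once.
        q = np.einsum('in,ij,jn->n', X.conj(), Cinv, X).real
        C_new = (1 - rho) * (p / n) * (X / q) @ X.conj().T + rho * np.eye(p)
        if np.linalg.norm(C_new - C, 'fro') < tol * np.linalg.norm(C, 'fro'):
            return C_new
        C = C_new
    return C

rng = np.random.default_rng(0)
C_hat = regularized_tyler(rng.standard_normal((10, 30)), rho=0.3)
print(np.linalg.cond(C_hat))   # regularization keeps the estimate well conditioned
```

    The resulting estimate can then be plugged into the ANMF statistic in place of the unknown covariance matrix.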

  20. Effect of meal glycemic load and caffeine consumption on prolonged monotonous driving performance.

    Science.gov (United States)

    Bragg, Christopher; Desbrow, Ben; Hall, Susan; Irwin, Christopher

    2017-11-01

    Monotonous driving involves low levels of stimulation and high levels of repetition and is essentially an exercise in sustained attention and vigilance. The aim of this study was to determine the effects of consuming a high or low glycemic load meal on prolonged monotonous driving performance. The effect of consuming caffeine with a high glycemic load meal was also examined. Ten healthy, non-diabetic participants (7 males, age 51 ± 7 yrs, mean ± SD) completed a repeated-measures investigation involving 3 experimental trials. On separate occasions, participants were provided one of three treatments prior to undertaking a 90 min computer-based simulated drive. The 3 treatment conditions involved consuming: (1) a low glycemic load meal + placebo capsules (LGL), (2) a high glycemic load meal + placebo capsules (HGL) and (3) a high glycemic load meal + caffeine capsules (3 mg·kg⁻¹ body weight) (CAF). Measures of driving performance included lateral (standard deviation of lane position (SDLP), average lane position (AVLP), total number of lane crossings (LC)) and longitudinal (average speed (AVSP) and standard deviation of speed (SDSP)) vehicle control parameters. Blood glucose levels, plasma caffeine concentrations and subjective ratings of sleepiness, alertness, mood, hunger and simulator sickness were also collected throughout each trial. No difference in either lateral or longitudinal vehicle control parameters or subjective ratings was observed between the HGL and LGL treatments. A significant reduction in SDLP (0.36 ± 0.20 m vs 0.41 ± 0.19 m, p = 0.004) and LC (34.4 ± 31.4 vs 56.7 ± 31.5, p = 0.018) was observed in the CAF trial compared to the HGL trial. However, no differences in AVLP, AVSP and SDSP or subjective ratings were detected between these two trials (p > 0.05). Altering the glycemic load of a breakfast meal had no effect on measures of monotonous driving performance in non-diabetic adults. Individuals planning to undertake a prolonged monotonous drive following consumption of a

  1. Robust Monotonically Convergent Iterative Learning Control for Discrete-Time Systems via Generalized KYP Lemma

    Directory of Open Access Journals (Sweden)

    Jian Ding

    2014-01-01

    Full Text Available This paper addresses the problem of P-type iterative learning control for a class of multiple-input multiple-output linear discrete-time systems, the aim being a robust, monotonically convergent control law design over a finite frequency range. It is shown that the 2-D iterative learning control process can be taken as a 1-D state-space model regardless of relative degree. With the generalized Kalman-Yakubovich-Popov lemma applied, it is feasible to describe the monotonically convergent conditions with the help of the linear matrix inequality technique and to develop formulas for the design of the control gain matrices. An extension to robust control law design against systems with structured and polytopic-type uncertainties is also considered. Two numerical examples are provided to validate the feasibility and effectiveness of the proposed method.
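
    The P-type update at the heart of such schemes is a one-line rule applied between complete trials. A minimal SISO sketch (our own toy plant and gain; the paper's contribution is choosing the gain via the generalized KYP lemma, which is not reproduced here):

```python
import numpy as np

def p_type_ilc(plant, r, gamma, n_trials=50):
    """P-type ILC with the one-step-ahead error update
    u_{k+1}(t) = u_k(t) + gamma * e_k(t+1), for a relative-degree-one plant."""
    u = np.zeros(len(r))
    err_norms = []
    for _ in range(n_trials):
        y = plant(u)                       # run one complete trial
        e = r - y
        err_norms.append(np.linalg.norm(e))
        u[:-1] = u[:-1] + gamma * e[1:]    # learning update between trials
    return u, err_norms

# Example plant: y[t+1] = 0.9*y[t] + 0.5*u[t], y[0] = 0.
def plant(u):
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = 0.9 * y[t] + 0.5 * u[t]
    return y

r = np.sin(np.linspace(0, 2 * np.pi, 100))   # reference trajectory
u, err_norms = p_type_ilc(plant, r, gamma=0.8)
print(err_norms[0], err_norms[-1])           # tracking error shrinks across trials
```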

  2. In vivo estimation of normal amygdala volume from structural MRI scans with anatomical-based segmentation.

    Science.gov (United States)

    Siozopoulos, Achilleas; Thomaidis, Vasilios; Prassopoulos, Panos; Fiska, Aliki

    2018-02-01

    Literature includes a number of studies using structural MRI (sMRI) to determine the volume of the amygdala, which is modified in various pathologic conditions. The reported values vary widely, mainly because of different anatomical approaches to the complex. This study aims at estimating the normal amygdala volume from sMRI scans using a recent anatomical definition described in a study based on post-mortem material. The amygdala volume has been calculated in 106 healthy subjects, using sMRI and anatomical-based segmentation. The resulting volumes have been analyzed for differences related to hemisphere, sex, and age. The mean amygdalar volume was estimated at 1.42 cm³. The mean right amygdala volume has been found larger than the left, but the difference for the raw values was within the limits of the method error. No intersexual differences or age-related alterations have been observed. The study provides a method for determining the boundaries of the amygdala in sMRI scans based on recent anatomical considerations, and an estimation of the mean normal amygdala volume from a quite large number of scans for future use in comparative studies.

  3. Asian Option Pricing with Monotonous Transaction Costs under Fractional Brownian Motion

    Directory of Open Access Journals (Sweden)

    Di Pan

    2013-01-01

    Full Text Available A geometric-average Asian option pricing model with a monotone transaction cost rate under fractional Brownian motion is established. The method of partial differential equations is used to solve this model, and analytical expressions for the Asian option value are obtained. The numerical experiments show that the Hurst exponent of the fractional Brownian motion and the transaction cost rate have a significant impact on the option value.
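
    As a point of reference for the payoff being priced, a Monte Carlo baseline is easy to write down. The sketch below prices the geometric-average Asian call under standard geometric Brownian motion (Hurst H = 1/2) with no transaction costs; it is a simplification for illustration, not the paper's fBm model:

```python
import numpy as np

def geometric_asian_call_mc(S0, K, r, sigma, T, n_steps=64, n_paths=20000, seed=0):
    """Monte Carlo price of a geometric-average Asian call under plain GBM
    (H = 1/2, no transaction costs) with discrete monitoring."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    G = np.exp(np.log(S).mean(axis=1))        # geometric average of each path
    payoff = np.maximum(G - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(geometric_asian_call_mc(100, 100, 0.05, 0.2, 1.0))
```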

  4. Comparison of SUVs normalized by lean body mass determined by CT with those normalized by lean body mass estimated by predictive equations in normal tissues

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woo Hyoung; Kim, Chang Guhn; Kim, Dae Weung [Wonkwang Univ. School of Medicine, Iksan (Korea, Republic of)

    2012-09-15

    Standardized uptake values (SUVs) normalized by lean body mass (LBM) determined by CT were compared with those normalized by LBM estimated using predictive equations (PEs) in normal liver, spleen, and aorta using ¹⁸F-FDG PET/CT. Fluorine-18 fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT) was conducted on 453 patients. LBM determined by CT was defined in 3 ways (LBM-CT1 to LBM-CT3). Five PEs were used for comparison (LBM-PE1 to LBM-PE5). Tissue SUV normalized by LBM (SUL) was calculated using LBM from each method (SUL-CT1 to SUL-CT3, SUL-PE1 to SUL-PE5). Agreement between methods was assessed by Bland-Altman analysis. Percentage difference and percentage error were also calculated. For all liver SUL-CTs vs. liver SUL-PEs except liver SUL-PE3, the range of biases, SDs of percentage difference, and percentage errors were -0.17-0.24 SUL, 6.15-10.17%, and 25.07-38.91%, respectively. For liver SUL-CTs vs. liver SUL-PE3, the corresponding figures were 0.47-0.69 SUL, 10.90-11.25%, and 50.85-51.55%, respectively, showing the largest percentage errors and positive biases. Irrespective of the magnitudes of the biases, large percentage errors of 25.07-51.55% were observed between liver SUL-CT1-3 and liver SUL-PE1-5. The results of the spleen and aorta SUL-CT and SUL-PE comparisons were almost identical to those for liver. The present study demonstrated substantial errors in individual SUL-PEs compared with SUL-CTs as a reference value. Normalization of SUV by LBM determined by CT rather than PEs may be a useful approach to reduce errors in individual SUL-PEs.

  5. Comparison of SUVs normalized by lean body mass determined by CT with those normalized by lean body mass estimated by predictive equations in normal tissues

    International Nuclear Information System (INIS)

    Kim, Woo Hyoung; Kim, Chang Guhn; Kim, Dae Weung

    2012-01-01

    Standardized uptake values (SUVs) normalized by lean body mass (LBM) determined by CT were compared with those normalized by LBM estimated using predictive equations (PEs) in normal liver, spleen, and aorta using ¹⁸F-FDG PET/CT. Fluorine-18 fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT) was conducted on 453 patients. LBM determined by CT was defined in 3 ways (LBM-CT1 to LBM-CT3). Five PEs were used for comparison (LBM-PE1 to LBM-PE5). Tissue SUV normalized by LBM (SUL) was calculated using LBM from each method (SUL-CT1 to SUL-CT3, SUL-PE1 to SUL-PE5). Agreement between methods was assessed by Bland-Altman analysis. Percentage difference and percentage error were also calculated. For all liver SUL-CTs vs. liver SUL-PEs except liver SUL-PE3, the range of biases, SDs of percentage difference, and percentage errors were -0.17-0.24 SUL, 6.15-10.17%, and 25.07-38.91%, respectively. For liver SUL-CTs vs. liver SUL-PE3, the corresponding figures were 0.47-0.69 SUL, 10.90-11.25%, and 50.85-51.55%, respectively, showing the largest percentage errors and positive biases. Irrespective of the magnitudes of the biases, large percentage errors of 25.07-51.55% were observed between liver SUL-CT1-3 and liver SUL-PE1-5. The results of the spleen and aorta SUL-CT and SUL-PE comparisons were almost identical to those for liver. The present study demonstrated substantial errors in individual SUL-PEs compared with SUL-CTs as a reference value. Normalization of SUV by LBM determined by CT rather than PEs may be a useful approach to reduce errors in individual SUL-PEs.

  6. Psychophysiological responses to short-term cooling during a simulated monotonous driving task.

    Science.gov (United States)

    Schmidt, Elisabeth; Decke, Ralf; Rasshofer, Ralph; Bullinger, Angelika C

    2017-07-01

    For drivers on monotonous routes, cognitive fatigue causes discomfort and poses an important risk for traffic safety. Countermeasures against this type of fatigue are required, and thermal stimulation is one intervention method. Surprisingly, there are hardly any studies available that measure the effect of cooling while driving. Hence, to better understand the effect of short-term cooling on the perceived sleepiness of car drivers, a driving simulator study (n = 34) was conducted in which physiological and vehicular data during cooling and control conditions were compared. The evaluation of the study showed that cooling applied during a monotonous drive increased the alertness of the car driver. The sleepiness rankings were significantly lower for the cooling condition. Furthermore, the significant pupillary and electrodermal responses were physiological indicators of increased sympathetic activation. In addition, better driving performance was observed during cooling. In conclusion, the study shows that cooling generally has a positive short-term effect on drivers' wakefulness; in detail, a cooling period of 3 min delivers the best results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  8. Oscillation of Nonlinear Delay Differential Equation with Non-Monotone Arguments

    Directory of Open Access Journals (Sweden)

    Özkan Öcalan

    2017-07-01

    Full Text Available Consider the first-order nonlinear retarded differential equation $$x^{\prime}(t) + p(t) f(x(\tau(t))) = 0, \quad t \geq t_{0},$$ where $p(t)$ and $\tau(t)$ are functions of positive real numbers such that $\tau(t) \leq t$ for $t \geq t_{0}$, and $\lim_{t\rightarrow\infty}\tau(t) = \infty$. Under the assumption that the retarded argument is non-monotone, new oscillation results are given. An example illustrating the result is also given.

  9. Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps

    Science.gov (United States)

    Qin, Wen-Xin; Wang, Ya-Nan

    2018-06-01

    A non-exact monotone twist map $\bar\varphi_F$ is a composition of an exact monotone twist map $\bar\varphi$ with a generating function $H$ and a vertical translation $V_F$ with $V_F((x, y)) = (x, y - F)$. We show in this paper that for each $\omega \in \mathbb{R}$, there exists a critical value $F_d(\omega) \geq 0$ depending on $H$ and $\omega$ such that for $0 \leq F \leq F_d(\omega)$, the non-exact twist map $\bar\varphi_F$ has an invariant Denjoy minimal set with irrational rotation number $\omega$ lying on a Lipschitz graph, or Birkhoff $(p, q)$-periodic orbits for rational $\omega = p/q$. Like the Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value $F = F_d(\omega)$, the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.

  10. Non-monotonic behaviour in relaxation dynamics of image restoration

    International Nuclear Information System (INIS)

    Ozeki, Tomoko; Okada, Masato

    2003-01-01

    We have investigated the relaxation dynamics of image restoration through a Bayesian approach. The relaxation dynamics is much faster at zero temperature than at the Nishimori temperature where the pixel-wise error rate is minimized in equilibrium. At low temperature, we observed non-monotonic development of the overlap. We suggest that the optimal performance is realized through premature termination in the relaxation processes in the case of the infinite-range model. We also performed Markov chain Monte Carlo simulations to clarify the underlying mechanism of non-trivial behaviour at low temperature by checking the local field distributions of each pixel

  11. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of the deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite variance and infinite variance cases. However, Cook's test statistics are oversized. It has been found by researchers that using conventional tests is dangerous, though the best performance among these is achieved by an HCCME (heteroscedasticity-consistent covariance matrix estimator). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices has been derived, and the results are reported for various sample sizes in which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained through nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
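
    The Kapetanios-type test statistic and an ESTAR data-generating process are compact enough to sketch. Below is a minimal illustration (no deterministic terms, and the parameter values are our own choices; the test's critical values are nonstandard and must come from the appropriate tables or simulations):

```python
import numpy as np

def kss_statistic(y):
    """t-ratio of the Kapetanios-Shin-Snell nonlinear unit root test:
    regress Delta y_t on y_{t-1}^3 and return the t-ratio on the cubic term."""
    dy = np.diff(y)
    x = y[:-1] ** 3
    beta = (x @ dy) / (x @ x)
    resid = dy - beta * x
    s2 = resid @ resid / (len(dy) - 1)
    return beta / np.sqrt(s2 / (x @ x))

# ESTAR data-generating process of the kind the abstract studies:
# Delta y_t = gamma * y_{t-1} * (1 - exp(-theta * y_{t-1}**2)) + e_t
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = y[t-1] - 0.5 * y[t-1] * (1 - np.exp(-0.1 * y[t-1]**2)) + rng.standard_normal()
print(kss_statistic(y))
```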

  12. Critical undrained shear strength of sand-silt mixtures under monotonic loading

    Directory of Open Access Journals (Sweden)

    Mohamed Bensoula

    2014-07-01

    Full Text Available This study uses experimental triaxial tests with monotonic loading to develop empirical relationships for estimating the undrained critical shear strength. The effect of the fines content on undrained shear strength is analyzed for different density states. The parametric analysis indicates that, based on the soil void ratio and fines content properties, the undrained critical shear strength first increases and then decreases as the proportion of fines increases, which demonstrates the influence of fines content on a soil's vulnerability to liquefaction. A series of monotonic undrained triaxial tests was performed on reconstituted saturated sand-silt mixtures. Beyond 30% fines content, a fraction of the silt participates in the soil's skeleton chain force. In this context, the concept of the equivalent intergranular void ratio may be an appropriate parameter to express the critical shear strength of the studied soil. This parameter is able to control the undrained shear strength of non-plastic silt and sand mixtures with different densities.

  13. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of the coexistence and dynamical behavior of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by those with a Mexican-hat-type activation function. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the 3^n locally stable equilibrium points obtained here are located both in saturated regions and in unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
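
    The successive-approximations procedure in question is the familiar EM-type iteration; the step-size generalization simply scales the parameter increment. A minimal sketch for a two-component univariate mixture (initialization and variable names are ours; omega = 1 recovers the classical update, and the paper proves local convergence for step sizes in (0, 2)):

```python
import numpy as np

def em_mixture(x, omega=1.0, n_iter=100):
    """Successive-approximations (EM-type) iteration for a two-component
    univariate normal mixture, with step size omega applied to the
    parameter increment (omega = 1 is the classical update)."""
    # Crude initialization (an assumption, not from the paper).
    theta = np.array([0.5, x.min(), x.max(), x.std(), x.std()])
    for _ in range(n_iter):
        w, mu1, mu2, s1, s2 = theta
        d1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        g = d1 / (d1 + d2)                         # E-step: responsibilities
        mu1_new = (g * x).sum() / g.sum()          # M-step: likelihood equations
        mu2_new = ((1 - g) * x).sum() / (1 - g).sum()
        upd = np.array([
            g.mean(), mu1_new, mu2_new,
            np.sqrt((g * (x - mu1_new) ** 2).sum() / g.sum()),
            np.sqrt(((1 - g) * (x - mu2_new) ** 2).sum() / (1 - g).sum()),
        ])
        theta = theta + omega * (upd - theta)      # deflected-gradient step
    return theta

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_mixture(x, omega=1.0))   # approx. [0.6, -2, 3, 1, 0.5]
```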

  15. Estimation methods of deformational behaviours of RC beams under the unrestrained condition at elevated temperatures

    International Nuclear Information System (INIS)

    Kanezu, Tsutomu; Nakano, Takehiro; Endo, Tatsumi

    1986-01-01

    The estimation methods for free deformations of reinforced concrete (RC) beams at elevated temperatures are investigated based on the concepts of the ACI and CEB/FIP formulas, which are widely used to estimate the flexural deformations of RC beams at normal temperature. The conclusions derived from the study are as follows. 1. Features of free deformations of RC beams. (i) The ratios of the average compressive strains on the top fiber of RC beams to the calculated ones at the cracked section drop after cracking and then remain constant as temperature rises. (ii) Average compressive strains might be estimated by the average of the calculated strains at the perfect-bond section and the cracked section of the RC beam. (iii) The ratios of the average tensile strains at the level of the reinforcement to the calculated ones at the cracked section tend to approach 1.0 monotonically as temperature rises. The changes in the average tensile strains are caused by the deterioration of bond strength and by cracking due to the increasing difference in expansive strains between reinforcement and concrete. 2. Estimation methods for free deformations of RC beams. (i) In order to estimate the free deformations of RC beams at elevated temperatures, the basic concepts of the ACI and CEB/FIP formulas are adopted, which are widely used to estimate the M-φ relations of RC beams at normal temperature. (ii) It was confirmed that the suggested formulas are able to estimate the free deformations of RC beams, that is, the longitudinal deformation and the curvature, at elevated temperatures. (author)

  16. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for them. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
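
    The core computation (a discontinuity-free depth field from a normal image, with depth compression controlled by a parameter) can be illustrated with a generic depth-from-normals Poisson solve. The sketch below is our own simplified stand-in, with plain gradient attenuation for style control; it is not the paper's exact linear system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def bas_relief_depth(normals, alpha=5.0):
    """Depth from a normal image via a sparse Poisson solve; alpha controls
    how strongly large gradients (and hence overall depth) are compressed.
    normals: (H, W, 3) array of unit normals with nz > 0."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    p, q = -nx / nz, -ny / nz                  # target depth gradients
    p = p / (1.0 + alpha * np.abs(p))          # attenuate large gradients,
    q = q / (1.0 + alpha * np.abs(q))          # preserving fine detail
    H, W = p.shape
    div = np.zeros((H, W))                     # divergence of gradient field
    div[:, 1:] += p[:, 1:] - p[:, :-1]
    div[1:, :] += q[1:, :] - q[:-1, :]
    Dx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(W, W))
    Dy = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(H, H))
    L = sp.kron(sp.eye(H), Dx) + sp.kron(Dy, sp.eye(W))   # Dirichlet Laplacian
    z = spla.spsolve(L.tocsc(), div.ravel())
    return z.reshape(H, W)
```

    Larger alpha yields a flatter, more compressed relief, which is the kind of style/height trade-off the paper exposes as a user parameter.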

  17. Monotonicity of the von Neumann entropy expressed as a function of Rényi entropies

    OpenAIRE

    Fannes, Mark

    2013-01-01

    The von Neumann entropy of a density matrix of dimension d, expressed in terms of the first d-1 integer order Rényi entropies, is monotonically increasing in Rényi entropies of even order and decreasing in those of odd order.

  18. Some completely monotonic properties for the $(p,q )$-gamma function

    OpenAIRE

    Krasniqi, Valmir; Merovci, Faton

    2014-01-01

    We define the $\Gamma_{p,q}$ function, a generalization of the $\Gamma$ function, and the $\psi_{p,q}$-analogue of the psi function as the logarithmic derivative of $\Gamma_{p,q}$. For the $\Gamma_{p,q}$ function, some properties related to convexity, log-convexity and completely monotonic functions are given. Some properties of the $\psi_{p,q}$ analogue of the $\psi$ function are also established. As an application, when $p\to \infty, q\to 1,$ we obtain all results of \cite{Valmir1} and \cite{SHA}.

  19. Non-monotonic probability of thermal reversal in thin-film biaxial nanomagnets with small energy barriers

    Directory of Open Access Journals (Sweden)

    N. Kani

    2017-05-01

    Full Text Available The goal of this paper is to investigate the short time-scale, thermally-induced probability of magnetization reversal for a thin-film nanomagnet characterized by a biaxial magnetic anisotropy. For the first time, we clearly show that for a given energy barrier of the nanomagnet, the magnetization reversal probability of a biaxial nanomagnet exhibits a non-monotonic dependence on its saturation magnetization. Specifically, there are two reasons for this non-monotonic behavior in rectangular thin-film nanomagnets that have a large perpendicular magnetic anisotropy. First, a large perpendicular anisotropy lowers the precessional period of the magnetization, making it more likely to precess across the x̂ = 0 plane if the magnetization energy exceeds the energy barrier. Second, the thermal-field torque at a particular energy increases as the magnitude of the perpendicular anisotropy increases during the magnetization precession. This non-monotonic behavior is most noticeable when analyzing magnetization reversals on time-scales up to several tens of ns. In light of the several proposals for spintronic devices that require data retention on time-scales up to tens of ns, understanding the probability of magnetization reversal on short time-scales is important. As such, the results presented in this paper will be helpful in quantifying the reliability and noise sensitivity of spintronic devices in which thermal noise is inevitably present.

  20. Reduction theorems for weighted integral inequalities on the cone of monotone functions

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Stepanov, V.D.

    2013-01-01

    Roč. 68, č. 4 (2013), s. 597-664 ISSN 0036-0279 R&D Projects: GA ČR GA201/08/0383; GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords : weighted Lebesgue space * cone of monotone functions * duality principle Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2013 http://iopscience.iop.org/0036-0279/68/4/597

  1. Sufficient Descent Conjugate Gradient Methods for Solving Convex Constrained Nonlinear Monotone Equations

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
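
    The hyperplane projection step of Solodov and Svaiter is the part that admits a compact sketch. The version below uses the plain direction d = -F(x) rather than the papers' sufficient-descent conjugate gradient directions, so it illustrates the projection framework only (problem data and parameters are our own toy choices):

```python
import numpy as np

def solodov_svaiter(F, x0, proj, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=500):
    """Hyperplane projection method for monotone equations F(x) = 0 over a
    convex set with projection operator `proj` (here: d = -F(x) direction)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        d = -Fx
        t = 1.0
        while True:                            # line search for trial point z
            z = x + t * d
            Fz = F(z)
            if -Fz @ d >= sigma * t * (d @ d):
                break
            t *= beta
        # Project x onto the hyperplane {y : <F(z), y - z> = 0}, then onto C.
        x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

# Example: a monotone mapping, constrained to the nonnegative orthant.
A = np.array([[3.0, 1.0], [1.0, 2.0]])        # symmetric positive definite
F = lambda x: A @ x + np.tanh(x) - np.array([1.0, 1.0])
x_star = solodov_svaiter(F, np.ones(2), proj=lambda v: np.maximum(v, 0.0))
print(x_star, F(x_star))
```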

  2. Is this the right normalization? A diagnostic tool for ChIP-seq normalization.

    Science.gov (United States)

    Angelini, Claudia; Heller, Ruth; Volkinshtein, Rita; Yekutieli, Daniel

    2015-05-09

    ChIP-seq experiments are becoming a standard approach for genome-wide profiling of protein-DNA interactions, such as detecting transcription factor binding sites, histone modification marks and RNA Polymerase II occupancy. However, when comparing a ChIP sample versus a control sample, such as Input DNA, normalization procedures have to be applied in order to remove experimental sources of bias. Despite the substantial impact that the choice of the normalization method can have on the results of a ChIP-seq data analysis, its assessment is not fully explored in the literature. In particular, there are no diagnostic tools that show whether the applied normalization is indeed appropriate for the data being analyzed. In this work we propose a novel diagnostic tool to examine the appropriateness of the estimated normalization procedure. By plotting the empirical densities of log relative risks in bins of equal read count, along with the estimated normalization constant after logarithmic transformation, the researcher is able to assess the appropriateness of the estimated normalization constant. We use the diagnostic plot to evaluate the appropriateness of the estimates obtained by CisGenome, NCIS and CCAT on several real data examples. Moreover, we show the impact that the choice of the normalization constant can have on standard tools for peak calling such as MACS or SICER. Finally, we propose a novel procedure for controlling the FDR using sample swapping. This procedure makes use of the estimated normalization constant in order to gain power over the naive choice of constant (used in MACS and SICER), which is the ratio of the total number of reads in the ChIP and Input samples. Linear normalization approaches aim to estimate a scale factor, r, to adjust for different sequencing depths when comparing ChIP versus Input samples. The estimated scaling factor can easily be incorporated in many peak caller algorithms to improve the accuracy of the peak identification. The
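
    The flavor of the diagnostic is easy to convey in a few lines. A minimal sketch on simulated background windows (the binning scheme and summary statistic are our own simplifications of the plot described above):

```python
import numpy as np

def normalization_diagnostic(chip, input_, r, n_bins=10):
    """Compare log relative risks log2(chip_i / input_i) across bins of
    equal total read count against log2 of a candidate normalization
    constant r; agreement in background bins suggests r is appropriate."""
    keep = (chip > 0) & (input_ > 0)
    chip, input_ = chip[keep], input_[keep]
    log_rr = np.log2(chip / input_)
    total = chip + input_
    edges = np.quantile(total, np.linspace(0, 1, n_bins + 1))
    for b in range(n_bins):
        in_bin = (total >= edges[b]) & (total <= edges[b + 1])
        med = np.median(log_rr[in_bin])
        print(f"bin {b}: median log2 relative risk = {med:.3f} "
              f"(candidate log2(r) = {np.log2(r):.3f})")

# Naive constant (as used by e.g. MACS/SICER): ratio of total read counts.
rng = np.random.default_rng(0)
input_counts = rng.poisson(20, size=5000)
chip_counts = rng.poisson(30, size=5000)      # pure background, true r = 1.5
normalization_diagnostic(chip_counts, input_counts,
                         r=chip_counts.sum() / input_counts.sum())
```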

  3. A note on monotone solutions for a nonconvex second-order functional differential inclusion

    Directory of Open Access Journals (Sweden)

    Aurelian Cernea

    2011-12-01

    Full Text Available The existence of monotone solutions for a second-order functional differential inclusion with Carathéodory perturbation is obtained in the case when the multifunction that defines the inclusion is upper semicontinuous, compact valued and contained in the Fréchet subdifferential of a $\phi$-convex function of order two.

  4. Inelastic behavior of materials and structures under monotonic and cyclic loading

    CERN Document Server

    Brünig, Michael

    2015-01-01

    This book presents studies on the inelastic behavior of materials and structures under monotonic and cyclic loads. It focuses on the description of new effects like purely thermal cycles or cases of non-trivial damage. The various models are based on different approaches and methods, and scaling aspects are taken into account. In addition to purely phenomenological models, the book also presents mechanism-based approaches. It includes contributions written by leading authors from a host of different countries.

  5. THE EFFECT OF MONOTONY, SLEEP QUALITY, PSYCHOPHYSIOLOGY, DISTRACTION, AND WORK FATIGUE ON ALERTNESS LEVEL

    Directory of Open Access Journals (Sweden)

    Wiwik Budiawan

    2016-02-01

    Full Text Available Humans, as working subjects, have limitations that lead to errors. Human error results in a reduced level of alertness of train drivers and assistant train drivers while performing their duties. Alertness is influenced by five factors: monotony, sleep quality, psychophysiological state, distraction, and work fatigue. The five factors were measured with a monotony questionnaire, the Pittsburgh Sleep Quality Index (PSQI), the General Job Stress questionnaire, and the FAS questionnaire, while alertness was tested with Psychomotor Vigilance Test (PVT) software. Train drivers and assistant train drivers were chosen as respondents because this kind of work demands a high level of alertness. The measurements were then analyzed using multiple linear regression. The study found that monotony, sleep quality, psychophysiological state, distraction, and work fatigue simultaneously influence the level of alertness. Before duty hours, the computed F statistic for monotony, sleep quality, and psychophysiological state was 0.876, while distraction and work fatigue (FAS) yielded 2.371; after work, distraction and work fatigue (FAS) yielded an F statistic of 2.953, and monotony, sleep quality, and psychophysiological state 0.544. The factor with the greatest influence on alertness before duty hours was sleep quality, while after duty hours it was work fatigue.

  6. Comparison of linear and non-linear monotonicity-based shape reconstruction using exact matrix characterizations

    DEFF Research Database (Denmark)

    Garde, Henrik

    2018-01-01

    For a fair comparison, exact matrix characterizations are used when probing the monotonicity relations to avoid errors from numerical solution to PDEs and numerical integration. Using a special factorization of the Neumann-to-Dirichlet map also makes the non-linear method as fast as the linear method...

  7. Influence of Compaction Temperature on Resistance Under Monotonic Loading of Crumb-Rubber Modified Hot-Mix Asphalts

    Directory of Open Access Journals (Sweden)

    Hugo A. Rondón-Quintana

    2012-12-01

    Full Text Available The influence of compaction temperature on resistance under monotonic loading (Marshall) of Crumb-Rubber Modified (CRM) Hot-Mix Asphalt (HMA) was evaluated. The emphasis of this study was the application in Bogotá D.C. (Colombia). In this city the compaction temperature of HMA mixtures decreases, compared to the optimum, by about 30°C. Two asphalt cements (AC 60-70 and AC 80-100) were modified. Two particle size distribution curves were used. The compaction temperatures used were 120, 130, 140 and 150°C. The decrease of the compaction temperature produces a small decrease in resistance under monotonic loading of the modified mixtures tested. Mixtures without CRM undergo a linear decrease in resistance of up to 34%.

  8. Monotonic and Cyclic Behavior of DIN 34CrNiMo6 Tempered Alloy Steel

    Directory of Open Access Journals (Sweden)

    Ricardo Branco

    2016-04-01

    Full Text Available This paper aims at studying the monotonic and cyclic plastic deformation behavior of DIN 34CrNiMo6 high strength steel. Monotonic and low-cycle fatigue tests are conducted in ambient air, at room temperature, using standard 8-mm diameter specimens. The former tests are carried out under position control with constant displacement rate. The latter are performed under fully-reversed strain-controlled conditions, using the single-step test method, with strain amplitudes lying between ±0.4% and ±2.0%. After the tests, the fracture surfaces are examined by scanning electron microscopy in order to characterize the surface morphologies and identify the main failure mechanisms. Regardless of the strain amplitude, a softening behavior was observed throughout the entire life. Total strain energy density, defined as the sum of both tensile elastic and plastic strain energies, was revealed to be an adequate fatigue damage parameter for short and long lives.

  9. On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility

    OpenAIRE

    Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini

    2008-01-01

    We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.

  10. Monotonic childhoods: representations of otherness in research writing

    Directory of Open Access Journals (Sweden)

    Denise Marcos Bussoletti

    2011-12-01

    Full Text Available This paper is part of a doctoral thesis entitled “Monotonic childhoods – a rhapsody of hope”. It follows the perspective of a critical psychosocial and cultural study, and aims at discussing the other’s representation in research writing, electing childhood as an allegorical and reflective place. It takes into consideration, by means of analysis, the drawings and poems of children from the Terezin ghetto during the Second World War. The work is mostly based on Serge Moscovici’s Social Representation Theory, but it is also in constant dialogue with other theories and knowledge fields, especially Walter Benjamin’s and Mikhail Bakhtin’s contributions. At the end, the paper supports the thesis that conceives poetics as one of the translation axes of childhood cultures.

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
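
    A minimal sketch of the relaxed successive-approximations update for a two-component univariate normal mixture; a step size of 1 gives the familiar EM-style update, and the abstract's result concerns step sizes between 0 and 2 (the data, starting values and the choice omega = 1.5 are illustrative assumptions, and convergence is the paper's large-sample local result, not a finite-sample guarantee):

      import numpy as np
      from scipy.stats import norm

      def em_update(x, p, mu1, mu2, s1, s2):
          # E-step: responsibility of component 1 for each observation
          r = p * norm.pdf(x, mu1, s1)
          r = r / (r + (1 - p) * norm.pdf(x, mu2, s2))
          # M-step: closed-form maximizers given the responsibilities
          mu1_new = (r * x).sum() / r.sum()
          mu2_new = ((1 - r) * x).sum() / (1 - r).sum()
          s1_new = np.sqrt((r * (x - mu1_new) ** 2).sum() / r.sum())
          s2_new = np.sqrt(((1 - r) * (x - mu2_new) ** 2).sum() / (1 - r).sum())
          return np.array([r.mean(), mu1_new, mu2_new, s1_new, s2_new])

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])
      theta = np.array([0.5, -1.0, 1.0, 1.0, 1.0])   # p, mu1, mu2, s1, s2
      omega = 1.5                                    # step size in (0, 2)
      for _ in range(100):
          theta = theta + omega * (em_update(x, *theta) - theta)
      print(theta)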

  12. Search for scalar-tensor gravity theories with a non-monotonic time evolution of the speed-up factor

    Energy Technology Data Exchange (ETDEWEB)

    Navarro, A [Dept Fisica, Universidad de Murcia, E30071-Murcia (Spain); Serna, A [Dept Fisica, Computacion y Comunicaciones, Universidad Miguel Hernandez, E03202-Elche (Spain); Alimi, J-M [Lab. de l' Univers et de ses Theories (LUTH, CNRS FRE2462), Observatoire de Paris-Meudon, F92195-Meudon (France)

    2002-08-21

    We present a method to detect, in the framework of scalar-tensor gravity theories, the existence of stationary points in the time evolution of the speed-up factor. An attractive aspect of this method is that, once the particular scalar-tensor theory has been specified, the stationary points are found through a simple algebraic equation which does not contain any integration. By applying this method to the three classes of scalar-tensor theories defined by Barrow and Parsons, we have found several new cosmological models with a non-monotonic evolution of the speed-up factor. The physical interest of these models is that, as previously shown by Serna and Alimi, they predict the observed primordial abundance of light elements for a very wide range of baryon density. These models are then consistent with recent CMB and Lyman-α estimates of the baryon content of the universe.

  13. A note on monotonically star Lindelöf spaces

    African Journals Online (AJOL)

    A space X is monotonically star Lindelöf if one can assign to each open cover U a subspace s(U) ⊆ X, called a kernel, such that s(U) is a Lindelöf subset of X, st(s(U); U) = X, and if V refines U then s(U) ⊆ s(V), where st(s(U); U) = ∪ {U ∈ U : U ∩ s(U) ≠ ∅}. In this paper, we investigate the relationship between ...

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  15. ASPMT(QS): Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories

    OpenAIRE

    Wałęga, Przemysław Andrzej; Bhatt, Mehul; Schultz, Carl

    2015-01-01

    The systematic modelling of dynamic spatial systems [9] is a key requirement in a wide range of application areas such as commonsense cognitive robotics, computer-aided architecture design, and dynamic geographic information systems. We present ASPMT(QS), a novel approach and fully implemented prototype for non-monotonic spatial reasoning (a crucial requirement within dynamic spatial systems) based on Answer Set Programming Modulo Theories (ASPMT). ASPMT(QS) consists of a (qualitative) s...

  16. Monotonicity Conditions for Multirate and Partitioned Explicit Runge-Kutta Schemes

    KAUST Repository

    Hundsdorfer, Willem

    2013-01-01

    Multirate schemes for conservation laws or convection-dominated problems seem to come in two flavors: schemes that are locally inconsistent, and schemes that lack mass-conservation. In this paper these two defects are discussed for one-dimensional conservation laws. Particular attention will be given to monotonicity properties of the multirate schemes, such as maximum principles and the total variation diminishing (TVD) property. The study of these properties will be done within the framework of partitioned Runge-Kutta methods. It will also be seen that the incompatibility of consistency and mass-conservation holds for ‘genuine’ multirate schemes, but not for general partitioned methods.

  17. Assessment of ANN and SVM models for estimating normal direct irradiation (H_b)

    International Nuclear Information System (INIS)

    Santos, Cícero Manoel dos; Escobedo, João Francisco; Teramoto, Érico Tadao; Modenese Gorla da Silva, Silvia Helena

    2016-01-01

    Highlights: • The performance of SVM and ANN in estimating Normal Direct Irradiation (H_b) was evaluated. • 12 models using different input variables are developed (hourly and daily partitions). • The most relevant input variables for DNI are kt, H_sc and the insolation ratio (r′ = n/N). • Support Vector Machine (SVM) provides accurate estimates and outperforms the Artificial Neural Network (ANN). - Abstract: This study evaluates the estimation of hourly and daily normal direct irradiation (H_b) using machine learning (ML) techniques: Artificial Neural Network (ANN) and Support Vector Machine (SVM). Time series of different meteorological variables measured over thirteen years in Botucatu were used for training and validating the ANN and SVM. Seven different sets of input variables were tested and evaluated, chosen based on statistical models reported in the literature. Relative Mean Bias Error (rMBE), Relative Root Mean Square Error (rRMSE), the determination coefficient (R²) and Willmott’s “d” index were used to evaluate the ANN and SVM models. When compared to statistical models which use the same set of input variables (R² between 0.22 and 0.78), ANN and SVM show higher values of R² (hourly models between 0.52 and 0.88; daily models between 0.42 and 0.91). Considering the input variables, the atmospheric transmissivity of global radiation (kt), the integrated solar constant (H_sc) and the insolation ratio (n/N, where n is sunshine duration and N is photoperiod) were the most relevant in the ANN and SVM models. The rMBE and rRMSE values in the two time partitions of the SVM models are lower than those obtained with the ANN. Hourly ANN and SVM models have higher rRMSE values than daily models. Optimal performance with hourly models was obtained with ANN4h (rMBE = 12.24%, rRMSE = 23.99% and “d” = 0.96) and SVM4h (rMBE = 1.75%, rRMSE = 20.10% and “d” = 0.96). Optimal performance with daily models was obtained with ANN2d (rMBE = −3.09%, rRMSE = 18.95% and “d” = 0…
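
    The three error statistics quoted above can be computed as follows, assuming the standard definitions of rMBE, rRMSE and Willmott's d index (y_obs and y_est are hypothetical measured and estimated H_b series, not the Botucatu data):

      import numpy as np

      def rmbe(y_obs, y_est):
          return 100 * (y_est - y_obs).mean() / y_obs.mean()

      def rrmse(y_obs, y_est):
          return 100 * np.sqrt(((y_est - y_obs) ** 2).mean()) / y_obs.mean()

      def willmott_d(y_obs, y_est):
          m = y_obs.mean()
          num = ((y_est - y_obs) ** 2).sum()
          den = ((np.abs(y_est - m) + np.abs(y_obs - m)) ** 2).sum()
          return 1 - num / den

      y_obs = np.array([410.0, 520.0, 610.0, 480.0])   # hypothetical values
      y_est = np.array([400.0, 540.0, 590.0, 500.0])
      print(rmbe(y_obs, y_est), rrmse(y_obs, y_est), willmott_d(y_obs, y_est))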

  18. Physical activity patterns and estimated daily energy expenditures in normal and overweight Tunisian schoolchildren.

    Science.gov (United States)

    Zarrouk, Fayçal; Bouhlel, Ezdine; Feki, Youssef; Amri, Mohamed; Shephard, Roy J

    2009-01-01

    Our aim was to test the normality of physical activity patterns and energy expenditures in normal weight and overweight primary school students. Heart rate estimates of total daily energy expenditure (TEE), active energy expenditure (AEE), and activity patterns were made over 3 consecutive school days in healthy middle-class Tunisian children (46 boys, 44 girls; median age (25th-75th percentile) 9.2 (8.8-9.9) years). Our cross-section included 52 students with a normal body mass index (BMI) and 38 who exceeded age-specific BMI limits. TEE, AEE and overall physical activity level (PAL) were not different between overweight children and those with a normal BMI [median values (25th-75th percentile) 9.20 (8.20-9.84) vs. 8.88 (7.42-9.76) MJ/d, 3.56 (2.59-4.22) vs. 3.85 (2.77-4.78) MJ/d, and 1.74 (1.54-2.04) vs. 1.89 (1.66-2.15), respectively]. Physical activity intensities (PAI) were expressed as percentages of the individual's heart rate reserve (%HRR). The median PAI for the entire day (PAI24) and for the waking part of the day (PAIw) were lower in overweight than in normal weight individuals [16.3 (14.2-18.9) vs. 20.6 (17.9-22.3) %HRR]. Children of normal weight thus spend more time in moderate activity and less time in sedentary pursuits than overweight children.

  19. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Vol. 50, No. 5 (2014), pp. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords: Choquet expectation * strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  1. Estimation of normal chromium-51 ethylene diamine tetra-acetic acid clearance in children

    International Nuclear Information System (INIS)

    Piepsz, A.; Pintelon, H.; Ham, H.R.

    1994-01-01

    In order to estimate the normal range of chromium-51 ethylene diamine tetra-acetic acid (EDTA) clearance in children, we selected a series of 256 patients with past or present urinary tract infection who showed, at the time of the clearance determination, normal technetium-99m dimercaptosuccinic acid (DMSA) scintigraphy and normal left to right DMSA relative uptake. The clearance was calculated by means of either the simplified second exponential method or the 120-min single blood sample; Chantler's correction was used in order to correct for having neglected the first exponential. There was a progressive increase in clearance from the first weeks of life (mean value around 1 month: 55 ml/min/1.73 m²), with a plateau at around 18 months. Between 2 and 17 years of age, the clearance values remained constant, with a mean value of 114 ml/min/1.73 m² (SD: 24 ml/min); this is similar to the level described for inulin clearance. No significant differences were observed between boys and girls, or between clearance values calculated with one or with two blood samples. Taking into account the hour of intravenous injection of the tracer, we did not observe any influence of the lunchtime meal on the distribution of the 51Cr-EDTA clearance values. (orig.)

  2. Uniform persistence and upper Lyapunov exponents for monotone skew-product semiflows

    International Nuclear Information System (INIS)

    Novo, Sylvia; Obaya, Rafael; Sanz, Ana M

    2013-01-01

    Several results of uniform persistence above and below a minimal set of an abstract monotone skew-product semiflow are obtained. When the minimal set has a continuous separation the results are given in terms of the principal spectrum. In the case that the semiflow is generated by the solutions of a family of non-autonomous differential equations of ordinary, delay or parabolic type, the former results are strongly improved. A method of calculus of the upper Lyapunov exponent of the minimal set is also determined. (paper)

  3. Complex, non-monotonic dose-response curves with multiple maxima: Do we (ever) sample densely enough?

    Science.gov (United States)

    Cvrčková, Fatima; Luštinec, Jiří; Žárský, Viktor

    2015-01-01

    We usually expect the dose-response curves of biological responses to quantifiable stimuli to be simple, either monotonic or exhibiting a single maximum or minimum. Deviations are often viewed as experimental noise. However, detailed measurements in plant primary tissue cultures (stem pith explants of kale and tobacco) exposed to varying doses of sucrose, cytokinins (BA or kinetin) or auxins (IAA or NAA) revealed that growth and several biochemical parameters exhibit multiple reproducible, statistically significant maxima over a wide range of exogenous substance concentrations. This results in complex, non-monotonic dose-response curves, reminiscent of previous reports of analogous observations in both metazoan and plant systems responding to diverse pharmacological treatments. These findings suggest the existence of a hitherto neglected class of biological phenomena resulting in dose-response curves exhibiting periodic patterns of maxima and minima, whose causes remain so far uncharacterized, partly due to insufficient sampling frequency used in many studies.

  4. Eigenvalue for Densely Defined Perturbations of Multivalued Maximal Monotone Operators in Reflexive Banach Spaces

    Directory of Open Access Journals (Sweden)

    Boubakari Ibrahimou

    2013-01-01

    The operator is assumed maximal monotone. Using the topological degree theory developed by Kartsatos and Quarcoo, we study the eigenvalue problem in the case where the perturbing operator is single-valued. The existence of continuous branches of eigenvectors of infinite length is then easily extended to, and investigated for, the case where the perturbing operator is multivalued.

  5. Non-monotonic dose-response relationships and endocrine disruptors: a qualitative method of assessment

    OpenAIRE

    Lagarde, Fabien; Beausoleil, Claire; Belcher, Scott M; Belzunces, Luc P; Emond, Claude; Guerbet, Michel; Rousselle, Christophe

    2015-01-01

    Experimental studies investigating the effects of endocrine disruptors frequently identify potential unconventional dose-response relationships called non-monotonic dose-response (NMDR) relationships. Standardized approaches for investigating NMDR relationships in a risk assessment context are missing. The aim of this work was to develop criteria for assessing the strength of NMDR relationships. A literature search was conducted to identify published studies that repor...

  6. Estimation of polyclonal IgG4 hybrids in normal human serum.

    Science.gov (United States)

    Young, Elizabeth; Lock, Emma; Ward, Douglas G; Cook, Alexander; Harding, Stephen; Wallis, Gregg L F

    2014-07-01

    The in vivo or in vitro formation of IgG4 hybrid molecules, wherein the immunoglobulins have exchanged half molecules, has previously been reported under experimental conditions. Here we estimate the incidence of polyclonal IgG4 hybrids in normal human serum and comment on the existence of IgG4 molecules with different immunoglobulin light chains. Polyclonal IgG4 was purified from pooled or individual donor human sera and sequentially fractionated using light-chain affinity and size exclusion chromatography. Fractions were analysed by SDS-PAGE, immunoblotting, ELISA, immunodiffusion and matrix-assisted laser-desorption mass spectrometry. Polyclonal IgG4 purified from normal serum contained IgG4κ, IgG4λ and IgG4κ/λ molecules. Size exclusion chromatography showed that IgG4 was principally present in monomeric form (150 000 MW). SDS-PAGE, immunoblotting and ELISA showed the purity of the three IgG4 samples. Immunodiffusion, light-chain sandwich ELISA and mass spectrometry demonstrated that both κ and λ light chains were present on only the IgG4κ/λ molecules. The amounts of IgG4κ/λ hybrid molecules ranged from 21 to 33% from the five sera analysed. Based on the molecular weight these molecules were formed of two IgG4 heavy chains plus one κ and one λ light chain. Polyclonal IgG (IgG4-depleted) was similarly fractionated according to light-chain specificity. No evidence of hybrid IgG κ/λ antibodies was observed. These results indicate that hybrid IgG4κ/λ antibodies compose a substantial portion of IgG4 from normal human serum. © 2014 John Wiley & Sons Ltd.

  7. Expert system for failures detection and non-monotonic reasoning

    International Nuclear Information System (INIS)

    Assis, Abilio de; Schirru, Roberto

    1997-01-01

    This paper presents the development of a shell named TIGER, intended as an environment for developing expert systems for fault diagnosis in complex industrial plants. A knowledge representation model and an inference engine based on non-monotonic reasoning were developed in order to provide flexibility in representing complex plants as well as the performance needed to satisfy real-time constraints. TIGER is able to provide both the fault that occurred and a hierarchical view of the several causes that led to it. As validation of the developed shell, a monitoring system for the critical safety functions of Angra-1 has been developed. 7 refs., 7 figs., 2 tabs

  8. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    Science.gov (United States)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired from airborne and marine platforms. Large-scale geophysical data sets can be obtained in this way, placing them in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Generally, inversion processing is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data were used to demonstrate the practical value of this new fast inversion method.
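
    One standard way to realize a non-monotone gradient-descent iteration is a Grippo-type line search, sketched below on a toy quadratic standing in for the regularized FTG misfit; whether this matches the authors' exact algorithm is an assumption. The sufficient-decrease test compares against the largest of the last M objective values, so individual steps may increase the objective:

      import numpy as np

      def nonmonotone_gd(f, grad, x, M=10, sigma=1e-4, max_iter=500):
          hist = [f(x)]
          for _ in range(max_iter):
              g = grad(x)
              if np.linalg.norm(g) < 1e-8:
                  break
              t = 1.0
              fmax = max(hist[-M:])          # non-monotone reference value
              while f(x - t * g) > fmax - sigma * t * (g @ g):
                  t *= 0.5                   # backtrack until sufficient decrease
              x = x - t * g
              hist.append(f(x))
          return x

      A = np.diag([1.0, 10.0, 100.0])        # ill-conditioned toy problem
      f = lambda x: 0.5 * x @ A @ x
      grad = lambda x: A @ x
      print(nonmonotone_gd(f, grad, np.array([1.0, 1.0, 1.0])))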

  9. Stereological estimates of nuclear volume in normal germ cells and carcinoma in situ of the human testis

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Müller, J

    1990-01-01

    Carcinoma in situ of the testis may appear many years prior to the development of an invasive tumour. Using point-sampled intercepts, base-line data concerning unbiased stereological estimates of the volume-weighted mean nuclear volume (nuclear vV) were obtained in 50 retrospective serial testicular biopsies from 10 patients with carcinoma in situ. All but two patients eventually developed an invasive growth. Testicular biopsies from 10 normal adult individuals and five prepubertal boys were included as controls. Nuclear vV in testicular carcinoma in situ was significantly larger than that of morphologically normal spermatogonia (2P = 1.0 × 10^-19), with only minor overlap. Normal spermatogonia from controls had, on average, smaller nuclear vV than morphologically normal spermatogonia in biopsies with ipsi- or contra-lateral carcinoma in situ (2P = 5.2 × 10^-3). No difference in nuclear vV was found...

  10. The non-monotonic shear-thinning flow of two strongly cohesive concentrated suspensions

    OpenAIRE

    Buscall, Richard; Kusuma, Tiara E.; Stickland, Anthony D.; Rubasingha, Sayuri; Scales, Peter J.; Teo, Hui-En; Worrall, Graham L.

    2014-01-01

    The behaviour in simple shear of two concentrated and strongly cohesive mineral suspensions showing highly non-monotonic flow curves is described. Two rheometric test modes were employed, controlled stress and controlled shear-rate. In controlled stress mode the materials showed runaway flow above a yield stress, which, for one of the suspensions, varied substantially in value and seemingly at random from one run to the next, such that the up flow-curve appeared to be quite irreproducible. Th...

  11. Monotone measures of ergodicity for Markov chains

    Directory of Open Access Journals (Sweden)

    J. Keilson

    1998-01-01

    Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity, by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.
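
    For a concrete handle on the relaxation time mentioned above, one common proxy is 1/(1 - |lambda_2|), where lambda_2 is the second-largest eigenvalue modulus of the transition matrix; this is only an illustration, since the paper's quantity for general (non-reversible) chains may be defined differently:

      import numpy as np

      P = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.2, 0.7]])   # hypothetical ergodic transition matrix
      mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
      print(1.0 / (1.0 - mods[1]))      # relaxation-time estimate from the spectral gap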

  12. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    Science.gov (United States)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques governed under the Mahalanobis-Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method are required to understand the population data trend clearly, since the method does not consider the effect of outliers, and outliers may cause apparent non-normality under which the classical methods break down. Robust parameter estimates exist that provide satisfactory results when the data contain outliers, as well as when they are free of them; among these are the robust location and scale estimators known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), used as alternatives to the classical mean and standard deviation. Embedding them into the normalization stage of the T-Method can feasibly enhance the accuracy of the T-Method and allows the robustness of the T-Method itself to be analysed. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with a minimal error difference compared to the T-Method. The trend in prediction error percentages is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always a low risk, the T-Method performs better, while for a higher sample size with extreme outliers the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not feasible to adapt HL and SB (or the normal mean and standard deviation) into it, since they provide only a minimal change in error percentages.
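
    A sketch of the two robust estimators named above, in one common formulation (the exact variant used in the paper is an assumption): the Hodges-Lehmann location estimate is the median of pairwise means, and the Shamos scale estimate is the median of pairwise absolute differences, scaled by roughly 1.048 for consistency with the normal standard deviation:

      import numpy as np
      from itertools import combinations

      def hodges_lehmann(x):
          return np.median([(a + b) / 2 for a, b in combinations(x, 2)])

      def shamos(x):
          return 1.0483 * np.median([abs(a - b) for a, b in combinations(x, 2)])

      x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])   # one gross outlier
      print(hodges_lehmann(x), shamos(x))                # barely moved by the outlier
      print(x.mean(), x.std(ddof=1))                     # classical estimates are dragged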

  13. A generalized L1-approach for a kernel estimator of conditional quantile with functional regressors: Consistency and asymptotic normality

    OpenAIRE

    2009-01-01

    Abstract A kernel estimator of the conditional quantile is defined for a scalar response variable given a covariate taking values in a semi-metric space. The approach generalizes the median's L1-norm estimator. The almost complete consistency and asymptotic normality are stated.

  14. MONOTONIC DERIVATIVE CORRECTION FOR CALCULATION OF SUPERSONIC FLOWS WITH SHOCK WAVES

    Directory of Open Access Journals (Sweden)

    P. V. Bulat

    2015-07-01

    Full Text Available Subject of Research. Numerical methods for gas dynamics problems based on exact and approximate solutions of the Riemann problem are considered. We have developed an approach to the solution of the Euler equations describing flows of inviscid compressible gas, based on the finite volume method and on finite difference schemes of various orders of accuracy. The Godunov, Kolgan, Roe, Harten and Chakravarthy-Osher schemes are used in the calculations (the order of accuracy of the finite difference schemes varies from 1st to 3rd). The accuracy and efficiency of the various finite difference schemes are compared on the example of inviscid compressible gas flow in a Laval nozzle, both for continuous acceleration of the flow in the nozzle and in the presence of a nozzle shock wave. Conclusions are drawn about the accuracy of the various finite difference schemes and the time required for the calculations. Main Results. A comparative analysis of difference schemes for integrating the Euler equations has been carried out. These schemes are based on exact and approximate solutions of the problem of an arbitrary discontinuity breakdown. The calculation results show that monotonic derivative correction provides uniformity of the numerical solution in the neighbourhood of the breakdown: on the one hand, it prevents the formation of new points of extremum, providing the monotonicity property; on the other hand, it causes smoothing of existing minima and maxima and a loss of accuracy. Practical Relevance. The developed numerical calculation method makes it possible to perform high-accuracy calculations of flows with strong non-stationary shock and detonation waves, while producing no non-physical solution oscillations on the shock wave front.
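
    As one representative monotonic derivative correction (the paper compares several schemes; minmod is shown here as an illustrative limiter, not necessarily the authors' exact choice), a sketch of slope limiting that avoids creating new extrema in the reconstruction:

      import numpy as np

      def minmod(a, b):
          # zero where the one-sided slopes disagree in sign, else the smaller one
          return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def limited_slopes(u, dx):
          du = np.diff(u) / dx               # one-sided differences
          return minmod(du[:-1], du[1:])     # corrected slope for each interior cell

      u = np.array([0.0, 0.1, 0.5, 2.0, 2.1])   # steep discrete profile
      print(limited_slopes(u, dx=1.0))          # each slope bounded by the smaller neighbour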

  15. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    Science.gov (United States)

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
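
    A sketch of two of the normal-distribution transformations compared above, applied to a skewed synthetic sample rather than the site data; the normal-score transform uses the usual rank-based construction, which is assumed to match the paper's:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      z = rng.lognormal(mean=0.0, sigma=1.2, size=500)   # hot-spot-like skew

      bc, lam = stats.boxcox(z)                          # Box-Cox with fitted lambda
      nscore = stats.norm.ppf((stats.rankdata(z) - 0.5) / z.size)   # normal scores

      print(stats.skew(z), stats.skew(bc), stats.skew(nscore))      # skewness shrinks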

  16. Risk-Sensitive Control of Pure Jump Process on Countable Space with Near Monotone Cost

    International Nuclear Information System (INIS)

    Suresh Kumar, K.; Pal, Chandan

    2013-01-01

    In this article, we study a risk-sensitive control problem with a controlled continuous-time pure jump process on a countable space as the state dynamics. We prove a multiplicative dynamic programming principle, and elliptic and parabolic Harnack's inequalities. Using the multiplicative dynamic programming principle and the Harnack's inequalities, we prove the existence and a characterization of the optimal risk-sensitive control under the near monotone condition

  17. Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems

    International Nuclear Information System (INIS)

    Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George

    2012-01-01

    In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
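
    One standard differentiable approximation with the monotone-convergence property described above is the scaled log-sum-exp; whether it coincides with the authors' construction is an assumption. For p > 0 it bounds the maximum from above and decreases to it as p grows, and a smooth minimum follows from -smooth_max(-x, p):

      import numpy as np

      def smooth_max(x, p):
          x = np.asarray(x, dtype=float)
          m = x.max()                       # subtract the max for numerical stability
          return m + np.log(np.exp(p * (x - m)).sum()) / p

      x = [1.0, 2.0, 3.0]
      for p in (1, 10, 100):
          print(p, smooth_max(x, p))        # decreases monotonically toward max(x) = 3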

  18. Iterative methods for nonlinear set-valued operators of the monotone type with applications to operator equations

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1989-06-01

    The fixed points of set-valued operators satisfying a condition of monotonicity type in real Banach spaces with uniformly convex dual spaces are approximated by recursive averaging processes. Applications to important classes of linear and nonlinear operator equations are also presented. (author). 33 refs

  19. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    Full Text Available Abstract To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
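
    A sketch of a Poisson-family GEE fit of litter size with the animal as the clustering unit, in the spirit of the approach described above; the data frame, column names and effect sizes are fabricated for illustration, and statsmodels stands in for whatever software the authors used:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n_dams, n_parities = 30, 4
      df = pd.DataFrame({
          "animal": np.repeat(np.arange(n_dams), n_parities),
          "parity": np.tile(np.arange(1, n_parities + 1), n_dams),
          "qtl": np.repeat(rng.integers(0, 2, n_dams), n_parities),
      })
      df["litter_size"] = rng.poisson(np.exp(1.8 + 0.15 * df.qtl + 0.02 * df.parity))

      gee = smf.gee("litter_size ~ qtl + parity", groups="animal", data=df,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable())
      print(gee.fit().summary())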

  1. The behavior of welded joint in steel pipe members under monotonic and cyclic loading

    International Nuclear Information System (INIS)

    Chang, Kyong-Ho; Jang, Gab-Chul; Shin, Young-Eui; Han, Jung-Guen; Kim, Jong-Min

    2006-01-01

    Most steel pipe members are joined by welding. The residual stress and the weld metal in a welded joint both influence the behavior of steel pipes. Therefore, to accurately predict the behavior of steel pipes with a welded joint, the influence of welding residual stress and weld metal on the behavior of the pipe must be investigated. In this paper, the residual stress of steel pipes with a welded joint was investigated using a three-dimensional non-steady heat conduction analysis and a three-dimensional thermal elastic-plastic analysis. Based on the results of monotonic and cyclic loading tests, a hysteresis model for weld metal was formulated; the model, proposed by the authors, was applied to a three-dimensional finite element analysis. To investigate the influence of a welded joint in steel pipes under monotonic and cyclic loading, a three-dimensional finite element analysis considering the proposed model and the residual stress was carried out. The influence of a welded joint on the behavior of steel pipe members was investigated by comparing the analytical results for steel pipe with and without a welded joint

  2. An estimation of population doses from a nuclear power plant during normal operation

    International Nuclear Information System (INIS)

    Nowicki, K.

    1975-07-01

    A model is presented for estimation of the potential submersion and inhalation radiation doses to people located within a distance of 1000 km from a nuclear power plant during normal operation. The model was used to calculate doses for people living 200-1000 km from a hypothetical nuclear power facility sited near the geographical centre of Denmark. Two kinds of sources are considered for this situation: unit releases of 15 isotopes of noble gases and iodines, and effluent releases from two types of 1000 MWe Light Water Power Reactors: PWR and BWR. Parameter variations were made and analyzed in order to obtain a better understanding of the mechanisms of the model. (author)

  3. L∞-error estimate for a system of elliptic quasivariational inequalities

    Directory of Open Access Journals (Sweden)

    M. Boulbrachene

    2003-01-01

    Full Text Available We deal with the numerical analysis of a system of elliptic quasivariational inequalities (QVIs). Under W2,p(Ω) regularity of the continuous solution, a quasi-optimal L∞-convergence of a piecewise linear finite element method is established, involving a monotone algorithm of Bensoussan-Lions type and standard uniform error estimates known for elliptic variational inequalities (VIs).

  4. Construction of second order accurate monotone and stable residual distribution schemes for unsteady flow problems

    International Nuclear Information System (INIS)

    Abgrall, Remi; Mezine, Mohamed

    2003-01-01

    The aim of this paper is to construct upwind residual distribution schemes for the time accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of scalar advection equation and to the solution of the compressible Euler equations both in two space dimensions. The first version of the scheme is shown to be, at least in its first order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anahein, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method

  5. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    Science.gov (United States)

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.

  6. On stability and monotonicity requirements of finite difference approximations of stochastic conservation laws with random viscosity

    KAUST Repository

    Pettersson, Per

    2013-05-01

    The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system.It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and for some cases the accuracy is higher for stochastic Galerkin provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady-state. © 2013 Elsevier B.V.

  7. MONOTONIC AND CYCLIC LOADING SIMULATION OF STRUCTURAL STEELWORK BEAM TO COLUMN BOLTED CONNECTIONS WITH CASTELLATED BEAM

    Directory of Open Access Journals (Sweden)

    SAEID ZAHEDI VAHID

    2013-08-01

    Full Text Available Recently, steel extended end-plate connections have been commonly used in rigid steel frames due to their good ductility and ability to dissipate energy. This connection system is recommended for wide use in special moment-resisting frames subjected to vertical monotonic and cyclic loads. However, improper design of a beam-to-column connection can lead to collapses and fatalities. Therefore, extensive study of beam-to-column connection design must be carried out, particularly when the connection is exposed to cyclic loading. This paper presents a Finite Element Analysis (FEA) approach as an alternative method for studying the behavior of such connections. The performance of castellated beam-column end-plate connections up to failure was investigated under monotonic and cyclic loading in the vertical and horizontal directions. The study was carried out through a finite element analysis using the multi-purpose software package LUSAS. The effects of the geometry and location of the openings were also investigated.

  8. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    International Nuclear Information System (INIS)

    Tyson, Jon

    2009-01-01

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  9. Sampling dynamics: an alternative to payoff-monotone selection dynamics

    DEFF Research Database (Denmark)

    Berkemer, Rainer

    The dynamics is neither payoff-monotone nor payoff-positive, which has interesting consequences. This can be demonstrated by application to the travelers dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium, which is Pareto inefficient. Especially when the travelers have many … of the standard game theory result. Both analytical tools and agent-based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized travelers dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter (k), which indicates the number of samples an agent would evaluate before fixing his decision. The special case (k=1) can be treated analytically. The stationary points of the dynamics must be sampling equilibria, and one can calculate that for m>3 there will be an interior solution in addition...

  10. Non-monotonic behavior of electron temperature in argon inductively coupled plasma and its analysis via novel electron mean energy equation

    Science.gov (United States)

    Zhao, Shu-Xia

    2018-03-01

    In this work, the behavior of electron temperature as a function of power in an argon inductively coupled plasma is investigated with a fluid model. The model properly reproduces the non-monotonic variation of temperature with power observed in experiments. By means of a novel electron mean energy equation, proposed for the first time in this article, this behavior of the electron temperature is interpreted. Over the whole power range considered, the skin effect of the radio-frequency electric field results in a localized deposited power density, responsible for an increase of electron temperature with power through one parameter, defined as the power density divided by the electron density. At low powers, the rate fraction of multistep and Penning ionizations of metastables, which consume electron energy twice, increases significantly with power; this dominates over the skin effect and consequently leads to the decrease of temperature with power. In the middle power regime, a transition region of the temperature is given by the competition between the ionizing effect of metastables and the skin effect of the electric field. The power at which the temperature alters its trend moves to the low-power end as the pressure is increased, due to the lack of metastables. The non-monotonic curve of temperature is asymmetric for a short chamber, due to the weak role of the skin effect in increasing the temperature, and tends to become symmetric when the chamber is axially prolonged. The validity of the fluid model in this prediction is also assessed, and the role of neutral gas heating is conjectured. This finding is helpful for understanding the different trends of temperature with power reported in the literature.

  11. Application of non-monotonic logic to failure diagnosis of nuclear power plant

    International Nuclear Information System (INIS)

    Takahashi, M.; Kitamura, M.; Sugiyama, K.

    1989-01-01

    A prototype diagnosis system for nuclear power plants was developed based on a Truth Maintenance System (TMS) and Dempster-Shafer probability theory. The purpose of this paper is to establish a basic technique for a more intelligent, man-computer cooperative diagnosis system. The developed system is capable of carrying out diagnostic inference under imperfect observation conditions, with the help of the proposed TMS-based belief revision procedure and the systematic treatment of uncertainty with Dempster-Shafer theory. The usefulness and potential of the present non-monotonic logic were demonstrated through simulation experiments

  12. Isochronous relaxation curves for type 304 stainless steel after monotonic and cyclic strain

    International Nuclear Information System (INIS)

    Swindeman, R.W.

    1978-01-01

    Relaxation tests to 100 hr were performed on type 304 stainless steel in the temperature range 480 to 650°C and were used to develop isochronous relaxation curves. Behavior after monotonic and cyclic strain was compared. Relaxation differed only slightly as a consequence of the type of previous strain, provided that plastic flow preceded the relaxation period. We observed that the short-time relaxation behavior did not manifest strong heat-to-heat variation in creep strength

  13. Annealing Effects on the Normal-State Resistive Properties of Underdoped Cuprates

    Science.gov (United States)

    Vovk, R. V.; Khadzhai, G. Ya.; Nazyrov, Z. F.; Kamchatnaya, S. N.; Feher, A.; Dobrovolskiy, O. V.

    2018-05-01

    The influence of room-temperature annealing on the parameters of the basal-plane electrical resistance of underdoped YBa2Cu3O7−δ and HoBa2Cu3O7−δ single crystals in the normal and superconducting states is investigated. The form of the derivatives dρ(T)/dT makes it possible to determine the onset temperature of the fluctuation conductivity and indicates a non-uniform distribution of the labile oxygen. Annealing has been revealed to lead to a monotonic decrease in the oxygen deficiency, which primarily manifests itself as a decrease in the residual resistance, an increase of T_c, and a decrease in the Debye temperature.

  14. Non-monotonicity and divergent time scale in Axelrod model dynamics

    Science.gov (United States)

    Vazquez, F.; Redner, S.

    2007-04-01

    We study the evolution of the Axelrod model for cultural diversity, a prototypical non-equilibrium process that exhibits rich dynamics and a dynamic phase transition between diversity and an inactive state. We consider a simple version of the model in which each individual possesses two features that can assume q possibilities. Within a mean-field description in which each individual has just a few interaction partners, we find a phase transition at a critical value qc between an active, diverse state for q < qc and a frozen state. For q ≲ qc, the density of active links is non-monotonic in time and the asymptotic approach to the steady state is controlled by a time scale that diverges as (q − qc)^(−1/2).

  15. Renormalization in charged colloids: non-monotonic behaviour with the surface charge

    International Nuclear Information System (INIS)

    Haro-Perez, C; Quesada-Perez, M; Callejas-Fernandez, J; Schurtenberger, P; Hidalgo-Alvarez, R

    2006-01-01

    The static structure factor S(q) is measured for a set of deionized latex dispersions with different numbers of ionizable surface groups per particle and similar diameters. For a given volume fraction, the height of the main peak of S(q), which is a direct measure of the spatial ordering of latex particles, does not increase monotonically with the number of ionizable groups. This behaviour cannot be described using the classical renormalization scheme based on the cell model. We analyse our experimental data using a renormalization model based on the jellium approximation, which predicts the weakening of the spatial order for moderate and large particle charges. (letter to the editor)

  16. Earth's Outer Core Properties Estimated Using Bayesian Inversion of Normal Mode Eigenfrequencies

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.

    2016-12-01

    The outer core is arguably Earth's most dynamic region, and consists of an iron-nickel liquid with an unknown combination of lighter alloying elements. Frequencies of Earth's normal modes provide the strongest constraints on the radial profiles of compressional wavespeed, VΦ, and density, ρ, in the outer core. Recent great earthquakes have yielded new normal mode measurements; however, mineral physics experiments and calculations are often compared to the Preliminary reference Earth model (PREM), which is 35 years old and does not provide uncertainties. Here we investigate the thermo-elastic properties of the outer core using Earth's free oscillations and a Bayesian framework. To estimate radial structure of the outer core and its uncertainties, we choose to exploit recent datasets of normal mode centre frequencies. Under the self-coupling approximation, centre frequencies are unaffected by lateral heterogeneities in the Earth, for example in the mantle. Normal modes are sensitive to both VΦ and ρ in the outer core, with each mode's specific sensitivity depending on its eigenfunctions. We include a priori bounds on outer core models that ensure compatibility with measurements of mass and moment of inertia. We use Bayesian Monte Carlo Markov Chain techniques to explore different choices in parameterizing the outer core, each of which represents different a priori constraints. We test how results vary (1) assuming a smooth polynomial parametrization, (2) allowing for structure close to the outer core's boundaries, (3) assuming an Equation-of-State and adiabaticity and inverting directly for thermo-elastic parameters. In the second approach we recognize that the outer core may have distinct regions close to the core-mantle and inner core boundaries and investigate models which parameterize the well mixed outer core separately from these two layers. In the last approach we seek to map the uncertainties directly into thermo-elastic parameters including the bulk …

  17. Elucidation of the effects of cementite morphology on damage formation during monotonic and cyclic tension in binary low carbon steels using in situ characterization

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Motomichi, E-mail: koyama@mech.kyushu-u.ac.jp [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan); Yu, Yachen; Zhou, Jia-Xi [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan); Yoshimura, Nobuyuki [Nippon Steel & Sumitomo Metal Corporation, 20-1 Shintomi, Futtsu, Chiba 293-8511 (Japan); Sakurada, Eisaku [Nippon Steel & Sumitomo Metal Corporation, 5-3 Tokai, Aichi 476-8686 (Japan); Ushioda, Kohsaku [Nippon Steel & Sumitomo Metal Corporation, 20-1 Shintomi, Futtsu, Chiba 293-8511 (Japan); Noguchi, Hiroshi [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan)

    2016-06-14

    The effects of the morphology and distribution of cementite on damage formation were studied using in situ scanning electron microscopy under monotonic and cyclic tension. To investigate the effects of the morphology/distribution of cementite, intergranular cementite precipitation (ICP) and transgranular cementite precipitation (TCP) steels were prepared from an ingot of Fe-0.017 wt% C binary alloy using different heat treatments. In all cases, the damage incidents were observed primarily at the grain boundaries. The damage morphology was dependent on the cementite morphology and loading condition. Monotonic tension in the ICP steel caused cracks across the cementite plates, located at the grain boundaries. In contrast, fatigue loading in the ICP steel induced cracking at the ferrite/cementite interface. Moreover, in the TCP steel, monotonic tension- and cyclic tension-induced intergranular cracking was distinctly observed, due to the slip localization associated with a limited availability of free slip paths. When a notch is introduced to the ICP steel specimen, the morphology of the cyclic tension-induced damage at the notch tip changed to resemble that across the intergranular cementite, and was rather similar to the monotonic tension-induced damage. The damage at the notch tip coalesced with the main crack, accelerating the growth of the fatigue crack.

  19. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

    Full Text Available This paper seeks to use the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is established using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; finally, based on the residual correction concept, the complex constrained solution problems are transformed into simpler problems of equation iteration. As verified by the four examples given in this paper, the proposed method can quickly yield upper and lower solutions for problems of this kind, and can easily identify the error range between the mean approximate solution and the exact solution.
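    A minimal sketch of the classical monotone iteration for a two-point boundary value problem, assuming the model problem u'' = exp(u) with zero boundary values (our choice, not one of the paper's four examples): starting from an upper and a lower solution yields decreasing and increasing iterates that bracket the exact solution, which is the bracketing idea the abstract describes. The paper's cubic-spline discretization and residual correction steps are not reproduced here.

```python
import numpy as np

# Discretize u'' = exp(u), u(0) = u(1) = 0, and run the monotone iteration
# u_{k+1}'' - M*u_{k+1} = f(u_k) - M*u_k with M >= max f'(u) on [alpha, beta].
# beta = 0 is an upper solution; alpha = x(x-1)/2 is a lower solution.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
M = 1.0                                    # M >= e^0 = max f' between alpha and beta

# Interior second-difference operator minus M*I
A = (np.diag(-2.0 * np.ones(n - 2)) +
     np.diag(np.ones(n - 3), 1) +
     np.diag(np.ones(n - 3), -1)) / h**2 - M * np.eye(n - 2)

def sweep(u):
    rhs = np.exp(u[1:-1]) - M * u[1:-1]    # f(u_k) - M*u_k at interior nodes
    v = np.zeros_like(u)                   # boundary values stay zero
    v[1:-1] = np.linalg.solve(A, rhs)
    return v

upper = np.zeros(n)                        # upper solution: decreasing iterates
lower = 0.5 * x * (x - 1.0)                # lower solution: increasing iterates
for _ in range(30):
    upper, lower = sweep(upper), sweep(lower)

print("max gap between upper and lower iterates:", np.max(upper - lower))
```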

  20. Post-error expression of speed and force while performing a simple, monotonous task with a haptic pen

    NARCIS (Netherlands)

    Bruns, M.; Keyson, D.V.; Jabon, M.E.; Hummels, C.C.M.; Hekkert, P.P.M.; Bailenson, J.N.

    2013-01-01

    Control errors often occur in repetitive and monotonous tasks, such as manual assembly tasks. Much research has been done in the area of human error identification; however, most existing systems focus solely on the prediction of errors, not on increasing worker accuracy. The current study examines

  1. Non-monotonic piezoresistive behaviour of graphene nanoplatelet (GNP)-polymer composite flexible films prepared by solvent casting

    Directory of Open Access Journals (Sweden)

    S. Makireddi

    2017-07-01

    Full Text Available Graphene-polymer nanocomposite films show good piezoresistive behaviour, and it is reported that the sensitivity increases either with increased sheet resistance or with decreased number density of the graphene fillers. Little is known about this behaviour near the percolation region. In this study, graphene nanoplatelet (GNP)/poly(methyl methacrylate) (PMMA) flexible films are fabricated via a solution casting process at varying weight percent of GNP. The electrical and piezoresistive behaviour of these films is studied as a function of GNP concentration. The piezoresistive strain sensitivity of the films is measured by affixing the film to an aluminium specimen which is subjected to monotonic uniaxial tensile load. The change in resistance of the film with strain is monitored using a four-probe method. An electrical percolation threshold at 3 weight percent of GNP is observed. We report non-monotonic piezoresistive behaviour of these films as a function of GNP concentration. We observe an increase in gauge factor (GF) with the unstrained resistance of the films up to a critical resistance corresponding to the percolation threshold. Beyond this limit the GF decreases with unstrained resistance.
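    The gauge factor mentioned above is a one-line computation; the sketch below uses invented resistance and strain values purely for illustration.

```python
# Gauge factor from a monotonic tensile test: GF = (dR/R0) / strain.
# The values below are illustrative, not measurements from the paper.
R0 = 1.0e5            # unstrained film resistance (ohm), assumed
R = 1.006e5           # resistance at 0.2% strain, assumed
strain = 0.002
GF = (R - R0) / R0 / strain
print(f"gauge factor = {GF:.1f}")   # -> 3.0
```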

  2. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions ensuring that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions, but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Experimental Feasibility Study of Estimation of the Normalized Central Blood Pressure Waveform from Radial Photoplethysmogram

    Directory of Open Access Journals (Sweden)

    Edmond Zahedi

    2015-01-01

    Full Text Available The feasibility of a novel system to reliably estimate the normalized central blood pressure (CBPN) from the radial photoplethysmogram (PPG) is investigated. Right-wrist radial blood pressure and left-wrist PPG were simultaneously recorded on five different days. An industry-standard applanation tonometer was employed for recording radial blood pressure. The CBP waveform was amplitude-normalized to determine CBPN. A total of fifteen second-order autoregressive models with exogenous input were investigated using system identification techniques. Among these 15 models, the model producing the lowest coefficient of variation (CV) of the fitness during the five days was selected as the reference model. Results show that the proposed model is able to faithfully reproduce CBPN (mean fitness = 85.2% ± 2.5%) from the radial PPG for all 15 segments during the five recording days. The low CV value of 3.35% suggests a stable model valid for different recording days.

  4. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we expect, or are interested to know whether, a density function or a regression curve satisfies some specific shape constraint. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However, in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  5. Non-invasive estimation of myocardial efficiency using positron emission tomography and carbon-11 acetate - comparison between the normal and failing human heart

    International Nuclear Information System (INIS)

    Bengel, F.M.; Nekolla, S.; Schwaiger, M.; Ungerer, M.

    2000-01-01

    We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with ¹¹C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A ''stroke work index'' (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a ''work-metabolic index'' (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for ¹¹C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19%±10% and end-diastolic volume was 92±28 ml/m² (vs 64%±7% and 55±8 ml/m² in normals; P<0.001). The washout constant k(mono) and the WMI (in mmHg x ml/m²) were also lower in DCM patients (P<0.001). Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients, a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
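    The two indices are defined explicitly in the abstract, so they translate directly into code; the input values below are illustrative, not patient data.

```python
# Compute the stroke work index (SWI) and work-metabolic index (WMI)
# exactly as defined in the abstract; input values are illustrative.
sbp = 120.0          # systolic blood pressure (mmHg)
sv = 70.0            # stroke volume (ml)
bsa = 1.8            # body surface area (m^2)
hr = 65.0            # heart rate (min^-1)
k_mono = 0.055       # 11C-acetate washout constant (min^-1), assumed

swi = sbp * sv / bsa                 # mmHg * ml / m^2
wmi = swi * hr / k_mono              # efficiency estimate
print(f"SWI = {swi:.0f} mmHg*ml/m^2, WMI = {wmi:.2e} mmHg*ml/m^2")
```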

  6. Radiation Dose Estimates in Indian Adults in Normal and Pathological Conditions due to 99Tcm-Labelled Radiopharmaceuticals

    International Nuclear Information System (INIS)

    Tyagi, K.; Jain, S.C.; Jain, P.C.

    2001-01-01

    ICRP Publications 53, 62 and 80 give organ dose coefficients and effective doses to ICRP Reference Man and Child from established nuclear medicine procedures. However, an average Indian adult differs significantly from the ICRP Reference Man as regards anatomical, physiological and metabolic characteristics, and is also considered to have different tissue weighting factors (called here risk factors). The masses of total body and most organs are significantly lower for the Indian adult than for his ICRP counterpart (e.g. body mass 52 and 70 kg respectively). Similarly, the risk factors are lower by 20-30% for 8 out of the 13 organs and 30-60% higher for 3 organs. In the present study, available anatomical data of Indians and their risk factors have been utilised to estimate the radiation doses from administration of commonly used 99Tcm-labelled radiopharmaceuticals under normal and certain pathological conditions. The following pathological conditions have been considered: for phosphates/phosphonates - high bone uptake and severely impaired kidney function; IDA - parenchymal liver disease, occlusion of cystic duct, and occlusion of bile duct; DTPA - abnormal renal function; large colloids - early to intermediate diffuse parenchymal liver disease, intermediate to advanced parenchymal liver disease; small colloids - early to intermediate parenchymal liver disease, intermediate to advanced parenchymal liver disease; and MAG3 - abnormal renal function, acute unilateral renal blockage. The estimated 'effective doses' to Indian adults are 14-21% greater than the ICRP value from administration of the same activity of radiopharmaceutical under normal physiological conditions based on anatomical considerations alone, because of the smaller organ masses for the Indian; for some pathological conditions the effective doses are 11-22% more. When tissue risk factors are considered in addition to anatomical considerations, the estimated effective doses are still found to be

  7. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    Science.gov (United States)

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) there is the statistical one (Poisson), in which the activation of motor units is carried out by electrical stimulation and the estimation performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to give an approachable account of the Poisson MUNE technique, showing a comprehensible view of its methodology, and also to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for all of them was 184±49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group, and MUNE correlated with age more strongly than CMAP amplitude did (correlation coefficients 0.5002 and 0.4142, respectively), consistent with the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.
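    A hedged sketch, on simulated data, of the Poisson reasoning behind statistical MUNE: if a scaled Poisson count of all-or-none units underlies the response at a fixed submaximal stimulus, then the variance-to-mean ratio of the responses estimates the single-unit amplitude. The amplitudes and rates below are assumptions for illustration, not the Viking IV implementation.

```python
import numpy as np

# Poisson identity: if response = amp * N with N ~ Poisson(lam), then
# var(response) / mean(response) = amp, so MUNE = CMAP_max / amp.
rng = np.random.default_rng(1)

true_unit_amp = 0.05      # mV, assumed mean single motor unit amplitude
cmap_max = 9.2            # mV, assumed maximal CMAP
responses = true_unit_amp * rng.poisson(lam=12.0, size=300)  # simulated responses

unit_amp_est = responses.var() / responses.mean()   # var/mean = unit amplitude
mune = cmap_max / unit_amp_est
print(f"estimated unit amplitude = {unit_amp_est:.3f} mV, MUNE ~ {mune:.0f}")
```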

  8. Convex analysis and monotone operator theory in Hilbert spaces

    CERN Document Server

    Bauschke, Heinz H

    2017-01-01

    This reference text, now in its second edition, offers a modern unifying presentation of three basic areas of nonlinear analysis: convex analysis, monotone operator theory, and the fixed point theory of nonexpansive operators. Taking a unique comprehensive approach, the theory is developed from the ground up, with the rich connections and interactions between the areas as the central focus, and it is illustrated by a large number of examples. The Hilbert space setting of the material offers a wide range of applications while avoiding the technical difficulties of general Banach spaces. The authors have also drawn upon recent advances and modern tools to simplify the proofs of key results, making the book more accessible to a broader range of scholars and users. Combining a strong emphasis on applications with exceptionally lucid writing and an abundance of exercises, this text is of great value to a large audience including pure and applied mathematicians as well as researchers in engineering, data science, ma...

  9. Non-Monotonic Survival of Staphylococcus aureus with Respect to Ciprofloxacin Concentration Arises from Prophage-Dependent Killing of Persisters

    Directory of Open Access Journals (Sweden)

    Elizabeth L. Sandvik

    2015-11-01

    Full Text Available Staphylococcus aureus is a notorious pathogen with a propensity to cause chronic, non-healing wounds. Bacterial persisters have been implicated in the recalcitrance of S. aureus infections, and this motivated us to examine the persistence of S. aureus to ciprofloxacin, a quinolone antibiotic. Upon treatment of exponential phase S. aureus with ciprofloxacin, we observed that survival was a non-monotonic function of ciprofloxacin concentration. Maximal killing occurred at 1 µg/mL ciprofloxacin, which corresponded to survival that was up to ~40-fold lower than that obtained with concentrations ≥ 5 µg/mL. Investigation of this phenomenon revealed that the non-monotonic response was associated with prophage induction, which facilitated killing of S. aureus persisters. Elimination of prophage induction with tetracycline was found to prevent cell lysis and persister killing. We anticipate that these findings may be useful for the design of quinolone treatments.

  10. Non-monotonic resonance in a spatially forced Lengyel-Epstein model

    Energy Technology Data Exchange (ETDEWEB)

    Haim, Lev [Physics Department, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Department of Oncology, Soroka University Medical Center, Beer-Sheva 84101 (Israel); Hagberg, Aric [Center for Nonlinear Studies, Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Meron, Ehud [Physics Department, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Department of Solar Energy and Environmental Physics, BIDR, Ben-Gurion University of the Negev, Sede Boqer Campus, Midreshet Ben-Gurion 84990 (Israel)

    2015-06-15

    We study resonant spatially periodic solutions of the Lengyel-Epstein model modified to describe the chlorine dioxide-iodine-malonic acid reaction under spatially periodic illumination. Using multiple-scale analysis and numerical simulations, we obtain the stability ranges of 2:1 resonant solutions, i.e., solutions with wavenumbers that are exactly half of the forcing wavenumber. We show that the width of resonant wavenumber response is a non-monotonic function of the forcing strength, and diminishes to zero at sufficiently strong forcing. We further show that strong forcing may result in a π/2 phase shift of the resonant solutions, and argue that the nonequilibrium Ising-Bloch front bifurcation can be reversed. We attribute these behaviors to an inherent property of forcing by periodic illumination, namely, the increase of the mean spatial illumination as the forcing amplitude is increased.

  11. Estimation of coronary flow reserve by sestamibi imaging in patients with mild hypertension and normal coronary arteries

    International Nuclear Information System (INIS)

    Storto, G.; Gallicchio, R.; Maddalena, F.; Pellegrino, T.; Petretta, M.; Fiumara, G.; Cuocolo, A.

    2015-01-01

    Patients with hypertension may exhibit abnormal vasodilator capacity during pharmacological vasodilatation. We assessed coronary flow reserve (CFR) by sestamibi imaging in hypertensive patients with normal coronary vessels. Twenty-five patients with untreated mild essential hypertension and normal coronary vessels and 10 control subjects underwent dipyridamole-rest Tc-99m sestamibi imaging. Myocardial blood flow (MBF) was estimated by measuring first transit counts in the pulmonary artery and myocardial counts from tomographic images. CFR was expressed as the ratio of stress to rest MBF. Coronary vascular resistances (CVR) were computed as the ratio between mean arterial pressure and MBF. Estimated MBF at rest was not different in patients and controls (1.11±0.59 vs. 1.14±0.28 counts/pixel/s; P=0.87). Conversely, stress MBF was lower in patients than in controls (1.55±0.47 vs. 2.68±0.53 counts/pixel/s; P<0.001). Thus, CFR was reduced in patients compared to controls (1.61±0.58 vs. 2.43±0.62; P<0.001). Rest and stress CVR values were higher in patients (P<0.001), while stress-induced changes in CVR were not different (P=0.08) between patients (-51%) and controls (-62%). In the overall study population, a significant relation between CFR and stress-induced changes in CVR was observed (r=-0.86; P<0.001). Sestamibi imaging may detect impaired coronary vascular function in response to dipyridamole in patients with untreated mild essential hypertension and normal coronary arteries. A mild increase in arterial blood pressure does not affect baseline MBF, but impairs coronary reserve due to amplified resting coronary resistances.

  12. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. By contrast, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
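    A small simulation in the spirit of the commentary, assuming skewed mean-zero errors and a large sample: the 95% confidence interval for the slope still attains close to nominal coverage despite the normality violation.

```python
import numpy as np
from scipy import stats

# Skewed (non-normal) errors, large n: does the 95% CI for the slope
# still cover the true value about 95% of the time?
rng = np.random.default_rng(2)
true_slope, n, n_sim, hits = 0.5, 1000, 2000, 0

for _ in range(n_sim):
    x = rng.normal(size=n)
    err = rng.exponential(scale=1.0, size=n) - 1.0   # skewed, mean-zero errors
    y = 1.0 + true_slope * x + err
    res = stats.linregress(x, y)
    half = 1.96 * res.stderr
    hits += (res.slope - half) <= true_slope <= (res.slope + half)

print(f"coverage = {hits / n_sim:.3f}")   # close to 0.95 despite non-normality
```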

  13. Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays.

    Science.gov (United States)

    Mejias, Jorge F; Payeur, Alexandre; Selin, Erik; Maler, Leonard; Longtin, André

    2014-01-01

    The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry-also known as "open-loop feedback"-, which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.

  14. Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays

    Directory of Open Access Journals (Sweden)

    Jorge F Mejias

    2014-02-01

    Full Text Available The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry — also known as 'open-loop feedback' —, which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.

  15. The monotonicity and convexity of a function involving the digamma function and their applications

    OpenAIRE

    Yang, Zhen-Hang

    2014-01-01

    Let $\\mathcal{L}(x,a)$ be defined on $\\left( -1,\\infty \\right) \\times \\left( 4/15,\\infty \\right) $ or $\\left( 0,\\infty \\right) \\times \\left( 1/15,\\infty \\right) $ by the formula% \\begin{equation*} \\mathcal{L}(x,a)=\\tfrac{1}{90a^{2}+2}\\ln \\left( x^{2}+x+\\tfrac{3a+1}{3}% \\right) +\\tfrac{45a^{2}}{90a^{2}+2}\\ln \\left( x^{2}+x+\\allowbreak \\tfrac{% 15a-1}{45a}\\right) . \\end{equation*} We investigate the monotonicity and convexity of the function $x\\rightarrow F_{a}\\left( x\\right) =\\psi \\left( x+1\\r...

  16. Convergence rates and finite-dimensional approximations for nonlinear ill-posed problems involving monotone operators in Banach spaces

    International Nuclear Information System (INIS)

    Nguyen Buong.

    1992-11-01

    The purpose of this paper is to investigate convergence rates for an operator version of Tikhonov regularization constructed by dual mapping for nonlinear ill-posed problems involving monotone operators in real reflexive Banach spaces. The obtained results are considered in combination with finite-dimensional approximations for the space. An example is considered for illustration. (author). 15 refs

  17. Estimating the Heading Direction Using Normal Flow

    Science.gov (United States)

    1994-01-01

  18. Simplest bifurcation diagrams for monotone families of vector fields on a torus

    Science.gov (United States)

    Baesens, C.; MacKay, R. S.

    2018-06-01

    In part 1, we prove that the bifurcation diagram for a monotone two-parameter family of vector fields on a torus has to be at least as complicated as the conjectured simplest one proposed in Baesens et al (1991 Physica D 49 387–475). To achieve this, we define ‘simplest’ by sequentially minimising the numbers of equilibria, Bogdanov–Takens points, closed curves of centre and of neutral saddle, intersections of curves of centre and neutral saddle, Reeb components, other invariant annuli, arcs of rotational homoclinic bifurcation of horizontal homotopy type, necklace points, contractible periodic orbits, points of neutral horizontal homoclinic bifurcation and half-plane fan points. We obtain two types of simplest case, including that initially proposed. In part 2, we analyse the bifurcation diagram for an explicit monotone family of vector fields on a torus and prove that it has at most two equilibria, precisely four Bogdanov–Takens points, no closed curves of centre nor closed curves of neutral saddle, at most two Reeb components, precisely four arcs of rotational homoclinic connection of ‘horizontal’ homotopy type, eight horizontal saddle-node loop points, two necklace points, four points of neutral horizontal homoclinic connection, and two half-plane fan points, and there is no simultaneous existence of centre and neutral saddle, nor contractible homoclinic connection to a neutral saddle. Furthermore, we prove that all saddle-nodes, Bogdanov–Takens points, non-neutral and neutral horizontal homoclinic bifurcations are non-degenerate and the Hopf condition is satisfied for all centres. We also find it has four points of degenerate Hopf bifurcation. It thus provides an example of a family satisfying all the assumptions of part 1 except the one of at most one contractible periodic orbit.

  19. Raman D-band in the irradiated graphene: Origin of the non-monotonous dependence of its intensity with defect concentration

    International Nuclear Information System (INIS)

    Codorniu Pujals, Daniel

    2013-01-01

    Raman spectroscopy is one of the most used experimental techniques in studying irradiated carbon nanostructures, in particular graphene, due to its high sensitivity to the presence of defects in the crystalline lattice. Special attention has been given to the variation of the intensity of the Raman D-band of graphene with the concentration of defects produced by irradiation. Nowadays, there is enough experimental evidence of the non-monotonic character of that dependence, but the explanation of this behavior is still controversial. In the present work we developed a simplified mathematical model to obtain a functional relationship between these two magnitudes and showed that the non-monotonic dependence is intrinsic to the nature of the D-band and is not necessarily linked to amorphization processes. The obtained functional dependence was used to fit experimental data taken from other authors. The coefficient of determination of the fit was 0.96.

  20. Behaviour of steel-concrete composite columns under monotonic loading. An experimental study

    Directory of Open Access Journals (Sweden)

    Cristina Câmpian

    2006-01-01

    Full Text Available For more than one hundred years, construction systems based on steel or composite steel-concrete frames have been among the most widely used building types in civil engineering. For an optimal dimensioning of the structure, engineers must find a compromise between the structural requirements of resistance, stiffness and ductility on one side, and architectural requirements on the other. Three monotonic tests and nine cyclic tests according to the ECCS loading procedure were carried out in the Cluj Laboratory of Concrete. The tested composite columns, of the fully encased type, were subjected to a variable transverse load at one end while a constant axial compression force was maintained in them. An analytical interpretation is given for the calculation of column stiffness for the monotonic tests, with a comparison against the latest versions of the Eurocode 4 stiffness formula.

  1. Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2012-04-15

    In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Low dose effects and non-monotonic dose responses for endocrine active chemicals: Science to practice workshop: Workshop summary

    DEFF Research Database (Denmark)

    Beausoleil, Claire; Ormsby, Jean-Nicolas; Gies, Andreas

    2013-01-01

    A workshop was held in Berlin September 12–14th 2012 to assess the state of the science of the data supporting low dose effects and non-monotonic dose responses (“low dose hypothesis”) for chemicals with endocrine activity (endocrine disrupting chemicals or EDCs). This workshop consisted of lectu...

  3. Solvability conditions of the Cauchy problem for two-dimensional systems of linear functional differential equations with monotone operators

    Czech Academy of Sciences Publication Activity Database

    Šremr, Jiří

    2007-01-01

    Roč. 132, č. 3 (2007), s. 263-295 ISSN 0862-7959 R&D Projects: GA ČR GP201/04/P183 Institutional research plan: CEZ:AV0Z10190503 Keywords : system of functional differential equations with monotone operators * initial value problem * unique solvability Subject RIV: BA - General Mathematics

  4. The Marotto Theorem on planar monotone or competitive maps

    International Nuclear Information System (INIS)

    Yu Huang

    2004-01-01

    In 1978, Marotto generalized Li-Yorke's results on the criterion for chaos from one-dimensional discrete dynamical systems to n-dimensional discrete dynamical systems, showing that the existence of a non-degenerate snap-back repeller implies chaos in the sense of Li-Yorke. This theorem is very useful in predicting and analyzing discrete chaos in multi-dimensional dynamical systems. However, it is well known that there is an error in the conditions of the original Marotto theorem, and several authors have tried to correct it in different ways. Chen, Hsu and Zhou pointed out that verifying the 'non-degeneracy' of a snap-back repeller is in general the most difficult step, and expected, 'almost beyond reasonable doubt', that the existence of even a degenerate snap-back repeller still implies chaos; they posed this as a conjecture. In this paper, we give necessary and sufficient conditions for chaos in the sense of Li-Yorke for planar monotone or competitive discrete dynamical systems and solve the Chen-Hsu-Zhou conjecture for such kinds of systems.

  5. The Monotonic Lagrangian Grid for Fast Air-Traffic Evaluation

    Science.gov (United States)

    Alexandrov, Natalia; Kaplan, Carolyn; Oran, Elaine; Boris, Jay

    2010-01-01

    This paper describes the continued development of a dynamic air-traffic model, ATMLG, intended for rapid evaluation of rules and methods to control and optimize transport systems. The underlying data structure is based on the Monotonic Lagrangian Grid (MLG), which is used for sorting and ordering positions and other data needed to describe N moving bodies, and their interactions. In ATMLG, the MLG is combined with algorithms for collision avoidance and updating aircraft trajectories. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. In this paper, we use ATMLG to examine how the ability to maintain a required separation between aircraft decreases as the number of aircraft in the volume increases. This requires keeping track of the primary and subsequent collision avoidance maneuvers necessary to maintain a five mile separation distance between all aircraft. Simulation results show that the number of collision avoidance moves increases exponentially with the number of aircraft in the volume.
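    A toy 2-D illustration of the MLG construction described above (sort bodies by x into columns, then by y within each column), assuming a perfect-square number of bodies; this is a simplified sketch, not the production ATMLG algorithm.

```python
import numpy as np

# Monotonic Lagrangian Grid idea: sort bodies so grid indices increase
# monotonically with position, making spatial neighbours also near
# neighbours in the data arrays.
rng = np.random.default_rng(3)
N = 16                                  # perfect square for a 4x4 grid
side = int(np.sqrt(N))
pts = rng.uniform(size=(N, 2))          # aircraft positions (x, y)

order = np.argsort(pts[:, 0])           # 1) sort all bodies by x into columns
grid = np.empty((side, side, 2))
for c in range(side):
    col = pts[order[c * side:(c + 1) * side]]
    grid[c] = col[np.argsort(col[:, 1])]  # 2) sort each column by y

# Monotonicity check: y increases along each column of the grid
assert all(np.all(np.diff(grid[c, :, 1]) >= 0) for c in range(side))
# Candidate conflicts for cell (i, j) are just the adjacent cells (i±1, j±1)
```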

  6. The response of a linear monostable system and its application in parameters estimation for PSK signals

    Science.gov (United States)

    Duan, Chaowei; Zhan, Yafeng

    2016-03-01

    The output characteristics of a linear monostable system driven by a periodic signal and additive white Gaussian noise are studied in this paper. Theoretical analysis shows that the output signal-to-noise ratio (SNR) decreases monotonically with increasing noise intensity, but the output SNR gain is stable. Inspired by this high SNR-gain phenomenon, this paper applies the linear monostable system in a parameter estimation algorithm for phase shift keying (PSK) signals and improves the estimation performance.

  7. Evaluation of the Monotonic Lagrangian Grid and Lat-Long Grid for Air Traffic Management

    Science.gov (United States)

    Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay

    2011-01-01

    The Air Traffic Monotonic Lagrangian Grid (ATMLG) is used to simulate a 24 hour period of air traffic flow in the National Airspace System (NAS). During this time period, there are 41,594 flights over the United States, and the flight plan information (departure and arrival airports and times, and waypoints along the way) is obtained from a Federal Aviation Administration (FAA) Enhanced Traffic Management System (ETMS) dataset. Two simulation procedures are tested and compared: one based on the Monotonic Lagrangian Grid (MLG), and the other based on the stationary Latitude-Longitude (Lat-Long) grid. Simulating one full day of air traffic over the United States required the following amounts of CPU time on a single processor of an SGI Altix: 88 s for the MLG method, and 163 s for the Lat-Long grid method. We present a discussion of the amount of CPU time required for each of the simulation processes (updating aircraft trajectories, sorting, conflict detection and resolution, etc.), and show that the main advantage of the MLG method is that it is a general sorting algorithm that can sort on multiple properties. We discuss how many MLG neighbors must be considered in the separation assurance procedure in order to ensure a five-mile separation buffer between aircraft, and we investigate the effect of removing waypoints from aircraft trajectories. When aircraft choose their own trajectory, there are more flights with shorter duration times and fewer CD&R maneuvers, resulting in significant fuel savings.

  8. Application of a repetitive process setting to design of monotonically convergent iterative learning control

    Science.gov (United States)

    Boski, Marcin; Paszke, Wojciech

    2015-11-01

    This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to limited frequency range design specifications. A new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
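    A generic P-type ILC loop on an invented scalar plant, showing the trial-to-trial error decay that such designs aim to guarantee; the paper's LMI-based gain synthesis is not reproduced, and the learning gain below is hand-picked to satisfy the usual contraction condition.

```python
import numpy as np

# P-type iterative learning control: u_{k+1}(t) = u_k(t) + L * e_k(t+1).
# Plant and gain are illustrative; convergence needs |1 - L*C*B| < 1.
A, B, C = 0.8, 1.0, 0.5                 # scalar state-space plant
T = 50                                  # samples per trial
ref = np.sin(np.linspace(0, 2 * np.pi, T))
L = 1.2                                 # learning gain: |1 - 1.2*0.5| = 0.4 < 1

def run_trial(u):
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        y[t] = C * x
        x = A * x + B * u[t]
    return y

u = np.zeros(T)
for k in range(8):
    e = ref - run_trial(u)
    print(f"trial {k}: ||e|| = {np.linalg.norm(e):.4f}")   # decays trial to trial
    u = u + L * np.roll(e, -1)          # shift so u(t) is corrected by e(t+1)
```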

  9. Monotonous and oscillation instability of mechanical equilibrium of isothermal three-components mixture with zero-gradient density

    International Nuclear Information System (INIS)

    Zhavrin, Yu.I.; Kosov, V.N.; Kul'zhanov, D.U.; Karataev, K.K.

    2000-01-01

    It is shown experimentally that two types of instability of mechanical equilibrium of a mixture arise during isothermal diffusion in a multicomponent system with zero density gradient. It is proved theoretically that, when the partial Rayleigh numbers R₁ and R₂ have different signs, there are two regions of instability, one monotonic and one oscillatory. The experimental data confirm the presence of these regions and are satisfactorily described by the presented theory. (author)

  10. TumorBoost: Normalization of allele-specific tumor copy numbers from a single pair of tumor-normal genotyping microarrays

    Directory of Open Access Journals (Sweden)

    Neuvial Pierre

    2010-05-01

    Full Text Available Abstract Background High-throughput genotyping microarrays assess both total DNA copy number and allelic composition, which makes them a tool of choice for copy number studies in cancer, including total copy number and loss of heterozygosity (LOH) analyses. Even after state-of-the-art preprocessing methods, allelic signal estimates from genotyping arrays still suffer from systematic effects that make them difficult to use effectively for such downstream analyses. Results We propose a method, TumorBoost, for normalizing allelic estimates of one tumor sample based on estimates from a single matched normal. The method applies to any paired tumor-normal estimates from any microarray-based technology, combined with any preprocessing method. We demonstrate that it increases the signal-to-noise ratio of allelic signals, making it significantly easier to detect allelic imbalances. Conclusions TumorBoost increases the power to detect somatic copy-number events (including copy-neutral LOH) in the tumor from allelic signals of Affymetrix or Illumina origin. We also conclude that high-precision allelic estimates can be obtained from a single pair of tumor-normal hybridizations, if TumorBoost is combined with single-array preprocessing methods such as (allele-specific) CRMA v2 for Affymetrix or BeadStudio's (proprietary) XY-normalization method for Illumina. A bounded-memory implementation is available in the open-source and cross-platform R package aroma.cn, which is part of the Aroma Project (http://www.aroma-project.org/).
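    A deliberately naive matched-normal correction of tumor B-allele fractions on simulated data, shown only to convey the single-pair normalization idea; it is not the published TumorBoost estimator, which is implemented in the aroma.cn R package.

```python
import numpy as np

# Simulate heterozygous SNPs where tumor and normal share a probe-level
# bias; subtracting the normal's deviation from 0.5 cancels that bias.
rng = np.random.default_rng(4)
n = 10000
bias = rng.normal(0.0, 0.03, n)                  # shared probe-level bias
baf_normal = np.clip(0.5 + bias + rng.normal(0, 0.02, n), 0, 1)
baf_tumor = np.clip(0.7 + bias + rng.normal(0, 0.02, n), 0, 1)  # allelic imbalance

baf_corrected = np.clip(baf_tumor - (baf_normal - 0.5), 0, 1)
print("sd before:", baf_tumor.std().round(4),
      "after:", baf_corrected.std().round(4))    # noise drops once bias cancels
```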

  11. Bone marrow cellularity in normal and polycythemic mice estimated by DNA incorporation of ³H-TdR

    Energy Technology Data Exchange (ETDEWEB)

    Blackwell, L.H.; Ledney, G.D.

    1982-07-01

    Nucleated bone marrow cell numbers in normal and polycythemic mice were determined using ³H-thymidine (³H-TdR). The cellularities were estimated by extrapolating the exponential disappearance of labeled cells after a single injection of ³H-TdR to the time of injection. Dermestid beetles (Anthrenus piceus) were used to prepare tissue-free skeletons labeled with ³H-TdR. The correlation between tritium activity in bone marrow DNA and tritium derived from the combusted skeleton was determined. The total skeletal cellularity determined by isotope dilution analysis in both normal and polycythemic mice was 2.6 × 10⁸ cells/mouse or 17.6 × 10⁹ cells/kg body weight. Although the red cell component of the marrow was reduced in the polycythemic mouse, the total numbers of nucleated cells in both types of animals were similar. The differential distribution of cells in the polycythemic animal showed a twofold increase in granulocytic cells, which may explain the identical nucleated cell count in normal and in polycythemic mice.
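    The back-extrapolation step described above reduces to a log-linear fit evaluated at the injection time; the data points below are invented for illustration.

```python
import numpy as np

# Fit log(activity) vs time and evaluate the fit at t = 0, the time of
# 3H-TdR injection.  Data points are invented for illustration.
t = np.array([1.0, 2.0, 4.0, 7.0, 10.0])                   # days after injection
activity = np.array([8.1e5, 6.5e5, 4.3e5, 2.3e5, 1.2e5])   # dpm in marrow DNA

slope, intercept = np.polyfit(t, np.log(activity), 1)
a0 = np.exp(intercept)                                      # extrapolated t = 0 activity
print(f"decay rate = {-slope:.3f}/day, activity at injection ~ {a0:.2e} dpm")
```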

  12. Dynamical zeta functions for piecewise monotone maps of the interval

    CERN Document Server

    Ruelle, David

    2004-01-01

    Consider a space $M$, a map $f:M\to M$, and a function $g:M\to\mathbb{C}$. The formal power series $\zeta(z)=\exp\sum_{m=1}^{\infty}\frac{z^{m}}{m}\sum_{x\in\mathrm{Fix}\,f^{m}}\prod_{k=0}^{m-1}g(f^{k}x)$ yields an example of a dynamical zeta function. Such functions have unexpected analytic properties and interesting relations to the theory of dynamical systems, statistical mechanics, and the spectral theory of certain operators (transfer operators). The first part of this monograph presents a general introduction to this subject. The second part is a detailed study of the zeta functions associated with piecewise monotone maps of the interval $[0,1]$. In particular, Ruelle gives a proof of a generalized form of the Baladi-Keller theorem relating the poles of $\zeta(z)$ and the eigenvalues of the transfer operator. He also proves a theorem expressing the largest eigenvalue of the transfer operator in terms of the ergodic properties of $(M,f,g)$.

  13. Intuitionistic Fuzzy Normalized Weighted Bonferroni Mean and Its Application in Multicriteria Decision Making

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2012-01-01

    Full Text Available The Bonferroni mean (BM) was introduced by Bonferroni six decades ago but has become a hot research topic recently because of its usefulness in aggregation techniques. The desirable characteristic of the BM is its capability to capture the interrelationship between input arguments. However, the classical BM and GBM ignore the weight vector of the aggregated arguments, the general weighted BM (WBM) does not have the reducibility property, and the revised generalized weighted BM (GWBM) cannot reflect the interrelationship between the individual criterion and the other criteria. To deal with these issues, in this paper, we propose the normalized weighted Bonferroni mean (NWBM) and the generalized normalized weighted Bonferroni mean (GNWBM) and study their desirable properties, such as reducibility, idempotency, monotonicity, and boundedness. Furthermore, we investigate the NWBM and GNWBM operators under the intuitionistic fuzzy environment, which arises commonly in practice, and develop two new intuitionistic fuzzy aggregation operators based on the NWBM and GNWBM, that is, the intuitionistic fuzzy normalized weighted Bonferroni mean (IFNWBM) and the generalized intuitionistic fuzzy normalized weighted Bonferroni mean (GIFNWBM). Finally, based on the GIFNWBM, we propose an approach to multicriteria decision making under the intuitionistic fuzzy environment, and a practical example is provided to illustrate our results.
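    A short sketch of the classical Bonferroni mean and, under the normalization w_i w_j/(1-w_i) assumed here for the NWBM, a weighted variant; the idempotency check at the end illustrates the reducibility property discussed above. The intuitionistic fuzzy extensions are omitted.

```python
import numpy as np

# Classical BM^{p,q}(a) = ( (1/(n(n-1))) * sum_{i != j} a_i^p a_j^q )^{1/(p+q)},
# plus a weighted variant with the assumed normalization w_i*w_j/(1-w_i),
# which reduces to the identity on constant inputs (idempotency).
def bonferroni_mean(a, p=1.0, q=1.0):
    a = np.asarray(a, dtype=float)
    n = len(a)
    s = sum(a[i]**p * a[j]**q for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

def nwbm(a, w, p=1.0, q=1.0):
    a, w = np.asarray(a, float), np.asarray(w, float)
    n = len(a)
    s = sum(w[i] * w[j] / (1.0 - w[i]) * a[i]**p * a[j]**q
            for i in range(n) for j in range(n) if i != j)
    return s ** (1.0 / (p + q))

a = [0.4, 0.7, 0.9]
w = [0.2, 0.5, 0.3]                       # weights summing to 1
print(bonferroni_mean(a), nwbm(a, w))
print(nwbm([0.6, 0.6, 0.6], w))           # idempotency: returns 0.6
```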

  14. Non-Interior Continuation Method for Solving the Monotone Semidefinite Complementarity Problem

    International Nuclear Information System (INIS)

    Huang, Z.H.; Han, J.

    2003-01-01

    Recently, Chen and Tseng extended non-interior continuation smoothing methods for solving linear/nonlinear complementarity problems to semidefinite complementarity problems (SDCP). In this paper we propose a non-interior continuation method for solving the monotone SDCP based on the smoothed Fischer-Burmeister function, which is shown to be globally linearly and locally quadratically convergent under suitable assumptions. Our algorithm needs to solve at most one linear system of equations at each iteration. In addition, in our analysis of the global linear convergence of the algorithm, we do not need the assumption that the Frechet derivative of the function involved in the SDCP is Lipschitz continuous. For non-interior continuation/smoothing methods for solving the nonlinear complementarity problem, such an assumption has been used widely in the literature in order to achieve global linear convergence results of the algorithms

  15. Complete Monotonicity of a Difference Between the Exponential and Trigamma Functions and Properties Related to a Modified Bessel Function

    DEFF Research Database (Denmark)

    Qi, Feng; Berg, Christian

    2013-01-01

    In the paper, the authors find necessary and sufficient conditions for a difference between the exponential function $\alpha e^{\beta/t}$, $\alpha,\beta>0$, and the trigamma function $\psi'(t)$ to be completely monotonic on $(0,\infty)$. While proving the complete monotonicity, the authors discover some properties related to the fi...

  16. Surfactants non-monotonically modify the onset of Faraday waves

    Science.gov (United States)

    Strickland, Stephen; Shearer, Michael; Daniels, Karen

    2017-11-01

    When a water-filled container is vertically vibrated, subharmonic Faraday waves emerge once the driving from the vibrations exceeds viscous dissipation. In the presence of an insoluble surfactant, a viscous boundary layer forms at the contaminated surface to balance the Marangoni and Boussinesq stresses. For linear gravity-capillary waves in an undriven fluid, the surfactant-induced boundary layer increases the amount of viscous dissipation. In our analysis and experiments, we consider whether similar effects occur for nonlinear Faraday (gravity-capillary) waves. Assuming a finite-depth, infinite-breadth, low-viscosity fluid, we derive an analytic expression for the onset acceleration up to second order in $\epsilon=\sqrt{1/\mathrm{Re}}$. This expression allows us to include fluid depth and driving frequency as parameters, in addition to the Marangoni and Boussinesq numbers. For millimetric fluid depths and driving frequencies of 30 to 120 Hz, our analysis recovers prior numerical results and agrees with our measurements of NBD-PC surfactant on DI water. In both cases, the onset acceleration increases non-monotonically as a function of the Marangoni and Boussinesq numbers. For shallower systems, our model predicts that surfactants could decrease the onset acceleration.

  17. Non-existence of Normal Tokamak Equilibria with Negative Central Current

    International Nuclear Information System (INIS)

    Hammett, G.W.; Jardin, S.C.; Stratton, B.C.

    2003-01-01

    Recent tokamak experiments employing off-axis, non-inductive current drive have found that a large central current hole can be produced. The current density is measured to be approximately zero in this region, though in principle there was sufficient current-drive power for the central current density to have gone significantly negative. Recent papers have used a large aspect-ratio expansion to show that normal MHD equilibria (with axisymmetric nested flux surfaces, non-singular fields, and monotonic peaked pressure profiles) cannot exist with negative central current. We extend that proof here to arbitrary aspect ratio, using a variant of the virial theorem to derive a relatively simple integral constraint on the equilibrium. However, this constraint does not, by itself, exclude equilibria with non-nested flux surfaces, or equilibria with singular fields and/or hollow pressure profiles that may be spontaneously generated

  18. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    Science.gov (United States)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b₁) and 4th (b₂) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log₁₀, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures

  19. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  1. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Xuejing [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); School of mathematics and statistics, Lanzhou University, Lanzhou 730000 (China); Fouladirad, Mitra, E-mail: mitra.fouladirad@utt.f [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Berenguer, Christophe [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Bordes, Laurent [Universite de Pau et des Pays de l' Adour, LMA UMR CNRS 5142, 64013 PAU Cedex (France)

    2010-08-15

    The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different covariate conditions and different maintenance policies is analysed through simulation experiments to compare the policies' performances.

  2. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    International Nuclear Information System (INIS)

    Zhao Xuejing; Fouladirad, Mitra; Berenguer, Christophe; Bordes, Laurent

    2010-01-01

    The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different covariate conditions and different maintenance policies is analysed through simulation experiments to compare the policies' performances.

  3. Simple bounds for counting processes with monotone rate of occurrence of failures

    International Nuclear Information System (INIS)

    Kaminskiy, Mark P.

    2007-01-01

    The article discusses some aspects of the analogy between certain classes of distributions used as models for the time to failure of nonrepairable objects and the counting processes used as models for the failure process of repairable objects. The notion of quantiles for counting processes with strictly increasing cumulative intensity function is introduced. The classes of counting processes with increasing (decreasing) rate of occurrence of failures are considered. For these classes, useful nonparametric bounds for the cumulative intensity function based on one known quantile are obtained. These bounds, which can be used for repairable objects, are similar to the bounds introduced by Barlow and Marshall [Barlow R, Marshall A. Bounds for distributions with monotone hazard rate, I and II. Ann Math Stat 1964;35:1234-74] for IFRA (DFRA) time-to-failure distributions applicable to nonrepairable objects

  4. An Optimal Augmented Monotonic Tracking Controller for Aircraft Engines with Output Constraints

    Directory of Open Access Journals (Sweden)

    Jiakun Qin

    2017-01-01

    Full Text Available This paper proposes a novel min-max control scheme for aircraft engines, with the aim of transferring a set of regulated outputs between two set-points while ensuring that a set of auxiliary outputs remains within prescribed constraints. To this end, an optimal augmented monotonic tracking controller (OAMTC) is proposed, by considering a linear plant with input integration, to enhance the ability of the control system to reject uncertainty in system parameters and to ensure that output limits are not crossed. The key idea is to use the eigenvalue and eigenvector placement method and genetic algorithms to shape the output responses. The approach is validated by numerical simulation. The results show that the designed OAMTC controller can achieve satisfactory dynamic and steady-state performance and keep the auxiliary outputs within constraints in the transient regime.

  5. A new efficient algorithm for computing the imprecise reliability of monotone systems

    International Nuclear Information System (INIS)

    Utkin, Lev V.

    2004-01-01

    Reliability analysis of complex systems under partial information about the reliability of components, and under different assumptions about the independence of components, may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis runs into complex optimization problems which have to be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in the paper. This algorithm makes it practical to carry out reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components and under conditions of component independence or a lack of information about independence. A numerical example illustrates the algorithm.

  6. Estimating the concentration of urea and creatinine in the human serum of normal and dialysis patients through Raman spectroscopy.

    Science.gov (United States)

    de Almeida, Maurício Liberal; Saatkamp, Cassiano Junior; Fernandes, Adriana Barrinha; Pinheiro, Antonio Luiz Barbosa; Silveira, Landulfo

    2016-09-01

    Urea and creatinine are commonly used as biomarkers of renal function. Abnormal concentrations of these biomarkers are indicative of pathological processes such as renal failure. This study aimed to develop a model based on Raman spectroscopy to estimate the concentration values of urea and creatinine in human serum. Blood sera from 55 clinically normal subjects and 47 patients with chronic kidney disease undergoing dialysis were collected, and concentrations of urea and creatinine were determined by spectrophotometric methods. Raman spectra were obtained with a high-resolution dispersive Raman spectrometer (830 nm). A spectral model was developed based on partial least squares (PLS), where the concentrations of urea and creatinine were correlated with the Raman features. Principal components analysis (PCA) was used to discriminate dialysis patients from normal subjects. The PLS model showed r = 0.97 and r = 0.93 for urea and creatinine, respectively. The root mean square errors of cross-validation (RMSECV) for the model were 17.6 and 1.94 mg/dL, respectively. PCA showed high discrimination between dialysis patients and normal subjects (95% accuracy). The Raman technique was able to determine the concentrations with low error and to discriminate dialysis patients from normal subjects, consistent with a rapid and low-cost test.
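
    A minimal sketch of a PLS calibration of this kind, using scikit-learn. The spectra and reference urea values below are random placeholders, so the printed r and RMSECV are meaningless; with real paired spectra and spectrophotometric reference concentrations, the same code produces the study-style figures of merit.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
spectra = rng.normal(size=(102, 800))   # placeholder: 102 spectra x 800 Raman shifts
urea = rng.uniform(10, 200, size=102)   # placeholder reference values (mg/dL)

pls = PLSRegression(n_components=8)     # number of latent variables to tune
pred = cross_val_predict(pls, spectra, urea, cv=10).ravel()

r = np.corrcoef(urea, pred)[0, 1]
rmsecv = np.sqrt(np.mean((urea - pred) ** 2))
print(f"r = {r:.2f}, RMSECV = {rmsecv:.1f} mg/dL")
```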

  7. Estimated radiological effects of the normal discharge of radioactivity from nuclear power plants in the Netherlands with a total capacity of 3500 MWe

    International Nuclear Information System (INIS)

    Lugt, G. van der; Wijker, H.; Kema, N.V.

    1977-01-01

    In the Netherlands, discussions are going on about the installation of three nuclear power plants, which together with the two existing plants would give a total capacity of 3500 MWe. To gain an impression of the radiological impact of this program, calculations were carried out concerning the population doses due to the discharge of radioactivity from the plants during normal operation. The discharge via the ventilation stack gives doses due to noble gases, halogens and particulate material. The population dose due to the halogens in the grass-milk-man chain is estimated using the real distribution of grassland around the reactor sites. It could be concluded that the population dose due to the contamination of crops and fruit is negligible. A conservative estimate is made of the dose due to the discharge of tritium. The population dose due to the discharge in the cooling water is calculated using the following pathways: drinking water; consumption of fish; consumption of meat from animals fed with fish products. The individual doses caused by the normal discharge of a 1000 MWe plant appeared to be very low, mostly below 1 mrem/year. The population dose is in the order of some tens of manrems. The total dose of the 5 nuclear power plants to the Dutch population is not more than 70 manrem. Using a linear dose-effect relationship, the health effects on the population are estimated and compared with the normal frequency.

  8. Estimating the level of dynamical noise in time series by using fractal dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Sase, Takumi, E-mail: sase@sat.t.u-tokyo.ac.jp [Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 153-8505 (Japan); Ramírez, Jonatán Peña [CONACYT Research Fellow, Center for Scientific Research and Higher Education at Ensenada (CICESE), Carretera Ensenada-Tijuana No. 3918, Zona Playitas, C.P. 22860, Ensenada, Baja California (Mexico); Kitajo, Keiichi [BSI-Toyota Collaboration Center, RIKEN Brain Science Institute, Wako, Saitama 351-0198 (Japan); Aihara, Kazuyuki; Hirata, Yoshito [Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 153-8505 (Japan); Institute of Industrial Science, The University of Tokyo, Tokyo 153-8505 (Japan)

    2016-03-11

    We present a method for estimating the dynamical noise level of a ‘short’ time series even if the dynamical system is unknown. The proposed method estimates the level of dynamical noise by calculating the fractal dimensions of the time series. Additionally, the method is applied to EEG data to demonstrate its possible effectiveness as an indicator of temporal changes in the level of dynamical noise. - Highlights: • A dynamical noise level estimator for time series is proposed. • The estimator does not need any information about the dynamics generating the time series. • The estimator is based on a novel definition of time series dimension (TSD). • It is demonstrated that there exists a monotonic relationship between the TSD and the level of dynamical noise. • We apply the proposed method to human electroencephalographic data.

  9. Estimating the level of dynamical noise in time series by using fractal dimensions

    International Nuclear Information System (INIS)

    Sase, Takumi; Ramírez, Jonatán Peña; Kitajo, Keiichi; Aihara, Kazuyuki; Hirata, Yoshito

    2016-01-01

    We present a method for estimating the dynamical noise level of a ‘short’ time series even if the dynamical system is unknown. The proposed method estimates the level of dynamical noise by calculating the fractal dimensions of the time series. Additionally, the method is applied to EEG data to demonstrate its possible effectiveness as an indicator of temporal changes in the level of dynamical noise. - Highlights: • A dynamical noise level estimator for time series is proposed. • The estimator does not need any information about the dynamics generating the time series. • The estimator is based on a novel definition of time series dimension (TSD). • It is demonstrated that there exists a monotonic relationship between the TSD and the level of dynamical noise. • We apply the proposed method to human electroencephalographic data.
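
    The paper's "time series dimension" (TSD) is its own definition and is not reproduced here; as a stand-in, the sketch below uses the classical Higuchi fractal-dimension estimator to illustrate the claimed monotonic relationship between a fractal-dimension statistic and the level of added dynamical noise.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D series (a standard estimator;
    the paper's TSD definition differs)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks, Ls = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # mean curve length at scale k, with the usual Higuchi rescaling
            L = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / ((len(idx) - 1) * k)
            Lk.append(L / k)
        ks.append(k); Ls.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
    return slope                     # slope of log L(k) vs log(1/k)

t = np.linspace(0, 50, 2000)
for noise in (0.0, 0.1, 0.5):        # dimension rises toward 2 with more noise
    y = np.sin(t) + noise * np.random.default_rng(0).normal(size=t.size)
    print(f"noise level {noise:.1f}:  FD = {higuchi_fd(y):.2f}")
```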

  10. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    Science.gov (United States)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onwards. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic).

  11. External validation of equations to estimate resting energy expenditure in 14952 adults with overweight and obesity and 1948 adults with normal weight from Italy.

    Science.gov (United States)

    Bedogni, Giorgio; Bertoli, Simona; Leone, Alessandro; De Amicis, Ramona; Lucchetti, Elisa; Agosti, Fiorenza; Marazzi, Nicoletta; Battezzati, Alberto; Sartorio, Alessandro

    2017-11-24

    We cross-validated 28 equations to estimate resting energy expenditure (REE) in a very large sample of adults with overweight or obesity. 14952 Caucasian men and women with overweight or obesity and 1498 with normal weight were studied. REE was measured using indirect calorimetry and estimated using two meta-regression equations and 26 other equations. The correct classification fraction (CCF) was defined as the fraction of subjects whose estimated REE was within 10% of measured REE. The highest CCF was 79%, 80%, 72%, 64%, and 63% in subjects with normal weight, overweight, class 1 obesity, class 2 obesity, and class 3 obesity, respectively. The Henry weight and height and Mifflin equations performed equally well, with CCFs of 77% vs. 77% for subjects with normal weight, 80% vs. 80% for those with overweight, 72% vs. 72% for those with class 1 obesity, 64% vs. 63% for those with class 2 obesity, and 61% vs. 60% for those with class 3 obesity. The Sabounchi meta-regression equations offered an improvement over the above equations only for class 3 obesity (63%). The accuracy of REE equations decreases with increasing values of body mass index. The Henry weight and height and Mifflin equations are similarly accurate, and the Sabounchi equations offer an improvement only in subjects with class 3 obesity. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
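
    To make the validation metric concrete, the sketch below computes REE with the Mifflin-St Jeor equation (one of the equations compared above, with its commonly published coefficients) and the correct classification fraction on synthetic subjects. The cohort is invented, not the study sample.

```python
import numpy as np

def mifflin_st_jeor(weight_kg, height_cm, age_yr, is_male):
    """Mifflin-St Jeor resting energy expenditure (kcal/day)."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + np.where(is_male, 5.0, -161.0)

def ccf(predicted, measured, tol=0.10):
    """Correct classification fraction: share of estimates within 10%
    of measured REE, the accuracy metric used in the study."""
    return np.mean(np.abs(predicted - measured) <= tol * measured)

rng = np.random.default_rng(3)                     # synthetic cohort
w = rng.uniform(60, 140, 500)                      # weight (kg)
h = rng.uniform(150, 190, 500)                     # height (cm)
a = rng.uniform(20, 70, 500)                       # age (years)
male = rng.random(500) < 0.5
measured = mifflin_st_jeor(w, h, a, male) * rng.normal(1.0, 0.08, 500)

print(f"CCF = {ccf(mifflin_st_jeor(w, h, a, male), measured):.0%}")
```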

  12. Monotone Hybrid Projection Algorithms for an Infinitely Countable Family of Lipschitz Generalized Asymptotically Quasi-Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Watcharaporn Cholamjiak

    2009-01-01

    Full Text Available We prove a weak convergence theorem of the modified Mann iteration process for a uniformly Lipschitzian and generalized asymptotically quasi-nonexpansive mapping in a uniformly convex Banach space. We also introduce two kinds of new monotone hybrid methods and obtain strong convergence theorems for an infinitely countable family of uniformly Lipschitzian and generalized asymptotically quasi-nonexpansive mappings in a Hilbert space. The results improve and extend the corresponding ones announced by Kim and Xu (2006) and Nakajo and Takahashi (2003).

  13. Use of the nonsteady monotonic heating method for complex determination of thermophysical properties of chemically reacting mixture in the case of non-equilibrium proceeding of the chemical reaction

    International Nuclear Information System (INIS)

    Serebryanyj, G.Z.

    1984-01-01

    A theoretical analysis is made of the monotonic heating method as applied to the complex determination of thermophysical properties of chemically reacting gases. It is shown that frozen and equilibrium heat capacity and frozen and equilibrium heat conduction can be determined simultaneously, even when the reaction proceeds out of equilibrium, over a wide range of temperatures and pressures. The monotonic heating method can thus be used for the complex determination of thermophysical properties of chemically reacting systems in the case of non-equilibrium proceeding of the chemical reaction.

  14. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
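
    A heavily simplified sketch of the likelihood idea: species with counts enter through the Poisson log-normal pmf, while incidence-only species enter through their probability of being detected at least once. The paper's actual likelihood modification is more involved; this shows only the gist, with made-up data.

```python
import numpy as np
from scipy import integrate, stats

def pln_pmf(y, mu, sigma):
    """Poisson log-normal pmf, integrating Poisson(y | lam) over a
    log-normal distribution of the mean abundance lam."""
    f = lambda lam: (stats.poisson.pmf(y, lam)
                     * stats.lognorm.pdf(lam, s=sigma, scale=np.exp(mu)))
    val, _ = integrate.quad(f, 0.0, np.inf, limit=200)
    return val

def neg_log_lik(params, counts, n_incidence_only):
    """Counted species: log pmf of the observed count.
    Incidence-only species: log P(count >= 1) = log(1 - pmf(0))."""
    mu, sigma = params
    ll = sum(np.log(pln_pmf(int(y), mu, sigma)) for y in counts)
    ll += n_incidence_only * np.log(1.0 - pln_pmf(0, mu, sigma))
    return -ll

counts = [1, 1, 2, 3, 5, 8, 20]                   # toy count data
print(neg_log_lik((0.5, 1.2), counts, n_incidence_only=12))
```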

  15. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CLpop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CLpop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CLpop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CLpop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with median weight, a weight outside the observed range is used, the RSE of the CLpop estimate will be inflated and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CLTV) at a relevant weight to evaluate the precision of CL predictions.
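
    The mechanism is straightforward to reproduce outside a full mixed-effects setting: if clearance is parameterized as CL = CLpop · (WT/Wnorm)^θ, then CLpop is the fitted curve evaluated at Wnorm, and its standard error grows as Wnorm moves away from the observed weights. A toy nonlinear-regression sketch with invented neonatal data (not a NONMEM-style population fit); since the fit is on the log scale, the standard error of log CL approximates the relative standard error of CL.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
wt = rng.normal(2.7, 0.5, 200).clip(1.5, 4.5)        # neonatal weights (kg)
cl_true = 0.5 * (wt / 2.7) ** 0.75                   # allometric "truth"
obs = cl_true * np.exp(rng.normal(0, 0.2, 200))      # log-normal noise

def rse_clpop(norm_wt):
    """Approximate RSE (%) of CLpop when weight is normalized to norm_wt."""
    model = lambda w, log_cl, theta: log_cl + theta * np.log(w / norm_wt)
    popt, pcov = curve_fit(model, wt, np.log(obs), p0=[0.0, 0.75])
    return 100.0 * np.sqrt(pcov[0, 0])   # SE of log CL ~ relative SE of CL

for w_norm in (float(np.median(wt)), 1.0, 70.0):
    print(f"normalization to {w_norm:5.1f} kg -> "
          f"RSE(CLpop) = {rse_clpop(w_norm):6.1f}%")
```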

  16. Laser induced non-monotonic degradation in short-circuit current of triple-junction solar cells

    Science.gov (United States)

    Dou, Peng-Cheng; Feng, Guo-Bin; Zhang, Jian-Min; Song, Ming-Ying; Zhang, Zhen; Li, Yun-Peng; Shi, Yu-Bin

    2018-06-01

    In order to study the continuous wave (CW) laser radiation effects and mechanisms of GaInP/GaAs/Ge triple-junction solar cells (TJSCs), 1-on-1 mode irradiation experiments were carried out. It was found that the post-irradiation short-circuit current (ISC) of the TJSCs initially decreased and then increased with increasing irradiation laser power intensity. To explain this phenomenon, a theoretical model was established and then verified by post-damage tests and equivalent circuit simulations. It was concluded that laser-induced alterations in the surface reflection and shunt resistance were the main causes of the observed non-monotonic degradation in the ISC of the TJSCs.

  17. On the use of the GRACE normal equation of inter-satellite tracking data for estimation of soil moisture and groundwater in Australia

    Directory of Open Access Journals (Sweden)

    N. Tangdamrongsub

    2018-03-01

    Full Text Available An accurate estimation of soil moisture and groundwater is essential for monitoring the availability of water supply in domestic and agricultural sectors. In order to improve the water storage estimates, previous studies assimilated terrestrial water storage variation (ΔTWS) derived from the Gravity Recovery and Climate Experiment (GRACE) into land surface models (LSMs). However, the GRACE-derived ΔTWS was generally computed from the high-level products (e.g. time-variable gravity fields, i.e. level 2, and land grid from the level 3 product). The gridded data products are subject to several drawbacks such as signal attenuation and/or distortion caused by a posteriori filters and a lack of error covariance information. The post-processing of GRACE data might lead to undesired alteration of the signal and its statistical properties. This study uses the GRACE least-squares normal equation data to exploit the GRACE information rigorously and negate these limitations. Our approach combines GRACE's least-squares normal equation (obtained from the ITSG-Grace2016 product) with the results from the Community Atmosphere Biosphere Land Exchange (CABLE) model to improve soil moisture and groundwater estimates. This study demonstrates, for the first time, the importance of using the GRACE raw data. The GRACE-combined (GC) approach is developed for optimal least-squares combination, and the approach is applied to estimate the soil moisture and groundwater over 10 Australian river basins. The results are validated against satellite soil moisture observations and in situ groundwater data. Compared to CABLE, the GC approach delivers evident improvement of water storage estimates, consistently across all basins, yielding better agreement on seasonal and inter-annual timescales. Significant improvement is found in groundwater storage while marginal improvement is observed in surface soil moisture estimates.

  18. On the use of the GRACE normal equation of inter-satellite tracking data for estimation of soil moisture and groundwater in Australia

    Science.gov (United States)

    Tangdamrongsub, Natthachet; Han, Shin-Chan; Decker, Mark; Yeo, In-Young; Kim, Hyungjun

    2018-03-01

    An accurate estimation of soil moisture and groundwater is essential for monitoring the availability of water supply in domestic and agricultural sectors. In order to improve the water storage estimates, previous studies assimilated terrestrial water storage variation (ΔTWS) derived from the Gravity Recovery and Climate Experiment (GRACE) into land surface models (LSMs). However, the GRACE-derived ΔTWS was generally computed from the high-level products (e.g. time-variable gravity fields, i.e. level 2, and land grid from the level 3 product). The gridded data products are subject to several drawbacks such as signal attenuation and/or distortion caused by a posteriori filters and a lack of error covariance information. The post-processing of GRACE data might lead to undesired alteration of the signal and its statistical properties. This study uses the GRACE least-squares normal equation data to exploit the GRACE information rigorously and negate these limitations. Our approach combines GRACE's least-squares normal equation (obtained from the ITSG-Grace2016 product) with the results from the Community Atmosphere Biosphere Land Exchange (CABLE) model to improve soil moisture and groundwater estimates. This study demonstrates, for the first time, the importance of using the GRACE raw data. The GRACE-combined (GC) approach is developed for optimal least-squares combination, and the approach is applied to estimate the soil moisture and groundwater over 10 Australian river basins. The results are validated against satellite soil moisture observations and in situ groundwater data. Compared to CABLE, the GC approach delivers evident improvement of water storage estimates, consistently across all basins, yielding better agreement on seasonal and inter-annual timescales. Significant improvement is found in groundwater storage while marginal improvement is observed in surface soil moisture estimates.
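
    The core algebra of working at the normal-equation level, rather than with filtered gridded products, is additive: two least-squares sources with normal equations N1·x = b1 and N2·x = b2 combine as (N1+N2)·x = b1+b2, with the full covariance inv(N1+N2) retained. A generic sketch with synthetic data; this is not the GC implementation itself.

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = np.array([1.0, -2.0, 0.5])          # parameters to recover

def normal_equation(n_obs, noise):
    """Build N = A^T W A and b = A^T W y for one data source."""
    A = rng.normal(size=(n_obs, 3))
    y = A @ x_true + rng.normal(0.0, noise, n_obs)
    W = np.eye(n_obs) / noise**2             # weights = inverse error variance
    return A.T @ W @ A, A.T @ W @ y

N1, b1 = normal_equation(50, 0.5)            # e.g. GRACE-like observations
N2, b2 = normal_equation(50, 0.2)            # e.g. model pseudo-observations

x_comb = np.linalg.solve(N1 + N2, b1 + b2)   # combined estimate
cov = np.linalg.inv(N1 + N2)                 # full error covariance preserved
print(x_comb)
```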

  19. Subgap resonant quasiparticle transport in normal-superconductor quantum dot devices

    Energy Technology Data Exchange (ETDEWEB)

    Gramich, J., E-mail: joerg.gramich@unibas.ch; Baumgartner, A.; Schönenberger, C. [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland)

    2016-04-25

    We report thermally activated transport resonances for biases below the superconducting energy gap in a carbon nanotube quantum dot (QD) device with a superconducting Pb and a normal metal contact. These resonances are due to the superconductor's finite quasi-particle population at elevated temperatures and can only be observed when the QD lifetime broadening is considerably smaller than the gap. This condition is fulfilled in our QD devices with optimized Pd/Pb/In multi-layer contacts, which result in reproducibly large and “clean” superconducting transport gaps with a strong conductance suppression for subgap biases. We show that these gaps close monotonically with increasing magnetic field and temperature. The accurate description of the subgap resonances by a simple resonant tunneling model illustrates the ideal characteristics of the reported Pb contacts and gives alternative access to the tunnel coupling strengths in a QD.

  20. Influence of pores on crack initiation in monotonic tensile and cyclic loadings in lost foam casting A319 alloy by using 3D in-situ analysis

    International Nuclear Information System (INIS)

    Wang, Long; Limodin, Nathalie; El Bartali, Ahmed; Witz, Jean-François; Seghir, Rian; Buffiere, Jean-Yves; Charkaluk, Eric

    2016-01-01

    The Lost Foam Casting (LFC) process is replacing the conventional gravity Die Casting (DC) process in the automotive industry for the purposes of geometry optimization, cost reduction and consumption control. However, due to a lower cooling rate, LFC results in a coarser microstructure that reduces fatigue life. In order to study the influence of the casting microstructure of LFC Al-Si alloy on damage micromechanisms under monotonic tensile loading and Low Cycle Fatigue (LCF) at room temperature, an experimental protocol based on three-dimensional (3D) in-situ analysis has been set up and validated. This paper focuses on the influence of pores on crack initiation in monotonic tensile and cyclic loadings. X-ray Computed Tomography (CT) allowed the microstructure of the material to be characterized in 3D and damage evolution to be followed in situ, also in 3D. Experimental and numerical mechanical fields were obtained by using the Digital Volume Correlation (DVC) technique and Finite Element Method (FEM) simulation, respectively. Pores were shown to have an important influence on strain localization, as large pores generate enough strain localization zones for crack initiation in both monotonic tensile and cyclic loadings.

  1. Influence of pores on crack initiation in monotonic tensile and cyclic loadings in lost foam casting A319 alloy by using 3D in-situ analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Long, E-mail: longwang_calt@163.com [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France); Limodin, Nathalie; El Bartali, Ahmed; Witz, Jean-François; Seghir, Rian [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France); Buffiere, Jean-Yves [Laboratoire Matériaux, Ingénierie et Sciences (MATEIS), CNRS UMR5510, INSA-Lyon, 20 Av. Albert Einstein, 69621 Villeurbanne (France); Charkaluk, Eric [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France)

    2016-09-15

    The Lost Foam Casting (LFC) process is replacing the conventional gravity Die Casting (DC) process in the automotive industry for the purposes of geometry optimization, cost reduction and consumption control. However, due to a lower cooling rate, LFC results in a coarser microstructure that reduces fatigue life. In order to study the influence of the casting microstructure of LFC Al-Si alloy on damage micromechanisms under monotonic tensile loading and Low Cycle Fatigue (LCF) at room temperature, an experimental protocol based on three-dimensional (3D) in-situ analysis has been set up and validated. This paper focuses on the influence of pores on crack initiation in monotonic tensile and cyclic loadings. X-ray Computed Tomography (CT) allowed the microstructure of the material to be characterized in 3D and damage evolution to be followed in situ, also in 3D. Experimental and numerical mechanical fields were obtained by using the Digital Volume Correlation (DVC) technique and Finite Element Method (FEM) simulation, respectively. Pores were shown to have an important influence on strain localization, as large pores generate enough strain localization zones for crack initiation in both monotonic tensile and cyclic loadings.

  2. Behaviour of smart reinforced concrete beam with super elastic shape memory alloy subjected to monotonic loading

    Science.gov (United States)

    Hamid, Nubailah Abd; Ibrahim, Azmi; Adnan, Azlan; Ismail, Muhammad Hussain

    2018-05-01

    This paper discusses the superelastic behaviour of the shape memory alloy NiTi when used as reinforcement in concrete beams, and in particular its ability to recover and reduce permanent deformations of concrete flexural members. Small-scale concrete beams with NiTi reinforcement were experimentally investigated under monotonic loads, and the behaviour of simply supported reinforced concrete (RC) beams hybridised with NiTi rebars was compared with that of a control beam under the same loading. The control beam measures 125 mm × 270 mm × 1000 mm, with three 12 mm diameter bars as main compression reinforcement, three 12 mm bars as tension (hanger) bars, and 6 mm diameter bars at 100 mm c/c as shear reinforcement. In the hybrid beam, a minimal provision of 200 mm of 12.7 mm superelastic shape memory alloy bar was employed to replace the steel rebar at the critical region of the beam. In conclusion, combining the SMA bar with high-strength steel in the conventional reinforcement showed that the SMA beam exhibits improved performance in terms of better crack recovery and deformation. Therefore, the use of NiTi hybridised with steel can substantially diminish earthquake risk and reduce the associated costs in the aftermath.

  3. Assessing dose-response relationships for endocrine disrupting chemicals (EDCs): a focus on non-monotonicity.

    Science.gov (United States)

    Zoeller, R Thomas; Vandenberg, Laura N

    2015-05-15

    The fundamental principle in regulatory toxicology is that all chemicals are toxic and that the severity of effect is proportional to the exposure level. An ancillary assumption is that there are no effects at exposures below the lowest observed adverse effect level (LOAEL), either because no effects exist or because they are not statistically resolvable, implying that they would not be adverse. Chemicals that interfere with hormones violate these principles in two important ways: dose-response relationships can be non-monotonic, as reported in hundreds of studies of endocrine disrupting chemicals (EDCs); and effects are often observed below the LOAEL, including in all environmental epidemiological studies examining EDCs. In recognition of the importance of this issue, Lagarde et al. have published the first proposal to qualitatively assess non-monotonic dose-response (NMDR) relationships for use in risk assessments. Their proposal represents a significant step forward in the evaluation of complex datasets for use in risk assessments. Here, we comment on three elements of the Lagarde proposal that we feel need to be assessed more critically, and present our arguments: 1) the use of Klimisch scores to evaluate study quality, 2) the concept of evaluating study quality without topical experts' knowledge and opinions, and 3) the requirement of establishing the biological plausibility of an NMDR before consideration for use in risk assessment. We present evidence-based logical arguments that 1) the use of the Klimisch score should be abandoned for assessing study quality; 2) evaluating study quality requires experts in the specific field; and 3) an understanding of mechanisms should not be required to accept observable, statistically valid phenomena. It is our hope to contribute to the important and ongoing debate about the impact of NMDRs on risk assessment with positive suggestions.

  4. Microarray background correction: maximum likelihood estimation for the normal-exponential convolution

    DEFF Research Database (Denmark)

    Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K

    2009-01-01

    The normexp model represents observed intensities as the sum of normally and exponentially distributed components, representing background noise and signal, respectively. Using a saddle-point approximation, Ritchie and others (2007) found normexp to be the best background correction method for 2-color microarray data. This article develops the normexp method further by improving the estimation of its parameters: a procedure is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data.
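
    The normal-exponential convolution has a closed-form density, f(x) = (1/α)·exp(−(x−μ)/α + σ²/(2α²))·Φ((x−μ)/σ − σ/α), so exact MLE is a direct numerical optimization. A sketch under the stated model (background B ~ N(μ, σ²), signal S ~ Exp with mean α); the function names and starting values are illustrative, not the article's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def normexp_logpdf(x, mu, sigma, alpha):
    """Log-density of X = B + S, B ~ N(mu, sigma^2), S ~ Exp(mean alpha)."""
    return (-np.log(alpha) - (x - mu) / alpha + sigma**2 / (2 * alpha**2)
            + norm.logcdf((x - mu) / sigma - sigma / alpha))

def normexp_mle(x, start):
    """Exact MLE; 'start' plays the role of the saddle-point starting values."""
    nll = lambda p: -np.sum(normexp_logpdf(x, p[0], np.exp(p[1]), np.exp(p[2])))
    p0 = [start[0], np.log(start[1]), np.log(start[2])]  # log-scale: positivity
    res = minimize(nll, p0, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1]), np.exp(res.x[2])  # mu, sigma, alpha

rng = np.random.default_rng(0)                # simulated intensities
x = rng.normal(100, 10, 5000) + rng.exponential(50, 5000)
print(normexp_mle(x, start=(90.0, 15.0, 40.0)))
```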

  5. Driving monotonous routes in a train simulator: the effect of task demand on driving performance and subjective experience.

    Science.gov (United States)

    Dunn, Naomi; Williamson, Ann

    2012-01-01

    Although monotony is widely recognised as being detrimental to performance, its occurrence and effects are not yet well understood, despite the fact that task-related characteristics, such as monotony and low task demand, have been shown to contribute to performance decrements over time. Participants completed one of two simulated train-driving scenarios. Both were highly monotonous and differed only in terms of the level of cognitive demand required (i.e. low demand or high demand). The results highlight the seriously detrimental effects of the combination of monotony and low task demand, and clearly show that even a relatively minor increase in cognitive demand can mitigate adverse monotony-related effects on performance for extended periods of time. Monotony is an inherent characteristic of transport industries, including rail, aviation and road transport, and can have an adverse impact on safety, reliability and efficiency. This study highlights possible strategies for mitigating these adverse effects. Practitioner Summary: This study provides evidence for the importance of cognitive demand in mitigating monotony-related effects on performance. The results have clear implications for the rapid onset of performance deterioration in low-demand monotonous tasks and demonstrate that these detrimental performance effects can be overcome with simple solutions, such as making the task more cognitively engaging.

  6. Cognition of normal pattern of myocardial polar map

    International Nuclear Information System (INIS)

    Fujisawa, Yasuo; Sasaki, Jiro; Kashima, Kenji; Matsumura, Yasushi; Yamamoto, Kazuhiro; Kodama, Kazuhisa

    1989-01-01

    When we diagnose the presence of ischemic heart disease from computer-generated polar map diagrams of exercise thallium images, estimating the presence of a deficit alone is not sufficient, because many normal subjects are classified as abnormal. The mean + 2SD of the defect severity index (DSI) of 118 normal subjects was 120, and we defined patients with DSI ≤ 120 as normal. However, among 139 patients with DSI ≤ 120, 28 had significant coronary stenosis (>75%), giving a false-negative rate of 20%. We examined the pattern of the deficit and found that in 109 of 111 subjects with normal coronary arteries, and in 16 of 28 patients with ischemic heart disease, the polar map diagrams showed a patchy pattern; that is, the polar map diagram shows a patchy pattern more frequently in normal subjects. Among 125 patients whose polar map diagrams were patchy, 16 patients with ischemic heart disease were included (a false-negative rate of 13%). We conclude that the DSI and the pattern of the polar map diagram should be considered together, which makes a more accurate diagnosis possible. (author)

  7. Using an inductive approach for definition making: Monotonicity and boundedness of sequences

    Directory of Open Access Journals (Sweden)

    Deonarain Brijlall

    2009-09-01

    Full Text Available The study investigated fourth-year students' construction of the definitions of monotonicity and boundedness of sequences, at the Edgewood Campus of the University of KwaZulu-Natal in South Africa. Structured worksheets based on a guided problem-solving teaching model were used to help students construct the two definitions. A group of twenty-three undergraduate teacher trainees participated in the project. These students specialised in the teaching of mathematics in the Further Education and Training (FET) (Grades 10 to 12) school curriculum. This paper specifically reports on the investigation of students' definition constructions based on a learning theory within the context of advanced mathematical thinking, and makes a contribution to an understanding of how these students constructed the two definitions. It was found that, despite the intervention of a structured design, these definitions were partially or inadequately conceptualised by some students.

  8. Normalization and gene p-value estimation: issues in microarray data processing.

    Science.gov (United States)

    Fundel, Katrin; Küffner, Robert; Aigner, Thomas; Zimmer, Ralf

    2008-05-28

    Numerous methods exist for basic processing, e.g. normalization, of microarray gene expression data. These methods have an important effect on the final analysis outcome. Therefore, it is crucial to select methods appropriate for a given dataset in order to assure the validity and reliability of expression data analysis. Furthermore, biological interpretation requires expression values for genes, which are often represented by several spots or probe sets on a microarray. How best to integrate spot/probe set values into gene values has so far been a somewhat neglected problem. We present a case study comparing different between-array normalization methods with respect to the identification of differentially expressed genes. Our results show that it is feasible and necessary to use prior knowledge on gene expression measurements to select an adequate normalization method for the given data. Furthermore, we provide evidence that combining spot/probe set p-values into gene p-values for detecting differentially expressed genes has advantages compared to combining expression values for spots/probe sets into gene expression values. The comparison of different methods suggests using Stouffer's method for this purpose. The study has been conducted on gene expression experiments investigating human joint cartilage samples of osteoarthritis related groups: a cDNA microarray (83 samples, four groups) and an Affymetrix (26 samples, two groups) data set. The apparently straightforward steps of gene expression data analysis, e.g. between-array normalization and detection of differentially regulated genes, can be accomplished by numerous different methods. We analyzed multiple methods and the possible effects, and thereby demonstrate the importance of the single decisions taken during data processing. We give guidelines for evaluating normalization outcomes. An overview of these effects via appropriate measures and plots compared to prior knowledge is essential for the biological interpretation of the data.
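
    Stouffer's method, recommended above for combining spot/probe-set p-values into gene p-values, amounts to mapping p-values to z-scores, summing and renormalizing. A minimal sketch (one-sided p-values assumed; the optional weights are a generic choice, not necessarily the study's):

```python
import numpy as np
from scipy.stats import norm

def stouffer(pvals, weights=None):
    """Combine p-values for the spots/probe sets of one gene into a
    single gene-level p-value with Stouffer's Z-method."""
    p = np.asarray(pvals, dtype=float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, float)
    z = norm.isf(p)                                  # small p -> large z
    z_comb = np.sum(w * z) / np.sqrt(np.sum(w**2))
    return norm.sf(z_comb)

print(stouffer([0.04, 0.01, 0.20]))                  # three spots of one gene
```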

  9. Predicting the required number of training samples [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution]

    Science.gov (United States)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109.

  10. A general approach to double-moment normalization of drop size distributions

    Science.gov (United States)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. Thus, a unified vision of the question of DSD normalization and a good model representation of DSDs are given. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.

  11. The Box-Cox power transformation on nursing sensitive indicators: does it matter if structural effects are omitted during the estimation of the transformation parameter?

    Science.gov (United States)

    Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy

    2011-08-19

    Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and the subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulated data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls data from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms with the NDNQI examples, showed no substantial differences in statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or mixed model settings without prior knowledge of all the factors with potential structural effects.
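
    Workflow b) above, estimating the transformation parameter first without the structural effects and then running the ANOVA, looks like the sketch below, using scipy's Box-Cox estimator and statsmodels. The simulated 3 × 4 factorial with a right-skewed response stands in for the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(11)                   # simulated 3 x 4 factorial
df = pd.DataFrame({"a": np.repeat(np.arange(3), 40),
                   "b": np.tile(np.repeat(np.arange(4), 10), 3)})
df["y"] = rng.lognormal(mean=0.1 * df["a"] + 0.05 * df["b"], sigma=0.4)

# Step 1: estimate lambda from the outcome alone (structural effects omitted).
df["y_t"], lam = stats.boxcox(df["y"])
print(f"estimated Box-Cox lambda = {lam:.2f}")

# Step 2: ANOVA for the structural effects on the transformed scale.
fit = ols("y_t ~ C(a) * C(b)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```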

  12. The Box-Cox power transformation on nursing sensitive indicators: Does it matter if structural effects are omitted during the estimation of the transformation parameter?

    Directory of Open Access Journals (Sweden)

    Gajewski Byron J

    2011-08-01

    Full Text Available Abstract Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and the subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulated data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls data from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: (a) estimating the transformation parameter along with factors with potential structural effects, and (b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms with the NDNQI examples, showed no substantial differences in statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or mixed model settings without prior knowledge of all the factors with potential structural effects.

  13. Metabolomics data normalization with EigenMS.

    Directory of Open Access Journals (Sweden)

    Yuliya V Karpievitch

    Full Text Available Liquid chromatography mass spectrometry (LC-MS) has become one of the analytical platforms of choice for metabolomics studies. However, LC-MS metabolomics data can suffer from the effects of various systematic biases. These include batch effects, day-to-day variations in instrument performance, signal intensity loss due to time-dependent effects of the LC column performance, accumulation of contaminants in the MS ion source and MS sensitivity, among others. In this study we aimed to test a singular value decomposition-based method, called EigenMS, for normalization of metabolomics data. We analyzed a clinical human dataset where LC-MS serum metabolomics data and physiological measurements were collected from thirty-nine healthy subjects and forty with type 2 diabetes, and applied EigenMS to detect and correct for any systematic bias. EigenMS works in several stages. First, EigenMS preserves the treatment group differences in the metabolomics data by estimating treatment effects with an ANOVA model (multiple fixed effects can be estimated). Singular value decomposition of the residuals matrix is then used to determine bias trends in the data. The number of bias trends is then estimated via a permutation test and the effects of the bias trends are eliminated. EigenMS removed bias of unknown complexity from the LC-MS metabolomics data, allowing for increased sensitivity in differential analysis. Moreover, normalized samples better correlated with both other normalized samples and corresponding physiological data, such as blood glucose level, glycated haemoglobin, exercise central augmentation pressure normalized to a heart rate of 75, and total cholesterol. We were able to report 2578 discriminatory metabolite peaks in the normalized data (p<0.05) as compared to only 1840 metabolite signals in the raw data. Our results support the use of singular value decomposition-based normalization for metabolomics data.
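
    A stripped-down sketch of the stages described, on a metabolites × samples matrix: estimate and set aside the group (treatment) means so group differences are preserved, take the SVD of the residuals, subtract the leading bias trends, and restore the group structure. The real EigenMS chooses the number of trends by a permutation test; here it is fixed by hand and the data are simulated.

```python
import numpy as np

def eigenms_like(data, groups, n_trends=1):
    """Minimal EigenMS-style normalization of a metabolites x samples matrix."""
    groups = np.asarray(groups)
    resid = data.astype(float).copy()
    means = {}
    for g in np.unique(groups):                   # 1. protect treatment effects
        means[g] = resid[:, groups == g].mean(axis=1, keepdims=True)
        resid[:, groups == g] -= means[g]
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)   # 2. bias trends
    bias = (U[:, :n_trends] * s[:n_trends]) @ Vt[:n_trends, :]
    clean = resid - bias                          # 3. remove bias trends
    for g in np.unique(groups):                   # 4. restore group means
        clean[:, groups == g] += means[g]
    return clean

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 79))                    # 200 metabolites, 79 samples
X += np.outer(rng.normal(size=200), np.linspace(-1, 1, 79))  # injected drift
labels = np.array([0] * 39 + [1] * 40)            # healthy vs type 2 diabetes
X_norm = eigenms_like(X, labels, n_trends=1)
```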

  14. Mechanisms of plastic deformation (cyclic and monotonous) of Inconel X750

    International Nuclear Information System (INIS)

    Randrianarivony, H.

    1992-01-01

    Plastic deformation mechanisms under cyclic or monotonic loading are analysed as a function of the initial microstructure of Inconel X750. Two heat-treated Inconels (the first treated at 1366 K for one hour, air cooled, aged at 977 K for 20 hours, and air cooled; the second aged at 1158 K for 24 hours, air cooled, aged at 977 K for 20 hours, and air cooled) are characterized respectively by a fine and uniform precipitation of the γ' phase (approximate formula Ni₃(Al,Ti)) and by a bimodal distribution of γ' precipitates. In both alloys, dislocation pairs (characteristic of shearing by antiphase-wall creation) are observed, and the mechanism by which the γ' precipitates are crossed through the creation of superstructure stacking faults is the same. However, glissile dislocation loops are less numerous than dislocation pairs in the first alloy, resulting in a denser band structure for this alloy (dislocation loops are always observed around γ' precipitates). Some explanations of the behaviour of Inconel X750 in the PWR environment are given. (A.B.). refs., figs., tabs

  15. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness ≤ 1.5).
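
    For reference, a Clements-type percentile method replaces the 6σ span in Cp with the 0.135th-99.865th percentile span and the mean with the median, which is what makes the index meaningful for skewed data. A sketch with simulated right-skewed resistivity values; the specification limits are invented.

```python
import numpy as np

def cp_cpk_percentile(x, lsl, usl):
    """Clements-type capability indices for non-normal data: the
    0.135th-99.865th percentile span replaces 6*sigma, and the median
    replaces the mean."""
    p_lo, med, p_hi = np.percentile(x, [0.135, 50.0, 99.865])
    cp = (usl - lsl) / (p_hi - p_lo)
    cpk = min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))
    return cp, cpk

rng = np.random.default_rng(9)                      # skewed process data
resistivity = rng.lognormal(mean=2.0, sigma=0.15, size=2000)
print(cp_cpk_percentile(resistivity, lsl=5.0, usl=12.0))
```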

  16. Asymptotic Poisson distribution for the number of system failures of a monotone system

    International Nuclear Information System (INIS)

    Aven, Terje; Haukis, Harald

    1997-01-01

    It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution gives in general very good approximations for highly available components. The asymptotic failure rate of the system gives the best results when the process is in steady state, whereas other normalising factors seem preferable when the process is not in steady state. From a computational point of view, the asymptotic system failure rate is the most attractive.
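
    The Poisson limit is easy to probe by Monte Carlo for a small, highly available system. The sketch below simulates a two-component parallel system with exponential up and down times (failure rate much smaller than repair rate) and compares the mean and variance of the failure count over a fixed horizon, which a Poisson law predicts to be equal; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def count_system_failures(T, lam=0.01, mu=1.0):
    """Failures of a 2-component parallel system (down only when both
    components are down) on [0, T], with exponential up/down times."""
    up = np.array([True, True])
    clock, failures = 0.0, 0
    while clock < T:
        rates = np.where(up, lam, mu)             # failure or repair rates
        clock += rng.exponential(1.0 / rates.sum())
        i = rng.choice(2, p=rates / rates.sum())  # which component switches
        up[i] = ~up[i]
        if not up.any():                          # second component just failed
            failures += 1
    return failures

counts = np.array([count_system_failures(5000.0) for _ in range(300)])
print(f"mean = {counts.mean():.2f}, variance = {counts.var():.2f} "
      "(approximately equal, as the Poisson approximation predicts)")
```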

  17. Non-monotonic spatial distribution of the interstellar dust in astrospheres: finite gyroradius effect

    Science.gov (United States)

    Katushkina, O. A.; Alexashov, D. B.; Izmodenov, V. V.; Gvaramadze, V. V.

    2017-02-01

    High-resolution mid-infrared observations of astrospheres show that many of them have a filamentary (cirrus-like) structure. Using numerical models of dust dynamics in astrospheres, we suggest that their filamentary structure might be related to a specific spatial distribution of the interstellar dust around the stars, caused by the gyrorotation of charged dust grains in the interstellar magnetic field. Our numerical model describes the dust dynamics in astrospheres under the influence of the Lorentz force and the assumption of a constant dust charge. Calculations are performed separately for dust grains of different sizes. It is shown that a non-monotonic spatial dust distribution (viewed as filaments) appears for dust grains whose period of gyromotion is comparable with the characteristic time-scale of the dust motion in the astrosphere. Numerical modelling demonstrates that the number of filaments depends on the charge-to-mass ratio of the dust.
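
    At the level of a single grain, the model is the Lorentz-force equation of motion, m dv/dt = q v × B, integrated separately for each grain size (each charge-to-mass ratio). A minimal sketch in arbitrary units with a uniform field; the actual model adds the astrospheric flow and field geometry.

```python
import numpy as np
from scipy.integrate import solve_ivp

q_over_m = 1.0                          # charge-to-mass ratio (grain-size proxy)
B = np.array([0.0, 0.0, 1.0])           # uniform magnetic field (arbitrary units)

def rhs(t, y):
    """State y = (position, velocity); dv/dt = (q/m) v x B."""
    v = y[3:]
    return np.concatenate([v, q_over_m * np.cross(v, B)])

y0 = [0.0, 0.0, 0.0, 1.0, 0.0, 0.1]     # initial position and velocity
sol = solve_ivp(rhs, (0.0, 50.0), y0, max_step=0.05)
print(sol.y[:3, -1])                    # end point of a helical trajectory

# Grains of different q/m gyrate with different periods; when the gyration
# period is comparable to the transit time, the spatial density develops the
# non-monotonic (filament-like) structure discussed above.
```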

  18. Normalization of energy-dependent gamma survey data.

    Science.gov (United States)

    Whicker, Randy; Chambers, Douglas

    2015-05-01

    Instruments and methods for normalization of energy-dependent gamma radiation survey data to a less energy-dependent basis of measurement are evaluated based on relevant field data collected at 15 different sites across the western United States, along with a site in Mongolia. Normalization performance is assessed relative to measurements with a high-pressure ionization chamber (HPIC) due to its "flat" energy response and accurate measurement of the true exposure rate from both cosmic and terrestrial radiation. While analytically ideal for normalization applications, cost and practicality disadvantages have increased demand for alternatives to the HPIC. Regression analysis on paired measurements between energy-dependent sodium iodide (NaI) scintillation detectors (5-cm by 5-cm crystal dimensions) and the HPIC revealed highly consistent relationships among sites not previously impacted by radiological contamination (natural sites). A resulting generalized data normalization factor based on the average sensitivity of NaI detectors to naturally occurring terrestrial radiation (0.56 nGy/h HPIC per nGy/h NaI), combined with the calculated site-specific estimate of cosmic radiation, produced reasonably accurate predictions of HPIC readings at natural sites. Normalization against two potential alternative instruments (a tissue-equivalent plastic scintillator and an energy-compensated NaI detector) did not perform better than the sensitivity adjustment approach at natural sites. Each approach produced unreliable estimates of HPIC readings at radiologically impacted sites, though normalization against the plastic scintillator or energy-compensated NaI detector can address incompatibilities between different energy-dependent instruments with respect to estimation of soil radionuclide levels. The appropriate data normalization method depends on the nature of the site, the expected duration of the project, the survey objectives, and considerations of cost and practicality.
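
    Applying the reported generalized factor is simple arithmetic: scale the NaI reading by the average terrestrial sensitivity (0.56) and add the site-specific cosmic estimate. A sketch, assuming the NaI reading represents the terrestrial component only (the variable names are ours, not the paper's):

```python
def predict_hpic(nai_terrestrial, cosmic, factor=0.56):
    """Predict an HPIC-equivalent exposure rate (nGy/h) from an
    energy-dependent NaI terrestrial reading at an uncontaminated site."""
    return factor * nai_terrestrial + cosmic

# e.g. a 90 nGy/h NaI terrestrial reading, 40 nGy/h calculated cosmic dose rate:
print(predict_hpic(90.0, 40.0))   # -> 90.4 nGy/h HPIC-equivalent
```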

  19. Annealing temperature dependent non-monotonic d⁰ ferromagnetism in pristine In₂O₃ nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Haiming; Xing, Pengfei, E-mail: pfxing@tju.edu.cn; Yao, Dongsheng; Wu, Ping

    2017-05-01

    Cubic bixbyite In₂O₃ nanoparticles with room-temperature d⁰ ferromagnetism were prepared by the sol-gel method with the air-annealing temperature ranging from 500 to 900 °C. X-ray diffraction, X-ray photoelectron spectroscopy, Raman scattering and photoluminescence were carried out to demonstrate the presence of oxygen vacancies. The lattice constant, the atomic ratio of crystal O and In, the Raman peak at 369 cm⁻¹, the PL emission peak at 396 nm and the saturation magnetization of the d⁰ ferromagnetism all showed a consistent non-monotonic change with increasing annealing temperature. Further considering the relation between the grain size and the distribution of oxygen vacancies, we conclude that the d⁰ ferromagnetism in our samples is directly related to the singly charged oxygen vacancies at the surface of the In₂O₃ nanoparticles. - Highlights: • Effect of air-annealing temperature on the d⁰ ferromagnetism of pure In₂O₃. • Oxygen-deficiency states of all samples were detected by Raman scattering and PL. • Ferromagnetism changes non-monotonically with increasing annealing temperature. • The d⁰ ferromagnetism in our In₂O₃ nanoparticles is related to surface V_O⁺.

  20. Semi-empirical models for the estimation of clear sky solar global and direct normal irradiances in the tropics

    International Nuclear Information System (INIS)

    Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.

    2011-01-01

    Highlights: → New semi-empirical models for predicting clear sky irradiance were developed. → The proposed models compare favorably with other empirical models. → Performance of the proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear sky conditions in the tropics. The models are based on a one-year period of clear sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) located in the north of the country, Nakhon Pathom (13.82°N, 100.04°E) in the centre and Songkhla (7.20°N, 100.60°E) in the south. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The data on the Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for data periods which were not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both global and direct normal irradiances. The performance of the models was also compared with that of other models and compared favorably with that of empirical models. Additionally, the accuracy of the irradiances predicted from the proposed models is comparable with that obtained from some widely used physical models.

  1. Monotonic and cyclic responses of impact polypropylene and continuous glass fiber-reinforced impact polypropylene composites at different strain rates

    KAUST Repository

    Yudhanto, Arief

    2016-03-08

Impact copolymer polypropylene (IPP), a blend of isotactic polypropylene and ethylene-propylene rubber, and its continuous glass fiber composite form (glass fiber-reinforced impact polypropylene, GFIPP) are promising materials for impact-prone automotive structures. However, the basic mechanical properties and corresponding damage of IPP and GFIPP at different rates, which are of keen interest in the material development stage and for numerical tool validation, have not been reported. Here, we applied monotonic and cyclic tensile loads to IPP and GFIPP at different strain rates (0.001/s, 0.01/s and 0.1/s) to study the mechanical properties, failure modes and damage parameters. We used monotonic and cyclic tests to obtain mechanical properties and define damage parameters, respectively. We also used scanning electron microscopy (SEM) images to visualize the failure modes. We found that IPP generally exhibits brittle fracture (with a relatively low failure strain of 2.69-3.74%) and viscoelastic-viscoplastic behavior. GFIPP [90]_8 is generally insensitive to strain rate due to localized damage initiation, mostly in the matrix phase, leading to catastrophic transverse failure. In contrast, GFIPP [±45]_s is sensitive to the strain rate, as indicated by the change in shear modulus, shear strength and failure mode.

  2. A New Entropy Formula and Gradient Estimates for the Linear Heat Equation on Static Manifold

    Directory of Open Access Journals (Sweden)

    Abimbola Abolarinwa

    2014-08-01

In this paper we prove a new monotonicity formula for the heat equation via a generalized family of entropy functionals. This family of entropy formulas generalizes both Perelman's entropy for an evolving metric and Ni's entropy on a static manifold. We show that this entropy satisfies a pointwise differential inequality for the heat kernel. As consequences, we obtain various gradient and Harnack estimates for all positive solutions to the heat equation on a compact manifold.

  3. A study of the normal interpedicular distance of the spine in Korean teenagers (Estimation of normal range by roentgenographic measurement)

    International Nuclear Information System (INIS)

    Lee, Myung Uk

    1979-01-01

The radiological measurement of the interpedicular distance using a routine antero-posterior view of the spine provides important clinical criteria for the evaluation of intraspinal tumors and stenosis of the spinal canal, and aids in the diagnosis of these lesions. In 1934 Elsberg and Dyke reported values of the interpedicular distance as determined on roentgenograms of the spines of white adults, and in 1968 Song prepared normal values of the interpedicular distance for Korean adults. The present investigation was undertaken to provide normal interpedicular distances for Korean teenagers. The author examined antero-posterior films of the spines of 200 normal teenagers, 100 male and 100 female. Normal values of the interpedicular distance of Korean teenagers were obtained, as well as a 90% tolerance range for clinical use. The statistical analysis showed significant differences between males and females and between age groups. Average male measurements were consistently larger than female measurements by about 1 mm, and growth of the spinal canal appeared to be continuing.

  4. Studies on the zeros of Bessel functions and methods for their computation: 2. Monotonicity, convexity, concavity, and other properties

    Science.gov (United States)

    Kerimov, M. K.

    2016-07-01

This work continues the study of the real zeros of Bessel functions of the first and second kind and of general Bessel functions with real variable and order, begun in the first part of this paper (see M. K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014)). Some new results concerning such zeros are described and analyzed. Special attention is given to the monotonicity, convexity, and concavity of zeros with respect to their ranks and other parameters.

  5. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  6. A Mixed Monotone Operator Method for the Existence and Uniqueness of Positive Solutions to Impulsive Caputo Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Jieming Zhang

    2013-01-01

We establish some sufficient conditions for the existence and uniqueness of positive solutions to a class of initial value problem for impulsive fractional differential equations involving the Caputo fractional derivative. Our analysis relies on a fixed point theorem for mixed monotone operators. Our result can not only guarantee the existence of a unique positive solution but also be applied to construct an iterative scheme for approximating it. An example is given to illustrate our main result.

  7. Behaviour of C-shaped angle shear connectors under monotonic and fully reversed cyclic loading: An experimental study

    International Nuclear Information System (INIS)

    Shariati, Mahdi; Ramli Sulong, N.H.; Suhatril, Meldi; Shariati, Ali; Arabnejad Khanouki, M.M.; Sinaei, Hamid

    2012-01-01

Highlights: ► C-shaped angle connectors show 8.8–33.1% strength degradation under cyclic loading. ► Connector fracture was the failure mode experienced by C-shaped angle shear connectors. ► In push-out samples, more cracking was observed in slabs with longer angles. ► C-shaped angle connectors show good behaviour in terms of ultimate shear capacity. ► C-shaped angle connectors did not fulfil the ductility criteria. -- Abstract: This paper presents an evaluation of the structural behaviour of C-shaped angle shear connectors in composite beams, suitable for transferring shear force in composite structures. The results of the experimental programme, comprising eight push-out tests, are presented and discussed. The results include the resistance, strength degradation, ductility, and failure modes of C-shaped angle shear connectors under monotonic and fully reversed cyclic loading. Connector fracture was the observed failure mode of the C-shaped angle connectors, and after failure more cracking was observed in slabs with longer angles. Moreover, a comparison of the shear resistance of C-shaped angle shear connectors under monotonic and cyclic loading shows that these connectors suffered 8.8–33.1% strength degradation under fully reversed cyclic loading. Furthermore, it was concluded that this shear connector behaves well in terms of ultimate shear capacity, but it does not satisfy the ductility criteria imposed by Eurocode 4 for a plastic distribution of the shear force between different connectors along the beam length.

  8. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model.

  9. Automated Land Cover Change Detection and Mapping from Hidden Parameter Estimates of Normalized Difference Vegetation Index (NDVI) Time-Series

    Science.gov (United States)

    Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.

    2017-12-01

Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use eight-day MODIS composites to create a Normalized Difference Vegetation Index (NDVI) time-series spanning ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The PF-based nonlinear estimation is shown to improve parameter estimation for different land cover types compared with existing techniques based on the Extended Kalman Filter (EKF), which require linearization of the harmonic model. As these parameters are representative of a given land cover, their applicability to near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem and is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform; by clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated against high spatial resolution change maps of the given regions.
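As a concrete illustration of the estimation step described above, here is a minimal bootstrap particle filter tracking the hidden parameters (mean, cosine and sine amplitudes) of an annual harmonic NDVI model on synthetic data. The random-walk state evolution, noise levels and particle count are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Harmonic NDVI model: ndvi_t = m + A*cos(2*pi*t/46) + B*sin(2*pi*t/46) + noise
# (46 eight-day composites per year).
T, N = 460, 2000                        # ten years of composites, particle count
t = np.arange(T)
truth = 0.5 + 0.2 * np.cos(2 * np.pi * t / 46) + 0.05 * np.sin(2 * np.pi * t / 46)
obs = truth + rng.normal(0, 0.03, T)    # synthetic NDVI observations

parts = rng.normal([0.5, 0.0, 0.0], 0.2, size=(N, 3))   # particles: (m, A, B)
est = np.empty((T, 3))
for k in range(T):
    parts += rng.normal(0, 0.002, parts.shape)           # random-walk drift
    pred = (parts[:, 0] + parts[:, 1] * np.cos(2 * np.pi * k / 46)
            + parts[:, 2] * np.sin(2 * np.pi * k / 46))
    w = np.exp(-0.5 * ((obs[k] - pred) / 0.03) ** 2)     # Gaussian likelihood
    w /= w.sum()
    est[k] = w @ parts                                   # posterior mean estimate
    parts = parts[rng.choice(N, N, p=w)]                 # multinomial resampling

print(est[-1])   # should approach the true (0.5, 0.2, 0.05)
```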

  10. Score Normalization using Logistic Regression with Expected Parameters

    NARCIS (Netherlands)

    Aly, Robin

    State-of-the-art score normalization methods use generative models that rely on sometimes unrealistic assumptions. We propose a novel parameter estimation method for score normalization based on logistic regression. Experiments on the Gov2 and CluewebA collection indicate that our method is
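A minimal sketch of the calibration step that score normalization via logistic regression builds on: fit a logistic model mapping raw retrieval scores to probabilities of relevance, so scores become comparable across queries. The paper's "expected parameters" idea (estimating the regression parameters without per-query training data) is not reproduced here, and the synthetic score distributions are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(2.0, 1.0, 200),    # relevant documents
                         rng.normal(0.0, 1.0, 2000)])  # non-relevant documents
labels = np.concatenate([np.ones(200), np.zeros(2000)])

# Logistic regression turns a raw score into a calibrated P(relevant | score).
model = LogisticRegression().fit(scores.reshape(-1, 1), labels)
print(model.predict_proba(np.array([[1.5]]))[0, 1])
```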

  11. Pattern Matching Framework to Estimate the Urgency of Off-Normal Situations in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Jin Soo; Park, Sang Jun; Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyo Jin; Park, Soon Yeol [Korea Hydro and Nuclear Power, Yeonggwang (Korea, Republic of)

    2010-10-15

According to interviews with power plant operators, skilled operators can often recognize off-normal situations from an incipient stage and anticipate the possibility of upcoming trips, even though it is difficult to clarify the cause of the off-normal situation. From these interviews, we could establish the feasibility of two assumptions for the diagnosis of off-normal conditions: first, that observing the early stage, when an off-normal situation starts to grow, allows prediction of whether an accidental shutdown will happen; and second, that the same early-stage observation can provide the remaining time to a trip as well as the cause of the off-normal situation. For this purpose, the development of on-line monitoring systems using various data processing techniques in nuclear power plants (NPPs) has received increasing attention and has become an important contributor to improved performance and economics. Many studies have suggested diagnostic methodologies. One representative method uses distance discrimination as a similarity measure, for example the Euclidean distance. A variety of artificial intelligence techniques, such as neural networks, have been developed as well, and some of these methodologies reduce the data dimensions to work more effectively. While sharing the same motivation as these previous achievements, this study proposes non-parametric pattern matching techniques to reduce the uncertainty involved in the selection of models and modeling processes. This is characterized by two aspects: first, instead of considering only a few typical scenarios as in most studies, this study uses the entire set of off-normal situations anticipated in NPPs, created by a full-scope simulator. Second, many existing studies adopted the process of forming a diagnosis model through a so-called training technique or a parametric

  12. Pattern Matching Framework to Estimate the Urgency of Off-Normal Situations in NPPs

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Park, Sang Jun; Heo, Gyun Young; Park, Jin Kyun; Kim, Hyo Jin; Park, Soon Yeol

    2010-01-01

According to interviews with power plant operators, skilled operators can often recognize off-normal situations from an incipient stage and anticipate the possibility of upcoming trips, even though it is difficult to clarify the cause of the off-normal situation. From these interviews, we could establish the feasibility of two assumptions for the diagnosis of off-normal conditions: first, that observing the early stage, when an off-normal situation starts to grow, allows prediction of whether an accidental shutdown will happen; and second, that the same early-stage observation can provide the remaining time to a trip as well as the cause of the off-normal situation. For this purpose, the development of on-line monitoring systems using various data processing techniques in nuclear power plants (NPPs) has received increasing attention and has become an important contributor to improved performance and economics. Many studies have suggested diagnostic methodologies. One representative method uses distance discrimination as a similarity measure, for example the Euclidean distance. A variety of artificial intelligence techniques, such as neural networks, have been developed as well, and some of these methodologies reduce the data dimensions to work more effectively. While sharing the same motivation as these previous achievements, this study proposes non-parametric pattern matching techniques to reduce the uncertainty involved in the selection of models and modeling processes. This is characterized by two aspects: first, instead of considering only a few typical scenarios as in most studies, this study uses the entire set of off-normal situations anticipated in NPPs, created by a full-scope simulator. Second, many existing studies adopted the process of forming a diagnosis model through a so-called training technique or a parametric

  13. GC-Content Normalization for RNA-Seq Data

    Science.gov (United States)

    2011-01-01

    Background Transcriptome sequencing (RNA-Seq) has become the assay of choice for high-throughput studies of gene expression. However, as is the case with microarrays, major technology-related artifacts and biases affect the resulting expression measures. Normalization is therefore essential to ensure accurate inference of expression levels and subsequent analyses thereof. Results We focus on biases related to GC-content and demonstrate the existence of strong sample-specific GC-content effects on RNA-Seq read counts, which can substantially bias differential expression analysis. We propose three simple within-lane gene-level GC-content normalization approaches and assess their performance on two different RNA-Seq datasets, involving different species and experimental designs. Our methods are compared to state-of-the-art normalization procedures in terms of bias and mean squared error for expression fold-change estimation and in terms of Type I error and p-value distributions for tests of differential expression. The exploratory data analysis and normalization methods proposed in this article are implemented in the open-source Bioconductor R package EDASeq. Conclusions Our within-lane normalization procedures, followed by between-lane normalization, reduce GC-content bias and lead to more accurate estimates of expression fold-changes and tests of differential expression. Such results are crucial for the biological interpretation of RNA-Seq experiments, where downstream analyses can be sensitive to the supplied lists of genes. PMID:22177264
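The normalization itself is implemented in the EDASeq R package named in the abstract; purely as an illustration of the idea, the Python sketch below stratifies genes into GC-content bins within one lane and rescales counts so every bin shares the lane-wide median. The 20-bin median-matching scheme is a simplification of the package's regression-based options, and the synthetic GC bias is an assumption.

```python
import numpy as np
import pandas as pd

def gc_normalize(counts, gc, n_bins=20):
    """Within-lane GC normalization by bin-wise median matching (illustrative)."""
    df = pd.DataFrame({"counts": counts, "gc": gc})
    df["bin"] = pd.qcut(df["gc"], n_bins, duplicates="drop")   # GC strata
    overall = df.loc[df["counts"] > 0, "counts"].median()      # lane-wide target
    bin_med = df.groupby("bin", observed=True)["counts"].transform("median")
    return df["counts"] * overall / bin_med.replace(0, np.nan)

rng = np.random.default_rng(2)
gc = rng.uniform(0.3, 0.7, 5000)
counts = rng.poisson(50 * np.exp(2 * (gc - 0.5)))   # synthetic GC-dependent bias
print(gc_normalize(counts, gc).head())
```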

  14. The resource theory of quantum reference frames: manipulations and monotones

    International Nuclear Information System (INIS)

    Gour, Gilad; Spekkens, Robert W

    2008-01-01

Every restriction on quantum operations defines a resource theory, determining how quantum states that cannot be prepared under the restriction may be manipulated and used to circumvent the restriction. A superselection rule (SSR) is a restriction that arises through the lack of a classical reference frame, and the states that circumvent it (the resource) are quantum reference frames. We consider the resource theories that arise from three types of SSRs, associated respectively with lacking: (i) a phase reference, (ii) a frame for chirality, and (iii) a frame for spatial orientation. Focusing on pure unipartite quantum states (and in some cases restricting our attention even further to subsets of these), we explore single-copy and asymptotic manipulations. In particular, we identify the necessary and sufficient conditions for a deterministic transformation between two resource states to be possible and, when these conditions are not met, the maximum probability with which the transformation can be achieved. We also determine when a particular transformation can be achieved reversibly in the limit of arbitrarily many copies and find the maximum rate of conversion. A comparison of the three resource theories demonstrates that the extent to which resources can be interconverted decreases as the strength of the restriction increases. Along the way, we introduce several measures of frameness and prove that these are monotonically non-increasing under various classes of operations that are permitted by the SSR.

  15. Monotonicity of the ratio of modified Bessel functions of the first kind with applications.

    Science.gov (United States)

    Yang, Zhen-Hang; Zheng, Shen-Zhou

    2018-01-01

Let [Formula: see text] with [Formula: see text] be the modified Bessel functions of the first kind of order v. In this paper, we prove the monotonicity of the function [Formula: see text] on [Formula: see text] for different values of the parameter p with [Formula: see text]. As applications, we deduce some new Simpson-Spector-type inequalities for [Formula: see text] and derive a new type of bounds [Formula: see text] ([Formula: see text]) for [Formula: see text]. In particular, we show that the upper bound [Formula: see text] for [Formula: see text] is the minimum over all upper bounds [Formula: see text], where [Formula: see text], and is not comparable with other sharpest upper bounds. We also find such upper bounds for [Formula: see text] with [Formula: see text] and for [Formula: see text] with [Formula: see text].

  16. Effect of Pulse Polarity on Thresholds and on Non-monotonic Loudness Growth in Cochlear Implant Users.

    Science.gov (United States)

    Macherey, Olivier; Carlyon, Robert P; Chatron, Jacques; Roman, Stéphane

    2017-06-01

    Most cochlear implants (CIs) activate their electrodes non-simultaneously in order to eliminate electrical field interactions. However, the membrane of auditory nerve fibers needs time to return to its resting state, causing the probability of firing to a pulse to be affected by previous pulses. Here, we provide new evidence on the effect of pulse polarity and current level on these interactions. In experiment 1, detection thresholds and most comfortable levels (MCLs) were measured in CI users for 100-Hz pulse trains consisting of two consecutive biphasic pulses of the same or of opposite polarity. All combinations of polarities were studied: anodic-cathodic-anodic-cathodic (ACAC), CACA, ACCA, and CAAC. Thresholds were lower when the adjacent phases of the two pulses had the same polarity (ACCA and CAAC) than when they were different (ACAC and CACA). Some subjects showed a lower threshold for ACCA than for CAAC while others showed the opposite trend demonstrating that polarity sensitivity at threshold is genuine and subject- or electrode-dependent. In contrast, anodic (CAAC) pulses always showed a lower MCL than cathodic (ACCA) pulses, confirming previous reports. In experiments 2 and 3, the subjects compared the loudness of several pulse trains differing in current level separately for ACCA and CAAC. For 40 % of the electrodes tested, loudness grew non-monotonically as a function of current level for ACCA but never for CAAC. This finding may relate to a conduction block of the action potentials along the fibers induced by a strong hyperpolarization of their central processes. Further analysis showed that the electrodes showing a lower threshold for ACCA than for CAAC were more likely to yield a non-monotonic loudness growth. It is proposed that polarity sensitivity at threshold reflects the local neural health and that anodic asymmetric pulses should preferably be used to convey sound information while avoiding abnormal loudness percepts.

  17. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    Directory of Open Access Journals (Sweden)

    Jizheng Yi

    Full Text Available Illumination normalization of face image for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method firstly divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Secondly, three local regions, which meet the requirements of lower complexity and larger average gray value, are selected to calculate the final illuminant direction according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. After knowing the final illuminant direction of the input face image, the Retinex algorithm is improved from two aspects: (1 we optimize the surround function; (2 we intercept the values in both ends of histogram of face image, determine the range of gray levels, and stretch the range of gray levels into the dynamic range of display device. Finally, we achieve illumination normalization and get the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of 3D face and reflective surface model. The experimental results using extended Yale face database B and CMU-PIE show that our method achieves better normalization effect comparing with the existing techniques.

  18. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    Science.gov (United States)

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face image for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method firstly divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Secondly, three local regions, which meet the requirements of lower complexity and larger average gray value, are selected to calculate the final illuminant direction according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. After knowing the final illuminant direction of the input face image, the Retinex algorithm is improved from two aspects: (1) we optimize the surround function; (2) we intercept the values in both ends of histogram of face image, determine the range of gray levels, and stretch the range of gray levels into the dynamic range of display device. Finally, we achieve illumination normalization and get the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of 3D face and reflective surface model. The experimental results using extended Yale face database B and CMU-PIE show that our method achieves better normalization effect comparing with the existing techniques.
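To make the Retinex step of the two records above concrete, here is a minimal single-scale Retinex with the histogram clip-and-stretch the abstracts describe. The illuminant-direction estimation from sixteen local regions and the authors' specific surround-function optimization are not reproduced; the Gaussian surround width and the 1% clipping fraction are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_normalize(img, sigma=30, clip=0.01):
    """Single-scale Retinex with histogram clipping and stretching (illustrative)."""
    img = img.astype(np.float64) + 1.0
    surround = gaussian_filter(img, sigma)      # smoothed illumination estimate
    r = np.log(img) - np.log(surround)          # reflectance in the log domain
    lo, hi = np.quantile(r, [clip, 1 - clip])   # intercept both histogram ends
    r = np.clip(r, lo, hi)
    return ((r - lo) / (hi - lo) * 255).astype(np.uint8)  # stretch to display range

face = np.random.default_rng(3).uniform(0, 255, (112, 92))  # stand-in for a face image
out = retinex_normalize(face)
print(out.min(), out.max())   # spans the full display range
```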

  19. The response of a linear monostable system and its application in parameters estimation for PSK signals

    International Nuclear Information System (INIS)

    Duan, Chaowei; Zhan, Yafeng

    2016-01-01

The output characteristics of a linear monostable system driven with a periodic signal and an additive white Gaussian noise are studied in this paper. Theoretical analysis shows that the output signal-to-noise ratio (SNR) decreases monotonically with increasing noise intensity but the output SNR-gain is stable. Inspired by this high SNR-gain phenomenon, this paper applies the linear monostable system to a parameter estimation algorithm for phase shift keying (PSK) signals and improves the estimation performance. - Highlights: • The response of a linear monostable system driven with a periodic signal and an additive white Gaussian noise is analyzed. • The optimal parameter of this linear monostable system to maximize the output SNR-gain is obtained. • Applying this linear monostable system to a parameter estimation algorithm for PSK signals improves performance.

  20. Studies on the zeros of Bessel functions and methods for their computation: 3. Some new works on monotonicity, convexity, and other properties

    Science.gov (United States)

    Kerimov, M. K.

    2016-12-01

    This paper continues the study of real zeros of Bessel functions begun in the previous parts of this work (see M. K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014); 56 (7), 1175-1208 (2016)). Some new results regarding the monotonicity, convexity, concavity, and other properties of zeros are described. Additionally, the zeros of q-Bessel functions are investigated.

  1. Estimating Subglottal Pressure from Neck-Surface Acceleration during Normal Voice Production

    Science.gov (United States)

    Fryd, Amanda S.; Van Stan, Jarrad H.; Hillman, Robert E.; Mehta, Daryush D.

    2016-01-01

    Purpose: The purpose of this study was to evaluate the potential for estimating subglottal air pressure using a neck-surface accelerometer and to compare the accuracy of predicting subglottal air pressure relative to predicting acoustic sound pressure level (SPL). Method: Indirect estimates of subglottal pressure (P[subscript sg]') were obtained…

  2. The estimation of the number of underground coal miners and normalization collective dose at present in China

    International Nuclear Information System (INIS)

    Liu, Fu-dong; Chen, Lu; Pan, Zi-qiang; Liu, Sen-lin; Chen, Ling; Wang, Chun-hong

    2017-01-01

Owing to improvements in production technology and adjustments in the energy structure, and because town-ownership and private-ownership coal mines (TPCM) were closed or merged under national policy, the number of underground miners in China has changed compared with 2004, so the collective dose and normalization collective dose in the different types of coal mine have changed as well. In this paper, the coal mines in China are divided into three types according to ventilation conditions and annual output: national key coal mines (NKCM), state-owned local coal mines (SLCM) and TPCM. The number of underground coal miners, the collective dose and the normalization collective dose are estimated based on surveyed annual output and raw-coal production efficiency for 2005-2014. The estimated total number of underground coal miners in China is 5.1 million for 2005-2009, comprising 1 million, 0.9 million and 3.2 million for NKCM, SLCM and TPCM, respectively. For 2010-2014 there are 4.7 million underground coal miners in total: 1.4 million, 1.2 million and 2.1 million for NKCM, SLCM and TPCM, respectively. The collective dose in 2005-2009 is 11 335 man·Sv·y⁻¹, comprising 280, 495 and 10 560 man·Sv·y⁻¹ for NKCM, SLCM and TPCM. For 2010-2014 the total is 7982 man·Sv·y⁻¹, with 392, 660 and 6930 man·Sv·y⁻¹ for the respective types. The main contributor to the collective dose is therefore TPCM. The normalization collective dose in 2005-2009 is 0.0025, 0.015 and 0.117 man·Sv per 10 kt for NKCM, SLCM and TPCM, respectively; for 2010-2014 the corresponding values are 0.0018, 0.010 and 0.107 man·Sv per 10 kt. The normalization collective dose thus decreases year by year. (authors)
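Since the normalization collective dose is simply the collective dose per unit of raw-coal output, the record's figures can be sanity-checked with one division. This back-of-envelope arithmetic is ours, not the paper's:

```python
# Implied TPCM raw-coal output for 2005-2009 from the figures quoted above.
collective_dose = 10560          # man.Sv per year (TPCM, 2005-2009)
normalized_dose = 0.117          # man.Sv per 10 kt (TPCM, 2005-2009)
implied_output_10kt = collective_dose / normalized_dose
print(f"{implied_output_10kt * 10 / 1000:.0f} Mt of raw coal per year")  # ~903 Mt
```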

  3. Non-monotonic swelling of surface grafted hydrogels induced by pH and/or salt concentration

    Science.gov (United States)

    Longo, Gabriel S.; Olvera de la Cruz, Monica; Szleifer, I.

    2014-09-01

We use a molecular theory to study the thermodynamics of a weak-polyacid hydrogel film that is chemically grafted to a solid surface. We investigate the response of the material to changes in the pH and salt concentration of the buffer solution. Our results show that the pH-triggered swelling of the hydrogel film has a non-monotonic dependence on the acidity of the bath solution. At most salt concentrations, the thickness of the hydrogel film presents a maximum when the pH of the solution is increased from acidic values. The quantitative details of such swelling behavior, which is not observed when the film is physically deposited on the surface, depend on the molecular architecture of the polymer network. This swelling-deswelling transition is the consequence of the complex interplay between the chemical free energy (acid-base equilibrium), the electrostatic repulsions between charged monomers, which are both modulated by the absorption of ions, and the ability of the polymer network to regulate charge and control its volume (molecular organization). In the absence of such competition, for example, for high salt concentrations, the film swells monotonically with increasing pH. A deswelling-swelling transition is similarly predicted as a function of the salt concentration at intermediate pH values. This reentrant behavior, which is due to the coupling between charge regulation and the two opposing effects triggered by salt concentration (screening electrostatic interactions and charging/discharging the acid groups), is similar to that found in end-grafted weak polyelectrolyte layers. Understanding how to control the response of the material to different stimuli, in terms of its molecular structure and local chemical composition, can help the targeted design of applications with extended functionality. We describe the response of the material to an applied pressure and an electric potential. We present profiles that outline the local chemical composition of the film.

  4. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

The reference change value concept assumes a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples but, in practice, decisions based on the reference change value are often based on only two consecutive results, so the original reference change value method can generate false-positive results. The aim of this study was to investigate false-positive results using five different published methods for the calculation of the reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst on both normally and ln-normally distributed data.

  5. On the estimation and testing of predictive panel regressions

    NARCIS (Netherlands)

    Karabiyik, H.; Westerlund, Joakim; Narayan, Paresh

    2016-01-01

Hjalmarsson (2010) considers an OLS-based estimator of predictive panel regressions that is argued to be mixed normal under very general conditions. In a recent paper, Westerlund et al. (2016) show that while consistent, the estimator is generally not mixed normal, which invalidates standard normal inference.

  6. Non-monotonic reorganization of brain networks with Alzheimer’s disease progression

    Directory of Open Access Journals (Sweden)

    Hyoungkyu eKim

    2015-06-01

Background: Identification of stage-specific changes in the brain networks of patients with Alzheimer's disease (AD) is critical for rationally designed therapeutics that delay the progression of the disease. However, the pathological neural processes and the resulting changes in brain network topology with disease progression are not clearly known. Methods: The current study was designed to investigate alterations in the network topology of resting-state fMRI among patients in three different clinical dementia rating (CDR) groups (CDR = 0.5, 1, 2), an amnestic mild cognitive impairment (aMCI) group, and an age-matched healthy subject group. We constructed cost networks from these five groups and analyzed their network properties using graph-theoretical measures. Results: The topological properties of AD brain networks differed in a non-monotonic, stage-specific manner. Interestingly, the local and global efficiency and betweenness of the network were higher in the aMCI and AD (CDR 1) groups than in the groups at prior stages. The number, location, and structure of rich clubs changed dynamically as the disease progressed. Conclusions: The alterations in brain network topology are quite dynamic with AD progression, and these dynamic changes in network patterns should be considered meticulously for efficient therapeutic interventions in AD.

  7. Considerations for potency equivalent calculations in the Ah receptor-based CALUX bioassay: normalization of superinduction results for improved sample potency estimation.

    Science.gov (United States)

    Baston, David S; Denison, Michael S

    2011-02-15

    The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e. the Ah receptor (AhR)) allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX response to PCDD/Fs from sediment and soil extracts and not only report the occurrence of superinduction of the CALUX bioassay, but we describe a mechanistically based approach for normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. SU-E-I-78: Establishing a Protocol for Quick Estimation of Thyroid Internal Contamination with 131I in Normal and Emergency Situations

    Energy Technology Data Exchange (ETDEWEB)

    Naderi, S Mehdizadeh [Radiation Research Center, Shiraz university, Shiraz, Fars (Iran, Islamic Republic of); Karimipourfard, M; Lotfalizadeh, F [Radiation medicine department, school of mechanical engineering, Shiraz uni, Shiraz, Fars (Iran, Islamic Republic of); Zamani, E; Molaeimanesh, Z; Sadeghi, M; Sina, S; Faghihi, R [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Entezarmahdi, M [Shahid Beheshti University, Shiraz, Fars (Iran, Islamic Republic of)

    2015-06-15

Purpose: I-131 is one of the radionuclides most frequently used in nuclear medicine departments. Radiation workers who manipulate unsealed, radiotoxic iodine should be monitored for internal contamination. In this study a protocol was established for estimating the I-131 activity absorbed in the thyroid glands of nuclear medicine staff in normal working conditions and also in accidents. Methods: I-131 with an activity of 10 μCi was injected inside the thyroid gland of a home-made anthropomorphic neck phantom. The phantom is made of PMMA as soft tissue and aluminium as bone. The dose rate at different distances from the surface of the neck phantom was measured using a scintillation detector over a duration of two months. Calibration factors were then obtained for converting the dose rate at each distance to the iodine activity inside the thyroid. Results: According to the results of this study, the calibration factors for converting the dose rates (nSv/h) at distances of 0 cm, 1 cm, 6 cm, 11 cm, and 16 cm to the activity (kBq) inside the thyroid were found to be 0.03, 0.04, 0.14, 0.29, and 0.49, respectively. Conclusion: This method can be effectively used for quick estimation of the I-131 concentration inside the thyroid of the staff for daily checks in normal working conditions and also in accidents.

  9. SU-E-I-78: Establishing a Protocol for Quick Estimation of Thyroid Internal Contamination with 131I in Normal and Emergency Situations

    International Nuclear Information System (INIS)

    Naderi, S Mehdizadeh; Karimipourfard, M; Lotfalizadeh, F; Zamani, E; Molaeimanesh, Z; Sadeghi, M; Sina, S; Faghihi, R; Entezarmahdi, M

    2015-01-01

Purpose: I-131 is one of the radionuclides most frequently used in nuclear medicine departments. Radiation workers who manipulate unsealed, radiotoxic iodine should be monitored for internal contamination. In this study a protocol was established for estimating the I-131 activity absorbed in the thyroid glands of nuclear medicine staff in normal working conditions and also in accidents. Methods: I-131 with an activity of 10 μCi was injected inside the thyroid gland of a home-made anthropomorphic neck phantom. The phantom is made of PMMA as soft tissue and aluminium as bone. The dose rate at different distances from the surface of the neck phantom was measured using a scintillation detector over a duration of two months. Calibration factors were then obtained for converting the dose rate at each distance to the iodine activity inside the thyroid. Results: According to the results of this study, the calibration factors for converting the dose rates (nSv/h) at distances of 0 cm, 1 cm, 6 cm, 11 cm, and 16 cm to the activity (kBq) inside the thyroid were found to be 0.03, 0.04, 0.14, 0.29, and 0.49, respectively. Conclusion: This method can be effectively used for quick estimation of the I-131 concentration inside the thyroid of the staff for daily checks in normal working conditions and also in accidents.
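A minimal sketch of how the calibration factors reported in the two records above would be applied in practice: the measured dose rate times the distance-specific factor gives the thyroid activity. The factors come from the record; the example dose rate is hypothetical.

```python
# Distance from neck surface (cm) -> calibration factor (kBq per nSv/h)
CAL_FACTORS = {0: 0.03, 1: 0.04, 6: 0.14, 11: 0.29, 16: 0.49}

def thyroid_activity_kbq(dose_rate_nsv_per_h, distance_cm):
    """Activity (kBq) = calibration factor x measured dose rate (nSv/h)."""
    return CAL_FACTORS[distance_cm] * dose_rate_nsv_per_h

print(thyroid_activity_kbq(500.0, 6))  # 70 kBq for a 500 nSv/h reading at 6 cm
```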

  10. Experimental Studies on Behaviour of Reinforced Geopolymer Concrete Beams Subjected to Monotonic Static Loading

    Science.gov (United States)

    Madheswaran, C. K.; Ambily, P. S.; Dattatreya, J. K.; Ramesh, G.

    2015-06-01

This work describes an experimental investigation of the behaviour of reinforced GPC beams subjected to monotonic static loading. The overall dimensions of the GPC beams are 250 mm × 300 mm × 2200 mm, with an effective span of 1600 mm. The beams were designed to be critical in shear as per IS:456 provisions. The specimens were produced from a mix incorporating fly ash and ground granulated blast furnace slag, designed for a compressive strength of 40 MPa at 28 days. The reinforced concrete specimens were cured at ambient temperature under wet burlap. The parameters investigated include the shear span to depth ratio (a/d = 1.5 and 2.0). Experiments were conducted on 12 GPC beams and four OPCC control beams, all tested using a 2000 kN servo-controlled hydraulic actuator. This paper presents the results of these experimental studies.

  11. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    Science.gov (United States)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles accurately (MAE of about 1% for a monotonically dry profile, nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index-based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil-dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated by land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data-scarce regions.
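As a simplified stand-in for the profile construction described above: among smooth profiles pinned to the surface and bottom moisture contents, choose the curvature that reproduces the remotely sensed column mean. The exponential profile family and the grid below are illustrative assumptions; the actual POME derivation maximizes Shannon entropy subject to these same three constraints.

```python
import numpy as np
from scipy.optimize import brentq

def profile(z, theta_s, theta_b, lam):
    """Profile over normalized depth z in [0, 1]; lam controls curvature."""
    if abs(lam) < 1e-9:
        return theta_s + (theta_b - theta_s) * z          # linear limit
    return theta_s + (theta_b - theta_s) * (np.expm1(lam * z) / np.expm1(lam))

def fit_profile(theta_surface, theta_mean, theta_bottom, n=101):
    z = np.linspace(0.0, 1.0, n)
    # Solve for the curvature that matches the column-mean constraint.
    err = lambda lam: profile(z, theta_surface, theta_bottom, lam).mean() - theta_mean
    lam = brentq(err, -50.0, 50.0)
    return z, profile(z, theta_surface, theta_bottom, lam)

# e.g. dry surface (AMSR-E), wetter column mean (ALEXI), constant bottom content
z, theta = fit_profile(theta_surface=0.12, theta_mean=0.22, theta_bottom=0.30)
print(theta[:5])
```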

  12. On Better Estimating and Normalizing the Relationship between Clinical Parameters: Comparing Respiratory Modulations in the Photoplethysmogram and Blood Pressure Signal (DPOP versus PPV

    Directory of Open Access Journals (Sweden)

    Paul S. Addison

    2015-01-01

DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements.

  13. On better estimating and normalizing the relationship between clinical parameters: comparing respiratory modulations in the photoplethysmogram and blood pressure signal (DPOP versus PPV).

    Science.gov (United States)

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-01-01

    DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements.
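A small sketch of the LMSO idea from the two records above: a least-median-of-squares straight line through the origin relating PPV to DPOP, found here by brute-force slope search, after which DPOP is rescaled onto the PPV scale via the fitted gradient. The synthetic data, the outlier fraction standing in for asymmetric baseline artifacts, and the search grid are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
dpop = rng.uniform(5, 40, 200)                 # DPOP values, %
ppv = 0.8 * dpop + rng.normal(0, 2, 200)       # PPV with true gradient 0.8
ppv[:10] += 25                                 # asymmetric baseline-induced outliers

# Least-median-of-squares fit constrained through the origin.
slopes = np.linspace(0.1, 2.0, 2000)
med_sq = [np.median((ppv - m * dpop) ** 2) for m in slopes]
m_lmso = slopes[int(np.argmin(med_sq))]        # robust to the outliers above

dpop_normalized = m_lmso * dpop                # rescale DPOP onto the PPV scale
print(round(m_lmso, 3))                        # close to 0.8 despite outliers
```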

  14. Estimated Trans-Lamina Cribrosa Pressure Differences in Low-Teen and High-Teen Intraocular Pressure Normal Tension Glaucoma: The Korean National Health and Nutrition Examination Survey.

    Directory of Open Access Journals (Sweden)

    Si Hyung Lee

To investigate the association between estimated trans-lamina cribrosa pressure difference (TLCPD) and the prevalence of normal tension glaucoma (NTG) with low-teen and high-teen intraocular pressure (IOP) using a population-based study design. A total of 12,743 adults (≥ 40 years of age) who participated in the Korean National Health and Nutrition Examination Survey (KNHANES) from 2009 to 2012 were included. Using a previously developed formula, cerebrospinal fluid pressure (CSFP) in mmHg was estimated as 0.55 × body mass index (kg/m²) + 0.16 × diastolic blood pressure (mmHg) − 0.18 × age (years) − 1.91. TLCPD was calculated as IOP − CSFP. The NTG subjects were divided into two groups according to IOP level: a low-teen NTG (IOP ≤ 15 mmHg) and a high-teen NTG (15 mmHg < IOP ≤ 21 mmHg) group. The association between TLCPD and the prevalence of NTG was assessed in the low- and high-teen IOP groups. In the normal population (n = 12,069), the weighted mean estimated CSFP was 11.69 ± 0.04 mmHg and the weighted mean TLCPD 2.31 ± 0.06 mmHg. Significantly higher TLCPD (p < 0.001; 6.48 ± 0.27 mmHg) was found in the high-teen NTG group compared with the normal group. On the other hand, there was no significant difference in TLCPD between normal and low-teen NTG subjects (p = 0.395; 2.31 ± 0.06 vs. 2.11 ± 0.24 mmHg). Multivariate logistic regression analysis revealed that TLCPD was significantly associated with the prevalence of NTG in the high-teen IOP group (p = 0.006; OR: 1.09; 95% CI: 1.02, 1.15), but not in the low-teen IOP group (p = 0.636). Instead, the presence of hypertension was significantly associated with the prevalence of NTG in the low-teen IOP group (p < 0.001; OR: 1.65; 95% CI: 1.26, 2.16). TLCPD was significantly associated with the prevalence of NTG in high-teen IOP subjects, but not in low-teen IOP subjects, in whom hypertension may be more closely associated. This study suggests that the underlying mechanisms may differ between low-teen and high-teen NTG patients.
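The estimation formulas are quoted in full in the record, so they translate directly to code; only the example subject's values are hypothetical.

```python
def estimated_csfp(bmi_kg_m2, dbp_mmhg, age_years):
    """CSFP (mmHg) = 0.55*BMI + 0.16*DBP - 0.18*age - 1.91, as quoted above."""
    return 0.55 * bmi_kg_m2 + 0.16 * dbp_mmhg - 0.18 * age_years - 1.91

def tlcpd(iop_mmhg, bmi_kg_m2, dbp_mmhg, age_years):
    """Trans-lamina cribrosa pressure difference: IOP - CSFP."""
    return iop_mmhg - estimated_csfp(bmi_kg_m2, dbp_mmhg, age_years)

# Hypothetical low-teen IOP subject: IOP 14 mmHg, BMI 24, DBP 80 mmHg, age 60
print(tlcpd(iop_mmhg=14, bmi_kg_m2=24, dbp_mmhg=80, age_years=60))  # ~0.71 mmHg
```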

  15. Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices

    Directory of Open Access Journals (Sweden)

    Chandan Sharma

    2017-08-01

This article reports an experimental approach to analyze the kink effect phenomenon which is usually observed during GaN high electron mobility transistor (HEMT) operation. De-trapping of charge carriers is one of the prominent reasons behind the kink effect. The commonly observed non-monotonic behavior of kink pattern is analyzed under two different device operating conditions and it is found that two different de-trapping mechanisms are responsible for a particular kink behavior. These different de-trapping mechanisms are investigated through a time delay analysis which shows the presence of traps with different time constants. Further voltage sweep and temperature analysis corroborates the finding that different de-trapping mechanisms play a role in kink behavior under different device operating conditions.

  16. Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices

    Science.gov (United States)

    Sharma, Chandan; Laishram, Robert; Amit, Rawal, Dipendra Singh; Vinayak, Seema; Singh, Rajendra

    2017-08-01

    This article reports an experimental approach to analyze the kink effect phenomenon which is usually observed during the GaN high electron mobility transistor (HEMT) operation. De-trapping of charge carriers is one of the prominent reasons behind the kink effect. The commonly observed non-monotonic behavior of kink pattern is analyzed under two different device operating conditions and it is found that two different de-trapping mechanisms are responsible for a particular kink behavior. These different de-trapping mechanisms are investigated through a time delay analysis which shows the presence of traps with different time constants. Further voltage sweep and temperature analysis corroborates the finding that different de-trapping mechanisms play a role in kink behavior under different device operating conditions.

  17. Non-monotonic dose dependence of the Ge- and Ti-centres in quartz

    International Nuclear Information System (INIS)

    Woda, C.; Wagner, G.A.

    2007-01-01

The dose response of the Ge- and Ti-centres in quartz is studied over a large dose range. After an initial signal increase in the low dose range, both defects show a pronounced decrease in signal intensities for high doses. The model by Euler and Kahan [1987. Radiation effects and anelastic loss in germanium-doped quartz. Phys. Rev. B 35 (9), 4351-4359], in which the signal drop is explained by an enhanced trapping of holes at the electron trapping site, is critically discussed. A generalization of the model is then developed, following similar considerations by Lawless et al. [2005. A model for non-monotonic dose dependence of thermoluminescence (TL). J. Phys. Condens. Matter 17, 737-753], who explained a signal drop in TL by an enhanced recombination rate with electrons at the recombination centre. Finally, an alternative model for the signal decay is given, based on the competition between single and double electron capture at the electron trapping site. From the critical discussion of the different models it is concluded that the double electron capture mechanism is the most probable effect for the dose response.

  18. Microstructure-based modelling of the long-term monotonic and cyclic creep of the martensitic steel X 20(22) CrMoV 12 1

    International Nuclear Information System (INIS)

    Henes, D.; Straub, S.; Blum, W.; Moehlig, H.; Granacher, J.; Berger, C.

    1999-01-01

    The current state of development of the composite model of deformation of the martensitic steel X 20(22) CrMoV 12 1 under conditions of creep is briefly described. The model is able to reproduce differences in monotonic creep strength of different melts with slightly different initial microstructures and to simulate cyclic creep with alternating phases of tension and compression. (orig.)

  19. Estimation of Branch Topology Errors in Power Networks by WLAN State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Rae [Soonchunhyang University(Korea); Song, Kyung Bin [Kei Myoung University(Korea)

    2000-06-01

The purpose of this paper is to detect and identify topology errors in order to maintain a reliable database for the state estimator. A two-stage estimation procedure is used to identify the topology errors. At the first stage, the WLAV state estimator, which can remove bad data during the estimation procedure, is run to find the suspected branches at which topology errors may have taken place. The resulting residuals are normalized and the measurements with significant normalized residuals are selected. A set of suspected branches is formed based on these selected measurements: if a selected measurement is a line flow, the corresponding branch is suspected; if it is an injection, then all the branches connecting the injection bus to its immediate neighbors are suspected. A new WLAV state estimator that adds the branch flow errors to the state vector is developed to identify the branch topology errors. Sample cases of a single topology error and of a topology error combined with a measurement error are applied to the IEEE 14-bus test system. (author). 24 refs., 1 fig., 9 tabs.

  20. Sparse Variational Bayesian SAGE Algorithm With Application to the Estimation of Multipath Wireless Channels

    DEFF Research Database (Denmark)

    Shutin, Dmitriy; Fleury, Bernard Henri

    2011-01-01

In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless channels. The application context of the algorithm considered in this contribution is parameter estimation from channel sounding measurements for radio channel modeling purposes. The new sparse VB-SAGE algorithm extends the classical SAGE algorithm in two respects: i) by monotonically minimizing the variational free energy and ii) by postulating parametric sparsity priors for the weights of the multipath components. We revisit the Gaussian sparsity priors within the sparse VB-SAGE framework and extend the results by considering Laplace priors. The structure of the VB-SAGE algorithm allows for an analytical stability analysis of the update expression...

  1. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
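The asymmetry argument is easy to reproduce by simulation: sample the two path estimates, form their product, and compare a symmetric normal-theory interval with the product's actual percentiles. The effect sizes and standard errors below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
a_hat = rng.normal(0.3, 0.1, 1_000_000)   # path a estimate: critical ratio 3
b_hat = rng.normal(0.2, 0.1, 1_000_000)   # path b estimate: critical ratio 2
prod = a_hat * b_hat                      # sampled indirect effects a*b

print("skewness:", stats.skew(prod), "excess kurtosis:", stats.kurtosis(prod))
print("normal-theory 95% CI:", prod.mean() - 1.96 * prod.std(),
      prod.mean() + 1.96 * prod.std())
print("product percentiles:  ", np.percentile(prod, [2.5, 97.5]))
# The percentile interval is visibly asymmetric around the mean, which is why
# symmetric normal-theory intervals for indirect effects can miss.
```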

  2. Estimation of shear wave speed in the human uterine cervix.

    Science.gov (United States)

    Carlson, L C; Feltovich, H; Palmeri, M L; Dahl, J J; Munoz del Rio, A; Hall, T J

    2014-04-01

To explore spatial variability within the cervix and the sensitivity of shear wave speed (SWS) for assessing softness/stiffness differences in ripened (softened) vs unripened tissue. We obtained SWS estimates from hysterectomy specimens (n = 22), a subset of which were ripened (n = 13). Multiple measurements were made longitudinally along the cervical canal on both the anterior and posterior sides of the cervix. Statistical tests of differences in the proximal vs distal, anterior vs posterior and ripened vs unripened cervix were performed with individual two-sample t-tests and a linear mixed model. Estimates of SWS increase monotonically from distal to proximal along the cervical canal, differ between the anterior and posterior cervix, and are significantly different in ripened vs unripened cervical tissue. Specifically, the mid-position SWS estimates for the unripened group were 3.45 ± 0.95 m/s (anterior; mean ± SD) and 3.56 ± 0.92 m/s (posterior), and 2.11 ± 0.45 m/s (anterior) and 2.68 ± 0.57 m/s (posterior) for the ripened group (P < 0.001). We propose that SWS estimation may be a valuable research and, ultimately, diagnostic tool for objective quantification of cervical stiffness/softness. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  3. Pharmacokinetics of tritiated water in normal and dietary-induced obese rats

    International Nuclear Information System (INIS)

    Shum, L.Y.; Jusko, W.J.

    1986-01-01

    Tritiated water disposition was characterized in normal and dietary-induced obese rats to assess pharmacokinetic concerns in calculating water space and estimating body fat. A monoexponential decline in serum tritium activity was observed in both groups of rats, thus facilitating use of various computational methods. The volume of distribution and the total clearance of tritium in obese rats were larger than in normal rats because of the increased body weight. The values of water space (volume of distribution) estimated from moment analysis or dose divided by serum tritium activity at time zero (extrapolated) or at 2 hr were all similar. Thus, obesity does not alter the distribution equilibrium time and distribution pattern of tritium, and the conventional 2-hr single blood sampling after intravenous injection is adequate to estimate the water space of normal and obese rats.

  4. Explosive percolation on directed networks due to monotonic flow of activity

    Science.gov (United States)

    Waagen, Alex; D'Souza, Raissa M.; Lu, Tsai-Ching

    2017-07-01

    An important class of real-world networks has directed edges, and in addition, some rank ordering on the nodes, for instance the popularity of users in online social networks. Yet, nearly all research related to explosive percolation has been restricted to undirected networks. Furthermore, information on such rank-ordered networks typically flows from higher-ranked to lower-ranked individuals, such as follower relations, replies, and retweets on Twitter. Here we introduce a simple percolation process on an ordered, directed network where edges are added monotonically with respect to the rank ordering. We show with a numerical approach that the emergence of a dominant strongly connected component appears to be discontinuous. Large-scale connectivity occurs at very high density compared with most percolation processes, and this holds not just for the strongly connected component structure but for the weakly connected component structure as well. We present analysis with branching processes, which explains this unusual behavior and gives basic intuition for the underlying mechanisms. We also show that before the emergence of a dominant strongly connected component, multiple giant strongly connected components may exist simultaneously. By adding a competitive percolation rule with a small bias to link users of similar rank, we show this leads to the formation of two distinct components, one of high-ranked users and one of low-ranked users, with little flow between the two components.

  5. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    Science.gov (United States)

    Han, Fang; Liu, Han

    2017-02-01

    Correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we present for the first time a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
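    As an illustrative aside (not taken from the paper itself): rank-based estimators of this kind typically recover the latent Pearson correlation from Kendall's tau through the sine transform R_jk = sin(pi * tau_jk / 2). A minimal Python sketch of such an estimator, with function and variable names of our own choosing, is:

        import numpy as np
        from scipy.stats import kendalltau

        def latent_correlation_matrix(X):
            # Sine-transformed Kendall's tau estimate of the latent Pearson
            # correlation matrix under an elliptical copula; X is n x d.
            d = X.shape[1]
            R = np.eye(d)
            for j in range(d):
                for k in range(j + 1, d):
                    tau, _ = kendalltau(X[:, j], X[:, k])
                    R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
            return R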

  6. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    We consider the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve and the random errors ej are independently normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of X is obtained by minimizing a weighted penalized least squares criterion; the solution of this optimization is a weighted natural polynomial spline. We then give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.
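    For illustration only (this generic snippet is not the authors' estimator): with heteroscedastic errors of variance σ²/bj, a weighted smoothing spline can be fitted by weighting each observation by the reciprocal of its error standard deviation, e.g. with scipy:

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 100)
        b = rng.uniform(0.5, 2.0, size=t.size)     # precision factors b_j > 0
        sigma = 0.2
        # Var(e_j) = sigma^2 / b_j
        z = np.sin(2.0 * np.pi * t) + rng.normal(0.0, sigma / np.sqrt(b))

        # Weighted least-squares smoothing spline: weights = 1 / std(e_j)
        spline = UnivariateSpline(t, z, w=np.sqrt(b) / sigma, k=3)
        x_hat = spline(t)   # estimated regression curve X(t)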

  7. Minimum K-S estimator using PH-transform technique

    Directory of Open Access Journals (Sweden)

    Somchit Boonthiem

    2016-07-01

    In this paper, we propose an improvement of the minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The data set comprises 47 fire accident records from an insurance company in Thailand. The experiment has two operations: in the first, we minimize the K-S statistic using a grid search over nine distributions (Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, log-normal, and exponential); in the second, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the minimum K-S estimator: the algorithm gives a better minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, log-normal, and exponential), while the minimum K-S estimators for the normal and logistic distributions are unchanged.
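    The first operation reduces to a one-dimensional search per distribution family: evaluate the K-S statistic of the fitted CDF on a parameter grid and keep the minimizer. A toy sketch (our own example with an exponential family; the paper's grids and data are not reproduced):

        import numpy as np
        from scipy.stats import kstest, expon

        def min_ks_scale(data, scales):
            # Grid search for the exponential scale minimizing the K-S statistic.
            best_scale, best_ks = None, np.inf
            for s in scales:
                ks = kstest(data, expon(scale=s).cdf).statistic
                if ks < best_ks:
                    best_scale, best_ks = s, ks
            return best_scale, best_ks

        data = np.random.default_rng(1).exponential(scale=3.0, size=47)
        scale_hat, ks_min = min_ks_scale(data, np.linspace(0.5, 10.0, 200))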

  8. Learning normalized inputs for iterative estimation in medical image segmentation.

    Science.gov (United States)

    Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel

    2018-02-01

    In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets, and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of an FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark when compared to other 2D methods. We improve segmentation results on CT images of liver lesions when compared with standard FCN methods. Moreover, when applying our 2D pipeline to a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions.

  9. Energy expenditure estimation during normal ambulation using triaxial accelerometry and barometric pressure

    International Nuclear Information System (INIS)

    Wang, Jingjing; Redmond, Stephen J; Narayanan, Michael R; Wang, Ning; Lovell, Nigel H; Voleno, Matteo; Cerutti, Sergio

    2012-01-01

    Energy expenditure (EE) is an important parameter in the assessment of physical activity. Most reliable techniques for EE estimation are too impractical for deployment in unsupervised free-living environments; those which do prove practical for unsupervised use often poorly estimate EE when the subject is working to change their altitude by walking up or down stairs or inclines. This study evaluates the augmentation of a standard triaxial accelerometry waist-worn wearable sensor with a barometric pressure sensor (as a surrogate measure for altitude) to improve EE estimates, particularly when the subject is ascending or descending stairs. Using a number of features extracted from the accelerometry and barometric pressure signals, a state space model is trained for EE estimation. An activity classification algorithm is also presented, and this activity classification output is also investigated as a model input parameter when estimating EE. This EE estimation model is compared against a similar model which solely utilizes accelerometry-derived features. A protocol (comprising lying, sitting, standing, walking, walking up stairs, walking down stairs and transitioning between activities) was performed by 13 healthy volunteers (8 males and 5 females; age: 23.8 ± 3.7 years; weight: 70.5 ± 14.9 kg), whose instantaneous oxygen uptake was measured by means of an indirect calorimetry system (K4b², COSMED, Italy). Activity classification improves from 81.65% to 90.91% when including barometric pressure information; when analyzing walking activities alone the accuracy increases from 70.23% to 98.54%. Using features derived from both accelerometry and barometry signals, combined with features relating to the activity classification in a state space model, resulted in a VO2 estimation bias of −0.00095 and precision (1.96 SD) of 3.54 ml min⁻¹ kg⁻¹. Using only accelerometry features gives a relatively worse performance, with a bias of −0.09 and precision (1.96 SD …

  10. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    Science.gov (United States)

    Ditz, P.; Borzì, A.

    2008-03-01

    A computer package (CNMS) is presented aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in its most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS; Catalogue identifier: ADEB_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 770; No. of bytes in distributed program, including test data, etc.: 7098; Distribution format: tar.gz; Programming language: MATLAB 6; Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM; Operating system: Microsoft Windows XP; Word size: 32; Classification: 4.9; Nature of problem: Quantum control; Solution method: Iterative; Running time: 60-600 sec

  11. Creep crack growth by grain boundary cavitation under monotonic and cyclic loading

    Science.gov (United States)

    Wen, Jian-Feng; Srivastava, Ankit; Benzerga, Amine; Tu, Shan-Tung; Needleman, Alan

    2017-11-01

    Plane strain finite deformation finite element calculations of mode I crack growth under small scale creep conditions are carried out. Attention is confined to isothermal conditions and two time histories of the applied stress intensity factor: (i) a monotonic increase to a plateau value subsequently held fixed; and (ii) a cyclic time variation. The crack growth calculations are based on a micromechanics constitutive relation that couples creep deformation and damage due to grain boundary cavitation. Grain boundary cavitation, with cavity growth due to both creep and diffusion, is taken as the sole failure mechanism contributing to crack growth. The influence on the crack growth rate of loading history parameters, such as the magnitude of the applied stress intensity factor, the ratio of the applied minimum to maximum stress intensity factors, the loading rate, the hold time and the cyclic loading frequency, are explored. The crack growth rate under cyclic loading conditions is found to be greater than under monotonic creep loading with the plateau applied stress intensity factor equal to its maximum value under cyclic loading conditions. Several features of the crack growth behavior observed in creep-fatigue tests naturally emerge, for example, a Paris law type relation is obtained for cyclic loading.

  12. Hypoglycemic Activity Of Polygala arvensis In Normal And Alloxan ...

    African Journals Online (AJOL)

    Blood glucose was estimated by the glucose oxidase method in both normal and alloxan-induced diabetic rats before and 2h after the administration of drugs. The glycogen content of the liver, skeletal muscle, cardiac muscle and glucose uptake by isolated rat hemi-diaphragm were estimated. It showed significant reduction ...

  13. Spatiotemporal variability and predictability of Normalized Difference Vegetation Index (NDVI) in Alberta, Canada.

    Science.gov (United States)

    Jiang, Rengui; Xie, Jiancang; He, Hailong; Kuo, Chun-Chao; Zhu, Jiwei; Yang, Mingxiang

    2016-09-01

    As one of the most popular vegetation indices used to monitor terrestrial vegetation productivity, the Normalized Difference Vegetation Index (NDVI) has been widely used to study plant growth and vegetation productivity around the world, especially the dynamic response of vegetation to climate change in terms of precipitation and temperature. Alberta is the most important agricultural and forestry province in Canada and has the best climatic observation systems in the country. However, few studies pertaining to climate change and vegetation productivity there are found. The objectives of this paper therefore were to better understand the impacts of climate change on vegetation productivity in Alberta using the NDVI and to provide a reference for policy makers and stakeholders. We investigated the following: (1) the variations of Alberta's smoothed NDVI (sNDVI, with noise eliminated compared to the raw NDVI) and two climatic variables (precipitation and temperature) using the non-parametric Mann-Kendall monotonic test and Thiel-Sen's slope; (2) the relationships between sNDVI and climatic variables, and the potential predictability of sNDVI using climatic variables as predictors based on two predictive models; and (3) the use of a linear regression model and an artificial neural network calibrated by the genetic algorithm (ANN-GA) to estimate Alberta's sNDVI using precipitation and temperature as predictors. The results showed that (1) the monthly sNDVI has increased during the past 30 years and a lengthened growing season was detected; (2) vegetation productivity in northern Alberta was mainly temperature driven and the vegetation in southern Alberta was predominantly precipitation driven for the period 1982-2011; and (3) better performance of the sNDVI-climate relationships was obtained by the nonlinear model (ANN-GA) than by the linear (regression) model. Similar results detected in both monthly and summer sNDVI prediction using climatic variables as predictors revealed the applicability of two models for

  14. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

    In recent years, fractional order models have been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution, so the order of the fractional order model varies with SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results show that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reactions reach equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results show that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
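    As an illustration of the final step only (calibration values below are invented; this is not the paper's iterative algorithm): once a monotone order-SOC calibration curve has been identified, a resting-state SOC estimate amounts to inverting that one-to-one mapping, e.g.:

        import numpy as np

        # Hypothetical calibration: fractional order identified at known SOC values
        soc_grid   = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
        order_grid = np.array([0.42, 0.47, 0.53, 0.60, 0.68])  # monotone in SOC

        def soc_from_order(alpha):
            # Invert the monotone order-SOC relationship by interpolation.
            return float(np.interp(alpha, order_grid, soc_grid))

        print(soc_from_order(0.55))   # SOC estimate for an identified order of 0.55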

  15. Resonant scattering of energetic electrons in the plasmasphere by monotonic whistler-mode waves artificially generated by ionospheric modification

    Directory of Open Access Journals (Sweden)

    S. S. Chang

    2014-05-01

    Modulated high-frequency (HF) heating of the ionosphere provides a feasible means of artificially generating extremely low-frequency (ELF)/very low-frequency (VLF) whistler waves, which can leak into the inner magnetosphere and contribute to resonant interactions with high-energy electrons in the plasmasphere. By ray tracing the magnetospheric propagation of ELF/VLF emissions artificially generated at low invariant latitudes, we evaluate the relativistic electron resonant energies along the ray paths and show that propagating artificial ELF/VLF waves can resonate with electrons from ~100 keV to ~10 MeV. We further implement test particle simulations to investigate the effects of resonant scattering of energetic electrons due to triggered monotonic/single-frequency ELF/VLF waves. The results indicate that within the period of a resonance timescale, changes in electron pitch angle and kinetic energy are stochastic, and the overall effect is cumulative; that is, the changes averaged over all test electrons increase monotonically with time. The localized rates of wave-induced pitch-angle scattering and momentum diffusion in the plasmasphere are analyzed in detail for artificially generated ELF/VLF whistlers with an observable in situ amplitude of ~10 pT. While the local momentum diffusion of relativistic electrons is small, with a rate of ~10⁻⁷ s⁻¹, the local pitch-angle scattering can be intense near the loss cone, with a rate of ~10⁻⁴ s⁻¹. Our investigation further supports the feasibility of artificial triggering of ELF/VLF whistler waves for the removal of high-energy electrons at lower L shells within the plasmasphere. Moreover, our test particle simulation results show quantitatively good agreement with quasi-linear diffusion coefficients, confirming the applicability of both methods to evaluate the resonant diffusion effect of artificially generated ELF/VLF whistlers.

  16. Comparison of the monotonic and cyclic mechanical properties of ultrafine-grained low carbon steels processed by continuous and conventional equal channel angular pressing

    International Nuclear Information System (INIS)

    Niendorf, T.; Böhner, A.; Höppel, H.W.; Göken, M.; Valiev, R.Z.; Maier, H.J.

    2013-01-01

    Highlights: ► UFG low-carbon steel was successfully processed by continuous ECAP-Conform. ► Continuously processed UFG steel shows high performance. ► High monotonic strength and good ductility. ► Microstructural stability under cyclic loading in the LCF regime. ► Established concepts can be used for predicting the properties. - Abstract: In the current study the mechanical properties of ultra-fine grained low carbon steel processed by conventional equal channel angular pressing and a continuous equal channel angular pressing-Conform process were investigated. Both monotonic and cyclic properties were determined for the steel in either condition and found to be very similar. Microstructural analyses employing electron backscatter diffraction were used for comparison of the low carbon steels processed by either technique. Both steels feature very similar grain sizes and misorientation angle distributions. With respect to fatigue life, the low carbon steel investigated shows properties similar to ultra-fine grained interstitial-free steel processed by conventional equal channel angular pressing, and thus the general fatigue behavior can be addressed following the same routines as proposed for interstitial-free steel. In conclusion, the continuously processed material exhibits very promising properties, and thus equal channel angular pressing-Conform is a promising tool for the production of ultra-fine grained steels in large quantities.

  17. Focus Article: Oscillatory and long-range monotonic exponential decays of electrostatic interactions in ionic liquids and other electrolytes: The significance of dielectric permittivity and renormalized charges

    Science.gov (United States)

    Kjellander, Roland

    2018-05-01

    A unified treatment of oscillatory and monotonic exponential decays of interactions in electrolytes is displayed, which highlights the role of dielectric response of the fluid in terms of renormalized (effective) dielectric permittivity and charges. An exact, but physically transparent statistical mechanical formalism is thereby used, which is presented in a systematic, pedagogical manner. Both the oscillatory and monotonic behaviors are given by an equation for the decay length of screened electrostatic interactions that is very similar to the classical expression for the Debye length. The renormalized dielectric permittivities, which have similar roles for electrolytes as the dielectric constant has for pure polar fluids, consist in general of several entities with different physical meanings. They are connected to dielectric response of the fluid on the same length scale as the decay length of the screened interactions. Only in cases where the decay length is very long, these permittivities correspond approximately to a dielectric response in the long-wavelength limit, like the dielectric constant for polar fluids. Experimentally observed long-range exponentially decaying surface forces are analyzed as well as the oscillatory forces observed for short to intermediate surface separations. Both occur in some ionic liquids and in concentrated as well as very dilute electrolyte solutions. The coexisting modes of decay are in general determined by the bulk properties of the fluid and not by the solvation of the surfaces; in the present cases, they are given by the behavior of the screened Coulomb interaction of the bulk fluid. The surface-fluid interactions influence the amplitudes and signs or phases of the different modes of the decay, but not their decay lengths and wavelengths. The similarities between some ionic liquids and very dilute electrolyte solutions as regards both the long-range monotonic and the oscillatory decays are analyzed.

  18. Length and volume of morphologically normal kidneys in Korean Children: Ultrasound measurement and estimation using body size

    International Nuclear Information System (INIS)

    Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung; Kim, Ji Eun

    2013-01-01

    To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate the renal sizes using the anthropometric measurements. We examined 794 Korean children under 18 years of age, including a total of 394 boys and 400 girls without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 x L x D x W (cm³). Anthropometric indices including height (cm), weight (kg) and body mass index (kg/m²) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate the renal length and the volume with those anthropometric indices that were most strongly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R², 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R², 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R², 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 x Height (± 1.135) and = 2.374 + 0.047 x Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 x Weight (± 15.920) and = 7.303 + 1.532 x Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established from Korean children and simple equations between them have been developed for use in clinical practice.
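    For quick reference, the reported regressions translate directly into code; the coefficients below are exactly those quoted in the abstract, and the commented ± values are the quoted 95% ranges:

        def renal_length_cm(height_cm):
            # Estimated renal length (cm) from height; right and left kidneys.
            right = 2.383 + 0.045 * height_cm   # +/- 1.135 (95% range)
            left  = 2.374 + 0.047 * height_cm   # +/- 1.173
            return right, left

        def renal_volume_cm3(weight_kg):
            # Estimated renal volume (cm^3) from weight; right and left kidneys.
            right = 7.941 + 1.246 * weight_kg   # +/- 15.920
            left  = 7.303 + 1.532 * weight_kg   # +/- 18.704
            return right, left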

  19. Length and volume of morphologically normal kidneys in Korean Children: Ultrasound measurement and estimation using body size

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung [Dept. of Radiology and Research Institute of Radiological Science, Severance Children's Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Kim, Ji Eun [Biostatistics Collaboration Unit, Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2013-08-15

    To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate the renal sizes using the anthropometric measurements. We examined 794 Korean children under 18 years of age, including a total of 394 boys and 400 girls without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 x L x D x W (cm³). Anthropometric indices including height (cm), weight (kg) and body mass index (kg/m²) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate the renal length and the volume with those anthropometric indices that were most strongly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R², 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R², 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R², 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 x Height (± 1.135) and = 2.374 + 0.047 x Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 x Weight (± 15.920) and = 7.303 + 1.532 x Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established from Korean children and simple equations between them have been developed for use in clinical practice.

  20. Depth Estimates for Slingram Electromagnetic Anomalies from Dipping Sheet-like Bodies by the Normalized Full Gradient Method

    Science.gov (United States)

    Dondurur, Derman

    2005-11-01

    The Normalized Full Gradient (NFG) method was proposed in the mid 1960s and was generally used for the downward continuation of the potential field data. The method eliminates the side oscillations which appeared on the continuation curves when passing through anomalous body depth. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Some experiments were performed on the theoretical Slingram model anomalies in a free space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from NFG fields of the theoretical anomalies that the NFG sections yield the depth information of top of the conductor at low harmonic numbers. The NFG sections consisted of two main local maxima located at both sides of the central negative Slingram anomalies. It is concluded that these two maxima also locate the maximum anomaly gradient points, which indicates the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.

  1. Cost Estimating Handbook for Environmental Restoration

    International Nuclear Information System (INIS)

    1993-01-01

    Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: an entirely new set of specialized expressions and terminology; a higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work; a higher than normal percentage of indirect costs to the total estimated cost, due primarily to record keeping, special training, liability, and indemnification; and more than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements, since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: the basis for accurate, consistent, and traceable baselines; sound methodologies, guidelines, and estimating formats; and sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.

  2. Local Monotonicity and Isoperimetric Inequality on Hypersurfaces in Carnot groups

    Directory of Open Access Journals (Sweden)

    Francesco Paolo Montefalcone

    2010-12-01

    Let G be a k-step Carnot group of homogeneous dimension Q. We shall present some of the results recently obtained in [32] and, in particular, an intrinsic isoperimetric inequality for a C²-smooth compact hypersurface S with boundary ∂S. We stress that S and ∂S are endowed with homogeneous measures which are actually equivalent to the intrinsic (Q−1)-dimensional and (Q−2)-dimensional Hausdorff measures with respect to a given homogeneous metric ϱ on G. This result generalizes a classical inequality, involving the mean curvature of the hypersurface, proven independently by Michael and Simon [29] and Allard [1]. One may also deduce some related Sobolev-type inequalities. The strategy of the proof is inspired by the classical one and is discussed in the first section. After recalling some preliminary notions about Carnot groups, we begin by proving a linear isoperimetric inequality. The second step is a local monotonicity formula. We then achieve the proof by a covering argument. We stress, however, that there are many differences due to our non-Euclidean setting. Some of the tools developed ad hoc are, in order, a "blow-up" theorem, which holds true also for characteristic points, and a smooth coarea formula for the HS-gradient. Other tools are the horizontal integration by parts formula and the first variation formula for the H-perimeter measure, already developed in [30, 31] and then generalized to hypersurfaces having non-empty characteristic set in [32]. These results can be useful in the study of minimal and constant horizontal mean curvature hypersurfaces in Carnot groups.

  3. The Cognitive Social Network in Dreams: Transitivity, Assortativity, and Giant Component Proportion Are Monotonic.

    Science.gov (United States)

    Han, Hye Joo; Schweickert, Richard; Xi, Zhuangzhuang; Viau-Quesnel, Charles

    2016-04-01

    For five individuals, a social network was constructed from a series of his or her dreams. Three important network measures were calculated for each network: transitivity, assortativity, and giant component proportion. These were monotonically related; over the five networks as transitivity increased, assortativity increased and giant component proportion decreased. The relations indicate that characters appear in dreams systematically. Systematicity likely arises from the dreamer's memory of people and their relations, which is from the dreamer's cognitive social network. But the dream social network is not a copy of the cognitive social network. Waking life social networks tend to have positive assortativity; that is, people tend to be connected to others with similar connectivity. Instead, in our sample of dream social networks assortativity is more often negative or near 0, as in online social networks. We show that if characters appear via a random walk, negative assortativity can result, particularly if the random walk is biased as suggested by remote associations. Copyright © 2015 Cognitive Science Society, Inc.

  4. Mechanical characteristics under monotonic and cyclic simple shear of spark plasma sintered ultrafine-grained nickel

    International Nuclear Information System (INIS)

    Dirras, G.; Bouvier, S.; Gubicza, J.; Hasni, B.; Szilagyi, T.

    2009-01-01

    The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε VM = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.

  5. Mechanical characteristics under monotonic and cyclic simple shear of spark plasma sintered ultrafine-grained nickel

    Energy Technology Data Exchange (ETDEWEB)

    Dirras, G., E-mail: dirras@univ-paris13.fr [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Bouvier, S. [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Gubicza, J. [Department of Materials Physics, Eoetvoes Lorand University, P.O.B. 32, Budapest H-1518 (Hungary); Hasni, B. [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Szilagyi, T. [Department of Materials Physics, Eoetvoes Lorand University, P.O.B. 32, Budapest H-1518 (Hungary)

    2009-11-25

    The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε VM = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.

  6. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes consist of functions for estimating doses not only under normal conditions but also in the case of accidents, when radionuclides may leak and spread into the environment by air diffusion, the user needs to have special knowledge and experience. In this presentation, with a view to preparing a method by which a person in charge of transportation can calculate doses under normal conditions, we describe how the main parameters upon which the dose depends were extracted and how the dose for a unit of transportation was estimated. (J.P.N.)

  7. Austenite Grain Size Estimation from Chord Lengths of Logarithmic-Normal Distribution

    Directory of Open Access Journals (Sweden)

    Adrian H.

    2017-12-01

    A linear section through a polyhedral material microstructure produces a system of chords. The mean length of the chords is the linear grain size of the microstructure. For the prior austenite grains of low-alloy structural steels, the chord length is a random variable with a gamma or logarithmic-normal distribution. Statistical grain size estimation belongs to the problems of quantitative metallography. The so-called point estimation is a well-known procedure. The interval estimation (grain size confidence interval) for the gamma distribution was given elsewhere; for the logarithmic-normal distribution it is the subject of the present contribution. The statistical analysis is analogous to that for the gamma distribution.
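    As an aside (the paper's own interval derivation is not reproduced here), a standard large-sample interval for the mean of log-normally distributed chord lengths (i.e., the linear grain size) is the Cox method, sketched below:

        import numpy as np
        from scipy.stats import norm

        def lognormal_mean_ci(chords, alpha=0.05):
            # Cox-method confidence interval for the mean of log-normally
            # distributed chord lengths (the linear grain size).
            y = np.log(np.asarray(chords, dtype=float))
            n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
            z = norm.ppf(1.0 - alpha / 2.0)
            center = ybar + s2 / 2.0
            half = z * np.sqrt(s2 / n + s2**2 / (2.0 * (n - 1)))
            return np.exp(center - half), np.exp(center + half)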

  8. Quantum parameter estimation in the Unruh–DeWitt detector model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Xiang, E-mail: xhao@phas.ubc.ca [School of Mathematics and Physics, Suzhou University of Science and Technology, Suzhou, Jiangsu 215011 (China); Pacific Institute of Theoretical Physics, Department of Physics and Astronomy, University of British Columbia, 6224 Agriculture Rd., Vancouver B.C., Canada V6T 1Z1 (Canada); Wu, Yinzhong [School of Mathematics and Physics, Suzhou University of Science and Technology, Suzhou, Jiangsu 215011 (China)

    2016-09-15

    Relativistic effects on the precision of quantum metrology for particle detectors, such as two-level atoms, are studied. The quantum Fisher information is used to estimate the phase sensitivity of atoms in non-inertial motion or in gravitational fields. The Unruh–DeWitt model is applicable to the investigation of the dynamics of a uniformly accelerated atom weakly coupled to a massless scalar vacuum field. When a measuring device is in the same relativistic motion as the atom, the dynamical behavior of the quantum Fisher information as a function of Rindler proper time is obtained. It is found that a monotonic decrease in phase sensitivity is characteristic of the dynamics of relativistic quantum estimation. The origin of the decay of the quantum Fisher information is the thermal bath that the accelerated detector finds itself in due to the Unruh effect. To improve relativistic quantum metrology, we reasonably take into account two reflecting plane boundaries perpendicular to each other. The presence of the reflecting boundary can shield the detector from the thermal bath in some sense.

  9. Estimated Trans-Lamina Cribrosa Pressure Differences in Low-Teen and High-Teen Intraocular Pressure Normal Tension Glaucoma: The Korean National Health and Nutrition Examination Survey

    OpenAIRE

    Lee, Si Hyung; Kwak, Seung Woo; Kang, Eun Min; Kim, Gyu Ah; Lee, Sang Yeop; Bae, Hyoung Won; Seong, Gong Je; Kim, Chan Yun

    2016-01-01

    Background: To investigate the association between estimated trans-lamina cribrosa pressure difference (TLCPD) and prevalence of normal tension glaucoma (NTG) with low-teen and high-teen intraocular pressure (IOP) using a population-based study design. Methods: A total of 12,743 adults (≥ 40 years of age) who participated in the Korean National Health and Nutrition Examination Survey (KNHANES) from 2009 to 2012 were included. Using a previously developed formula, cerebrospinal fluid pressure (C...

  10. Quantitative proteome profiling of normal human circulating microparticles

    DEFF Research Database (Denmark)

    Østergaard, Ole; Nielsen, Christoffer T; Iversen, Line V

    2012-01-01

    Circulating microparticles (MPs) are produced as part of normal physiology. Their numbers, origin, and composition change in pathology. Despite this, the normal MP proteome has not yet been characterized with standardized high-resolution methods. We here quantitatively profile the normal MP … proteome using nano-LC-MS/MS on an LTQ-Orbitrap with optimized sample collection, preparation, and analysis of 12 different normal samples. Analytical and procedural variation were estimated in triply processed samples analyzed in triplicate from two different donors. Label-free quantitation was validated … by the correlation of cytoskeletal protein intensities with MP numbers obtained by flow cytometry. Finally, the validity of using pooled samples was evaluated using overlap protein identification numbers and multivariate data analysis. Using conservative parameters, 536 different unique proteins were quantitated …

  11. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    Science.gov (United States)

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decrease in sequencing costs have made RNA-Seq a widely used technique to quantify gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for the selection of the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq of 35- and 76-nucleotide sequences produced in the MAQC project and simulation reads. Reads were mapped to the human genome obtained from the UCSC Genome Browser Database. For precise evaluation, we investigated the Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that out of the eight non-abundance estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of a 35-nucleotide sequence, RPKM showed the highest correlation results, but for RNA-Seq of a 76-nucleotide sequence, it showed the lowest correlation among the methods. ERPKM did not improve on RPKM. Between the two abundance estimation normalization methods, for RNA-Seq of a 35-nucleotide sequence, higher correlation was obtained with Sailfish than with RSEM, which was in turn better than not using abundance estimation methods. However, for RNA-Seq of a 76-nucleotide sequence, the results achieved by RSEM were similar to those without abundance estimation methods, and were much better than with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers but did not improve normalization results. Spearman correlation analysis revealed that RC, UQ
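    As a point of reference for the methods compared above, RPKM is straightforward to compute from raw counts; a minimal sketch (variable names and example values are ours):

        import numpy as np

        def rpkm(counts, gene_lengths_bp):
            # Reads Per Kilobase of transcript per Million mapped reads.
            counts = np.asarray(counts, dtype=float)
            per_million = counts.sum() / 1e6            # library size in millions
            kilobases = np.asarray(gene_lengths_bp, dtype=float) / 1e3
            return counts / per_million / kilobases

        print(rpkm([500, 1200], [2000, 4000]))   # RPKM for two illustrative genes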

  12. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε₀ of the base sample of volume V₀. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V₀, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate ε = ½[ε_h + ε_L] ± ½[ε_h − ε_L], where the uncertainty Δε = ½[ε_h − ε_L] brackets the limits of the maximum possible error. Both ε_h and ε_L diverge from ε₀ as V deviates from V₀, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
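    The bracketing formula is elementary to apply; given high and low extrapolated efficiencies, the reported estimate and its maximum-error bound follow directly (the numbers below are illustrative, not from the report):

        def bracketed_efficiency(eps_high, eps_low):
            # eps = (eps_h + eps_L)/2 +/- (eps_h - eps_L)/2
            eps = 0.5 * (eps_high + eps_low)
            delta = 0.5 * (eps_high - eps_low)
            return eps, delta

        eps, delta = bracketed_efficiency(0.052, 0.041)   # hypothetical values
        print(f"efficiency = {eps:.4f} +/- {delta:.4f}")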

  13. Estimation of a noise level using coarse-grained entropy of experimental time series of internal pressure in a combustion engine

    International Nuclear Information System (INIS)

    Litak, Grzegorz; Taccani, Rodolfo; Radu, Robert; Urbanowicz, Krzysztof; HoIyst, Janusz A.; Wendeker, MirosIaw; Giadrossi, Alessandro

    2005-01-01

    We report our results on non-periodic experimental time series of pressure in a single cylinder spark ignition engine. The experiments were performed for different levels of loading. We estimate the noise level in the internal pressure by calculating the coarse-grained entropy from variations of maximal pressures in successive cycles. The results show that the dynamics of the combustion is a non-linear multidimensional process mediated by noise. Our results also show that the so-defined noise level in the internal pressure is not a monotonic function of loading.

  14. Rates of convergence and asymptotic normality of curve estimators for ergodic diffusion processes

    NARCIS (Netherlands)

    J.H. van Zanten (Harry)

    2000-01-01

    For ergodic diffusion processes, we study kernel-type estimators for the invariant density, its derivatives and the drift function. We determine rates of convergence and find the joint asymptotic distribution of the estimators at different points.

  15. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data sufficiently well. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment is devoted to investigating the fitting ability of the SRGMs with normal distribution using 16 types of failure time data collected in real software projects.
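    For intuition (this is not the paper's EM algorithm): a normal-type SRGM has a mean value function of the form m(t) = ω Φ((t − μ)/σ), where Φ is the standard normal CDF and ω is the expected total number of failures. The sketch below fits it by simple least squares on invented cumulative failure counts, a cruder stand-in for the EM-based maximum likelihood procedure:

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import curve_fit

        def mean_value(t, omega, mu, sigma):
            # Expected cumulative number of failures by time t.
            return omega * norm.cdf((t - mu) / sigma)

        t = np.arange(1, 21, dtype=float)              # e.g., weeks of testing
        cum_failures = np.array([1, 2, 4, 7, 11, 16, 22, 28, 34, 40,   # invented
                                 45, 49, 52, 55, 57, 58, 59, 60, 60, 61], float)

        params, _ = curve_fit(mean_value, t, cum_failures, p0=(60.0, 8.0, 4.0))
        omega, mu, sigma = params   # omega ~ expected total failures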

  16. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V

    2002-12-01

    The aim of this thesis is to build a statistical model of the distribution of oil and gas field sizes in a given sedimentary basin, covering both the fields that exist in the subsoil and those which have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials using penalized maximum likelihood techniques, provides estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample from an unknown population with a Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one which is 'size-biased'. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. An arbitrary partition of the size interval being set (called a model), the analytical solutions of likelihood maximization enable us to estimate both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We add a monotonicity constraint on the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and Hellinger risks of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)

  17. Analysis of the normal optical, Michel and molecular potentials on ...

    Indian Academy of Sciences (India)

    Pramana - Journal of Physics, June 2016, pp. 1275-1286. ... the levels are obtained for the three optical potentials to estimate the quality ... The experimental angular distribution data for the 40Ca(6Li, d)44Ti reaction ... analysed using the normal optical, Michel and molecular potentials within the framework.

  18. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either normal or logistic population or a fusion of both distributions and the estimates are computed ...

  19. Inelastic behavior of cold-formed braced walls under monotonic and cyclic loading

    Science.gov (United States)

    Gerami, Mohsen; Lotfi, Mohsen; Nejat, Roya

    2015-06-01

    The ever-increasing need for housing generated the search for new and innovative building methods to increase speed and efficiency and enhance quality. One method is the use of light, thin steel profiles as load-bearing elements, with different solutions for interior and exterior cladding. Due to the increase in CFS construction in low-rise residential structures in the modern construction industry, there is an increased demand for inelastic performance analysis of CFS walls. In this study, the nonlinear behavior of cold-formed steel frames with various bracing arrangements, including cross, chevron and k-shaped straps, was evaluated under cyclic and monotonic loading using nonlinear finite element analysis methods. In total, 68 frames with different bracing arrangements and different dimension ratios were studied. Also, seismic parameters including the resistance reduction factor, ductility and the force reduction factor due to ductility were evaluated for all samples. On the other hand, the seismic response modification factor was calculated for these systems. It was concluded that the highest response modification factor, with a value of 3.14, would be obtained for walls with bilateral cross bracing systems. In all samples, on increasing the distance between straps, shear strength increased, and the shear strength of the wall with a bilateral bracing system was 60% greater than that with a lateral bracing system.

  20. Pharmacokinetics and normal organ dosimetry following intraperitoneal rhenium-186-labeled monoclonal antibody

    International Nuclear Information System (INIS)

    Breitz, H.B.; Durham, J.S.; Fisher, D.R.

    1995-01-01

    Pharmacokinetics, biodistribution and radiation dose estimates following intraperitoneal administration of a 186Re-labeled murine antibody, NR-LU-10, were assessed in 27 patients with advanced ovarian cancer. Quantitative gamma camera imaging and gamma counting of serum and intraperitoneal fluid radioactivity were used to obtain data for dosimetry estimation. The MIRD intraperitoneal model was used to estimate dose to normal organs from radioactivity within the peritoneal cavity. The absorbed dose to normal peritoneum was estimated in two ways: from the gamma camera activity and from peritoneal fluid samples. Serum activity peaked at 44 hr and depended on the concentration of radioactivity in the peritoneal fluid. Mean cumulative urinary excretion of 186Re was 50% by 140 hr. Estimates of radiation absorbed dose to normal organs in rad/mCi administered (mean ± s.d.) were: whole body 0.7 ± 0.3; marrow 0.4 ± 0.1; liver 1.9 ± 0.9; lungs 1.3 ± 0.7; kidneys 0.2 ± 0.2; intestine 0.2 ± 0.2. Peritoneal surface dose estimates varied depending on the volume of fluid infused and the method of dose determination. Using gamma camera data, the peritoneal dose ranged from 7 to 36 rad/mCi. Using peritoneal fluid sample data, the dose ranged from 2 to 25 rad/mCi. Significant myelosuppression was observed at marrow doses above 100 rad. Noninvasive methods of dose estimation for intraperitoneal administration of radioimmunoconjugates provide reasonable estimates when compared with previously described methods. 31 refs., 6 figs., 2 tabs

  1. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim

    2018-04-04

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature, but they are not accurate in the tail regions. These regions are of prime interest because small probability values must be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most existing approaches have focused on estimating the right-tail of the sum of log-normal random variables (RVs). Here, we instead consider the left-tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performance of the proposed estimator in comparison with existing ones.
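
    To make the difficulty concrete, the following minimal sketch (not the paper's estimator) computes the naive Monte Carlo estimate of the left-tail probability P(sum_i exp(X_i) <= gamma) for correlated Gaussian X; the mean vector, covariance matrix and threshold are illustrative assumptions. The relative error of this crude estimator degrades as gamma shrinks, which is precisely what the mean-shifting importance sampling and control variate construction is designed to avoid.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crude_mc_left_tail(mu, cov, gamma, n_samples=10**6):
        """Naive Monte Carlo estimate of P(sum_i exp(X_i) <= gamma),
        X ~ N(mu, cov). The relative error blows up as gamma -> 0."""
        x = rng.multivariate_normal(mu, cov, size=n_samples)
        s = np.exp(x).sum(axis=1)              # correlated log-normal sum
        hits = (s <= gamma)
        p_hat = hits.mean()
        rel_err = hits.std(ddof=1) / max(p_hat, 1e-300) / np.sqrt(n_samples)
        return p_hat, rel_err

    # Illustrative parameters (not from the paper)
    mu = np.zeros(4)
    cov = 0.25 * np.eye(4) + 0.05              # positive definite by construction
    print(crude_mc_left_tail(mu, cov, gamma=2.0))
    ```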

  2. Comparative analysis of old-age mortality estimations in Africa.

    Directory of Open Access Journals (Sweden)

    Eran Bendavid

    Full Text Available Survival to old ages is increasing in many African countries. While demographic tools for estimating mortality up to age 60 have improved greatly, mortality patterns above age 60 rely on models based on little or no demographic data. These estimates are important for social planning and demographic projections. We provide direct estimations of older-age mortality using survey data. Since 2005, nationally representative household surveys in ten sub-Saharan countries record counts of living and recently deceased household members: Burkina Faso, Côte d'Ivoire, Ethiopia, Namibia, Nigeria, Swaziland, Tanzania, Uganda, Zambia, and Zimbabwe. After accounting for age heaping using multiple imputation, we use this information to estimate the probability of death in 5-year intervals (5qx). We then compare our 5qx estimates to those provided by the World Health Organization (WHO) and the United Nations Population Division (UNPD) to estimate the differences in mortality estimates, especially among individuals older than 60 years. We obtained information on 505,827 individuals (18.4% over age 60, 1.64% deceased). WHO and UNPD mortality models match our estimates closely up to age 60 (mean difference in probability of death -1.1%). However, mortality probabilities above age 60 are lower in our estimations than in either the WHO or UNPD models. The mean difference between our sample and the WHO is 5.9% (95% CI 3.8-7.9%), and between our sample and the UNPD it is 13.5% (95% CI 11.6-15.5%). Regardless of the comparator, the difference in mortality estimations rises monotonically above age 60. Mortality estimations above age 60 in ten African countries exhibit large variations depending on the method of estimation. The observed patterns suggest the possibility that survival among adults older than age 60 in some African countries is better than previously thought. Improving the quality and coverage of vital information in developing countries will become increasingly important with

  3. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches in that it involves a novel non-monotone line search procedure based on the use of a standard penalty method as the merit function for the line search. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented, with capabilities for large scale application. The case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.
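
    For readers unfamiliar with non-monotone line searches, the sketch below shows the generic Grippo-style acceptance rule, in which a step is tested against the maximum of the last M objective values rather than the current one. This is a minimal illustration of the general idea only; the paper's actual scheme uses discretised penalty parameters with per-order memory lists, which is not reproduced here, and all parameter values are assumptions.

    ```python
    import numpy as np

    def nonmonotone_armijo(f, grad, x, d, history, M=5, c1=1e-4, beta=0.5, max_iter=50):
        """Grippo-style non-monotone Armijo backtracking: accept a step t when
        f(x + t d) <= max(last M f-values) + c1 * t * grad(x).d,
        which lets iterates climb locally and escape shallow minima."""
        ref = max(history[-M:])               # non-monotone reference value
        g_dot_d = float(grad(x) @ d)
        t = 1.0
        for _ in range(max_iter):
            if f(x + t * d) <= ref + c1 * t * g_dot_d:
                return t
            t *= beta                          # backtrack
        return t

    # Toy usage: one descent step on f(x) = ||x||^2
    f = lambda x: float(x @ x)
    grad = lambda x: 2 * x
    x = np.array([1.0, -2.0])
    hist = [f(x)]
    t = nonmonotone_armijo(f, grad, x, d=-grad(x), history=hist)
    x_new = x + t * (-grad(x))
    ```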

  4. Non-monotonic compositional dependence of the isothermal bulk modulus of the (Mg1-xMnx)Cr2O4 spinel solid solutions, and its origin and implication

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-12-01

    Full Text Available The compressibility of the spinel solid solutions (Mg1-xMnx)Cr2O4, with x = 0.00(0), 0.20(0), 0.44(2), 0.61(2), 0.77(2) and 1.00(0), has been investigated by using a diamond-anvil cell coupled with synchrotron X-ray radiation up to ∼10 GPa (ambient T). The second-order Birch–Murnaghan equation of state was used to fit the P-V data, yielding the following values for the isothermal bulk moduli (KT): 198.2(36), 187.8(87), 176.1(32), 168.7(52), 192.9(61) and 199.2(61) GPa for the spinel solid solutions with x = 0.00(0), 0.20(0), 0.44(2), 0.61(2), 0.77(2) and 1.00(0), respectively (KT′ fixed at 4). The KT value of the MgCr2O4 spinel is in good agreement with existing experimental determinations and theoretical calculations. The correlation between KT and x is not monotonic: the KT values are similar at both ends of the binary MgCr2O4-MnCr2O4 but decrease towards the middle. This non-monotonic correlation can be described by two equations, KT = -49.2(11)x + 198.0(4) (x ≤ ∼0.6) and KT = 92(41)x + 115(30) (x ≥ ∼0.6), and can be explained by the evolution of the average bond lengths of the tetrahedra and octahedra of the spinel solid solutions. Additionally, the relationship between the thermal expansion coefficient and composition is correspondingly reinterpreted, the continuous deformation of the oxygen array is demonstrated, and the evolution of the component polyhedra is discussed for this series of spinel solid solutions. Our results suggest that the correlation between the KT and the composition of a solid solution series may be complicated, and great care should be taken when estimating the KT of intermediate compositions from the KT of the end-members.

  5. Quantifying lead-time bias in risk factor studies of cancer through simulation.

    Science.gov (United States)

    Jansen, Rick J; Alexander, Bruce H; Anderson, Kristin E; Church, Timothy R

    2013-11-01

    Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Surveillance Epidemiology and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence, and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust observed OR in the actual study. For this particular study design, as average preclinical duration increased, the bias in the total-physical activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null. The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies. Copyright © 2013 Elsevier Inc. All rights reserved.
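
    As a hedged illustration of one ingredient of this simulation, the snippet below draws preclinical sojourn times from a log-normal distribution, the duration model named in the abstract; the median duration and log-scale spread are hypothetical values, not the SEER-based parameters used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def preclinical_durations(median_years, sigma_log, n=100_000):
        """Draw preclinical sojourn times from a log-normal distribution,
        the duration model used in the simulation described above."""
        mu_log = np.log(median_years)
        return rng.lognormal(mean=mu_log, sigma=sigma_log, size=n)

    # Illustrative: median 2-year sojourn; screening advances diagnosis by
    # (on average) some fraction of this duration, which is the lead time.
    d = preclinical_durations(median_years=2.0, sigma_log=0.5)
    print(d.mean(), np.median(d))
    ```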

  6. Growth rates and age at adult size of loggerhead sea turtles (Caretta caretta) in the Mediterranean Sea, estimated through capture-mark-recapture records

    Directory of Open Access Journals (Sweden)

    Paolo Casale

    2009-09-01

    Full Text Available Growth rates of the juvenile phase of loggerhead turtles (Caretta caretta) were estimated for the first time in the Mediterranean Sea from capture-mark-recapture records. Thirty-eight turtles were released from Italian coasts and re-encountered after 1.0-10.9 years in the period 1986-2007. Their mean CCL (curved carapace length) ranged from 32.5 to 82.0 cm, and they showed variable growth rates, ranging from 0 to 5.97 cm/yr (mean: 2.5). The association between annual growth rate and three covariates (mean year, mean size and time interval) was investigated through a non-parametric modelling approach. Only mean size showed a clear effect on growth rate, described by a monotonically declining curve. The variability indicates that factors not included in the model, probably individual-related ones, have an important effect on growth rates. Based on the monotonically decreasing growth function, which indicates no growth spurt, a von Bertalanffy growth function was used to estimate the time required by turtles to grow within the observed size range. The results indicate that turtles would take 16-28 years to reach 66.5-84.7 cm CCL, the average nesting female sizes observed at the most important Mediterranean nesting sites, which can be considered an approximation of the size at maturity.
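
    For readers who want to reproduce this kind of age calculation, the von Bertalanffy growth function can be inverted for age in closed form. The sketch below uses illustrative parameter values, not the authors' fitted ones.

    ```python
    import numpy as np

    def vbgf_length(t, L_inf, k, t0=0.0):
        """von Bertalanffy growth: L(t) = L_inf * (1 - exp(-k (t - t0)))."""
        return L_inf * (1.0 - np.exp(-k * (t - t0)))

    def vbgf_age(L, L_inf, k, t0=0.0):
        """Invert the VBGF to get the age at which length L is reached."""
        return t0 - np.log(1.0 - L / L_inf) / k

    # Illustrative parameters only (not the paper's fitted values):
    L_inf, k = 90.0, 0.08            # asymptotic CCL in cm, growth rate per year
    print(vbgf_age(66.5, L_inf, k))  # ~17 years to reach 66.5 cm CCL
    ```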

  7. Gold in semen: Level in seminal plasma and spermatozoa of normal ...

    African Journals Online (AJOL)

    The study was conducted to determine the amount of gold in the semen of normal and different infertile conditions. Gold was estimated in normal (n=38) and pathological (n=86) samples by employing an Atomic Absorption Spectrophotometer. The gold level observed in seminal plasma was as follows: in normozoospermia (n=38) ...

  8. Uncertainties in estimating working level months

    International Nuclear Information System (INIS)

    Johnson, J.R.

    1978-11-01

    A statistical procedure is presented that can be used to estimate the number of Working Level (WL) measurements required to calculate the average WL to any required precision, at given confidence levels. The procedure assumes that the WL measurements have a normal distribution. WL measurements from Canadian uranium mines are used to illustrate a procedure for ensuring that estimated Working Level Months can be calculated to the required precision. An addendum reports the results of tests of normality of the WL data using the W-test and the Kolmogorov-Smirnov test. (author)
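
    Under the stated normality assumption, the required number of measurements follows from the usual half-width formula n = (z*sigma/E)^2. The sketch below is a minimal rendering of that calculation; the sigma and precision values are invented for illustration.

    ```python
    from math import ceil
    from statistics import NormalDist

    def n_measurements(sigma, half_width, confidence=0.95):
        """Number of WL measurements needed so the mean is estimated to within
        +/- half_width at the given confidence, assuming normally distributed
        measurements with known standard deviation sigma: n = (z*sigma/E)^2."""
        z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
        return ceil((z * sigma / half_width) ** 2)

    # Illustrative numbers: sigma = 0.3 WL, target precision +/- 0.05 WL
    print(n_measurements(sigma=0.3, half_width=0.05))   # -> 139
    ```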

  9. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities by the up-and-down method is examined. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...

  10. Sufficient Condition for Monotonicity in Constructing the Distribution Function With Bernoulli Scheme

    Directory of Open Access Journals (Sweden)

    Vedenyapin Aleksandr Dmitrievich

    2015-11-01

    Full Text Available This paper constructs the distribution function using the Bernoulli scheme and also corrects some mistakes made in article [2]: the function built in [2] need not be monotone, and some formulas need adjustment. The idea of the construction, as in [2], is based on the Cox-Ross-Rubinstein "binary market" model. The essence of the model is to divide time into N steps and to assume that the price of an asset at each step can move either up by a certain value with probability p, or down by a certain value with probability q = 1 - p; prices at step N can then take only a finite number of values. In the Cox-Ross-Rubinstein model, "success" or "failure" was a price change by some fixed value. Here, as "success" or "failure" at every step, we consider whether the change of the index value falls in the interval [r, S] or in the interval [I, r). A function P(r) is then introduced which, at any step, gives the probability of "success". The maximum index increase over the period [T, 2T] equals nS, and the maximum possible decrease equals nI. Let x ∈ [nI, nS]; this segment reflects every possible total change obtainable at the end of the period [T, 2T]. The inequality k ≥ (x - nI)/(S - I) gives the minimum number of successes needed for the total change to lie in [x, nS] when there are n - k decreases of the index value towards I. A function r(x, k_min) is then introduced, defined on (nI, nS], which guarantees that the total index change lies in [x, nS] when the success interval is [r(x, k_min), S] and the number of successes satisfies the inequality. The probability of k "successes" and n - k "failures" is calculated by Bernoulli's formula, where the probability of "success" is determined by the function P(r), and r is determined
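
    The computational core of the construction is the Bernoulli (binomial) probability of at least k_min successes in n steps. The sketch below implements the k_min inequality and the binomial tail under assumed values of n, I, S and a constant success probability p; in the paper, the success probability varies through P(r), which is not reproduced here.

    ```python
    from math import comb, ceil

    def min_successes(x, n, I, S):
        """Minimum number of 'successes' k so the total change can land in
        [x, n*S], per the inequality k >= (x - n*I) / (S - I)."""
        return max(0, ceil((x - n * I) / (S - I)))

    def prob_at_least(k, n, p):
        """Bernoulli-scheme tail: P(#successes >= k) in n independent trials."""
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    # Illustrative: n = 10 steps, per-step moves bounded by I = -1 and S = +2
    n, I, S, p = 10, -1.0, 2.0, 0.5
    k = min_successes(x=5.0, n=n, I=I, S=S)
    print(k, prob_at_least(k, n, p))
    ```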

  11. Response of skirted suction caissons to monotonic lateral loading in saturated medium sand

    Science.gov (United States)

    Li, Da-yong; Zhang, Yu-kun; Feng, Ling-yun; Guo, Yan-xue

    2014-08-01

    Monotonic lateral load model tests were carried out on steel skirted suction caissons embedded in saturated medium sand to study the bearing capacity. A three-dimensional continuum finite element model was developed with Z_SOIL software, calibrated against the experimental results, and used to investigate soil deformation and earth pressures on the skirted caissons, extending the model tests. The results show that the "skirted" structure significantly increases the lateral capacity and limits the deflection compared with regular suction caissons without a skirt at the same load level, making skirted caissons especially suitable for offshore wind turbines. In addition, appropriate determination of the rotation center plays a crucial role in calculating the lateral capacity with the analytical method. It was also found that the rotation center depends on the dimensions of the skirted suction caisson and the loading process: the rotation center moves upwards with increasing skirt width and length, moves downwards with increasing load, and remains constant once all the sand along the caisson wall yields. Its position is therefore too complex to fix simply at a specified fraction of the caisson length, as is commonly done for regular suction caissons.

  12. Measurement and evaluation of EDM bearing currents by the normalized Joule integral

    International Nuclear Information System (INIS)

    Vidmar, Gregor; Miljavec, Damijan; Agrež, Dušan

    2014-01-01

    Apparent current density is the most common criterion used in the literature to estimate bearing endangerment due to bearing currents. In the paper, a new criterion called the normalized Joule integral is proposed as a more reliable and accurate one. This approach is more general and gives good correlation between the current in the bypass bridge and the bearing current. Furthermore, it considers the whole current that causes bearing damage, not just its peak value. The choice of the normalized Joule integral is theoretically explained and supported by measurements and simulations. The levels of bearing endangerment related to the normalized Joule integral of bearing currents are estimated. (paper)
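
    The Joule integral of a sampled current pulse is straightforward to compute numerically. The sketch below evaluates the un-normalized integral of i(t)^2 dt by the trapezoidal rule; the pulse shape is synthetic, and the paper's normalization is not reproduced here.

    ```python
    import numpy as np

    def joule_integral(i, t):
        """Joule integral of a bearing-current pulse: integral of i(t)^2 dt,
        computed by the trapezoidal rule from sampled current i at times t.
        The paper's normalization constant is not reproduced here."""
        return np.trapz(np.asarray(i) ** 2, np.asarray(t))

    # Illustrative EDM-like pulse: 1 A peak decaying over ~1 microsecond
    t = np.linspace(0.0, 2e-6, 2001)
    i = 1.0 * np.exp(-t / 0.3e-6)
    print(joule_integral(i, t))   # in A^2 * s, proportional to discharge energy
    ```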

  13. Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.

    Science.gov (United States)

    Hui, Zhuo; Sankaranarayanan, Aswin C

    2017-10-01

    This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it requires neither iterative optimization techniques nor careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.
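
    The dictionary-based method itself is more involved, but the classical Lambertian special case conveys the per-pixel estimation idea: with known light directions, the albedo-scaled normal solves a linear least-squares problem. The sketch below is that simplified baseline, with made-up light directions and albedo, not the authors' algorithm.

    ```python
    import numpy as np

    def lambertian_photometric_stereo(I, L):
        """Simplified Lambertian baseline: given per-pixel intensities I (m,)
        under m known directional lights L (m, 3), solve I = L @ (rho * n)
        in the least-squares sense; the normal is the normalized solution."""
        g, *_ = np.linalg.lstsq(L, I, rcond=None)
        rho = np.linalg.norm(g)                 # albedo
        return g / rho, rho

    # Illustrative: a known normal rendered under 4 lights, then recovered
    n_true = np.array([0.0, 0.6, 0.8])
    L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 0, 1]], dtype=float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    I = 0.9 * np.clip(L @ n_true, 0, None)      # rho = 0.9, Lambertian shading
    print(lambertian_photometric_stereo(I, L))
    ```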

  14. Proportionate-type normalized least mean squares algorithms

    CERN Document Server

    Wagner, Kevin

    2013-01-01

    The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which attempt to estimate an unknown impulse response by adaptively giving gains proportionate to an estimate of the impulse response and the current measured error. These algorithms offer low computational complexity and fast convergence times for sparse impulse responses in network and acoustic echo cancellation applications. New PtNLMS algorithms are developed by choosing gains that optimize user-defined criteria, such as mean square error, at all times. PtNLMS algorithms ar
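
    A minimal sketch of one proportionate update of the PNLMS type is shown below; the step size, proportionality floor and regularization constants are illustrative assumptions, and published variants differ in how the per-tap gains are computed.

    ```python
    import numpy as np

    def pnlms_step(w, x, d, mu=0.5, rho=0.01, delta=1e-4):
        """One proportionate NLMS update: taps with larger magnitude get
        proportionately larger adaptation gains, which speeds convergence
        for sparse impulse responses."""
        e = d - w @ x                          # a priori error
        g = np.maximum(rho * max(delta, np.abs(w).max()), np.abs(w))
        g /= g.mean()                          # diagonal of the gain matrix
        w_new = w + mu * e * (g * x) / (x @ (g * x) + delta)
        return w_new, e

    # Toy identification of a sparse 8-tap response
    rng = np.random.default_rng(2)
    h = np.zeros(8); h[2] = 1.0; h[5] = -0.5   # sparse "unknown" system
    w = np.zeros(8)
    for _ in range(2000):
        x = rng.standard_normal(8)
        w, e = pnlms_step(w, x, d=h @ x)
    print(np.round(w, 3))                      # converges towards h
    ```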

  15. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there is no observed bladder cancer in an area.
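
    The SMR itself is a one-line computation, which also makes its failure mode easy to see: with no observed cases, the ratio collapses to zero regardless of the true risk. The counts below are invented for illustration.

    ```python
    import numpy as np

    def smr(observed, expected):
        """Standardized Morbidity Ratio per area: observed / expected counts.
        Unstable when expected counts are small (rare disease, small areas)."""
        return np.asarray(observed) / np.asarray(expected)

    # Illustrative counts for three areas (not Libyan registry data)
    O = np.array([4, 0, 12])
    E = np.array([2.5, 1.1, 9.8])
    print(smr(O, E))   # note the zero: SMR is 0 whenever no case is observed
    ```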

  16. Monotonic and cyclic bond behavior of confined concrete using NiTiNb SMA wires

    International Nuclear Information System (INIS)

    Choi, Eunsoo; Chung, Young-Soo; Kim, Yeon-Wook; Kim, Joo-Woo

    2011-01-01

    This study conducts bond tests of reinforced concrete confined by shape memory alloy (SMA) wires, which provide active and passive confinement of the concrete. The study uses NiTiNb SMA, which usually shows wide temperature hysteresis, an advantage for the application of shape memory effects. The aims are to investigate the behavior of the SMA wire under residual stress and the performance of SMA wire jackets in improving bond behavior through monotonic-loading tests. Cyclic bond tests are also conducted and the cyclic bond behavior analyzed. The use of SMA wire jackets transfers the bond failure from splitting to pull-out mode and satisfactorily increases bond strength and ductile behavior. The active confinement provided by the SMA plays the major role in providing external pressure on the concrete, because the developed passive confinement is much smaller than the active confinement. Under cyclic loading, slip and circumferential strain recover more at larger bond stress. This recovery of slip and circumferential strain is mainly due to the external pressure of the SMA wires, since cracked concrete cannot provide any elastic recovery.

  17. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as forwarded by its first proponents, together with its later development. It illustrates the intricacies of qualitative robustness and its relation to consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing examples from the literature and providing a new counter-example. At the end it presents useful finite-sample and simulated versions of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  18. Validity of the normal fetal weight curve estimated by ultrasound for diagnosis of neonatal weight

    Directory of Open Access Journals (Sweden)

    José Guilherme Cecatti

    2003-02-01

    Full Text Available OBJECTIVE: to evaluate the agreement between the ultrasound-estimated fetal weight (EFW) and neonatal weight, the performance of the normal EFW-for-gestational-age curve in diagnosing fetal/neonatal weight deviations, and associated factors. METHODS: 186 pregnant women who received care from November 1998 to January 2000 participated in the study, with ultrasound evaluation up to 3 days before delivery, determination of the EFW and the amniotic fluid index, and delivery at the institution. The EFW was calculated and classified according to the curve of normal EFW values as small for gestational age (SGA), adequate for gestational age (AGA), or large for gestational age (LGA); the same classification was applied to neonatal weight. The variability of the measurements and the degree of linear correlation between EFW and neonatal weight were calculated, as well as the sensitivity, specificity and predictive values of the normal EFW curve for the diagnosis of neonatal weight deviations. RESULTS: the difference between EFW and neonatal weight ranged from -540 to +594 g, with a mean of +47.1 g, and the two measurements showed a linear correlation coefficient of 0.94. The normal EFW curve had a sensitivity of 100% and a specificity of 90.5% in detecting SGA at birth, and of 94.4% and 92.8%, respectively, in detecting LGA, although the positive predictive values were low for both. CONCLUSIONS: the ultrasound estimate of fetal weight agreed with neonatal weight, overestimating it by only about 47 g, and the EFW curve performed well in the diagnostic screening of SGA and LGA newborns.

  19. Lifetime analysis of the ITER first wall under steady-state and off-normal loads

    International Nuclear Information System (INIS)

    Mitteau, R; Sugihara, M; Raffray, R; Carpentier-Chouchana, S; Merola, M; Pitts, R A; Labidi, H; Stangeby, P

    2011-01-01

    The lifetime of the beryllium armor of the ITER first wall is evaluated for normal and off-normal operation. For the individual events considered, the lifetime spans between 930 and 35×10⁶ discharges. The discrepancy between low and high estimates is caused by uncertainties about the behavior of the melt layer during off-normal events, variable plasma operation parameters and variability of the sputtering yields. These large uncertainties in beryllium armor loss estimates are a good example of the experimental nature of the ITER project and will not be truly resolved until ITER begins burning plasma operation.

  20. Estimating the basilar-membrane input-output function in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker is transformed more linearly, the ratio between the slopes of growth of masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O-function between linear processing at low levels and compressive processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O-function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly
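
    A hedged sketch of the slope-ratio idea: if the GOM functions are approximated by straight-line fits in dB, the ratio of the off-frequency to the on-frequency slope serves as a compression estimate. The data points below are invented, and the study's knee-point estimation step is omitted.

    ```python
    import numpy as np

    def compression_from_gom(signal_levels, on_masker_levels, off_masker_levels):
        """Estimate the BM compression exponent as the ratio of fitted GOM
        slopes (off-frequency masker assumed linear at the signal place,
        on-frequency masker compressed like the signal). Fits are dB/dB."""
        s_on = np.polyfit(signal_levels, on_masker_levels, 1)[0]
        s_off = np.polyfit(signal_levels, off_masker_levels, 1)[0]
        return s_off / s_on

    # Illustrative data (dB): off-frequency GOM slope ~0.25, on-frequency ~1
    sig = np.array([40.0, 50.0, 60.0, 70.0])
    on = np.array([42.0, 52.0, 61.0, 72.0])
    off = np.array([60.0, 62.5, 65.0, 67.5])
    print(compression_from_gom(sig, on, off))   # ~0.25, strong compression
    ```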

  1. Effects of censoring on parameter estimates and power in genetic modeling

    NARCIS (Netherlands)

    Derks, Eske M.; Dolan, Conor V.; Boomsma, Dorret I.

    2004-01-01

    Genetic and environmental influences on variance in phenotypic traits may be estimated with normal theory Maximum Likelihood (ML). However, when the assumption of multivariate normality is not met, this method may result in biased parameter estimates and incorrect likelihood ratio tests. We

  3. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V.

    2002-12-01

    The aim of this thesis is to build a statistical model of the size distribution of oil and gas fields in a given sedimentary basin, covering both the fields that exist in the subsoil and those that have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials with penalized maximum likelihood techniques, provides estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample from an unknown population with a Lévy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one, which is 'size-biased'; the associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and an unknown weighting function representing the sampling bias. For an arbitrary partition of the size interval (called a model), the analytical solutions of likelihood maximization enable estimation of both the parameter of the underlying Lévy-Pareto law and the weighting function, which is assumed to be piecewise constant and based on the partition. We add a monotonicity constraint on the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary within several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and the Hellinger risk of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate both theoretical and practical aspects of our model. (author)

  4. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Roč. 41, č. 7 (2012), s. 1167-1194 ISSN 0361-0918 Grant - others:Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords : consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012

  5. Statistical properties of the normalized ice particle size distribution

    Science.gov (United States)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

    Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation imply that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N0* and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N0* and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N0* and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean maximum dimension diameter-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000
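
    Assuming the standard moment-based definitions from Testud et al. (2001), Dm and N0* can be computed from a binned PSD as below; the exponential test distribution is chosen because for it the normalization provably recovers the intercept N0.

    ```python
    import numpy as np
    from math import gamma

    def moment(D, N, n):
        """n-th moment of a binned PSD: integral of N(D) D^n dD."""
        return np.trapz(N * D**n, D)

    def normalized_psd_params(D, N):
        """Dm and N0* as in the normalization of Testud et al. (2001):
        Dm = M4/M3 (mean volume-weighted diameter) and
        N0* = (4^4 / Gamma(4)) * M3^5 / M4^4 (intercept parameter)."""
        M3, M4 = moment(D, N, 3), moment(D, N, 4)
        return M4 / M3, (4.0**4 / gamma(4)) * M3**5 / M4**4

    # Illustrative exponential PSD, N(D) = N0 exp(-lambda D)
    D = np.linspace(1e-6, 5e-3, 500)          # diameters in m
    N = 1e6 * np.exp(-2000.0 * D)
    print(normalized_psd_params(D, N))        # Dm ~ 4/lambda, N0* ~ N0
    ```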

  6. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  7. INTERVAL STATE ESTIMATION FOR SINGULAR DIFFERENTIAL EQUATION SYSTEMS WITH DELAYS

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2016-07-01

    Full Text Available The paper deals with linear differential equation systems with algebraic restrictions (singular systems) and a method of interval observer design for this kind of system. The systems contain constant time delay, measurement noise and disturbances. Interval observer synthesis is based on monotone and cooperative systems techniques, linear matrix inequalities, Lyapunov function theory and interval arithmetic. A set of conditions that makes interval observer synthesis possible is proposed. Results of the synthesized observer's operation are shown on the example of a dynamical interindustry balance model. The advantage of the proposed method is that it is adapted to observer design for uncertain systems when the intervals of admissible values for the uncertain parameters are given. The designed observer is capable of providing asymptotically definite limits on the estimation accuracy, since the interval of admissible values for the object state is defined at every instant. The obtained result provides an opportunity to develop the interval estimation theory for complex systems that contain parametric uncertainty, varying delay and nonlinear elements. Interval observers increasingly find applications in economics, electrical engineering, and mechanical systems with constraints and optimal flow control.

  8. Topological characteristics of multi-valued maps and Lipschitzian functionals

    International Nuclear Information System (INIS)

    Klimov, V S

    2008-01-01

    This paper deals with the operator inclusion 0 ∈ F(x) + N_Q(x), where F is a multi-valued map of monotonic type from a reflexive space V to its conjugate V*, and N_Q is the normal cone to the closed set Q, which, generally speaking, is not convex. To estimate the number of solutions of this inclusion we introduce topological characteristics of multi-valued maps and Lipschitzian functionals that have the properties of additivity and homotopy invariance. We prove some infinite-dimensional versions of the Poincaré-Hopf theorem

  9. Quorum-Sensing Synchronization of Synthetic Toggle Switches: A Design Based on Monotone Dynamical Systems Theory.

    Directory of Open Access Journals (Sweden)

    Evgeni V Nikolaev

    2016-04-01

    Full Text Available Synthetic constructs in biotechnology, biocomputing, and modern gene therapy interventions are often based on plasmids or transfected circuits which implement some form of "on-off" switch. For example, the expression of a protein used for therapeutic purposes might be triggered by the recognition of a specific combination of inducers (e.g., antigens), and memory of this event should be maintained across a cell population until a specific stimulus commands a coordinated shut-off. The robustness of such a design is hampered by molecular ("intrinsic") or environmental ("extrinsic") noise, which may lead to spontaneous changes of state in a subset of the population and is reflected in the bimodality of protein expression, as measured for example using flow cytometry. In this context, a "majority-vote" correction circuit, which brings deviant cells back into the required state, is highly desirable, and quorum-sensing has been suggested as a way for cells to broadcast their states to the population as a whole so as to facilitate consensus. In this paper, we propose what we believe is the first such design that has mathematically guaranteed properties of stability and auto-correction under certain conditions. Our approach is guided by concepts and theory from the field of "monotone" dynamical systems developed by M. Hirsch, H. Smith, and others. We benchmark our design by comparing it to an existing design which has been the subject of experimental and theoretical studies, illustrating its superiority in stability and self-correction of synchronization errors. Our stability analysis, based on dynamical systems theory, guarantees global convergence to steady states, ruling out unpredictable ("chaotic") behaviors and even sustained oscillations in the limit of convergence. These results are valid no matter what the values of the parameters are, and are based only on the wiring diagram. The theory is complemented by extensive computational bifurcation analysis

  10. Fourier Spot Volatility Estimator: Asymptotic Normality and Efficiency with Liquid and Illiquid High-Frequency Data

    Science.gov (United States)

    2015-01-01

    The recent availability of high frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high frequency data are generally affected by microstructure noise effects. We address this issue by using the Fourier estimator of instantaneous volatility introduced in Malliavin and Mancino (2002). We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis of high frequency data (U.S. S&P500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617

  11. Bias-reduced estimation of long memory stochastic volatility

    DEFF Research Database (Denmark)

    Frederiksen, Per; Nielsen, Morten Ørregaard

    We propose to use a variant of the local polynomial Whittle estimator to estimate the memory parameter in volatility for long memory stochastic volatility models with potential nonstationarity in the volatility process. We show that the estimator is asymptotically normal and capable of obtaining...

  12. Efficient stereological approaches for the volumetry of a normal or enlarged spleen from MDCT images

    Energy Technology Data Exchange (ETDEWEB)

    Mazonakis, Michalis; Stratakis, John; Damilakis, John [University of Crete, Department of Medical Physics, Faculty of Medicine, P.O. Box 2208, Iraklion, Crete (Greece)

    2015-06-01

    To introduce efficient stereological approaches for estimating the volume of a normal or enlarged spleen from MDCT. All study participants underwent an abdominal MDCT. The first group included 20 consecutive patients with splenomegaly and the second group consisted of 20 subjects with a normal spleen. Splenic volume estimations were performed using the stereological point counting method. Stereological assessments were optimized using the systematic slice sampling procedure. Planimetric measurements based on manual tracing of splenic boundaries on each slice were taken as reference values. Stereological analysis using five to eight systematically sampled slices provided enlarged splenic volume estimations with a mean precision of 4.9 ± 1.0 % in a mean time of 2.3 ± 0.4 min. A similar measurement duration and error was observed for normal splenic volume assessment using four to seven systematically selected slices. These stereological approaches slightly but insignificantly overestimated the volume of a normal and enlarged spleen compared to planimetry (P > 0.05) with a mean difference of -1.3 ± 4.3 % and -2.7 ± 5.2 %, respectively. The two methods were highly correlated (r ≥ 0.96). The variability of repeated stereological estimations was below 3.8 %. The proposed stereological approaches enable the rapid, reproducible, and accurate splenic volume estimation from MDCT data in patients with or without splenomegaly. (orig.)
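
    The point-counting estimator underlying such stereological volumetry is the Cavalieri formula: volume = slice spacing x area per grid point x total point count. The sketch below shows that arithmetic with invented counts; it omits the systematic sampling design and error prediction details.

    ```python
    def cavalieri_volume(point_counts, slice_spacing_cm, area_per_point_cm2):
        """Cavalieri/point-counting volume estimate from systematically sampled
        CT slices: V = slice spacing * area per grid point * total points."""
        return slice_spacing_cm * area_per_point_cm2 * sum(point_counts)

    # Illustrative: 6 systematically sampled slices, 1.0 cm apart, with a grid
    # associating 0.5 cm^2 with each point (all numbers hypothetical)
    counts = [22, 41, 58, 60, 44, 19]
    print(cavalieri_volume(counts, slice_spacing_cm=1.0, area_per_point_cm2=0.5))
    # -> 122.0 cm^3
    ```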

  13. Bearing Capacity of Foundations subjected to Impact Loads

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Jakobsen, Kim Parsberg

    1996-01-01

    In the design process for foundations, the bearing capacity calculations are normally restricted to monotonic loads. Even in cases where the impact load is of significance, the dynamic aspects are neglected by use of a traditional deterministic ultimate limit state analysis. Nevertheless it is com...

  14. Analysis of a Dynamic Viscoelastic Contact Problem with Normal Compliance, Normal Damped Response, and Nonmonotone Slip Rate Dependent Friction

    Directory of Open Access Journals (Sweden)

    Mikaël Barboteu

    2016-01-01

    Full Text Available We consider a mathematical model which describes the dynamic evolution of a viscoelastic body in frictional contact with an obstacle. The contact is modelled with a combination of a normal compliance and a normal damped response law associated with a slip rate-dependent version of Coulomb's law of dry friction. We derive a variational formulation, and an existence and uniqueness result for the weak solution of the problem is presented. Next, we introduce a fully discrete approximation of the variational problem based on a finite element method and an implicit time integration scheme. We study this fully discrete approximation scheme and bound the errors of the approximate solutions. Under regularity assumptions imposed on the exact solution, optimal order error estimates are derived for the fully discrete solution. Finally, after recalling the solution of the frictional contact problem, some numerical simulations are provided in order to illustrate both the behavior of the solution related to the frictional contact conditions and the theoretical error estimate result.

  15. Pellet cladding interaction (PCI) fuel duty during normal operation of ASEA-ATOM BWRs

    International Nuclear Information System (INIS)

    Vaernild, O.; Olsson, S.

    1983-01-01

    Local power changes may under special conditions cause PCI fuel failures in a power reactor. By restricting the local power increase rate in certain situations it is possible to prevent PCI failures. Fine motion control rod drives, a large operating range of the main recirculation pumps and an advanced burnable absorber design have minimized the impact of the PCI restrictions. With current ICFM schemes the power of an assembly gradually increases during the first cycle of operation due to the burnup of the gadolinia. After this the power essentially decreases monotonically during the remaining life of the assembly. Some assemblies are, for short burnup intervals, operated at very low power in control cells. The control rods in these cells may however be withdrawn without restrictions leading to energy production losses. Base load operation would in the normal case lead to very minor PCI loads on the fuel regardless of any PCI related operating restrictions. At the return to full power after a short shutdown or in connection with load follow operation, the xenon transient may cause PCI loads on the fuel. To avoid this a few hours' hold-time before going back to full power is recommended. (author)

  16. Pellet-cladding interaction (PCI) fuel duty during normal operation of ASEA-ATOM BWRs

    International Nuclear Information System (INIS)

    Vaernild, O.; Olsson, S.

    1985-01-01

    Local power changes may, under special conditions, cause PCI fuel failures in a power reactor. By restricting the local power increase rate in certain situations it is possible to prevent PCI failures. Fine motion control rod drives, a large operating range of the main recirculation pumps and an advanced burnable absorber design have minimized the impact of the PCI restrictions. With current ICFM schemes the power of an assembly gradually increases during the first cycle of operation due to the burnup of the gadolinia. After this the power essentially decreases monotonically during the remaining life of the assembly. Some assemblies are, for short burnup intervals, operated at very low power in control cells. The control rods in these cells may, however, be withdrawn without restrictions leading to energy production losses. Base load operation would in the normal case lead to very minor PCI loads on the fuel regardless of any PCI-related operating restrictions. At the return to full power after a short shutdown or in connection with load follow operation, the xenon transient may cause PCI loads on the fuel. To avoid this a few hours' hold-time before going back to full power is recommended. (author)

  17. Estimation of the Mean of a Univariate Normal Distribution When the Variance is not Known

    NARCIS (Netherlands)

    Danilov, D.L.; Magnus, J.R.

    2002-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k + 1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We investigate properties of this estimator in

  18. Estimation of the mean of a univariate normal distribution when the variance is not known

    NARCIS (Netherlands)

    Danilov, Dmitri

    2005-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k+1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We generalize this estimator to the case

  19. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity; e.g., regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimating epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium content of the same slides was also estimated by a pathologist and used to normalize the ELISA results, and the computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assays themselves, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than in normal tissues, with a p value less than 0.001. However, after normalization by the epithelium percentage, the ELISA measurements of both EpCAM and CTSL were in agreement with the IHC staining results, showing a significant increase only in EpCAM, with no difference in CTSL expression in cancer tissues. These results
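
    The normalization step itself reduces to dividing the bulk measurement by the estimated epithelium fraction. A minimal sketch with invented numbers is shown below to illustrate how an apparent tumor-normal difference can shrink after adjustment.

    ```python
    def normalize_by_epithelium(elisa_value, epithelium_fraction):
        """Scale an ELISA measurement on a bulk tissue extract to an
        epithelium-normalized value, so that specimens with different
        epithelium content become comparable."""
        if not 0.0 < epithelium_fraction <= 1.0:
            raise ValueError("epithelium fraction must be in (0, 1]")
        return elisa_value / epithelium_fraction

    # Illustrative: tumor specimen with 80% epithelium vs normal with 30%
    print(normalize_by_epithelium(10.0, 0.80))   # 12.5
    print(normalize_by_epithelium(4.0, 0.30))    # ~13.3: the gap narrows
    ```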

  20. Cumulative and current exposure to potentially nephrotoxic antiretrovirals and development of chronic kidney disease in HIV-positive individuals with a normal baseline estimated glomerular filtration rate

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Lundgren, Jens D; Ross, Michael

    2016-01-01

    BACKGROUND: Whether or not the association between some antiretrovirals used in HIV infection and chronic kidney disease is cumulative is a controversial topic, especially in patients with initially normal renal function. In this study, we aimed to investigate the association between duration of exposure to antiretrovirals and the development of chronic kidney disease in people with initially normal renal function, as measured by estimated glomerular filtration rate (eGFR). METHODS: In this prospective international cohort study, HIV-positive adult participants (aged ≥16 years) from the D:A:D study (based in Europe, the USA, and Australia) with first eGFR greater than 90 mL/min per 1·73 m(2) were followed from baseline (first eGFR measurement after Jan 1, 2004) until the occurrence of one of the following: chronic kidney disease; last eGFR measurement; Feb 1, 2014; or final visit plus 6...

  1. An empirical comparison of effective concentration estimators for evaluating aquatic toxicity test responses

    Energy Technology Data Exchange (ETDEWEB)

    Bailer, A.J.; Hughes, M.R.; Denton, D.L.; Oris, J.T.

    2000-01-01

    Aquatic toxicity tests are statistically evaluated either by hypothesis testing procedures to derive a no-observed-effect concentration or by inverting regression models to calculate the concentration associated with a specific reduction from the control response. The latter methods can be described as potency estimation methods. Standard US Environmental Protection Agency (USEPA) potency estimation methods are based on two different techniques. For continuous or count response data, a nominally nonparametric method is used that assumes monotonically decreasing responses and piecewise linear patterns between successive concentration groups. For quantal responses, a probit regression model with a linear dose term is fit. These techniques were compared with a recently developed parametric regression-based estimator, the relative inhibition estimator, RIp. This method is based on fitting generalized linear models, followed by estimation of the concentration associated with a particular decrement relative to control responses. These estimators, with levels of inhibition (p) of 25 and 50%, were applied to a series of chronic toxicity tests in a US EPA Region 9 database of reference toxicity tests. Biological responses evaluated in these toxicity tests included the number of young produced in three broods by the water flea (Ceriodaphnia dubia) and germination success and tube length data from the giant kelp (Macrocystis pyrifera). The greatest discrepancy between the RIp and standard US EPA estimators was observed for C. dubia. The concentration-response pattern for this biological endpoint exhibited nonmonotonicity more frequently than that for any of the other endpoints. Future work should consider optimal experimental designs to estimate these quantities, methods for constructing confidence intervals, and simulation studies to explore the behavior of these estimators under known conditions.
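
    A simplified sketch of the linear interpolation idea behind the USEPA ICp method is given below: responses are forced to be non-increasing (here by a running minimum rather than the pooled-adjacent-violators smoothing used in practice), and the concentration at a p% reduction from the control mean is interpolated. All counts are invented.

    ```python
    import numpy as np

    def icp_linear_interpolation(concs, means, p=0.25):
        """Linear-interpolation estimate of ICp: the concentration at which
        the (assumed monotone) response falls to (1 - p) times the control
        mean. A running minimum stands in for proper monotone smoothing."""
        y = np.minimum.accumulate(np.asarray(means, dtype=float))
        target = (1.0 - p) * y[0]                              # y[0] = control
        for j in range(1, len(concs)):
            if y[j] <= target:                                 # bracket found
                f = (target - y[j - 1]) / (y[j] - y[j - 1])
                return concs[j - 1] + f * (concs[j] - concs[j - 1])
        return None                                            # no p% reduction seen

    # Illustrative C. dubia brood counts by concentration (control first)
    concs = [0.0, 6.25, 12.5, 25.0, 50.0]
    means = [24.0, 23.0, 20.0, 14.0, 6.0]
    print(icp_linear_interpolation(concs, means, p=0.25))      # IC25 ~ 16.7
    ```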

  2. Normalized glandular dose (DgN) coefficients for flat-panel CT breast imaging

    International Nuclear Information System (INIS)

    Thacker, Samta C; Glick, Stephen J

    2004-01-01

    The development of new digital mammography techniques such as dual-energy imaging, tomosynthesis and CT breast imaging will require investigation of optimal camera design parameters and optimal imaging acquisition parameters. In optimizing these acquisition protocols and imaging systems it is important to have knowledge of the radiation dose to the breast. This study presents a methodology for estimating the normalized glandular dose to the uncompressed breast using the geometry proposed for flat-panel CT breast imaging. The simulation uses the GEANT 3 Monte Carlo code to model x-ray transport and absorption within the breast phantom. The Monte Carlo software was validated for breast dosimetry by comparing results of the normalized glandular dose (DgN) values of the compressed breast to those reported in the literature. The normalized glandular dose was then estimated for a range of breast diameters from 10 cm to 18 cm using an uncompressed breast model with a homogeneous composition of adipose and glandular tissue, and for monoenergetic x-rays from 10 keV to 120 keV. These data were fit providing expressions for the normalized glandular dose. Using these expressions for the DgN coefficients and input variables such as the diameter, height and composition of the breast phantom, the mean glandular dose for any spectra can be estimated. A computer program to provide normalized glandular dose values has been made available online. In addition, figures displaying energy deposition maps are presented to better understand the spatial distribution of dose in CT breast imaging
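
    Given fitted monoenergetic DgN coefficients, a spectrum-weighted mean glandular dose can be assembled as a weighted sum over the x-ray spectrum. The sketch below assumes a toy three-bin spectrum and a made-up DgN(E) shape; it only illustrates the bookkeeping, not the paper's fitted expressions.

    ```python
    import numpy as np

    def mean_glandular_dose(energies_keV, fluence_weights, dgn_lookup, air_kerma_mGy):
        """Spectrum-weighted mean glandular dose: MGD = K * sum_E w(E) DgN(E),
        with w(E) a normalized spectral weight and DgN(E) the monoenergetic
        normalized glandular dose coefficients."""
        w = np.asarray(fluence_weights, dtype=float)
        w /= w.sum()                                   # normalize the spectrum
        dgn = np.array([dgn_lookup(e) for e in energies_keV])
        return air_kerma_mGy * float(w @ dgn)

    # Illustrative three-bin spectrum and a made-up DgN(E) curve (hypothetical)
    dgn_lookup = lambda e: 0.02 * e / 30.0            # toy shape only
    print(mean_glandular_dose([25, 30, 35], [0.3, 0.5, 0.2], dgn_lookup, 5.0))
    ```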

  3. The Application of Normal Stress Reduction Function in Tilt Tests for Different Block Shapes

    Science.gov (United States)

    Kim, Dong Hyun; Gratchev, Ivan; Hein, Maw; Balasubramaniam, Arumugam

    2016-08-01

    This paper focuses on the influence of the shapes of rock cores, which control the sliding or toppling behaviour in tilt tests for the estimation of rock joint roughness coefficients (JRC). When JRC values are estimated by tilt tests, the values are directly proportional to the basic friction angle of the rock material and the applied normal stress on the sliding plane. The normal stress obviously varies with the shape of the sliding block, and the basic friction angle is also affected by the sample shape in tilt tests. In this study, the shapes of core blocks are classified into three representative shapes, which are created using plaster. Using these variously shaped artificial cores, a set of tilt tests is carried out to identify the influence of shape on the normal stress and the basic friction angle in tilt tests. Based on the test results, a normal stress reduction function is proposed to estimate the normal stress in tilt tests according to the sample shape, building on Barton's empirical equation. The proposed normal stress reduction functions are verified by tilt tests using artificial plaster joints and real rock joint sets. The plaster joint sets are well matched and cast in detailed printed moulds using a 3D printing technique. With the application of the functions, the JRC values obtained from the tilt tests using the plaster samples and the natural rock samples are distributed within a reasonable JRC range when compared with the measured values.
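
    For orientation, Barton's tilt-test relation can be rearranged to back-calculate JRC once the normal stress on the joint at sliding is known, which is exactly the quantity the proposed reduction function adjusts for block shape. All numbers below are hypothetical.

    ```python
    from math import log10

    def jrc_from_tilt(tilt_angle_deg, basic_friction_deg, jcs, sigma_n0):
        """Back-calculate JRC from a tilt test via Barton's criterion:
        JRC = (alpha - phi_b) / log10(JCS / sigma_n0), where alpha is the
        tilt angle at sliding and sigma_n0 the normal stress on the joint
        at that angle."""
        return (tilt_angle_deg - basic_friction_deg) / log10(jcs / sigma_n0)

    # Illustrative values: alpha = 62 deg, phi_b = 30 deg, JCS = 40 MPa,
    # sigma_n0 = 0.004 MPa (all hypothetical)
    print(jrc_from_tilt(62.0, 30.0, 40.0, 0.004))   # -> 8.0
    ```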

  4. Do centimetres matter? Self-reported versus estimated height measurements in parents.

    Science.gov (United States)

    Gozzi, T; Flück, Ce; L'allemand, D; Dattani, M T; Hindmarsh, P C; Mullis, P E

    2010-04-01

    An impressive discrepancy between reported and measured parental height is often observed. The aims of this study were: (a) to assess whether there is a significant difference between reported and measured parental height; (b) to focus on the reported and, thereafter, measured height of the partner; (c) to analyse the impact on the calculated target height range. A total of 1542 individual parents were enrolled. The parents were subdivided into three groups: normal height (3rd-97th centile), short (<3rd centile) and tall (>97th centile) stature. Overall, compared with men, women were far better at estimating their own height. Women of normal stature underestimated the short partner and overestimated the tall partner, whereas male partners of normal stature overestimated both their short and their tall partners. Women of tall stature estimated the heights of their short partners correctly, whereas the heights of normal-statured men were underestimated. On the other hand, tall men overestimated the heights of female partners of normal and short stature. Furthermore, women of short stature estimated partners of normal stature adequately, while the heights of their tall partners were overestimated. Interestingly, short men significantly underestimated their normal-statured, but overestimated their tall, female partners. Only measured heights should be used to perform accurate evaluations of height, particularly when diagnostic tests or treatment interventions are contemplated. For clinical trials, we suggest that only quality measured parental heights are acceptable, as the errors incurred in estimates may enhance or conceal true treatment effects.
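
    The sensitivity of the conclusion to a few centimetres can be sketched with the standard mid-parental (Tanner) target height formula; the 13 cm sex adjustment is standard, while the +/- 8.5 cm band is a commonly quoted approximation rather than a value from this study.

```python
def target_height_range(father_cm, mother_cm, sex, band_cm=8.5):
    """Mid-parental target height: average the parents' heights with a
    13 cm adjustment for the child's sex; the target range is the band
    around it. A 4 cm error in one reported height shifts the whole
    range by 2 cm."""
    adjustment = 13.0 if sex == "M" else -13.0
    th = (father_cm + mother_cm + adjustment) / 2.0
    return th - band_cm, th, th + band_cm

print(target_height_range(178.0, 165.0, "F"))   # (156.5, 165.0, 173.5)
```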

  5. CT assessment of normal splenic size in children

    International Nuclear Information System (INIS)

    Prassopoulos, P.; Cavouras, D.

    1994-01-01

    The size of the normal spleen was estimated by CT in 153 children examined for indications unrelated to splenic disease. In each patient the width, thickness, length and volume of the spleen were calculated. Measurements were also normalized to the transverse diameter of the body of the first lumbar vertebra. The spleen underwent significant growth during the first 4 years of life and reached maximum size at the age of 13. There were no differences in splenic volume between boys and girls. Splenic thickness correlated best with normal splenic volume, and a strong correlation between thickness and volume was also found in a group of 45 children with clinically evident splenomegaly. Splenic thickness, an easy-to-use measurement, may therefore be employed in everyday practice to represent splenic volume on CT. (orig.)

  6. Normal Spin Asymmetries in Elastic Electron-Proton Scattering

    International Nuclear Information System (INIS)

    M. Gorchtein; P.A.M. Guichon; M. Vanderhaeghen

    2004-01-01

    We discuss the two-photon exchange contribution to observables which involve lepton helicity flip in elastic lepton-nucleon scattering. This contribution is accessed through the single spin asymmetry for a lepton beam polarized normal to the scattering plane. We estimate this beam normal spin asymmetry at large momentum transfer using a parton model and we express the corresponding amplitude in terms of generalized parton distributions. We further discuss this observable in the quasi-RCS kinematics which may be dominant at certain kinematical conditions and find it to be governed by the photon helicity-flip RCS amplitudes

  8. Geomagnetic reversal in Brunhes normal polarity epoch.

    Science.gov (United States)

    Smith, J D; Foster, J H

    1969-02-07

    The magnetic stratigraphy of seven cores of deep-sea sediment established the existence of a short interval of reversed polarity in the upper part of the Brunhes epoch of normal polarity. The reversed zone in the cores correlates well with paleontological boundaries and is named the Blake event. Its boundaries are estimated to be 108,000 and 114,000 years ago +/- 10 percent.

  9. Essential Oil of Japanese Cedar (Cryptomeria japonica) Wood Increases Salivary Dehydroepiandrosterone Sulfate Levels after Monotonous Work.

    Science.gov (United States)

    Matsubara, Eri; Tsunetsugu, Yuko; Ohira, Tatsuro; Sugiyama, Masaki

    2017-01-21

    Employee problems arising from mental illness have steadily increased and become a serious social problem in recent years. Wood is a widely available plant material, and knowledge of the psychophysiological effects of inhaling woody volatile compounds has grown considerably. In this study, we established an experimental method to evaluate the effects of Japanese cedar wood essential oil on subjects performing monotonous work. Two experimental conditions, one with and one without diffusion of the essential oil, were prepared. Salivary stress markers were determined during and after a calculation task, and questionnaires were then distributed for subjective odor assessment. We found that inhalation of air containing the volatile compounds of Japanese cedar wood essential oil increased the secretion of dehydroepiandrosterone sulfate (DHEA-s). Slight differences in the subjective assessment of the odor of the experiment rooms were observed. The results of the present study indicate that the volatile compounds of Japanese cedar wood essential oil affect the endocrine regulatory mechanism to facilitate stress responses. Thus, we suggest that this essential oil can improve employees' mental health.

  10. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that an estimating criterion other than least squares is needed. Theoretically the variance normalization criterion has…
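
    For reference, a textbook 2SLS estimator is sketched below (not the variance normalization alternative the record advocates); the near-singular cross-product of collinear instruments in stage 1 is where the claimed sensitivity enters.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Textbook 2SLS: project the (endogenous) regressors X onto the
    instruments Z, then regress y on the fitted values. With nearly
    collinear instruments, Z'Z is ill-conditioned and the estimates
    become unstable."""
    proj = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T            # projection onto span(Z)
    X_hat = proj @ X                                    # stage 1 fitted values
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)    # stage 2
    return beta
```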

  11. Bayesian estimation of isotopic age differences

    International Nuclear Information System (INIS)

    Curl, R.L.

    1988-01-01

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to a finding of no significant age difference under classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of the age difference which incorporates prior knowledge of age order. Age measurement errors are assumed to be log-normal, and a noninformative but constrained bivariate prior for two true ages in known order is adopted. The true-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are distinct and in the correct order even if the measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. The Bayesian estimate of the age difference is then 22.7 ka, with a 75% credible interval of [4.4, 43.7] ka.
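
    The worked example can be reproduced approximately with a short Monte Carlo sketch; this uses a flat prior on the true ages with simple rejection of out-of-order pairs, not the paper's exact constrained bivariate prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_age_difference(t1_meas, t2_meas, cv=0.2, n=1_000_000):
    """Bayes-style estimate of an age difference when the true ages are
    known to satisfy t2 >= t1. Errors are log-normal with coefficient of
    variation cv; order information enters by rejecting sampled pairs
    that violate it."""
    sigma = np.sqrt(np.log(1.0 + cv**2))
    t1 = t1_meas * np.exp(sigma * rng.standard_normal(n))
    t2 = t2_meas * np.exp(sigma * rng.standard_normal(n))
    diff = (t2 - t1)[t2 >= t1]                     # enforce the known order
    lo, hi = np.percentile(diff, [12.5, 87.5])     # central 75% credible interval
    return diff.mean(), (lo, hi)

# Both samples measured at 100 ka with cv = 0.2, as in the abstract;
# this lands near the quoted 22.7 ka and [4.4, 43.7] ka.
print(bayes_age_difference(100.0, 100.0))
```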

  12. Attention and normalization circuits in macaque V1

    Science.gov (United States)

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-01-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. PMID:25757941
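
    The model-selection step can be sketched as follows; the two model forms and the firing rates are hypothetical stand-ins for the paper's model classes, chosen only to show an AIC comparison between attention acting inside versus outside the normalization.

```python
import numpy as np
from scipy.optimize import curve_fit

def attention_in_normalization(x, rmax, c50, a):
    """Attention scales the drive inside the normalization (multiplicative)."""
    c, att = x
    g = 1.0 + a * att
    return rmax * g * c / (g * c + c50)

def attention_outside_normalization(x, rmax, c50, a):
    """Attention adds to the response after normalization (additive)."""
    c, att = x
    return rmax * c / (c + c50) + a * att

def aic(y, yhat, k):
    n = len(y)
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

contrast = np.tile([0.05, 0.1, 0.2, 0.4, 0.8], 2)
attended = np.repeat([0.0, 1.0], 5)
rates = np.array([8, 14, 22, 30, 35, 10, 17, 26, 34, 39.0])  # hypothetical data

for model in (attention_in_normalization, attention_outside_normalization):
    p, _ = curve_fit(model, (contrast, attended), rates, p0=[40.0, 0.2, 0.5],
                     maxfev=10000)
    print(model.__name__, aic(rates, model((contrast, attended), *p), k=3))
```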

  13. Using partially labeled data for normal mixture identification with application to class definition

    Science.gov (United States)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
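
    A sketch of the resulting EM iteration is shown below; the only departure from standard mixture EM is that a labeled sample's responsibilities are zeroed outside the components of its own class. Variable names and initialization are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def em_partially_labeled(X, labels, class_of_comp, means, covs, weights, iters=50):
    """EM for a normal mixture with partially labeled samples.
    labels[i] is a class index or -1 for an unlabeled sample;
    class_of_comp[k] gives the class that component k belongs to."""
    n, K = len(X), len(weights)
    class_of_comp = np.asarray(class_of_comp)
    for _ in range(iters):
        # E-step: responsibilities, restricted to the known class if labeled.
        r = np.column_stack([weights[k] * mvn.pdf(X, means[k], covs[k])
                             for k in range(K)])
        for i in range(n):
            if labels[i] >= 0:
                r[i, class_of_comp != labels[i]] = 0.0
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted mixing proportion, mean, and covariance updates.
        nk = r.sum(axis=0)
        weights = nk / n
        means = [(r[:, k, None] * X).sum(axis=0) / nk[k] for k in range(K)]
        covs = [((r[:, k, None] * (X - means[k])).T @ (X - means[k])) / nk[k]
                for k in range(K)]
    return weights, means, covs
```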

  14. Effect of Smart Meter Measurements Data On Distribution State Estimation

    DEFF Research Database (Denmark)

    Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte

    2018-01-01

    Smart distribution grids with renewable energy based generators and demand response resources (DRR) require accurate state estimators for real time control. Distribution grid state estimators are normally based on accumulated smart meter measurements. However, an increase of measurements in the physical grid can place significant stress not only on the communication infrastructure but also on the control algorithms. This paper proposes a methodology to analyze the real time smart meter data needed from low voltage distribution grids and their applicability in distribution state estimation...
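
    As a rough illustration of why meter placement and accuracy matter, a linearized weighted least squares estimator is sketched below; it is a generic WLS stand-in, not the paper's estimator.

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted least squares for a linearized measurement model
    z = H x + e, with meter accuracies sigma. Dropping or degrading
    smart meter measurements inflates the estimate covariance
    (H^T W H)^-1, which quantifies their value to the estimator."""
    W = np.diag(1.0 / sigma ** 2)
    G = H.T @ W @ H                           # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    return x_hat, np.linalg.inv(G)
```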

  15. A Lattice-Misfit-Dependent Damage Model for Non-linear Damage Accumulations Under Monotonous Creep in Single Crystal Superalloys

    Science.gov (United States)

    le Graverend, J.-B.

    2018-05-01

    A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced at some point in the creep life. Furthermore, a phenomenological model describing the evolution of the constrained lattice misfit during monotonous creep loading is also formulated. The response of the lattice-misfit-dependent, plasticity-coupled damage model is compared with experimental results obtained at 140 and 160 MPa on the first-generation Ni-based single crystal superalloy MC2. The comparison reveals that the damage model performs well at 160 MPa but less well at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.
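
    To show the intended coupling, a toy integration of a misfit-dependent damage rate is sketched below; the functional form, exponents, and misfit sensitivity are all invented for illustration and are not the paper's calibrated model.

```python
def integrate_damage(stress_MPa, misfit, t_end_h, dt_h=0.1):
    """Toy damage accumulation dD/dt = (stress/A)**r / (1-D)**k, where the
    strength-like scale A shrinks with the magnitude of the constrained
    lattice misfit, so a thermal jump that changes the misfit accelerates
    the non-linear damage growth. All constants are hypothetical."""
    r, k = 4.0, 3.0
    A = 1000.0 * (1.0 - 50.0 * abs(misfit))   # hypothetical misfit sensitivity
    D, t = 0.0, 0.0
    while t < t_end_h and D < 1.0:
        D += dt_h * (stress_MPa / A) ** r / (1.0 - D) ** k
        t += dt_h
    return min(D, 1.0), t

print(integrate_damage(160.0, misfit=-0.002, t_end_h=2000.0))
```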

  16. Feasibility of Residual Stress Nondestructive Estimation Using the Nonlinear Property of Critical Refraction Longitudinal Wave

    Directory of Open Access Journals (Sweden)

    Yu-Hua Zhang

    2017-01-01

    Residual stress has a significant influence on the performance of mechanical components, and the nondestructive estimation of residual stress has always been a difficult problem. This study applies the relative nonlinear coefficient of the critical refraction longitudinal (LCR) wave to nondestructively characterize the stress state of materials; the feasibility of residual stress estimation using the nonlinear property of the LCR wave is verified. Nonlinear ultrasonic measurements based on the LCR wave are conducted on components with a known stress state to calculate the relative nonlinear coefficient. Experimental results indicate that the relative nonlinear coefficient monotonically increases with prestress, with an increment of about 80%, while the wave velocity decreases by only about 0.2%. The sensitivity of the relative nonlinear coefficient to stress is thus much higher than that of wave velocity. Furthermore, a dependence between the relative nonlinear coefficient and the deformation state of components is found. The stress detection resolution based on the nonlinear property of the LCR wave is 10 MPa, a higher resolution than wave velocity provides. These results demonstrate that the nonlinear property of the LCR wave is more suitable for stress characterization than wave velocity, and this quantitative information could be used for residual stress estimation.
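
    A sketch of the measurement arithmetic follows; the amplitude values are hypothetical, and the relative coefficient keeps only the harmonic-amplitude ratio, since propagation distance and frequency are fixed across measurements.

```python
import numpy as np

def relative_nonlinear_coefficient(a1, a2):
    """Relative nonlinear coefficient of the LCR wave from the fundamental
    (a1) and second harmonic (a2) amplitudes: beta' ~ a2 / a1**2."""
    return a2 / a1 ** 2

# Hypothetical normalized beta' values versus applied prestress:
stress_MPa = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
beta_rel = np.array([1.00, 1.19, 1.41, 1.60, 1.80])
slope, _ = np.polyfit(stress_MPa, beta_rel, 1)
print(f"~{100.0 * slope * 200.0:.0f}% rise over 200 MPa")  # matches the ~80% increment
```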

  17. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition complies with a normal distribution, the conjugate prior of its distribution parameters (μ, h) is the normal-gamma distribution. With the help of the maximum entropy and moments-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior on (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior with the sampling data. Computing methods are described and examples are presented as demonstrations.
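
    The conjugate update itself is closed-form; a minimal sketch of the standard normal-gamma hyperparameter update (with h the precision) is given below. The maximum-entropy prior construction is not reproduced here.

```python
def normal_gamma_update(mu0, kappa0, alpha0, beta0, xs):
    """Posterior hyperparameters of the normal-gamma prior on (mu, h)
    after observing normal data xs with unknown mean and precision."""
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

# Small-sample demonstration: vague prior updated with five observations.
print(normal_gamma_update(0.0, 1.0, 1.0, 1.0, [9.8, 10.1, 10.0, 9.9, 10.2]))
```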

  18. Concentrations of proanthocyanidins in common foods and estimations of normal consumption.

    Science.gov (United States)

    Gu, Liwei; Kelm, Mark A; Hammerstone, John F; Beecher, Gary; Holden, Joanne; Haytowitz, David; Gebhardt, Susan; Prior, Ronald L

    2004-03-01

    Proanthocyanidins (PAs) have been shown to have potential health benefits. However, no data exist concerning their dietary intake. Therefore, PAs in common and infant foods from the U.S. were analyzed. On the basis of our data and those from the USDA's Continuing Survey of Food Intakes by Individuals (CSFII) of 1994-1996, the mean daily intake of PAs in the U.S. population (>2 y old) was estimated to be 57.7 mg/person. Monomers, dimers, trimers, and those above trimers contribute 7.1, 11.2, 7.8, and 73.9% of total PAs, respectively. The major sources of PAs in the American diet are apples (32.0%), followed by chocolate (17.9%) and grapes (17.8%). The 2- to 5-y-old age group (68.2 mg/person) and men >60 y old (70.8 mg/person) consume more PAs daily than other groups because they consume more fruit. The daily intake of PAs for 4- to 6-mo-old and 6- to 10-mo-old infants was estimated to be 1.3 mg and 26.9 mg, respectively, based on the recommendations of the American Academy of Pediatrics. This study supports the concept that PAs account for a major fraction of the total flavonoids ingested in Western diets.

  19. Quaternion normalization in additive EKF for spacecraft attitude determination

    Science.gov (United States)

    Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.

    1991-01-01

    This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination based on vector measurements. Two new normalization schemes are introduced, compared with one another and with the known brute force normalization scheme, and examined for efficiency. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general and hold for attitude determination of any three-dimensional body whenever it is based on vector measurements, uses an additive EKF for estimation, and uses the quaternion to specify the attitude.
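
    The brute force scheme mentioned above is simply a rescaling to unit norm after the filter update; a one-line sketch:

```python
import numpy as np

def normalize_quaternion(q):
    """Brute force normalization: rescale the estimated quaternion to unit
    norm, since the additive EKF update does not preserve ||q|| = 1."""
    return q / np.linalg.norm(q)

q_est = np.array([0.70, 0.10, 0.10, 0.69])   # drifted off the unit sphere
print(normalize_quaternion(q_est))
```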

  20. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is based instead upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models and to linear models with nested error structures are considered.
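
    A two-stage sketch for a branching process with immigration is given below; the conditional variance form Var(X_t | X_{t-1}) = a X_{t-1} + b is the standard one for that model, while the fitting details are illustrative.

```python
import numpy as np

def wcls_branching_immigration(x):
    """Weighted conditional least squares for X_t = m X_{t-1} + lam + e_t.
    Stage 1: ordinary conditional least squares for (m, lam).
    Stage 2: reweight by inverse estimated conditional variances."""
    y, xprev = x[1:], x[:-1]
    Z = np.column_stack([xprev, np.ones_like(xprev)])
    theta0, *_ = np.linalg.lstsq(Z, y, rcond=None)              # stage 1: CLS
    v, *_ = np.linalg.lstsq(Z, (y - Z @ theta0) ** 2, rcond=None)  # variance fit
    w = 1.0 / np.clip(Z @ v, 1e-6, None)                        # inverse variances
    sw = np.sqrt(w)
    theta1, *_ = np.linalg.lstsq(sw[:, None] * Z, sw * y, rcond=None)  # stage 2
    return theta1
```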