WorldWideScience

Sample records for monotone normal means

  1. A Survey on Operator Monotonicity, Operator Convexity, and Operator Means

    Directory of Open Access Journals (Sweden)

    Pattrawut Chansangiam

    2015-01-01

    Full Text Available This paper is an expository survey devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. Various characterizations of such functions are given from the viewpoint of differential analysis, in terms of matrices of divided differences. From the viewpoint of operator inequalities, various characterizations and the relationship between operator monotonicity and operator convexity are given by Hansen and Pedersen. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory asserts the correspondence between operator monotone functions and operator means.
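
As a quick reminder (the standard definition, not specific to this survey), operator monotonicity strengthens ordinary monotonicity to the Löwner order on self-adjoint operators:

```latex
% f : I \to \mathbb{R} is operator monotone if, for all self-adjoint A, B
% with spectra in I,
A \preceq B \;\Longrightarrow\; f(A) \preceq f(B),
% where A \preceq B means that B - A is positive semidefinite.
% By the Loewner--Heinz theorem, f(t) = t^r is operator monotone on [0,\infty)
% for 0 \le r \le 1 (e.g. f(t) = \sqrt{t}), whereas f(t) = t^2 is monotone
% in the ordinary sense but not operator monotone.
```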

  2. Monotone numerical methods for finite-state mean-field games

    KAUST Repository

    Gomes, Diogo A.; Saude, Joao

    2017-01-01

    Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in a MFG modeling the paradigm-shift problem.

  4. On the Monotonicity and Log-Convexity of a Four-Parameter Homogeneous Mean

    Directory of Open Access Journals (Sweden)

    Yang Zhen-Hang

    2008-01-01

    Full Text Available Abstract A four-parameter homogeneous mean is defined by another approach. Criteria for its monotonicity and logarithmic convexity are presented, and three refined chains of inequalities for two-parameter mean values are deduced, which contain many new and classical inequalities for means.

  5. On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility

    OpenAIRE

    Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini

    2008-01-01

    We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.

  6. Intuitionistic Fuzzy Normalized Weighted Bonferroni Mean and Its Application in Multicriteria Decision Making

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2012-01-01

    Full Text Available The Bonferroni mean (BM was introduced by Bonferroni six decades ago but has recently become a hot research topic because of its usefulness in aggregation techniques. The desirable characteristic of the BM is its capability to capture the interrelationship between input arguments. However, the classical BM and GBM ignore the weight vector of the aggregated arguments, the general weighted BM (WBM lacks reducibility, and the revised generalized weighted BM (GWBM cannot reflect the interrelationship between an individual criterion and the other criteria. To deal with these issues, in this paper, we propose the normalized weighted Bonferroni mean (NWBM and the generalized normalized weighted Bonferroni mean (GNWBM and study their desirable properties, such as reducibility, idempotency, monotonicity, and boundedness. Furthermore, we investigate the NWBM and GNWBM operators under the intuitionistic fuzzy environment and develop two new intuitionistic fuzzy aggregation operators based on them, that is, the intuitionistic fuzzy normalized weighted Bonferroni mean (IFNWBM and the generalized intuitionistic fuzzy normalized weighted Bonferroni mean (GIFNWBM. Finally, based on the GIFNWBM, we propose an approach to multicriteria decision making under the intuitionistic fuzzy environment, and a practical example is provided to illustrate our results.
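
The classical (unweighted) Bonferroni mean underlying all of these variants has a direct formula, BM^{p,q}(a_1, ..., a_n) = ((1/(n(n-1))) Σ_{i≠j} a_i^p a_j^q)^{1/(p+q)}. A minimal sketch (function name is ours; the weighted and intuitionistic-fuzzy extensions of the paper are not reproduced):

```python
from itertools import permutations

def bonferroni_mean(values, p=1.0, q=1.0):
    """Classical Bonferroni mean BM^{p,q}: captures pairwise interrelationship
    between arguments via products a_i^p * a_j^q over all ordered pairs i != j."""
    n = len(values)
    s = sum(a ** p * b ** q for a, b in permutations(values, 2))
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

With p = 1, q = 0 the pairwise coupling disappears and the BM reduces to the arithmetic mean, illustrating the reducibility property discussed above.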

  7. Monotonism.

    Science.gov (United States)

    Franklin, Elda

    1981-01-01

    Reviews studies on the etiology of monotonism, the monotone being that type of uncertain or inaccurate singer who cannot vocally match pitches and who has trouble accurately reproducing even a familiar song. Neurological factors (amusia, right brain abnormalities), age, and sex differences are considered. (Author/SJL)

  8. Unordered Monotonicity.

    Science.gov (United States)

    Heckman, James J; Pinto, Rodrigo

    2018-01-01

    This paper defines and analyzes a new monotonicity condition for the identification of counterfactuals and treatment effects in unordered discrete choice models with multiple treatments, heterogeneous agents, and discrete-valued instruments. Unordered monotonicity implies and is implied by additive separability of choice-of-treatment equations in terms of observed and unobserved variables. These results follow from properties of binary matrices developed in this paper. We investigate conditions under which unordered monotonicity arises as a consequence of choice behavior. We characterize IV estimators of counterfactuals as solutions to discrete mixture problems.

  9. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
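
The univariate algorithm referenced here limits the Hermite tangents so that each cubic piece stays monotone; the bicubic extension adds analogous conditions on the partials and twists. A simplified pure-Python sketch of the univariate idea (ours, not the authors' code):

```python
import math

def monotone_tangents(x, y):
    """Fritsch-Carlson-style tangents for a monotone piecewise cubic
    Hermite interpolant (simplified sketch, not the original 1980 code)."""
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        # Zero tangent at local extrema, average of secant slopes otherwise.
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else (d[i - 1] + d[i]) / 2.0
    for i in range(n - 1):
        if d[i] == 0.0:
            m[i] = m[i + 1] = 0.0
            continue
        a, b = m[i] / d[i], m[i + 1] / d[i]
        r = math.hypot(a, b)
        if r > 3.0:  # outside a sufficient region for monotonicity: rescale
            m[i], m[i + 1] = 3.0 * a / r * d[i], 3.0 * b / r * d[i]
    return m

def hermite_eval(x, y, m, t):
    """Evaluate the piecewise cubic Hermite interpolant at t (x ascending)."""
    i = next((j for j in range(len(x) - 1) if t <= x[j + 1]), len(x) - 2)
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h
    return ((1 + 2 * s) * (1 - s) ** 2 * y[i] + s * (1 - s) ** 2 * h * m[i]
            + s * s * (3 - 2 * s) * y[i + 1] + s * s * (s - 1) * h * m[i + 1])
```

The clamp to the disk a² + b² ≤ 9 is one of the simplified sufficient conditions for monotonicity mentioned in the abstract.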

  10. Monotone Boolean functions

    International Nuclear Information System (INIS)

    Korshunov, A D

    2003-01-01

    Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years
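
For intuition, monotonicity of a Boolean function can be checked directly for small n: flipping any input bit from 0 to 1 must never flip the output from 1 to 0 (single-bit flips suffice, since any x ≤ y is reached by a chain of such flips). A small illustrative sketch:

```python
from itertools import product

def is_monotone(f, n):
    """Check that f: {0,1}^n -> {0,1} is monotone by testing every
    single-bit 0 -> 1 flip; transitivity covers all comparable pairs."""
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if fx > f(y):
                    return False
    return True
```

AND, OR, and majority pass this test; XOR does not.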

  11. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means clustering algorithm applies normalization to the available data prior to clustering, and calculates initial centroids based on weights. Experimental results demonstrate the improvement of the proposed N-K means clustering algorithm over existing...
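
The normalize-then-cluster idea can be sketched as follows (a minimal illustration; the paper's weight-based centroid initialization is not reproduced, so initial centroids are passed in explicitly):

```python
def min_max_normalize(data):
    """Scale each feature to [0, 1] before clustering, so no single
    feature dominates the Euclidean distance."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in data]

def k_means(data, centroids, iters=20):
    """Plain Lloyd's algorithm with given initial centroids."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in data:
            j = min(range(len(centroids)), key=lambda k: dist2(p, centroids[k]))
            clusters[j].append(p)
        centroids = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids
```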

  12. Strong Convergence of Monotone Hybrid Method for Maximal Monotone Operators and Hemirelatively Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Chakkrid Klin-eam

    2009-01-01

    Full Text Available We prove strong convergence theorems for finding a common element of the zero point set of a maximal monotone operator and the fixed point set of a hemirelatively nonexpansive mapping in a Banach space by using monotone hybrid iteration method. By using these results, we obtain new convergence results for resolvents of maximal monotone operators and hemirelatively nonexpansive mappings in a Banach space.

  13. Matching by Monotonic Tone Mapping.

    Science.gov (United States)

    Kovacs, Gyorgy

    2018-06-01

    In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative to conventional measures in problems where the possible tone mappings are close to monotonic.

  14. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data

  15. Generalized monotone operators in Banach spaces

    International Nuclear Information System (INIS)

    Nanda, S.

    1988-07-01

    The concept of F-monotonicity was first introduced by Kato and this generalizes the notion of monotonicity introduced by Minty. The purpose of this paper is to define various types of F-monotonicities and discuss the relationships among them. (author). 6 refs

  16. The regularized monotonicity method: detecting irregular indefinite inclusions

    DEFF Research Database (Denmark)

    Garde, Henrik; Staboulis, Stratos

    2018-01-01

    inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...

  17. Non-monotonic behavior of electron temperature in argon inductively coupled plasma and its analysis via novel electron mean energy equation

    Science.gov (United States)

    Zhao, Shu-Xia

    2018-03-01

    In this work, the behavior of the electron temperature as a function of power in an argon inductively coupled plasma is investigated by a fluid model. The model properly reproduces the non-monotonic variation of temperature with power observed in experiments. By means of a novel electron mean energy equation, proposed for the first time in this article, this electron temperature behavior is interpreted. Over the whole considered power range, the skin effect of the radio-frequency electric field results in a localized deposited power density, responsible for an increase of electron temperature with power through one parameter defined as the power density divided by the electron density. At low powers, the rate fraction of multistep and Penning ionizations of metastables, which consume electron energy twice, increases significantly with power; this dominates over the skin effect and consequently leads to a decrease of temperature with power. In the middle power regime, a transition region of the temperature is given by the competition between the ionizing effect of metastables and the skin effect of the electric field. The power at which the temperature alters its trend moves toward the low-power end as the pressure is increased, due to the lack of metastables. The non-monotonic curve of temperature is asymmetric for a short chamber, due to the weak role of the skin effect in increasing the temperature, and tends to become symmetric when the chamber is axially prolonged. Still, the validity of the fluid model in this prediction is estimated, and the role of neutral gas heating is conjectured. This finding is helpful for understanding the different trends of temperature with power reported in the literature.

  18. Optimal Monotone Drawings of Trees

    OpenAIRE

    He, Dayu; He, Xin

    2016-01-01

    A monotone drawing of a graph G is a straight-line drawing of G such that, for every pair of vertices u,w in G, there exists a path P_{uw} in G that is monotone in some direction l_{uw}. (Namely, the order of the orthogonal projections of the vertices of P_{uw} on l_{uw} is the same as the order in which they appear in P_{uw}.) The problem of finding monotone drawings for trees has been studied in several recent papers. The main focus is to reduce the size of the drawing. Currently, the smallest drawi...

  19. Multipartite classical and quantum secrecy monotones

    International Nuclear Information System (INIS)

    Cerf, N.J.; Massar, S.; Schneider, S.

    2002-01-01

    In order to study multipartite quantum cryptography, we introduce quantities which vanish on product probability distributions, and which can only decrease if the parties carry out local operations or public classical communication. These 'secrecy monotones' therefore measure how much secret correlation is shared by the parties. In the bipartite case we show that the mutual information is a secrecy monotone. In the multipartite case we describe two different generalizations of the mutual information, both of which are secrecy monotones. The existence of two distinct secrecy monotones allows us to show that in multipartite quantum cryptography the parties must make irreversible choices about which multipartite correlations they want to obtain. Secrecy monotones can be extended to the quantum domain and are then defined on density matrices. We illustrate this generalization by considering tripartite quantum cryptography based on the Greenberger-Horne-Zeilinger state. We show that before carrying out measurements on the state, the parties must make an irreversible decision about what probability distribution they want to obtain

  20. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.
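
The model in question represents X = μ + βV + √V·Z, with Z standard normal and V a positive mixing variable whose distribution is left nonparametric. A sampling sketch (names and the exponential mixing choice in the usage below are ours, for illustration only):

```python
import random

def sample_nvmm(mu, beta, sample_v, n, seed=0):
    """Draw n samples from the normal variance-mean mixture
    X = mu + beta*V + sqrt(V)*Z, Z ~ N(0,1), V ~ sample_v's law.
    The semiparametric model leaves the law of V unspecified."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = sample_v(rng)
        out.append(mu + beta * v + (v ** 0.5) * rng.gauss(0.0, 1.0))
    return out
```

With exponential mixing (E[V] = 1), the sample mean should be near μ + β, since E[X] = μ + βE[V].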

  1. Commutative $C^*$-algebras and $\\sigma$-normal morphisms

    OpenAIRE

    de Jeu, Marcel

    2003-01-01

    We prove in an elementary fashion that the image of a commutative monotone $\\sigma$-complete $C^*$-algebra under a $\\sigma$-normal morphism is again monotone $\\sigma$-complete and give an application of this result in spectral theory.

  2. The Monotonicity Puzzle: An Experimental Investigation of Incentive Structures

    Directory of Open Access Journals (Sweden)

    Jeannette Brosig

    2010-05-01

    Full Text Available Non-monotone incentive structures, which - according to theory - are able to induce optimal behavior, are often regarded as empirically less relevant for labor relationships. We compare the performance of a theoretically optimal non-monotone contract with a monotone one under controlled laboratory conditions. Implementing some features relevant to real-world employment relationships, our paper demonstrates that, in fact, the frequency of income-maximizing decisions made by agents is higher under the monotone contract. Although this observed behavior does not change the superiority of the non-monotone contract for principals, they do not choose this contract type to any significant degree. This is what we call the monotonicity puzzle. Detailed investigation of the decisions provides a clue for solving the puzzle and a possible explanation for the popularity of monotone contracts.

  3. Testing Manifest Monotonicity Using Order-Constrained Statistical Inference

    Science.gov (United States)

    Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…

  4. Strong monotonicity in mixed-state entanglement manipulation

    International Nuclear Information System (INIS)

    Ishizaka, Satoshi

    2006-01-01

    A strong entanglement monotone, which never increases under local operations and classical communications (LOCC), restricts quantum entanglement manipulation more strongly than the usual monotone since the usual one does not increase on average under LOCC. We propose strong monotones in mixed-state entanglement manipulation under LOCC. These are related to the decomposability and one-positivity of an operator constructed from a quantum state, and reveal geometrical characteristics of entangled states. These are lower bounded by the negativity or generalized robustness of entanglement

  5. A locally adaptive normal distribution

    DEFF Research Database (Denmark)

    Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren

    2016-01-01

    The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to replace this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density ... entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models...

  6. Generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators with applications in non-compact settings and minimization problems

    Directory of Open Access Journals (Sweden)

    Chowdhury Molhammad SR

    2000-01-01

    Full Text Available Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We shall use the concept of escaping sequences introduced by Border (Fixed Point Theorem with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we shall obtain existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.

  7. Specific non-monotonous interactions increase persistence of ecological networks.

    Science.gov (United States)

    Yan, Chuan; Zhang, Zhibin

    2014-03-22

    The relationship between stability and biodiversity has long been debated in ecology due to opposing empirical observations and theoretical predictions. Species interaction strength is often assumed to be monotonically related to population density, but the effects on stability of ecological networks of non-monotonous interactions that change signs have not been investigated previously. We demonstrate that for four kinds of non-monotonous interactions, shifting signs to negative or neutral interactions at high population density increases persistence (a measure of stability) of ecological networks, while for the other two kinds of non-monotonous interactions shifting signs to positive interactions at high population density decreases persistence of networks. Our results reveal a novel mechanism of network stabilization caused by specific non-monotonous interaction types, through increasing the number of stable equilibrium points or reducing the number of unstable equilibrium points (or both). These specific non-monotonous interactions may be important in maintaining stable and complex ecological networks, as well as other networks such as genes, neurons, the internet and human societies.

  8. Monotonous property of non-oscillations of the damped Duffing's equation

    International Nuclear Information System (INIS)

    Feng Zhaosheng

    2006-01-01

    In this paper, we give a qualitative study to the damped Duffing's equation by means of the qualitative theory of planar systems. Under certain parametric conditions, the monotonous property of the bounded non-oscillations is obtained. Explicit exact solutions are obtained by a direct method and application of this approach to a reaction-diffusion equation is presented

  9. On the size of monotone span programs

    NARCIS (Netherlands)

    Nikov, V.S.; Nikova, S.I.; Preneel, B.; Blundo, C.; Cimato, S.

    2005-01-01

    Span programs provide a linear algebraic model of computation. Monotone span programs (MSP) correspond to linear secret sharing schemes. This paper studies the properties of monotone span programs related to their size. Using the results of van Dijk (connecting codes and MSPs) and a construction for

  10. Structural analysis of reinforced concrete structures under monotonous and cyclic loadings: numerical aspects

    International Nuclear Information System (INIS)

    Lepretre, C.; Millard, A.; Nahas, G.

    1989-01-01

    The structural analysis of reinforced concrete structures is usually performed either by means of simplified methods of strength of materials type i.e. global methods, or by means of detailed methods of continuum mechanics type, i.e. local methods. For this second type, some constitutive models are available for concrete and rebars in a certain number of finite element systems. These models are often validated on simple homogeneous tests. Therefore, it is important to appraise the validity of the results when applying them to the analysis of a reinforced concrete structure, in order to be able to make correct predictions of the actual behaviour, under normal and faulty conditions. For this purpose, some tests have been performed at I.N.S.A. de Lyon on reinforced concrete beams, subjected to monotonous and cyclic loadings, in order to generate reference solutions to be compared with the numerical predictions given by two finite element systems: - CASTEM, developed by C.E.A./.D.E.M.T. - ELEFINI, developed by I.N.S.A. de Lyon

  11. Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods

    KAUST Repository

    Hundsdorfer, W.

    2011-04-29

    In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many linear multistep methods of practical interest are included in the theory. Moreover, it will be shown that for such methods monotonicity can still be valid with suitable Runge-Kutta starting procedures. Restrictions on the stepsizes are derived that are not only sufficient but also necessary for these boundedness and monotonicity properties. © 2011 Springer Science+Business Media, LLC.

  12. Type monotonic allocation schemes for multi-glove games

    OpenAIRE

    Brânzei, R.; Solymosi, T.; Tijs, S.H.

    2007-01-01

    Multiglove markets and corresponding games are considered. For this class of games we introduce the notion of type monotonic allocation scheme. Allocation rules for multiglove markets based on weight systems are introduced and characterized. These allocation rules generate type monotonic allocation schemes for multiglove games and are also helpful in proving that each core element of the corresponding game is extendable to a type monotonic allocation scheme. The T-value turns out to generate a ty...

  13. Estimation of a monotone percentile residual life function under random censorship.

    Science.gov (United States)

    Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo

    2013-01-01

    In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units, which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Moduli and Characteristics of Monotonicity in Some Banach Lattices

    Directory of Open Access Journals (Sweden)

    Miroslav Krbec

    2010-01-01

    Full Text Available First the characteristic of monotonicity of any Banach lattice X is expressed in terms of the left limit of the modulus of monotonicity of X at the point 1. It is also shown that for Köthe spaces the classical characteristic of monotonicity is the same as the characteristic of monotonicity corresponding to another modulus of monotonicity δ^{m,E}. The characteristics of monotonicity of Orlicz function spaces and Orlicz sequence spaces equipped with the Luxemburg norm are calculated. In the first case the characteristic is expressed in terms of the generating Orlicz function only, but in the sequence case the formula is not so direct. Three examples show why such a direct formula is hardly possible in the sequence case. Some other auxiliary and complementary results are also presented. By the results of Betiuk-Pilarska and Prus (2008), which establish that Banach lattices X with ε_{0,m}(X) < 1 and the weak orthogonality property have the weak fixed point property, our results are related to fixed point theory (Kirk and Sims, 2001).

  15. Edit Distance to Monotonicity in Sliding Windows

    DEFF Research Database (Denmark)

    Chan, Ho-Leung; Lam, Tak-Wah; Lee, Lap Kei

    2011-01-01

    Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a ...
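
In the offline setting the quantity itself is easy to compute: it equals n minus the length of a longest non-decreasing subsequence (the streaming space bounds studied in the paper are the hard part). A standard O(n log n) sketch:

```python
import bisect

def edit_distance_to_monotonicity(seq):
    """Minimum deletions so the remaining items are non-decreasing:
    len(seq) minus the length of a longest non-decreasing subsequence,
    found by patience sorting."""
    tails = []  # tails[k] = smallest possible tail of such a subsequence of length k+1
    for v in seq:
        i = bisect.bisect_right(tails, v)  # bisect_right permits equal values
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(seq) - len(tails)
```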

  16. A non-parametric test for partial monotonicity in multiple regression

    NARCIS (Netherlands)

    van Beek, M.; Daniëls, H.A.M.

    Partial positive (negative) monotonicity in a dataset is the property that an increase in an independent variable, ceteris paribus, generates an increase (decrease) in the dependent variable. A test for partial monotonicity in datasets could (1) increase model performance if monotonicity may be

  17. Testing manifest monotonicity using order-constrained statistical inference

    NARCIS (Netherlands)

    Tijmstra, J.; Hessen, D.J.; van der Heijden, P.G.M.; Sijtsma, K.

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest

  18. Monotonicity-based electrical impedance tomography for lung imaging

    Science.gov (United States)

    Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun

    2018-04-01

    This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to the lung ventilation can be viewed as either semi-positive or semi-negative definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.

  19. Proportionate-type normalized least mean square algorithms

    CERN Document Server

    Wagner, Kevin

    2013-01-01

    The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which attempt to estimate an unknown impulse response by adaptively giving gains proportionate to an estimate of the impulse response and the current measured error. These algorithms offer low computational complexity and fast convergence times for sparse impulse responses in network and acoustic echo cancellation applications. New PtNLMS algorithms are developed by choosing gains that optimize user-defined criteria, such as mean square error, at all times. PtNLMS algorithms ar
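
The flavor of such algorithms can be sketched with a PNLMS-style update, in which each tap's step size is proportional to the magnitude of its current estimate, accelerating convergence on sparse impulse responses (variable names and parameter defaults here are illustrative, not the book's notation):

```python
def ptnlms_identify(x, d, L, mu=0.5, rho=0.01, delta=0.01, eps=1e-6):
    """Identify an unknown length-L impulse response from input x and
    desired output d with a proportionate-type NLMS update (sketch)."""
    w = [0.0] * L
    for n in range(L - 1, len(x)):
        xn = [x[n - k] for k in range(L)]                   # regressor, newest first
        e = d[n] - sum(wk * xk for wk, xk in zip(w, xn))    # a-priori error
        wmax = max(abs(wk) for wk in w)
        # Proportionate gains: large taps get large steps; rho/delta floor
        # keeps inactive taps adapting.
        g = [max(rho * max(delta, wmax), abs(wk)) for wk in w]
        gsum = sum(g)
        g = [gk * L / gsum for gk in g]
        norm = sum(gk * xk * xk for gk, xk in zip(g, xn)) + eps
        w = [wk + mu * gk * xk * e / norm for wk, gk, xk in zip(w, g, xn)]
    return w
```

When all gains are equal this reduces to the ordinary NLMS update; the proportionate weighting is what targets sparse echoes.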

  20. Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods

    KAUST Repository

    Hundsdorfer, W.; Mozartova, A.; Spijker, M. N.

    2011-01-01

    In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many

  1. Logarithmically completely monotonic functions involving the Generalized Gamma Function

    OpenAIRE

    Faton Merovci; Valmir Krasniqi

    2010-01-01

    By a simple approach, two classes of functions involving a generalization of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.

  2. Data-driven intensity normalization of PET group comparison studies is superior to global mean normalization

    DEFF Research Database (Denmark)

    Borghammer, Per; Aanerud, Joel; Gjedde, Albert

    2009-01-01

    BACKGROUND: Global mean (GM) normalization is one of the most commonly used methods of normalization in PET and SPECT group comparison studies of neurodegenerative disorders. It requires that no between-group GM difference is present, which may be strongly violated in neurodegenerative disorders. Importantly, such GM differences often elude detection due to the large intrinsic variance in absolute values of cerebral blood flow or glucose consumption. Alternative methods of normalization are needed for this type of data. MATERIALS AND METHODS: Two types of simulation were performed using CBF images...

  3. Monotonic Loading of Circular Surface Footings on Clay

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Barari, Amin

    2011-01-01

    Appropriate modeling of offshore foundations under monotonic loading is a significant challenge in geotechnical engineering. This paper reports experimental and numerical analyses, specifically investigating the response of circular surface footings during monotonic loading and elastoplastic behavior during reloading. By using the findings presented in this paper, it is possible to extend the model to simulate the vertical-load displacement response of offshore bucket foundations.

  4. Proofs with monotone cuts

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2012-01-01

    Roč. 58, č. 3 (2012), s. 177-187 ISSN 0942-5616 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional support: RVO:67985840 Keywords : proof complexity * monotone sequent calculus Subject RIV: BA - General Mathematics Impact factor: 0.376, year: 2012 http://onlinelibrary.wiley.com/doi/10.1002/malq.201020071/full

  5. Logarithmically completely monotonic functions involving the Generalized Gamma Function

    Directory of Open Access Journals (Sweden)

    Faton Merovci

    2010-12-01

    Full Text Available By a simple approach, two classes of functions involving a generalization of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.

  6. Stability of dynamical systems: on the role of monotonic and non-monotonic Lyapunov functions

    CERN Document Server

    Michel, Anthony N; Liu, Derong

    2015-01-01

    The second edition of this textbook provides a single source for the analysis of system models represented by continuous-time and discrete-time, finite-dimensional and infinite-dimensional, and continuous and discontinuous dynamical systems. For these system models, it presents results which comprise the classical Lyapunov stability theory involving monotonic Lyapunov functions, as well as corresponding contemporary stability results involving non-monotonic Lyapunov functions. Specific examples from several diverse areas are given to demonstrate the applicability of the developed theory to many important classes of systems, including digital control systems, nonlinear regulator systems, pulse-width-modulated feedback control systems, and artificial neural networks. The authors cover the following four general topics: representation and modeling of dynamical systems of the types described above; presentation of Lyapunov and Lagrange stability theory for dynamical sy...

  7. Obliquely Propagating Non-Monotonic Double Layer in a Hot Magnetized Plasma

    International Nuclear Information System (INIS)

    Kim, T.H.; Kim, S.S.; Hwang, J.H.; Kim, H.Y.

    2005-01-01

    An obliquely propagating non-monotonic double layer is investigated in a hot magnetized plasma, which consists of a positively charged hot ion fluid and trapped, as well as free, electrons. A model equation (modified Korteweg-de Vries equation) is derived by the usual reductive perturbation method from a set of basic hydrodynamic equations. A time-stationary obliquely propagating non-monotonic double layer solution is obtained in a hot magnetized plasma. This solution is an analytic extension of the monotonic double layer and the solitary hole. The effects of obliqueness, external magnetic field and ion temperature on the properties of the non-monotonic double layer are discussed

  8. POLARIZED LINE FORMATION IN NON-MONOTONIC VELOCITY FIELDS

    Energy Technology Data Exchange (ETDEWEB)

    Sampoorna, M.; Nagendra, K. N., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, Bengaluru 560034 (India)

    2016-12-10

    For a correct interpretation of the observed spectro-polarimetric data from astrophysical objects such as the Sun, it is necessary to solve the polarized line transfer problems taking into account a realistic temperature structure, the dynamical state of the atmosphere, a realistic scattering mechanism (namely, the partial frequency redistribution—PRD), and the magnetic fields. In a recent paper, we studied the effects of monotonic vertical velocity fields on linearly polarized line profiles formed in isothermal atmospheres with and without magnetic fields. However, in general the velocity fields that prevail in dynamical atmospheres of astrophysical objects are non-monotonic. Stellar atmospheres with shocks, multi-component supernova atmospheres, and various kinds of wave motions in solar and stellar atmospheres are examples of non-monotonic velocity fields. Here we present studies on the effect of non-relativistic non-monotonic vertical velocity fields on the linearly polarized line profiles formed in semi-empirical atmospheres. We consider a two-level atom model and PRD scattering mechanism. We solve the polarized transfer equation in the comoving frame (CMF) of the fluid using a polarized accelerated lambda iteration method that has been appropriately modified for the problem at hand. We present numerical tests to validate the CMF method and also discuss the accuracy and numerical instabilities associated with it.

  9. Monotonicity of social welfare optima

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter Raahave

    2010-01-01

    This paper considers the problem of maximizing social welfare subject to participation constraints. It is shown that for an income allocation method that maximizes a social welfare function there is a monotonic relationship between the incomes allocated to individual agents in a given coalition...

  10. Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks

    Directory of Open Access Journals (Sweden)

    Heng-you Lan

    2013-01-01

    Full Text Available We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operator associated with the relatively A-maximal m-relaxed monotone operator and the new generalized Yosida approximations based on relatively A-maximal m-relaxed monotonicity framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions on (relatively maximal monotone mappings in Hilbert as well as Banach space and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.

  11. Non-monotonic wetting behavior of chitosan films induced by silver nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Praxedes, A.P.P.; Webler, G.D.; Souza, S.T. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Ribeiro, A.S. [Instituto de Química e Biotecnologia, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Fonseca, E.J.S. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Oliveira, I.N. de, E-mail: italo@fis.ufal.br [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil)

    2016-05-01

    Highlights: • The addition of silver nanoparticles modifies the morphology of chitosan films. • Metallic nanoparticles can be used to control wetting properties of chitosan films. • The contact angle shows a non-monotonic dependence on the silver concentration. - Abstract: The present work is devoted to the study of structural and wetting properties of chitosan-based films containing silver nanoparticles. In particular, the effects of silver concentration on the morphology of chitosan films are characterized by different techniques, such as atomic force microscopy (AFM), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). By means of dynamic contact angle measurements, we study the modification of the surface properties of chitosan-based films due to the addition of silver nanoparticles. The results are analyzed in the light of molecular-kinetic theory, which describes the wetting phenomena in terms of statistical dynamics for the displacement of liquid molecules on a solid substrate. Our results show that the wetting properties of chitosan-based films are highly sensitive to the fraction of silver nanoparticles, with the equilibrium contact angle exhibiting a non-monotonic behavior.

  12. Effect of meal glycemic load and caffeine consumption on prolonged monotonous driving performance.

    Science.gov (United States)

    Bragg, Christopher; Desbrow, Ben; Hall, Susan; Irwin, Christopher

    2017-11-01

    Monotonous driving involves low levels of stimulation and high levels of repetition and is essentially an exercise in sustained attention and vigilance. The aim of this study was to determine the effects of consuming a high or low glycemic load meal on prolonged monotonous driving performance. The effect of consuming caffeine with a high glycemic load meal was also examined. Ten healthy, non-diabetic participants (7 males, age 51 ± 7 yrs, mean ± SD) completed a repeated measures investigation involving 3 experimental trials. On separate occasions, participants were provided one of three treatments prior to undertaking a 90 min computer-based simulated drive. The 3 treatment conditions involved consuming: (1) a low glycemic load meal + placebo capsules (LGL), (2) a high glycemic load meal + placebo capsules (HGL) and (3) a high glycemic load meal + caffeine capsules (3 mg kg-1 body weight) (CAF). Measures of driving performance included lateral (standard deviation of lane position (SDLP), average lane position (AVLP), total number of lane crossings (LC)) and longitudinal (average speed (AVSP) and standard deviation of speed (SDSP)) vehicle control parameters. Blood glucose levels, plasma caffeine concentrations and subjective ratings of sleepiness, alertness, mood, hunger and simulator sickness were also collected throughout each trial. No difference in either lateral or longitudinal vehicle control parameters or subjective ratings was observed between HGL and LGL treatments. A significant reduction in SDLP (0.36 ± 0.20 m vs 0.41 ± 0.19 m, p = 0.004) and LC (34.4 ± 31.4 vs 56.7 ± 31.5, p = 0.018) was observed in the CAF trial compared to the HGL trial. However, no differences in AVLP, AVSP and SDSP or subjective ratings were detected between these two trials (p > 0.05). Altering the glycemic load of a breakfast meal had no effect on measures of monotonous driving performance in non-diabetic adults. Individuals planning to undertake a prolonged monotonous drive following consumption of a
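    SDLP, the study's primary lateral-control measure, is simply the sample standard deviation of the lane-position trace. The synthetic traces below are illustrative, with dispersions chosen only to mirror the reported 0.36 m (CAF) and 0.41 m (HGL) values:

```python
import numpy as np

def sdlp(lane_pos):
    """Standard deviation of lateral lane position (SDLP), in metres:
    larger values mean more weaving within the lane."""
    return np.std(lane_pos, ddof=1)

# synthetic lane-position traces (zero-mean lateral deviation, metres)
rng = np.random.default_rng(3)
caf = 0.36 * rng.standard_normal(2000)   # caffeine trial: tighter control
hgl = 0.41 * rng.standard_normal(2000)   # high-GL meal alone: more weaving
```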

  13. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by a chaotic expression is achieved by means of the Weierstrass function where, for a certain set of parameters, the density of the derived recurrence gives a good approximation of the bell curve

  14. Information flow in layered networks of non-monotonic units

    International Nuclear Information System (INIS)

    Neves, Fabio Schittler; Schubert, Benno Martim; Erichsen, Rubem Jr

    2015-01-01

    Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information. (paper)

  15. Information flow in layered networks of non-monotonic units

    Science.gov (United States)

    Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.

    2015-07-01

    Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.

  16. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
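    The monotonicity constraint at the heart of the paper can be illustrated with the pool-adjacent-violators algorithm (isotonic regression), a blunter device than the authors' monotone splines; the noisy ROC ordinates below are invented:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit.
    Whenever a value drops below its predecessor, the two blocks are
    pooled and replaced by their weighted mean."""
    vals, wts, idx = [], [], []
    for i, v in enumerate(y):
        vals.append(float(v)); wts.append(1.0); idx.append([i])
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2, i2 = vals.pop(), wts.pop(), idx.pop()
            v1, w1, i1 = vals.pop(), wts.pop(), idx.pop()
            vals.append((w1 * v1 + w2 * v2) / (w1 + w2))
            wts.append(w1 + w2); idx.append(i1 + i2)
    fit = np.empty(len(y))
    for v, block in zip(vals, idx):
        fit[block] = v
    return fit

# a noisy, non-monotone estimate of an ROC curve at fixed FPR points
fpr = np.linspace(0, 1, 11)
tpr_noisy = np.clip(fpr ** 0.3 + [0, .05, -.08, .02, 0, -.04, .03, 0, -.02, .01, 0], 0, 1)
tpr_mono = pava(tpr_noisy)               # monotone smoothed ordinates
```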

  17. Mean fields and self consistent normal ordering of lattice spin and gauge field theories

    International Nuclear Information System (INIS)

    Ruehl, W.

    1986-01-01

    Classical Heisenberg spin models on lattices possess mean field theories that are well defined real field theories on finite lattices. These mean field theories can be self-consistently normal ordered. This leads to a considerable improvement over standard mean field theory. This concept is carried over to lattice gauge theories. We first construct an appropriate real mean field theory. The equations determining the Gaussian kernel necessary for self-consistent normal ordering of this mean field theory are derived. (orig.)

  18. Iterates of piecewise monotone mappings on an interval

    CERN Document Server

    Preston, Chris

    1988-01-01

    Piecewise monotone mappings on an interval provide simple examples of discrete dynamical systems whose behaviour can be very complicated. These notes are concerned with the properties of the iterates of such mappings. The material presented can be understood by anyone who has had a basic course in (one-dimensional) real analysis. The account concentrates on the topological (as opposed to the measure theoretical) aspects of the theory of piecewise monotone mappings. As well as offering an elementary introduction to this theory, these notes also contain a more advanced treatment of the problem of classifying such mappings up to topological conjugacy.

  19. Risk-Sensitive Control with Near Monotone Cost

    International Nuclear Information System (INIS)

    Biswas, Anup; Borkar, V. S.; Suresh Kumar, K.

    2010-01-01

    The infinite horizon risk-sensitive control problem for non-degenerate controlled diffusions is analyzed under a 'near monotonicity' condition on the running cost that penalizes large excursions of the process.

  20. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  1. Failure mechanisms of closed-cell aluminum foam under monotonic and cyclic loading

    International Nuclear Information System (INIS)

    Amsterdam, E.; De Hosson, J.Th.M.; Onck, P.R.

    2006-01-01

    This paper concentrates on the differences in failure mechanisms of Alporas closed-cell aluminum foam under either monotonic or cyclic loading. The emphasis lies on aspects of crack nucleation and crack propagation in relation to the microstructure. The cell wall material consists of Al dendrites and an interdendritic network of Al4Ca and Al22CaTi2 precipitates. In situ scanning electron microscopy monotonic tensile tests were performed on small samples to study crack nucleation and propagation. Digital image correlation was employed to map the strain in the cell wall on the characteristic microstructural length scale. Monotonic tensile tests and tension-tension fatigue tests were performed on larger samples to observe the overall fracture behavior and crack path in monotonic and cyclic loading. The crack nucleation and propagation path in both loading conditions are revealed and it can be concluded that during monotonic tension cracks nucleate in and propagate partly through the Al4Ca interdendritic network, whereas under cyclic loading cracks nucleate and propagate through the Al dendrites

  2. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply.
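    The sandwiching idea can be sketched with a monotone random walk driven by common random numbers: chains started at the bottom and top of the state space never cross, so the ergodic average of any monotone function is bracketed between the two chains' averages. Everything below (the walk, the state-space size, the run length) is an illustrative assumption, not one of the paper's models:

```python
import numpy as np

def coupled_walks(n_steps, n_states=10, seed=0):
    """Lower and upper dominating chains for a reflected random walk
    on {0, ..., n_states-1}, updated with the same uniforms so the
    ordering lo <= hi is preserved at every step."""
    rng = np.random.default_rng(seed)
    lo, hi = 0, n_states - 1
    lo_path = np.empty(n_steps, dtype=int)
    hi_path = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        step = 1 if rng.uniform() < 0.5 else -1   # common randomness
        lo = min(max(lo + step, 0), n_states - 1)
        hi = min(max(hi + step, 0), n_states - 1)
        lo_path[t], hi_path[t] = lo, hi
    return lo_path, hi_path

lo_path, hi_path = coupled_walks(5000)
# for the monotone function f(x) = x, the ergodic average is bracketed
bracket = (lo_path.mean(), hi_path.mean())
```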

  3. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply.

  4. Completely monotonic functions related to logarithmic derivatives of entire functions

    DEFF Research Database (Denmark)

    Pedersen, Henrik Laurberg

    2011-01-01

    The logarithmic derivative l(x) of an entire function of genus p and having only non-positive zeros is represented in terms of a Stieltjes function. As a consequence, (-1)^p (x^m l(x))^(m+p) is a completely monotonic function for all m ≥ 0. This generalizes earlier results on complete monotonicity of functions related to Euler's psi-function. Applications to Barnes' multiple gamma functions are given.

  5. Monotonic childhoods: representations of otherness in research writing

    Directory of Open Access Journals (Sweden)

    Denise Marcos Bussoletti

    2011-12-01

    Full Text Available This paper is part of a doctoral thesis entitled “Monotonic childhoods – a rhapsody of hope”. It follows the perspective of a critical psychosocial and cultural study, and aims at discussing the other’s representation in research writing, electing childhood as an allegorical and reflective place. It takes into consideration, by means of analysis, the drawings and poems of children from the Terezin ghetto during the Second World War. The work is mostly based on Serge Moscovici’s Social Representation Theory, but it is also in constant dialogue with other theories and knowledge fields, especially Walter Benjamin’s and Mikhail Bakhtin’s contributions. At the end, the paper supports the thesis that conceives poetics as one of the translation axes of childhood cultures.

  6. A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data

    Science.gov (United States)

    Sang, Yan-Fang; Sun, Fubao; Singh, Vijay P.; Xie, Ping; Sun, Jian

    2018-01-01

    The hydroclimatic process is changing non-monotonically and identifying its trends is a great challenge. Building on the discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach using two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961-2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, and the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than 30 years or so (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann-Kendall test. Comparatively, the DWS approach overcame the problem and detected those significant non-monotonic trends at 380 stations, which helped understand and interpret the spatiotemporal variability in the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the DWS approach proposed has the potential for wide use in the hydrological and climate sciences.
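    The wavelet separation of a slowly varying trend from noise can be sketched with a plain Haar low-pass cascade. This is a stand-in for the DWS construction, not the authors' spectrum; the synthetic "warming then hiatus" series and the decomposition depth are assumptions:

```python
import numpy as np

def haar_trend(y, levels=3):
    """Crude Haar-wavelet trend: repeatedly average adjacent pairs
    (the low-pass branch of the Haar DWT), then hold the coarsest
    approximation constant over each dyadic block."""
    a = np.asarray(y, dtype=float)        # length must be divisible by 2**levels
    for _ in range(levels):
        a = 0.5 * (a[0::2] + a[1::2])
    return np.repeat(a, 2 ** levels)

# synthetic non-monotonic trend: warming followed by a hiatus, plus noise
rng = np.random.default_rng(2)
t = np.arange(64)
signal = np.where(t < 40, 0.05 * t, 2.0)
y = signal + 0.3 * rng.standard_normal(64)
trend = haar_trend(y)                      # slowly varying component
```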

  7. In some symmetric spaces monotonicity properties can be reduced to the cone of rearrangements

    Czech Academy of Sciences Publication Activity Database

    Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav

    2016-01-01

    Roč. 90, č. 1 (2016), s. 249-261 ISSN 0001-9054 Institutional support: RVO:67985840 Keywords : symmetric spaces * K-monotone symmetric Banach spaces * strict monotonicity * lower local uniform monotonicity Subject RIV: BA - General Mathematics Impact factor: 0.826, year: 2016 http://link.springer.com/article/10.1007%2Fs00010-015-0379-6

  8. Alternans by non-monotonic conduction velocity restitution, bistability and memory

    International Nuclear Information System (INIS)

    Kim, Tae Yun; Hong, Jin Hee; Heo, Ryoun; Lee, Kyoung J

    2013-01-01

    Conduction velocity (CV) restitution is a key property that characterizes any medium supporting traveling waves. It reflects not only the dynamics of the individual constituents but also the coupling mechanism that mediates their interaction. Recent studies have suggested that cardiac tissues, which have a non-monotonic CV-restitution property, can support alternans, a period-2 oscillatory response of periodically paced cardiac tissue. This study finds that single-hump, non-monotonic, CV-restitution curves are a common feature of in vitro cultures of rat cardiac cells. We also find that the Fenton–Karma model, one of the well-established mathematical models of cardiac tissue, supports a very similar non-monotonic CV restitution in a physiologically relevant parameter regime. Surprisingly, the mathematical model as well as the cell cultures support bistability and show cardiac memory that tends to work against the generation of an alternans. Bistability was realized by adopting two different stimulation protocols, ‘S1S2’, which produces a period-1 wave train, and ‘alternans-pacing’, which favors a concordant alternans. Thus, we conclude that the single-hump non-monotonicity in the CV-restitution curve is not sufficient to guarantee a cardiac alternans, since cardiac memory interferes and the way the system is paced matters. (paper)

  9. Log-supermodularity of weight functions and the loading monotonicity of weighted insurance premiums

    OpenAIRE

    Hristo S. Sendov; Ying Wang; Ricardas Zitikis

    2010-01-01

    The paper is motivated by a problem concerning the monotonicity of insurance premiums with respect to their loading parameter: the larger the parameter, the larger the insurance premium is expected to be. This property, usually called loading monotonicity, is satisfied by premiums that appear in the literature. The increased interest in constructing new insurance premiums has raised a question as to what weight functions would produce loading-monotonic premiums. In this paper we demonstrate a...

  10. Estimating monotonic rates from biological data using local linear regression.

    Science.gov (United States)

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
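    The core operation of local linear rate estimation, fitting ordinary least squares over every admissible contiguous window and reading off the slope, can be sketched directly. LoLinR additionally ranks candidate windows by fit diagnostics, which is omitted here, and the respirometry-style trace is invented:

```python
import numpy as np

def window_slopes(t, y, min_width=20):
    """OLS slope over every contiguous window of at least min_width
    points; returns (start, stop, slope) triples."""
    out = []
    for i in range(len(t) - min_width + 1):
        for j in range(i + min_width, len(t) + 1):
            slope, _ = np.polyfit(t[i:j], y[i:j], 1)
            out.append((i, j, slope))
    return out

# synthetic trace: a transient settling onto a linear decline of -0.8
t = np.linspace(0, 30, 60)
y = 100 + 5 * np.exp(-t / 2) - 0.8 * t
candidates = window_slopes(t, y)
late = [s for i, j, s in candidates if i >= 40]   # windows past the transient
```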

  11. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
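    Monotonicity (TVD) preservation is easiest to see in the simplest setting: forward-Euler time stepping of first-order upwind advection keeps the total variation of the solution from growing whenever the CFL number is at most 1. The grid, pulse, and CFL value below are illustrative:

```python
import numpy as np

def total_variation(u):
    """Total variation of a periodic grid function."""
    return np.abs(np.diff(u)).sum() + abs(u[0] - u[-1])

def upwind_step(u, nu):
    """One forward-Euler upwind step for u_t + u_x = 0 on a periodic
    grid; the scheme is monotone (hence TVD) for 0 <= nu <= 1."""
    return u - nu * (u - np.roll(u, 1))

u = np.zeros(50); u[10:20] = 1.0           # square pulse, TV = 2
tv0 = total_variation(u)
for _ in range(200):
    u = upwind_step(u, nu=0.8)             # TV never increases
```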

  12. Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution

    NARCIS (Netherlands)

    Belitser, E.; Ghosal, S.

    2003-01-01

    We consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective. Under the assumption that the unknown true mean satisfies a "smoothness condition," we first derive the convergence rate of the posterior distribution for a prior that
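    In the Gaussian sequence formulation of this estimation problem, conjugacy makes the posterior mean a coordinatewise shrinkage of the observations. A minimal sketch, with smoothness-indexed prior variances chosen as an assumption:

```python
import numpy as np

def posterior_mean(x, sigma2, tau2):
    """Posterior mean for X_i ~ N(theta_i, sigma2) with independent
    priors theta_i ~ N(0, tau2_i): shrink each coordinate by the
    factor tau2_i / (tau2_i + sigma2)."""
    return tau2 / (tau2 + sigma2) * x

# prior variances decaying like i^{-(2q+1)} encode smoothness level q
i = np.arange(1, 101)
q = 1.0
tau2 = i ** -(2 * q + 1)
theta_hat = posterior_mean(np.ones(100), sigma2=0.01, tau2=tau2)
```

Higher-frequency coordinates (larger i) carry smaller prior variance and are shrunk harder, which is what makes the posterior adapt to smoothness.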

  13. First order mean field games - explicit solutions, perturbations and connection with classical mechanics

    KAUST Repository

    Gomes, Diogo A.

    2016-01-06

    We present recent developments in the theory of first-order mean-field games (MFGs). A standard assumption in MFGs is that the cost function of the agents is monotone in the density of the distribution. This assumption leads to a comprehensive existence theory and to the uniqueness of smooth solutions. Here, our goals are to understand the role of local monotonicity in the small perturbation regime and the properties of solutions for problems without monotonicity. Under a local monotonicity assumption, we show that small perturbations of MFGs have unique smooth solutions. In addition, we explore the connection between first-order MFGs and classical mechanics and KAM theory. Next, for non-monotone problems, we construct non-unique explicit solutions for a broad class of first-order mean-field games. We provide an alternative formulation of MFGs in terms of a new current variable. These examples illustrate two new phenomena: the non-uniqueness of solutions and the breakdown of regularity.

  14. First order mean field games - explicit solutions, perturbations and connection with classical mechanics

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon; Prazeres, Mariana

    2016-01-01

    We present recent developments in the theory of first-order mean-field games (MFGs). A standard assumption in MFGs is that the cost function of the agents is monotone in the density of the distribution. This assumption leads to a comprehensive existence theory and to the uniqueness of smooth solutions. Here, our goals are to understand the role of local monotonicity in the small perturbation regime and the properties of solutions for problems without monotonicity. Under a local monotonicity assumption, we show that small perturbations of MFGs have unique smooth solutions. In addition, we explore the connection between first-order MFGs and classical mechanics and KAM theory. Next, for non-monotone problems, we construct non-unique explicit solutions for a broad class of first-order mean-field games. We provide an alternative formulation of MFGs in terms of a new current variable. These examples illustrate two new phenomena: the non-uniqueness of solutions and the breakdown of regularity.

  15. Monotonicity and bounds on Bessel functions

    Directory of Open Access Journals (Sweden)

    Larry Landau

    2000-07-01

Full Text Available I survey my recent results on monotonicity with respect to order of general Bessel functions, which follow from a new identity and lead to best possible uniform bounds. Application may be made to the "spreading of the wave packet" for a free quantum particle on a lattice and to estimates for perturbative expansions.

  16. A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data

    Directory of Open Access Journals (Sweden)

    Y.-F. Sang

    2018-01-01

Full Text Available The hydroclimatic process is changing non-monotonically, and identifying its trends is a great challenge. Building on discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach on two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961 to 2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, as well as the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than about 30 years (i.e., the widely used climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, which exhibit an obvious non-monotonic trend, was underestimated and not detected by the Mann–Kendall test. In comparison, the DWS approach overcame this problem and detected significant non-monotonic trends at 380 stations, which helps in understanding and interpreting the spatiotemporal variability of the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the proposed DWS approach has the potential for wide use in the hydrological and climate sciences.
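The general principle behind wavelet-based trend extraction can be sketched in a few lines of stdlib Python. The toy below (a plain Haar multi-level averaging on a made-up rise-then-fall signal, not the DWS method itself) keeps only the coarse approximation, which serves as a non-monotonic trend estimate:

```python
# A minimal, stdlib-only sketch of the idea behind wavelet-based trend
# extraction: a Haar-style multi-level decomposition keeps only the coarse
# approximation, which serves as a (possibly non-monotonic) trend estimate.
# This illustrates the general principle, not the DWS approach itself.
import math

def haar_trend(series, levels):
    """Return a coarse trend: average pairs `levels` times, then upsample back."""
    coarse = list(series)
    for _ in range(levels):
        if len(coarse) < 2:
            break
        coarse = [(coarse[i] + coarse[i + 1]) / 2
                  for i in range(0, len(coarse) - 1, 2)]
    # piecewise-constant upsampling back to the original length
    factor = len(series) // len(coarse)
    trend = []
    for v in coarse:
        trend.extend([v] * factor)
    trend.extend([coarse[-1]] * (len(series) - len(trend)))
    return trend

# synthetic rise-then-fall (non-monotonic) signal with small oscillations
n = 64
signal = [-((t - 32) ** 2) / 100.0 + 0.3 * math.sin(7 * t) for t in range(n)]
trend = haar_trend(signal, 4)
```

The extracted trend rises and then falls, i.e., it is non-monotonic, which is exactly the kind of behaviour a monotone-trend test such as Mann–Kendall can miss.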

  17. A System of Generalized Variational Inclusions Involving a New Monotone Mapping in Banach Spaces

    Directory of Open Access Journals (Sweden)

    Jinlin Guan

    2013-01-01

Full Text Available We introduce a new monotone mapping in Banach spaces, which is an extension of the -monotone mapping studied by Nazemi (2012), and we generalize the variational inclusion involving the -monotone mapping. Based on the new monotone mapping, we propose a new proximal mapping which combines the proximal mapping studied by Nazemi (2012) with the mapping studied by Lan et al. (2011), and we show its Lipschitz continuity. Based on the new proximal mapping, we give an iterative algorithm. Furthermore, we prove the convergence of the iterative sequences generated by the algorithm under some appropriate conditions. Our results improve and extend corresponding ones announced by many others.

  18. A simple algorithm for computing positively weighted straight skeletons of monotone polygons

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376

  19. A simple algorithm for computing positively weighted straight skeletons of monotone polygons.

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.
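The algorithms above operate on monotone polygonal chains. As a small piece of background (an illustration of the underlying notion, not of the skeleton algorithm), the following sketch checks whether a polygonal chain is x-monotone, i.e., whether traversing its vertices never moves leftward:

```python
# Minimal helper illustrating the notion of an x-monotone polygonal chain:
# a chain is x-monotone if its vertices' x-coordinates never decrease along
# the traversal. Illustrative background only, not the weighted
# straight-skeleton algorithm of the paper.

def is_x_monotone(chain):
    """chain: list of (x, y) vertices in traversal order."""
    return all(chain[i][0] <= chain[i + 1][0] for i in range(len(chain) - 1))

zigzag = [(0, 0), (1, 2), (2, -1), (3, 1)]   # x strictly increases
backtrack = [(0, 0), (2, 1), (1, 2)]         # moves left at the last step
```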

  20. Artefactual subcortical hyperperfusion in PET studies normalized to global mean: lessons from Parkinson's disease

    DEFF Research Database (Denmark)

    Borghammer, Per; Cumming, Paul; Aanerud, Joel

    2008-01-01

AIM: Recent studies of Parkinson's disease (PD) report subcortical increases of cerebral blood flow (CBF) or cerebral metabolic rate of glucose (CMRglc) after conventional normalization to the global mean. However, if the global mean CBF or CMRglc is decreased in the PD group, this normalization ... necessarily generates artificial relative increases in regions unaffected by the disease. This potential bias may explain the reported subcortical increases in PD. To test this hypothesis, we performed simulations with manipulation and subsequent analysis of sets of quantitative CBF maps by voxel ... not be detected with present instrumentation and typically used sample sizes. CONCLUSION: Imposing focal decreases on cortical CBF in conjunction with global mean normalization gives rise to spurious relative CBF increases in all of the regions reported to be hyperactive in PD. Since no PET study has reported ...
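The bias described above is easy to reproduce numerically. The sketch below is an illustrative toy with made-up region values (not the study's voxel-based simulation): imposing a focal decrease on "cortical" regions and normalizing to the global mean makes the untouched "subcortical" regions appear relatively increased:

```python
# Toy demonstration (hypothetical numbers) of the global-mean normalization
# artifact: decreasing some regions and renormalizing to the global mean makes
# the unchanged regions look hyperperfused in relative terms.

def normalize_to_global_mean(values):
    mean = sum(values) / len(values)
    return [v / mean for v in values]

# 8 "cortical" and 2 "subcortical" regions, all with true CBF 50 (arbitrary units)
healthy = [50.0] * 10
patient = [40.0] * 8 + [50.0] * 2   # focal 20% cortical decrease; subcortex unchanged

healthy_rel = normalize_to_global_mean(healthy)
patient_rel = normalize_to_global_mean(patient)

# the unchanged subcortical region now looks ~19% "increased"
subcortical_ratio = patient_rel[-1] / healthy_rel[-1]
```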

  1. Modelling Embedded Systems by Non-Monotonic Refinement

    NARCIS (Netherlands)

    Mader, Angelika H.; Marincic, J.; Wupper, H.

    2008-01-01

This paper addresses the process of modelling embedded systems for formal verification. We propose a modelling process built on non-monotonic refinement and a number of guidelines. The outcome of the modelling process is a model, together with a correctness argument that justifies our modelling

  2. An analysis of the stability and monotonicity of a kind of control models

    Directory of Open Access Journals (Sweden)

    LU Yifa

    2013-06-01

Full Text Available The stability and monotonicity of control systems with parameters are considered. Using the iterative relationship of the coefficients of the characteristic polynomials and the Mathematica software, some sufficient conditions for the monotonicity and stability of such systems are given.

  3. CFD simulation of simultaneous monotonic cooling and surface heat transfer coefficient

    International Nuclear Information System (INIS)

    Mihálka, Peter; Matiašovský, Peter

    2016-01-01

The monotonic heating regime method for determining thermal diffusivity is based on the analysis of an unsteady-state (stabilised) thermal process characterised by the independence of the space-time temperature distribution from the initial conditions. In the first kind of monotonic regime, a sample of simple geometry is heated or cooled at constant ambient temperature. Determining the thermal diffusivity requires determining the rate of temperature change and, simultaneously, the first eigenvalue. According to the characteristic equation, the first eigenvalue is a function of the Biot number, defined by the surface heat transfer coefficient and the thermal conductivity of the analysed material. Knowing the surface heat transfer coefficient and the first eigenvalue, the thermal conductivity can be determined. The surface heat transfer coefficient during the monotonic regime can be determined by continuous measurement of the long-wave radiation heat flow and photoelectric measurement of the air refractive-index gradient in a boundary layer. A CFD simulation of the cooling process was carried out to analyse the local convective and radiative heat transfer coefficients in more detail, including the influence of ambient air flow. The obtained eigenvalues and the corresponding surface heat transfer coefficient values make it possible to determine the thermal conductivity of the analysed specimen together with its thermal diffusivity during a monotonic heating regime.

  4. Non-monotonicity and divergent time scale in Axelrod model dynamics

    Science.gov (United States)

    Vazquez, F.; Redner, S.

    2007-04-01

We study the evolution of the Axelrod model for cultural diversity, a prototypical non-equilibrium process that exhibits rich dynamics and a dynamic phase transition between diversity and an inactive state. We consider a simple version of the model in which each individual possesses two features that can assume q possibilities. Within a mean-field description in which each individual has just a few interaction partners, we find a phase transition at a critical value qc between an active, diverse state for q < qc and a frozen state. For q ≲ qc, the density of active links is non-monotonic in time and the asymptotic approach to the steady state is controlled by a time scale that diverges as (q − qc)^(−1/2).

  5. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
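The recovered-variance idea can be sketched compactly. The example below is a hypothetical illustration of a MOVER-style construction for the upper limit of agreement (mean + 1.96·SD), using large-sample normal-approximation intervals for the mean and SD separately; the exact component intervals used in the paper may differ:

```python
# Hedged sketch of a MOVER-type closed-form CI for the upper limit of
# agreement theta = mean + 1.96*SD. The separate large-sample component
# intervals below (normal approximations for both mean and SD) are
# illustrative assumptions, not necessarily the paper's exact forms.
import math
from statistics import NormalDist, mean, stdev

def mover_upper_loa_ci(x, conf=0.95):
    n = len(x)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    m, s = mean(x), stdev(x)
    # component intervals recovered separately for the mean and the SD
    l1, u1 = m - z * s / math.sqrt(n), m + z * s / math.sqrt(n)
    l2, u2 = s - z * s / math.sqrt(2 * (n - 1)), s + z * s / math.sqrt(2 * (n - 1))
    theta = m + 1.96 * s
    # combine the recovered variance estimates in closed form
    lower = theta - math.sqrt((m - l1) ** 2 + (1.96 * (s - l2)) ** 2)
    upper = theta + math.sqrt((u1 - m) ** 2 + (1.96 * (u2 - s)) ** 2)
    return lower, theta, upper

data = [float(v) for v in range(1, 31)]  # toy sample 1..30
lo, point, hi = mover_upper_loa_ci(data)
```

Everything is closed-form: no resampling or numerical optimization is needed, which is the practical appeal of the approach.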

  6. An Examination of Cooper's Test for Monotonic Trend

    Science.gov (United States)

    Hsu, Louis

    1977-01-01

    A statistic for testing monotonic trend that has been presented in the literature is shown not to be the binomial random variable it is contended to be, but rather it is linearly related to Kendall's tau statistic. (JKS)

  7. A novel mean-centering method for normalizing microRNA expression from high-throughput RT-qPCR data

    Directory of Open Access Journals (Sweden)

    Wylie Dennis

    2011-12-01

    Full Text Available Abstract Background Normalization is critical for accurate gene expression analysis. A significant challenge in the quantitation of gene expression from biofluids samples is the inability to quantify RNA concentration prior to analysis, underscoring the need for robust normalization tools for this sample type. In this investigation, we evaluated various methods of normalization to determine the optimal approach for quantifying microRNA (miRNA expression from biofluids and tissue samples when using the TaqMan® Megaplex™ high-throughput RT-qPCR platform with low RNA inputs. Findings We compared seven normalization methods in the analysis of variation of miRNA expression from biofluid and tissue samples. We developed a novel variant of the common mean-centering normalization strategy, herein referred to as mean-centering restricted (MCR normalization, which is adapted to the TaqMan Megaplex RT-qPCR platform, but is likely applicable to other high-throughput RT-qPCR-based platforms. Our results indicate that MCR normalization performs comparable to or better than both standard mean-centering and other normalization methods. We also propose an extension of this method to be used when migrating biomarker signatures from Megaplex to singleplex RT-qPCR platforms, based on the identification of a small number of normalizer miRNAs that closely track the mean of expressed miRNAs. Conclusions We developed the MCR method for normalizing miRNA expression from biofluids samples when using the TaqMan Megaplex RT-qPCR platform. Our results suggest that normalization based on the mean of all fully observed (fully detected miRNAs minimizes technical variance in normalized expression values, and that a small number of normalizer miRNAs can be selected when migrating from Megaplex to singleplex assays. In our study, we find that normalization methods that focus on a restricted set of miRNAs tend to perform better than methods that focus on all miRNAs, including
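The core of mean-centering normalization is simple: subtract each sample's mean Cq (computed over a chosen miRNA set) from that sample's raw Cq values. The sketch below is a generic illustration with hypothetical data; restricting the per-sample mean to miRNAs detected in every sample approximates the "restricted" flavour, though the published MCR procedure has additional details:

```python
# Generic mean-centering normalization of RT-qPCR Cq values (illustrative
# sketch with made-up data). Restricting the per-sample mean to miRNAs that
# are fully detected across samples approximates the "mean-centering
# restricted" (MCR) idea; the paper's exact procedure is more involved.

def mean_center(samples, restrict_to=None):
    """samples: dict sample -> dict miRNA -> Cq (None = undetected)."""
    normalized = {}
    for name, cqs in samples.items():
        used = [v for k, v in cqs.items()
                if v is not None and (restrict_to is None or k in restrict_to)]
        center = sum(used) / len(used)
        normalized[name] = {k: (v - center if v is not None else None)
                            for k, v in cqs.items()}
    return normalized

samples = {
    "s1": {"miR-a": 20.0, "miR-b": 25.0, "miR-c": 30.0},
    "s2": {"miR-a": 22.0, "miR-b": 27.0, "miR-c": None},  # miR-c undetected
}
# restrict the centering mean to miRNAs detected in all samples
fully_detected = {m for m in samples["s1"]
                  if all(samples[s][m] is not None for s in samples)}
norm = mean_center(samples, restrict_to=fully_detected)
```

After centering, a constant shift between samples (e.g., from differing RNA input) cancels out, which is why the two samples above agree exactly on the shared miRNAs.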

  8. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos; Ketcheson, David I.

    2014-01-01

    -Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend

  9. A note on monotone real circuits

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel; Pudlák, Pavel

    2018-01-01

Roč. 131, March (2018), s. 15-19 ISSN 0020-0190 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords: computational complexity * monotone real circuit * Karchmer-Wigderson game Subject RIV: BA - General Mathematics OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.748, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020019017301965?via%3Dihub

  10. A note on monotone real circuits

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel; Pudlák, Pavel

    2018-01-01

Roč. 131, March (2018), s. 15-19 ISSN 0020-0190 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords: computational complexity * monotone real circuit * Karchmer-Wigderson game Subject RIV: BA - General Mathematics OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.748, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020019017301965?via%3Dihub

  11. Interval Routing and Minor-Monotone Graph Parameters

    NARCIS (Netherlands)

    Bakker, E.M.; Bodlaender, H.L.; Tan, R.B.; Leeuwen, J. van

    2006-01-01

We survey a number of minor-monotone graph parameters and their relationship to the complexity of routing on graphs. In particular we compare the interval routing parameters κslir(G) and κsir(G) with Colin de Verdière's graph invariant μ(G) and its variants λ(G) and κ(G). We show that for all the

  12. A Hybrid Approach to Proving Memory Reference Monotonicity

    KAUST Repository

    Oancea, Cosmin E.

    2013-01-01

Array references indexed by non-linear expressions or subscript arrays represent a major obstacle to compiler analysis and to automatic parallelization. Most previously proposed solutions either enhance the static analysis repertoire to recognize more patterns, to infer array-value properties, and to refine the mathematical support, or apply expensive run-time analysis of memory reference traces to disambiguate these accesses. This paper presents an automated solution based on static construction of access summaries, in which the reference non-linearity problem can be solved for a large number of reference patterns by extracting arbitrarily-shaped predicates that can (in)validate the reference monotonicity property and thus (dis)prove loop independence. Experiments on six benchmarks show that our general technique for dynamic validation of the monotonicity property can cover a large class of codes, incurs minimal run-time overhead and obtains good speedups. © 2013 Springer-Verlag.

  13. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; and finally, based on the residual correction concept, the complex constrained solution problem is transformed into a simpler iterative equation-solving problem. As verified by the four examples given in this paper, the proposed method quickly obtains the upper and lower solutions of problems of this kind and easily identifies the error range between the mean approximate solutions and the exact solutions.
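The bracketing idea behind upper and lower solutions can be illustrated on a scalar fixed-point problem. In this hypothetical toy (not the paper's boundary-value algorithm), iterating a monotone increasing map from a sub-solution and a super-solution produces monotone sequences that squeeze the true solution from below and above:

```python
# Toy analogue of the upper/lower-solution technique: for the monotone
# increasing map g(x) = sqrt(x + 2) with fixed point x* = 2, iterating from a
# sub-solution (0) and a super-solution (4) yields monotone increasing and
# decreasing sequences that bracket x*. Illustration only, not the paper's
# singularly perturbed BVP method.
import math

def monotone_iterate(g, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(g(xs[-1]))
    return xs

g = lambda x: math.sqrt(x + 2)
lower = monotone_iterate(g, 0.0, 20)   # monotone increasing toward 2
upper = monotone_iterate(g, 4.0, 20)   # monotone decreasing toward 2
```

At every step the pair (lower[k], upper[k]) gives a rigorous error range containing the exact solution, which mirrors how the paper reads off the error range between approximate and exact solutions.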

  14. Monotonous consumption of fibre-enriched bread at breakfast increases satiety and influences subsequent food intake.

    Science.gov (United States)

    Touyarou, Peio; Sulmont-Rossé, Claire; Gagnaire, Aude; Issanchou, Sylvie; Brondel, Laurent

    2012-04-01

This study aimed to observe the influence of the monotonous consumption of two types of fibre-enriched bread at breakfast on hedonic liking for the bread, subsequent hunger, and energy intake. Two groups of unrestrained normal-weight participants were given either white sandwich bread (WS) or multigrain sandwich bread (MG) at breakfast (the sensory properties of the WS were closer to the bread usually eaten by the participants than those of the MG). In each group, two 15-day cross-over conditions were set up. During the experimental condition, the usual breakfast of each participant was replaced by an isocaloric portion of plain bread (WS or MG). During the control condition, participants consumed only 10 g of the corresponding bread and completed their breakfast with whatever other foods they wanted. The results showed that appreciation of the bread did not change over exposure, even in the experimental condition. Hunger was lower in the experimental condition than in the control condition. Compared with the corresponding control condition, consumption of WS in the experimental condition decreased energy intake, whereas consumption of MG did not. In conclusion, a monotonous breakfast composed solely of a fibre-enriched bread may decrease subsequent hunger and, when similar to a familiar bread, food intake. Copyright © 2011. Published by Elsevier Ltd.

  15. Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods

    KAUST Repository

    Mozartova, A.; Savostianov, I.; Hundsdorfer, W.

    2015-01-01

    © 2014 Elsevier B.V. All rights reserved. One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.

  16. Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods

    KAUST Repository

    Mozartova, A.

    2015-05-01

    © 2014 Elsevier B.V. All rights reserved. One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.
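The relationship between a linear multistep method and its one-leg twin compared above can be made concrete with the two-step Adams-Bashforth pair. In the hedged sketch below (a toy on the linear test problem y' = -y, not the paper's analysis), the linear multistep method combines two function evaluations, while the one-leg twin evaluates f once at a combination of past values; for a linear autonomous problem the two coincide up to rounding:

```python
# Sketch contrasting a linear multistep method with its one-leg twin, using
# the two-step Adams-Bashforth (AB2) pair on y' = f(y) = -y. For this linear
# autonomous problem the two recurrences are mathematically identical; the
# paper's point is that their boundedness/monotonicity step-size coefficients
# can differ once Runge-Kutta starting procedures are taken into account.
import math

def f(y):
    return -y

def ab2_linear_multistep(y0, y1, h, steps):
    ys = [y0, y1]
    for _ in range(steps):
        ys.append(ys[-1] + h * (1.5 * f(ys[-1]) - 0.5 * f(ys[-2])))
    return ys[-1]

def ab2_one_leg(y0, y1, h, steps):
    # one-leg twin: a single f-evaluation at a combination of past values
    ys = [y0, y1]
    for _ in range(steps):
        ys.append(ys[-1] + h * f(1.5 * ys[-1] - 0.5 * ys[-2]))
    return ys[-1]

h, n = 0.01, 99            # integrate to t = 1.0; y1 supplies the first step
y1 = math.exp(-h)          # exact value used as the starting procedure
lmm = ab2_linear_multistep(1.0, y1, h, n)
oleg = ab2_one_leg(1.0, y1, h, n)
```

The storage advantage mentioned in the abstract shows up in the one-leg form: only past solution values need to be kept, not past f-evaluations.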

  17. A note on monotonicity of item response functions for ordered polytomous item response theory models.

    Science.gov (United States)

    Kang, Hyeon-Ah; Su, Ya-Hui; Chang, Hua-Hua

    2018-03-08

    A monotone relationship between a true score (τ) and a latent trait level (θ) has been a key assumption for many psychometric applications. The monotonicity property in dichotomous response models is evident as a result of a transformation via a test characteristic curve. Monotonicity in polytomous models, in contrast, is not immediately obvious because item response functions are determined by a set of response category curves, which are conceivably non-monotonic in θ. The purpose of the present note is to demonstrate strict monotonicity in ordered polytomous item response models. Five models that are widely used in operational assessments are considered for proof: the generalized partial credit model (Muraki, 1992, Applied Psychological Measurement, 16, 159), the nominal model (Bock, 1972, Psychometrika, 37, 29), the partial credit model (Masters, 1982, Psychometrika, 47, 147), the rating scale model (Andrich, 1978, Psychometrika, 43, 561), and the graded response model (Samejima, 1972, A general model for free-response data (Psychometric Monograph no. 18). Psychometric Society, Richmond). The study asserts that the item response functions in these models strictly increase in θ and thus there exists strict monotonicity between τ and θ under certain specified conditions. This conclusion validates the practice of customarily using τ in place of θ in applied settings and provides theoretical grounds for one-to-one transformations between the two scales. © 2018 The British Psychological Society.
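The strict monotonicity claim is easy to probe numerically. The sketch below implements the generalized partial credit model's category probabilities and its expected-score item response function, then checks that the expected score increases in θ on a grid; the item parameters are hypothetical and the check is an empirical illustration, not a proof:

```python
# Generalized partial credit model (GPCM): expected item score as a function
# of theta, checked for monotone increase on a grid. The discrimination and
# step parameters below are hypothetical.
import math

def gpcm_probs(theta, a, b):
    """Category probabilities; b lists the step difficulties b_1..b_m."""
    logits = [0.0]
    for bv in b:
        logits.append(logits[-1] + a * (theta - bv))
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_score(theta, a, b):
    probs = gpcm_probs(theta, a, b)
    return sum(k * p for k, p in enumerate(probs))

a, b = 1.2, [-1.0, 0.3, 1.5]                 # hypothetical item parameters
grid = [-4 + 0.1 * i for i in range(81)]     # theta from -4 to 4
scores = [expected_score(t, a, b) for t in grid]
```

The theoretical reason the check passes is that the derivative of the expected score equals a times the variance of the category score, which is strictly positive for a > 0.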

  18. Non-monotone positive solutions of second-order linear differential equations: existence, nonexistence and criteria

    Directory of Open Access Journals (Sweden)

    Mervan Pašić

    2016-10-01

Full Text Available We study non-monotone positive solutions of the second-order linear differential equations $(p(t)x')' + q(t)x = e(t)$, with positive $p(t)$ and $q(t)$. For the first time, some criteria as well as the existence and nonexistence of non-monotone positive solutions are proved in the framework of some properties of the solutions $\theta(t)$ of the corresponding integrable linear equation $(p(t)\theta')' = e(t)$. The main results are illustrated by many examples dealing with equations which allow exact non-monotone positive solutions, not necessarily periodic. Finally, we pose some open questions.

  19. New concurrent iterative methods with monotonic convergence

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Qingchuan [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

This paper proposes new concurrent iterative methods, which use no derivatives, for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. The corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application in solving symmetric eigenproblems.
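For flavour, the classical Weierstrass (Durand-Kerner) iteration, which likewise finds all zeros simultaneously and uses no derivatives, can be sketched as follows; this is the standard textbook method of this class, not the paper's new schemes:

```python
# Classical Weierstrass (Durand-Kerner) simultaneous root finding: a
# derivative-free concurrent iteration, shown here for z^3 - 1. This is the
# standard textbook method, included only to illustrate the class of
# algorithms the abstract refers to; it is not the paper's new scheme.

def poly(z):
    return z ** 3 - 1

def durand_kerner(p, degree, iters=100):
    # standard distinct starting points on a spiral
    roots = [(0.4 + 0.9j) ** k for k in range(degree)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1.0 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= (r - s)
            new.append(r - p(r) / denom)
        roots = new
    return roots

roots = durand_kerner(poly, 3)
residuals = [abs(poly(r)) for r in roots]
```

Each update needs only polynomial evaluations and the current approximations of the other roots, which is what makes such iterations naturally concurrent.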

  20. The Bird Core for Minimum Cost Spanning Tree problems Revisited: Monotonicity and Additivity Aspects

    NARCIS (Netherlands)

    Tijs, S.H.; Moretti, S.; Brânzei, R.; Norde, H.W.

    2005-01-01

A new way is presented to define, for minimum cost spanning tree (mcst) games, the irreducible core, which was introduced by Bird in 1976. The Bird core correspondence turns out to have interesting monotonicity and additivity properties, and each stable cost monotonic allocation rule for mcst problems

  1. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos

    2014-05-19

    We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.

  2. One-Dimensional Stationary Mean-Field Games with Local Coupling

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon; Prazeres, Mariana

    2017-01-01

    A standard assumption in mean-field game (MFG) theory is that the coupling between the Hamilton–Jacobi equation and the transport equation is monotonically non-decreasing in the density of the population. In many cases, this assumption implies the existence and uniqueness of solutions. Here, we drop that assumption and construct explicit solutions for one-dimensional MFGs. These solutions exhibit phenomena not present in monotonically increasing MFGs: low-regularity, non-uniqueness, and the formation of regions with no agents.

  3. One-Dimensional Stationary Mean-Field Games with Local Coupling

    KAUST Repository

    Gomes, Diogo A.

    2017-05-25

    A standard assumption in mean-field game (MFG) theory is that the coupling between the Hamilton–Jacobi equation and the transport equation is monotonically non-decreasing in the density of the population. In many cases, this assumption implies the existence and uniqueness of solutions. Here, we drop that assumption and construct explicit solutions for one-dimensional MFGs. These solutions exhibit phenomena not present in monotonically increasing MFGs: low-regularity, non-uniqueness, and the formation of regions with no agents.

  4. Thermal effects on the enhanced ductility in non-monotonic uniaxial tension of DP780 steel sheet

    Science.gov (United States)

    Majidi, Omid; Barlat, Frederic; Korkolis, Yannis P.; Fu, Jiawei; Lee, Myoung-Gyu

    2016-11-01

To understand the material behavior during non-monotonic loading, uniaxial tension tests were conducted in three modes, namely, monotonic loading, loading with periodic relaxation, and periodic loading-unloading-reloading, at different strain rates (0.001/s to 0.01/s). In this study, the temperature gradient developing during each test and its contribution to increasing the apparent ductility of DP780 steel sheets were considered. In order to assess the influence of temperature, isothermal uniaxial tension tests were also performed at three temperatures (298 K, 313 K and 328 K (25 °C, 40 °C and 55 °C)). A digital image correlation system coupled with infrared thermography was used in the experiments. The results show that the non-monotonic loading modes increased the apparent ductility of the specimens. It was observed that, compared with monotonic loading, the temperature gradient became more uniform when a non-monotonic loading was applied.

  5. Bayesian nonparametric estimation of continuous monotone functions with applications to dose-response analysis.

    Science.gov (United States)

    Bornkamp, Björn; Ickstadt, Katja

    2009-03-01

In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited both from a computational and mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
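The basic construction, a monotone function built as a positive mixture of shifted and scaled distribution functions, is straightforward to write down. The sketch below substitutes Gaussian CDFs for the two-sided power distribution purely for a compact stdlib illustration (an assumption for this sketch, not the paper's choice) and checks monotonicity numerically:

```python
# Monotone curve built as a positive mixture of shifted/scaled CDFs. The
# paper favours the two-sided power distribution; Gaussian CDFs are used here
# only for a compact stdlib illustration of the same construction.
from statistics import NormalDist

def monotone_mixture(x, weights, locs, scales):
    return sum(w * NormalDist(mu, s).cdf(x)
               for w, mu, s in zip(weights, locs, scales))

# hypothetical mixture parameters (positive weights ensure monotonicity)
weights, locs, scales = [0.5, 0.3, 0.2], [0.2, 0.5, 0.8], [0.05, 0.1, 0.03]
xs = [i / 100 for i in range(101)]
ys = [monotone_mixture(x, weights, locs, scales) for x in xs]
```

Because each component CDF is nondecreasing and the weights are positive, the mixture is monotone by construction; priors on the weights and on the shift/scale parameters then induce a prior over monotone curves.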

  6. Some Normal Intuitionistic Fuzzy Heronian Mean Operators Using Hamacher Operation and Their Application

    Directory of Open Access Journals (Sweden)

    Guofang Zhang

    2018-06-01

Full Text Available Hamacher operation is a generalization of the algebraic and Einstein operations and expresses a family of binary operations on the unit interval [0,1]. The Heronian mean can deal with correlations of different criteria or input arguments and does not bring about repeated calculation. Normal intuitionistic fuzzy numbers (NIFNs) can depict normal-distribution information in practical decision making. A decision-making problem was researched under the NIFN environment in this study, and a new multi-criteria group decision-making (MCGDM) approach is herein introduced on the basis of Hamacher operation. Firstly, according to Hamacher operation, some operational laws of NIFNs are presented. Secondly, it is noted that the Heronian mean not only takes into account the mutuality between the attribute values once, but also considers the correlation between an input argument and itself. Therefore, in order to aggregate NIFN information, we developed some operators and studied their properties. These operators include the Hamacher Heronian mean (NIFHHM), Hamacher weighted Heronian mean (NIFHWHM), Hamacher geometric Heronian mean (NIFHGHM), and Hamacher weighted geometric Heronian mean (NIFHWGHM). Furthermore, we applied the proposed operators to the MCGDM problem and developed a new MCGDM approach. The characteristics of this new approach are that: (1) it is suitable for making a decision under the NIFN environment and is more reasonable for aggregating normal-distribution data; (2) it utilizes Hamacher operation to provide an effective and powerful MCGDM algorithm and to make more reliable and more flexible decisions under the NIFN circumstance; and (3) it uses the Heronian mean operator to deal with interrelations between the attributes or input arguments, and it does not bring about repeated calculation. Therefore, the proposed method can describe the interaction of the different criteria or input arguments and offer some reasonable and reliable MCGDM aggregation operators

  7. Reduction theorems for weighted integral inequalities on the cone of monotone functions

    International Nuclear Information System (INIS)

    Gogatishvili, A; Stepanov, V D

    2013-01-01

    This paper surveys results related to the reduction of integral inequalities involving positive operators in weighted Lebesgue spaces on the real semi-axis and valid on the cone of monotone functions, to certain more easily manageable inequalities valid on the cone of non-negative functions. The case of monotone operators is new. As an application, a complete characterization for all possible integrability parameters is obtained for a number of Volterra operators. Bibliography: 118 titles

  8. Partial coherence with application to the monotonicity problem of coherence involving skew information

    Science.gov (United States)

    Luo, Shunlong; Sun, Yuan

    2017-08-01

Quantifications of coherence have been intensively studied in the context of completely decoherent operations (i.e., von Neumann measurements, or equivalently, orthonormal bases) in recent years. Here we investigate partial coherence (i.e., coherence in the context of partially decoherent operations such as Lüders measurements). A bona fide measure of partial coherence is introduced. As an application, we address the monotonicity problem of K-coherence (a quantifier of coherence in terms of Wigner-Yanase skew information) [Girolami, Phys. Rev. Lett. 113, 170401 (2014), 10.1103/PhysRevLett.113.170401], which was introduced to realize a measure of coherence as axiomatized by Baumgratz, Cramer, and Plenio [Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. Since K-coherence fails to meet the necessary requirement of monotonicity under incoherent operations, it is desirable to remedy this monotonicity problem. We show that if we modify the original measure by taking skew information with respect to the spectral decomposition of an observable, rather than the observable itself, the problem disappears, and the resultant coherence measure satisfies monotonicity. Some concrete examples are discussed and related open issues are indicated.

  9. Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method

    KAUST Repository

    Higueras, Inmaculada

    2018-02-14

    Perturbed Runge–Kutta methods (also referred to as downwind Runge–Kutta methods) can guarantee monotonicity preservation under larger step sizes relative to their traditional Runge–Kutta counterparts. In this paper we study the question of how to optimally perturb a given method in order to increase the radius of absolute monotonicity (a.m.). We prove that for methods with zero radius of a.m., it is always possible to give a perturbation with positive radius. We first study methods for linear problems and then methods for nonlinear problems. In each case, we prove upper bounds on the radius of a.m., and provide algorithms to compute optimal perturbations. We also provide optimal perturbations for many known methods.

  10. Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method

    KAUST Repository

    Higueras, Inmaculada; Ketcheson, David I.; Kocsis, Tihamér A.

    2018-01-01

    Perturbed Runge–Kutta methods (also referred to as downwind Runge–Kutta methods) can guarantee monotonicity preservation under larger step sizes relative to their traditional Runge–Kutta counterparts. In this paper we study the question of how to optimally perturb a given method in order to increase the radius of absolute monotonicity (a.m.). We prove that for methods with zero radius of a.m., it is always possible to give a perturbation with positive radius. We first study methods for linear problems and then methods for nonlinear problems. In each case, we prove upper bounds on the radius of a.m., and provide algorithms to compute optimal perturbations. We also provide optimal perturbations for many known methods.
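    For linear problems, the radius of absolute monotonicity of a Runge–Kutta method reduces to a property of its stability polynomial: the largest r ≥ 0 at which the polynomial is absolutely monotonic (all derivatives nonnegative at -r). A minimal sketch of that linear-case computation (function names are mine; the paper's algorithms for optimal perturbations and for nonlinear problems are substantially more involved):

```python
from math import comb

def taylor_shift(coeffs, c):
    """Coefficients (ascending order) of p(x + c), given those of p(x)."""
    n = len(coeffs)
    out = [0.0] * n
    for k, a in enumerate(coeffs):
        for j in range(k + 1):
            out[j] += a * comb(k, j) * c ** (k - j)
    return out

def radius_abs_monotonicity(coeffs, r_max=100.0, iters=60):
    """Largest r such that all Taylor coefficients of p about -r are
    nonnegative, i.e. the radius of absolute monotonicity of the
    stability polynomial p, found by bisection."""
    def ok(r):
        return all(b >= -1e-12 for b in taylor_shift(coeffs, -r))
    if not ok(0.0):
        return 0.0
    lo, hi = 0.0, r_max
    for _ in range(iters):
        mid = (lo + hi) / 2
        if ok(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For forward Euler (stability polynomial 1 + z) and the classical four-stage fourth-order method (1 + z + z²/2 + z³/6 + z⁴/24) this returns radius 1, matching the known threshold factors.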

  11. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  12. Global Attractivity Results for Mixed-Monotone Mappings in Partially Ordered Complete Metric Spaces

    Directory of Open Access Journals (Sweden)

    Kalabušić S

    2009-01-01

    Full Text Available We prove fixed point theorems for mixed-monotone mappings in partially ordered complete metric spaces which satisfy a weaker contraction condition than the classical Banach contraction condition for all points that are related by given ordering. We also give a global attractivity result for all solutions of the difference equation , where satisfies mixed-monotone conditions with respect to the given ordering.

  13. Non-monotonic resonance in a spatially forced Lengyel-Epstein model

    Energy Technology Data Exchange (ETDEWEB)

    Haim, Lev [Physics Department, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Department of Oncology, Soroka University Medical Center, Beer-Sheva 84101 (Israel); Hagberg, Aric [Center for Nonlinear Studies, Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Meron, Ehud [Physics Department, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Department of Solar Energy and Environmental Physics, BIDR, Ben-Gurion University of the Negev, Sede Boqer Campus, Midreshet Ben-Gurion 84990 (Israel)

    2015-06-15

    We study resonant spatially periodic solutions of the Lengyel-Epstein model modified to describe the chlorine dioxide-iodine-malonic acid reaction under spatially periodic illumination. Using multiple-scale analysis and numerical simulations, we obtain the stability ranges of 2:1 resonant solutions, i.e., solutions with wavenumbers that are exactly half of the forcing wavenumber. We show that the width of resonant wavenumber response is a non-monotonic function of the forcing strength, and diminishes to zero at sufficiently strong forcing. We further show that strong forcing may result in a π/2 phase shift of the resonant solutions, and argue that the nonequilibrium Ising-Bloch front bifurcation can be reversed. We attribute these behaviors to an inherent property of forcing by periodic illumination, namely, the increase of the mean spatial illumination as the forcing amplitude is increased.

  14. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...
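    The LC-augmented truncation itself is easy to state in code. The sketch below (function names are mine) shows the scalar, non-block version on a small birth-death chain: keep the northwest corner of the transition matrix and move each row's lost probability mass to the last retained column, then compute the truncation's stationary vector; the paper's bounds control the total variation distance between this vector and the original chain's stationary distribution.

```python
def lc_augmented_truncation(P, n):
    """Last-column-augmented northwest-corner truncation: keep the n x n
    northwest corner of P and add each row's lost probability mass to the
    last retained column, so every row sums to one again."""
    Q = [list(row[:n]) for row in P[:n]]
    for row in Q:
        row[-1] += 1.0 - sum(row)
    return Q

def stationary_distribution(Q, iters=5000):
    """Stationary probability vector of a finite stochastic matrix by
    power iteration (adequate for small, well-behaved examples)."""
    n = len(Q)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * Q[i][j] for i in range(n)) for j in range(n)]
    return pi
```

The augmentation keeps the truncated matrix stochastic by construction, which is what makes its stationary distribution well defined.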

  15. Pathwise duals of monotone and additive Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    -, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf

  16. The relation between majorization theory and quantum information from entanglement monotones perspective

    Energy Technology Data Exchange (ETDEWEB)

    Erol, V. [Department of Computer Engineering, Institute of Science, Okan University, Istanbul (Turkey); Netas Telecommunication Inc., Istanbul (Turkey)

    2016-04-21

    Entanglement has been studied extensively for understanding the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well-known monotones for quantifying entanglement, such as concurrence, relative entropy of entanglement (REE), and negativity, which cannot be increased via local operations. The study of these monotones has been a hot topic in quantum information [1-7], in order to understand the role of entanglement in this discipline. It can be observed that from any arbitrary quantum pure state a mixed state can be obtained. A natural generalization of this observation is to consider transformations between general pure states of two parties under local operations and classical communication (LOCC). Although this question is a little more difficult, a complete solution has been developed using the mathematical framework of majorization theory [8]. In this work, we analyze the relation between the entanglement monotones concurrence and negativity with respect to majorization for general two-level quantum systems of two particles.

  17. Generalized monotonicity from global minimization in fourth-order ODEs

    NARCIS (Netherlands)

    M.A. Peletier (Mark)

    2000-01-01

    We consider solutions of the stationary Extended Fisher-Kolmogorov equation with general potential that are global minimizers of an associated variational problem. We present results that relate the global minimization property to a generalized concept of monotonicity of the solutions.

  18. Further heuristics for $k$-means: The merge-and-split heuristic and the $(k,l)$-means

    OpenAIRE

    Nielsen, Frank; Nock, Richard

    2014-01-01

    Finding the optimal $k$-means clustering is NP-hard in general, and many heuristics have been designed to monotonically minimize the $k$-means objective. We first show how to extend Lloyd's batched relocation heuristic and Hartigan's single-point relocation heuristic to take into account empty-cluster and single-point-cluster events, respectively. Those events tend to occur increasingly often when $k$ or $d$ increases, or when performing several restarts. First, we show that those special events ...
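    For reference, Lloyd's batched heuristic, which the abstract extends, can be sketched as follows (a plain version with the simplest possible handling of the empty-cluster event, namely keeping the old center; function names are mine):

```python
import random

def lloyd_kmeans(points, k, iters=100, init=None, seed=0):
    """Lloyd's batched heuristic: alternate nearest-center assignment with
    centroid updates; each full pass cannot increase the k-means objective."""
    centers = list(init) if init is not None else random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        new_centers = []
        for j, cluster in enumerate(clusters):
            if cluster:
                dim = len(cluster[0])
                new_centers.append(tuple(sum(p[d] for p in cluster) / len(cluster)
                                         for d in range(dim)))
            else:
                # "Empty cluster" event: plain Lloyd has no rule here; we
                # simply keep the old center (the paper studies better fixes).
                new_centers.append(centers[j])
        if new_centers == centers:
            break
        centers = new_centers
    return centers

def kmeans_objective(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
               for p in points)
```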

  19. Monotone methods for solving a boundary value problem of second order discrete system

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Ming

    1999-01-01

    Full Text Available A new concept of a pair of upper and lower solutions is introduced for a boundary value problem of a second-order discrete system. A comparison result is given. An existence theorem for a solution is established in terms of upper and lower solutions. A monotone iterative scheme is proposed, and the monotone convergence rate of the iteration is analyzed and compared. Numerical results are given.

  20. Scaling and mean normalized multiplicity in hadron-nucleus collisions

    International Nuclear Information System (INIS)

    Khan, M.Q.R.; Ahmad, M.S.; Hasan, R.

    1987-01-01

    Recently it has been reported that the dependence of the mean normalized multiplicity, R_A, in hadron-nucleus collisions upon the effective number of projectile encounters, , is projectile independent. We report the failure of this kind of scaling using the world data at accelerator and cosmic-ray energies. In fact, we have found that the dependence of R_A upon the number of projectile encounters hA is projectile independent. This leads to a new kind of scaling. Further, the scaled multiplicity distributions are found to be independent of the nature and energy of the incident hadron in the energy range ≅ (17.2-300) GeV. (orig.)

  1. Absolute Monotonicity of Functions Related To Estimates of First Eigenvalue of Laplace Operator on Riemannian Manifolds

    Directory of Open Access Journals (Sweden)

    Feng Qi

    2014-10-01

    Full Text Available The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimates of the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.

  2. Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients

    KAUST Repository

    Matevosyan, Norayr

    2010-10-21

    In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2) 155 (2002)] and Caffarelli and Kenig [Amer. J. Math. 120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u± ≥ 0, Lu± ≥ -1, u+ · u− = 0 in an infinite strip (global version) or a finite parabolic cylinder (localized version), where L is a uniformly parabolic operator Lu = L_{A,b,c}u := div(A(x, s)∇u) + b(x, s) · ∇u + c(x, s)u − ∂_s u with double-Dini continuous A and uniformly bounded b and c. We also prove the elliptic counterpart of this estimate. This closes the gap between the known conditions in the literature (both in the elliptic and parabolic case) imposed on u± in order to obtain an almost monotonicity estimate. At the end of the paper, we demonstrate how to use this new almost monotonicity formula to prove the optimal C^{1,1}-regularity in a fairly general class of quasi-linear obstacle-type free boundary problems. © 2010 Wiley Periodicals, Inc.

  3. Effect on the mean first passage time in symmetrical bistable systems by cross-correlation between noises

    International Nuclear Information System (INIS)

    Wang, J.; Cao, L.; Wu, D.J.

    2003-01-01

    We present an analytic investigation of the mean first passage time in two opposite directions (from the left well to the right well and from right to left) in symmetrical bistable systems driven by correlated Gaussian white noises, and prove that the mean first passage times in the two opposite directions are no longer equal when the noises are correlated. As examples, the mean first passage times in the quartic bistable model and the sawtooth bistable model are calculated. From the analytic results, we further verify the relation T(x− → x+, λ) ≠ T(x+ → x−, λ) in the same region of the parameter plane. Moreover, it is found that the dependences of T+ (i.e., T(x− → x+, λ)) and T− (i.e., T(x+ → x−, λ)) upon the multiplicative noise intensity Q and the additive noise intensity D exhibit entirely different properties. In the same regions of the parameter plane for the quartic bistable system: the T+ vs. Q curve exhibits a maximum while the T− vs. Q curve is monotonic; the T+ vs. D curve is monotonic while the T− vs. D curve undergoes a transition from decreasing monotonically to possessing one minimum. As Q increases, the T+ vs. D curve undergoes a transition from decreasing monotonically to possessing one maximum, while the T− vs. D curve only increases monotonically. Similar behaviours also exist in the sawtooth bistable model.

  4. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in the relationship between a response variable and covariates. If one has no knowledge of the functional form of this relationship but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical-likelihood-based method for the isotonic regression model that incorporates the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
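    The PAVA that underlies this estimation is short enough to sketch directly (a minimal version for a nondecreasing fit with optional weights; function names are mine):

```python
def pava(y, w=None):
    """Pool-adjacent-violators algorithm for nondecreasing isotonic
    regression: scan left to right, merging adjacent blocks into their
    weighted mean whenever the monotonicity constraint is violated."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    means, weights, counts = [], [], []   # one entry per current block
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        # Merge adjacent blocks while they violate monotonicity.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            wt = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wt)
            weights.append(wt)
            counts.append(c1 + c2)
    # Expand block means back to a full-length fitted sequence.
    fit = []
    for m, c in zip(means, counts):
        fit.extend([m] * c)
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean 2.5, giving the monotone fit [1, 2.5, 2.5, 4].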

  5. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor

    2013-01-01

    In this paper, we present empirical results on the relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree, i.e., one that is optimal with respect to both depth and number of nodes.

  6. Theoretical and experimental study of non-monotonous effects

    International Nuclear Information System (INIS)

    Delforge, J.

    1977-01-01

    In recent years, the study of the effects of low dose rates has expanded considerably, especially in connection with current problems concerning the environment and health physics. After precisely defining the different types of non-monotonic effect which may be encountered, we indicate for each the main known experimental results as well as the principal consequences which may be expected. One example is the case of radiotherapy, where there is a chance of finding irradiation conditions such that the ratio of destructive action on malignant cells to that on healthy cells is significantly improved. In the second part of the report, the appearance of these phenomena, especially at low dose rates, is explained. For this purpose, the theory of transformation systems of P. Delattre is used as a theoretical framework. With the help of a specific example, it is shown that non-monotonic effects are frequently encountered, especially when the overall effect observed is actually the sum of several different elementary effects (e.g. in survival curves, where death may be due to several different causes), or when the objects studied possess inherent kinetics not limited to restoration phenomena alone (e.g. the cellular cycle) [fr

  7. Existence of weak solutions to first-order stationary mean-field games with Dirichlet conditions

    KAUST Repository

    Ferreira, Rita; Gomes, Diogo A.; Tada, Teruo

    2018-01-01

    In this paper, we study first-order stationary monotone mean-field games (MFGs) with Dirichlet boundary conditions. While for Hamilton--Jacobi equations Dirichlet conditions may not be satisfied, here, we establish the existence of solutions of MFGs that satisfy those conditions. To construct these solutions, we introduce a monotone regularized problem. Applying Schaefer's fixed-point theorem and using the monotonicity of the MFG, we verify that there exists a unique weak solution to the regularized problem. Finally, we take the limit of the solutions of the regularized problem and using Minty's method, we show the existence of weak solutions to the original MFG.

  8. How do people learn from negative evidence? Non-monotonic generalizations and sampling assumptions in inductive reasoning.

    Science.gov (United States)

    Voorspoels, Wouter; Navarro, Daniel J; Perfors, Amy; Ransom, Keith; Storms, Gert

    2015-09-01

    A robust finding in category-based induction tasks is that positive observations raise the willingness to generalize to other categories while negative observations lower it. This pattern is referred to as monotonic generalization. Across three experiments we find systematic non-monotonicity effects, in which negative observations raise the willingness to generalize. Experiments 1 and 2 show that this effect emerges in hierarchically structured domains when a negative observation from a different category is added to a positive observation. They also demonstrate that this is related to a specific kind of shift in the reasoner's hypothesis space. Experiment 3 shows that the effect depends on the assumptions that the reasoner makes about how inductive arguments are constructed. Non-monotonic reasoning occurs when people believe the facts were put together by a helpful communicator, but monotonicity is restored when they believe the observations were sampled randomly from the environment. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. A new efficient algorithm for computing the imprecise reliability of monotone systems

    International Nuclear Information System (INIS)

    Utkin, Lev V.

    2004-01-01

    Reliability analysis of complex systems under partial information about the reliability of components and under different conditions of component independence may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing system reliability. However, the application of imprecise probabilities to reliability analysis runs into the complexity of the optimization problems that have to be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose these optimization problems is proposed in this paper. The algorithm makes it practical to carry out reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, and under conditions of component independence or a lack of information about independence. A numerical example illustrates the algorithm.
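    The simplest special case of such interval reliability computations needs no optimization at all: with independent components whose reliabilities are only known to lie in intervals, the monotonicity of a coherent structure function lets the bounds propagate directly. A sketch under exactly that assumption (names mine; the paper's natural-extension machinery covers far more general partial information, including unknown dependence):

```python
def parallel_pair(p, q):
    """Reliability of two independent components in parallel."""
    return 1.0 - (1.0 - p) * (1.0 - q)

def system_reliability(r):
    """Example monotone (coherent) structure: two parallel pairs in series.
    Any coherent structure is nondecreasing in each component reliability,
    which is what the interval propagation below relies on."""
    return parallel_pair(r[0], r[1]) * parallel_pair(r[2], r[3])

def interval_reliability(structure, lower, upper):
    """Bounds on system reliability when each independent component's
    reliability is only known to lie in [lower[i], upper[i]]: by
    monotonicity, plugging in the bounds gives the exact envelope."""
    return structure(lower), structure(upper)
```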

  10. Statistical analysis of sediment toxicity by additive monotone regression splines

    NARCIS (Netherlands)

    Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.

    2002-01-01

    Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this

  11. Monotone matrix transformations defined by the group inverse and simultaneous diagonalizability

    International Nuclear Information System (INIS)

    Bogdanov, I I; Guterman, A E

    2007-01-01

    Bijective linear transformations of the matrix algebra over an arbitrary field that preserve simultaneous diagonalizability are characterized. This result is used for the characterization of bijective linear monotone transformations. Bibliography: 28 titles.

  12. Scaling laws for dislocation microstructures in monotonic and cyclic deformation of fcc metals

    International Nuclear Information System (INIS)

    Kubin, L.P.; Sauzay, M.

    2011-01-01

    This work reviews and critically discusses the current understanding of two scaling laws which are ubiquitous in the modeling of monotonic plastic deformation in face-centered cubic metals. A compilation of the available data allows extending the domain of application of these scaling laws to cyclic deformation. The strengthening relation states that the flow stress is proportional to the square root of the average dislocation density, whereas the similitude relation assumes that the flow stress is inversely proportional to the characteristic wavelength of dislocation patterns. The strengthening relation arises from short-range reactions of non-coplanar segments and applies all through the first three stages of the monotonic stress vs. strain curves. The value of the proportionality coefficient is calculated and simulated in good agreement with the bulk of experimental measurements published since the beginning of the 1960s. The physical origin of what is called similitude is not understood, and the related coefficient is not predictable; its value is determined from a review of the experimental literature. The generalization of these scaling laws to cyclic deformation is carried out on the basis of a large collection of experimental results on single and polycrystals of various materials and on different microstructures. Surprisingly, for persistent slip bands (PSBs), both the strengthening and similitude coefficients appear to be more than two times smaller than the corresponding monotonic values, whereas their ratio is the same as in monotonic deformation. The similitude relation is also checked in cell structures and in labyrinth structures. Under low cyclic stresses, the strengthening coefficient is found to be even lower than in PSBs. A tentative explanation is proposed for the differences observed between cyclic and monotonic deformation. Finally, the influence of cross-slip on the temperature dependence of the saturation stress of PSBs is discussed in some detail.
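    As a quick order-of-magnitude illustration of the strengthening relation τ = αμb√ρ discussed above (the numerical values below are typical textbook figures for copper, assumed for illustration, not taken from this paper):

```python
import math

# Strengthening (Taylor) relation: tau = alpha * mu * b * sqrt(rho),
# with representative values for copper (assumed for illustration):
alpha = 0.35        # strengthening coefficient, dimensionless
mu = 42e9           # shear modulus, Pa
b = 2.56e-10        # Burgers vector magnitude, m
rho = 1e14          # average dislocation density, m^-2

tau = alpha * mu * b * math.sqrt(rho)   # flow-stress contribution, Pa
# tau comes out at a few tens of MPa, the expected order of magnitude
# for a moderately work-hardened fcc metal.
```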

  13. Monotonicity properties of keff with shape change and with nesting

    International Nuclear Information System (INIS)

    Arzhanov, V.

    2002-01-01

    It was found that, contrary to expectations based on physical intuition, k_eff can both increase and decrease when changing the shape of an initially regular critical system while preserving its volume. Physical intuition would only allow for a decrease of k_eff when the surface/volume ratio increases. The unexpected behaviour of increasing k_eff was found through numerical investigation. For a convincing demonstration of the possibility of the non-monotonic behaviour, a simple geometrical proof was constructed. This latter proof, in turn, is based on the assumption that k_eff can only increase (or stay constant) in the case of nesting, i.e. when adding extra volume to a system. Since we found no formal proof of the nesting theorem for the general case, we close the paper with a simple formal proof of the monotonic behaviour of k_eff under nesting.

  14. Monotone difference schemes for weakly coupled elliptic and parabolic systems

    NARCIS (Netherlands)

    P. Matus (Piotr); F.J. Gaspar Lorenz (Franscisco); L. M. Hieu (Le Minh); V.T.K. Tuyen (Vo Thi Kim)

    2017-01-01

    The present paper is devoted to the development of the theory of monotone difference schemes, approximating the so-called weakly coupled system of linear elliptic and quasilinear parabolic equations. Similarly to the scalar case, the canonical form of the vector-difference schemes is

  15. Existence of weak solutions to first-order stationary mean-field games with Dirichlet conditions

    KAUST Repository

    Ferreira, Rita

    2018-04-19

    In this paper, we study first-order stationary monotone mean-field games (MFGs) with Dirichlet boundary conditions. While for Hamilton--Jacobi equations Dirichlet conditions may not be satisfied, here, we establish the existence of solutions of MFGs that satisfy those conditions. To construct these solutions, we introduce a monotone regularized problem. Applying Schaefer's fixed-point theorem and using the monotonicity of the MFG, we verify that there exists a unique weak solution to the regularized problem. Finally, we take the limit of the solutions of the regularized problem and using Minty's method, we show the existence of weak solutions to the original MFG.

  16. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    Science.gov (United States)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem-specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
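    The monotonic acceptance rule and the heavy-tailed perturbation are the two ingredients that matter here. A bare-bones 1D sketch (names mine; real MBH also runs a local optimizer on each trial point, omitted for brevity):

```python
import math
import random

def monotonic_basin_hopping(f, x0, steps=3000, scale=0.5, seed=1):
    """Bare-bones 1D monotonic basin hopping: perturb the incumbent with a
    heavy-tailed Cauchy step, generated via the inverse CDF
    scale * tan(pi * (u - 1/2)), and accept the trial point only if it
    improves the objective (monotonic acceptance)."""
    rng = random.Random(seed)
    best_x, best_f = x0, f(x0)
    for _ in range(steps):
        step = scale * math.tan(math.pi * (rng.random() - 0.5))
        trial = best_x + step
        f_trial = f(trial)
        if f_trial < best_f:  # never accept a worse point
            best_x, best_f = trial, f_trial
    return best_x, best_f
```

Because worse points are never accepted, the incumbent objective is non-increasing by construction; the Cauchy tail supplies the occasional long jump that escapes a local basin.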

  17. Monotone Comparative Statics for the Industry Composition

    DEFF Research Database (Denmark)

    Laugesen, Anders Rosenstand; Bache, Peter Arendorf

    2015-01-01

    We let heterogeneous firms face decisions on a number of complementary activities in a monopolistically-competitive industry. The endogenous level of competition and selection regarding entry and exit of firms introduces a wedge between monotone comparative statics (MCS) at the firm level and MCS for the industry composition. The latter phenomenon is defined as first-order stochastic dominance shifts in the equilibrium distributions of all activities across active firms. We provide sufficient conditions for MCS at both levels of analysis and show that we may have either type of MCS without the other...

  18. An electronic implementation for Liao's chaotic delayed neuron model with non-monotonous activation function

    International Nuclear Information System (INIS)

    Duan Shukai; Liao Xiaofeng

    2007-01-01

    A new chaotic delayed neuron model with a non-monotonously increasing transfer function, called the chaotic Liao delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods of circuit design, especially for circuits with a time-delay unit and a non-monotonously increasing activation unit, are considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.

  19. Sampling from a Discrete Distribution While Preserving Monotonicity.

    Science.gov (United States)

    1982-02-01

    in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and E[X] comparisons on average, which may prove...limitations that deserve attention: a. In general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method...uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new having been
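    The table-based inverse transform method described above can be sketched as follows (names mine; a binary search over the precomputed CDF table is used here, whereas the average comparison count quoted in the abstract refers to a sequential scan of the same table):

```python
import bisect
import random

def make_inverse_transform_sampler(probs, seed=None):
    """Tabulated inverse-transform sampling for X in {0, ..., n-1}.
    The CDF table is precomputed (n storage spaces); each draw locates U
    in the table, and the map U -> X is monotone: a larger U can never
    yield a smaller X, a property the alias method does not share."""
    cdf = []
    total = 0.0
    for p in probs:
        total += p
        cdf.append(total)
    rng = random.Random(seed)

    def sample(u=None):
        # Draw U uniformly unless an explicit u is supplied.
        if u is None:
            u = rng.random()
        # Index of the first CDF entry >= u.
        return bisect.bisect_left(cdf, u)

    return sample
```

Monotonicity is immediate: the CDF table is sorted, so a larger U can only map to the same or a larger X.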

  20. Martensitic Transformation in Ultrafine-Grained Stainless Steel AISI 304L Under Monotonic and Cyclic Loading

    Directory of Open Access Journals (Sweden)

    Heinz Werner Höppel

    2012-02-01

    Full Text Available The monotonic and cyclic deformation behavior of ultrafine-grained metastable austenitic steel AISI 304L, produced by severe plastic deformation, was investigated. Under monotonic loading, the martensitic phase transformation in the ultrafine-grained state is strongly favored. Under cyclic loading, the martensitic transformation behavior is similar to the coarse-grained condition, but the cyclic stress response is three times larger for the ultrafine-grained condition.

  1. Existence, uniqueness, monotonicity and asymptotic behaviour of travelling waves for epidemic models

    International Nuclear Information System (INIS)

    Hsu, Cheng-Hsiung; Yang, Tzi-Sheng

    2013-01-01

    The purpose of this work is to investigate the existence, uniqueness, monotonicity and asymptotic behaviour of travelling wave solutions for a general epidemic model arising from the spread of an epidemic by oral-faecal transmission. First, we apply Schauder's fixed point theorem, combined with a supersolution and subsolution pair, to derive the existence of positive monotone monostable travelling wave solutions. Then, applying Ikehara's theorem, we determine the exponential rates at which travelling wave solutions converge to two different equilibria as the moving coordinate tends to positive infinity and negative infinity, respectively. Finally, using the sliding method, we prove the uniqueness result provided the travelling wave solutions satisfy some boundedness conditions. (paper)

  2. Positivity and monotonicity properties of C0-semigroups. Pt. 1

    International Nuclear Information System (INIS)

    Bratteli, O.; Kishimoto, A.; Robinson, D.W.

    1980-01-01

    If exp(-tH), exp(-tK) are self-adjoint, positivity-preserving contraction semigroups on a Hilbert space H = L²(X; dμ), we write e^(-tH) ≥ e^(-tK) ≥ 0 (*) whenever exp(-tH) - exp(-tK) is positivity preserving for all t ≥ 0, and then we characterize the class of positive functions f for which (*) always implies e^(-tf(H)) ≥ e^(-tf(K)) ≥ 0. This class consists of the f ∈ C^∞(0, ∞) with (-1)^n f^((n+1))(x) ≥ 0 for x ∈ (0, ∞) and n = 0, 1, 2, ... In particular it contains the class of monotone operator functions. Furthermore, if exp(-tH) is L^p(X; dμ)-contractive for all p ∈ [1, ∞] and all t > 0 (or, equivalently, for p = ∞ and t > 0), then exp(-tf(H)) has the same property. Various applications to monotonicity properties of Green's functions are given. (orig.)

  3. Non-monotonic effect of growth temperature on carrier collection in SnS solar cells

    International Nuclear Information System (INIS)

    Chakraborty, R.; Steinmann, V.; Mangan, N. M.; Brandt, R. E.; Poindexter, J. R.; Jaramillo, R.; Mailoa, J. P.; Hartman, K.; Polizzotti, A.; Buonassisi, T.; Yang, C.; Gordon, R. G.

    2015-01-01

We quantify the effects of growth temperature on material and device properties of thermally evaporated SnS thin-films and test structures. Grain size, Hall mobility, and majority-carrier concentration monotonically increase with growth temperature. However, the charge collection as measured by the long-wavelength contribution to short-circuit current exhibits a non-monotonic behavior: the collection decreases with increased growth temperature from 150 °C to 240 °C and then recovers at 285 °C. Fits to the experimental internal quantum efficiency using an opto-electronic model indicate that the non-monotonic behavior of charge-carrier collection can be explained by a transition from drift- to diffusion-assisted components of carrier collection. The results show a promising increase in the extracted minority-carrier diffusion length at the highest growth temperature of 285 °C. These findings illustrate how coupled mechanisms can affect early-stage device development, highlighting the critical role of direct materials property measurements and simulation.

  4. The effect of the electrical double layer on hydrodynamic lubrication: a non-monotonic trend with increasing zeta potential

    Directory of Open Access Journals (Sweden)

    Dalei Jing

    2017-07-01

In the present study, a modified Reynolds equation including the electrical double layer (EDL)-induced electroviscous effect of the lubricant is established to investigate the effect of the EDL on the hydrodynamic lubrication of a 1D slider bearing. The theoretical model is based on the nonlinear Poisson–Boltzmann equation without the use of the Debye–Hückel approximation. Furthermore, the variation in the bulk electrical conductivity of the lubricant under the influence of the EDL is also considered during the theoretical analysis of hydrodynamic lubrication. The results show that the EDL can increase the hydrodynamic load capacity of the lubricant in a 1D slider bearing. More importantly, the hydrodynamic load capacity of the lubricant under the influence of the EDL shows a non-monotonic trend, changing from enhancement to attenuation with a gradual increase in the absolute value of the zeta potential. This non-monotonic hydrodynamic lubrication is dependent on the non-monotonic electroviscous effect of the lubricant generated by the EDL, which is dominated by the non-monotonic electrical field strength and non-monotonic electrical body force on the lubricant. The subject of the paper is the theoretical modeling and the corresponding analysis.

  5. Modelling the drained response of bucket foundations for offshore wind turbines under general monotonic and cyclic loading

    DEFF Research Database (Denmark)

    Foglia, Aligi; Gottardi, Guido; Govoni, Laura

    2015-01-01

The response of bucket foundations on sand subjected to planar monotonic and cyclic loading is investigated in the paper. Thirteen monotonic and cyclic laboratory tests on a skirted footing model, 0.3 m in diameter with an embedment ratio of 1, are presented. The loading regime reproduces t...

  6. Multigenerational contaminant exposures produce non-monotonic, transgenerational responses in Daphnia magna

    International Nuclear Information System (INIS)

    Kimberly, David A.; Salice, Christopher J.

    2015-01-01

Generally, ecotoxicologists rely on short-term tests that assume populations to be static. Conversely, natural populations may be exposed to the same stressors for many generations, which can alter tolerance to the same (or other) stressors. The objective of this study was to improve our understanding of how multigenerational stressors alter life history traits and stressor tolerance. After continuously exposing Daphnia magna to cadmium for 120 days, we assessed life history traits and conducted a challenge at higher temperature and cadmium concentrations. Predictably, individuals exposed to cadmium showed an overall decrease in reproductive output compared to controls. Interestingly, control D. magna were the most tolerant to novel cadmium, followed by those exposed to high cadmium. Our data suggest that long-term exposure to cadmium alters tolerance traits in a non-monotonic way. Because we observed effects after one-generation removal from cadmium, transgenerational effects may be possible as a result of multigenerational exposure. - Highlights: • Daphnia magna exposed to cadmium for 120 days. • D. magna exposed to cadmium had decreased reproductive output. • Control D. magna were most tolerant to novel cadmium stress. • Long-term exposure to cadmium alters tolerance traits in a non-monotonic way. • Transgenerational effects observed as a result of multigenerational exposure. - Adverse effects of long-term cadmium exposure persist into cadmium-free conditions, as seen by non-monotonic responses when exposed to novel stress one generation removed.

  7. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
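The monotone flow idea can be illustrated with a minimal numerical sketch. This is not the thesis's actual scheme for the regularized MFG system; it only shows the underlying principle on a toy problem: for a strongly monotone operator F, the discretized flow z_{k+1} = z_k − h·F(z_k) is a contraction whose fixed point is the zero of F.

```python
import numpy as np

def monotone_flow(F, z0, step=0.1, tol=1e-10, max_iter=10000):
    """Follow the discretized flow z' = -F(z) for a monotone operator F.
    For a strongly monotone F (and small enough step), the iteration is
    a contraction converging to the zero of F."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        dz = F(z)
        z = z - step * dz
        if np.linalg.norm(dz) < tol:
            break
    return z

# Toy monotone operator: F(z) = A z - b with A symmetric positive definite,
# so the zero of F is the solution of A z = b.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([1.0, 2.0])
F = lambda z: A @ z - b

z_star = monotone_flow(F, np.zeros(2))
```

The fixed point agrees with the direct linear solve, which is the contraction property the method relies on.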

  8. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  9. Normalized mutual information based PET-MR registration using K-Means clustering and shading correction

    NARCIS (Netherlands)

    Knops, Z.F.; Maintz, J.B.A.; Viergever, M.A.; Pluim, J.P.W.; Gee, J.C.; Maintz, J.B.A.; Vannier, M.W.

    2003-01-01

    A method for the efficient re-binning and shading based correction of intensity distributions of the images prior to normalized mutual information based registration is presented. Our intensity distribution re-binning method is based on the K-means clustering algorithm as opposed to the generally

  10. Experimental quantum control landscapes: Inherent monotonicity and artificial structure

    International Nuclear Information System (INIS)

    Roslund, Jonathan; Rabitz, Herschel

    2009-01-01

Unconstrained searches over quantum control landscapes are theoretically predicted to generally exhibit trap-free monotonic behavior. This paper makes an explicit experimental demonstration of this intrinsic monotonicity for two controlled quantum systems: frequency unfiltered and filtered second-harmonic generation (SHG). For unfiltered SHG, the landscape is randomly sampled and interpolation of the data is found to be devoid of landscape traps up to the level of data noise. In the case of narrow-band-filtered SHG, trajectories are taken on the landscape to reveal a lack of traps. Although the filtered SHG landscape is trap free, it exhibits a rich local structure. A perturbation analysis around the top of these landscapes provides a basis to understand their topology. Despite the inherent trap-free nature of the landscapes, practical constraints placed on the controls can lead to the appearance of artificial structure arising from the resultant forced sampling of the landscape. This circumstance, and the likely lack of knowledge about the detailed local landscape structure in most quantum control applications, suggest that the a priori identification of globally successful (un)constrained curvilinear control variables may be a challenging task.

  11. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known the resulting...... are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented...

  12. Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients

    KAUST Repository

    Matevosyan, Norayr; Petrosyan, Arshak

    2010-01-01

    In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2)155 (2002)] and Caffarelli and Kenig [Amer. J. Math.120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u

  13. Non-monotonic reasoning in conceptual modeling and ontology design: A proposal

    CSIR Research Space (South Africa)

    Casini, G

    2013-06-01

Presented at the 2nd International Workshop on Ontologies and Conceptual Modeling (Onto.Com 2013), Valencia, Spain, 17-21 June 2013.

  14. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

    Directory of Open Access Journals (Sweden)

    Xueqin Zhou

    2017-01-01

This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function is estimated by a kernel method. A set of simulations was conducted, which shows that the estimates perform well.
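The kernel step mentioned in the abstract can be illustrated with a plain Gaussian kernel density estimator. This is a generic sketch with synthetic stand-in data; the function name and bandwidth are hypothetical, and the paper's grouped-missing-data setting is not reproduced:

```python
import numpy as np

def kernel_density(samples, x, bandwidth):
    """Plain Gaussian kernel density estimate evaluated at the points x."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel
    return k.mean(axis=1) / bandwidth                  # average, rescale

rng = np.random.default_rng(3)
samples = rng.normal(size=500)            # stand-in observations
x = np.linspace(-4.0, 4.0, 801)
dens = kernel_density(samples, x, bandwidth=0.3)
```

The estimate is non-negative and integrates to approximately one over the evaluation grid, as any density estimate should.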

  15. Monotonous braking of high energy hadrons in nuclear matter

    International Nuclear Information System (INIS)

    Strugalski, Z.

    1979-01-01

Propagation of high energy hadrons in nuclear matter is discussed, and the possibility that hadrons lose energy monotonically in nuclear matter is considered. Experimental facts supporting this hypothesis, such as proton emission spectra and proton multiplicity distributions in pion-nucleus interactions, are presented along with other data. Within the framework of the hypothesis, the investigated phenomenon is characterized in more detail.

  16. A measure of uncertainty regarding the interval constraint of normal mean elicited by two stages of a prior hierarchy.

    Science.gov (United States)

    Kim, Hea-Jung

    2014-01-01

This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty, regarding the interval constraint, accounted for by using the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  17. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-01

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  18. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-09

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  19. The electronic structure of normal metal-superconductor bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Halterman, Klaus; Elson, J Merle [Sensor and Signal Sciences Division, Naval Air Warfare Center, China Lake, CA 93355 (United States)

    2003-09-03

We study the electronic properties of ballistic thin normal metal-bulk superconductor heterojunctions by solving the Bogoliubov-de Gennes equations in the quasiclassical and microscopic 'exact' regimes. In particular, the significance of the proximity effect is examined through a series of self-consistent calculations of the space-dependent pair potential Δ(r). It is found that self-consistency cannot be neglected for normal metal layer widths smaller than the superconducting coherence length ξ₀, revealing its importance through discernible features in the subgap density of states. Furthermore, the exact self-consistent treatment yields a proximity-induced gap in the normal metal spectrum, which vanishes monotonically when the normal metal length exceeds ξ₀. Through a careful analysis of the excitation spectra, we find that quasiparticle trajectories with wavevectors oriented mainly along the interface play a critical role in the destruction of the energy gap.

  20. Quantisation of monotonic twist maps

    International Nuclear Information System (INIS)

    Boasman, P.A.; Smilansky, U.

    1993-08-01

Using an approach suggested by Moser, classical Hamiltonians are generated that provide an interpolating flow to the stroboscopic motion of maps with a monotonic twist condition. The quantum properties of these Hamiltonians are then studied in analogy with recent work on the semiclassical quantization of systems based on Poincare surfaces of section. For the generalized standard map, the correspondence with the usual classical and quantum results is shown, and the advantages of the quantum Moser Hamiltonian demonstrated. The same approach is then applied to the free motion of a particle on a 2-torus, and to the circle billiard. A natural quantization condition based on the eigenphases of the unitary time-development operator is applied, yielding the exact eigenvalues for the torus but only the semiclassical eigenvalues for the billiard; an explanation for this failure is proposed. It is also seen how iterating the classical map commutes with the quantization. (authors)

  1. On the Fractional Mean Value

    OpenAIRE

    Hosseinabadi, Abdolali Neamaty; Nategh, Mehdi

    2014-01-01

This work deals with the classical mean value theorem and takes advantage of it in the fractional calculus. The concept of a fractional critical point is introduced. Sufficient conditions for the existence of a critical point are studied, and an illustrative example relevant to the concept of the time dilation effect is given. The present paper also includes some connections between convexity (and monotonicity) and the fractional derivative in the Riemann-Liouville sense.

  2. The retest distribution of the visual field summary index mean deviation is close to normal.

    Science.gov (United States)

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilks test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilks normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
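The normality checks described above (a Shapiro-Wilk test plus kurtosis) can be sketched as follows. The data here are a synthetic stand-in for a set of retest MD values, and the SciPy routines are assumed to be available; the study's actual measurements are not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for 40 analysed retest MD values (in dB);
# loc/scale are illustrative, not taken from the study.
md_values = rng.normal(loc=-0.5, scale=0.6, size=40)

# Shapiro-Wilk test: a large p-value gives no evidence against normality.
w_stat, p_value = stats.shapiro(md_values)

# Excess kurtosis (Fisher definition): 0 for a normal distribution.
kurt = stats.kurtosis(md_values)
```

The same two statistics, computed per observer, are what the confidence-interval and quantile-quantile analyses in the abstract build on.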

  3. A Measure of Uncertainty regarding the Interval Constraint of Normal Mean Elicited by Two Stages of a Prior Hierarchy

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2014-01-01

This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty, regarding the interval constraint, accounted for by using the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  4. On utilization bounds for a periodic resource under rate monotonic scheduling

    NARCIS (Netherlands)

    Renssen, van A.M.; Geuns, S.J.; Hausmans, J.P.H.M.; Poncin, W.; Bril, R.J.

    2009-01-01

    This paper revisits utilization bounds for a periodic resource under the rate monotonic (RM) scheduling algorithm. We show that the existing utilization bound, as presented in [8, 9], is optimistic. We subsequently show that by viewing the unavailability of the periodic resource as a deferrable
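For context, the classical dedicated-resource rate monotonic bound of Liu and Layland, which periodic-resource bounds like the ones revisited above generalize, can be sketched in a few lines. This is standard background, not the paper's periodic-resource bound:

```python
def liu_layland_bound(n: int) -> float:
    """Classical RM utilization bound for n periodic tasks on a
    dedicated (always-available) resource: U(n) = n * (2^(1/n) - 1)."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def rm_schedulable(tasks) -> bool:
    """Sufficient (not necessary) RM schedulability test: the total
    utilization stays below the Liu-Layland bound.
    `tasks` is a list of (C, T) pairs: execution time and period."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= liu_layland_bound(len(tasks))
```

For example, three tasks with utilizations 0.25, 0.2, and 0.1 pass the test, since their total of 0.55 is below the three-task bound of about 0.7798.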

  5. Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function

    Science.gov (United States)

    Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng

    2008-01-01

The function 1/x² − e^(−x)/(1 − e^(−x))² for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) − e^((a−1)t)) for a…
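The claimed monotonicity of the first function is easy to check numerically. The sketch below samples f(x) = 1/x² − e^(−x)/(1 − e^(−x))² on a grid and confirms it is positive and strictly decreasing there (a numerical spot-check, not a proof):

```python
import numpy as np

def f(x):
    """f(x) = 1/x^2 - exp(-x) / (1 - exp(-x))^2, defined for x > 0."""
    return 1.0 / x ** 2 - np.exp(-x) / (1.0 - np.exp(-x)) ** 2

x = np.linspace(0.1, 20.0, 2000)
fx = f(x)
```

On this grid the values decrease monotonically from just under 1/12 (the limit as x → 0⁺) towards 0, consistent with the stated result.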

  6. On monotonic solutions of an integral equation of Abel type

    International Nuclear Information System (INIS)

    Darwish, Mohamed Abdalla

    2007-08-01

We present an existence theorem of monotonic solutions for a quadratic integral equation of Abel type in C[0, 1]. The famous Chandrasekhar integral equation is considered as a special case. The concept of measure of noncompactness and a fixed point theorem due to Darbo are the main tools in carrying out our proof. (author)

  7. Logarithmically complete monotonicity of a function related to the Catalan-Qi function

    Directory of Open Access Journals (Sweden)

    Qi Feng

    2016-08-01

In the paper, the authors find necessary and sufficient conditions such that a function related to the Catalan-Qi function, which is an alternative generalization of the Catalan numbers, is logarithmically completely monotonic.

  8. A note on profit maximization and monotonicity for inbound call centers

    NARCIS (Netherlands)

    Koole, G.M.; Pot, S.A.

    2011-01-01

    We consider an inbound call center with a fixed reward per call and communication and agent costs. By controlling the number of lines and the number of agents, we can maximize the profit. Abandonments are included in our performance model. Monotonicity results for the maximization problem are

  9. Effect of dynamic monotonic and cyclic loading on fracture behavior for Japanese carbon steel pipe STS410

    Energy Technology Data Exchange (ETDEWEB)

Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki [and others]

    1997-04-01

The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during seismic events. Tensile and fracture toughness tests were conducted for the base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch. 120, were employed for quasi-static monotonic, dynamic monotonic, and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled by applying a fully reversed load (R = -1). The pipe specimens, each with a circumferential through-wall crack, were subjected to four-point bending at 300 °C in air. The STS410 carbon steel was found to have high toughness under dynamic loading in the CT fracture toughness tests. The pipe fracture tests showed that the maximum moment to pipe fracture under dynamic monotonic and cyclic loading could be estimated by the plastic collapse criterion, and that dynamic monotonic and cyclic loading had little effect on the maximum moment to fracture of the STS410 carbon steel pipe. The STS410 carbon steel pipe thus appears less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.

  10. Effect of dynamic monotonic and cyclic loading on fracture behavior for Japanese carbon steel pipe STS410

    International Nuclear Information System (INIS)

    Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki

    1997-01-01

The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during seismic events. Tensile and fracture toughness tests were conducted for the base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch. 120, were employed for quasi-static monotonic, dynamic monotonic, and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled by applying a fully reversed load (R = -1). The pipe specimens, each with a circumferential through-wall crack, were subjected to four-point bending at 300 °C in air. The STS410 carbon steel was found to have high toughness under dynamic loading in the CT fracture toughness tests. The pipe fracture tests showed that the maximum moment to pipe fracture under dynamic monotonic and cyclic loading could be estimated by the plastic collapse criterion, and that dynamic monotonic and cyclic loading had little effect on the maximum moment to fracture of the STS410 carbon steel pipe. The STS410 carbon steel pipe thus appears less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.

  11. Two Numerical Approaches to Stationary Mean-Field Games

    KAUST Repository

    Almulla, Noha; Ferreira, Rita; Gomes, Diogo A.

    2016-01-01

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient-flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  12. Two Numerical Approaches to Stationary Mean-Field Games

    KAUST Repository

    Almulla, Noha

    2016-10-04

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient-flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  13. Multipartite entangled quantum states: Transformation, Entanglement monotones and Application

    Science.gov (United States)

    Cui, Wei

Entanglement is one of the fundamental features of quantum information science. Though bipartite entanglement has been analyzed thoroughly in theory and shown to be an important resource in quantum computation and communication protocols, the theory of entanglement shared between more than two parties, which is called multipartite entanglement, is still not complete. Specifically, the classification of multipartite entanglement and the transformation property between different multipartite states by local operations and classical communication (LOCC) are two fundamental questions in the theory of multipartite entanglement. In this thesis, we present results related to the LOCC transformation between multipartite entangled states. Firstly, we investigate the bounds on the LOCC transformation probability between multipartite states, especially the GHZ class states. By analyzing the involvement of 3-tangle and other entanglement measures under weak two-outcome measurement, we derive explicit upper and lower bounds on the transformation probability between GHZ class states. After that, we also analyze the transformation between N-party W type states, which is a special class of multipartite entangled states that has an explicit unique expression and a set of analytical entanglement monotones. We present a necessary and sufficient condition for a known upper bound of transformation probability between two N-party W type states to be achieved. We also further investigate a novel entanglement transformation protocol, the random distillation, which transforms multipartite entanglement into bipartite entanglement shared by a non-deterministic pair of parties. We find upper bounds for the random distillation protocol for general N-party W type states and find the condition for the upper bounds to be achieved. What is surprising is that the upper bounds correspond to entanglement monotones that can be increased by separable operations (SEP), which gives the first set of

  14. Characteristic of monotonicity of Orlicz function spaces equipped with the Orlicz norm

    Czech Academy of Sciences Publication Activity Database

    Foralewski, P.; Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav

    2013-01-01

    Roč. 53, č. 2 (2013), s. 421-432 ISSN 0373-8299 R&D Projects: GA ČR GAP201/10/1920 Institutional support: RVO:67985840 Keywords : Orlicz space * Köthe space * characteristic of monotonicity Subject RIV: BA - General Mathematics

  15. Hybrid Proximal-Point Methods for Zeros of Maximal Monotone Operators, Variational Inequalities and Mixed Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Kriengsak Wattanawitoon

    2011-01-01

We prove strong and weak convergence theorems of modified hybrid proximal-point algorithms for finding a common element of the zero point set of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solutions of the variational inequality for an inverse strongly monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced results of Li and Song (2008) and many other authors.

  16. An electronic implementation for Liao's chaotic delayed neuron model with non-monotonous activation function

    Energy Technology Data Exchange (ETDEWEB)

    Duan Shukai [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); School of Electronic and Information Engineering, Southwest University, Chongqing 400715 (China)], E-mail: duansk@swu.edu.cn; Liao Xiaofeng [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)], E-mail: xfliao@cqu.edu.cn

    2007-09-10

A new chaotic delayed neuron model with a non-monotonically increasing transfer function, called Liao's chaotic delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods in circuit design, especially for circuits with a time-delay unit and a non-monotonically increasing activation unit, are also considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.

  17. Modeling non-monotonic properties under propositional argumentation

    Science.gov (United States)

    Wang, Geng; Lin, Zuoquan

    2013-03-01

In the field of knowledge representation, argumentation is usually considered as an abstract framework for nonclassical logic. In this paper, however, we present a propositional argumentation framework, which can be used to more closely simulate a real-world argumentation. We thereby argue that under a dialectical argumentation game, we can allow non-monotonic reasoning even under classical logic. We introduce two methods together for gaining nonmonotonicity: one by assigning plausibility to arguments, the other by adding "exceptions", which are similar to defaults. Furthermore, we give an alternative definition for propositional argumentation using argumentative models, which is highly related to the previous reasoning method but comes with a simple algorithm for calculation.

  18. A Min-max Relation for Monotone Path Systems in Simple Regions

    DEFF Research Database (Denmark)

    Cameron, Kathleen

    1996-01-01

A monotone path system (MPS) is a finite set of pairwise disjoint paths (polygonal arcs) in the plane such that every horizontal line intersects each of the paths in at most one point. We consider a simple polygon in the xy-plane which bounds the simple polygonal (closed) region D. Let T and B be two...

  19. On-line learning of non-monotonic rules by simple perceptron

    OpenAIRE

    Inoue, Jun-ichi; Nishimori, Hidetoshi; Kabashima, Yoshiyuki

    1997-01-01

    We study the generalization ability of a simple perceptron which learns unlearnable rules. The rules are presented by a teacher perceptron with a non-monotonic transfer function. The student is trained in the on-line mode. The asymptotic behaviour of the generalization error is estimated under various conditions. Several learning strategies are proposed and improved to obtain the theoretical lower bound of the generalization error.

  20. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  1. The Monotonic Lagrangian Grid for Rapid Air-Traffic Evaluation

    Science.gov (United States)

    Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay

    2010-01-01

    The Air Traffic Monotonic Lagrangian Grid (ATMLG) is presented as a tool to evaluate new air traffic system concepts. The model, based on an algorithm called the Monotonic Lagrangian Grid (MLG), can quickly sort, track, and update positions of many aircraft, both on the ground (at airports) and in the air. The underlying data structure is based on the MLG, which is used for sorting and ordering positions and other data needed to describe N moving bodies and their interactions. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. Recent upgrades to ATMLG include adding blank place-holders within the MLG data structure, which makes it possible to dynamically change the MLG size and also improves the quality of the MLG grid. Additional upgrades include adding FAA flight plan data, such as way-points and arrival and departure times from the Enhanced Traffic Management System (ETMS), and combining the MLG with the state-of-the-art strategic and tactical conflict detection and resolution algorithms from the NASA-developed Stratway software. In this paper, we present results from our early efforts to couple ATMLG with the Stratway software, and we demonstrate that it can be used to quickly simulate air traffic flow for a very large ETMS dataset.
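The sorted-array idea behind the MLG can be illustrated in one dimension. The toy class below is not the NRL implementation, just a sketch of why bodies that are close in physical space end up as near neighbors in the data array, so a nearest-neighbor query inspects only adjacent entries instead of scanning all N bodies:

```python
import bisect

class MonotonicGrid1D:
    """Toy 1-D analogue of a Monotonic Lagrangian Grid: positions are kept
    sorted, so bodies close in space are neighbors in the array."""

    def __init__(self, positions):
        self.positions = sorted(positions)

    def insert(self, x):
        # O(log N) search plus an O(N) shift keeps the array monotone.
        bisect.insort(self.positions, x)

    def nearest_neighbor(self, x):
        # Because the array is sorted, only the two bracketing entries
        # need to be inspected -- no global distance scan.
        i = bisect.bisect_left(self.positions, x)
        candidates = self.positions[max(0, i - 1):i + 2]
        return min(candidates, key=lambda p: abs(p - x))

grid = MonotonicGrid1D([5.0, 1.0, 9.0, 3.0])
grid.insert(4.2)
print(grid.nearest_neighbor(4.0))  # 4.2
```

The real MLG extends this ordering to three spatial dimensions and, as described above, inserts blank place-holders so the grid can grow and shrink dynamically.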

  2. Monotonicity of fitness landscapes and mutation rate control.

    Science.gov (United States)

    Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G

    2016-12-01

A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and show that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.

  3. A Mathematical Model for Non-monotonic Deposition Profiles in Deep Bed Filtration Systems

    DEFF Research Database (Denmark)

    Yuan, Hao; Shapiro, Alexander

    2011-01-01

A mathematical model for suspension/colloid flow in porous media with non-monotonic deposition is proposed. It accounts for the migration of particles associated with the pore walls via the second energy minimum (surface-associated phase). The surface-associated phase migration is characterized by advection and diffusion/dispersion. The proposed model is able to produce a non-monotonic deposition profile. A set of methods for estimating the modeling parameters is provided in the case of minimal particle release; the estimation can be easily performed with available experimental information. The numerical modeling results agree closely with the experimental observations, which demonstrates the ability of the model to capture a non-monotonic deposition profile in practice. An additional equation describing a mobile population behaving differently from the injected population seems to be a sufficient...

  4. Mean-squared displacements for normal and anomalous diffusion of grains

    International Nuclear Information System (INIS)

    Trigger, S A; Heijst, G J F van; Schram, P P J M

    2005-01-01

The problem of normal and anomalous space diffusion is formulated on the basis of integral equations with various types of probability transition functions for diffusion (PTD functions). For the cases of stationary and time-independent PTD functions, the method of fractional differentiation is avoided in order to construct the correct probability distributions for arbitrary distances, which is important for applications to different stochastic problems. A new general integral equation for the particle distribution, which contains the time-dependent PTD function with one or, for more complicated physical situations, two times, is formulated and discussed. On this basis fractional differentiation in time is also avoided, and a wide class of time-dependent PTD functions can be investigated. Calculations of the mean-squared displacements for the various cases are performed on the basis of the formulated approach. The particular problems for PTD functions depending on one and on two times are solved.
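For the normal-diffusion baseline, the mean-squared displacement grows linearly in time, MSD(t) = t for an unbiased unit-step walk. A minimal Monte Carlo estimate (an illustration of that baseline only; the anomalous regimes treated in the paper require the heavy-tailed PTD functions discussed above) can be sketched as:

```python
import random

def msd_random_walk(n_steps, n_walkers, seed=0):
    """Mean-squared displacement of an unbiased +/-1 random walk.
    For normal diffusion, MSD(t) = t in expectation (linear growth)."""
    rng = random.Random(seed)
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_walkers):
        x = 0
        for t in range(1, n_steps + 1):
            x += rng.choice((-1, 1))
            msd[t] += x * x
    return [m / n_walkers for m in msd]

msd = msd_random_walk(100, 5000)
# msd[t] stays close to t, the signature of normal diffusion;
# sub- or super-linear growth would indicate anomalous transport.
```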

  5. Regional trends in short-duration precipitation extremes: a flexible multivariate monotone quantile regression approach

    Science.gov (United States)

    Cannon, Alex

    2017-04-01

    Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. However, it is fundamentally a
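The regional Mann-Kendall test mentioned above pools the classical single-site statistic across stations. The single-site version is easy to sketch; the function below is a minimal illustration without the tie correction or regional pooling (both of which a real analysis would need):

```python
import math

def mann_kendall(x):
    """Nonparametric Mann-Kendall test for a monotonic trend.
    Returns the S statistic and its normal-approximation Z score
    (no tie correction -- a deliberate simplification)."""
    n = len(x)
    # S counts concordant minus discordant pairs (i < j).
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series attains the maximum S = n(n-1)/2.
s, z = mann_kendall([1.0, 2.0, 3.0, 4.0, 5.0])
print(s)  # 10
```

Note the contrast drawn in the abstract: this test only detects the presence of a monotonic trend, whereas monotone quantile regression also estimates and visualizes its shape.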

  6. Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories

    OpenAIRE

    Wałęga, Przemysław Andrzej; Schultz, Carl; Bhatt, Mehul

    2016-01-01

The systematic modelling of dynamic spatial systems is a key requirement in a wide range of application areas such as commonsense cognitive robotics, computer-aided architecture design, and dynamic geographic information systems. We present ASPMT(QS), a novel approach and fully implemented prototype for non-monotonic spatial reasoning - a crucial requirement within dynamic spatial systems - based on Answer Set Programming Modulo Theories (ASPMT). ASPMT(QS) consists of a (qualitative) spatial re...

  7. Explicit solutions of one-dimensional, first-order, stationary mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon; Prazeres, Mariana

    2017-01-01

    Here, we consider one-dimensional first-order stationary mean-field games with congestion. These games arise when crowds face difficulty moving in high-density regions. We look at both monotone decreasing and increasing interactions and construct

  8. Monotonic Set-Extended Prefix Rewriting and Verification of Recursive Ping-Pong Protocols

    DEFF Research Database (Denmark)

    Delzanno, Giorgio; Esparza, Javier; Srba, Jiri

    2006-01-01

    of messages) some verification problems become decidable. In particular we give an algorithm to decide control state reachability, a problem related to security properties like secrecy and authenticity. The proof is via a reduction to a new prefix rewriting model called Monotonic Set-extended Prefix rewriting...

  9. Necessary and sufficient conditions for a class of functions and their reciprocals to be logarithmically completely monotonic

    OpenAIRE

    Lv Yu-Pei; Sun Tian-Chuan; Chu Yu-Ming

    2011-01-01

We prove that the function F_{α,β}(x) = x^α Γ^β(x)/Γ(βx) is strictly logarithmically completely monotonic on (0, ∞) if and only if (α, β) ∈ {(α, β) : β > 0, β ≥ 2α + 1, β ≥ α + 1} ∖ {(α, β) : α = 0, β = 1}, and that [F_{α,β}(x)]^{-1} is strictly logarithmically completely monotonic on (0, ∞) if and only if (α, β) ∈ {(α, β ...

  10. Non-monotonic relationships between emotional arousal and memory for color and location.

    Science.gov (United States)

    Boywitt, C Dennis

    2015-01-01

Recent research points to the decreased diagnostic value of subjective retrieval experience for memory accuracy for emotional stimuli. While for neutral stimuli rich recollective experiences are associated with better context memory than merely familiar memories, this association appears questionable for emotional stimuli. The present research tested the implicit assumption that the effect of emotional arousal on memory is monotonic, that is, steadily increasing (or decreasing) with increasing arousal. In two experiments, emotional arousal was manipulated in three steps using emotional pictures, and subjective retrieval experience as well as context memory were assessed. The results show an inverted U-shaped relationship between arousal and recognition memory, but for context memory and retrieval experience the relationship was more complex. For frame colour, context memory decreased linearly, while for spatial location it followed the inverted U-shaped function. The complex, non-monotonic relationships between arousal and memory are discussed as possible explanations for earlier divergent findings.

  11. An iterative method for nonlinear demiclosed monotone-type operators

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1991-01-01

It is proved that a well known fixed point iteration scheme which has been used for approximating solutions of certain nonlinear demiclosed monotone-type operator equations in Hilbert spaces remains applicable in real Banach spaces with property (U, α, m+1, m). These Banach spaces include the L_p-spaces, p ∈ [2, ∞]. An application of our results to the approximation of a solution of a certain linear operator equation in this general setting is also given. (author). 19 refs

  12. Generalized convexity, generalized monotonicity recent results

    CERN Document Server

    Martinez-Legaz, Juan-Enrique; Volle, Michel

    1998-01-01

A function is convex if its epigraph is convex. This geometrical structure has very strong implications in terms of continuity and differentiability. Separation theorems lead to optimality conditions and duality for convex problems. A function is quasiconvex if its lower level sets are convex. Here again, the geometrical structure of the level sets implies some continuity and differentiability properties for quasiconvex functions. Optimality conditions and duality can be derived for optimization problems involving such functions as well. Over a period of about fifty years, quasiconvex and other generalized convex functions have been considered in a variety of fields including economics, management science, engineering, probability and applied sciences in accordance with the need of particular applications. During the last twenty-five years, an increase of research activities in this field has been witnessed. More recently generalized monotonicity of maps has been studied. It relates to generalized conve...

  13. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization

  14. Monotonic and fatigue deformation of Ni--W directionally solidified eutectic

    International Nuclear Information System (INIS)

    Garmong, G.; Williams, J.C.

    1975-01-01

    Unlike many eutectic composites, the Ni--W eutectic exhibits extensive ductility by slip. Furthermore, its properties may be greatly varied by proper heat treatments. Results of studies of deformation in both monotonic and fatigue loading are reported. During monotonic deformation the fiber/matrix interface acts as a source of dislocations at low strains and an obstacle to matrix slip at higher strains. Deforming the quenched-plus-aged eutectic causes planar matrix slip, with the result that matrix slip bands create stress concentrations in the fibers at low strains. The aged eutectic reaches generally higher stress levels for comparable strains than does the as-quenched eutectic, and the failure strains decrease with increasing aging times. For the composites tested in fatigue, the aged eutectic has better high-stress fatigue resistance than the as-quenched material, but for low-stress, high-cycle fatigue their cycles to failure are nearly the same. However, both crack initiation and crack propagation are different in the two conditions, so the coincidence in high-cycle fatigue is probably fortuitous. The effect of matrix strength on composite performance is not simple, since changes in strength may be accompanied by alterations in slip modes and failure processes. (17 fig) (auth)

  15. Multistability and gluing bifurcation to butterflies in coupled networks with non-monotonic feedback

    International Nuclear Information System (INIS)

    Ma Jianfu; Wu Jianhong

    2009-01-01

    Neural networks with a non-monotonic activation function have been proposed to increase their capacity for memory storage and retrieval, but there is still a lack of rigorous mathematical analysis and detailed discussions of the impact of time lag. Here we consider a two-neuron recurrent network. We first show how supercritical pitchfork bifurcations and a saddle-node bifurcation lead to the coexistence of multiple stable equilibria (multistability) in the instantaneous updating network. We then study the effect of time delay on the local stability of these equilibria and show that four equilibria lose their stability at a certain critical value of time delay, and Hopf bifurcations of these equilibria occur simultaneously, leading to multiple coexisting periodic orbits. We apply centre manifold theory and normal form theory to determine the direction of these Hopf bifurcations and the stability of bifurcated periodic orbits. Numerical simulations show very interesting global patterns of periodic solutions as the time delay is varied. In particular, we observe that these four periodic solutions are glued together along the stable and unstable manifolds of saddle points to develop a butterfly structure through a complicated process of gluing bifurcations of periodic solutions

  16. Diagnosis of constant faults in iteration-free circuits over monotone basis

    KAUST Repository

    Alrawaf, Saad Abdullah; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2014-01-01

    We show that for each iteration-free combinatorial circuit S over a basis B containing only monotone Boolean functions with at most five variables, there exists a decision tree for diagnosis of constant faults on inputs of gates with depth at most 7L(S) where L(S) is the number of gates in S. © 2013 Elsevier B.V. All rights reserved.

  18. Effect of fiber fabric orientation on the flexural monotonic and fatigue behavior of 2D woven ceramic matrix composites

    International Nuclear Information System (INIS)

    Chawla, N.; Liaw, P.K.; Lara-Curzio, E.; Ferber, M.K.; Lowden, R.A.

    2012-01-01

The effect of fiber fabric orientation, i.e., parallel or perpendicular to the loading axis, on the monotonic and fatigue behavior of plain-weave fiber-reinforced SiC matrix laminated composites was investigated. Two composite systems were studied: Nextel 312 (3M Corp.) reinforced SiC and Nicalon (Nippon Carbon Corp.) reinforced SiC, both fabricated by Forced Chemical Vapor Infiltration (FCVI). The behavior of both materials was investigated under monotonic and fatigue loading. Interlaminar and in-plane shear tests were conducted to correlate shear properties with the fabric-orientation effects observed in bending. The underlying mechanisms, in monotonic and fatigue loading, were investigated through post-fracture examination using scanning electron microscopy (SEM).

  19. Elucidating the Relations Between Monotonic and Fatigue Properties of Laser Powder Bed Fusion Stainless Steel 316L

    Science.gov (United States)

    Zhang, Meng; Sun, Chen-Nan; Zhang, Xiang; Goh, Phoi Chin; Wei, Jun; Li, Hua; Hardacre, David

    2018-03-01

The laser powder bed fusion (L-PBF) technique builds parts with higher static strength than the conventional manufacturing processes through the formation of ultrafine grains. However, its fatigue endurance strength σ_f does not match the increased monotonic tensile strength σ_b. This work examines the monotonic and fatigue properties of as-built and heat-treated L-PBF stainless steel 316L. It was found that the general linear relation σ_f = m·σ_b for describing conventional ferrous materials is not applicable to L-PBF parts because of the influence of porosity. Instead, the ductility parameter correlated linearly with fatigue strength and was proposed as the new fatigue assessment criterion for porous L-PBF parts. Annealed parts conformed to the strength-ductility trade-off. Fatigue resistance was reduced at short lives, but the effect was partially offset by the higher ductility, such that compared with an as-built part of equivalent monotonic strength, the heat-treated parts were more fatigue resistant.

  20. Identical bands at normal deformation: Necessity of going beyond the mean-field approach

    International Nuclear Information System (INIS)

    Sun, Y.; Wu, C.; Feng, D.H.; Egido, J.L.; Guidry, M.

    1996-01-01

    The validity of BCS theory has been questioned because the appearance of normally deformed identical bands in odd and even nuclei seems to contradict the conventional understanding of the blocking effect. This problem is examined with the projected shell model (PSM), which projects good angular momentum states and includes many-body correlations in both deformation and pairing channels. Satisfactory reproduction of identical band data by the PSM suggests that it may be necessary to go beyond the mean field to obtain a quantitative account of identical bands. copyright 1996 The American Physical Society

  1. Some Results on Mean Square Error for Factor Score Prediction

    Science.gov (United States)

    Krijnen, Wim P.

    2006-01-01

For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^{1/2} Λ_ρ′ Ψ_ρ…

  2. Robust Monotonically Convergent Iterative Learning Control for Discrete-Time Systems via Generalized KYP Lemma

    Directory of Open Access Journals (Sweden)

    Jian Ding

    2014-01-01

This paper addresses the problem of P-type iterative learning control for a class of multiple-input multiple-output linear discrete-time systems, with the aim of developing a robust monotonically convergent control law design over a finite frequency range. It is shown that the 2-D iterative learning control process can be taken as a 1-D state-space model regardless of relative degree. With the generalized Kalman-Yakubovich-Popov lemma applied, it is feasible to describe the monotonic convergence conditions with the help of the linear matrix inequality technique and to develop formulas for the design of the control gain matrices. An extension to robust control law design against systems with structured and polytopic-type uncertainties is also considered. Two numerical examples are provided to validate the feasibility and effectiveness of the proposed method.
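The P-type update analyzed here, u_{k+1}(t) = u_k(t) + Γ e_k(t+1), can be sketched on a scalar discrete-time plant. The plant (a = 0.5, b = 1) and gain (γ = 0.8) below are made-up illustrative values, not the paper's LMI-derived design; convergence here follows simply because |1 − γb| < 1:

```python
import math

def run_trial(u, a=0.5, b=1.0):
    """One trial of x(t+1) = a x(t) + b u(t), y(t) = x(t), x(0) = 0."""
    x, y = 0.0, [0.0]
    for ut in u:
        x = a * x + b * ut
        y.append(x)
    return y

def p_type_ilc(y_ref, gamma=0.8, iterations=60):
    """P-type ILC: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1), e_k = y_ref - y_k.
    The trial-to-trial error map is triangular with diagonal 1 - gamma*b,
    so the tracking error contracts over iterations."""
    T = len(y_ref) - 1
    u = [0.0] * T
    sup_errors = []
    for _ in range(iterations):
        y = run_trial(u)
        e = [r - yt for r, yt in zip(y_ref, y)]
        sup_errors.append(max(abs(v) for v in e))
        u = [u[t] + gamma * e[t + 1] for t in range(T)]
    return u, sup_errors

y_ref = [math.sin(0.3 * t) for t in range(21)]   # y_ref(0) = 0 = x(0)
u, errs = p_type_ilc(y_ref)
print(errs[-1] < 1e-8)  # True
```

Note that the sup-norm error need not decrease monotonically in general; the frequency-domain conditions in the paper are precisely what enforce monotonic convergence.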

  3. Asian Option Pricing with Monotonous Transaction Costs under Fractional Brownian Motion

    Directory of Open Access Journals (Sweden)

    Di Pan

    2013-01-01

A geometric-average Asian option pricing model with a monotonic transaction cost rate under fractional Brownian motion was established. The method of partial differential equations was used to solve this model, and analytical expressions for the Asian option value were obtained. The numerical experiments show that the Hurst exponent of the fractional Brownian motion and the transaction cost rate have a significant impact on the option value.
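The ingredients (geometric averaging, fBm driving noise) can be conveyed with a Monte Carlo sketch. This is not the paper's PDE solution: it ignores transaction costs, the drift correction and all parameter values are illustrative assumptions, and pricing by plain expectation under fBm is heuristic. The fBm is sampled exactly via a Cholesky factor of its covariance 0.5(s^{2H} + t^{2H} − |t−s|^{2H}):

```python
import numpy as np

def fbm_paths(n_paths, n_steps, T, H, seed=0):
    """Sample fractional Brownian motion on a grid by the Cholesky method."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)        # exclude t = 0 (singular)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)
    return t, rng.standard_normal((n_paths, n_steps)) @ L.T

def geometric_asian_mc(S0, K, r, sigma, T, H, n_paths=20000, n_steps=50):
    """Monte Carlo value of a geometric-average Asian call driven by fBm
    (illustrative only; reduces to the standard GBM case when H = 0.5)."""
    t, B = fbm_paths(n_paths, n_steps, T, H)
    # log-price with the fBm variance correction sigma^2 t^{2H} / 2
    S = S0 * np.exp(r * t - 0.5 * sigma ** 2 * t ** (2 * H) + sigma * B)
    G = np.exp(np.log(S).mean(axis=1))              # geometric average
    return np.exp(-r * T) * np.maximum(G - K, 0.0).mean()

price = geometric_asian_mc(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, H=0.7)
```

For H = 0.5 the estimate lands near the classical discrete geometric-Asian value (about 5.6 for these at-the-money parameters), which is a useful sanity check on the simulation.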

  4. Optimization of nonlinear, non-Gaussian Bayesian filtering for diagnosis and prognosis of monotonic degradation processes

    Science.gov (United States)

    Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.

    2018-05-01

The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering for predicting structural degradation, the adequacy of the chosen process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining-life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
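The role of the process noise for a monotonic dynamic can be illustrated with a toy bootstrap particle filter for Paris-law crack growth. Everything here is invented for illustration (the constants C, m, beta, the lognormal noise scale, and the measurement model are not from the paper); the point is that a nonnegative, multiplicative noise keeps every particle trajectory monotonically increasing, whereas an additive Gaussian noise would not:

```python
import numpy as np

def particle_filter_crack(obs, n_particles=2000, dN=1000, seed=0):
    """Bootstrap particle filter for Paris-law growth da/dN = C (beta sqrt(a))^m.
    The process noise is a nonnegative multiplicative factor, so each
    particle's crack length never decreases (hypothetical constants)."""
    rng = np.random.default_rng(seed)
    C, m, beta = 1e-9, 3.0, 50.0           # assumed model constants
    sigma_obs = 0.2                        # assumed measurement noise [mm]
    a = np.full(n_particles, 1.0)          # initial crack length [mm]
    estimates = []
    for z in obs:
        # propagate: deterministic growth times lognormal (>= 0) noise
        growth = C * (beta * np.sqrt(a)) ** m * dN
        a = a + growth * rng.lognormal(mean=0.0, sigma=0.3, size=n_particles)
        # weight by a Gaussian measurement likelihood and resample
        w = np.exp(-0.5 * ((z - a) / sigma_obs) ** 2)
        w /= w.sum()
        a = a[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(a.mean())
    return np.array(estimates)
```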

  5. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity.

    Science.gov (United States)

    Liu, Feng; Walters, Stephen J; Julious, Steven A

    2017-10-02

It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can be selected for subsequent late-phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose-response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial of GSK123456 was a Bayesian Emax model, which assumes that the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax and NDLM models, and both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose-response curves: linear, Emax, U-shaped, and flat. It is shown that the NDLM method is flexible and can handle a wide variety of dose responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with a higher probability of selecting ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding that a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to a pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response follows a placebo-like curve, an Emax-like curve, or log
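The tension described above comes from the Emax model being monotone by construction, which is exactly why it cannot represent a U-shaped response. A minimal sketch with an illustrative parameterization (the sigmoidal form and the closed-form ED_p are standard; the numbers are made up):

```python
def emax_response(dose, e0, emax, ed50, h=1.0):
    """Sigmoidal Emax dose-response: E(d) = E0 + Emax * d^h / (ED50^h + d^h).
    Strictly increasing in dose, so it can never produce a U-shape."""
    return e0 + emax * dose ** h / (ed50 ** h + dose ** h)

def ed_p(p, ed50, h=1.0):
    """Dose achieving fraction p of the maximal effect (p = 0.9 gives ED90):
    solves d^h / (ED50^h + d^h) = p in closed form."""
    return ed50 * (p / (1.0 - p)) ** (1.0 / h)

# With h = 1, ED90 = 9 * ED50.
ed90 = ed_p(0.9, ed50=2.0)
print(round(ed90, 3))  # 18.0
```

An NDLM, by contrast, places a smooth prior on the response at each dose without a monotonicity constraint, which is the flexibility (and the type-I-error cost) the abstract reports.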

  6. Local Monotonicity and Isoperimetric Inequality on Hypersurfaces in Carnot groups

    Directory of Open Access Journals (Sweden)

    Francesco Paolo Montefalcone

    2010-12-01

Let G be a k-step Carnot group of homogeneous dimension Q. We shall present some of the results recently obtained in [32] and, in particular, an intrinsic isoperimetric inequality for a C²-smooth compact hypersurface S with boundary ∂S. We stress that S and ∂S are endowed with the homogeneous measures σ_H^{n−1} and σ_H^{n−2}, respectively, which are actually equivalent to the intrinsic (Q−1)-dimensional and (Q−2)-dimensional Hausdorff measures with respect to a given homogeneous metric ρ on G. This result generalizes a classical inequality, involving the mean curvature of the hypersurface, proven independently by Michael and Simon [29] and Allard [1]. One may also deduce some related Sobolev-type inequalities. The strategy of the proof is inspired by the classical one and is discussed in the first section. After recalling some preliminary notions about Carnot groups, we begin by proving a linear isoperimetric inequality. The second step is a local monotonicity formula. The proof is then achieved by a covering argument. We stress, however, that there are many differences due to our non-Euclidean setting. Some of the tools developed ad hoc are, in order, a "blow-up" theorem, which holds true also for characteristic points, and a smooth coarea formula for the HS-gradient. Other tools are the horizontal integration by parts formula and the first variation formula for the H-perimeter σ_H^{n−1}, already developed in [30, 31] and then generalized to hypersurfaces having non-empty characteristic set in [32]. These results can be useful in the study of minimal and constant horizontal mean curvature hypersurfaces in Carnot groups.

  7. Psychophysiological responses to short-term cooling during a simulated monotonous driving task.

    Science.gov (United States)

    Schmidt, Elisabeth; Decke, Ralf; Rasshofer, Ralph; Bullinger, Angelika C

    2017-07-01

For drivers on monotonous routes, cognitive fatigue causes discomfort and poses an important risk to traffic safety. Countermeasures against this type of fatigue are required, and thermal stimulation is one intervention method. Surprisingly, there are hardly any studies available that measure the effect of cooling while driving. Hence, to better understand the effect of short-term cooling on the perceived sleepiness of car drivers, a driving simulator study (n = 34) was conducted in which physiological and vehicular data during cooling and control conditions were compared. The evaluation of the study showed that cooling applied during a monotonous drive increased the alertness of the car driver. The sleepiness rankings were significantly lower for the cooling condition. Furthermore, the significant pupillary and electrodermal responses were physiological indicators of increased sympathetic activation. In addition, better driving performance was observed during cooling. In conclusion, the study shows that cooling generally has a positive short-term effect on drivers' wakefulness; in detail, a cooling period of 3 min delivers the best results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Oscillation of Nonlinear Delay Differential Equation with Non-Monotone Arguments

    Directory of Open Access Journals (Sweden)

    Özkan Öcalan

    2017-07-01

Consider the first-order nonlinear retarded differential equation $$ x^{\prime}(t) + p(t)f\left(x\left(\tau(t)\right)\right) = 0, \quad t \geq t_{0}, $$ where $p(t)$ and $\tau(t)$ are functions of positive real numbers such that $\tau(t) \leq t$ for $t \geq t_{0}$ and $\lim_{t\rightarrow\infty}\tau(t) = \infty$. Under the assumption that the retarded argument is non-monotone, new oscillation results are given. An example illustrating the result is also given.
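Oscillation of such retarded equations is easy to observe numerically. The sketch below integrates the linear constant-coefficient special case f(x) = x, p(t) = p, tau(t) = t − tau by the Euler method; the values p = 2, tau = 1 are illustrative (they satisfy the classical oscillation criterion p·tau > 1/e for this linear case, which is not the non-monotone setting of the paper):

```python
def simulate_dde(p=2.0, tau=1.0, T=20.0, h=0.01):
    """Euler scheme for x'(t) = -p * x(t - tau) with history x(t) = 1, t <= 0.
    Since p * tau = 2 > 1/e, every solution oscillates (classical criterion)."""
    lag = int(round(tau / h))
    xs = [1.0] * (lag + 1)           # constant history on [-tau, 0]
    n = int(round(T / h))
    for _ in range(n):
        xs.append(xs[-1] - h * p * xs[-1 - lag])
    return xs

xs = simulate_dde()
# repeated sign changes of x(t) are the numerical footprint of oscillation
sign_changes = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
```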

  9. Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps

    Science.gov (United States)

    Qin, Wen-Xin; Wang, Ya-Nan

    2018-06-01

    A non-exact monotone twist map φ̄_F is a composition of an exact monotone twist map φ̄ with a generating function H and a vertical translation V_F with V_F((x, y)) = (x, y - F). We show in this paper that for each ω ∈ R, there exists a critical value F_d(ω) ≥ 0 depending on H and ω such that for 0 ≤ F ≤ F_d(ω), the non-exact twist map φ̄_F has an invariant Denjoy minimal set with irrational rotation number ω lying on a Lipschitz graph, or Birkhoff (p, q)-periodic orbits for rational ω = p/q. Like the Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value F = F_d(ω), the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.

  10. Non-monotonic behaviour in relaxation dynamics of image restoration

    International Nuclear Information System (INIS)

    Ozeki, Tomoko; Okada, Masato

    2003-01-01

    We have investigated the relaxation dynamics of image restoration through a Bayesian approach. The relaxation dynamics is much faster at zero temperature than at the Nishimori temperature where the pixel-wise error rate is minimized in equilibrium. At low temperature, we observed non-monotonic development of the overlap. We suggest that the optimal performance is realized through premature termination in the relaxation processes in the case of the infinite-range model. We also performed Markov chain Monte Carlo simulations to clarify the underlying mechanism of non-trivial behaviour at low temperature by checking the local field distributions of each pixel.

  11. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than the ones with Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, those obtained 3^n locally stable equilibrium points are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
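
    As a toy illustration of how a non-monotonic piecewise-linear activation enlarges the set of equilibria, the sketch below uses a hypothetical activation and a one-dimensional network dx/dt = -x + 2 f(x) (not the exact functions or network from the paper) and counts the equilibria by sign changes:

```python
import numpy as np

def pwl_activation(x):
    # Hypothetical non-monotonic piecewise-linear activation (illustration
    # only): saturates at -1, rises on [-1, 1], falls on (1, 3], saturates.
    x = np.asarray(x, dtype=float)
    return np.piecewise(
        x,
        [x < -1, (x >= -1) & (x <= 1), (x > 1) & (x <= 3), x > 3],
        [-1.0, lambda t: t, lambda t: 2.0 - t, -1.0],
    )

# Equilibria of the scalar network dx/dt = -x + 2*f(x) are zeros of g:
xs = np.linspace(-5.0, 5.0, 1000)   # grid chosen so no exact root is sampled
g = -xs + 2.0 * pwl_activation(xs)
equilibria = int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))
# Three equilibria (x = -2, 0, 4/3); a monotone saturating f of the same
# slope yields fewer, which is the storage-capacity point of the abstract.
```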

  12. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  13. Monotonicity of the von Neumann entropy expressed as a function of Rényi entropies

    OpenAIRE

    Fannes, Mark

    2013-01-01

    The von Neumann entropy of a density matrix of dimension d, expressed in terms of the first d-1 integer order Rényi entropies, is monotonically increasing in Rényi entropies of even order and decreasing in those of odd order.

  14. Some completely monotonic properties for the $(p,q )$-gamma function

    OpenAIRE

    Krasniqi, Valmir; Merovci, Faton

    2014-01-01

    The $\Gamma_{p,q}$ function, a generalization of the $\Gamma$ function, is defined. The $\psi_{p,q}$-analogue of the psi function is also defined as the log derivative of $\Gamma_{p,q}$. For the $\Gamma_{p,q}$ function, some properties related to convexity, log-convexity, and complete monotonicity are given. Some properties of the $\psi_{p,q}$ analogue of the $\psi$ function are also established. As an application, when $p\to \infty, q\to 1$, we obtain all results of \cite{Valmir1} and \cite{SHA}.

  15. Non-monotonic probability of thermal reversal in thin-film biaxial nanomagnets with small energy barriers

    Directory of Open Access Journals (Sweden)

    N. Kani

    2017-05-01

    Full Text Available The goal of this paper is to investigate the short time-scale, thermally-induced probability of magnetization reversal for a nanomagnet characterized by a biaxial magnetic anisotropy. For the first time, we clearly show that for a given energy barrier of the nanomagnet, the magnetization reversal probability of a biaxial nanomagnet exhibits a non-monotonic dependence on its saturation magnetization. Specifically, there are two reasons for this non-monotonic behavior in rectangular thin-film nanomagnets that have a large perpendicular magnetic anisotropy. First, a large perpendicular anisotropy lowers the precessional period of the magnetization, making it more likely to precess across the x̂ = 0 plane if the magnetization energy exceeds the energy barrier. Second, the thermal-field torque at a particular energy increases as the magnitude of the perpendicular anisotropy increases during the magnetization precession. This non-monotonic behavior is most noticeable when analyzing magnetization reversals on time-scales up to several tens of ns. In light of the several proposals of spintronic devices that require data retention on time-scales up to tens of ns, understanding the probability of magnetization reversal on short time-scales is important. As such, the results presented in this paper will be helpful in quantifying the reliability and noise sensitivity of spintronic devices in which thermal noise is inevitably present.

  16. Reduction theorems for weighted integral inequalities on the cone of monotone functions

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Stepanov, V.D.

    2013-01-01

    Roč. 68, č. 4 (2013), s. 597-664 ISSN 0036-0279 R&D Projects: GA ČR GA201/08/0383; GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords : weighted Lebesgue space * cone of monotone functions * duality principle Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2013 http://iopscience.iop.org/0036-0279/68/4/597

  17. Sufficient Descent Conjugate Gradient Methods for Solving Convex Constrained Nonlinear Monotone Equations

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
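
    The hyperplane-projection step mentioned in the abstract can be sketched in a few lines. This is a simplified variant with a steepest-descent-like direction rather than the paper's conjugate gradient directions, and the monotone test problem F(x) = x + sin(x) is an assumption for illustration:

```python
import numpy as np

def hyperplane_projection(F, x0, tol=1e-8, max_iter=500, sigma=1e-4, beta=0.5):
    """Solodov-Svaiter-style projection method for continuous monotone
    F(x) = 0: line search to a point z, then project the iterate onto the
    hyperplane {y : F(z) @ (y - z) = 0} separating it from the solutions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                          # simple descent-like direction
        alpha = 1.0
        while -F(x + alpha * d) @ d < sigma * alpha * (d @ d) and alpha > 1e-12:
            alpha *= beta                # backtracking line search
        z = x + alpha * d
        Fz = F(z)
        if Fz @ Fz == 0.0:               # landed exactly on a root
            return z
        # projection of x onto the separating hyperplane
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x

# Toy monotone system (componentwise x + sin(x)), unique root at the origin
root = hyperplane_projection(lambda v: v + np.sin(v), np.array([2.0, -1.5]))
```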

  18. Asymptotic estimates and exponential stability for higher-order monotone difference equations

    Directory of Open Access Journals (Sweden)

    Pituk Mihály

    2005-01-01

    Full Text Available Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.

  20. Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI

    DEFF Research Database (Denmark)

    Nunes, Daniel; Cruz, Tomás L; Jespersen, Sune N

    2017-01-01

    available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures – such as axons and extra-axonal spaces, which we here used in a simple model for the microstructure – and that, for axons parallel to the main magnetic field… When the quantitative results are compared against ground-truth histology, they seem to reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). As well, the extra-axonal fraction can be estimated. The results suggest that our model is oversimplified, yet at the same time evidencing…

  1. A note on monotone solutions for a nonconvex second-order functional differential inclusion

    Directory of Open Access Journals (Sweden)

    Aurelian Cernea

    2011-12-01

    Full Text Available The existence of monotone solutions for a second-order functional differential inclusion with Carath\\'{e}odory perturbation is obtained in the case when the multifunction that define the inclusion is upper semicontinuous compact valued and contained in the Fr\\'{e}chet subdifferential of a $\\phi $-convex function of order two.

  2. Inelastic behavior of materials and structures under monotonic and cyclic loading

    CERN Document Server

    Brünig, Michael

    2015-01-01

    This book presents studies on the inelastic behavior of materials and structures under monotonic and cyclic loads. It focuses on the description of new effects like purely thermal cycles or cases of non-trivial damages. The various models are based on different approaches and methods and scaling aspects are taken into account. In addition to purely phenomenological models, the book also presents mechanisms-based approaches. It includes contributions written by leading authors from a host of different countries.

  3. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity

    Directory of Open Access Journals (Sweden)

    Feng Liu

    2017-10-01

    Full Text Available Abstract Background It is important to quantify the dose response for a drug in phase 2a clinical trials so the optimal doses can then be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. Methods The planned analysis for the phase 2a trial for GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S" shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax model and the NDLM model, and both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose response curves: linear, Emax model, U-shaped model, and flat response. Results It is shown that the NDLM method is flexible and can handle a wide variety of dose-responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with a higher probability of selecting ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response

  4. THE INFLUENCE OF MONOTONY, SLEEP QUALITY, PSYCHOPHYSIOLOGY, DISTRACTION, AND WORK FATIGUE ON ALERTNESS LEVEL

    Directory of Open Access Journals (Sweden)

    Wiwik Budiawan

    2016-02-01

    Full Text Available Humans, as working subjects, have limitations that lead to errors. Human error reduces the alertness of train drivers and assistant drivers in the line of duty. Alertness is influenced by five factors: monotony, sleep quality, psychophysiological state, distraction, and work fatigue. These five factors were measured with a monotony questionnaire, the Pittsburgh Sleep Quality Index (PSQI) questionnaire, the General Job Stress questionnaire, and the FAS questionnaire, while alertness was tested with Psychomotor Vigilance Test (PVT) software. The respondents were train drivers and assistant drivers, occupations that demand a high level of alertness. The measurements were analyzed using multiple linear regression. The study found that monotony, sleep quality, psychophysiological state, distraction, and work fatigue simultaneously influence alertness. Before duty hours, the F-test value for monotony, sleep quality, and psychophysiological state was 0.876, while distraction and work fatigue (FAS) had a value of 2.371; after work, distraction and work fatigue (FAS) had an F value of 2.953, and monotony, sleep quality, and psychophysiological state had a value of 0.544. The factor with the greatest influence on alertness before duty hours was sleep quality, while after duty hours it was work fatigue.

  5. Comparison of linear and non-linear monotonicity-based shape reconstruction using exact matrix characterizations

    DEFF Research Database (Denmark)

    Garde, Henrik

    2018-01-01

    … For a fair comparison, exact matrix characterizations are used when probing the monotonicity relations to avoid errors from numerical solution to PDEs and numerical integration. Using a special factorization of the Neumann-to-Dirichlet map also makes the non-linear method as fast as the linear method…

  6. Influence of Compaction Temperature on Resistance Under Monotonic Loading of Crumb-Rubber Modified Hot-Mix Asphalts

    Directory of Open Access Journals (Sweden)

    Hugo A. Rondón-Quintana

    2012-12-01

    Full Text Available The influence of compaction temperature on resistance under monotonic loading (Marshall of Crumb-Rubber Modified (CRM Hot-Mix Asphalt (HMA was evaluated. The emphasis of this study was the application in Bogotá D.C. (Colombia. In this city the compaction temperature of HMA mixtures decreases, compared to the optimum, in about 30°C. Two asphalt cements (AC 60-70 and AC 80-100 were modified. Two particle sizes distribution curve were used. The compaction temperatures used were 120, 130, 140 and 150°C. The decrease of the compaction temperature produces a small decrease in resistance under monotonic loading of the modified mixtures tested. Mixtures without CRM undergo a lineal decrease in its resistance of up to 34%.

  8. Monotonic and Cyclic Behavior of DIN 34CrNiMo6 Tempered Alloy Steel

    Directory of Open Access Journals (Sweden)

    Ricardo Branco

    2016-04-01

    Full Text Available This paper aims at studying the monotonic and cyclic plastic deformation behavior of DIN 34CrNiMo6 high strength steel. Monotonic and low-cycle fatigue tests are conducted in ambient air, at room temperature, using standard 8-mm diameter specimens. The former tests are carried out under position control with constant displacement rate. The latter are performed under fully-reversed strain-controlled conditions, using the single-step test method, with strain amplitudes lying between ±0.4% and ±2.0%. After the tests, the fracture surfaces are examined by scanning electron microscopy in order to characterize the surface morphologies and identify the main failure mechanisms. Regardless of the strain amplitude, a softening behavior was observed throughout the entire life. Total strain energy density, defined as the sum of both tensile elastic and plastic strain energies, was revealed to be an adequate fatigue damage parameter for short and long lives.

  9. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K , M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ =ϕT Mϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K , M, that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, and without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, and it has in turn interesting theoretical implications.
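
    The mass normalization the abstract refers to is easy to verify numerically. Below is a minimal sketch using the standard Cholesky reduction of the generalized eigenproblem (the paper's residue-theorem construction is different, and the 2×2 matrices are made up for illustration):

```python
import numpy as np

def normalized_modes(K, M):
    # Mass-normalized modes of K @ phi = lam * M @ phi via Cholesky reduction:
    # with M = L @ L.T, solve the symmetric standard problem for
    # A = inv(L) @ K @ inv(L).T, then map eigenvectors back.
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T
    lam, Y = np.linalg.eigh(A)       # Y is orthonormal
    Phi = Linv.T @ Y                 # hence Phi.T @ M @ Phi = I automatically
    return lam, Phi

# Illustrative stiffness and mass matrices (hypothetical values)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
M = np.array([[2.0, 0.0], [0.0, 1.0]])
lam, Phi = normalized_modes(K, M)
modal_mass = Phi.T @ M @ Phi         # identity: the modes come out normalized
```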

  10. A note on monotonically star Lindelöf spaces | Song | Quaestiones ...

    African Journals Online (AJOL)

    A space X is monotonically star Lindelöf if one can assign to each open cover U a subspace s(U) ⊆ X, called a kernel, such that s(U) is a Lindelöf subset of X, st(s(U); U) = X, and if V refines U then s(U) ⊆ s(V), where st(s(U); U) = ∪ {U ∈ U : U ∩ s(U) ≠ ∅}. In this paper, we investigate the relationship between ...

  11. ASPMT(QS): Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories

    OpenAIRE

    Wałęga, Przemysław Andrzej; Bhatt, Mehul; Schultz, Carl

    2015-01-01

    The systematic modelling of dynamic spatial systems [9] is a key requirement in a wide range of application areas such as commonsense cognitive robotics, computer-aided architecture design, and dynamic geographic information systems. We present ASPMT(QS), a novel approach and fully-implemented prototype for non-monotonic spatial reasoning (a crucial requirement within dynamic spatial systems) based on Answer Set Programming Modulo Theories (ASPMT). ASPMT(QS) consists of a (qualitative) s...

  12. Monotonicity Conditions for Multirate and Partitioned Explicit Runge-Kutta Schemes

    KAUST Repository

    Hundsdorfer, Willem

    2013-01-01

    Multirate schemes for conservation laws or convection-dominated problems seem to come in two flavors: schemes that are locally inconsistent, and schemes that lack mass-conservation. In this paper these two defects are discussed for one-dimensional conservation laws. Particular attention will be given to monotonicity properties of the multirate schemes, such as maximum principles and the total variation diminishing (TVD) property. The study of these properties will be done within the framework of partitioned Runge-Kutta methods. It will also be seen that the incompatibility of consistency and mass-conservation holds for ‘genuine’ multirate schemes, but not for general partitioned methods.

  13. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.

  14. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  15. Uniform persistence and upper Lyapunov exponents for monotone skew-product semiflows

    International Nuclear Information System (INIS)

    Novo, Sylvia; Obaya, Rafael; Sanz, Ana M

    2013-01-01

    Several results of uniform persistence above and below a minimal set of an abstract monotone skew-product semiflow are obtained. When the minimal set has a continuous separation the results are given in terms of the principal spectrum. In the case that the semiflow is generated by the solutions of a family of non-autonomous differential equations of ordinary, delay or parabolic type, the former results are strongly improved. A method of calculus of the upper Lyapunov exponent of the minimal set is also determined. (paper)

  16. Complex, non-monotonic dose-response curves with multiple maxima: Do we (ever) sample densely enough?

    Science.gov (United States)

    Cvrčková, Fatima; Luštinec, Jiří; Žárský, Viktor

    2015-01-01

    We usually expect the dose-response curves of biological responses to quantifiable stimuli to be simple, either monotonic or exhibiting a single maximum or minimum. Deviations are often viewed as experimental noise. However, detailed measurements in plant primary tissue cultures (stem pith explants of kale and tobacco) exposed to varying doses of sucrose, cytokinins (BA or kinetin) or auxins (IAA or NAA) revealed that growth and several biochemical parameters exhibit multiple reproducible, statistically significant maxima over a wide range of exogenous substance concentrations. This results in complex, non-monotonic dose-response curves, reminiscent of previous reports of analogous observations in both metazoan and plant systems responding to diverse pharmacological treatments. These findings suggest the existence of a hitherto neglected class of biological phenomena resulting in dose-response curves exhibiting periodic patterns of maxima and minima, whose causes remain so far uncharacterized, partly due to insufficient sampling frequency used in many studies.
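
    The sampling-density point can be illustrated with a toy curve: a dose grid coarser than the spacing between maxima collapses a multi-peak response into an apparently simple one. The multi-peak function below is hypothetical, not data from the study:

```python
import numpy as np

def count_local_maxima(y):
    # strict interior local maxima of a sampled curve
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

# Hypothetical multi-peak "dose-response" curve (illustration only):
# a damped oscillation with 8 maxima on the interval [0, 10].
f = lambda x: np.sin(5 * x) * np.exp(-0.3 * x)

dense = count_local_maxima(f(np.linspace(0.0, 10.0, 1000)))   # resolves peaks
sparse = count_local_maxima(f(np.linspace(0.0, 10.0, 8)))     # aliases them
```

    With 1000 samples all 8 maxima are detected, while 8 samples (a spacing wider than the peak-to-peak distance) recover almost none, which is the kind of under-sampling the abstract warns about.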

  17. Mean stress and the exhaustion of fatigue-damage resistance

    Science.gov (United States)

    Berkovits, Avraham

    1989-01-01

    Mean-stress effects on fatigue life are critical in isothermal and thermomechanically loaded materials and composites. Unfortunately, existing mean-stress life-prediction methods do not incorporate physical fatigue damage mechanisms. An objective is to examine the relation between mean-stress induced damage (as measured by acoustic emission) and existing life-prediction methods. Acoustic emission instrumentation has indicated that, as with static yielding, fatigue damage results from dislocation buildup and motion until dislocation saturation is reached, after which void formation and coalescence predominate. Correlation of damage processes with similar mechanisms under monotonic loading led to a reinterpretation of Goodman diagrams for 40 alloys and a modification of Morrow's formulation for life prediction under mean stresses. Further testing, using acoustic emission to monitor dislocation dynamics, can generate data for developing a more general model for fatigue under mean stress.

  18. Mean-deviation analysis in the theory of choice.

    Science.gov (United States)

    Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael

    2012-08-01

    Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, continuity, etc. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of the dual utility theory, and a further relaxation of the axioms are shown to lead to the mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and application of the mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered. © 2012 Society for Risk Analysis.

  19. Eigenvalue for Densely Defined Perturbations of Multivalued Maximal Monotone Operators in Reflexive Banach Spaces

    Directory of Open Access Journals (Sweden)

    Boubakari Ibrahimou

    2013-01-01

    maximal monotone with and . Using the topological degree theory developed by Kartsatos and Quarcoo, we study the eigenvalue problem where the operator is single-valued of class . The existence of continuous branches of eigenvectors of infinite length could then be easily extended to the case where the operator is multivalued, which is investigated.

  20. Non-monotonic dose-response relationships and endocrine disruptors: a qualitative method of assessment

    OpenAIRE

    Lagarde, Fabien; Beausoleil, Claire; Belcher, Scott M; Belzunces, Luc P; Emond, Claude; Guerbet, Michel; Rousselle, Christophe

    2015-01-01

    Experimental studies investigating the effects of endocrine disruptors frequently identify potential unconventional dose-response relationships called non-monotonic dose-response (NMDR) relationships. Standardized approaches for investigating NMDR relationships in a risk assessment context are missing. The aim of this work was to develop criteria for assessing the strength of NMDR relationships. A literature search was conducted to identify published studies that repor...

  1. Expert system for failures detection and non-monotonic reasoning

    International Nuclear Information System (INIS)

    Assis, Abilio de; Schirru, Roberto

    1997-01-01

    This paper presents the development of a shell called TIGER, designed as an environment for building expert systems for fault diagnosis in complex industrial plants. A knowledge representation model and an inference engine based on non-monotonic reasoning have been developed to provide flexibility in the representation of complex plants as well as the performance needed to satisfy real-time constraints. TIGER is able to provide both the fault that occurred and a hierarchical view of the several causes that led to the fault. To validate the shell, a monitoring system for the critical safety functions of Angra-1 has been developed. 7 refs., 7 figs., 2 tabs

  2. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    Science.gov (United States)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data sets can be obtained with these methods, placing them in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data were used to demonstrate the application value of this new fast inversion method.
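The non-monotone gradient-descent idea referred to above can be sketched for a generic regularized least-squares inversion. The Barzilai-Borwein step and the Grippo-style non-monotone line search below are standard choices for this technique, not necessarily the authors' exact algorithm, and the test problem is synthetic:

```python
import numpy as np

def nonmonotone_gd(A, b, lam=1e-2, M=5, max_iter=500, tol=1e-8):
    """Minimize f(x) = 0.5||Ax - b||^2 + 0.5*lam*||x||^2 with a
    Barzilai-Borwein step and a Grippo-style non-monotone line search:
    a step is accepted if it improves on the worst of the last M values,
    so the objective need not decrease at every iteration."""
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + 0.5 * lam * np.sum(z ** 2)
    grad = lambda z: A.T @ (A @ z - b) + lam * z
    x = np.zeros(A.shape[1])
    g = grad(x)
    t = 1.0
    hist = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        fref = max(hist[-M:])                        # non-monotone reference value
        while f(x - t * g) > fref - 1e-4 * t * (g @ g):
            t *= 0.5                                 # backtrack until acceptable
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            t = (s @ s) / (s @ y)                    # Barzilai-Borwein step size
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# Check against the closed-form ridge solution on a small synthetic problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x = nonmonotone_gd(A, b)
x_exact = np.linalg.solve(A.T @ A + 1e-2 * np.eye(5), A.T @ b)
assert np.allclose(x, x_exact, atol=1e-3)
```

Allowing occasional increases of the objective is what lets the aggressive Barzilai-Borwein step survive the line search, which is the usual source of the speedup over monotone gradient descent.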

  3. Explicit solutions of one-dimensional, first-order, stationary mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.

    2017-01-05

    Here, we consider one-dimensional first-order stationary mean-field games with congestion. These games arise when crowds face difficulty moving in high-density regions. We look at both monotone decreasing and increasing interactions and construct explicit solutions using the current formulation. We observe new phenomena such as discontinuities, unhappiness traps and the non-existence of solutions.

  4. The non-monotonic shear-thinning flow of two strongly cohesive concentrated suspensions

    OpenAIRE

    Buscall, Richard; Kusuma, Tiara E.; Stickland, Anthony D.; Rubasingha, Sayuri; Scales, Peter J.; Teo, Hui-En; Worrall, Graham L.

    2014-01-01

    The behaviour in simple shear of two concentrated and strongly cohesive mineral suspensions showing highly non-monotonic flow curves is described. Two rheometric test modes were employed, controlled stress and controlled shear-rate. In controlled stress mode the materials showed runaway flow above a yield stress, which, for one of the suspensions, varied substantially in value and seemingly at random from one run to the next, such that the up flow-curve appeared to be quite irreproducible. Th...

  5. Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks

    Science.gov (United States)

    Anseán, David; Otero, José; Couso, Inés

    2017-01-01

    A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open circuit voltage curve. This model is implemented through monotonic neural networks and estimates the over-potentials arising from the evolution in time of the lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, this being particularly true when the charge or discharge currents are between moderate and high. The accuracy of the neural model has been compared with different alternatives, including data-driven statistical models, first-principle-based models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well-established first-principle models. The algorithms have been validated with automotive LiFePO4 cells. PMID:29267219

  6. Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks

    Directory of Open Access Journals (Sweden)

    Luciano Sánchez

    2017-12-01

    Full Text Available A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open circuit voltage curve. This model is implemented through monotonic neural networks and estimates the over-potentials arising from the evolution in time of the lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, this being particularly true when the charge or discharge currents are between moderate and high. The accuracy of the neural model has been compared with different alternatives, including data-driven statistical models, first-principle-based models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well-established first-principle models. The algorithms have been validated with automotive LiFePO4 cells.
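As a minimal illustration of how monotonicity can be enforced architecturally, the toy feed-forward network below (not the paper's echo state architecture) squares its weights so they are non-negative; combined with an increasing activation, the output is monotone non-decreasing in the input by construction:

```python
import numpy as np

def monotone_net(x, W1, b1, w2, b2):
    """A one-hidden-layer network that is monotone non-decreasing in x
    by construction: all weights are squared (hence non-negative) and
    tanh is increasing. A hypothetical stand-in for the monotonic
    networks in the paper, not the authors' architecture."""
    h = np.tanh(np.outer(x, W1 ** 2) + b1)   # non-negative input weights
    return h @ (w2 ** 2) + b2                # non-negative output weights

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=8), rng.normal(size=8)
w2, b2 = rng.normal(size=8), 0.0
x = np.linspace(-2.0, 2.0, 100)
y = monotone_net(x, W1, b1, w2, b2)
assert np.all(np.diff(y) >= -1e-12)          # monotone for any random weights
```

The same reparameterization trick applies to the readout of a recurrent reservoir, which is one way a monotonic echo state network can guarantee a monotone state-of-health response.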

  7. Normal iron absorption determined by means of whole body counting and red cell incorporation of 59Fe

    International Nuclear Information System (INIS)

    Larsen, L.; Milman, N.

    1977-01-01

    Gastrointestinal iron absorption was measured in 27 normal subjects (19 females and 8 males) by means of whole body counting. Whole body retention 14 days after oral administration of 10 μCi of 59Fe together with a carrier dose of 9.9 mg Fe2+ (as sulphate) was used as an expression of absorption. The percentage incorporation into the total erythrocyte mass of administered 59Fe (erythrocyte incorporation) and of absorbed 59Fe (red cell utilization) was also estimated. Geometric mean iron absorption was 8.3 ± 2.1 (SD) % in females, 9.1 ± 2.2 % in males and 8.5 ± 2.1 % in the entire series. The difference between males and females was not significant. Erythrocyte incorporation was 7.7 ± 2.2 (SD) % (geometric mean) in the entire series and the correlation between iron absorption and erythrocyte incorporation was highly significant (r = 0.96, P < 0.001). Red cell utilization averaged 92.9 ± 4.0 (SEM) % (arithmetic mean) in the entire series. (author)
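The whole-body-counting retention figure can be sketched as a simple decay-corrected count ratio. The counts below are hypothetical and the function name is illustrative; only the 59Fe half-life is the standard physical value:

```python
import math

T_HALF_FE59 = 44.5  # days, physical half-life of 59Fe

def retention_percent(count_day0, count_day14, days=14.0):
    """Whole-body retention as % of the administered dose, correcting the
    day-14 count for physical decay of 59Fe. Counts are hypothetical
    background-subtracted rates; this sketches the standard calculation,
    not necessarily the authors' exact protocol."""
    decay = math.exp(-math.log(2.0) * days / T_HALF_FE59)
    return 100.0 * count_day14 / (count_day0 * decay)

# Example: a subject retaining about 8.5% of the dose.
print(round(retention_percent(12000.0, 820.0), 1))  # → 8.5
```

Because retention is a ratio-type quantity, averaging the logarithms of such percentages (i.e., taking a geometric mean) is the natural summary, consistent with the geometric means reported in the record.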

  8. Monotone measures of ergodicity for Markov chains

    Directory of Open Access Journals (Sweden)

    J. Keilson

    1998-01-01

    Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, which is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity, by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.
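One such monotone measure of ergodicity is easy to verify numerically: the total variation distance from the evolving distribution to the stationary distribution never increases under a stochastic matrix, whether or not the chain is time reversible. A small sketch with an arbitrary non-reversible chain (the transition matrix is made up for illustration):

```python
import numpy as np

# A small ergodic, non-time-reversible Markov chain (hypothetical).
P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.1, 0.7],
              [0.5, 0.4, 0.1]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Total variation distance to pi is a monotone measure of ergodicity:
# it is non-increasing along the chain's evolution.
mu = np.array([1.0, 0.0, 0.0])        # start concentrated in state 0
dists = []
for _ in range(20):
    dists.append(0.5 * np.abs(mu - pi).sum())
    mu = mu @ P

assert all(dists[i + 1] <= dists[i] + 1e-12 for i in range(len(dists) - 1))
```

Monotonicity here follows from the fact that a stochastic matrix is a contraction in total variation with the stationary distribution as a fixed point; no reversibility is needed, mirroring the paper's point.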

  9. Drug binding affinities and potencies are best described by a log-normal distribution and use of geometric means

    International Nuclear Information System (INIS)

    Stanisic, D.; Hancock, A.A.; Kyncl, J.J.; Lin, C.T.; Bush, E.N.

    1986-01-01

    (-)-Norepinephrine (NE) is used as an internal standard in their in vitro adrenergic assays, and the concentration of NE which produces half-maximal inhibition of specific radioligand binding (affinity; KI), or a half-maximal contractile response (potency; ED50), has been measured numerous times. The goodness-of-fit test for normality was performed on both normal (Gaussian) and log10-normal frequency histograms of these data using the SAS Univariate procedure. Specific binding of 3H-prazosin to rat liver (α1-), 3H-rauwolscine to rat cortex (α2-) and 3H-dihydroalprenolol to rat ventricle (β1-) or rat lung (β2-receptors) was inhibited by NE; the distributions of NE KI's at all these sites were skewed to the right, with highly significant departures from normality, as were the ED50's of NE in isolated rabbit aorta (α1), phenoxybenzamine-treated dog saphenous vein (α2) and guinea pig atrium (β1). The vasorelaxant potency of atrial natriuretic hormone in histamine-contracted rabbit aorta was also better described by a log-normal distribution, indicating that log-normality is probably a general phenomenon of drug-receptor interactions. Because data of this type appear to be log-normally distributed, geometric means should be used in parametric statistical analyses
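The practical point of the abstract, that geometric means are the right summary for log-normally distributed affinities, can be illustrated with simulated data; the distribution parameters below are hypothetical, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated binding affinities (KI) drawn from a log-normal distribution,
# mimicking the right-skewed histograms described in the abstract.
ki = rng.lognormal(mean=-7.0, sigma=0.5, size=10000)   # hypothetical units

arith_mean = ki.mean()
geo_mean = np.exp(np.log(ki).mean())   # geometric mean = exp(mean of logs)

# The geometric mean recovers the distribution's median, exp(mu), while
# the arithmetic mean is biased upward by the right tail.
median_true = np.exp(-7.0)
assert abs(geo_mean - median_true) / median_true < 0.03
assert arith_mean > geo_mean
```

The same logic applies to ED50's: parametric statistics (t-tests, confidence intervals) should be computed on the logarithms and then exponentiated, which is exactly the use of geometric means the record recommends.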

  10. MONOTONIC DERIVATIVE CORRECTION FOR CALCULATION OF SUPERSONIC FLOWS WITH SHOCK WAVES

    Directory of Open Access Journals (Sweden)

    P. V. Bulat

    2015-07-01

    Full Text Available Subject of Research. Numerical solution methods for gas dynamics problems based on exact and approximate solutions of the Riemann problem are considered. We have developed an approach to the solution of the Euler equations describing flows of inviscid compressible gas, based on the finite volume method and finite difference schemes of various orders of accuracy. The Godunov, Kolgan, Roe, Harten and Chakravarthy-Osher schemes are used in the calculations (the order of accuracy of the finite difference schemes varies from 1st to 3rd). Comparison of the accuracy and efficiency of the various finite difference schemes is demonstrated on the example of inviscid compressible gas flow in a Laval nozzle, both in the case of continuous acceleration of the flow in the nozzle and in the case of a shock wave being present in the nozzle. Conclusions about the accuracy of the various finite difference schemes and the time required for the calculations are drawn. Main Results. A comparative analysis of difference schemes for integrating the Euler equations has been carried out. These schemes are based on exact and approximate solutions of the problem of an arbitrary discontinuity breakdown. The calculation results show that monotonic derivative correction provides numerical solution uniformity in the neighbourhood of the breakdown. On the one hand, it prevents the formation of new points of extremum, providing the monotonicity property; on the other hand, it causes smoothing of existing minima and maxima and a loss of accuracy. Practical Relevance. The developed numerical calculation method makes it possible to perform high-accuracy calculations of flows with strong non-stationary shock and detonation waves. At the same time, there are no non-physical solution oscillations on the shock wave front.
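The monotonic derivative correction described above can be sketched with a minmod-limited Kolgan-type scheme for linear advection; this is a generic illustration of the limiter idea on a scalar model equation, not a reproduction of the paper's Euler solver:

```python
import numpy as np

def minmod(a, b):
    """Monotone derivative correction: the limited slope is zero at
    extrema and never exceeds either one-sided difference."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_kolgan(u, c, steps):
    """Second-order Kolgan-type upwind scheme for u_t + a*u_x = 0 (a > 0)
    on a periodic domain, CFL number c <= 1. A sketch of the limited-slope
    idea, not the exact schemes compared in the paper."""
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
        face = u + 0.5 * (1.0 - c) * s                      # right-face states
        u = u - c * (face - np.roll(face, 1))               # upwind update
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)              # step profile
u = advect_kolgan(u0.copy(), c=0.5, steps=100)

# Monotonicity preservation: no over/undershoots beyond the initial range.
assert u.min() >= -1e-12 and u.max() <= 1.0 + 1e-12
```

The assertion captures exactly the trade-off in the abstract: the limiter prevents new extrema at the discontinuity, at the cost of slightly smearing the step's corners.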

  11. Risk-Sensitive Control of Pure Jump Process on Countable Space with Near Monotone Cost

    International Nuclear Information System (INIS)

    Suresh Kumar, K.; Pal, Chandan

    2013-01-01

    In this article, we study a risk-sensitive control problem with a controlled continuous-time pure jump process on a countable space as the state dynamics. We prove a multiplicative dynamic programming principle, and elliptic and parabolic Harnack's inequalities. Using the multiplicative dynamic programming principle and the Harnack's inequalities, we prove the existence and a characterization of the optimal risk-sensitive control under the near monotone condition.

  12. Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems

    International Nuclear Information System (INIS)

    Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George

    2012-01-01

    In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.

  13. Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems

    Energy Technology Data Exchange (ETDEWEB)

    Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)

    2012-12-15

    In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
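A standard example of such a convergent approximation (not necessarily the authors' exact construction) is the log-sum-exp softmin: it is differentiable for every value of the smoothing parameter and increases monotonically to the true minimum as the parameter grows:

```python
import numpy as np

def softmin(x, p):
    """Smooth under-approximation of min(x): equals
    min(x) - (1/p) * log(sum(exp(-p * (x - min(x))))).
    Differentiable in x for every p > 0, and monotonically
    increasing in p toward the exact minimum."""
    x = np.asarray(x, dtype=float)
    m = np.min(x)
    return m - np.log(np.sum(np.exp(-p * (x - m)))) / p

x = [1.3, 0.7, 2.1, 0.9]
vals = [softmin(x, p) for p in (1.0, 2.0, 4.0, 8.0, 16.0, 32.0)]

# Monotone convergence from below to the true minimum.
assert all(vals[i] < vals[i + 1] for i in range(len(vals) - 1))
assert all(v <= min(x) for v in vals)
assert min(x) - vals[-1] < 0.05
```

An analogous log-sum-exp expression over-approximates the maximum, which is why such approximations are convenient for deriving differentiable sufficient conditions in multi-objective problems like the one above.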

  14. Iterative methods for nonlinear set-valued operators of the monotone type with applications to operator equations

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1989-06-01

    The fixed points of set-valued operators satisfying a condition of monotonicity type in real Banach spaces with uniformly convex dual spaces are approximated by recursive averaging processes. Applications to important classes of linear and nonlinear operator equations are also presented. (author). 33 refs

  15. The behavior of welded joint in steel pipe members under monotonic and cyclic loading

    International Nuclear Information System (INIS)

    Chang, Kyong-Ho; Jang, Gab-Chul; Shin, Young-Eui; Han, Jung-Guen; Kim, Jong-Min

    2006-01-01

    Most steel pipe members are joined by welding. The residual stress and weld metal in a welded joint influence the behavior of steel pipes. Therefore, to accurately predict the behavior of steel pipes with a welded joint, the influence of welding residual stress and weld metal on the behavior of the steel pipe must be investigated. In this paper, the residual stress of steel pipes with a welded joint was investigated by using a three-dimensional non-steady heat conduction analysis and a three-dimensional thermal elastic-plastic analysis. Based on the results of monotonic and cyclic loading tests, a hysteresis model for weld metal was formulated. The hysteresis model was proposed by the authors and applied to a three-dimensional finite element analysis. To investigate the influence of a welded joint in steel pipes under monotonic and cyclic loading, a three-dimensional finite element analysis considering the proposed model and residual stress was carried out. The influence of a welded joint on the behavior of steel pipe members was investigated by comparing the analytical results for steel pipes with and without a welded joint.

  16. Accounting for dynamics of mean precipitation in drought projections: A case study of Brazil for the 2050 and 2070 periods.

    Science.gov (United States)

    Mpelasoka, Freddie; Awange, Joseph L; Goncalves, Rodrigo Mikosz

    2018-05-01

    Changes in drought around the globe are among the most daunting potential effects of climate change. However, changes in droughts are often not well distinguished from changes in aridity levels. As drought constitutes conditions of aridity, the projected declines in mean precipitation tend to override changes in drought. This results in projections of more dire changes in drought than ever. The overestimate of changes can be attributed to the use of a 'static' normal precipitation in the derivation of drought events. The failure to distinguish drought from aridity is a conceptual problem of concern, particularly to drought policymakers, given that the key objective of drought policies is to determine, for interventions, drought conditions which are so rare and protracted that they are beyond the scope of normal risk management. The main objective of this case study of Brazil is to demonstrate the differences between projections of changes in drought based on 'static' and '30-year dynamic' precipitation normal conditions. First, we demonstrate that the 'static'-based projections suggest 4-fold changes in the probability of drought-year occurrences relative to changes based on the dynamic normal precipitation. The 'static normal mean precipitation'-based projections tend to be monotonically increasing in magnitude, and were arguably considered unrealistic. Based on the '30-year dynamic' normal precipitation conditions, the 13-member GCM ensemble median projection estimates of changes for 2050 under rcp4.5 and rcp8.5 suggest: (i) significant differences between changes associated with rcp4.5 and rcp8.5, which are more noticeable for droughts at long than at short timescales in the 2070 period; (ii) overall, more realistic projections of changes in drought characteristics over Brazil than previous projections based on 'static' normal precipitation conditions. However, the uncertainty of the response of droughts to climate change in CMIP5 simulations is still large.
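The difference between a 'static' and a '30-year dynamic' precipitation normal can be illustrated with synthetic data; the drying trend, noise level, and 80%-of-normal drought criterion below are all hypothetical choices for the sketch, not the study's index:

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1961, 2071)
# Hypothetical annual precipitation with a drying trend (mm/yr).
precip = 1200.0 - 2.0 * (years - 1961) + rng.normal(0.0, 120.0, years.size)

def drought_years(p, normals):
    """Count years with precipitation below 80% of the reference
    'normal' -- a deliberately simple criterion used only to contrast
    the two baselines."""
    return int(np.sum(p < 0.8 * normals))

# Static normal: the 1961-1990 mean applied to every later year.
static_normal = precip[:30].mean()
n_static = drought_years(precip[30:], static_normal)

# Dynamic normal: the mean of the preceding 30 years, updated each year.
rolling = np.array([precip[i - 30:i].mean() for i in range(30, years.size)])
n_dynamic = drought_years(precip[30:], rolling)

# With a drying trend, the static baseline inflates the drought count:
# it counts the aridity trend itself as drought.
assert n_static >= n_dynamic
```

Under the static baseline, every year of the drying trend eventually falls below the fixed threshold, which is exactly the conflation of aridity with drought that the abstract criticizes; the rolling normal counts only departures from the contemporaneous climate.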

  17. Construction of second order accurate monotone and stable residual distribution schemes for unsteady flow problems

    International Nuclear Information System (INIS)

    Abgrall, Remi; Mezine, Mohamed

    2003-01-01

    The aim of this paper is to construct upwind residual distribution schemes for the time-accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of the scalar advection equation and to the solution of the compressible Euler equations, both in two space dimensions. The scheme is shown to be, at least in its first-order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method.

  18. On stability and monotonicity requirements of finite difference approximations of stochastic conservation laws with random viscosity

    KAUST Repository

    Pettersson, Per

    2013-05-01

    The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive, and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and in some cases the accuracy is higher for stochastic Galerkin, provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady state. © 2013 Elsevier B.V.
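The positivity condition on the Galerkin viscosity matrix can be checked directly for a simple uncertain viscosity. The uniform random variable, normalized Legendre basis, and viscosity coefficients below are illustrative assumptions, not the paper's test case:

```python
import numpy as np
from numpy.polynomial import legendre as L

def galerkin_viscosity_matrix(nu, order):
    """Galerkin viscosity matrix B[i, j] = E[nu(xi) * psi_i(xi) * psi_j(xi)]
    for a uniform random variable xi on [-1, 1] and normalized Legendre
    polynomials psi_k; entries computed with Gauss-Legendre quadrature.
    A sketch of the construction the abstract refers to."""
    pts, wts = L.leggauss(4 * order + 4)
    wts = wts / 2.0   # weights of the uniform density on [-1, 1]
    # Normalized basis: E[psi_k^2] = 1 requires scaling P_k by sqrt(2k + 1).
    psi = [L.Legendre.basis(k)(pts) * np.sqrt(2 * k + 1) for k in range(order + 1)]
    B = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            B[i, j] = np.sum(wts * nu(pts) * psi[i] * psi[j])
    return B

# Uncertain viscosity nu(xi) = 1.0 + 0.5*xi stays positive on [-1, 1],
# and the resulting Galerkin matrix is symmetric positive definite.
B = galerkin_viscosity_matrix(lambda x: 1.0 + 0.5 * x, order=4)
eig = np.linalg.eigvalsh(B)
assert np.all(eig > 0)
```

Because B is a Gram matrix weighted by the viscosity, its eigenvalues are positive whenever the viscosity itself stays positive over the support of the random variable, which is the kind of condition the paper investigates.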

  19. MONOTONIC AND CYCLIC LOADING SIMULATION OF STRUCTURAL STEELWORK BEAM TO COLUMN BOLTED CONNECTIONS WITH CASTELLATED BEAM

    Directory of Open Access Journals (Sweden)

    SAEID ZAHEDI VAHID

    2013-08-01

    Full Text Available Recently, steel extended end-plate connections have been commonly used in rigid steel frames due to their good ductility and ability to dissipate energy. This connection system is recommended for wide use in special moment-resisting frames subjected to vertical monotonic and cyclic loads. However, improper design of a beam-to-column connection can lead to collapses and fatalities. Therefore, an extensive study of beam-to-column connection design must be carried out, particularly when the connection is exposed to cyclic loadings. This paper presents a Finite Element Analysis (FEA) approach as an alternative method for studying the behavior of such connections. The performance of castellated beam-column end-plate connections up to failure was investigated under monotonic and cyclic loading in the vertical and horizontal directions. The study was carried out through a finite element analysis using the multi-purpose software package LUSAS. The effects of the geometry and location of the openings were also investigated.

  20. On stability and monotonicity requirements of finite difference approximations of stochastic conservation laws with random viscosity

    KAUST Repository

    Pettersson, Per; Doostan, Alireza; Nordströ m, Jan

    2013-01-01

    The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive, and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and in some cases the accuracy is higher for stochastic Galerkin, provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady state. © 2013 Elsevier B.V.

  1. Sampling dynamics: an alternative to payoff-monotone selection dynamics

    DEFF Research Database (Denmark)

    Berkemer, Rainer

    payoff-monotone nor payoff-positive, which has interesting consequences. This can be demonstrated by application to the travelers dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium, which is Pareto inefficient. Especially when the travelers have many...'' of the standard game theory result. Both analytical tools and agent-based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized travelers dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter (k), which indicates the number of samples an agent would evaluate before fixing his decision. The special case (k=1) can be treated analytically. The stationary points of the dynamics must be sampling equilibria, and one can calculate that for m>3 there will be an interior solution in addition...

  2. Application of non-monotonic logic to failure diagnosis of nuclear power plant

    International Nuclear Information System (INIS)

    Takahashi, M.; Kitamura, M.; Sugiyama, K.

    1989-01-01

    A prototype diagnosis system for nuclear power plants was developed based on Truth Maintenance Systems (TMS) and Dempster-Shafer probability theory. The purpose of this paper is to establish a basic technique for a more intelligent, man-computer cooperative diagnosis system. The developed system is capable of carrying out diagnostic inference under imperfect observation conditions with the help of the proposed belief revision procedure based on TMS and the systematic uncertainty treatment of Dempster-Shafer theory. The usefulness and potential of the present non-monotonic logic were demonstrated through simulation experiments.

  3. Isochronous relaxation curves for type 304 stainless steel after monotonic and cyclic strain

    International Nuclear Information System (INIS)

    Swindeman, R.W.

    1978-01-01

    Relaxation tests to 100 hr were performed on type 304 stainless steel in the temperature range 480 to 650 °C and were used to develop isochronous relaxation curves. Behavior after monotonic and cyclic strain was compared. Relaxation differed only slightly as a consequence of the type of previous strain, provided that plastic flow preceded the relaxation period. We observed that the short-time relaxation behavior did not manifest strong heat-to-heat variation in creep strength.

  4. Resonant scattering of energetic electrons in the plasmasphere by monotonic whistler-mode waves artificially generated by ionospheric modification

    Directory of Open Access Journals (Sweden)

    S. S. Chang

    2014-05-01

    Full Text Available Modulated high-frequency (HF) heating of the ionosphere provides a feasible means of artificially generating extremely low-frequency (ELF)/very low-frequency (VLF) whistler waves, which can leak into the inner magnetosphere and contribute to resonant interactions with high-energy electrons in the plasmasphere. By ray tracing the magnetospheric propagation of ELF/VLF emissions artificially generated at low invariant latitudes, we evaluate the relativistic electron resonant energies along the ray paths and show that propagating artificial ELF/VLF waves can resonate with electrons from ~ 100 keV to ~ 10 MeV. We further implement test particle simulations to investigate the effects of resonant scattering of energetic electrons due to triggered monotonic/single-frequency ELF/VLF waves. The results indicate that within the period of a resonance timescale, changes in electron pitch angle and kinetic energy are stochastic, and the overall effect is cumulative, that is, the changes averaged over all test electrons increase monotonically with time. The localized rates of wave-induced pitch-angle scattering and momentum diffusion in the plasmasphere are analyzed in detail for artificially generated ELF/VLF whistlers with an observable in situ amplitude of ~ 10 pT. While the local momentum diffusion of relativistic electrons is small, with a rate of ~ 10−7 s−1, the local pitch-angle scattering can be intense near the loss cone, with a rate of ~ 10−4 s−1. Our investigation further supports the feasibility of artificial triggering of ELF/VLF whistler waves for removal of high-energy electrons at lower L shells within the plasmasphere. Moreover, our test particle simulation results show quantitatively good agreement with quasi-linear diffusion coefficients, confirming the applicability of both methods to evaluate the resonant diffusion effect of artificially generated ELF/VLF whistlers.
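The quoted pitch-angle scattering rate can be connected to the test-particle picture with a simple random walk: Monte Carlo increments with variance 2*D*dt reproduce the quasi-linear spread 2*D*t. The time step and ensemble size below are arbitrary; only the ~ 10−4 s−1 rate is taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 1e-4      # pitch-angle diffusion rate near the loss cone, s^-1 (from the abstract)
dt = 1.0      # time step, s (arbitrary)
steps = 500   # simulate 500 s of scattering
n = 20000     # number of test electrons (arbitrary)

# Each electron performs a random walk in pitch angle with step variance
# 2*D*dt -- the Monte Carlo counterpart of quasi-linear diffusion.
alpha = np.zeros(n)
for _ in range(steps):
    alpha += rng.normal(0.0, np.sqrt(2.0 * D * dt), n)

# The ensemble spread matches the quasi-linear prediction var = 2*D*t.
var_pred = 2.0 * D * steps * dt
assert abs(alpha.var() - var_pred) / var_pred < 0.05
```

This is the sense in which individual changes are stochastic while the ensemble effect is cumulative: each electron's pitch angle wanders randomly, but the variance of the ensemble grows linearly in time at the diffusion rate.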

  5. Annealing Effects on the Normal-State Resistive Properties of Underdoped Cuprates

    Science.gov (United States)

    Vovk, R. V.; Khadzhai, G. Ya.; Nazyrov, Z. F.; Kamchatnaya, S. N.; Feher, A.; Dobrovolskiy, O. V.

    2018-05-01

    The influence of room-temperature annealing on the parameters of the basal-plane electrical resistance of underdoped YBa2Cu3O7−δ and HoBa2Cu3O7−δ single crystals in the normal and superconducting states is investigated. The form of the derivatives dρ(T)/dT makes it possible to determine the onset temperature of the fluctuation conductivity and indicates a nonuniform distribution of the labile oxygen. Annealing has been revealed to lead to a monotonic decrease in the oxygen deficiency, which primarily manifests itself as a decrease in the residual resistance, an increase of Tc, and a decrease in the Debye temperature.

  6. Renormalization in charged colloids: non-monotonic behaviour with the surface charge

    International Nuclear Information System (INIS)

    Haro-Perez, C; Quesada-Perez, M; Callejas-Fernandez, J; Schurtenberger, P; Hidalgo-Alvarez, R

    2006-01-01

    The static structure factor S(q) is measured for a set of deionized latex dispersions with different numbers of ionizable surface groups per particle and similar diameters. For a given volume fraction, the height of the main peak of S(q), which is a direct measure of the spatial ordering of latex particles, does not increase monotonically with the number of ionizable groups. This behaviour cannot be described using the classical renormalization scheme based on the cell model. We analyse our experimental data using a renormalization model based on the jellium approximation, which predicts the weakening of the spatial order for moderate and large particle charges. (letter to the editor)

  7. Focus Article: Oscillatory and long-range monotonic exponential decays of electrostatic interactions in ionic liquids and other electrolytes: The significance of dielectric permittivity and renormalized charges

    Science.gov (United States)

    Kjellander, Roland

    2018-05-01

    A unified treatment of oscillatory and monotonic exponential decays of interactions in electrolytes is displayed, which highlights the role of dielectric response of the fluid in terms of renormalized (effective) dielectric permittivity and charges. An exact, but physically transparent statistical mechanical formalism is thereby used, which is presented in a systematic, pedagogical manner. Both the oscillatory and monotonic behaviors are given by an equation for the decay length of screened electrostatic interactions that is very similar to the classical expression for the Debye length. The renormalized dielectric permittivities, which have similar roles for electrolytes as the dielectric constant has for pure polar fluids, consist in general of several entities with different physical meanings. They are connected to dielectric response of the fluid on the same length scale as the decay length of the screened interactions. Only in cases where the decay length is very long, these permittivities correspond approximately to a dielectric response in the long-wavelength limit, like the dielectric constant for polar fluids. Experimentally observed long-range exponentially decaying surface forces are analyzed as well as the oscillatory forces observed for short to intermediate surface separations. Both occur in some ionic liquids and in concentrated as well as very dilute electrolyte solutions. The coexisting modes of decay are in general determined by the bulk properties of the fluid and not by the solvation of the surfaces; in the present cases, they are given by the behavior of the screened Coulomb interaction of the bulk fluid. The surface-fluid interactions influence the amplitudes and signs or phases of the different modes of the decay, but not their decay lengths and wavelengths. The similarities between some ionic liquids and very dilute electrolyte solutions as regards both the long-range monotonic and the oscillatory decays are analyzed.
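The classical Debye-length expression that the abstract's decay-length equation generalizes can be computed directly; the physical constants are standard CODATA values and the example concentration is illustrative:

```python
import math

def debye_length(conc_molar, T=298.15, eps_r=78.4):
    """Classical Debye screening length (in meters) for a symmetric 1:1
    electrolyte: kappa^-1 = sqrt(eps_r*eps0*kB*T / (n*e^2)), where n is
    the total ion number density. This is the textbook expression that
    the generalized decay-length equation in the abstract reduces to."""
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    kB = 1.380649e-23         # Boltzmann constant, J/K
    e = 1.602176634e-19       # elementary charge, C
    NA = 6.02214076e23        # Avogadro constant, 1/mol
    n = 2.0 * NA * conc_molar * 1000.0   # total ion number density, 1/m^3
    return math.sqrt(eps_r * eps0 * kB * T / (n * e * e))

# About 0.96 nm for 0.1 M aqueous 1:1 electrolyte at room temperature.
print(round(debye_length(0.1) * 1e9, 2))  # → 0.96
```

In the unified treatment above, the dielectric constant and ionic charges in this formula are replaced by renormalized (effective) quantities, and the resulting equation admits complex-valued solutions whose real and imaginary parts give the monotonic decay length and the oscillation wavelength, respectively.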

  8. Elucidation of the effects of cementite morphology on damage formation during monotonic and cyclic tension in binary low carbon steels using in situ characterization

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Motomichi, E-mail: koyama@mech.kyushu-u.ac.jp [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan); Yu, Yachen; Zhou, Jia-Xi [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan); Yoshimura, Nobuyuki [Nippon Steel & Sumitomo Metal Corporation, 20-1 Shintomi, Futtsu, Chiba 293-8511 (Japan); Sakurada, Eisaku [Nippon Steel & Sumitomo Metal Corporation, 5-3 Tokai, Aichi 476-8686 (Japan); Ushioda, Kohsaku [Nippon Steel & Sumitomo Metal Corporation, 20-1 Shintomi, Futtsu, Chiba 293-8511 (Japan); Noguchi, Hiroshi [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan)

    2016-06-14

    The effects of the morphology and distribution of cementite on damage formation were studied using in situ scanning electron microscopy under monotonic and cyclic tension. To investigate the effects of the morphology/distribution of cementite, intergranular cementite precipitation (ICP) and transgranular cementite precipitation (TCP) steels were prepared from an ingot of Fe-0.017 wt% C binary alloy using different heat treatments. In all cases, the damage incidents were observed primarily at the grain boundaries. The damage morphology depended on the cementite morphology and loading condition. Monotonic tension in the ICP steel caused cracks across the cementite plates located at the grain boundaries. In contrast, fatigue loading in the ICP steel induced cracking at the ferrite/cementite interface. Moreover, in the TCP steel, both monotonic tension- and cyclic tension-induced intergranular cracking was distinctly observed, due to the slip localization associated with a limited availability of free slip paths. When a notch was introduced into the ICP steel specimen, the morphology of the cyclic tension-induced damage at the notch tip changed to cracking across the intergranular cementite, closely resembling the monotonic tension-induced damage. The damage at the notch tip coalesced with the main crack, accelerating the growth of the fatigue crack.

  9. Applied means to increase stimulation in the control room work at the Swedish nuclear power plants

    International Nuclear Information System (INIS)

    Blomberg, P.E.; Akerhielm, F.

    1988-01-01

    Nuclear power plants are generally designed and built to a quality standard which implies that the units seldom require intervention from the operating staff under normal operating conditions. This leaves the operators with the dominating task of only passively supervising the process. A number of measures have been taken to counteract the problem of under-stimulated individuals in the control rooms and to maintain active and purposeful working conditions. Basically, these measures derive from the belief that augmented competence, increased responsibilities and an enhanced sense of indispensability function as an inspiration even in a monotonous working situation. For this purpose, a number of activities and tasks, parallel to the normal duties as a member of the operating staff, have been implemented.

  10. Post-error expression of speed and force while performing a simple, monotonous task with a haptic pen

    NARCIS (Netherlands)

    Bruns, M.; Keyson, D.V.; Jabon, M.E.; Hummels, C.C.M.; Hekkert, P.P.M.; Bailenson, J.N.

    2013-01-01

    Control errors often occur in repetitive and monotonous tasks, such as manual assembly tasks. Much research has been done in the area of human error identification; however, most existing systems focus solely on the prediction of errors, not on increasing worker accuracy. The current study examines

  11. Cytotoxicity of binary mixtures of human pharmaceuticals in a fish cell line: approaches for non-monotonic concentration-response relationships.

    Science.gov (United States)

    Bain, Peter A; Kumar, Anupama

    2014-08-01

    Predicting the effects of mixtures of environmental micropollutants is a priority research area. In this study, the cytotoxicity of ten pharmaceuticals to the rainbow trout cell line RTG-2 was determined using the neutral red uptake assay. Fluoxetine (FL), propranolol (PPN), and diclofenac (DCF) were selected for further study as binary mixtures. Biphasic concentration-response relationships were observed in cells exposed to FL and PPN. In the case of PPN, microscopic examination revealed lysosomal swelling indicative of direct uptake and accumulation of the compound. Three equations describing non-monotonic concentration-response relationships were evaluated, and one was found to consistently provide more accurate estimates of the median and 10% effect concentrations compared with a sigmoidal concentration-response model. Predictive modeling of the effects of binary mixtures of FL, PPN, and DCF was undertaken using an implementation of the concentration addition (CA) conceptual model incorporating non-monotonic concentration-response relationships. The cytotoxicity of all three binary combinations could be adequately predicted using CA, suggesting that the toxic mode of action in RTG-2 cells is unrelated to the therapeutic mode of action of these compounds. The approach presented here is widely applicable to the study of mixture toxicity in cases where non-monotonic concentration-response relationships are observed. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
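
    In its classical form, the concentration addition (CA) model referenced above has a simple closed-form prediction when all components are characterized at a common effect level: the mixture ECx is 1 / Σ(p_i / ECx_i). A minimal sketch with hypothetical EC50 values (the paper's non-monotonic implementation is more involved):

```python
def ca_mixture_ec(fractions, ec_values):
    """Classical concentration-addition (Loewe additivity) prediction.

    fractions: fraction p_i of each component in the mixture (sums to 1).
    ec_values: single-compound ECx values at the same effect level x.
    Returns the predicted mixture ECx = 1 / sum(p_i / ECx_i).
    """
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec_values))

# hypothetical EC50s (mg/L) for a 1:1 binary mixture of two compounds
ec50_mix = ca_mixture_ec([0.5, 0.5], [2.0, 8.0])  # 3.2 mg/L
```

    The prediction is dominated by the more potent component; a mixture of two equally potent compounds simply inherits their common ECx.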

  12. Non-monotonic piezoresistive behaviour of graphene nanoplatelet (GNP)-polymer composite flexible films prepared by solvent casting

    Directory of Open Access Journals (Sweden)

    S. Makireddi

    2017-07-01

    Full Text Available Graphene-polymer nanocomposite films show good piezoresistive behaviour, and it is reported that the sensitivity increases either with increased sheet resistance or with decreased number density of the graphene fillers. Little is known about this behaviour near the percolation region. In this study, graphene nanoplatelet (GNP)/poly(methyl methacrylate) (PMMA) flexible films are fabricated via a solution casting process at varying weight percent of GNP. The electrical and piezoresistive behaviour of these films is studied as a function of GNP concentration. The piezoresistive strain sensitivity of the films is measured by affixing the film to an aluminium specimen which is subjected to monotonic uniaxial tensile load. The change in resistance of the film with strain is monitored using a four-probe method. An electrical percolation threshold at 3 weight percent of GNP is observed. We report non-monotonic piezoresistive behaviour of these films as a function of GNP concentration. We observe an increase in gauge factor (GF) with the unstrained resistance of the films up to a critical resistance corresponding to the percolation threshold. Beyond this limit the GF decreases with unstrained resistance.
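
    The gauge factor (GF) quoted above is the standard ratio of relative resistance change to applied strain; a quick illustration with made-up numbers:

```python
def gauge_factor(r0, r_strained, strain):
    """Piezoresistive gauge factor GF = (delta_R / R0) / strain."""
    return (r_strained - r0) / r0 / strain

# hypothetical film: resistance rises from 1000 to 1012 ohm at 0.2% strain
gf = gauge_factor(1000.0, 1012.0, 0.002)  # GF = 6.0
```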

  13. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of the coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions ensuring that n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from the existing results on the multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions, but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Singular mean-field games

    KAUST Repository

    Cirant, Marco; Gomes, Diogo A.; Pimentel, Edgard A.; Sá nchez-Morgado, Hé ctor

    2016-01-01

    Here, we prove the existence of smooth solutions for mean-field games with a singular mean-field coupling; that is, a coupling in the Hamilton-Jacobi equation of the form $g(m)=-m^{-\alpha}$. We consider stationary and time-dependent settings. The function $g$ is monotone, but it is not bounded from below. With the exception of the logarithmic coupling, this is the first time that MFGs whose coupling is not bounded from below are examined in the literature. This coupling arises in models where agents have a strong preference for low-density regions. Paradoxically, this causes the agents to spread and prevents the creation of solutions with a very low density. To prove the existence of solutions, we consider an approximate problem for which the existence of smooth solutions is known. Then, we prove new a priori bounds for the solutions that show that $\frac{1}{m}$ is bounded. Finally, using a limiting argument, we obtain the existence of solutions. The proof in the stationary case relies on a blow-up argument and, in the time-dependent case, on new bounds for $m^{-1}$.

  15. Singular mean-field games

    KAUST Repository

    Cirant, Marco

    2016-11-22

    Here, we prove the existence of smooth solutions for mean-field games with a singular mean-field coupling; that is, a coupling in the Hamilton-Jacobi equation of the form $g(m)=-m^{-\alpha}$. We consider stationary and time-dependent settings. The function $g$ is monotone, but it is not bounded from below. With the exception of the logarithmic coupling, this is the first time that MFGs whose coupling is not bounded from below are examined in the literature. This coupling arises in models where agents have a strong preference for low-density regions. Paradoxically, this causes the agents to spread and prevents the creation of solutions with a very low density. To prove the existence of solutions, we consider an approximate problem for which the existence of smooth solutions is known. Then, we prove new a priori bounds for the solutions that show that $\frac{1}{m}$ is bounded. Finally, using a limiting argument, we obtain the existence of solutions. The proof in the stationary case relies on a blow-up argument and, in the time-dependent case, on new bounds for $m^{-1}$.

  16. Convex analysis and monotone operator theory in Hilbert spaces

    CERN Document Server

    Bauschke, Heinz H

    2017-01-01

    This reference text, now in its second edition, offers a modern unifying presentation of three basic areas of nonlinear analysis: convex analysis, monotone operator theory, and the fixed point theory of nonexpansive operators. Taking a unique comprehensive approach, the theory is developed from the ground up, with the rich connections and interactions between the areas as the central focus, and it is illustrated by a large number of examples. The Hilbert space setting of the material offers a wide range of applications while avoiding the technical difficulties of general Banach spaces. The authors have also drawn upon recent advances and modern tools to simplify the proofs of key results making the book more accessible to a broader range of scholars and users. Combining a strong emphasis on applications with exceptionally lucid writing and an abundance of exercises, this text is of great value to a large audience including pure and applied mathematicians as well as researchers in engineering, data science, ma...

  17. Non-Monotonic Survival of Staphylococcus aureus with Respect to Ciprofloxacin Concentration Arises from Prophage-Dependent Killing of Persisters

    Directory of Open Access Journals (Sweden)

    Elizabeth L. Sandvik

    2015-11-01

    Full Text Available Staphylococcus aureus is a notorious pathogen with a propensity to cause chronic, non-healing wounds. Bacterial persisters have been implicated in the recalcitrance of S. aureus infections, and this motivated us to examine the persistence of S. aureus to ciprofloxacin, a quinolone antibiotic. Upon treatment of exponential phase S. aureus with ciprofloxacin, we observed that survival was a non-monotonic function of ciprofloxacin concentration. Maximal killing occurred at 1 µg/mL ciprofloxacin, which corresponded to survival that was up to ~40-fold lower than that obtained with concentrations ≥ 5 µg/mL. Investigation of this phenomenon revealed that the non-monotonic response was associated with prophage induction, which facilitated killing of S. aureus persisters. Elimination of prophage induction with tetracycline was found to prevent cell lysis and persister killing. We anticipate that these findings may be useful for the design of quinolone treatments.

  18. Coldest Temperature Extreme Monotonically Increased and Hottest Extreme Oscillated over Northern Hemisphere Land during Last 114 Years.

    Science.gov (United States)

    Zhou, Chunlüe; Wang, Kaicun

    2016-05-13

    Most studies on global warming rely on the global mean surface temperature, whose change is jointly determined by anthropogenic greenhouse gases (GHGs) and natural variability. This has fed a heated debate on whether there is a recent warming hiatus and what caused it. Here, we presented a novel method and applied it to a 5° × 5° grid of Northern Hemisphere land for the period 1900 to 2013. Our results show that the coldest 5% of minimum temperature anomalies (the coldest deviation) have increased monotonically by 0.22 °C/decade, which reflects well the elevated anthropogenic GHG effect. The warmest 5% of maximum temperature anomalies (the warmest deviation), however, display a significant oscillation following the Atlantic Multidecadal Oscillation (AMO), with a warming rate of 0.07 °C/decade from 1900 to 2013. The warmest (0.34 °C/decade) and coldest (0.25 °C/decade) deviations increased at much higher rates over the most recent decade than their last-century mean values, indicating that the hiatus should not be interpreted as a general slowing of climate change. The significant oscillation of the warmest deviation extends a previous study reporting no pause in the hottest temperature extremes since 1979, and uncovers for the first time its increase from 1900 to 1939 and decrease from 1940 to 1969.
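
    The "coldest deviation" statistic can be sketched on synthetic data (hypothetical anomalies with an imposed 0.02 °C/yr warming, not the study's observations): average the coldest 5% of each year's anomalies and fit a linear trend:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2014)
# synthetic daily minimum-temperature anomalies: imposed trend + noise
anoms = 0.02 * (years - years[0])[:, None] + rng.normal(0.0, 1.0, (years.size, 365))

# coldest deviation: mean of the coldest 5% of anomalies in each year
coldest5 = np.array([np.sort(a)[: int(0.05 * a.size)].mean() for a in anoms])

# least-squares linear trend, reported per decade
trend_per_decade = 10.0 * np.polyfit(years, coldest5, 1)[0]  # close to 0.2
```

    Because the trend is computed on a tail statistic rather than the mean, it tracks changes in the cold extremes specifically, which is the point of the method described above.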

  19. Reduction in mean cerebral blood flow measurements using 99mTc-ECD-SPECT during normal aging

    International Nuclear Information System (INIS)

    Kawahata, Nobuya; Daitoh, Nobuyuki; Shirai, Fumie; Hara, Shigeru

    1997-01-01

    Mean cerebral blood flow (mCBF) was measured by SPECT using the 99mTc-ECD-Patlak-Plot method in a selected group of 61 normal non-hospitalized subjects aged 51 to 91 years. The mCBF values were 48.4±4.7 ml/100 g/min in the 50-59 years group, 49.9±5.9 ml/100 g/min in the 60-69 years group, 46.4±6.5 ml/100 g/min in the 70-79 years group, 38.0±3.7 ml/100 g/min in the 80-89 years group, and 38.9 ml/100 g/min in the 90-99 years group. There was a statistically significant reduction of mCBF with advancing age (R=-0.41; p=0.001). Women had significantly higher mCBF values than men up to the 70 years age group. In this study, there was no significant laterality in mCBF between the right and left hemispheres in any decade group. Histories of hypertension, alcohol consumption, and cigarette smoking failed to show a significant difference in the mCBF values. The present study shows that normal aging is associated with mCBF reduction. (author)

  20. Site-disorder driven superconductor–insulator transition: a dynamical mean field study

    International Nuclear Information System (INIS)

    Kamar, Naushad Ahmad; Vidhyadhiraja, N S

    2014-01-01

    We investigate the effect of site disorder on the superconducting state in the attractive Hubbard model within the framework of dynamical mean field theory. For a fixed interaction strength (U), the superconducting order parameter decreases monotonically with increasing disorder (x), while the single-particle spectral gap decreases for small x, reaches a minimum and keeps increasing for larger x. Thus, the system remains gapped beyond the destruction of the superconducting state, indicating a disorder-driven superconductor–insulator transition. We investigate this transition in depth considering the effects of weak and strong disorder for a range of interaction strengths. In the clean case, the order parameter is known to increase monotonically with increasing interaction, saturating at a finite value asymptotically for U→∞. The presence of disorder results in destruction of superconductivity at large U, thus drastically modifying the clean case behaviour. A physical understanding of our findings is obtained by invoking particle–hole asymmetry and the probability distributions of the order parameter and spectral gap. (paper)

  1. Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays.

    Science.gov (United States)

    Mejias, Jorge F; Payeur, Alexandre; Selin, Erik; Maler, Leonard; Longtin, André

    2014-01-01

    The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry (also known as "open-loop feedback"), which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise was very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.

  2. Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays

    Directory of Open Access Journals (Sweden)

    Jorge F Mejias

    2014-02-01

    Full Text Available The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry (also known as "open-loop feedback"), which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise was very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
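
    The subtractive and divisive regimes can be caricatured with an idealized threshold-linear f-I curve (a toy illustration, not the spiking-network model used in the study): subtractive control shifts the curve rightward, while divisive control scales down its slope:

```python
def fI(I, gain=1.0, threshold=0.0):
    """Idealized threshold-linear f-I curve (firing rate vs input)."""
    return gain * max(I - threshold, 0.0)

I_vals = [0.1 * i for i in range(51)]                  # inputs 0.0 .. 5.0
baseline = [fI(I) for I in I_vals]
subtractive = [fI(I, threshold=1.0) for I in I_vals]   # curve shifted right
divisive = [fI(I, gain=0.5) for I in I_vals]           # slope halved
```

    The non-monotonic regime described above has no analogue in this caricature; it requires the noise- and inhibition-dependent effects of the full model.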

  3. The monotonicity and convexity of a function involving the digamma function and their applications

    OpenAIRE

    Yang, Zhen-Hang

    2014-01-01

    Let $\mathcal{L}(x,a)$ be defined on $(-1,\infty)\times(4/15,\infty)$ or $(0,\infty)\times(1/15,\infty)$ by the formula
    \begin{equation*}
    \mathcal{L}(x,a)=\frac{1}{90a^{2}+2}\ln\left(x^{2}+x+\frac{3a+1}{3}\right)+\frac{45a^{2}}{90a^{2}+2}\ln\left(x^{2}+x+\frac{15a-1}{45a}\right).
    \end{equation*}
    We investigate the monotonicity and convexity of the function $x\rightarrow F_{a}(x)=\psi(x+1\r...

  4. Convergence rates and finite-dimensional approximations for nonlinear ill-posed problems involving monotone operators in Banach spaces

    International Nuclear Information System (INIS)

    Nguyen Buong.

    1992-11-01

    The purpose of this paper is to investigate convergence rates for an operator version of Tikhonov regularization, constructed by the dual mapping, for nonlinear ill-posed problems involving monotone operators in real reflexive Banach spaces. The obtained results are considered in combination with finite-dimensional approximations for the space. An example is given for illustration. (author). 15 refs
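
    A finite-dimensional sketch of the regularization in question: for a monotone linear operator A (one whose symmetric part is positive semidefinite), the regularized equation A(x) + alpha*(x - x0) = f has a unique solution, and the error shrinks as alpha decreases (a toy example, not the Banach-space setting of the paper):

```python
import numpy as np

# A non-symmetric but monotone linear operator: x.(Ax) = 2*x1^2 + 2*x2^2 >= 0
A = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
x_true = np.array([1.0, -2.0])
f = A @ x_true                      # exact data for the equation A(x) = f

def tikhonov(alpha, x0=None):
    """Solve the regularized equation A(x) + alpha*(x - x0) = f."""
    if x0 is None:
        x0 = np.zeros(2)            # prior guess
    return np.linalg.solve(A + alpha * np.eye(2), f + alpha * x0)

errors = [float(np.linalg.norm(tikhonov(a) - x_true)) for a in (1.0, 0.1, 0.01)]
```

    With exact data the error decays with alpha; the convergence-rate analysis in the paper concerns the harder case of noisy data, where alpha must balance regularization error against noise amplification.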

  5. Simplest bifurcation diagrams for monotone families of vector fields on a torus

    Science.gov (United States)

    Baesens, C.; MacKay, R. S.

    2018-06-01

    In part 1, we prove that the bifurcation diagram for a monotone two-parameter family of vector fields on a torus has to be at least as complicated as the conjectured simplest one proposed in Baesens et al (1991 Physica D 49 387–475). To achieve this, we define ‘simplest’ by sequentially minimising the numbers of equilibria, Bogdanov–Takens points, closed curves of centre and of neutral saddle, intersections of curves of centre and neutral saddle, Reeb components, other invariant annuli, arcs of rotational homoclinic bifurcation of horizontal homotopy type, necklace points, contractible periodic orbits, points of neutral horizontal homoclinic bifurcation and half-plane fan points. We obtain two types of simplest case, including that initially proposed. In part 2, we analyse the bifurcation diagram for an explicit monotone family of vector fields on a torus and prove that it has at most two equilibria, precisely four Bogdanov–Takens points, no closed curves of centre nor closed curves of neutral saddle, at most two Reeb components, precisely four arcs of rotational homoclinic connection of ‘horizontal’ homotopy type, eight horizontal saddle-node loop points, two necklace points, four points of neutral horizontal homoclinic connection, and two half-plane fan points, and there is no simultaneous existence of centre and neutral saddle, nor contractible homoclinic connection to a neutral saddle. Furthermore, we prove that all saddle-nodes, Bogdanov–Takens points, non-neutral and neutral horizontal homoclinic bifurcations are non-degenerate and the Hopf condition is satisfied for all centres. We also find it has four points of degenerate Hopf bifurcation. It thus provides an example of a family satisfying all the assumptions of part 1 except the one of at most one contractible periodic orbit.

  6. Raman D-band in the irradiated graphene: Origin of the non-monotonous dependence of its intensity with defect concentration

    International Nuclear Information System (INIS)

    Codorniu Pujals, Daniel

    2013-01-01

    Raman spectroscopy is one of the most used experimental techniques for studying irradiated carbon nanostructures, in particular graphene, due to its high sensitivity to the presence of defects in the crystalline lattice. Special attention has been given to the variation of the intensity of the Raman D-band of graphene with the concentration of defects produced by irradiation. Nowadays, there is ample experimental evidence of the non-monotonous character of that dependence, but the explanation of this behavior is still controversial. In the present work we developed a simplified mathematical model to obtain a functional relationship between these two magnitudes and showed that the non-monotonous dependence is intrinsic to the nature of the D-band and is not necessarily linked to amorphization processes. The obtained functional dependence was used to fit experimental data taken from other authors. The determination coefficient of the fitting was 0.96.

  7. Behaviour of composite steel-concrete columns subjected to monotonic loading. An experimental study

    Directory of Open Access Journals (Sweden)

    Cristina Câmpian

    2006-01-01

    Full Text Available For more than one hundred years, the construction system based on steel or composite steel-concrete frames has been one of the most widely used building types in civil engineering. For an optimal dimensioning of the structure, engineers have to find a compromise between the structural requirements of resistance, stiffness and ductility on one side, and architectural requirements on the other. Three monotonic tests and nine cyclic tests according to the ECCS loading procedure were carried out in the Cluj Concrete Laboratory. The tested composite columns, of the fully encased type, were subjected to a variable transverse load at one end while a constant axial compression force was maintained. An analytical interpretation is given for the calculation of the column stiffness in the monotonic tests, with a comparison against the latest versions of the Eurocode 4 stiffness formula.

  8. Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations.

    Science.gov (United States)

    Yao, Bo; Belin, Pascal; Scheepers, Christoph

    2012-04-15

    In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Low dose effects and non-monotonic dose responses for endocrine active chemicals: Science to practice workshop: Workshop summary

    DEFF Research Database (Denmark)

    Beausoleil, Claire; Ormsby, Jean-Nicolas; Gies, Andreas

    2013-01-01

    A workshop was held in Berlin September 12–14th 2012 to assess the state of the science of the data supporting low dose effects and non-monotonic dose responses (“low dose hypothesis”) for chemicals with endocrine activity (endocrine disrupting chemicals or EDCs). This workshop consisted of lectu...

  10. Solvability conditions of the Cauchy problem for two-dimensional systems of linear functional differential equations with monotone operators

    Czech Academy of Sciences Publication Activity Database

    Šremr, Jiří

    2007-01-01

    Roč. 132, č. 3 (2007), s. 263-295 ISSN 0862-7959 R&D Projects: GA ČR GP201/04/P183 Institutional research plan: CEZ:AV0Z10190503 Keywords : system of functional differential equations with monotone operators * initial value problem * unique solvability Subject RIV: BA - General Mathematics

  11. The Marotto Theorem on planar monotone or competitive maps

    International Nuclear Information System (INIS)

    Yu Huang

    2004-01-01

    In 1978, Marotto generalized Li-Yorke's results on the criterion for chaos from one-dimensional discrete dynamical systems to n-dimensional discrete dynamical systems, showing that the existence of a non-degenerate snap-back repeller implies chaos in the sense of Li-Yorke. This theorem is very useful in predicting and analyzing discrete chaos in multi-dimensional dynamical systems. However, it is well known that there is an error in the conditions of the original Marotto theorem, and several authors have tried to correct it in different ways. Chen, Hsu and Zhou pointed out that the verification of 'non-degeneracy' of a snap-back repeller is in general the most difficult step, and expected, 'almost beyond reasonable doubt', that the existence of even a degenerate snap-back repeller still implies chaos; they posed this as a conjecture. In this paper, we give necessary and sufficient conditions for chaos in the sense of Li-Yorke for planar monotone or competitive discrete dynamical systems and solve the Chen-Hsu-Zhou conjecture for such systems.

  12. The Monotonic Lagrangian Grid for Fast Air-Traffic Evaluation

    Science.gov (United States)

    Alexandrov, Natalia; Kaplan, Carolyn; Oran, Elaine; Boris, Jay

    2010-01-01

    This paper describes the continued development of a dynamic air-traffic model, ATMLG, intended for rapid evaluation of rules and methods to control and optimize transport systems. The underlying data structure is based on the Monotonic Lagrangian Grid (MLG), which is used for sorting and ordering positions and other data needed to describe N moving bodies and their interactions. In ATMLG, the MLG is combined with algorithms for collision avoidance and updating aircraft trajectories. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. In this paper, we use ATMLG to examine how the ability to maintain a required separation between aircraft decreases as the number of aircraft in the volume increases. This requires keeping track of the primary and subsequent collision avoidance maneuvers necessary to maintain a five-mile separation distance between all aircraft. Simulation results show that the number of collision avoidance moves increases exponentially with the number of aircraft in the volume.
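
    The MLG ordering described above can be illustrated with a minimal two-dimensional construction (a sketch of the general idea, not the ATMLG code): sort the points by y into rows, then by x within each row, so that spatial neighbors end up close in the index arrays:

```python
import random

def build_mlg(points, nx, ny):
    """Arrange N = nx*ny points into a grid that is monotonic in y across
    rows and in x along each row (a minimal 2-D Monotonic Lagrangian Grid)."""
    assert len(points) == nx * ny
    by_y = sorted(points, key=lambda p: p[1])                      # rows by y
    return [sorted(by_y[j * nx:(j + 1) * nx]) for j in range(ny)]  # x in row

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(16)]
grid = build_mlg(pts, 4, 4)

# MLG property: x never decreases along a row, and every point in row j+1
# lies at or above every point in row j
x_monotone = all(grid[j][i][0] <= grid[j][i + 1][0]
                 for j in range(4) for i in range(3))
y_monotone = all(max(p[1] for p in grid[j]) <= min(p[1] for p in grid[j + 1])
                 for j in range(3))
```

    Because the grid indices track spatial position, candidate conflicts for an aircraft can be checked against a few index neighbors instead of all N bodies, which is what makes the sort-based approach scale well.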

  13. Evaluation of the Monotonic Lagrangian Grid and Lat-Long Grid for Air Traffic Management

    Science.gov (United States)

    Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay

    2011-01-01

    The Air Traffic Monotonic Lagrangian Grid (ATMLG) is used to simulate a 24 hour period of air traffic flow in the National Airspace System (NAS). During this time period, there are 41,594 flights over the United States, and the flight plan information (departure and arrival airports and times, and waypoints along the way) is obtained from a Federal Aviation Administration (FAA) Enhanced Traffic Management System (ETMS) dataset. Two simulation procedures are tested and compared: one based on the Monotonic Lagrangian Grid (MLG), and the other based on the stationary Latitude-Longitude (Lat-Long) grid. Simulating one full day of air traffic over the United States required the following amounts of CPU time on a single processor of an SGI Altix: 88 s for the MLG method, and 163 s for the Lat-Long grid method. We present a discussion of the amount of CPU time required for each of the simulation processes (updating aircraft trajectories, sorting, conflict detection and resolution, etc.), and show that the main advantage of the MLG method is that it is a general sorting algorithm that can sort on multiple properties. We discuss how many MLG neighbors must be considered in the separation assurance procedure in order to ensure a five-mile separation buffer between aircraft, and we investigate the effect of removing waypoints from aircraft trajectories. When aircraft choose their own trajectories, there are more flights with shorter durations and fewer CD&R maneuvers, resulting in significant fuel savings.

  14. Differential diagnosis of normal pressure hydrocephalus by MRI mean diffusivity histogram analysis.

    Science.gov (United States)

    Ivkovic, M; Liu, B; Ahmed, F; Moore, D; Huang, C; Raj, A; Kovanlikaya, I; Heier, L; Relkin, N

    2013-01-01

    Accurate diagnosis of normal pressure hydrocephalus is challenging because the clinical symptoms and radiographic appearance of NPH often overlap those of other conditions, including age-related neurodegenerative disorders such as Alzheimer and Parkinson diseases. We hypothesized that radiologic differences between NPH and AD/PD can be characterized by a robust and objective MR imaging DTI technique that does not require intersubject image registration or operator-defined regions of interest, thus avoiding many pitfalls common in DTI methods. We collected 3T DTI data from 15 patients with probable NPH and 25 controls with AD, PD, or dementia with Lewy bodies. We developed a parametric model for the shape of intracranial mean diffusivity histograms that separates brain and ventricular components from a third component composed mostly of partial volume voxels. To accurately fit the shape of the third component, we constructed a parametric function named the generalized Voss-Dyke function. We then examined the use of the fitting parameters for the differential diagnosis of NPH from AD, PD, and DLB. Using parameters for the MD histogram shape, we distinguished clinically probable NPH from the 3 other disorders with 86% sensitivity and 96% specificity. The technique yielded 86% sensitivity and 88% specificity when differentiating NPH from AD only. An adequate parametric model for the shape of intracranial MD histograms can distinguish NPH from AD, PD, or DLB with high sensitivity and specificity.
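The three-component histogram idea can be illustrated with synthetic data. The sketch below is hypothetical and does not reproduce the paper's generalized Voss-Dyke function: it draws MD values from an assumed brain peak, a CSF peak, and a partial-volume "bridge", then separates them with crude fixed thresholds in place of the parametric fit.

```python
import numpy as np

# Synthetic stand-in for an intracranial mean-diffusivity (MD)
# histogram (units of 10^-3 mm^2/s): a brain-tissue peak, a
# ventricular-CSF peak, and a broad partial-volume component in
# between.  The peak locations, widths, and mixture fractions here
# are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(1)
brain = rng.normal(0.8, 0.1, 80_000)      # brain tissue voxels
csf = rng.normal(3.0, 0.2, 15_000)        # ventricular CSF voxels
partial = rng.uniform(1.2, 2.6, 5_000)    # partial-volume "bridge"
md = np.concatenate([brain, csf, partial])

hist, edges = np.histogram(md, bins=100, density=True)
# crude threshold split, standing in for the parametric model fit:
frac_brain = np.mean(md < 1.2)   # voxels attributed to brain tissue
frac_csf = np.mean(md > 2.6)     # voxels attributed to ventricles
```

In the paper the shape parameters of the fitted model, rather than threshold counts, carry the diagnostic information; the point of the sketch is only that the intracranial MD distribution decomposes into separable components.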

  15. Application of a repetitive process setting to design of monotonically convergent iterative learning control

    Science.gov (United States)

    Boski, Marcin; Paszke, Wojciech

    2015-11-01

This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to a limited frequency range design specification. A new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
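Trial-to-trial monotonic convergence can be sketched in the standard lifted-system setting. The plant impulse response and learning gain below are illustrative assumptions, not the paper's LMI design: for a trial of N samples the plant is a lower-triangular Toeplitz matrix G, and the update u_{k+1} = u_k + L e_k gives e_{k+1} = (I - GL) e_k, which is monotonically convergent when the induced norm of I - GL is below one.

```python
import numpy as np

# Lifted-system sketch of monotonic trial-to-trial ILC convergence
# (illustrative plant and gain; not the paper's LMI-based design).
N = 20
h = 0.5 ** np.arange(1, N + 1)              # assumed impulse response
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])            # lower-triangular Toeplitz plant
L = 0.8 * np.linalg.inv(G)                   # hypothetical learning gain
E = np.eye(N) - G @ L                        # trial-to-trial error map
rho = np.linalg.norm(E, 2)                   # contraction factor (~0.2 here)

e = np.ones(N)                               # initial trial error
norms = []
for _ in range(10):
    e = E @ e                                # one learning trial
    norms.append(np.linalg.norm(e))
assert rho < 1 and all(np.diff(norms) <= 0)  # monotone decay of the error norm
```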

  16. Monotonous and oscillation instability of mechanical equilibrium of isothermal three-components mixture with zero-gradient density

    International Nuclear Information System (INIS)

    Zhavrin, Yu.I.; Kosov, V.N.; Kul'zhanov, D.U.; Karataev, K.K.

    2000-01-01

The presence of two types of instability of the mechanical equilibrium of a mixture is shown experimentally for isothermal diffusion in a multicomponent system with zero density gradient. It is proved theoretically that, when the partial Rayleigh numbers R 1 and R 2 have different signs (R 1 R 2 < 0), there are two regions, one with monotonic and one with oscillatory instability. The experimental data confirm the presence of these regions and are satisfactorily described by the presented theory. (author)

  17. Dynamical zeta functions for piecewise monotone maps of the interval

    CERN Document Server

    Ruelle, David

    2004-01-01

    Consider a space M, a map f:M\\to M, and a function g:M \\to {\\mathbb C}. The formal power series \\zeta (z) = \\exp \\sum ^\\infty _{m=1} \\frac {z^m}{m} \\sum _{x \\in \\mathrm {Fix}\\,f^m} \\prod ^{m-1}_{k=0} g (f^kx) yields an example of a dynamical zeta function. Such functions have unexpected analytic properties and interesting relations to the theory of dynamical systems, statistical mechanics, and the spectral theory of certain operators (transfer operators). The first part of this monograph presents a general introduction to this subject. The second part is a detailed study of the zeta functions associated with piecewise monotone maps of the interval [0,1]. In particular, Ruelle gives a proof of a generalized form of the Baladi-Keller theorem relating the poles of \\zeta (z) and the eigenvalues of the transfer operator. He also proves a theorem expressing the largest eigenvalue of the transfer operator in terms of the ergodic properties of (M,f,g).
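For a concrete instance of the series, take the doubling map f(x) = 2x mod 1 with weight g ≡ 1: f^m has 2^m − 1 fixed points, so the sum over Fix f^m is 2^m − 1 and the series collapses to the rational function (1 − z)/(1 − 2z). A truncated evaluation confirms this.

```python
import math

# Dynamical zeta function of the doubling map f(x) = 2x mod 1
# with weight g = 1: the inner sum over Fix f^m equals 2^m - 1, so
# zeta(z) = exp(sum_m z^m/m (2^m - 1)) = (1 - z)/(1 - 2z) for |z| < 1/2.
def zeta_truncated(z, M=60):
    return math.exp(sum(z**m / m * (2**m - 1) for m in range(1, M + 1)))

z = 0.1
assert abs(zeta_truncated(z) - (1 - z) / (1 - 2 * z)) < 1e-12
```

The pole at z = 1/2 is the reciprocal of the topological entropy factor (the leading eigenvalue of the transfer operator), the simplest instance of the pole/eigenvalue correspondence the monograph develops.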

  18. Reduction in mean cerebral blood flow measurements using {sup 99m}Tc-ECD-SPECT during normal aging

    Energy Technology Data Exchange (ETDEWEB)

    Kawahata, Nobuya; Daitoh, Nobuyuki; Shirai, Fumie; Hara, Shigeru [Narita Memorial Hospital, Toyohashi, Aichi (Japan)

    1997-10-01

Mean cerebral blood flow (mCBF) was measured by SPECT using the {sup 99m}Tc-ECD-Patlak-Plot method in a selected group of 61 normal non-hospitalized subjects aged 51 to 91 years. The mCBF values were 48.4{+-}4.7 ml/100 g/min in the 50-59 years group, 49.9{+-}5.9 ml/100 g/min in the 60-69 years group, 46.4{+-}6.5 ml/100 g/min in the 70-79 years group, 38.0{+-}3.7 ml/100 g/min in the 80-89 years group, and 38.9 ml/100 g/min in the 90-99 years group. There was a statistically significant reduction of mCBF with advancing age (R=-0.41; p=0.001). Women had significantly higher mCBF values than men up to the 70 years group. In this study, there was no significant laterality in the mCBF between the right and left hemispheres in any decade group. Histories of hypertension, alcohol consumption, and cigarette smoking showed no significant difference in the mCBF values. The present study shows that normal aging is associated with mCBF reduction. (author)

  19. The influence of gas–solid reaction kinetics in models of thermochemical heat storage under monotonic and cyclic loading

    International Nuclear Information System (INIS)

    Nagel, T.; Shao, H.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O.

    2014-01-01

Highlights: • Detailed analysis of cyclic and monotonic loading of thermochemical heat stores. • Fully coupled reactive heat and mass transport. • Reaction kinetics can be simplified in systems limited by heat transport. • Operating lines valid during monotonic and cyclic loading. • Local integral degree of conversion to capture heterogeneous material usage. - Abstract: Thermochemical reactions can be employed in heat storage devices. The choice of suitable reactive material pairs involves a thorough kinetic characterisation by, e.g., extensive thermogravimetric measurements. Before testing a material on a reactor level, simulations with models based on the Theory of Porous Media can be used to establish its suitability. The extent to which the accuracy of the kinetic model influences the results of such simulations is unknown, yet fundamental to the validity of simulations based on chemical models of differing complexity. In this article we therefore compared simulation results on the reactor level based on an advanced kinetic characterisation of a calcium oxide/hydroxide system to those obtained by a simplified kinetic model. Since energy storage is often used for short-term load buffering, the internal reactor behaviour is analysed under cyclic partial loading and unloading in addition to full monotonic charge/discharge operation. It was found that the predictions by both models were very similar qualitatively and quantitatively in terms of thermal power characteristics, conversion profiles, temperature output, reaction duration and pumping powers. Major differences were, however, observed for the reaction rate profiles themselves. We conclude that for systems not limited by kinetics the simplified model seems sufficient to estimate the reactor behaviour. The degree of material usage within the reactor was further shown to strongly vary under cyclic loading conditions and should be considered when designing systems for certain operating regimes.

  20. Non-Interior Continuation Method for Solving the Monotone Semidefinite Complementarity Problem

    International Nuclear Information System (INIS)

    Huang, Z.H.; Han, J.

    2003-01-01

Recently, Chen and Tseng extended non-interior continuation smoothing methods for solving linear/nonlinear complementarity problems to semidefinite complementarity problems (SDCP). In this paper we propose a non-interior continuation method for solving the monotone SDCP based on the smoothed Fischer-Burmeister function, which is shown to be globally linearly and locally quadratically convergent under suitable assumptions. Our algorithm needs to solve at most one linear system of equations at each iteration. In addition, in our analysis of the global linear convergence of the algorithm, we do not need the assumption that the Frechet derivative of the function involved in the SDCP is Lipschitz continuous. For non-interior continuation/smoothing methods for solving the nonlinear complementarity problem, such an assumption has been used widely in the literature in order to achieve global linear convergence results.
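The scalar version of the smoothed Fischer-Burmeister function gives a feel for the approach. The continuation scheme below is a toy for a scalar monotone complementarity problem, not the paper's SDCP algorithm: Newton steps are taken on the smoothed equation while the smoothing parameter is driven to zero.

```python
import math

# Smoothed Fischer-Burmeister function (scalar sketch):
#   phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2 mu^2).
# As mu -> 0 its zero set recovers a >= 0, b >= 0, a*b = 0.
def phi(a, b, mu):
    return a + b - math.sqrt(a * a + b * b + 2 * mu * mu)

def solve_ncp(F, dF, x0=1.0):
    """Toy continuation: one Newton step on phi_mu(x, F(x)) = 0
    per level while mu is halved toward zero."""
    x, mu = x0, 1.0
    for _ in range(60):
        r = math.sqrt(x * x + F(x) ** 2 + 2 * mu * mu)
        d = 1 + dF(x) - (x + F(x) * dF(x)) / r   # d/dx of phi_mu(x, F(x))
        x -= phi(x, F(x), mu) / d
        mu *= 0.5
    return x

# Monotone complementarity problem with F(x) = x - 2: solution x = 2.
x = solve_ncp(lambda t: t - 2.0, lambda t: 1.0)
assert abs(x - 2.0) < 1e-6
```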

  1. Scaling of normalized mean energy and scalar dissipation rates in a turbulent channel flow

    Science.gov (United States)

    Abe, Hiroyuki; Antonia, Robert Anthony

    2011-05-01

Non-dimensional parameters for the mean energy and scalar dissipation rates Cɛ and Cɛθ are examined using direct numerical simulation (DNS) data obtained in a fully developed turbulent channel flow with a passive scalar (Pr = 0.71) at several values of the Kármán (Reynolds) number h+. It is shown that Cɛ and Cɛθ are approximately equal in the near-equilibrium region (viz., y+ = 100 to y/h = 0.7), where the production and dissipation rates of either the turbulent kinetic energy or the scalar variance are approximately equal and the magnitudes of the diffusion terms are negligibly small. The magnitudes of Cɛ and Cɛθ are about 2 and 1 in the logarithmic and outer regions, respectively, when h+ is sufficiently large. The former value is about the same for the channel, pipe, and turbulent boundary layer, reflecting the similarity between the mean velocity and temperature distributions among these three canonical flows. The latter value is, on the other hand, about twice as large as in homogeneous isotropic turbulence due to the existence of the large-scale u structures in the channel. The behaviour of Cɛ and Cɛθ has implications for turbulence modeling. In particular, the similarity between Cɛ and Cɛθ leads to a simple relation for the scalar variance to turbulent kinetic energy time-scale ratio, an important ingredient in the eddy diffusivity model. This similarity also yields a relation between the Taylor and Corrsin microscales and analogous relations, in terms of h+, for the Taylor microscale Reynolds number and Corrsin microscale Peclet number. This dependence is reasonably well supported by both the DNS data at small to moderate h+ and the experimental data of Comte-Bellot [Ph.D. thesis (University of Grenoble, 1963)] at larger h+. It does not, however, apply to a turbulent boundary layer, where the mean energy dissipation rate, normalized on either wall or outer variables, is about 30% larger than for the channel flow.

  2. Complete Monotonicity of a Difference Between the Exponential and Trigamma Functions and Properties Related to a Modified Bessel Function

    DEFF Research Database (Denmark)

    Qi, Feng; Berg, Christian

    2013-01-01

In the paper, the authors find necessary and sufficient conditions for a difference between the exponential function αe^{β/t}, α, β > 0, and the trigamma function ψ′(t) to be completely monotonic on (0,∞). While proving the complete monotonicity, the authors discover some properties related to the fi...

  3. Surfactants non-monotonically modify the onset of Faraday waves

    Science.gov (United States)

    Strickland, Stephen; Shearer, Michael; Daniels, Karen

    2017-11-01

When a water-filled container is vertically vibrated, subharmonic Faraday waves emerge once the driving from the vibrations exceeds viscous dissipation. In the presence of an insoluble surfactant, a viscous boundary layer forms at the contaminated surface to balance the Marangoni and Boussinesq stresses. For linear gravity-capillary waves in an undriven fluid, the surfactant-induced boundary layer increases the amount of viscous dissipation. In our analysis and experiments, we consider whether similar effects occur for nonlinear Faraday (gravity-capillary) waves. Assuming a finite-depth, infinite-breadth, low-viscosity fluid, we derive an analytic expression for the onset acceleration up to second order in ε = √(1/Re). This expression allows us to include fluid depth and driving frequency as parameters, in addition to the Marangoni and Boussinesq numbers. For millimetric fluid depths and driving frequencies of 30 to 120 Hz, our analysis recovers prior numerical results and agrees with our measurements of NBD-PC surfactant on DI water. In both cases, the onset acceleration increases non-monotonically as a function of the Marangoni and Boussinesq numbers. For shallower systems, our model predicts that surfactants could decrease the onset acceleration. DMS-0968258.

  4. Non-existence of Normal Tokamak Equilibria with Negative Central Current

    International Nuclear Information System (INIS)

    Hammett, G.W.; Jardin, S.C.; Stratton, B.C.

    2003-01-01

Recent tokamak experiments employing off-axis, non-inductive current drive have found that a large central current hole can be produced. The current density is measured to be approximately zero in this region, though in principle there was sufficient current-drive power for the central current density to have gone significantly negative. Recent papers have used a large aspect-ratio expansion to show that normal MHD equilibria (with axisymmetric nested flux surfaces, non-singular fields, and monotonic peaked pressure profiles) cannot exist with negative central current. We extend that proof here to arbitrary aspect ratio, using a variant of the virial theorem to derive a relatively simple integral constraint on the equilibrium. However, this constraint does not, by itself, exclude equilibria with non-nested flux surfaces, or equilibria with singular fields and/or hollow pressure profiles that may be spontaneously generated.

  5. One-dimensional, forward-forward mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.

    2017-03-29

Here, we consider one-dimensional forward-forward mean-field games (MFGs) with congestion, which were introduced to approximate stationary MFGs. We use methods from the theory of conservation laws to examine the qualitative properties of these games. First, by computing Riemann invariants and corresponding invariant regions, we develop a method to prove lower bounds for the density. Next, by combining the lower bound with an entropy function, we prove the existence of global solutions for parabolic forward-forward MFGs. Finally, we construct traveling-wave solutions, which settle the convergence problem for forward-forward MFGs in the negative. A similar technique gives the existence of time-periodic solutions for non-monotonic MFGs.

  6. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Xuejing [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); School of mathematics and statistics, Lanzhou University, Lanzhou 730000 (China); Fouladirad, Mitra, E-mail: mitra.fouladirad@utt.f [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Berenguer, Christophe [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Bordes, Laurent [Universite de Pau et des Pays de l' Adour, LMA UMR CNRS 5142, 64013 PAU Cedex (France)

    2010-08-15

The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different covariate conditions and different maintenance policies is analysed through simulation experiments to compare the policies' performances.
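A toy simulation conveys the trade-off that such inspection/replacement policies optimize. Everything below (the random-walk deterioration, the cost figures, the thresholds) is an illustrative assumption rather than the paper's covariate model.

```python
import numpy as np

# Toy condition-based maintenance simulation (illustrative only; not
# the paper's covariate model).  Deterioration is a non-monotone
# Gaussian random walk with positive drift; the system is inspected
# every `tau` steps and preventively replaced above `threshold`, or
# correctively replaced (at higher cost) whenever it exceeds `fail`.
def average_cost(tau, threshold, drift=0.1, sigma=0.5, fail=10.0,
                 c_insp=1.0, c_prev=10.0, c_corr=100.0,
                 horizon=100_000, seed=0):
    rng = np.random.default_rng(seed)
    level, cost = 0.0, 0.0
    for t in range(1, horizon + 1):
        level += drift + sigma * rng.standard_normal()
        if level >= fail:                  # failure: corrective replacement
            cost += c_corr
            level = 0.0
        elif t % tau == 0:                 # scheduled inspection
            cost += c_insp
            if level >= threshold:         # preventive replacement
                cost += c_prev
                level = 0.0
    return cost / horizon

loose = average_cost(tau=10, threshold=9.0)  # replace only when nearly failed
tight = average_cost(tau=10, threshold=5.0)  # replace early
```

With these hypothetical costs, the earlier-replacement policy trades more preventive replacements for far fewer failures; choosing the inspection period and threshold that minimise this long-run average cost is the kind of optimisation the paper formalises.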

  7. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    International Nuclear Information System (INIS)

    Zhao Xuejing; Fouladirad, Mitra; Berenguer, Christophe; Bordes, Laurent

    2010-01-01

The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different covariate conditions and different maintenance policies is analysed through simulation experiments to compare the policies' performances.

  8. Is normal science good science?

    Directory of Open Access Journals (Sweden)

    Adrianna Kępińska

    2015-09-01

“Normal science” is a concept introduced by Thomas Kuhn in The Structure of Scientific Revolutions (1962). In Kuhn’s view, normal science means “puzzle solving”: solving problems within the paradigm, the framework most successful in solving current major scientific problems, rather than producing major novelties. This paper examines Kuhnian and Popperian accounts of normal science and their criticisms to assess whether normal science is good. The advantage of normal science according to Kuhn was “psychological”: subjective satisfaction from successful “puzzle solving”. Popper argues for an “intellectual” science, one that consistently refutes conjectures (hypotheses) and offers new ideas rather than focusing on personal advantages. His account is criticized as too impersonal and idealistic. Feyerabend’s perspective seems more balanced; he argues for a community that would introduce new ideas, defend old ones, and enable scientists to develop in line with their subjective preferences. The paper concludes that normal science has no one clear-cut set of criteria encompassing its meaning and enabling clear assessment.

  9. Ethics and "normal birth".

    Science.gov (United States)

    Lyerly, Anne Drapkin

    2012-12-01

The concept of "normal birth" has been promoted as ideal by several international organizations, although debate about its meaning is ongoing. In this article, I examine the concept of normalcy to explore its ethical implications and raise a trio of concerns. First, in its emphasis on nonuse of technology as a goal, the concept of normalcy may marginalize women for whom medical intervention is necessary or beneficial. Second, in its emphasis on birth as a socially meaningful event, the mantra of normalcy may unintentionally divert attention from meaning in medically complicated births. Third, the emphasis on birth as a normal and healthy event may be a contributor to the long-standing tolerance for the dearth of evidence guiding the treatment of illness during pregnancy and the failure to responsibly and productively engage pregnant women in health research. Given these concerns, it is worth debating not just what "normal birth" means, but whether the term as an ideal earns its keep.

  10. Simple bounds for counting processes with monotone rate of occurrence of failures

    International Nuclear Information System (INIS)

    Kaminskiy, Mark P.

    2007-01-01

The article discusses some aspects of the analogy between certain classes of distributions used as models for the time to failure of nonrepairable objects, and the counting processes used as models for the failure process of repairable objects. The notion of quantiles for counting processes with a strictly increasing cumulative intensity function is introduced. The classes of counting processes with increasing (decreasing) rate of occurrence of failures are considered. For these classes, useful nonparametric bounds for the cumulative intensity function based on one known quantile are obtained. These bounds, which can be used for repairable objects, are similar to the bounds introduced by Barlow and Marshall [Barlow R, Marshall A. Bounds for distributions with monotone hazard rate, I and II. Ann Math Stat 1964;35:1234-74] for IFRA (DFRA) time-to-failure distributions applicable to nonrepairable objects.

  11. An Optimal Augmented Monotonic Tracking Controller for Aircraft Engines with Output Constraints

    Directory of Open Access Journals (Sweden)

    Jiakun Qin

    2017-01-01

This paper proposes a novel min-max control scheme for aircraft engines, with the aim of transferring a set of regulated outputs between two set-points while ensuring that a set of auxiliary outputs remains within prescribed constraints. In view of this, an optimal augmented monotonic tracking controller (OAMTC) is proposed, by considering a linear plant with input integration, to enhance the ability of the control system to reject uncertainty in system parameters and to ensure that limits are not crossed. The key idea is to use the eigenvalue and eigenvector placement method and genetic algorithms to shape the output responses. The approach is validated by numerical simulation. The results show that the designed OAMTC controller can achieve satisfactory dynamic and steady-state performance and keep the auxiliary outputs within constraints in the transient regime.

  12. Monotone Hybrid Projection Algorithms for an Infinitely Countable Family of Lipschitz Generalized Asymptotically Quasi-Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Watcharaporn Cholamjiak

    2009-01-01

We prove a weak convergence theorem for the modified Mann iteration process for a uniformly Lipschitzian and generalized asymptotically quasi-nonexpansive mapping in a uniformly convex Banach space. We also introduce two kinds of new monotone hybrid methods and obtain strong convergence theorems for an infinitely countable family of uniformly Lipschitzian and generalized asymptotically quasi-nonexpansive mappings in a Hilbert space. The results improve and extend the corresponding ones announced by Kim and Xu (2006) and Nakajo and Takahashi (2003).
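The classical Mann iteration underlying these results is easy to state: x_{n+1} = (1 − a_n) x_n + a_n T(x_n). A minimal scalar sketch (far from the paper's Banach/Hilbert space generality) uses the nonexpansive map T = cos on the reals, whose unique fixed point is the Dottie number ≈ 0.739085.

```python
import math

# Mann iteration x_{n+1} = (1 - a) x_n + a T(x_n) for a nonexpansive
# map T.  Here T = cos (|cos'| <= 1 on the reals), with constant
# averaging coefficient a = 0.5; the iterates converge to the fixed
# point of cos (the Dottie number, ~0.739085).
def mann(T, x0, steps=200, a=0.5):
    x = x0
    for _ in range(steps):
        x = (1 - a) * x + a * T(x)
    return x

x = mann(math.cos, x0=2.0)
```

The averaging is what the hybrid methods of the paper build on; their contribution is obtaining strong (not just weak) convergence for whole families of much more general mappings.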

  13. Use of the nonsteady monotonic heating method for complex determination of thermophysical properties of chemically reacting mixture in the case of non-equilibrium proceeding of the chemical reaction

    International Nuclear Information System (INIS)

    Serebryanyj, G.Z.

    1984-01-01

A theoretical analysis is made of the monotonic heating method as applied to the complex determination of the thermophysical properties of chemically reacting gases. The possibility is shown of simultaneously determining the frozen and equilibrium heat capacity and the frozen and equilibrium heat conduction, provided the reaction proceeds out of equilibrium, over a wide range of temperatures and pressures. The monotonic heating method can thus be used for the complex determination of the thermophysical properties of chemically reacting systems in the case of non-equilibrium proceeding of the chemical reaction.

  14. The Amounts of As, Au, Br, Cu, Fe, Mo, Se and Zn in Normal and Uraemic Human whole Blood. A. Comparison by Means of Neutron Activation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Brune, D; Samsahl, K [AB Atomenergi, Nykoeping (Sweden); Wester, P O [Dept. of Medicine, Karolinska Inst., Serafimerlasarettet, Stockholm (Sweden)

    1964-01-15

Quantitative determinations of the elements As, Au, Br, Cu, Fe, Mo, Se and Zn have been performed in normal and uraemic human whole blood by means of H{sub 2}SO{sub 4} - H-O- digestion, distillation and ion exchange, combined with gamma-spectrometric analysis. The uraemic blood was found to contain about 10 times as much As and twice as much Mo as the normal blood. As regards Fe, the uraemic blood contained slightly less than the normal blood. For the other elements there was no detectable difference.

  15. Laser induced non-monotonic degradation in short-circuit current of triple-junction solar cells

    Science.gov (United States)

    Dou, Peng-Cheng; Feng, Guo-Bin; Zhang, Jian-Min; Song, Ming-Ying; Zhang, Zhen; Li, Yun-Peng; Shi, Yu-Bin

    2018-06-01

In order to study the continuous wave (CW) laser radiation effects and damage mechanism of GaInP/GaAs/Ge triple-junction solar cells (TJSCs), 1-on-1 mode irradiation experiments were carried out. It was found that the post-irradiation short-circuit current (ISC) of the TJSCs initially decreased and then increased with increasing irradiation laser power intensity. To explain this phenomenon, a theoretical model was established and then verified by post-damage tests and equivalent-circuit simulations. It was concluded that laser-induced alterations in the surface reflection and shunt resistance were the main causes of the observed non-monotonic degradation of the ISC of the TJSCs.

  16. Self-consistent normal ordering of gauge field theories

    International Nuclear Information System (INIS)

    Ruehl, W.

    1987-01-01

    Mean-field theories with a real action of unconstrained fields can be self-consistently normal ordered. This leads to a considerable improvement over standard mean-field theory. This concept is applied to lattice gauge theories. First an appropriate real action mean-field theory is constructed. The equations determining the Gaussian kernel necessary for self-consistent normal ordering of this mean-field theory are derived. (author). 4 refs

  17. Telomere length in normal and neoplastic canine tissues.

    Science.gov (United States)

    Cadile, Casey D; Kitchell, Barbara E; Newman, Rebecca G; Biller, Barbara J; Hetler, Elizabeth R

    2007-12-01

    To determine the mean telomere restriction fragment (TRF) length in normal and neoplastic canine tissues. 57 solid-tissue tumor specimens collected from client-owned dogs, 40 samples of normal tissue collected from 12 clinically normal dogs, and blood samples collected from 4 healthy blood donor dogs. Tumor specimens were collected from client-owned dogs during diagnostic or therapeutic procedures at the University of Illinois Veterinary Medical Teaching Hospital, whereas 40 normal tissue samples were collected from 12 control dogs. Telomere restriction fragment length was determined by use of an assay kit. A histologic diagnosis was provided for each tumor by personnel at the Veterinary Diagnostic Laboratory at the University of Illinois. Mean of the mean TRF length for 44 normal samples was 19.0 kilobases (kb; range, 15.4 to 21.4 kb), and the mean of the mean TRF length for 57 malignant tumors was 19.0 kb (range, 12.9 to 23.5 kb). Although the mean of the mean TRF length for tumors and normal tissues was identical, tumor samples had more variability in TRF length. Telomerase, which represents the main mechanism by which cancer cells achieve immortality, is an attractive therapeutic target. The ability to measure telomere length is crucial to monitoring the efficacy of telomerase inhibition. In contrast to many other mammalian species, the length of canine telomeres and the rate of telomeric DNA loss are similar to those reported in humans, making dogs a compelling choice for use in the study of human anti-telomerase strategies.

  18. Subgap resonant quasiparticle transport in normal-superconductor quantum dot devices

    Energy Technology Data Exchange (ETDEWEB)

    Gramich, J., E-mail: joerg.gramich@unibas.ch; Baumgartner, A.; Schönenberger, C. [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland)

    2016-04-25

    We report thermally activated transport resonances for biases below the superconducting energy gap in a carbon nanotube quantum dot (QD) device with a superconducting Pb and a normal metal contact. These resonances are due to the superconductor's finite quasi-particle population at elevated temperatures and can only be observed when the QD life-time broadening is considerably smaller than the gap. This condition is fulfilled in our QD devices with optimized Pd/Pb/In multi-layer contacts, which result in reproducibly large and “clean” superconducting transport gaps with a strong conductance suppression for subgap biases. We show that these gaps close monotonically with increasing magnetic field and temperature. The accurate description of the subgap resonances by a simple resonant tunneling model illustrates the ideal characteristics of the reported Pb contacts and gives an alternative access to the tunnel coupling strengths in a QD.

  19. Explicit Solutions for One-Dimensional Mean-Field Games

    KAUST Repository

    Prazeres, Mariana

    2017-04-05

In this thesis, we consider stationary one-dimensional mean-field games (MFGs) with or without congestion. Our aim is to understand the qualitative features of these games through the analysis of explicit solutions. We are particularly interested in MFGs with a nonmonotonic behavior, which corresponds to situations where agents tend to aggregate. First, we derive the MFG equations from control theory. Then, we compute explicit solutions using the current formulation and examine their behavior. Finally, we represent the solutions and analyze the results. The main contributions of this thesis are the following: first, we develop the current method to solve MFGs explicitly. Second, we analyze in detail non-monotonic MFGs and discover new phenomena: non-uniqueness, discontinuous solutions, empty regions and unhappiness traps. Finally, we address several regularization procedures and examine the stability of MFGs.

  20. Testable, fault-tolerant power interface circuit for normally de-energized loads

    International Nuclear Information System (INIS)

    Hager, R.E.

    1987-01-01

    A power interface circuit is described for supplying power from a power line to a normally de-energized process control apparatus in a pressurized light water nuclear power system in dependence upon three input signals, comprising: voter means for supplying power to the normally de-energized load when at least two of the three input signals indicate that the normally de-energized load should be activated; a normally closed switch, operatively connected to the power line and the voter means, for supplying power to the voter means during ordinary operation; a first resistor operatively connected to the power line; a current detector operatively connected to the first resistor and the voter means; a second resistor operatively connected to the current detector and ground; and current sensor means, operatively connected between the voter means and the normally de-energized load, for detecting the power supplied to the normally de-energized load by the voter means
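The two-out-of-three voter logic at the heart of the claim can be sketched directly; the switch, resistors, and current-sensing elements of the claim are not modelled here.

```python
# Two-out-of-three voter sketch of the interface's trip logic: the
# normally de-energized load is activated only when at least two of
# the three input signals demand actuation, so no single failed
# channel can spuriously energize (or block energizing) the load.
def voter(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

assert voter(True, True, False) is True    # two channels agree: actuate
assert voter(True, False, False) is False  # single channel: stay de-energized
assert voter(True, True, True) is True
```

This majority structure is what makes the circuit fault-tolerant: a stuck-high or stuck-low input on any one channel never changes the vote by itself, and each channel can be exercised individually for testing without actuating the load.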

  1. Influence of pores on crack initiation in monotonic tensile and cyclic loadings in lost foam casting A319 alloy by using 3D in-situ analysis

    International Nuclear Information System (INIS)

    Wang, Long; Limodin, Nathalie; El Bartali, Ahmed; Witz, Jean-François; Seghir, Rian; Buffiere, Jean-Yves; Charkaluk, Eric

    2016-01-01

    The Lost Foam Casting (LFC) process is replacing the conventional gravity Die Casting (DC) process in the automotive industry for the purposes of geometry optimization, cost reduction and consumption control. However, due to its lower cooling rate, LFC produces a coarser microstructure that reduces fatigue life. In order to study the influence of the casting microstructure of an LFC Al-Si alloy on damage micromechanisms under monotonic tensile loading and Low Cycle Fatigue (LCF) at room temperature, an experimental protocol based on three-dimensional (3D) in-situ analysis has been set up and validated. This paper focuses on the influence of pores on crack initiation in monotonic and cyclic tensile loadings. X-ray Computed Tomography (CT) allowed the microstructure of the material to be characterized in 3D and the damage evolution to be followed in-situ, also in 3D. Experimental and numerical mechanical fields were obtained using the Digital Volume Correlation (DVC) technique and Finite Element Method (FEM) simulations, respectively. Pores were shown to have an important influence on strain localization, as large pores generate strain localization zones strong enough for crack initiation in both monotonic tensile and cyclic loadings.

  2. Influence of pores on crack initiation in monotonic tensile and cyclic loadings in lost foam casting A319 alloy by using 3D in-situ analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Long, E-mail: longwang_calt@163.com [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France); Limodin, Nathalie; El Bartali, Ahmed; Witz, Jean-François; Seghir, Rian [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France); Buffiere, Jean-Yves [Laboratoire Matériaux, Ingénierie et Sciences (MATEIS), CNRS UMR5510, INSA-Lyon, 20 Av. Albert Einstein, 69621 Villeurbanne (France); Charkaluk, Eric [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France)

    2016-09-15

    The Lost Foam Casting (LFC) process is replacing the conventional gravity Die Casting (DC) process in the automotive industry for the purposes of geometry optimization, cost reduction and consumption control. However, due to its lower cooling rate, LFC produces a coarser microstructure that reduces fatigue life. In order to study the influence of the casting microstructure of an LFC Al-Si alloy on damage micromechanisms under monotonic tensile loading and Low Cycle Fatigue (LCF) at room temperature, an experimental protocol based on three-dimensional (3D) in-situ analysis has been set up and validated. This paper focuses on the influence of pores on crack initiation in monotonic and cyclic tensile loadings. X-ray Computed Tomography (CT) allowed the microstructure of the material to be characterized in 3D and the damage evolution to be followed in-situ, also in 3D. Experimental and numerical mechanical fields were obtained using the Digital Volume Correlation (DVC) technique and Finite Element Method (FEM) simulations, respectively. Pores were shown to have an important influence on strain localization, as large pores generate strain localization zones strong enough for crack initiation in both monotonic tensile and cyclic loadings.

  3. Behaviour of smart reinforced concrete beam with super elastic shape memory alloy subjected to monotonic loading

    Science.gov (United States)

    Hamid, Nubailah Abd; Ibrahim, Azmi; Adnan, Azlan; Ismail, Muhammad Hussain

    2018-05-01

    This paper discusses the superelastic behaviour of the shape memory alloy NiTi when used as reinforcement in concrete beams, and highlights the ability of SMA bars to recover and reduce permanent deformations of concrete flexural members. Small-scale, simply supported reinforced concrete (RC) beams hybrid with NiTi rebars, together with a control beam, were experimentally investigated under monotonic loads. The control beam measured 125 mm × 270 mm × 1000 mm, with three 12 mm diameter bars as main reinforcement for compression, three 12 mm bars as tension or hanger bars, and 6 mm diameter bars at 100 mm c/c as shear reinforcement. In the hybrid beam, a minimal provision of 200 mm of 12.7 mm superelastic shape memory alloy bar replaced the steel rebar at the critical region of the beam. In conclusion, combining the SMA bars with high-strength steel in the conventional reinforcement showed that the SMA beam exhibits improved performance in terms of crack recovery and deformation. Therefore, the use of NiTi hybridized with steel can substantially diminish earthquake risk and reduce the associated costs in the aftermath.

  4. Critical undrained shear strength of sand-silt mixtures under monotonic loading

    Directory of Open Access Journals (Sweden)

    Mohamed Bensoula

    2014-07-01

    This study uses experimental triaxial tests with monotonic loading to develop empirical relationships for estimating the undrained critical shear strength. The effect of fines content on undrained shear strength is analyzed for different density states. The parametric analysis indicates that, based on the soil void ratio and fines content, the undrained critical shear strength first increases and then decreases as the proportion of fines increases, which demonstrates the influence of fines content on a soil's vulnerability to liquefaction. A series of monotonic undrained triaxial tests was performed on reconstituted saturated sand-silt mixtures. Beyond 30% fines content, a fraction of the silt participates in the soil's skeleton force chain. In this context, the equivalent intergranular void ratio may be an appropriate parameter to express the critical shear strength of the studied soil, as it is able to control the undrained shear strength of non-plastic silt and sand mixtures at different densities.

  5. Assessing dose-response relationships for endocrine disrupting chemicals (EDCs): a focus on non-monotonicity.

    Science.gov (United States)

    Zoeller, R Thomas; Vandenberg, Laura N

    2015-05-15

    The fundamental principle in regulatory toxicology is that all chemicals are toxic and that the severity of effect is proportional to the exposure level. An ancillary assumption is that there are no effects at exposures below the lowest observed adverse effect level (LOAEL), either because no effects exist or because they are not statistically resolvable, implying that they would not be adverse. Chemicals that interfere with hormones violate these principles in two important ways: dose-response relationships can be non-monotonic, as has been reported in hundreds of studies of endocrine disrupting chemicals (EDCs); and effects are often observed below the LOAEL, including in all environmental epidemiological studies examining EDCs. In recognition of the importance of this issue, Lagarde et al. have published the first proposal to qualitatively assess non-monotonic dose-response (NMDR) relationships for use in risk assessments. Their proposal represents a significant step forward in the evaluation of complex datasets for use in risk assessments. Here, we comment on three elements of the Lagarde proposal that we feel need to be assessed more critically, and present our arguments: 1) the use of Klimisch scores to evaluate study quality, 2) the concept of evaluating study quality without topical experts' knowledge and opinions, and 3) the requirement of establishing the biological plausibility of an NMDR before consideration for use in risk assessment. We present evidence-based logical arguments that 1) the use of the Klimisch score should be abandoned for assessing study quality; 2) evaluating study quality requires experts in the specific field; and 3) an understanding of mechanisms should not be required to accept observable, statistically valid phenomena. We hope to contribute to the important and ongoing debate about the impact of NMDRs on risk assessment with positive suggestions.

  6. Driving monotonous routes in a train simulator: the effect of task demand on driving performance and subjective experience.

    Science.gov (United States)

    Dunn, Naomi; Williamson, Ann

    2012-01-01

    Although monotony is widely recognised as being detrimental to performance, its occurrence and effects are not yet well understood. This is despite the fact that task-related characteristics, such as monotony and low task demand, have been shown to contribute to performance decrements over time. Participants completed one of two simulated train-driving scenarios. Both were highly monotonous and differed only in the level of cognitive demand required (i.e. low demand or high demand). The results highlight the seriously detrimental effects of the combination of monotony and low task demand, and clearly show that even a relatively minor increase in cognitive demand can mitigate adverse monotony-related effects on performance for extended periods of time. Monotony is an inherent characteristic of transport industries, including rail, aviation and road transport, and can have an adverse impact on safety, reliability and efficiency. This study highlights possible strategies for mitigating these adverse effects. Practitioner Summary: This study provides evidence for the importance of cognitive demand in mitigating monotony-related effects on performance. The results have clear implications for the rapid onset of performance deterioration in low-demand monotonous tasks and demonstrate that these detrimental performance effects can be overcome with simple solutions, such as making the task more cognitively engaging.

  7. Using an inductive approach for definition making: Monotonicity and boundedness of sequences

    Directory of Open Access Journals (Sweden)

    Deonarain Brijlall

    2009-09-01

    The study investigated fourth-year students' construction of the definitions of monotonicity and boundedness of sequences at the Edgewood Campus of the University of KwaZulu-Natal in South Africa. Structured worksheets based on a guided problem-solving teaching model were used to help students construct the two definitions. A group of twenty-three undergraduate teacher trainees participated in the project. These students specialised in the teaching of mathematics in the Further Education and Training (FET) (Grades 10 to 12) school curriculum. This paper specifically reports on the investigation of students' definition constructions based on a learning theory within the context of advanced mathematical thinking, and contributes to an understanding of how these students constructed the two definitions. It was found that, despite the intervention of a structured design, these definitions were partially or inadequately conceptualised by some students.

  8. Precaval retropancreatic space: Normal anatomy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeon Hee; Kim, Ki Whang; Kim, Myung Jin; Yoo, Hyung Sik; Lee, Jong Tae [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    1992-07-15

    The authors defined the precaval retropancreatic space as the space between the pancreatic head with the portal vein and the IVC, and analyzed the CT findings of this space to determine its normal structures and size. We retrospectively evaluated 100 normal abdominal CT scans to identify the normal anatomic structures of the precaval retropancreatic space. We also measured the distances between these structures and calculated the minimum, maximum and mean values. At the splenoportal confluence level, the normal structures between the portal vein and the IVC were vessels (21%), lymph nodes (19%), and the caudate lobe of the liver (2%), in order of frequency. The maximum AP diameter of the portocaval lymph nodes was 4 mm. The common bile duct (CBD) was seen in 44%, with a mean diameter of 3 mm and a maximum of 11 mm. The CBD was located extrapancreatic (75%) and lateral (60.6%) to the pancreatic head. At the IVC-left renal vein level, the maximum distance between the CBD and the IVC was 5 mm, and the only structure between the posterior pancreatic surface and the IVC was fat tissue. Knowledge of these normal structures and measurements will be helpful in differentiating pancreatic masses from retropancreatic masses such as lymphadenopathy.

  9. Mechanisms of plastic deformation (cyclic and monotonous) of Inconel X750

    International Nuclear Information System (INIS)

    Randrianarivony, H.

    1992-01-01

    Plastic deformation mechanisms under cyclic or monotonic loading are analysed as a function of the initial microstructure of Inconel X750. Two heat-treated Inconel alloys (the first treated at 1366 K for one hour, air cooled, aged at 977 K for 20 hours, and air cooled; the second aged at 1158 K for 24 hours, air cooled, aged at 977 K for 20 hours, and air cooled) are characterized, respectively, by a fine and uniform precipitation of the γ' phase (approximate formula Ni3(Al,Ti)) and by a bimodal distribution of γ' precipitates. In both alloys, dislocation pairs (characteristic of shearing by antiphase-wall creation) are observed, and the mechanism by which the γ' precipitates are crossed, through the creation of superstructure stacking faults, is the same. However, glissile dislocation loops are less numerous than dislocation pairs in the first alloy, leading to a denser band structure in this alloy (dislocation loops are always observed around γ' precipitates). Some explanations of the behaviour of Inconel X750 in the PWR environment are given. (A.B.). refs., figs., tabs.

  10. Living an unstable everyday life while attempting to perform normality - the meaning of living as an alcohol-dependent woman.

    Science.gov (United States)

    Thurang, Anna; Bengtsson Tops, Anita

    2013-02-01

    To illuminate the meaning of living with alcohol dependency as a woman. The number of women suffering from alcohol dependency is increasing, yet there are shortcomings in knowledge about the lived experiences of being a woman with alcohol dependency; knowledge that might be important for meeting these women's specific needs of care. The study has a qualitative design. Fourteen women with alcohol dependency participated in open in-depth interviews. Data were analysed according to a phenomenological-hermeneutic method and interpreted with the help of gender and caring perspectives as well as results from previous research on alcohol dependency. In relation to the women's sense of well-being, four main gender formations were found: an unstable self, involving continual and rapid swings between emotional and bodily reactions; ambivalence, meaning ambiguous feelings towards themselves as human beings and how they lead their lives; introspectiveness, involving reflection, pondering and being introverted; and attempts to perform normality, covering, that is, dealing with life through various strategies and facades to live up to expectations of how to behave as a woman. Living with alcohol dependency as a woman consists of a rapidly shifting everyday life resulting in senses of alienation, as well as private introspection leading to self-degradation and, to a lesser extent, meaningfulness and hope. It also involves managing to perform normality. When supporting women with alcohol dependency towards well-being, professionals need to approach the women's inner thoughts, share them and reflect on them together. To support these women in finding balance in life, caregivers need to cooperate with the women to find out how best to live a life adjusted to each woman's abilities and wishes. © 2012 Blackwell Publishing Ltd.

  11. Using exogenous variables in testing for monotonic trends in hydrologic time series

    Science.gov (United States)

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
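
    The two-stage procedure described above is straightforward to sketch: regress the series on the exogenous variable, then apply the Kendall test to the residuals against time. The toy example below (synthetic data, not the paper's adjusted-variable variant) illustrates the naive version; per the abstract, it tends to understate the trend when the exogenous variable itself varies with time:

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

rng = np.random.default_rng(0)
n = 200
t = np.arange(n, dtype=float)
x = rng.normal(size=n) + 0.01 * t            # exogenous variable, itself trending
y = 2.0 * x + 0.02 * t + rng.normal(size=n)  # series = exogenous effect + trend + noise

# Stage 1: regress the tested variable on the exogenous variable
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b1 * x + b0)

# Stage 2: nonparametric Kendall trend test on the residuals vs. time
tau, p = kendalltau(t, resid)
sen_slope = theilslopes(resid, t)[0]         # Theil-Sen estimate of the residual trend
```

    Because part of the time trend is absorbed by the stage-1 regression (x and t interact), `sen_slope` comes out below the trend one would recover after properly adjusting for the exogenous variable.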

  12. Sterilization of Normal Human Plasma and Some of its Fractions by Means of Gamma Rays

    International Nuclear Information System (INIS)

    López Martínez de Alva, L.; Crespo, Y M.

    1967-01-01

    Owing to the frequency with which normal human plasma transmits hepatitis, various methods of sterilization have been tried. The method most used, but which has been shown to be ineffective, is sterilization of the liquid plasma with ultraviolet rays. Another method is the use of β-propiolactone, but this has also been discontinued because of the changes it produces in the structure of plasma proteins. The object of the present study, which should be regarded as a preliminary report, is to present the results obtained by sterilizing, by means of gamma rays (60Co, 1.3316 MeV) at doses of 2, 2.5 and 3 Mrad, a substance which, like plasma, contains highly labile proteins easily able to undergo structural changes under irradiation. Pure fibrinogen, pure gamma globulin, and albumin of human origin were subjected to the same doses; the results were very satisfactory, in that no appreciable change could be demonstrated as regards the structure, solubility or chemical characteristics of the substances concerned. It was shown by means of simple coagulation tests that some of the proteins involved in this mechanism, such as prothrombin, the Stuart-Prower factor and the Hageman factor, were practically unchanged. All the plasma samples and the various proteins were previously lyophilized to a maximum moisture content of 0.034%, in order to avoid radiolysis of their water content into hydrogen peroxide, which would modify and oxidize the proteins. It was shown that lyophilized plasmas initially contaminated with different strains and viruses remained sterile at doses as low as 2 Mrad. Finally, it was shown that this method is simple and practical, since sterilization can be checked in the final packaging. (author)

  13. Time-dependent, non-monotonic response of warm convective cloud fields to changes in aerosol loading

    Directory of Open Access Journals (Sweden)

    G. Dagan

    2017-06-01

    Large eddy simulations (LESs) with bin microphysics are used here to study the sensitivity of cloud fields to changes in aerosol loading and the time evolution of this response. Similarly to the known response of a single cloud, we show that the mean field properties change in a non-monotonic trend, with an optimal aerosol concentration for which the field reaches its maximal water mass or rain yield. This trend is a result of competition between processes that encourage cloud development and those that suppress it. However, another layer of complexity is added when considering the clouds' impact on the field's thermodynamic properties and how this depends on aerosol loading. Under polluted conditions, rain is suppressed and the non-precipitating clouds act to increase atmospheric instability. This results in warming of the lower part of the cloudy layer (in which there is net condensation) and cooling of the upper part (net evaporation). Evaporation at the upper part of the cloudy layer in the polluted simulations raises humidity at these levels and thus amplifies the development of the next generation of clouds (preconditioning effect). On the other hand, under clean conditions, the precipitating clouds drive net warming of the cloudy layer and net cooling of the sub-cloud layer due to rain evaporation. These two effects act to stabilize the atmospheric boundary layer with time (consumption of the instability). The evolution of the field's thermodynamic properties affects the cloud properties in return, as shown by the migration of the optimal aerosol concentration toward higher values.

  14. Asymptotic Poisson distribution for the number of system failures of a monotone system

    International Nuclear Information System (INIS)

    Aven, Terje; Haukis, Harald

    1997-01-01

    It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution in general gives very good approximations for highly available components. The asymptotic failure rate of the system gives the best results when the process is in steady state, whereas other normalising factors seem preferable when the process is not in steady state. From a computational point of view, the asymptotic system failure rate is the most attractive.
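
    A Monte Carlo check in the spirit of the paper can be sketched for the simplest monotone system, a 1-out-of-2 (parallel) arrangement of identical repairable components with exponential up and down times (all parameter values below are illustrative, not taken from the paper). For failure rate λ much smaller than repair rate μ, the steady-state system failure intensity is approximately 2λ·q·p with q = λ/(λ+μ), so the failure count over a long horizon should be close to Poisson with that rate:

```python
import random

def count_system_failures(lam=1.0, mu=50.0, horizon=20000.0, seed=42):
    """Count failures of a 2-component parallel system (the system fails
    when both components are down) over [0, horizon]."""
    rng = random.Random(seed)
    up = [True, True]
    nxt = [rng.expovariate(lam), rng.expovariate(lam)]  # next transition times
    failures = 0
    while True:
        i = 0 if nxt[0] <= nxt[1] else 1
        t = nxt[i]
        if t > horizon:
            return failures
        system_was_up = up[0] or up[1]
        up[i] = not up[i]                      # component i fails or is repaired
        if system_was_up and not (up[0] or up[1]):
            failures += 1                      # both down: a system failure
        nxt[i] = t + rng.expovariate(lam if up[i] else mu)

n = count_system_failures()
# steady-state failure intensity: 2*lam*q*p, q = lam/(lam+mu), p = mu/(lam+mu)
expected = 2.0 * 1.0 * (1.0 / 51.0) * (50.0 / 51.0) * 20000.0
```

    With a long horizon, the simulated count lands close to the analytic steady-state value, consistent with the Poisson approximation for highly available systems.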

  15. Non-monotonic spatial distribution of the interstellar dust in astrospheres: finite gyroradius effect

    Science.gov (United States)

    Katushkina, O. A.; Alexashov, D. B.; Izmodenov, V. V.; Gvaramadze, V. V.

    2017-02-01

    High-resolution mid-infrared observations of astrospheres show that many of them have a filamentary (cirrus-like) structure. Using numerical models of dust dynamics in astrospheres, we suggest that this filamentary structure might be related to a specific spatial distribution of the interstellar dust around the stars, caused by the gyrorotation of charged dust grains in the interstellar magnetic field. Our numerical model describes the dust dynamics in astrospheres under the influence of the Lorentz force and the assumption of a constant dust charge. Calculations are performed separately for dust grains of different sizes. It is shown that a non-monotonic spatial dust distribution (viewed as filaments) appears for dust grains whose period of gyromotion is comparable with the characteristic time-scale of the dust motion in the astrosphere. Numerical modelling demonstrates that the number of filaments depends on the charge-to-mass ratio of the dust.
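
    The size dependence of the finite-gyroradius criterion is easy to illustrate: for a grain charged to a fixed surface potential, the charge scales as the radius a while the mass scales as a³, so the gyroperiod T = 2πm/(qB) scales as a². The sketch below is an order-of-magnitude illustration only; all parameter values are assumptions, not taken from the paper:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gyro_period(a, B=3e-10, U=0.75, rho=2500.0):
    """Gyroperiod T = 2*pi*m / (q*B) of a spherical dust grain of radius a [m]
    charged to surface potential U [V] in a magnetic field B [T]."""
    q = 4.0 * math.pi * EPS0 * a * U           # grain charge, q ∝ a
    m = rho * (4.0 / 3.0) * math.pi * a ** 3   # grain mass, m ∝ a^3
    return 2.0 * math.pi * m / (q * B)

# Filaments are expected when gyro_period(a) is comparable to the dust
# crossing time of the astrosphere; smaller grains gyrate much faster.
ratio = gyro_period(2e-7) / gyro_period(1e-7)  # doubling a quadruples T
```

    This a² scaling (equivalently, the charge-to-mass ratio q/m ∝ 1/a²) is why grains of different sizes must be treated separately, as the abstract notes.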

  16. Annealing temperature dependent non-monotonic d{sup 0} ferromagnetism in pristine In{sub 2}O{sub 3} nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Haiming; Xing, Pengfei, E-mail: pfxing@tju.edu.cn; Yao, Dongsheng; Wu, Ping

    2017-05-01

    Cubic bixbyite In{sub 2}O{sub 3} nanoparticles with room-temperature d{sup 0} ferromagnetism were prepared by the sol-gel method with the air annealing temperature ranging from 500 to 900 °C. X-ray diffraction, X-ray photoelectron spectroscopy, Raman scattering and photoluminescence were carried out to demonstrate the presence of oxygen vacancies. The lattice constant, the atomic ratio of crystal O and In, the Raman peak at 369 cm{sup −1}, the PL emission peak at 396 nm and the saturation magnetization of the d{sup 0} ferromagnetism all showed a consistent non-monotonic change with increasing annealing temperature. Further considering the relation between the grain size and the distribution of oxygen vacancies, we suggest that the d{sup 0} ferromagnetism in our samples is directly related to the singly charged oxygen vacancies at the surface of the In{sub 2}O{sub 3} nanoparticles. - Highlights: • Effect of air-annealing temperature on the d{sup 0} ferromagnetism of pure In{sub 2}O{sub 3}. • Oxygen-deficiency states of all samples were detected by Raman scattering and PL. • Ferromagnetism changes non-monotonically with increasing annealing temperature. • d{sup 0} ferromagnetism in our In{sub 2}O{sub 3} nanoparticles is related to the surface V{sub O}{sup +}.

  17. Monotonic and cyclic responses of impact polypropylene and continuous glass fiber-reinforced impact polypropylene composites at different strain rates

    KAUST Repository

    Yudhanto, Arief

    2016-03-08

    Impact copolymer polypropylene (IPP), a blend of isotactic polypropylene and ethylene-propylene rubber, and its continuous glass fiber composite form (glass fiber-reinforced impact polypropylene, GFIPP) are promising materials for impact-prone automotive structures. However, basic mechanical properties and corresponding damage of IPP and GFIPP at different rates, which are of keen interest in the material development stage and numerical tool validation, have not been reported. Here, we applied monotonic and cyclic tensile loads to IPP and GFIPP at different strain rates (0.001/s, 0.01/s and 0.1/s) to study the mechanical properties, failure modes and the damage parameters. We used monotonic and cyclic tests to obtain mechanical properties and define damage parameters, respectively. We also used scanning electron microscopy (SEM) images to visualize the failure mode. We found that IPP generally exhibits brittle fracture (with relatively low failure strain of 2.69-3.74%) and viscoelastic-viscoplastic behavior. GFIPP [90]8 is generally insensitive to strain rate due to localized damage initiation mostly in the matrix phase leading to catastrophic transverse failure. In contrast, GFIPP [±45]s is sensitive to the strain rate as indicated by the change in shear modulus, shear strength and failure mode.

  18. Statistical Tests for Frequency Distribution of Mean Gravity Anomalies

    African Journals Online (AJOL)

    The hypothesis that a very large number of 1° x 1° mean gravity anomalies are normally distributed has been rejected at the 5% significance level based on the χ² and the unit normal deviate tests. However, the 50 equal-area mean anomalies derived from the 1° x 1° data have been found to be normally distributed at the same ...
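
    The χ² goodness-of-fit machinery used above is routine to reproduce. The sketch below uses synthetic anomaly data; the sample size, bin count and units are arbitrary choices for illustration, not those of the paper:

```python
import numpy as np
from scipy.stats import norm, chisquare, normaltest

rng = np.random.default_rng(1)
anoms = rng.normal(0.0, 20.0, size=500)  # synthetic mean anomalies, mGal

# chi-square goodness of fit against the fitted normal, equal-probability bins
k = 10
interior = norm.ppf(np.linspace(0.0, 1.0, k + 1)[1:-1],
                    loc=anoms.mean(), scale=anoms.std(ddof=1))
observed = np.bincount(np.digitize(anoms, interior), minlength=k)
# ddof=2 because the mean and standard deviation were estimated from the data
chi2_stat, p_chi2 = chisquare(observed, ddof=2)

# omnibus normality test (skewness + kurtosis) as a cross-check
k2_stat, p_k2 = normaltest(anoms)
```

    Rejection at the 5% level corresponds to `p_chi2 < 0.05`; here the data are drawn from a true normal, so the test usually does not reject.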

  19. U.S. Monthly Climate Normals (1971-2000)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — U.S. Monthly Climate Normals (1971-2000) (DSI-9641C) include climatological normals based on monthly maximum, minimum, and mean temperature and monthly total...

  20. Studies on the zeros of Bessel functions and methods for their computation: 2. Monotonicity, convexity, concavity, and other properties

    Science.gov (United States)

    Kerimov, M. K.

    2016-07-01

    This work continues the study of real zeros of first- and second-kind Bessel functions and of general Bessel functions of real variable and order begun in the first part of this paper (see M.K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014)). Some new results concerning such zeros are described and analyzed. Special attention is given to the monotonicity, convexity, and concavity of the zeros with respect to their ranks and other parameters.

  1. Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method

    Energy Technology Data Exchange (ETDEWEB)

    Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu [Univ. Paris Diderot, Sorbonne Paris Cité, Laboratoire Jacques-Louis Lions, UMR 7598, UPMC, CNRS (France)

    2016-12-15

    This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated to state constrained problems. Various test cases and numerical results are presented.

  2. A note on the environmental Kuznets curve for CO2: A pooled mean group approach

    International Nuclear Information System (INIS)

    Iwata, Hiroki; Okada, Keisuke; Samreth, Sovannroeun

    2011-01-01

    This paper investigates whether the environmental Kuznets curve (EKC) hypothesis for CO2 emissions is satisfied using the panel data of 28 countries by taking nuclear energy into account. Using the pooled mean group (PMG) estimation method, our main results indicate that (1) the impacts of nuclear energy on CO2 emissions are significantly negative, (2) CO2 emissions actually increase monotonically within the sample period in all cases: the full sample, OECD countries, and non-OECD countries, and (3) the growth rate of CO2 emissions with income is decreasing in OECD countries and increasing in non-OECD countries.
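
    The monotonicity finding can be illustrated with the usual quadratic EKC specification. The toy sketch below is a plain pooled OLS fit on synthetic data, not the pooled mean group estimator used in the paper: an inverted-U requires a negative coefficient on squared income with the turning point inside the sample range.

```python
import numpy as np

rng = np.random.default_rng(2)
income = rng.uniform(1.0, 10.0, size=300)  # log income per capita (synthetic)
# emissions rise monotonically with income in this synthetic sample
co2 = 0.5 * income + 0.05 * income**2 + rng.normal(0.0, 0.1, size=300)

b2, b1, b0 = np.polyfit(income, co2, 2)    # quadratic EKC specification
turning_point = -b1 / (2.0 * b2)
inverted_u = bool(b2 < 0 and income.min() < turning_point < income.max())
# no inverted-U here: co2 keeps increasing over the whole income range
```

    A monotonic CO2-income relationship, as estimated in the paper, corresponds to `inverted_u` being false.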

  3. A Mixed Monotone Operator Method for the Existence and Uniqueness of Positive Solutions to Impulsive Caputo Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Jieming Zhang

    2013-01-01

    We establish some sufficient conditions for the existence and uniqueness of positive solutions to a class of initial value problems for impulsive fractional differential equations involving the Caputo fractional derivative. Our analysis relies on a fixed point theorem for mixed monotone operators. Our result not only guarantees the existence of a unique positive solution but can also be applied to construct an iterative scheme for approximating it. An example is given to illustrate our main result.
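
    The iterative scheme such fixed point theorems provide can be sketched in the scalar case: for an operator A(x, y) that is increasing in x and decreasing in y, one iterates a coupled pair of sequences from an ordered starting pair, and both sequences squeeze the unique positive fixed point. The operator below is a hypothetical toy example, not the one from the paper:

```python
def A(x, y):
    """Toy mixed monotone operator: increasing in x, decreasing in y (x, y >= 0)."""
    return (x + 2.0 / (1.0 + y)) ** 0.5

# ordered starting pair with u0 <= A(u0, v0) and A(v0, u0) <= v0
u, v = 0.0, 4.0
for _ in range(80):
    u, v = A(u, v), A(v, u)   # coupled iteration from the fixed point theorem

x_star = 0.5 * (u + v)
# x_star satisfies x = sqrt(x + 2/(1+x)), i.e. x^3 - x - 2 = 0
```

    The two sequences converge to a common limit, which is the unique positive solution of x = A(x, x); this is exactly the constructive approximation the abstract refers to.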

  4. Behaviour of C-shaped angle shear connectors under monotonic and fully reversed cyclic loading: An experimental study

    International Nuclear Information System (INIS)

    Shariati, Mahdi; Ramli Sulong, N.H.; Suhatril, Meldi; Shariati, Ali; Arabnejad Khanouki, M.M.; Sinaei, Hamid

    2012-01-01

    Highlights: ► C-shaped angle connectors show 8.8–33.1% strength degradation under cyclic loading. ► Connector fracture type of failure was experienced in C-shaped angle shear connectors. ► In push-out samples, more cracking was observed in those slabs with longer angles. ► C-shaped angle connectors show good behaviour in terms of the ultimate shear capacity. ► C-shaped angle connectors did not fulfil the requirements for ductility criteria. -- Abstract: This paper presents an evaluation of the structural behaviour of C-shaped angle shear connectors in composite beams, suitable for transferring shear force in composite structures. The results of the experimental programme, comprising eight push-out tests, are presented and discussed. They include the resistance, strength degradation, ductility, and failure modes of C-shaped angle shear connectors under monotonic and fully reversed cyclic loading. The C-shaped angle connectors failed by connector fracture, and after failure more cracking was observed in slabs with longer angles. Moreover, comparing the shear resistance under monotonic and cyclic loading shows that these connectors suffered 8.8–33.1% strength degradation under fully reversed cyclic loading. It is concluded that this shear connector behaves well in terms of ultimate shear capacity but does not satisfy the ductility criteria imposed by Eurocode 4 for a plastic distribution of the shear force between connectors along the beam length.

  5. HCV carriers with normal aminotransferase levels: “normal” does not always mean “healthy”

    Directory of Open Access Journals (Sweden)

    Claudio Puoti

    2013-04-01

    Full Text Available BACKGROUND Approximately 30% of patients with chronic HCV infection show persistently normal ALT levels (PNALT), and another 40% have minimally raised ALT values. Although formerly referred to as “healthy” or “asymptomatic” HCV carriers, it has now become clear that the majority of these patients have some degree of histological liver damage. Controversies still exist regarding the definition of “persistent” ALT normality, the virological and histological features of these subjects, and the natural history and optimal management of chronic hepatitis C (CHC) with normal ALT. Most patients with normal ALT have histologically proven liver damage that may be significant (> F2) in up to 20% of patients and might progress toward more severe degrees of liver fibrosis. A significant proportion of patients (≥ 20%) experience periods of increased serum ALT (flares) associated with disease progression. AIM OF THE STUDY The introduction of the combination therapy of PEG-IFN plus ribavirin allowed response rates higher than 50%, with a favourable risk-benefit ratio also in patients with benign or slowly progressive disease. Given the efficacy of the new treatments, which soon became the standard of care for CHC, it has been suggested that the issue of whether or not to treat subjects with PNALT should be re-evaluated. ALT levels may have less importance in deciding who should be treated. Many other factors might influence the decision to treat, such as the age of the patient, HCV genotype, liver histology, the patient’s motivation, symptoms, extrahepatic manifestations, and comorbid illnesses. The role of non-invasive tools for the assessment of liver fibrosis (transient hepatic elastography) remains to be further validated.

  6. Cognition of normal pattern of myocardial polar map

    International Nuclear Information System (INIS)

    Fujisawa, Yasuo; Sasaki, Jiro; Kashima, Kenji; Matsumura, Yasushi; Yamamoto, Kazuhiro; Kodama, Kazuhisa

    1989-01-01

    When the presence of ischemic heart disease is diagnosed from computer-generated polar maps of exercise thallium images, assessing only the presence of a defect is not sufficient, because many normal subjects are then classified as abnormal. The mean+2SD of the defect severity index (DSI) of 118 normal subjects was 120, and we defined patients with DSI≤120 as normal. However, among 139 patients with DSI≤120, 28 had significant coronary stenosis (>75%), a false-negative rate of 20%. We then examined the pattern of the defects and found that the polar-map diagrams were patchy in 109 of 111 subjects with normal coronary arteries but in only 16 of 28 patients with ischemic heart disease; that is, patchy patterns occur far more frequently in normal subjects. Among the 125 patients whose polar-map diagrams were patchy, 16 had ischemic heart disease (a false-negative rate of 13%). We conclude that considering the DSI and the polar-map pattern together makes a more accurate diagnosis possible. (author)
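    The false-negative percentages quoted in this record follow directly from its counts; a quick recomputation (all numbers taken from the abstract):

```python
def false_negative_rate(missed, classified_normal):
    """Fraction of patients labeled 'normal' who actually had disease."""
    return missed / classified_normal

# DSI <= 120 alone: 28 of 139 patients classified normal had >75% stenosis.
fn_dsi = false_negative_rate(28, 139)       # ~0.20, i.e. about 20%
# DSI combined with the patchy polar-map pattern: 16 of 125 were missed.
fn_combined = false_negative_rate(16, 125)  # 0.128, i.e. about 13%
```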

  7. Mean-Gini Portfolio Analysis: A Pedagogic Illustration

    Directory of Open Access Journals (Sweden)

    C. Sherman Cheung

    2007-05-01

    Full Text Available It is well known in the finance literature that mean-variance analysis is inappropriate when asset returns are not normally distributed or investors’ preferences over returns are not characterized by quadratic functions. The normality assumption has been widely rejected in cases of emerging market equities and hedge funds. The mean-Gini framework is an attractive alternative as it is consistent with stochastic dominance rules regardless of the probability distributions of asset returns. Applying mean-Gini to a portfolio setting involving multiple assets, however, has always been challenging for business students whose training in optimization is limited. This paper introduces a simple spreadsheet-based approach to mean-Gini portfolio optimization, thus allowing the mean-Gini concepts to be covered more effectively in finance courses such as portfolio theory and investment analysis.
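    As a minimal sketch of the risk measure underlying the mean-Gini framework, the snippet below estimates Gini's mean difference for a return series. This is one common sample estimator (the average absolute difference over all distinct pairs); conventions in the literature differ by a constant factor, and this is not the authors' spreadsheet model:

```python
import numpy as np

def gini_mean_difference(returns):
    """Sample Gini mean difference: mean |x_i - x_j| over distinct pairs.

    Uses the sorted-order identity
        sum_{i<j} (x_j - x_i) = sum_i (2i - n + 1) * x_i   (0-based i)
    for an O(n log n) computation instead of the O(n^2) double loop.
    """
    x = np.sort(np.asarray(returns, dtype=float))
    n = x.size
    i = np.arange(n)
    pair_sum = np.sum((2 * i - n + 1) * x)
    return 2.0 * pair_sum / (n * (n - 1))
```

    In a mean-Gini portfolio setting, this quantity plays the role that variance plays in mean-variance analysis: one minimizes it subject to a target expected return.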

  8. Mean blood velocities and flow impedance in the fetal descending thoracic aortic and common carotid artery in normal pregnancy.

    Science.gov (United States)

    Bilardo, C M; Campbell, S; Nicolaides, K H

    1988-12-01

    A linear array pulsed Doppler duplex scanner was used to establish reference ranges for mean blood velocities and flow impedance (Pulsatility Index = PI) in the descending thoracic aorta and in the common carotid artery from 70 fetuses in normal pregnancies at 17-42 weeks' gestation. The aortic velocity increased with gestation up to 32 weeks, then remained constant until term, when it decreased. In contrast, the velocity in the common carotid artery increased throughout pregnancy. The PI in the aorta remained constant throughout pregnancy, while in the common carotid artery it fell steeply after 32 weeks. These results suggest that with advancing gestation there is a redistribution of the fetal circulation with decreased impedance to flow to the fetal brain, presumably to compensate for the progressive decrease in fetal blood PO2.
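    The flow-impedance index used in this record is Gosling's Pulsatility Index, PI = (peak systolic velocity − end-diastolic velocity) / mean velocity. A one-line sketch, with purely illustrative velocity values:

```python
def pulsatility_index(v_peak_systolic, v_end_diastolic, v_mean):
    """Gosling's Pulsatility Index: (S - D) / mean velocity.

    All three inputs must share the same units (e.g. cm/s); the index
    itself is dimensionless. Values below are illustrative only.
    """
    return (v_peak_systolic - v_end_diastolic) / v_mean
```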

  9. The resource theory of quantum reference frames: manipulations and monotones

    International Nuclear Information System (INIS)

    Gour, Gilad; Spekkens, Robert W

    2008-01-01

    Every restriction on quantum operations defines a resource theory, determining how quantum states that cannot be prepared under the restriction may be manipulated and used to circumvent the restriction. A superselection rule (SSR) is a restriction that arises through the lack of a classical reference frame and the states that circumvent it (the resource) are quantum reference frames. We consider the resource theories that arise from three types of SSRs, associated respectively with lacking: (i) a phase reference, (ii) a frame for chirality, and (iii) a frame for spatial orientation. Focusing on pure unipartite quantum states (and in some cases restricting our attention even further to subsets of these), we explore single-copy and asymptotic manipulations. In particular, we identify the necessary and sufficient conditions for a deterministic transformation between two resource states to be possible and, when these conditions are not met, the maximum probability with which the transformation can be achieved. We also determine when a particular transformation can be achieved reversibly in the limit of arbitrarily many copies and find the maximum rate of conversion. A comparison of the three resource theories demonstrates that the extent to which resources can be interconverted decreases as the strength of the restriction increases. Along the way, we introduce several measures of frameness and prove that these are monotonically non-increasing under various classes of operations that are permitted by the SSR.

  10. Monotonicity of the ratio of modified Bessel functions of the first kind with applications.

    Science.gov (United States)

    Yang, Zhen-Hang; Zheng, Shen-Zhou

    2018-01-01

    Let [Formula: see text] with [Formula: see text] be the modified Bessel functions of the first kind of order v . In this paper, we prove the monotonicity of the function [Formula: see text] on [Formula: see text] for different values of parameter p with [Formula: see text]. As applications, we deduce some new Simpson-Spector-type inequalities for [Formula: see text] and derive a new type of bounds [Formula: see text] ([Formula: see text]) for [Formula: see text]. In particular, we show that the upper bound [Formula: see text] for [Formula: see text] is the minimum over all upper bounds [Formula: see text], where [Formula: see text] and is not comparable with other sharpest upper bounds. We also find such type of upper bounds for [Formula: see text] with [Formula: see text] and for [Formula: see text] with [Formula: see text].

  11. Effect of Pulse Polarity on Thresholds and on Non-monotonic Loudness Growth in Cochlear Implant Users.

    Science.gov (United States)

    Macherey, Olivier; Carlyon, Robert P; Chatron, Jacques; Roman, Stéphane

    2017-06-01

    Most cochlear implants (CIs) activate their electrodes non-simultaneously in order to eliminate electrical field interactions. However, the membrane of auditory nerve fibers needs time to return to its resting state, causing the probability of firing to a pulse to be affected by previous pulses. Here, we provide new evidence on the effect of pulse polarity and current level on these interactions. In experiment 1, detection thresholds and most comfortable levels (MCLs) were measured in CI users for 100-Hz pulse trains consisting of two consecutive biphasic pulses of the same or of opposite polarity. All combinations of polarities were studied: anodic-cathodic-anodic-cathodic (ACAC), CACA, ACCA, and CAAC. Thresholds were lower when the adjacent phases of the two pulses had the same polarity (ACCA and CAAC) than when they were different (ACAC and CACA). Some subjects showed a lower threshold for ACCA than for CAAC while others showed the opposite trend demonstrating that polarity sensitivity at threshold is genuine and subject- or electrode-dependent. In contrast, anodic (CAAC) pulses always showed a lower MCL than cathodic (ACCA) pulses, confirming previous reports. In experiments 2 and 3, the subjects compared the loudness of several pulse trains differing in current level separately for ACCA and CAAC. For 40 % of the electrodes tested, loudness grew non-monotonically as a function of current level for ACCA but never for CAAC. This finding may relate to a conduction block of the action potentials along the fibers induced by a strong hyperpolarization of their central processes. Further analysis showed that the electrodes showing a lower threshold for ACCA than for CAAC were more likely to yield a non-monotonic loudness growth. It is proposed that polarity sensitivity at threshold reflects the local neural health and that anodic asymmetric pulses should preferably be used to convey sound information while avoiding abnormal loudness percepts.

  12. Studies on the zeros of Bessel functions and methods for their computation: 3. Some new works on monotonicity, convexity, and other properties

    Science.gov (United States)

    Kerimov, M. K.

    2016-12-01

    This paper continues the study of real zeros of Bessel functions begun in the previous parts of this work (see M. K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014); 56 (7), 1175-1208 (2016)). Some new results regarding the monotonicity, convexity, concavity, and other properties of zeros are described. Additionally, the zeros of q-Bessel functions are investigated.

  13. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated from next-generation sequencing technology. However, systematic evaluation of normalization methods on microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods, including global normalization, Lowess normalization, the Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method developed for RNA-Seq normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from tests of differential expression (DE) on microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the DE results. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
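    The two empirical metrics used in this evaluation are standard and compact; below are minimal NumPy versions of the mean square error and the two-sample K-S statistic (illustrative implementations, not the authors' code):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    gap between the two empirical CDFs, evaluated at all data points."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

def mse(x, y):
    """Mean squared error between paired measurements."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.mean((x - y) ** 2)
```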

  14. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
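    A minimal sketch of the idea, assuming the simplest within-group limit: classic quantile normalization applied separately per biological group. qsmooth itself smoothly shrinks each group's reference distribution toward the global one, which this sketch omits, and ties are handled only crudely here:

```python
import numpy as np

def quantile_normalize(mat):
    """Classic quantile normalization of a (features x samples) matrix:
    every column is forced to share the mean sorted profile."""
    order = np.argsort(mat, axis=0)
    ranks = np.argsort(order, axis=0)            # rank of each entry per column
    ref = np.sort(mat, axis=0).mean(axis=1)      # reference distribution
    return ref[ranks]

def groupwise_quantile_normalize(mat, groups):
    """Within-group limit of smooth quantile normalization: each sample is
    quantile-normalized only against samples from its own biological group."""
    groups = np.asarray(groups)
    out = np.empty_like(mat, dtype=float)
    for g in np.unique(groups):
        cols = np.where(groups == g)[0]
        out[:, cols] = quantile_normalize(mat[:, cols])
    return out
```

    This preserves global distributional differences between groups while still removing sample-to-sample technical variation within a group, which is the assumption the record describes.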

  15. Non-monotonic swelling of surface grafted hydrogels induced by pH and/or salt concentration

    Science.gov (United States)

    Longo, Gabriel S.; Olvera de la Cruz, Monica; Szleifer, I.

    2014-09-01

    We use a molecular theory to study the thermodynamics of a weak-polyacid hydrogel film that is chemically grafted to a solid surface. We investigate the response of the material to changes in the pH and salt concentration of the buffer solution. Our results show that the pH-triggered swelling of the hydrogel film has a non-monotonic dependence on the acidity of the bath solution. At most salt concentrations, the thickness of the hydrogel film presents a maximum when the pH of the solution is increased from acidic values. The quantitative details of such swelling behavior, which is not observed when the film is physically deposited on the surface, depend on the molecular architecture of the polymer network. This swelling-deswelling transition is the consequence of the complex interplay between the chemical free energy (acid-base equilibrium), the electrostatic repulsions between charged monomers, which are both modulated by the absorption of ions, and the ability of the polymer network to regulate charge and control its volume (molecular organization). In the absence of such competition, for example, for high salt concentrations, the film swells monotonically with increasing pH. A deswelling-swelling transition is similarly predicted as a function of the salt concentration at intermediate pH values. This reentrant behavior, which is due to the coupling between charge regulation and the two opposing effects triggered by salt concentration (screening electrostatic interactions and charging/discharging the acid groups), is similar to that found in end-grafted weak polyelectrolyte layers. Understanding how to control the response of the material to different stimuli, in terms of its molecular structure and local chemical composition, can help the targeted design of applications with extended functionality. We describe the response of the material to an applied pressure and an electric potential. We present profiles that outline the local chemical composition of the

  16. Non-monotonic reorganization of brain networks with Alzheimer’s disease progression

    Directory of Open Access Journals (Sweden)

    Hyoungkyu eKim

    2015-06-01

    Full Text Available Background: Identification of stage-specific changes in the brain networks of patients with Alzheimer’s disease (AD) is critical for rationally designed therapeutics that delay the progression of the disease. However, pathological neural processes and their resulting changes in brain network topology with disease progression are not clearly known. Methods: The current study was designed to investigate the alterations in network topology of resting-state fMRI among patients in three different clinical dementia rating (CDR) groups (CDR = 0.5, 1, and 2) and amnestic mild cognitive impairment (aMCI) and age-matched healthy subject groups. We constructed cost networks from these 5 groups and analyzed their network properties using graph theoretical measures. Results: The topological properties of AD brain networks differed in a non-monotonic, stage-specific manner. Interestingly, local and global efficiency and betweenness of the network were higher in the aMCI and AD (CDR 1) groups than in the prior-stage groups. The number, location, and structure of rich clubs changed dynamically as the disease progressed. Conclusions: The alterations in network topology of the brain are quite dynamic with AD progression, and these dynamic changes in network patterns should be considered meticulously for efficient therapeutic interventions of AD.

  17. Magnetic resonance imaging of normal pituitary gland

    International Nuclear Information System (INIS)

    Yamanaka, Masami; Uozumi, Tohru; Sakoda, Katsuaki; Ohta, Masahiro; Kagawa, Yoshihiro; Kajima, Toshio.

    1986-01-01

    Magnetic resonance imaging (MRI) is a suitable procedure for diagnosing midline lesions such as pituitary adenomas. To clarify the MR findings of the shape, height, and signal intensity of the normal pituitary gland, especially in the median sagittal section, and thus help differentiate microadenomas, fifty-seven cases (9 - 74 years old, 29 men and 28 women), including 50 patients without any sellar or parasellar disease and seven normal volunteers, were studied. The height of the normal pituitary gland varied from 2 to 9 mm (mean: 5.7 mm); the upper surface of the gland was convex in 19.3 %, flat in 49.1 %, and concave in 31.6 %. The mean height of the gland in women in their twenties was 7.5 mm, and the upper convex shape appeared exclusively in women of the second to fourth decades. Nine intrasellar pituitary adenomas (PRL-secreting: 4, GH-secreting: 4, ACTH-secreting: 1), all verified by surgery, were diagnosed using a resistive MR system. The heights of the gland in these cases were from 7 to 15 mm (mean: 11.3 mm); the upper surface was convex in 7 cases. A localized bulging of the upper surface of the gland and a localized depression of the sellar floor were depicted on the coronal and sagittal sections in most cases. Although the GH- and ACTH-secreting adenoma cases showed homogeneous intrasellar contents, in all the PRL-secreting adenoma cases a low-signal-intensity area was detected in the IR images. The mean T1 values of the intrasellar contents of the normal volunteers and the PRL-, GH-, and ACTH-secreting adenoma cases were 367, 416, 355, and 411 ms, respectively. However, in the PRL-secreting adenoma cases, the mean T1 value of the areas showing low signal intensity on IR images was 455 ms, a significant prolongation in comparison with that of the normal pituitary gland. (J.P.N.)

  18. Experimental Studies on Behaviour of Reinforced Geopolymer Concrete Beams Subjected to Monotonic Static Loading

    Science.gov (United States)

    Madheswaran, C. K.; Ambily, P. S.; Dattatreya, J. K.; Ramesh, G.

    2015-06-01

    This work describes the experimental investigation on behaviour of reinforced GPC beams subjected to monotonic static loading. The overall dimensions of the GPC beams are 250 mm × 300 mm × 2200 mm. The effective span of beam is 1600 mm. The beams have been designed to be critical in shear as per IS:456 provisions. The specimens were produced from a mix incorporating fly ash and ground granulated blast furnace slag, which was designed for a compressive strength of 40 MPa at 28 days. The reinforced concrete specimens are subjected to curing at ambient temperature under wet burlap. The parameters being investigated include shear span to depth ratio (a/d = 1.5 and 2.0). Experiments are conducted on 12 GPC beams and four OPCC control beams. All the beams are tested using 2000 kN servo-controlled hydraulic actuator. This paper presents the results of experimental studies.

  19. Exponential mean-square stability of two classes of theta Milstein methods for stochastic delay differential equations

    Science.gov (United States)

    Rouz, Omid Farkhondeh; Ahmadian, Davood; Milev, Mariyan

    2017-12-01

    This paper establishes the exponential mean-square stability of two classes of theta Milstein methods, namely the split-step theta Milstein (SSTM) method and the stochastic theta Milstein (STM) method, for stochastic delay differential equations (SDDEs). We consider the SDDE problem under a coupled monotone condition on the drift and diffusion coefficients, as well as a necessary linear growth condition on the last term of the theta Milstein method. It is proved that the SSTM method with θ ∈ [0, ½] can recover the exponential mean-square stability of the exact solution under some restrictive conditions on the stepsize, whereas for θ ∈ (½, 1] the stability results hold for any stepsize. Then, based on the stability results for the SSTM method, we examine the exponential mean-square stability of the STM method and obtain stability results similar to those of the SSTM method. Numerical examples illustrate the validity of our claims.
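    As an illustration of a theta Milstein recursion, here is a sketch of a drift-implicit STM-type step on the scalar linear test SDDE dX(t) = (aX(t) + bX(t−τ))dt + σX(t)dW(t) with constant history. This is only an illustrative special case (the linear drift makes the implicit step solvable in closed form), not the paper's general SSTM/STM formulation:

```python
import numpy as np

def theta_milstein_sdde(a, b, sigma, tau, theta, h, T, phi=1.0, seed=0):
    """One sample path of a theta Milstein scheme for the linear SDDE
        dX(t) = (a X(t) + b X(t - tau)) dt + sigma X(t) dW(t),
    with constant history phi on [-tau, 0]. Assumes tau >= h."""
    rng = np.random.default_rng(seed)
    m = int(round(tau / h))                  # delay expressed in steps
    n = int(round(T / h))
    x = np.empty(m + n + 1)
    x[: m + 1] = phi                         # history segment
    for k in range(m, m + n):
        dw = rng.normal(0.0, np.sqrt(h))
        drift_expl = a * x[k] + b * x[k - m]
        # The delayed value at t_{k+1} is already known because m >= 1,
        # so only the a-term of the implicit drift needs solving for.
        rhs = (x[k]
               + h * theta * b * x[k + 1 - m]
               + h * (1.0 - theta) * drift_expl
               + sigma * x[k] * dw
               + 0.5 * sigma**2 * x[k] * (dw**2 - h))   # Milstein correction
        x[k + 1] = rhs / (1.0 - h * theta * a)
    return x[m:]                             # solution on [0, T]
```

    With σ = 0 the scheme reduces to the theta method for the deterministic delay equation, which is a convenient sanity check on the implementation.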

  20. Meaning of counterfactual statements in quantum physics

    International Nuclear Information System (INIS)

    Stapp, H.P.

    1998-01-01

    David Mermin suggests that my recent proof pertaining to quantum nonlocality is undermined by an essential ambiguity pertaining to the meaning of counterfactual statements in quantum physics. The ambiguity he cites arises from his imposition of a certain criterion for the meaningfulness of such counterfactual statements. That criterion conflates the meaning of a counterfactual statement with the details of a proof of its validity in such a way as to make the meaning of such a statement dependent upon the context in which it occurs. That dependence violates the normal demand in logic that the meaning of a statement be defined by the words in the statement itself, not by the context in which the statement occurs. My proof conforms to that normal requirement. I describe the context-independent meaning within my proof of the counterfactual statements in question. copyright 1998 American Association of Physics Teachers

  1. One-dimensional, forward-forward mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.; Sedjro, Marc

    2017-01-01

    forward-forward MFGs. Finally, we construct traveling-wave solutions, which settles in a negative way the convergence problem for forward-forward MFGs. A similar technique gives the existence of time-periodic solutions for non-monotonic MFGs.

  2. Event-by-event fluctuations of the mean transverse momentum in 40, 80, and 158 A GeV/c Pb-Au collisions

    International Nuclear Information System (INIS)

    Adamova, D.; Agakichiev, G.; Appelshaeuser, H.; Belaga, V.; Braun-Munzinger, P.; Campagnolo, R.; Castillo, A.; Cherlin, A.; Damjanovic, S.; Dietel, T.; Dietrich, L.; Drees, A.; Esumi, S.; Filimonov, K.; Fomenko, K.; Fraenkel, Z.; Garabatos, C.; Glaessel, P.; Hering, G.; Holeczek, J.; Kushpil, V.; Lenkeit, B.; Ludolphs, W.; Maas, A.; Marin, A.; Milosevic, J.; Milov, A.; Miskowiec, D.; Musa, L.; Panebrattsev, Yu.; Petchenova, O.; Petracek, V.; Pfeiffer, A.; Rak, J.; Ravinovich, I.; Rehak, P.; Richter, M.; Sako, H.; Schmitz, W.; Schukraft, J.; Sedykh, S.; Seipp, W.; Sharma, A.; Shimansky, S.; Slivova, J.; Specht, H.J.; Stachel, J.; Sumbera, M.; Tilsner, H.; Tserruya, I.; Wessels, J.P.; Wienold, T.; Windelband, B.; Wurm, J.P.; Xie, W.; Yurevich, S.; Yurevich, V.

    2003-01-01

    Measurements of event-by-event fluctuations of the mean transverse momentum in Pb-Au collisions at 40, 80, and 158 A GeV/c are presented. A significant excess of mean pT fluctuations at mid-rapidity is observed over the expectation from statistically independent particle emission. The results are somewhat smaller than recent measurements at RHIC. A possible non-monotonic behavior of the mean pT fluctuations as a function of collision energy, which may have indicated that the system has passed the critical point of the QCD phase diagram in the range of μB under investigation, has not been observed. The centrality dependence of mean pT fluctuations in Pb-Au is consistent with an extrapolation from pp collisions assuming that the non-statistical fluctuations scale with multiplicity. The results are compared to calculations by the RQMD and URQMD event generators.
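    A simplified version of the underlying comparison can be sketched directly: estimate the excess of the event-by-event mean-pT variance over the baseline expected for statistically independent emission from the inclusive spectrum. This is an illustrative estimator only, not the experiment's carefully defined fluctuation measure:

```python
import numpy as np

def dynamical_meanpt_fluctuation(events):
    """Excess variance of per-event mean pT over the independent-emission
    baseline  Var(inclusive pT) * <1/N>.  Near zero for purely statistical
    fluctuations; positive when events share a common pT shift.
    `events` is a list of per-event pT arrays."""
    mean_pts = np.array([np.mean(ev) for ev in events])
    mults = np.array([len(ev) for ev in events], dtype=float)
    all_pt = np.concatenate(events)
    baseline = np.var(all_pt) * np.mean(1.0 / mults)
    return np.var(mean_pts) - baseline
```

    Sampling every track independently from the inclusive distribution drives the estimator to zero, while an event-wise common shift (a crude stand-in for collective, non-statistical fluctuations) makes it positive.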

  3. SU-F-I-01: Normalized Mean Glandular Dose Values for Dedicated Breast CT Using Realistic Breast-Shaped Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, A [Department of Radiology, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, CA (United States); Boone, J [Departments of Radiology and Biomedical Engineering, Biomedical Engeering Graduate Group, University of California Davis, Sacramento, CA (United States)

    2016-06-15

    Purpose: To estimate normalized mean glandular dose coefficients for dedicated breast CT (DgN-CT) using breast-CT-derived phantoms and compare them to estimates using cylindrical phantoms. Methods: Segmented breast CT (bCT) volume data sets (N=219) were used to measure effective diameter profiles and were grouped into quintiles by volume. The profiles were averaged within each quintile to represent the range of breast sizes found clinically. These profiles were then used to generate five voxelized computational phantoms (V1, V2, V3, V4, V5 for the small to large phantom sizes, respectively), which were loaded into the MCNP6 lattice geometry to simulate normalized mean glandular dose coefficients (DgN-CT) using the system specifications of the Doheny-prototype bCT scanner in our laboratory. The DgN-CT coefficients derived from the bCT-derived breast-shaped phantoms were compared to those generated using a simpler cylindrical phantom with the same volume and one of the following constraints: (1) length = 1.5 × radius; (2) radius determined at the chest wall (Rcw); and (3) radius determined at the phantom center of mass (Rcm). Results: The change in DgN-CT coefficients, averaged across all phantom sizes, was −0.5%, 19.8%, and 1.3% for constraints 1-3, respectively. This suggests that the cylindrical assumption is a good approximation if the radius is taken at the breast center of mass, but using the radius at the chest wall results in an underestimation of the glandular dose. Conclusion: The DgN-CT coefficients for bCT-derived phantoms were compared against the assumption of a cylindrical phantom and proved to be essentially equivalent when the cylinder satisfied the length = 1.5 × radius constraint or used Rcm. While this suggests that for dosimetry applications a patient’s breast can be approximated as a cylinder (if the correct radius is applied), this assumes a homogeneous composition of breast tissue, and the results may differ if the realistic heterogeneous distribution of glandular tissue is considered.

  4. Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices

    Directory of Open Access Journals (Sweden)

    Chandan Sharma

    2017-08-01

    Full Text Available This article reports an experimental approach to analyze the kink effect phenomenon which is usually observed during the GaN high electron mobility transistor (HEMT operation. De-trapping of charge carriers is one of the prominent reasons behind the kink effect. The commonly observed non-monotonic behavior of kink pattern is analyzed under two different device operating conditions and it is found that two different de-trapping mechanisms are responsible for a particular kink behavior. These different de-trapping mechanisms are investigated through a time delay analysis which shows the presence of traps with different time constants. Further voltage sweep and temperature analysis corroborates the finding that different de-trapping mechanisms play a role in kink behavior under different device operating conditions.

  6. Non-monotonic dose dependence of the Ge- and Ti-centres in quartz

    International Nuclear Information System (INIS)

    Woda, C.; Wagner, G.A.

    2007-01-01

    The dose response of the Ge- and Ti-centres in quartz is studied over a large dose range. After an initial signal increase in the low dose range, both defects show a pronounced decrease in signal intensities for high doses. The model by Euler and Kahan [1987. Radiation effects and anelastic loss in germanium-doped quartz. Phys. Rev. B 35 (9), 4351-4359], in which the signal drop is explained by an enhanced trapping of holes at the electron trapping site, is critically discussed. A generalization of the model is then developed, following similar considerations by Lawless et al. [2005. A model for non-monotonic dose dependence of thermoluminescence (TL). J. Phys. Condens. Matter 17, 737-753], who explained a signal drop in TL by an enhanced recombination rate with electrons at the recombination centre. Finally, an alternative model for the signal decay is given, based on the competition between single and double electron capture at the electron trapping site. From the critical discussion of the different models it is concluded that the double electron capture mechanism is the most probable effect for the dose response

  7. Microstructure-based modelling of the long-term monotonic and cyclic creep of the martensitic steel X 20(22) CrMoV 12 1

    International Nuclear Information System (INIS)

    Henes, D.; Straub, S.; Blum, W.; Moehlig, H.; Granacher, J.; Berger, C.

    1999-01-01

    The current state of development of the composite model of deformation of the martensitic steel X 20(22) CrMoV 12 1 under conditions of creep is briefly described. The model is able to reproduce differences in monotonic creep strength of different melts with slightly different initial microstructures and to simulate cyclic creep with alternating phases of tension and compression. (orig.)

  8. Dose-response meta-analysis of differences in means

    Directory of Open Access Journals (Sweden)

    Alessio Crippa

    2016-08-01

    Full Text Available Abstract Background Meta-analytical methods are frequently used to combine dose-response findings expressed in terms of relative risks. However, no methodology has been established when results are summarized in terms of differences in means of quantitative outcomes. Methods We proposed a two-stage approach. A flexible dose-response model is estimated within each study (first stage) taking into account the covariance of the data points (mean differences, standardized mean differences). Parameters describing the study-specific curves are then combined using a multivariate random-effects model (second stage) to address heterogeneity across studies. Results The method is fairly general and can accommodate a variety of parametric functions. Compared to traditional non-linear models (e.g. Emax, logistic), spline models do not assume any pre-specified dose-response curve. Spline models allow inclusion of studies with a small number of dose levels, and almost any shape, even non-monotonic ones, can be estimated using only two parameters. We illustrated the method using dose-response data arising from five clinical trials on an antipsychotic drug, aripiprazole, and improvement in symptoms in schizoaffective patients. Using the Positive and Negative Syndrome Scale (PANSS), pooled results indicated a non-linear association with the maximum change in mean PANSS score equal to 10.40 (95 % confidence interval 7.48, 13.30) observed for 19.32 mg/day of aripiprazole. No substantial change in PANSS score was observed above this value. An estimated dose of 10.43 mg/day was found to produce 80 % of the maximum predicted response. Conclusion The described approach should be adopted to combine correlated differences in means of quantitative outcomes arising from multiple studies. Sensitivity analysis can be a useful tool to assess the robustness of the overall dose-response curve to different modelling strategies. A user-friendly R package has been developed to facilitate the application of the proposed method.
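
    The two-stage approach described in the abstract can be sketched numerically. The snippet below is a minimal illustration with hypothetical study data, not the authors' R implementation: stage one fits a quadratic dose-response curve per study by weighted least squares (standing in for the spline models discussed above), and stage two pools the study-specific coefficient vectors by inverse-variance (fixed-effect) weighting; a multivariate random-effects model would add a between-study covariance term at that step.

    ```python
    import numpy as np

    def fit_study(dose, mean_diff, se):
        """Stage 1: weighted least squares of mean differences on a
        quadratic dose curve (through the origin, since the reference
        dose has zero mean difference by construction)."""
        X = np.column_stack([dose, dose ** 2])
        W = np.diag(1.0 / se ** 2)
        XtWX = X.T @ W @ X
        beta = np.linalg.solve(XtWX, X.T @ W @ mean_diff)
        cov = np.linalg.inv(XtWX)
        return beta, cov

    def pool_fixed(betas, covs):
        """Stage 2: inverse-variance (fixed-effect) pooling of the
        study-specific coefficient vectors."""
        precisions = [np.linalg.inv(c) for c in covs]
        V = np.linalg.inv(np.sum(precisions, axis=0))
        b = V @ np.sum([P @ b_ for P, b_ in zip(precisions, betas)], axis=0)
        return b, V

    # Two hypothetical studies reporting mean differences at several doses
    d1 = np.array([5.0, 10.0, 20.0]); y1 = np.array([4.0, 7.5, 10.0]); s1 = np.array([1.0, 1.0, 1.2])
    d2 = np.array([2.0, 10.0, 30.0]); y2 = np.array([2.0, 8.0, 9.0]);  s2 = np.array([0.8, 1.1, 1.5])

    (beta1, cov1), (beta2, cov2) = fit_study(d1, y1, s1), fit_study(d2, y2, s2)
    beta, V = pool_fixed([beta1, beta2], [cov1, cov2])
    curve = lambda d: beta[0] * d + beta[1] * d ** 2   # pooled dose-response curve
    ```

    With these toy inputs the pooled quadratic coefficient is negative, giving the flattening-at-high-dose shape the abstract reports for aripiprazole.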

  9. Assessing Meaning Construction on Social Media: A Case of Normalizing Militarism

    NARCIS (Netherlands)

    Jackson, S.T.; Joachim, J.M.; Robinson, N.; Schneiker, A.

    2017-01-01

    With an estimated 3.8 billion Internet users worldwide, new media in the form of Web 2.0 applications and its user-generated content increasingly rival traditional media as the means of circulating and gathering information. Central to the power and importance of social media is its visuality and the

  10. Mean free path of electrons in rare gas solids

    International Nuclear Information System (INIS)

    Schwentner, N.

    1976-07-01

    The energy distributions of photoelectrons of solid Ar, Kr and Xe films with thickness between 10 A and 300 A have been measured in the photon energy range 10 eV to 30 eV using the synchrotron radiation of DESY. By varying the photon energy and the film thickness the dependence of the electron-electron scattering length on the electron kinetic energy has been determined. The mean free path for inelastic electron-electron scattering decreases monotonically from values of the order of 1000 A at the scattering threshold to values between 1 A and 5 A for electron energies 10 eV above threshold. The observed energy dependence can be understood by a simplified bandstructure and a scattering probability described by a product of density of states. The threshold energy for electron-electron scattering lies between twice the energy of the n = 1 excitons and the sum of bandgap and exciton energy. (HK)

  11. Mean Occupation Function of High-redshift Quasars from the Planck Cluster Catalog

    Science.gov (United States)

    Chakraborty, Priyanka; Chatterjee, Suchetana; Dutta, Alankar; Myers, Adam D.

    2018-06-01

    We characterize the distribution of quasars within dark matter halos using a direct measurement technique for the first time at redshifts as high as z ∼ 1. Using the Planck Sunyaev-Zeldovich (SZ) catalog for galaxy groups and the Sloan Digital Sky Survey (SDSS) DR12 quasar data set, we assign host clusters/groups to the quasars and make a measurement of the mean number of quasars within dark matter halos as a function of halo mass. We find that a simple power-law fit of log⟨N⟩ = (2.11 ± 0.01) log(M) − (32.77 ± 0.11) can be used to model the quasar fraction in dark matter halos. This suggests that the quasar fraction increases monotonically as a function of halo mass even to redshifts as high as z ∼ 1.
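
    The quoted power-law fit can be evaluated directly: with base-10 logs and halo mass in solar masses, ⟨N⟩ = 10^(2.11 log M − 32.77). A minimal sketch (the function name is ours):

    ```python
    import math

    def mean_quasar_occupation(halo_mass_msun, slope=2.11, intercept=-32.77):
        """Mean number of quasars per halo from the power-law fit
        log<N> = slope * log(M) + intercept quoted in the abstract
        (base-10 logarithms, halo mass in solar masses)."""
        return 10 ** (slope * math.log10(halo_mass_msun) + intercept)

    # occupation rises monotonically with halo mass
    n_small = mean_quasar_occupation(1e14)   # ~6e-4 quasars per 1e14 Msun halo
    n_large = mean_quasar_occupation(1e15)
    ```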

  12. Comparison of sensory-specific satiety between normal weight and overweight children

    DEFF Research Database (Denmark)

    Rischel, Helene Egebjerg; Nielsen, Louise Aas; Gamborg, Michael Orland

    2016-01-01

    Sensory properties of some foods may be of importance to energy consumption and thus the development and maintenance of childhood obesity. This study compares selected food related qualities in overweight and normal weight children. Ninety-two participants were included; 55 were overweight … with a mean age of 11.6 years (range 6-18 years) and a mean BMI z-score of 2.71 (range 1.29-4.60). The 37 normal weight children had a mean age of 13.0 years (range 6-19 years) and a mean BMI z-score of 0.16 (range -1.71 to 1.24). All children completed a half-hour long meal test consisting of alternation … (p = 0.024), and declines in wanting for something fat, of which the normal weight children displayed an increase (F(1,83) = 4.10, p = 0.046). No differences were found for sensory-specific satiety, wanting for the main food yoghurt, hunger, or satiety. In conclusion, overweight children did not differ from normal weight …

  13. Defecography: A study of normal volunteers

    International Nuclear Information System (INIS)

    Shorvon, P.; Stevenson, G.W.; McHugh, S.; Somers, P.

    1987-01-01

    This study of young volunteers was set up in an effort to establish true normal measurements for defecography with minimum selection bias. The results describe the mean (and the range) for the following: anorectal angle; anorectal junction position at rest; excursion on lift, strain, and evacuation; anal canal length and degree of closure; and the frequency and degree of features such as rectocele and intussusception which have previously been called abnormalities. The results indicate that there is a very wide range of normal appearances. Knowledge of these normal variations is important to avoid overreporting and unnecessary surgery

  14. Reference Priors For Non-Normal Two-Sample Problems

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly

  15. On Hesitant Fuzzy Reducible Weighted Bonferroni Mean and Its Generalized Form for Multicriteria Aggregation

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2014-01-01

    Full Text Available Due to convenience and powerfulness in dealing with vagueness and uncertainty of real situation, hesitant fuzzy set has received more and more attention and has been a hot research topic recently. To differently process and effectively aggregate hesitant fuzzy information and capture their interrelationship, in this paper, we propose the hesitant fuzzy reducible weighted Bonferroni mean (HFRWBM) and present its four prominent characteristics, namely, reducibility, monotonicity, boundedness, and idempotency. Then, we further investigate its generalized form, that is, the generalized hesitant fuzzy reducible weighted Bonferroni mean (GHFRWBM). Based on the discussion of model parameters, some special cases of the HFRWBM and GHFRWBM are studied in detail. In addition, to deal with the situation that multicriteria have connections in hesitant fuzzy information aggregation, a three-step aggregation approach has been proposed on the basis of the HFRWBM and GHFRWBM. In the end, we apply the proposed aggregation operators to multicriteria aggregation and give an example to illustrate our results.
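
    The HFRWBM builds on the classical Bonferroni mean. As a point of reference, here is a minimal sketch of the classical scalar operator BM^(p,q)(a_1,…,a_n) = ((1/(n(n−1))) Σ_{i≠j} a_i^p a_j^q)^(1/(p+q)), which already exhibits the idempotency and boundedness properties listed in the abstract; it is not the hesitant fuzzy operator itself.

    ```python
    from itertools import permutations

    def bonferroni_mean(values, p=1.0, q=1.0):
        """Classical Bonferroni mean BM^{p,q}; the hesitant fuzzy
        operators in the paper generalize this scalar form."""
        n = len(values)
        # sum over all ordered pairs (i, j) with i != j
        s = sum(a ** p * b ** q for a, b in permutations(values, 2))
        return (s / (n * (n - 1))) ** (1.0 / (p + q))

    # Idempotency: the mean of identical inputs is that input
    same = bonferroni_mean([0.4, 0.4, 0.4])       # 0.4
    # Boundedness: result lies between min and max of the inputs
    vals = [0.2, 0.5, 0.9]
    m = bonferroni_mean(vals)
    ```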

  16. Explosive percolation on directed networks due to monotonic flow of activity

    Science.gov (United States)

    Waagen, Alex; D'Souza, Raissa M.; Lu, Tsai-Ching

    2017-07-01

    An important class of real-world networks has directed edges, and in addition, some rank ordering on the nodes, for instance the popularity of users in online social networks. Yet, nearly all research related to explosive percolation has been restricted to undirected networks. Furthermore, information on such rank-ordered networks typically flows from higher-ranked to lower-ranked individuals, such as follower relations, replies, and retweets on Twitter. Here we introduce a simple percolation process on an ordered, directed network where edges are added monotonically with respect to the rank ordering. We show with a numerical approach that the emergence of a dominant strongly connected component appears to be discontinuous. Large-scale connectivity occurs at very high density compared with most percolation processes, and this holds not just for the strongly connected component structure but for the weakly connected component structure as well. We present analysis with branching processes, which explains this unusual behavior and gives basic intuition for the underlying mechanisms. We also show that before the emergence of a dominant strongly connected component, multiple giant strongly connected components may exist simultaneously. By adding a competitive percolation rule with a small bias to link users of similar rank, we show this leads to formation of two distinct components, one of high-ranked users, and one of low-ranked users, with little flow between the two components.
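
    The rank-respecting edge-addition process can be sketched with a union-find structure. This is an illustrative simplification under assumed parameters (2000 nodes, 6000 random edges): it only tracks weakly connected components, whereas the paper's main results concern the strongly connected structure, which would require a full SCC algorithm.

    ```python
    import random

    def find(parent, x):
        """Union-find root lookup with path halving."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def simulate(n=2000, m=6000, seed=0):
        """Add random directed edges oriented from higher-ranked to
        lower-ranked nodes (rank == node index here) and track the size
        of the largest *weakly* connected component after each edge."""
        rng = random.Random(seed)
        parent = list(range(n))
        size = [1] * n
        giant, growth = 1, []
        for _ in range(m):
            u, v = rng.sample(range(n), 2)
            if u < v:                  # orient monotonically w.r.t. rank
                u, v = v, u
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:               # union by size
                if size[ru] < size[rv]:
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]
                giant = max(giant, size[ru])
            growth.append(giant)
        return growth

    growth = simulate()   # non-decreasing giant-component trajectory
    ```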

  17. Event-by-event fluctuations of the mean transverse momentum in 40, 80, and 158 A GeV/c Pb-Au collisions

    Energy Technology Data Exchange (ETDEWEB)

    Adamova, D.; Agakichiev, G.; Appelshaeuser, H. E-mail: appels@physi.uni-heidelberg.de; Belaga, V.; Braun-Munzinger, P.; Campagnolo, R.; Castillo, A.; Cherlin, A.; Damjanovic, S.; Dietel, T.; Dietrich, L.; Drees, A.; Esumi, S.; Filimonov, K.; Fomenko, K.; Fraenkel, Z.; Garabatos, C.; Glaessel, P.; Hering, G.; Holeczek, J.; Kushpil, V.; Lenkeit, B.; Ludolphs, W.; Maas, A.; Marin, A.; Milosevic, J.; Milov, A.; Miskowiec, D.; Musa, L.; Panebrattsev, Yu.; Petchenova, O.; Petracek, V.; Pfeiffer, A.; Rak, J.; Ravinovich, I.; Rehak, P.; Richter, M.; Sako, H.; Schmitz, W.; Schukraft, J.; Sedykh, S.; Seipp, W.; Sharma, A.; Shimansky, S.; Slivova, J.; Specht, H.J.; Stachel, J.; Sumbera, M.; Tilsner, H.; Tserruya, I.; Wessels, J.P.; Wienold, T.; Windelband, B.; Wurm, J.P.; Xie, W.; Yurevich, S.; Yurevich, V

    2003-11-03

    Measurements of event-by-event fluctuations of the mean transverse momentum in Pb-Au collisions at 40, 80, and 158 A GeV/c are presented. A significant excess of mean p_T fluctuations at mid-rapidity is observed over the expectation from statistically independent particle emission. The results are somewhat smaller than recent measurements at RHIC. A possible non-monotonic behavior of the mean p_T fluctuations as function of collision energy, which may have indicated that the system has passed the critical point of the QCD phase diagram in the range of mu_B under investigation, has not been observed. The centrality dependence of mean p_T fluctuations in Pb-Au is consistent with an extrapolation from pp collisions assuming that the non-statistical fluctuations scale with multiplicity. The results are compared to calculations by the RQMD and UrQMD event generators.

  18. Event-by-event fluctuations of the mean transverse momentum in 40, 80, and 158 A GeV/c Pb-Au collisions

    CERN Document Server

    Adamova, D; Appelshäuser, H; Belaga, V V; Braun-Munzinger, P; Campagnolo, R; Castillo, A; Cherlin, A; Damjanovic, S; Dietel, T; Dietrich, L; Drees, A; Esumi, S; Filimonov, K; Fomenko, K; Fraenkel, Zeev; Garabatos, C; Glässel, P; Hering, G; Holeczek, J; Kushpil, V; Lenkeit, B C; Ludolphs, W; Maas, A; Marin, A; Milosevic, J; Milov, A; Miskowiec, D; Musa, L; Panebratsev, Yu A; Petchenova, O Yu; Petracek, V; Pfeiffer, A; Rak, J; Ravinovich, I; Rehak, P; Richter, M; Sako, H; Schmitz, W; Schükraft, Jürgen; Sedykh, S; Seipp, W; Sharma, A; Shimansky, S S; Slivova, J; Specht, H J; Stachel, J; Sumbera, M; Tilsner, H; Tserruya, Itzhak; Wessels, J P; Wienold, T; Windelband, B; Wurm, J P; Xie, W; Yurevich, S; Yurevich, V I

    2003-01-01

    Measurements of event-by-event fluctuations of the mean transverse momentum in Pb-Au collisions at 40, 80, and 158 A GeV/c are presented. A significant excess of mean p_T fluctuations at mid-rapidity is observed over the expectation from statistically independent particle emission. The results are somewhat smaller than recent measurements at RHIC. A possible non-monotonic behaviour of the mean p_T fluctuations as function of collision energy, which may have indicated that the system has passed the critical point of the QCD phase diagram in the range of mu_B under investigation, has not been observed. The centrality dependence of mean p_T fluctuations in Pb-Au is consistent with an extrapolation from pp collisions assuming that the non-statistical fluctuations scale with multiplicity. The results are compared to calculations by the RQMD and UrQMD event generators.

  19. Procesos de negociación de significado en una escuela normal mexicana Processes of negotiation of meaning in a mexican program of teacher education

    Directory of Open Access Journals (Sweden)

    Pedro Antonio Estrada Rodríguez

    2008-12-01

    Full Text Available In this paper, we analyze processes of negotiation of meaning and the development of micropolitics in the experiences of teachers and administrators at a Mexican normal school (teachers' college) following the 1997 reform of normal education. Under that curriculum, students carry out teaching practice in schools during the final year of the program. The study focused on the activities the teachers carried out prior to the preservice practice with the first cohort of students under the 1997 plan of the Licenciatura en Educación Primaria (degree in primary education). We show how the teachers and administrators of the normal school under study went through a process of negotiating meaning and generating micropolitics in their relationship with the national curricular proposal, and how they organized themselves under a scheme we conceive of as a cultivated community of practice.

  20. Brain perfusion ratios by 99mTc HMPAO SPECT utilizing a mean value of the visual cortex to the cerebellum ratio derived from normal subjects

    International Nuclear Information System (INIS)

    Sanchez Catasus, C.; Rodriguez, R.; Cisnero, M.; Palmero, R.; Diaz, O.; Aguila, A.

    2002-01-01

    Aim: Previous results show that the cerebellum (CER) is the best reference region for calculating relative indexes of perfusion (IP) by brain SPECT. However, it cannot be used on patients with bilateral cerebellar hypoperfusion. In such cases the visual cortex (VC) or an average of the whole brain activity (WB) is recommended. VC and WB are less reliable than CER, making it difficult to compare SPECT scans that have been normalized with different values. Materials and Methods: To overcome this difficulty, we developed a method to calculate IP utilizing a reference value defined as VC/⟨VC/CER⟩, where ⟨VC/CER⟩ is the mean value of the VC/CER ratio derived from a normal database, which was assumed to be constant. We called the value VC/⟨VC/CER⟩ the 'Pseudocerebellum' (PCER). For clinical validation, we first tested statistically the VC/CER ratio on a group of 60 [99mTc]-HMPAO SPECT scans of 20 normal subjects and 40 neurological patients with positive SPECT but without involvement of VC and CER. To demonstrate that IP(PCER) approximates IP(CER), we calculated the mean value of the absolute differences ⟨|IP(CER) - IP(PCER)|⟩ on two groups of scans from subjects without involvement of VC and CER: 10 normal subjects (GI); and 40 patients (GII). Finally, using an indirect procedure the method was tested on a third group of SPECT scans of 30 patients with bilateral cerebellar hypoperfusion (GIII). Results: The VC/CER ratio was approximately constant with gender and age at a 95% confidence level; ⟨|IP(CER) - IP(PCER)|⟩ was 1.22%±0.35 and 1.20%±0.42 for GI and GII, respectively. This is less than the within-subject replicability of the HMPAO SPECT studies; and thus demonstrated by an indirect approach that IP(PCER) is a valid procedure by which to evaluate relative perfusion on patients with bilateral cerebellar hypoperfusion and quantitatively comparable to using CER as reference region.
Conclusion: The VC/CER ratio has very little inter-subject variability in individuals where these regions are not
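
    The pseudocerebellum construction reduces to simple ratios. A toy sketch with hypothetical count values (all names and numbers are ours, not the study's): when a patient's VC/CER ratio equals the normal-database mean, the PCER reference reproduces the cerebellar value exactly and the two perfusion indexes coincide.

    ```python
    def perfusion_index(region_counts, reference_counts):
        """Relative perfusion index: region activity over reference activity."""
        return region_counts / reference_counts

    def pseudocerebellum(vc_counts, mean_vc_cer_ratio):
        """'Pseudocerebellum' reference: the patient's visual-cortex (VC)
        value divided by the normal-database mean of the VC/CER ratio,
        so that PCER approximates the (unusable) cerebellar value."""
        return vc_counts / mean_vc_cer_ratio

    # Hypothetical counts for one scan
    cer, vc, roi = 100.0, 80.0, 60.0
    mean_ratio = 0.8                          # assumed normal-database mean of VC/CER
    pcer = pseudocerebellum(vc, mean_ratio)   # 80 / 0.8 = 100, matching cer
    ip_cer = perfusion_index(roi, cer)
    ip_pcer = perfusion_index(roi, pcer)
    ```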

  1. MR measurement of normal corpus callosum in children

    International Nuclear Information System (INIS)

    Kim, Hyoung Sub; Kim, Jong Chul; Kang, Yong Soo; Lee, Young Hwan; Kim, Young Wol

    1997-01-01

    To measure the mean size of the various portions of the corpus callosum in normal Korean children, using MR imaging. Our subjects were 166 children (male : female=100 : 66) aged under 15 whose findings on MR imaging and neurologic examination were normal. Using midsagittal T1-weighted imaging, we measured the length of the brain and corpus callosum, the height of the latter, and the thickness of its genu, body, transitional zone and splenium. The measurements were statistically analysed according to age and sex. Brain length and the size of the various portions of the corpus callosum tended to increase relatively rapidly during the first three years of life, but the rate of growth tended to decrease according to age. The mean length of the brain and corpus callosum and the mean thickness of the splenium of the corpus callosum did not differ according to sex. The mean thickness of the genu, body and transitional zone of the corpus callosum was greater in males than in females. The ratio of the length of the corpus callosum to the anteroposterior diameter of the brain was significantly greater in females than in males (alpha=0.05). Using MR imaging, we measured the mean sizes of the various portions of the corpus callosum in normal children; these values may provide a useful basis for determining changes occurring in its structure

  2. Age-related pattern of normal cranial bone marrow: MRI study

    International Nuclear Information System (INIS)

    Pan Shinong; Li Qi; Li Wei; Chen Zhian; Wu Zhenhua; Guo Qiyong; Liu Yunhui

    2009-01-01

    Objective: To investigate the age-related pattern of normal skull bone marrow with 3.0 T MR T1WI. Methods: Cranial MR T1WI images which were defined to be normal were retrospectively reviewed in 360 cases. Patients with known diffuse bone marrow disease, focal lesions, history of radiation treatment or steroid therapy were excluded, while patients whose cranial MRI and follow-up visits were all normal were included in this study. All the subjects were divided into 7 groups according to age, the oldest being the >50 years group. Mid- and para-sagittal T1WI images were analyzed and the type of cranial bone marrow was classified according to the thickness of the diploe and the pattern of the signal characteristics. Statistical analysis was conducted to reveal the relationship between age and type. Results: (1) The normal skull bone marrow could be divided into four types as follows: Type-I: 115 cases, 47 of which appeared type-Ia and the mean thickness was (1.24±0.31) mm; 68 of which appeared type-Ib and the mean thickness was (1.76±0.37) mm. Type-II: 57 cases and the mean thickness was (2.78±0.69) mm. Type-III: 148 cases, 18 of which appeared type-IIIa and the mean thickness was (2.33±0.65) mm; 88 of which appeared type-IIIb and the mean thickness was (4.01±0.86) mm; 42 of which appeared type-IIIc and the mean thickness was (4.31±0.73) mm. Type-IV: 40 cases, 25 of which appeared type-IVa and the mean thickness was (5.17±1.02) mm; 15 of which appeared type-IVb and the mean thickness was (5.85±1.45) mm. (2) The distribution of marrow types differed significantly with age (χ2 = 266.36, P<0.01). Conclusion: The distribution of normal skull bone marrow shows a characteristic pattern with increasing age, transforming gradually from type-I to type-IV with aging. (authors)

  3. The mean-size dependence of the exchange narrowing in molecular J-aggregates

    International Nuclear Information System (INIS)

    Chen Yulu; Zhao Jijun

    2011-01-01

    The effect of segment-size fluctuations on exchange narrowing in a molecular J-aggregate with site-energy disordered distributions is studied using a one-dimensional Frenkel-exciton model. It is found that the segment-size disorder leads to the width of the absorption spectra deviating from the scaling law σ^(4/3) in the site-energy disorder standard deviation σ, which holds only for a system with site-energy disorder alone. For larger σ, the segment-size disorder has little influence on the linear absorption spectra. With increasing segment mean-length, the absorption line width monotonically increases, and then approaches a saturated value. By comparing a system of larger mean-length segments with a smaller one, both with the same segment-size disorder, it is found that the absorption line width of the former is broadened, and the exchange narrowing effect is reduced. The present result shows that the correlation effect can be partially maintained for the system with larger mean-length segments. -- Research Highlights: → Segment fluctuations affect the exchange narrowing of molecular J-aggregates. → The width of the absorption spectra is found to deviate from the scaling law. → Increasing segment size increases the width, which then saturates. → Exchange narrowing is reduced for larger mean-size segments. → Correlation can be kept partly in the larger size segment.

  4. The COBE normalization for standard cold dark matter

    Science.gov (United States)

    Bunn, Emory F.; Scott, Douglas; White, Martin

    1995-01-01

    The Cosmic Background Explorer Satellite (COBE) detection of microwave anisotropies provides the best way of fixing the amplitude of cosmological fluctuations on the largest scales. This normalization is usually given for an n = 1 spectrum, including only the anisotropy caused by the Sachs-Wolfe effect. This is certainly not a good approximation for a model containing any reasonable amount of baryonic matter. In fact, even tilted Sachs-Wolfe spectra are not a good fit to models like cold dark matter (CDM). Here, we normalize standard CDM (sCDM) to the two-year COBE data and quote the best amplitude in terms of the conventionally used measures of power. We also give normalizations for some specific variants of this standard model, and we indicate how the normalization depends on the assumed values of n, Omega(sub B) and H(sub 0). For sCDM we find ⟨Q⟩ = 19.9 +/- 1.5 micro-K, corresponding to sigma(sub 8) = 1.34 +/- 0.10, with the normalization at large scales being B = (8.16 +/- 1.04) x 10(exp 5)(Mpc/h)(exp 4), and other numbers given in the table. The measured rms temperature fluctuation smoothed on 10 deg is a little low relative to this normalization. This is mainly due to the low quadrupole in the data: when the quadrupole is removed, the measured value of sigma(10 deg) is quite consistent with the best-fitting ⟨Q⟩. The use of ⟨Q⟩ should be preferred over sigma(10 deg), when its value can be determined for a particular theory, since it makes full use of the data.

  5. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    Science.gov (United States)

    Ditz, P.; Borzì, A.

    2008-03-01

    A computer package (CNMS) is presented aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in its most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS Catalogue identifier: ADEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 770 No. of bytes in distributed program, including test data, etc.: 7098 Distribution format: tar.gz Programming language: MATLAB 6 Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM Operating system: Microsoft Windows XP Word size: 32 Classification: 4.9 Nature of problem: Quantum control Solution method: Iterative Running time: 60-600 sec

  6. Creep crack growth by grain boundary cavitation under monotonic and cyclic loading

    Science.gov (United States)

    Wen, Jian-Feng; Srivastava, Ankit; Benzerga, Amine; Tu, Shan-Tung; Needleman, Alan

    2017-11-01

    Plane strain finite deformation finite element calculations of mode I crack growth under small scale creep conditions are carried out. Attention is confined to isothermal conditions and two time histories of the applied stress intensity factor: (i) a monotonic increase to a plateau value subsequently held fixed; and (ii) a cyclic time variation. The crack growth calculations are based on a micromechanics constitutive relation that couples creep deformation and damage due to grain boundary cavitation. Grain boundary cavitation, with cavity growth due to both creep and diffusion, is taken as the sole failure mechanism contributing to crack growth. The influence on the crack growth rate of loading history parameters, such as the magnitude of the applied stress intensity factor, the ratio of the applied minimum to maximum stress intensity factors, the loading rate, the hold time and the cyclic loading frequency, is explored. The crack growth rate under cyclic loading conditions is found to be greater than under monotonic creep loading with the plateau applied stress intensity factor equal to its maximum value under cyclic loading conditions. Several features of the crack growth behavior observed in creep-fatigue tests naturally emerge, for example, a Paris law type relation is obtained for cyclic loading.

  7. Effect of sildenafil citrate (Viagra) on coronary flow in normal subjects.

    Science.gov (United States)

    Ishikura, Fuminobu; Beppu, Shintaro; Ueda, Hiroaki; Nehra, Ajay; Khandheria, Bijoy K

    2008-01-01

    The purpose of this study was to evaluate the effect of sildenafil citrate (Viagra) on coronary function in normal subjects. The study assessed mean blood pressure, left anterior descending coronary artery (LAD) flow, and echocardiographic variables before and 30 and 60 minutes after taking 50 mg of sildenafil citrate. The mean velocity of LAD flow was assessed with Doppler flow imaging. The study subjects were 6 healthy male volunteers (mean age 37 years). The mean velocity of LAD flow increased 60 minutes after taking sildenafil citrate, but there were no other changes. Two volunteers felt mild flushing and one had mild headache during the study. Sildenafil citrate caused vasodilatation in a normal coronary artery without systemic pressure drops. These results suggest that the agent itself did not have negative effects on the heart in normal subjects.

  8. NORMAL AXIAL ANGLES OF THE KNEE JOINT IN ADULT ...

    African Journals Online (AJOL)

    hi-tech

    2003-08-01

    Aug 1, 2003 ... Conclusion: Our study has demonstrated comparative variations in means and ranges of normal axial angles .... population was significantly different from the mean ... case, however, the angle also exhibits racial variations.

  9. CT Densitometry of the Lung in Healthy Nonsmokers with Normal Pulmonary Function

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Tack Sun; Chae, Eun Jin; Seo, Joon Beom; Jung, Young Ju; Oh, Yeon Mok; Lee, Sang Do [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2012-09-15

    To investigate the upper normal limit of low attenuation area in healthy nonsmokers. A total of 36 nonsmokers with normal pulmonary function test underwent a CT scan. Six thresholds (-980 to -930 HU) on inspiration CT and two thresholds (-950 and -910 HU) on expiration CT were used for obtaining low attenuation area. The mean lung density was obtained on both inspiration CT and expiration CT. Descriptive statistics of low attenuation area and the mean lung density, evaluation of difference of low attenuation area and the mean lung density in both sex and age groups, and analysis of the relationship between demographic information and CT parameters were performed. Upper normal limit for low attenuation area was 12.96% on inspiration CT (-950 HU) and 9.48% on expiration CT (-910 HU). Upper normal limit for the mean lung density was -837.58 HU on inspiration CT and -686.82 HU on expiration CT. Low attenuation area and the mean lung density showed no significant differences in both sex and age groups. Body mass index (BMI) was negatively correlated with low attenuation area on inspiration CT (-950 HU, r = -0.398, p = 0.016) and positively correlated with the mean lung density on inspiration CT (r = 0.539, p = 0.001) and expiration CT (r = 0.432, p = 0.009). Age and body surface area were not correlated with low attenuation area or the mean lung density. Low attenuation area on CT densitometry of the lung could be found in healthy nonsmokers with normal pulmonary function, and showed negative association with BMI. Reference values, such as range and upper normal limit for low attenuation area in healthy subjects could be helpful in quantitative analysis and follow up of early emphysema, using CT densitometry of the lung.
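
    The two densitometry quantities in this study, low attenuation area (%LAA) and mean lung density, amount to a threshold count and an average over segmented lung voxels. A minimal sketch with toy HU values (not the study's software):

    ```python
    import numpy as np

    def low_attenuation_area(hu, threshold=-950):
        """Percentage of lung voxels below the HU threshold (%LAA),
        the quantity whose upper normal limit the study estimates."""
        hu = np.asarray(hu)
        return 100.0 * np.count_nonzero(hu < threshold) / hu.size

    def mean_lung_density(hu):
        """Mean lung density in HU over the segmented voxels."""
        return float(np.mean(hu))

    # Toy HU values standing in for segmented lung voxels
    voxels = np.array([-980, -960, -940, -900, -860, -820])
    laa = low_attenuation_area(voxels, threshold=-950)   # 2 of 6 voxels below -950 HU
    mld = mean_lung_density(voxels)
    ```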

  10. Comparison of the monotonic and cyclic mechanical properties of ultrafine-grained low carbon steels processed by continuous and conventional equal channel angular pressing

    International Nuclear Information System (INIS)

    Niendorf, T.; Böhner, A.; Höppel, H.W.; Göken, M.; Valiev, R.Z.; Maier, H.J.

    2013-01-01

Highlights: ► UFG low-carbon steel was successfully processed by continuous ECAP-Conform. ► Continuously processed UFG steel shows high performance. ► High monotonic strength and good ductility. ► Microstructural stability under cyclic loading in the LCF regime. ► Established concepts can be used for predicting the properties. - Abstract: In the current study the mechanical properties of ultra-fine grained low carbon steel processed by conventional equal channel angular pressing and a continuous equal channel angular pressing-Conform process were investigated. Both monotonic and cyclic properties were determined for the steel in either condition and found to be very similar. Microstructural analyses employing electron backscatter diffraction were used for comparison of the low carbon steels processed by either technique. Both steels feature very similar grain sizes and misorientation angle distributions. With respect to fatigue life, the low carbon steel investigated shows properties similar to ultra-fine grained interstitial-free steel processed by conventional equal channel angular pressing, and thus the general fatigue behavior can be addressed following the same routines as proposed for interstitial-free steel. In conclusion, the continuously processed material exhibits very promising properties, and thus equal channel angular pressing-Conform is a promising tool for the production of ultra-fine grained steels in large quantities.

  11. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner , Jérémie; Celisse , Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...

  12. Normal range of gastric emptying in children

    International Nuclear Information System (INIS)

    Thomas, P.; Collins, C.; Francis, L.; Henry, R.; O'Loughlin, E.; John Hunter Children's Hospital, Newcastle, NSW

    1999-01-01

Full text: As part of a larger study looking at gastric emptying times in cystic fibrosis, we assessed the normal range of gastric emptying in a control group of children. Thirteen children (8 girls, 5 boys) aged 4-15 years (mean 10) were studied. Excluded were children with a history of relevant gastrointestinal medical or surgical disease, egg allergy or medication affecting gastric emptying. Imaging was performed at 08.00 h after an overnight fast. The test meal was consumed in under 15 min and comprised one 50 g egg, 80 g commercial pancake mix, 10 ml of polyunsaturated oil, 40 ml of water and 30 g of jam. The meal was labelled with 99mTc-macroaggregates of albumin. Water (150 ml) was also consumed with the test meal. One-minute 128 x 128 images were acquired in the anterior and posterior projections every 5 min for 30 min, then every 15 min until 90 min, with a final image at 120 min. Subjects remained supine for the first 60 min, after which they were allowed to walk around. A time-activity curve was generated using the geometric mean of anterior and posterior activity. The half emptying time ranged from 55 to 107 min (mean 79; mean ± 2 standard deviations, 43-115). Lag time (time for 5% to leave the stomach) ranged from 2 to 26 min (mean 10). The percent emptied at 60 min ranged from 47 to 73% (mean 63%). There was no correlation of half emptying time with age. The normal reference range for a test meal of pancakes has been established for 13 normal children.
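The analysis pipeline in this record (geometric mean of opposed views, time-activity curve, half-emptying time) can be sketched directly; the counts below are illustrative, not from the study:

```python
import numpy as np

# Decay-corrected counts from anterior and posterior views at each imaging time.
t = np.array([0, 15, 30, 45, 60, 75, 90, 120], dtype=float)  # minutes
anterior  = np.array([980, 900, 800, 690, 590, 500, 420, 300], dtype=float)
posterior = np.array([1020, 930, 820, 700, 600, 510, 430, 310], dtype=float)

counts = np.sqrt(anterior * posterior)      # geometric mean of the two views
retention = 100.0 * counts / counts[0]      # % of meal remaining in the stomach

# Half-emptying time: first time retention falls to 50%, by linear
# interpolation (retention is decreasing, so reverse for np.interp).
t_half = float(np.interp(50.0, retention[::-1], t[::-1]))
print(f"half emptying time: {t_half:.0f} min")
```

The geometric mean corrects for attenuation differences as the meal moves between the anterior and posterior stomach.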

  13. Prognostic implications of normal exercise thallium 201 images

    International Nuclear Information System (INIS)

    Wahl, J.M.; Hakki, A.H.; Iskandrian, A.S.

    1985-01-01

A study was made of 455 patients (mean age, 51 years) in whom exercise thallium 201 scintigrams performed for suspected coronary artery disease were normal. Of these, 322 (71%) had typical or atypical angina pectoris, and 68% achieved 85% or more of the maximal predicted heart rate. The exercise ECGs were abnormal in 68 patients (15%), normal in 229 (50%), and inconclusive in 158 (35%). Ventricular arrhythmias occurred during exercise in 194 patients (43%). After a mean follow-up period of 14 months, four patients had had cardiac events: sudden cardiac death in one and nonfatal myocardial infarction in three. None of the four patients had abnormal exercise ECGs; two had typical and two had atypical angina pectoris. Normal exercise thallium 201 images therefore identify patients at low risk for future cardiac events (0.8% per year); patients with abnormal exercise ECGs but normal thallium images have good prognoses; and exercise thallium 201 imaging is a better prognostic predictor than treadmill exercise testing alone, because of the high incidence of inconclusive exercise ECGs and the good prognosis of patients with abnormal exercise ECGs.
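The quoted annual event rate follows from the record's own numbers, as a quick arithmetic check:

```python
# 4 cardiac events among 455 patients over a mean follow-up of 14 months,
# annualized to events per 100 patient-years (a simple approximation that
# treats follow-up as uniform).
events, patients, follow_up_months = 4, 455, 14
annual_rate = 100.0 * events / patients * (12.0 / follow_up_months)
print(f"{annual_rate:.2f}% per year")  # close to the 0.8%/year quoted above
```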

  14. The Cognitive Social Network in Dreams: Transitivity, Assortativity, and Giant Component Proportion Are Monotonic.

    Science.gov (United States)

    Han, Hye Joo; Schweickert, Richard; Xi, Zhuangzhuang; Viau-Quesnel, Charles

    2016-04-01

    For five individuals, a social network was constructed from a series of his or her dreams. Three important network measures were calculated for each network: transitivity, assortativity, and giant component proportion. These were monotonically related; over the five networks as transitivity increased, assortativity increased and giant component proportion decreased. The relations indicate that characters appear in dreams systematically. Systematicity likely arises from the dreamer's memory of people and their relations, which is from the dreamer's cognitive social network. But the dream social network is not a copy of the cognitive social network. Waking life social networks tend to have positive assortativity; that is, people tend to be connected to others with similar connectivity. Instead, in our sample of dream social networks assortativity is more often negative or near 0, as in online social networks. We show that if characters appear via a random walk, negative assortativity can result, particularly if the random walk is biased as suggested by remote associations. Copyright © 2015 Cognitive Science Society, Inc.
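The three network measures discussed in this record are standard and easy to compute; a sketch with networkx on a stand-in graph (Zachary's karate club, a classic waking-life social network), since the dream networks themselves are not distributed with the paper:

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in social network

transitivity = nx.transitivity(G)                       # global clustering
assortativity = nx.degree_assortativity_coefficient(G)  # degree correlation
giant = max(nx.connected_components(G), key=len)
giant_proportion = len(giant) / G.number_of_nodes()     # fraction in giant comp.

print(f"transitivity       = {transitivity:.3f}")
print(f"assortativity      = {assortativity:.3f}")
print(f"giant component    = {giant_proportion:.3f}")
```

Note that even this waking-life network has negative degree assortativity, so the sign alone does not separate dream from waking networks; the paper's argument rests on the joint, monotone behavior of the three measures.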

  15. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
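The idea of augmenting each plotted order statistic with an interval can be sketched with pointwise Beta-based intervals (the i-th order statistic of n uniforms is Beta(i, n-i+1)). These are *pointwise* intervals; the paper calibrates them so that the *simultaneous* coverage is 1-α, which this sketch does not reproduce:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha = 50, 0.05
x = np.sort(rng.normal(size=n))            # synthetic sample
z = (x - x.mean()) / x.std(ddof=1)         # standardized order statistics

# Pointwise 1-alpha interval for each order statistic under normality:
# transform Beta quantiles of uniform order statistics through the normal ppf.
i = np.arange(1, n + 1)
lo = stats.norm.ppf(stats.beta.ppf(alpha / 2, i, n - i + 1))
hi = stats.norm.ppf(stats.beta.ppf(1 - alpha / 2, i, n - i + 1))

outside = int(np.sum((z < lo) | (z > hi)))
print(f"{outside} of {n} points fall outside their pointwise intervals")
```

In the plot-with-confidence version, normality is rejected exactly when at least one point leaves its (simultaneously calibrated) interval.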

  16. Mechanical characteristics under monotonic and cyclic simple shear of spark plasma sintered ultrafine-grained nickel

    International Nuclear Information System (INIS)

    Dirras, G.; Bouvier, S.; Gubicza, J.; Hasni, B.; Szilagyi, T.

    2009-01-01

The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε_VM = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.

  17. Mechanical characteristics under monotonic and cyclic simple shear of spark plasma sintered ultrafine-grained nickel

    Energy Technology Data Exchange (ETDEWEB)

    Dirras, G., E-mail: dirras@univ-paris13.fr [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Bouvier, S. [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Gubicza, J. [Department of Materials Physics, Eoetvoes Lorand University, P.O.B. 32, Budapest H-1518 (Hungary); Hasni, B. [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Szilagyi, T. [Department of Materials Physics, Eoetvoes Lorand University, P.O.B. 32, Budapest H-1518 (Hungary)

    2009-11-25

The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε_VM = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.

  18. Sonographic findings after total hip arthroplasty: normal and complications

    International Nuclear Information System (INIS)

    Lee, Kyoung Rok; Seon, Young Seok; Choi, Ji He; Kim, Sun Su; Kim, Se Jong; Park, Byong Lan; Kim, Byoung Geun

    2002-01-01

The purpose of this study was to determine the efficacy of sonography in the evaluation of normal pseudocapsular morphology and the detection of complications after total hip arthroplasty. Between January 1997 and June 2000, 47 patients (35 men and 12 women aged 24 to 84 (mean, 61) years) were examined using real-time linear-array and convex US units with 3.5-MHz and 10-MHz transducers. Normal capsular morphology was studied in 30 patients with total hip replacements who had been asymptomatic for at least one year, and prosthetic joint infection, demonstrated in six of 17 symptomatic patients, was confirmed at surgery or by US-guided aspiration. Sonograms indicated that a normal pseudocapsule lay straight over the neck of the prosthesis or was slightly convex toward the neck, and that the mean bone-to-pseudocapsule distance was 2.9 mm. However, in the 11 symptomatic patients in whom no evidence of infection was revealed by cultures, the mean distance was 4.7 mm; in the remaining six patients, whose joints were infected (a condition strongly suggested by the presence of extracapsular fluid), the mean distance was 5.5 mm, with no significant difference between the two groups. Sonography can be used to evaluate normal capsular morphology after total hip replacement and to diagnose infection around hip prostheses. In all patients in whom sonography revealed the presence of extra-articular fluid, infection had occurred

  19. Sonographic findings after total hip arthroplasty: normal and complications

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyoung Rok; Seon, Young Seok; Choi, Ji He; Kim, Sun Su; Kim, Se Jong; Park, Byong Lan; Kim, Byoung Geun [Kwangju Christian Hospital, Kwangju (Korea, Republic of)

    2002-04-01

The purpose of this study was to determine the efficacy of sonography in the evaluation of normal pseudocapsular morphology and the detection of complications after total hip arthroplasty. Between January 1997 and June 2000, 47 patients (35 men and 12 women aged 24 to 84 (mean, 61) years) were examined using real-time linear-array and convex US units with 3.5-MHz and 10-MHz transducers. Normal capsular morphology was studied in 30 patients with total hip replacements who had been asymptomatic for at least one year, and prosthetic joint infection, demonstrated in six of 17 symptomatic patients, was confirmed at surgery or by US-guided aspiration. Sonograms indicated that a normal pseudocapsule lay straight over the neck of the prosthesis or was slightly convex toward the neck, and that the mean bone-to-pseudocapsule distance was 2.9 mm. However, in the 11 symptomatic patients in whom no evidence of infection was revealed by cultures, the mean distance was 4.7 mm; in the remaining six patients, whose joints were infected (a condition strongly suggested by the presence of extracapsular fluid), the mean distance was 5.5 mm, with no significant difference between the two groups. Sonography can be used to evaluate normal capsular morphology after total hip replacement and to diagnose infection around hip prostheses. In all patients in whom sonography revealed the presence of extra-articular fluid, infection had occurred.

  20. Phenotype of normal spirometry in an aging population.

    Science.gov (United States)

    Vaz Fragoso, Carlos A; McAvay, Gail; Van Ness, Peter H; Casaburi, Richard; Jensen, Robert L; MacIntyre, Neil; Gill, Thomas M; Yaggi, H Klar; Concato, John

    2015-10-01

    In aging populations, the commonly used Global Initiative for Chronic Obstructive Lung Disease (GOLD) may misclassify normal spirometry as respiratory impairment (airflow obstruction and restrictive pattern), including the presumption of respiratory disease (chronic obstructive pulmonary disease [COPD]). To evaluate the phenotype of normal spirometry as defined by a new approach from the Global Lung Initiative (GLI), overall and across GOLD spirometric categories. Using data from COPDGene (n = 10,131; ages 45-81; smoking history, ≥10 pack-years), we evaluated spirometry and multiple phenotypes, including dyspnea severity (Modified Medical Research Council grade 0-4), health-related quality of life (St. George's Respiratory Questionnaire total score), 6-minute-walk distance, bronchodilator reversibility (FEV1 % change), computed tomography-measured percentage of lung with emphysema (% emphysema) and gas trapping (% gas trapping), and small airway dimensions (square root of the wall area for a standardized airway with an internal perimeter of 10 mm). Among 5,100 participants with GLI-defined normal spirometry, GOLD identified respiratory impairment in 1,146 (22.5%), including a restrictive pattern in 464 (9.1%), mild COPD in 380 (7.5%), moderate COPD in 302 (5.9%), and severe COPD in none. Overall, the phenotype of GLI-defined normal spirometry included normal adjusted mean values for dyspnea grade (0.8), St. George's Respiratory Questionnaire (15.9), 6-minute-walk distance (1,424 ft [434 m]), bronchodilator reversibility (2.7%), % emphysema (0.9%), % gas trapping (10.7%), and square root of the wall area for a standardized airway with an internal perimeter of 10 mm (3.65 mm); corresponding 95% confidence intervals were similarly normal. These phenotypes remained normal for GLI-defined normal spirometry across GOLD spirometric categories. GLI-defined normal spirometry, even when classified as respiratory impairment by GOLD, included adjusted mean values in the
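The GOLD-versus-GLI disagreement in this record comes down to two classification rules: GOLD flags obstruction with a fixed FEV1/FVC cut-off of 0.70, whereas GLI uses the lower limit of normal (LLN), commonly a z-score below -1.645 (the 5th percentile) from age-adjusted reference equations. A toy sketch (the z-score would normally be computed from GLI reference equations, not supplied directly):

```python
def gold_obstruction(fev1_fvc_ratio: float) -> bool:
    """GOLD fixed-ratio rule for airflow obstruction."""
    return fev1_fvc_ratio < 0.70

def gli_obstruction(fev1_fvc_zscore: float) -> bool:
    """GLI lower-limit-of-normal rule: below the 5th percentile."""
    return fev1_fvc_zscore < -1.645

# An older subject with ratio 0.68 but z-score -1.2: GOLD calls this
# obstruction, GLI calls it normal, because the ratio declines with age
# even in healthy never-smokers.
print(gold_obstruction(0.68), gli_obstruction(-1.2))  # True False
```

This is exactly the misclassification pattern the study quantifies: 22.5% of GLI-normal participants were labeled impaired by GOLD.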

  1. Efficient CEPSTRAL Normalization for Robust Speech Recognition

    National Research Council Canada - National Science Library

    Liu, Fu-Hua; Stern, Richard M; Huang, Xuedong; Acero, Alejandro

    1993-01-01

    .... We compare the performance of these algorithms with the very simple RASTA and cepstral mean normalization procedures, describing the performance of these algorithms in the context of the 1992 DARPA...

  2. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz dan Mean Absolute Deviation

    Directory of Open Access Journals (Sweden)

    R. Agus Sartono

    2009-05-01

Full Text Available Portfolio selection methods introduced by Harry Markowitz (1952) use variance or standard deviation as the measure of risk. Konno and Yamazaki (1991) introduced another method that uses mean absolute deviation as the risk measure instead of variance. Value-at-Risk (VaR) is a relatively new risk measure used by financial institutions. The aim of this research is to compare the mean-variance and mean absolute deviation approaches for two portfolios. We then assess the VaR of the two portfolios using the delta normal method and historical simulation. We use secondary data from the Jakarta Stock Exchange LQ45 index during 2003. We find a weak positive correlation between standard deviation and return in both portfolios. The delta normal VaR based on the mean absolute deviation method is higher than the delta normal VaR based on the mean-variance method. However, based on historical simulation, the difference between the VaRs of the two methods is statistically insignificant. Thus, the standard deviation is a sufficient measure of portfolio risk. Keywords: portfolio optimization, mean-variance, mean absolute deviation, value-at-risk, delta normal method, historical simulation method
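The two VaR estimators compared in this record can be sketched on a toy return series (synthetic normal returns, not LQ45 data; 1.645 is the one-sided 95% normal quantile):

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.02, size=250)  # daily returns (toy)
alpha = 0.05                                            # 95% VaR

# Delta normal: assume normal returns, read VaR off the fitted mean and sd.
z = 1.645
var_delta_normal = -(returns.mean() - z * returns.std(ddof=1))

# Historical simulation: VaR as the empirical 5th percentile of returns.
var_historical = -np.percentile(returns, 100 * alpha)

print(f"delta normal VaR: {var_delta_normal:.4f}")
print(f"historical   VaR: {var_historical:.4f}")
```

When returns really are normal the two estimates agree closely, which is the intuition behind the study's finding that the historical-simulation difference between the two portfolio models is insignificant.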

  3. Lectures on mean curvature flows

    CERN Document Server

    Zhu, Xi-Ping

    2002-01-01

"Mean curvature flow" is a term that is used to describe the evolution of a hypersurface whose normal velocity is given by the mean curvature. In the simplest case of a convex closed curve on the plane, the properties of the mean curvature flow are described by Gage-Hamilton's theorem. This theorem states that under the mean curvature flow, the curve collapses to a point, and if the flow is dilated so that the enclosed area equals π, the curve tends to the unit circle. In this book, the author gives a comprehensive account of fundamental results on singularities and the asymptotic behavior of mean curvature flows in higher dimensions. Among other topics, he considers in detail Huisken's theorem (a generalization of Gage-Hamilton's theorem to higher dimension), evolution of non-convex curves and hypersurfaces, and the classification of singularities of the mean curvature flow. Because of the importance of the mean curvature flow and its numerous applications in differential geometry and partial differential ...
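The evolution law described above is usually written as follows (sign conventions vary between authors):

```latex
% Mean curvature flow of an immersion F(.,t), with \nu the outward unit
% normal and H the mean curvature:
\[
\frac{\partial F}{\partial t} = -H\,\nu .
\]
% For a closed plane curve \gamma with curvature \kappa and inward unit
% normal N, this reduces to the curve-shortening flow that the
% Gage-Hamilton theorem quoted above is about:
\[
\frac{\partial \gamma}{\partial t} = \kappa\, N .
\]
```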

  4. Electron mean-free-path filtering in Dirac material for improved thermoelectric performance.

    Science.gov (United States)

    Liu, Te-Huan; Zhou, Jiawei; Li, Mingda; Ding, Zhiwei; Song, Qichen; Liao, Bolin; Fu, Liang; Chen, Gang

    2018-01-30

Recent advancements in thermoelectric materials have largely benefited from various approaches, including band engineering and defect optimization, among which the nanostructuring technique presents a promising way to improve the thermoelectric figure of merit (zT) by means of reducing the characteristic length of the nanostructure, which relies on the belief that phonons' mean free paths (MFPs) are typically much longer than electrons'. Pushing the nanostructure sizes down to the length scale dictated by electron MFPs, however, has hitherto been overlooked as it inevitably sacrifices electrical conduction. Here we report through ab initio simulations that Dirac material can overcome this limitation. The monotonically decreasing trend of the electron MFP allows filtering of long-MFP electrons that are detrimental to the Seebeck coefficient, leading to a dramatically enhanced power factor. Using SnTe as a material platform, we uncover this MFP filtering effect as arising from its unique nonparabolic Dirac band dispersion. Room-temperature zT can be enhanced by nearly a factor of 3 if one designs nanostructures with grain sizes of ∼10 nm. Our work broadens the scope of the nanostructuring approach for improving the thermoelectric performance, especially for materials with topologically nontrivial electronic dynamics.

  5. Normal stress databases in myocardial perfusion scintigraphy – how many subjects do you need?

    DEFF Research Database (Denmark)

    Trägårdh, Elin; Sjöstrand, Karl; Edenbrandt, Lars

    2012-01-01

Commercial normal stress databases in myocardial perfusion scintigraphy (MPS) commonly consist of 30-40 individuals. The aim of the study was to determine how many subjects are needed. Four normal stress databases were developed using patients who underwent 99mTc MPS: non-corrected images (NC) for male, NC for female, attenuation-corrected images (AC) for male, and AC for female subjects. 126 male and 205 female subjects were included. The normal database was created by alternatingly computing the mean of all normal subjects and normalizing the subjects with respect to this mean, until convergence. Coefficients of variation (CV) were created for increasing numbers of included patients in the four different normal stress databases. Normal stress databases with ...
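The alternating mean/normalize iteration mentioned in this record can be sketched on synthetic data (the count vectors and per-subject scale differences below are invented for illustration, not MPS polar maps):

```python
import numpy as np

rng = np.random.default_rng(0)
# 30 subjects x 64 samples, with a random per-subject intensity scale.
subjects = rng.normal(100.0, 10.0, size=(30, 64)) * \
           rng.uniform(0.5, 1.5, size=(30, 1))

template = subjects.mean(axis=0)
for _ in range(100):
    # Least-squares scalar fit of each subject to the current template.
    scale = (subjects @ template) / (template @ template)
    normalized = subjects / scale[:, None]
    new_template = normalized.mean(axis=0)
    if np.allclose(new_template, template, rtol=1e-10):
        break                      # converged
    template = new_template

# Per-sample coefficient of variation across the normalized subjects,
# the quantity the study tracks as a function of database size.
pixel_cv = normalized.std(axis=0) / normalized.mean(axis=0)
print(f"mean CV across samples: {pixel_cv.mean():.3f}")
```

Repeating this for increasing numbers of included subjects and watching the CV stabilize is, in essence, the study's criterion for "how many subjects do you need".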

  6. Inelastic behavior of cold-formed braced walls under monotonic and cyclic loading

    Science.gov (United States)

    Gerami, Mohsen; Lotfi, Mohsen; Nejat, Roya

    2015-06-01

The ever-increasing need for housing generated the search for new and innovative building methods to increase speed and efficiency and enhance quality. One method is the use of light thin steel profiles as load-bearing elements, with different solutions for interior and exterior cladding. Due to the increase in CFS construction in low-rise residential structures in the modern construction industry, there is an increased demand for inelastic performance analysis of CFS walls. In this study, the nonlinear behavior of cold-formed steel frames with various bracing arrangements including cross, chevron and k-shape straps was evaluated under cyclic and monotonic loading using nonlinear finite element analysis methods. In total, 68 frames with different bracing arrangements and different dimension ratios were studied. Also, seismic parameters including the resistance reduction factor, ductility and the force reduction factor due to ductility were evaluated for all samples. On the other hand, the seismic response modification factor was calculated for these systems. It was concluded that the highest response modification factor, with a value of 3.14, would be obtained for walls with bilateral cross bracing systems. In all samples, on increasing the distance of straps from each other, shear strength increased, and the shear strength of the wall with a bilateral bracing system was 60% greater than that with a lateral bracing system.

  7. Quantifying Normal Craniofacial Form and Baseline Craniofacial Asymmetry in the Pediatric Population.

    Science.gov (United States)

    Cho, Min-Jeong; Hallac, Rami R; Ramesh, Jananie; Seaward, James R; Hermann, Nuno V; Darvann, Tron A; Lipira, Angelo; Kane, Alex A

    2018-03-01

    Restoring craniofacial symmetry is an important objective in the treatment of many craniofacial conditions. Normal form has been measured using anthropometry, cephalometry, and photography, yet all of these modalities have drawbacks. In this study, the authors define normal pediatric craniofacial form and craniofacial asymmetry using stereophotogrammetric images, which capture a densely sampled set of points on the form. After institutional review board approval, normal, healthy children (n = 533) with no known craniofacial abnormalities were recruited at well-child visits to undergo full head stereophotogrammetric imaging. The children's ages ranged from 0 to 18 years. A symmetric three-dimensional template was registered and scaled to each individual scan using 25 manually placed landmarks. The template was deformed to each subject's three-dimensional scan using a thin-plate spline algorithm and closest point matching. Age-based normal facial models were derived. Mean facial asymmetry and statistical characteristics of the population were calculated. The mean head asymmetry across all pediatric subjects was 1.5 ± 0.5 mm (range, 0.46 to 4.78 mm), and the mean facial asymmetry was 1.2 ± 0.6 mm (range, 0.4 to 5.4 mm). There were no significant differences in the mean head or facial asymmetry with age, sex, or race. Understanding the "normal" form and baseline distribution of asymmetry is an important anthropomorphic foundation. The authors present a method to quantify normal craniofacial form and baseline asymmetry in a large pediatric sample. The authors found that the normal pediatric craniofacial form is asymmetric, and does not change in magnitude with age, sex, or race.
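The asymmetry measure in this record has a simple geometric core: with a symmetric template registered to the scan, every left-side point has a known right-side partner, and asymmetry is the distance between a point and the reflection of its partner across the midsagittal plane. A toy sketch (synthetic point cloud and a 0.7 mm perturbation chosen to land near the reported ~1.2 mm mean; not study data):

```python
import numpy as np

rng = np.random.default_rng(3)
# 500 synthetic left-half-face points, in mm, all with x < 0.
left = rng.normal(size=(500, 3)) * np.array([30.0, 40.0, 50.0])
left[:, 0] = -np.abs(left[:, 0]) - 1.0

mirror = np.array([-1.0, 1.0, 1.0])            # reflect across x = 0 plane
# Right-side partners: mirrored left points plus a small random perturbation
# standing in for real facial asymmetry.
right = left * mirror + rng.normal(0.0, 0.7, size=(500, 3))

asymmetry = np.linalg.norm(left * mirror - right, axis=1)  # per-pair, mm
print(f"mean facial asymmetry: {asymmetry.mean():.2f} mm")
```

The study's dense template correspondence plays the role of the explicit left/right pairing here.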

  8. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches as it involves a novel non-monotone line search procedure, which is based on the use of standard penalty methods as the merit function used for line search. The crucial novel concept is the discretisation of the penalty parameter used over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented with capabilities for large scale application. Case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, and with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.
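The core idea of a non-monotone line search can be shown in a few lines. This is a Grippo-style non-monotone Armijo rule on a toy quadratic, a simpler relative of the scheme described above (which uses penalty merit functions with a memory list per order of magnitude of the penalty parameter), not a reimplementation of it:

```python
import numpy as np

def f(x):
    return 0.5 * x @ x          # toy objective

def grad(x):
    return x

x = np.array([3.0, -4.0])
history = [f(x)]                # memory of recent objective values
M, c, tau = 5, 1e-4, 0.5        # memory length, Armijo constant, backtrack factor

for _ in range(50):
    d = -grad(x)                # steepest-descent direction
    ref = max(history[-M:])     # non-monotone reference: max over the memory
    alpha = 1.0
    # Accept a step that improves on the *worst recent* value, not the latest;
    # this is what lets the iterates climb out of shallow local basins.
    while f(x + alpha * d) > ref + c * alpha * (grad(x) @ d):
        alpha *= tau
    x = x + alpha * d
    history.append(f(x))

print(f"final objective: {f(x):.2e}")
```

Comparing against the maximum of recent merit values instead of the current one is the mechanism the abstract credits for escaping local minima.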

  9. Normality and naturalness: a comparison of the meanings of concepts used within veterinary medicine and human medicine.

    Science.gov (United States)

    Lerner, Henrik; Hofmann, Bjørn

    2011-12-01

    This article analyses the different connotations of "normality" and "being natural," bringing together the theoretical discussion from both human medicine and veterinary medicine. We show how the interpretations of the concepts in the different areas could be mutually fruitful. It appears that the conceptions of "natural" are more elaborate in veterinary medicine, and can be of value to human medicine. In particular they can nuance and correct conceptions of nature in human medicine that may be too idealistic. Correspondingly, the wide ranging conceptions of "normal" in human medicine may enrich conceptions in veterinary medicine, where the discussions seem to be sparse. We do not argue that conceptions from veterinary medicine should be used in human medicine and vice versa, but only that it could be done and that it may well be fruitful. Moreover, there are overlaps between some notions of normal and natural, and further conceptual analysis on this overlap is needed.

  10. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since these involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
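For the exponential density λe^(-λx) on [0, ∞), differentiating under the integral sign gives the mean and variance with no integration by parts. This is a standard version of the trick (the note itself may present it differently):

```latex
% Normalization identity for the exponential distribution:
\[
\int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}.
\]
% Differentiate both sides with respect to \lambda:
\[
\int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda^{2}}
\quad\Longrightarrow\quad
E[X] = \lambda \int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda}.
\]
% Differentiating once more gives \int_0^\infty x^2 e^{-\lambda x}\,dx = 2/\lambda^3,
% so E[X^2] = 2/\lambda^2 and
\[
\operatorname{Var}(X) = \frac{2}{\lambda^{2}} - \frac{1}{\lambda^{2}}
                      = \frac{1}{\lambda^{2}}.
\]
```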

  11. Improving the scaling normalization for high-density oligonucleotide GeneChip expression microarrays

    Directory of Open Access Journals (Sweden)

    Lu Chao

    2004-07-01

Full Text Available Abstract Background Normalization is an important step in microarray data analysis to minimize biological and technical variations, and choosing a suitable approach can be critical. The default method for GeneChip expression microarrays uses a constant factor, the scaling factor (SF, for every gene on an array. The SF is obtained from a trimmed average signal of the array after excluding the 2% of the probe sets with the highest and the lowest values. Results Among the 76 U34A GeneChip experiments, the total signals on each array varied by 25.8% in terms of the coefficient of variation, although all microarrays were hybridized with the same amount of biotin-labeled cRNA. The 2% of the probe sets with the highest signals that were normally excluded from SF calculation accounted for 34% to 54% of the total signals (40.7% ± 4.4%, mean ± sd. In comparison with normalization factors obtained from the median signal or from the mean of the log transformed signal, SF showed the greatest variation, while the normalization factors obtained from log transformed signals showed the least variation. Conclusions Eliminating 40% of the signal data during SF calculation failed to show any benefit. Normalization factors obtained with log transformed signals performed the best. Thus, it is suggested to use the mean of the logarithm transformed data for normalization, rather than the arithmetic mean of signals, in GeneChip gene expression microarrays.
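The two normalization factors compared in this record are easy to contrast on synthetic probe-set signals (the log-normal signal model and the target intensity of 500 are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
signals = rng.lognormal(mean=5.0, sigma=1.2, size=12_000)  # one array's signals

def trimmed_mean(x, frac=0.02):
    """Mean after excluding the lowest and highest 2% of values (SF-style)."""
    lo, hi = np.quantile(x, [frac, 1.0 - frac])
    return x[(x >= lo) & (x <= hi)].mean()

target = 500.0
sf = target / trimmed_mean(signals)                     # GeneChip scaling factor
log_factor = np.log2(target) - np.log2(signals).mean()  # additive log2 shift

print(f"scaling factor: {sf:.3f}")
print(f"log2 shift:     {log_factor:.3f}")
```

Multiplying raw signals by `sf` matches arrays on the trimmed mean; adding `log_factor` to log2 signals matches them on the mean of the log-transformed data, the approach the study finds more stable.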

  12. Sufficient Condition for Monotonicity in Constructing the Distribution Function With Bernoulli Scheme

    Directory of Open Access Journals (Sweden)

    Vedenyapin Aleksandr Dmitrievich

    2015-11-01

Full Text Available This paper constructs a distribution function using the Bernoulli scheme, and also corrects some mistakes made in the article [2]: namely, the function built in [2] need not be monotone, and some formulas need to be adjusted. The idea of the construction, as in [2], is based on the Cox-Ross-Rubinstein "binary market" model. The essence of the model is to divide time into N steps and to assume that the price of an asset at each step can move either up by a certain value with probability p, or down by a certain value with probability q = 1 - p. Prices at step N can take only a finite number of values. In the Cox-Ross-Rubinstein model, "success" or "failure" was a price change by some fixed value. Here, as a "success" or "failure" at every step, we consider whether the change of the index value belongs to the interval [r, S] or to the interval [I, r). A function P(r) is then introduced, which at any step gives the probability of "success". The maximum increase of the index value over the whole period [T, 2T] is nS, and the maximum possible reduction is nI. Now let x ∈ [nI, nS]; this segment covers every possible total variation that can be obtained at the end of the period [T, 2T]. The inequality k ≥ (x - nI)/(S - I) then gives the minimum number of successes needed for the total change to lie in [x, nS] if there were n - k reductions of the index value by I. A function r(x, k_min) is introduced, defined on the interval (nI, nS], which guarantees that the total index change lies in [x, nS] if the success interval is [r(x, k_min), S] and the number of successes satisfies our inequality. The probability of k "successes" and n - k "failures" is calculated according to the Bernoulli formula, where the probability of "success" is determined by the function P(r), and r is determined
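The Bernoulli-scheme tail probability at the heart of this construction is a binomial sum; a sketch with illustrative numbers (n, p, and the per-step moves S, I, and target x are invented, and a constant p stands in for the paper's P(r)):

```python
from math import comb, ceil

def tail_probability(n: int, p: float, k_min: int) -> float:
    """P(#successes >= k_min) under the binomial(n, p) law (Bernoulli formula)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

n, p = 20, 0.55
S, I, x = 1.0, -1.0, 4.0               # per-step up/down moves and target change
k_min = ceil((x - n * I) / (S - I))    # minimum successes, k >= (x - nI)/(S - I)

print(f"k_min = {k_min}")
print(f"P(total change in [x, nS]) = {tail_probability(n, p, k_min):.4f}")
```

Evaluating this tail probability as x sweeps over [nI, nS] is what yields the (complement of the) distribution function being constructed.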

  13. Remarks on the boundary curve of a constant mean curvature topological disc

    DEFF Research Database (Denmark)

    Brander, David; Lopéz, Rafael

    2017-01-01

    We discuss some consequences of the existence of the holomorphic quadratic Hopf differential on a conformally immersed constant mean curvature topological disc with analytic boundary. In particular, we derive a formula for the mean curvature as a weighted average of the normal curvature of the boundary curve.

  14. Response of skirted suction caissons to monotonic lateral loading in saturated medium sand

    Science.gov (United States)

    Li, Da-yong; Zhang, Yu-kun; Feng, Ling-yun; Guo, Yan-xue

    2014-08-01

    Monotonic lateral load model tests were carried out on steel skirted suction caissons embedded in saturated medium sand to study the bearing capacity. A three-dimensional continuum finite element model was developed with Z_SOIL software and calibrated against the experimental results. Soil deformation and earth pressures on skirted caissons were investigated by using the finite element model to extend the model tests. The results show that the skirt structure can significantly increase the lateral capacity and limit the deflection compared with regular suction caissons without skirts at the same load level, which makes skirted caissons especially suitable for offshore wind turbines. In addition, appropriate determination of the rotation center plays a crucial role in calculating the lateral capacity by the analytical method. It was also found that the rotation center is related to the dimensions of skirted suction caissons and to the loading process: the rotation center moves upwards with increasing skirt width and length, moves downwards with increasing load, and remains constant once all the sand along the caisson's wall yields. The behavior is complex enough that the rotation center's position cannot simply be fixed at a specified fraction of the caisson length, as is commonly done for regular suction caissons.

  15. Comparison of sensory-specific satiety between normal weight and overweight children.

    Science.gov (United States)

    Rischel, Helene Egebjerg; Nielsen, Louise Aas; Gamborg, Michael; Møller, Per; Holm, Jens-Christian

    2016-12-01

    Sensory properties of some foods may be of importance to energy consumption and thus the development and maintenance of childhood obesity. This study compares selected food-related qualities in overweight and normal weight children. Ninety-two participants were included; 55 were overweight, with a mean age of 11.6 years (range 6-18 years) and a mean BMI z-score of 2.71 (range 1.29-4.60). The 37 normal weight children had a mean age of 13.0 years (range 6-19 years) and a mean BMI z-score of 0.16 (range -1.71 to 1.24). All children completed a half-hour-long meal test alternating between consumption of foods and answering of questionnaires. Compared to the normal weight children, the overweight children displayed lower self-reported intake paces (χ2(2) = 6.3, p = 0.04), greater changes in liking for mozzarella (F(1,63) = 9.55, p = 0.003) and pretzels (F(1,87) = 5.27, p = 0.024), and a decline in wanting for "something fat", for which the normal weight children displayed an increase (F(1,83) = 4.10, p = 0.046). No differences were found for sensory-specific satiety, wanting for the main food yoghurt, hunger, or satiety. In conclusion, overweight children did not differ from normal weight children in terms of sensory-specific satiety, hunger, or satiety. However, overweight children had lower intake paces and appeared to differ from normal weight children regarding foods with a fatty taste. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Distributive justice and cognitive enhancement in lower, normal intelligence.

    Science.gov (United States)

    Dunlop, Mikael; Savulescu, Julian

    2014-01-01

    There exists a significant disparity within society between individuals in terms of intelligence. While intelligence varies naturally throughout society, the extent to which this impacts on the life opportunities it affords to each individual is greatly undervalued. Intelligence appears to have a prominent effect over a broad range of social and economic life outcomes. Many key determinants of well-being correlate highly with the results of IQ tests, and other measures of intelligence, and an IQ of 75 is generally accepted as the most important threshold in modern life. The ability to enhance our cognitive capacities offers an exciting opportunity to correct disabling natural variation and inequality in intelligence. Pharmaceutical cognitive enhancers, such as modafinil and methylphenidate, have been shown to have the capacity to enhance cognition in normal, healthy individuals. Perhaps of most relevance is the presence of an 'inverted U effect' for most pharmaceutical cognitive enhancers, whereby the degree of enhancement increases as intelligence levels deviate further below the mean. Although enhancement, including cognitive enhancement, has been much debated recently, we argue that there are egalitarian reasons to enhance individuals with low but normal intelligence. Under egalitarianism, cognitive enhancement has the potential to reduce opportunity inequality and contribute to relative income and welfare equality in the lower, normal intelligence subgroup. Cognitive enhancement use is justifiable under prioritarianism through various means of distribution; selective access to the lower, normal intelligence subgroup, universal access, or paradoxically through access primarily to the average and above average intelligence subgroups. Similarly, an aggregate increase in social well-being is achieved through similar means of distribution under utilitarianism. 
In addition, the use of cognitive enhancement within the lower, normal intelligence subgroup negates, or at

  17. Monotonic and cyclic bond behavior of confined concrete using NiTiNb SMA wires

    International Nuclear Information System (INIS)

    Choi, Eunsoo; Chung, Young-Soo; Kim, Yeon-Wook; Kim, Joo-Woo

    2011-01-01

    This study conducts bond tests of reinforced concrete confined by shape memory alloy (SMA) wires, which provide active and passive confinement of the concrete. The study uses a NiTiNb SMA, which usually shows a wide temperature hysteresis; this is an advantage for applications of the shape memory effect. The aims of this study are to investigate the behavior of SMA wire under residual stress and the performance of SMA wire jackets in improving bond behavior through monotonic-loading tests. This study also conducts cyclic bond tests and analyzes the cyclic bond behavior. The use of SMA wire jackets transfers the bond failure from splitting to pull-out mode and satisfactorily increases bond strength and ductile behavior. The active confinement provided by the SMA plays the major role in providing external pressure on the concrete, because the developed passive confinement is much smaller than the active confinement. Under cyclic loading, slip and circumferential strain are recovered more at larger bond stress. This recovery of slip and circumferential strain is mainly due to the external pressure of the SMA wires, since cracked concrete cannot provide any elastic recovery.

  18. Multiscale Hybrid Nonlocal Means Filtering Using Modified Similarity Measure

    Directory of Open Access Journals (Sweden)

    Zahid Hussain Shamsi

    2015-01-01

    Full Text Available A new multiscale implementation of nonlocal means filtering (MHNLM for image denoising is proposed. The proposed algorithm also introduces a modification of the similarity measure for patch comparison. Assuming the patch as an oriented surface, the notion of a normal vectors patch is introduced. The inner product of these normal vectors patches is defined and then used in the weighted Euclidean distance of intensity patches as the weight factor. The algorithm involves two steps: the first step is a multiscale implementation of an accelerated nonlocal means filtering in the discrete stationary wavelet domain to obtain a refined version of the noisy patches for later comparison. The next step is to apply the proposed modification of standard nonlocal means filtering to the noisy image using the reference patches obtained in the first step. These refined patches contain less noise, and consequently the computation of normal vectors and partial derivatives is more precise. Experimental results show equivalent or better performance of the proposed algorithm compared to various state-of-the-art algorithms.
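The modified similarity measure described above (inner products of normal-vector patches weighting a Euclidean distance between intensity patches) can be sketched as below. The exact clipping and averaging choices here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def normal_vectors(patch):
    """Treat the patch as a surface z = I(x, y) and return the unit normal
    (-dz/dx, -dz/dy, 1)/|.| at each pixel (a "normal vectors patch")."""
    gy, gx = np.gradient(patch.astype(float))
    n = np.stack([-gx, -gy, np.ones_like(gx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def modified_patch_distance(p, q, eps=1e-12):
    """Weighted Euclidean distance between intensity patches, using the
    pixelwise inner product of the normal-vector patches as the weight
    factor (clipped to be non-negative)."""
    w = np.clip(np.sum(normal_vectors(p) * normal_vectors(q), axis=-1),
                0.0, None)
    d = p.astype(float) - q.astype(float)
    return np.sum(w * d ** 2) / (w.sum() + eps)

def nlm_weight(p, q, h):
    """Standard nonlocal-means weight exp(-d / h**2), with the modified
    patch distance in place of the plain Euclidean one."""
    return np.exp(-modified_patch_distance(p, q) / h ** 2)
```

Identical patches receive weight 1; patches with the same geometry but offset intensities are down-weighted smoothly, which is the role the refined (less noisy) reference patches play in the second step.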

  19. Discolored Semen: What Does It Mean?

    Science.gov (United States)

    ... it mean? Should I be concerned about discolored semen? Answers from Todd B. Nippoldt, M.D. Semen is normally a whitish-gray color. It's usually ... within 30 minutes. Changes in the appearance of semen might be temporary and not a health concern. ...

  20. The Normal Value of Tibial Tubercle Trochlear Groove Distance in Patients With Normal Knee Examinations Using MRI

    Directory of Open Access Journals (Sweden)

    Mohammad Sobhanardekani

    2017-10-01

    Full Text Available Patellar instability is a common multifactorial knee pathology with a high recurrence rate; the symptoms persist and ultimately predispose the patient to chondromalacia and osteoarthritis. The tibial tuberosity-trochlear groove distance (TTTG) is very important in the assessment of patellofemoral joint instability. The purpose of this study was to report the normal value of TTTG in males and females in different age groups and to assess the reliability of MRI in measuring TTTG. All patients presenting with knee pain and normal examinations of the knee joint, with a normal MRI report, referred to Shahid Sadoughi hospital of Yazd, Iran, from April 2014 to September 2014, were included in the study. MR images were read once by two radiologists and a second time by one radiologist. The mean value of TTTG was reported for males and females and in three age groups, and intra- and inter-observer reliability was calculated. A total of 98 patients (68 male and 30 female) were eligible for evaluation during the 6 months. Mean TTTG was 10.9±2.5 mm overall: 10.8±2.8 mm in males and 11.3±2.3 mm in females (P>0.05). Mean TTTG in males ≤30 years, 30-50 years, and ≥51 years old was 10.8±2.6 mm, 10.8±2.7 mm, and 10.8±2.6 mm, respectively; in females ≤30 years, 31-50 years, and ≥51 years old it was 12.1±3.4 mm, 11.4±1.9 mm, and 10.5±1.7 mm, respectively (95% CI). The coefficient of variation was <10% for both intra- and inter-observer analysis. The results of the present study showed no significant difference in TTTG value between males and females in different age groups. In addition, the study demonstrated that MRI is a reliable method for assessment of TTTG and identified a normal value for TTTG of 10.9±2.5 mm.

  1. The interblink interval in normal and dry eye subjects

    Directory of Open Access Journals (Sweden)

    Johnston PR

    2013-02-01

    Full Text Available Patrick R Johnston,1 John Rodriguez,1 Keith J Lane,1 George Ousler,1 Mark B Abelson1,2 1Ora, Inc, Andover, MA, USA; 2Schepens Eye Research Institute and Harvard Medical School, Boston, MA, USA. Purpose: Our aim was to extend the concept of blink patterns from average interblink interval (IBI) to other aspects of the distribution of IBI. We hypothesized that this more comprehensive approach would better discriminate between normal and dry eye subjects. Methods: Blinks were captured over 10 minutes for ten normal and ten dry eye subjects while viewing a standardized televised documentary. Fifty-five blinks were analyzed for each of the 20 subjects. Means, standard deviations, and autocorrelation coefficients were calculated utilizing a single random effects model fit to all data points, and a diagnostic model was subsequently fit to predict the probability of a subject having dry eye based on these parameters. Results: Mean IBI was 5.97 seconds for normal versus 2.56 seconds for dry eye subjects (ratio: 2.33, P = 0.004). IBI variability was 1.56 times higher in normal subjects (P < 0.001), and the autocorrelation was 1.79 times higher in normal subjects (P = 0.044). With regard to the diagnostic power of these measures, mean IBI was the best dry eye versus normal classifier using receiver operating characteristics (0.85 area under curve [AUC]), followed by the standard deviation (0.75 AUC), and lastly, the autocorrelation (0.63 AUC). All three predictors combined had an AUC of 0.89. Based on this analysis, cutoffs of ≤3.05 seconds for median IBI and ≤0.73 for the coefficient of variation were chosen to classify dry eye subjects. Conclusion: (1) IBI was significantly shorter for dry eye patients performing a visual task compared to normals; (2) there was greater variability of interblink intervals in normal subjects; and (3) these parameters were useful as diagnostic predictors of dry eye disease. The results of this pilot study merit investigation of IBI
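The diagnostic use of mean IBI via receiver operating characteristics can be illustrated with a small sketch. The helper name and the sample values below are hypothetical, not the study's data; the AUC is computed through its Mann-Whitney interpretation (the probability that a randomly chosen dry-eye value is lower than a randomly chosen normal value, with ties counting one half):

```python
def auc_from_scores(dry, normal):
    """Area under the ROC curve for a "shorter IBI suggests dry eye"
    classifier: the fraction of (dry, normal) pairs in which the dry-eye
    value is the lower one (ties count 1/2)."""
    pairs = [(d, n) for d in dry for n in normal]
    wins = sum(1.0 if d < n else 0.5 if d == n else 0.0 for d, n in pairs)
    return wins / len(pairs)
```

With perfectly separated groups the AUC is 1.0; with identical score distributions it falls to 0.5, the chance level.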

  2. Search for scalar-tensor gravity theories with a non-monotonic time evolution of the speed-up factor

    Energy Technology Data Exchange (ETDEWEB)

    Navarro, A [Dept Fisica, Universidad de Murcia, E30071-Murcia (Spain); Serna, A [Dept Fisica, Computacion y Comunicaciones, Universidad Miguel Hernandez, E03202-Elche (Spain); Alimi, J-M [Lab. de l' Univers et de ses Theories (LUTH, CNRS FRE2462), Observatoire de Paris-Meudon, F92195-Meudon (France)

    2002-08-21

    We present a method to detect, in the framework of scalar-tensor gravity theories, the existence of stationary points in the time evolution of the speed-up factor. An attractive aspect of this method is that, once the particular scalar-tensor theory has been specified, the stationary points are found through a simple algebraic equation which does not contain any integration. By applying this method to the three classes of scalar-tensor theories defined by Barrow and Parsons, we have found several new cosmological models with a non-monotonic evolution of the speed-up factor. The physical interest of these models is that, as previously shown by Serna and Alimi, they predict the observed primordial abundance of light elements for a very wide range of baryon density. These models are then consistent with recent CMB and Lyman-α estimates of the baryon content of the universe.

  3. Quorum-Sensing Synchronization of Synthetic Toggle Switches: A Design Based on Monotone Dynamical Systems Theory.

    Directory of Open Access Journals (Sweden)

    Evgeni V Nikolaev

    2016-04-01

    Full Text Available Synthetic constructs in biotechnology, biocomputing, and modern gene therapy interventions are often based on plasmids or transfected circuits which implement some form of "on-off" switch. For example, the expression of a protein used for therapeutic purposes might be triggered by the recognition of a specific combination of inducers (e.g., antigens), and memory of this event should be maintained across a cell population until a specific stimulus commands a coordinated shut-off. The robustness of such a design is hampered by molecular ("intrinsic") or environmental ("extrinsic") noise, which may lead to spontaneous changes of state in a subset of the population and is reflected in the bimodality of protein expression, as measured for example using flow cytometry. In this context, a "majority-vote" correction circuit, which brings deviant cells back into the required state, is highly desirable, and quorum-sensing has been suggested as a way for cells to broadcast their states to the population as a whole so as to facilitate consensus. In this paper, we propose what we believe is the first such design with mathematically guaranteed properties of stability and auto-correction under certain conditions. Our approach is guided by concepts and theory from the field of "monotone" dynamical systems developed by M. Hirsch, H. Smith, and others. We benchmark our design by comparing it to an existing design which has been the subject of experimental and theoretical studies, illustrating its superiority in stability and self-correction of synchronization errors. Our stability analysis, based on dynamical systems theory, guarantees global convergence to steady states, ruling out unpredictable ("chaotic") behaviors and even sustained oscillations in the limit of convergence. These results are valid regardless of parameter values and are based only on the wiring diagram. The theory is complemented by extensive computational bifurcation analysis.

  4. Forward treatment planning techniques to reduce the normalization effect in Gamma Knife radiosurgery.

    Science.gov (United States)

    Cheng, Hao-Wen; Lo, Wei-Lun; Kuo, Chun-Yuan; Su, Yu-Kai; Tsai, Jo-Ting; Lin, Jia-Wei; Wang, Yu-Jen; Pan, David Hung-Chi

    2017-11-01

    In Gamma Knife forward treatment planning, a normalization effect may be observed when multiple shots are used for treating large lesions. This effect can reduce the proportion of coverage of high-value isodose lines within targets. The aim of this study was to evaluate the performance of forward treatment planning techniques using the Leksell Gamma Knife for reducing the normalization effect. We adjusted the shot positions and weightings to optimize the dose distribution and reduce the overlap of high-value isodose lines from each shot, thereby mitigating the normalization effect during treatment planning. The new collimation system, Leksell Gamma Knife Perfexion, which contains eight movable sectors, provides an additional means of reducing the normalization effect by using composite shots. We propose different techniques in forward treatment planning that can reduce the normalization effect. Reducing the normalization effect increases the coverage proportion of higher isodose lines within targets, making the high-dose region within targets more uniform and increasing the mean dose to targets. Because of the increase in the mean dose to the target after reducing the normalization effect, we can set the prescribed marginal dose at a higher isodose level and reduce the maximum dose, thereby lowering the risk of complications. © 2017 Shuang Ho Hospital-Taipei Medical University. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  5. Bearing Capacity of Foundations subjected to Impact Loads

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Jakobsen, Kim Parsberg

    1996-01-01

    In the design process for foundations, the bearing capacity calculations are normally restricted to monotonic loads. Even in cases where the impact load is of significance, the dynamic aspects are neglected by use of a traditional deterministic ultimate limit state analysis. Nevertheless it is com…

  6. "Ser diferente é normal?"/"Being different: is it normal?"

    Directory of Open Access Journals (Sweden)

    Viviane Veras

    2007-01-01

    Full Text Available The question in the title of this paper takes up the slogan "Ser diferente é normal" ("Being different is normal"), part of a campaign created for a non-governmental organization that supports people with Down syndrome. The objective of the campaign is the social inclusion of people with disabilities, and the first step was to propose the inclusion of a group of "differents" in the so-called normal group. In the video launching the campaign, the different, identified as normal, is shown by means of examples: a black man with a black-power (afro) haircut, a skinhead, a tattooed body, an over-athletic female body, a hippie family, and a girl with Down syndrome. The sight of the teenager dancing somewhat lessens the imaginary effect that goes beyond the syndrome, since only her body, with its slanted little eyes, stands out, and no cognitive issues are raised. My purpose is to reflect on the paradoxical status of the example as it operates in this video: if, by definition, an example in fact shows its belonging to a class, one may conclude that it is precisely because it is exemplary that it stands outside that class, at the exact moment in which it exhibits and defines it.

  7. Time course of dichoptic masking in normals and suppression in amblyopes.

    Science.gov (United States)

    Zhou, Jiawei; McNeal, Suzanne; Babu, Raiju J; Baker, Daniel H; Bobier, William R; Hess, Robert F

    2014-04-17

    To better understand the relationship between dichoptic masking in normal vision and suppression in amblyopia we address three questions: First, what is the time course of dichoptic masking in normals and amblyopes? Second, is interocular suppression low-pass or band-pass in its spatial dependence? And third, in the above two regards, is dichoptic masking in normals different from amblyopic suppression? We measured the dependence of dichoptic masking in normal controls and amblyopes on the temporal duration of presentation under three conditions: monocular (the nontested eye, i.e., the dominant eye of normals or the nonamblyopic eye of amblyopes, being patched), dichoptic-luminance (the nontested eye seeing a mean luminance, i.e., a DC component), and dichoptic-contrast (the nontested eye seeing high-contrast visual noise). The subject had to detect a letter in the other eye, the contrast of which was varied. We found that threshold elevation relative to the patched condition occurred in both normals and amblyopes when the nontested eye saw either 1/f or band-pass filtered noise, but not just mean luminance (i.e., there was no masking from the DC component that corresponds to a channel responsive to a spatial frequency of 0 cyc/deg); longer presentation of the target (corresponding to lower temporal frequencies) produced greater threshold elevation. Dichoptic masking exhibits similar properties in both subject groups, being low-pass temporally and band-pass spatially, so that masking was greatest at the longest presentation durations and was not greatly affected by mean luminance in the nontested eye. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  8. Pellet cladding interaction (PCI) fuel duty during normal operation of ASEA-ATOM BWRs

    International Nuclear Information System (INIS)

    Vaernild, O.; Olsson, S.

    1983-01-01

    Local power changes may under special conditions cause PCI fuel failures in a power reactor. By restricting the local power increase rate in certain situations it is possible to prevent PCI failures. Fine motion control rod drives, a large operating range of the main recirculation pumps, and an advanced burnable absorber design have minimized the impact of the PCI restrictions. With current ICFM schemes the power of an assembly gradually increases during the first cycle of operation, owing to the burnup of the gadolinia. After this the power decreases essentially monotonically during the remaining life of the assembly. Some assemblies are for short burnup intervals operated at very low power in control cells. The control rods in these cells may however be withdrawn without restrictions leading to energy production losses. Base load operation would in the normal case lead to very minor PCI loads on the fuel regardless of any PCI related operating restrictions. At the return to full power after a short shutdown, or in connection with load follow operation, the xenon transient may cause PCI loads on the fuel. To avoid this, a few hours' hold time before going back to full power is recommended. (author)

  9. Pellet-cladding interaction (PCI) fuel duty during normal operation of ASEA-ATOM BWRs

    International Nuclear Information System (INIS)

    Vaernild, O.; Olsson, S.

    1985-01-01

    Local power changes may, under special conditions, cause PCI fuel failures in a power reactor. By restricting the local power increase rate in certain situations it is possible to prevent PCI failures. Fine motion control rod drives, a large operating range of the main recirculation pumps and an advanced burnable absorber design have minimized the impact of the PCI restrictions. With current ICFM schemes the power of an assembly gradually increases during the first cycle of operation, owing to the burnup of the gadolinia. After this the power decreases essentially monotonically during the remaining life of the assembly. Some assemblies are for short burnup intervals operated at very low power in control cells. The control rods in these cells may, however, be withdrawn without restrictions leading to energy production losses. Base load operation would in the normal case lead to very minor PCI loads on the fuel regardless of any PCI-related operating restrictions. At the return to full power after a short shutdown or in connection with load follow operation, the xenon transient may cause PCI loads on the fuel. To avoid this, a few hours' hold-time before going back to full power is recommended. (author)

  10. Does the dose-solubility ratio affect the mean dissolution time of drugs?

    Science.gov (United States)

    Lánský, P; Weiss, M

    1999-09-01

    To present a new model for describing drug dissolution, and on the basis of the new model to characterize the dissolution profile by the distribution function of the random dissolution time of a drug molecule, which generalizes the classical first-order model. Instead of assuming a constant fractional dissolution rate, as in the classical model, the fractional dissolution rate is taken to be a decreasing function of the dissolved amount, controlled by the dose-solubility ratio. The differential equation derived from this assumption is solved and the distribution measures (half-dissolution time, mean dissolution time, relative dispersion of the dissolution time, dissolution time density, and fractional dissolution rate) are calculated. Finally, instead of a monotonically decreasing fractional dissolution rate, a generalization resulting in zero dissolution rate at the time origin is introduced. The behavior of the model is divided into two regions defined by q, the ratio of the dose to the solubility level: q < 1, and q > 1 (saturation of the solution, saturation time). The singular case q = 1 is also treated; in this situation the mean as well as the relative dispersion of the dissolution time increase to infinity. The model was successfully fitted to data (1). This empirical model is descriptive, without detailed physical reasoning behind its derivation. According to the model, the mean dissolution time is affected by the dose-solubility ratio. Although this prediction appears to be in accordance with preliminary application, further validation based on more suitable experimental data is required.
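The record does not reproduce the model's differential equation. As a purely hypothetical illustration of a fractional dissolution rate that decreases with the dissolved amount, one can integrate a simple candidate form numerically and recover the classical first-order mean dissolution time 1/k when q = 0:

```python
def mean_dissolution_time(k, q, dt=1e-3, t_max=200.0):
    """Numerically integrate a HYPOTHETICAL dissolution model in which the
    fractional dissolution rate k*(1 - q*x) decreases with the dissolved
    fraction x, scaled by the dose-solubility ratio q (q < 1 assumed, so
    dissolution completes):

        dx/dt = k * (1 - q*x) * (1 - x),   x(0) = 0

    The mean dissolution time is MDT = integral of (1 - x(t)) dt; for q = 0
    this is the classical first-order model with MDT = 1/k.
    """
    x, t, mdt = 0.0, 0.0, 0.0
    while t < t_max:
        mdt += (1.0 - x) * dt          # accumulate undissolved fraction
        x += k * (1.0 - q * x) * (1.0 - x) * dt  # forward Euler step
        t += dt
    return mdt
```

Raising q toward 1 slows the fractional rate as dissolution proceeds, so the computed mean dissolution time increases, in line with the abstract's claim that the dose-solubility ratio affects the mean dissolution time.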

  11. PHARMACOKINETIC VARIATIONS OF OFLOXACIN IN NORMAL AND FEBRILE RABBITS

    Directory of Open Access Journals (Sweden)

    M. AHMAD, H. RAZA, G. MURTAZA AND N. AKHTAR

    2008-12-01

    Full Text Available The influence of experimentally Escherichia coli-induced fever (EEIF) on the pharmacokinetics of ofloxacin was evaluated. Ofloxacin was administered at 20 mg.kg-1 body weight intravenously to a group of eight healthy rabbits, and these results were compared to values in the same eight rabbits with EEIF. Pharmacokinetic parameters of ofloxacin in normal and febrile rabbits were determined by using a two-compartment open kinetic model. Peak plasma level (Cmax) and area under the plasma concentration-time curve (AUC0-α) in normal and febrile rabbits did not differ (P>0.05). However, the area under the first moment of the plasma concentration-time curve (AUMC0-α) in febrile rabbits was significantly (P<0.05) higher than that in normal rabbits. Mean values for the elimination rate constant (Ke), elimination half-life (t1/2β), and apparent volume of distribution (Vd) were significantly (P<0.05) lower in febrile rabbits compared to normal rabbits, while mean residence time (MRT) and total body clearance (Cl) of ofloxacin did not show any significant difference between the normal and febrile rabbits. The clinical significance of the above results can be related to the changes in the volume of distribution and elimination half-life that illustrate an altered steady state in the febrile condition; hence, an adjustment of the dosage regimen in EEIF is required.

  12. Assessment of age-related bone loss in normal South African ...

    African Journals Online (AJOL)

    The range encompasses the mean and 2 SD to either side of the mean per decade. Subjects are regarded as being at high risk for fracture if the BMD falls below 2 SD of the mean (or the lowest quartile) relative to young normals (aged 30 - 39 years) (t-score). Both sets of graphs confirm the relatively steep fall in BMD after.
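The t-score criterion described in this excerpt is straightforward to express; the function names and the reference values used below are illustrative assumptions, not values from the study:

```python
def t_score(bmd, young_mean, young_sd):
    """Bone mineral density expressed as standard deviations from the
    young-normal (age 30-39) reference mean, per the record."""
    return (bmd - young_mean) / young_sd

def high_fracture_risk(bmd, young_mean, young_sd):
    """Flag a BMD that falls more than 2 SD below the young-normal mean."""
    return t_score(bmd, young_mean, young_sd) < -2.0
```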

  13. Revisiting the 'Duality of Meaning' of some English Words: What's on ...

    African Journals Online (AJOL)

    The objective was to determine whether the students knew both the technical or scientific/engineering meanings and the normal meanings of the words, namely: elevation, surveying, function, sign, model, ... Thus, the majority of the students did not know both meanings, which pointed to students' vocabulary challenges.

  14. Quantification of microangiopathic lesions in brain parenchyma and age-adjusted mean scores for the diagnostic separation of normal from pathological values in senile dementia

    International Nuclear Information System (INIS)

    Hentschel, F.; Kreis, M.; Damian, M.; Krumm, B.; Froelich, F.

    2005-01-01

    Purpose: to quantify microangiopathic lesions in the cerebral white matter and to develop age-corrected cut-off values for separating normal from dementia-related pathological lesions. Materials and methods: in a memory clinic, 338 patients were investigated neuropsychiatrically, by a psychological test battery, and by MRI. Using a FLAIR sequence and a newly developed rating scale, white matter lesions (WMLs) were quantified with respect to localization, number and intensity, and these ratings were condensed into a score. The WML scores were correlated with the mini-mental state examination (MMSE) and clinical dementia rating (CDR) score in dementia patients. A non-linear smoothing procedure was used to calculate age-related mean values and confidence intervals, separately for cognitively intact subjects and dementia patients. Results: the WML scores correlated highly significantly with age in cognitively intact subjects and with psychometric scores in dementia patients. Age-adjusted WML scores of cognitively intact subjects were significantly different from those of dementia patients with respect to the whole brain as well as to the frontal lobe. Mean values and confidence intervals adjusted for age significantly separated dementia patients from cognitively intact subjects over an age range of 54 through 84 years. Conclusion: a rating scale for the quantification of WML was validated, and age-adjusted mean values with their confidence intervals for a diagnostically relevant age range were developed. This allows an easy-to-handle, fast and reliable diagnosis of the vascular component in senile dementia. (orig.)

  15. Normalizations of High Taylor Reynolds Number Power Spectra

    Science.gov (United States)

    Puga, Alejandro; Koster, Timothy; Larue, John C.

    2014-11-01

    The velocity power spectrum provides insight into how turbulent kinetic energy is transferred from larger to smaller scales. Wind tunnel experiments are conducted where high-intensity turbulence is generated by means of an active turbulence grid modeled after Makita's design (Makita, 1991) as implemented by Mydlarski and Warhaft (M&W, 1998). The goal of this study is to document the evolution of the scaling region and assess the relative collapse of several proposed normalizations over a range of Rλ from 185 to 997. As predicted by Kolmogorov (1963), an asymptotic approach of the slope (n) of the inertial subrange to -5/3 with increasing Rλ is observed. Three velocity power spectrum normalizations are considered, as presented by Kolmogorov (1963), von Karman and Howarth (1938) and George (1992). Results show that the von Karman and Howarth normalization does not collapse the velocity power spectrum as well as the Kolmogorov and George normalizations. The Kolmogorov normalization does a good job of collapsing the velocity power spectrum in the normalized high wavenumber range of 0.0002 […]. Supported by the University of California, Irvine Research Fund.
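To make the Kolmogorov normalization concrete: the sketch below (Python/NumPy; the synthetic -5/3 spectrum, the viscosity and dissipation values, and the variable names are illustrative assumptions, not the authors' data) rescales a one-dimensional velocity power spectrum by the Kolmogorov length scale η = (ν³/ε)^(1/4) and the amplitude factor (εν⁵)^(1/4).

```python
import numpy as np

def kolmogorov_normalize(k, E, nu, eps):
    """Kolmogorov normalization of a 1-D velocity power spectrum.

    k   : wavenumbers [1/m]
    E   : spectral density E(k) [m^3/s^2]
    nu  : kinematic viscosity [m^2/s]
    eps : mean dissipation rate [m^2/s^3]
    Returns the normalized wavenumber k*eta and E(k)/(eps*nu^5)^(1/4).
    """
    eta = (nu**3 / eps) ** 0.25              # Kolmogorov length scale
    return k * eta, E / (eps * nu**5) ** 0.25

# Synthetic inertial-range spectrum E(k) = C * eps^(2/3) * k^(-5/3)
k = np.logspace(0, 4, 200)
eps, nu = 0.1, 1.5e-5
E = 1.5 * eps ** (2 / 3) * k ** (-5 / 3)
k_eta, E_star = kolmogorov_normalize(k, E, nu, eps)
```

For this pure power law the normalized spectrum keeps its -5/3 slope; the question the study addresses is how well real spectra measured at different Rλ collapse onto one such normalized curve.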

  16. 46 CFR 182.620 - Auxiliary means of steering.

    Science.gov (United States)

    2010-10-01

    ... TONS) MACHINERY INSTALLATION Steering Systems § 182.620 Auxiliary means of steering. (a) Except as... personnel hazards during normal or heavy weather operation. (b) A suitable hand tiller may be acceptable as...

  17. Sketching Curves for Normal Distributions--Geometric Connections

    Science.gov (United States)

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…

  18. Essential Oil of Japanese Cedar (Cryptomeria japonica) Wood Increases Salivary Dehydroepiandrosterone Sulfate Levels after Monotonous Work.

    Science.gov (United States)

    Matsubara, Eri; Tsunetsugu, Yuko; Ohira, Tatsuro; Sugiyama, Masaki

    2017-01-21

    Employee problems arising from mental illnesses have steadily increased and become a serious social problem in recent years. Wood is a widely available plant material, and knowledge of the psychophysiological effects of inhalation of woody volatile compounds has grown considerably. In this study, we established an experimental method to evaluate the effects of Japanese cedar wood essential oil on subjects performing monotonous work. Two experimental conditions, one with and one without diffusion of the essential oil, were prepared. Salivary stress markers were determined during and after a calculation task, followed by questionnaires for subjective odor assessment. We found that inhalation of air containing the volatile compounds of Japanese cedar wood essential oil increased the secretion of dehydroepiandrosterone sulfate (DHEA-s). Slight differences in the subjective assessment of the odor of the experiment rooms were observed. The results of the present study indicate that the volatile compounds of Japanese cedar wood essential oil affect the endocrine regulatory mechanism to facilitate stress responses. Thus, we suggest that this essential oil can improve employees' mental health.

  19. Study of molecule-metal interfaces by means of the normal incidence X-ray standing wave technique

    International Nuclear Information System (INIS)

    Mercurio, Giuseppe

    2012-01-01

    Functional surfaces based on monolayers of organic molecules are currently the subject of an intense research effort due to their applications in molecular electronics, sensing and catalysis. Because of the strong dependence of organic-based devices on the local properties of the molecule-metal interface, a direct investigation of the interface chemistry is of paramount importance. In this context, the bonding distance, measured by means of the normal incidence X-ray standing wave technique (NIXSW), provides direct access to the molecule-metal interactions. At the same time, NIXSW adsorption heights are used to benchmark different density functional theory (DFT) schemes and determine the ones with predictive power for similar systems. This work investigates the geometric and chemical properties of different molecule/metal interfaces, relevant to molecular electronics and functional surfaces applications, primarily by means of the NIXSW technique. All NIXSW data are analyzed with the newly developed open source program Torricelli, which is thoroughly documented in the thesis. In order to elucidate the role played by the substrate within molecule/metal interfaces, the prototype organic molecule 3,4,9,10-perylene-tetracarboxylic-dianhydride (PTCDA) is explored on the Ag(110) surface. The molecule is more distorted and adsorbs at smaller bonding distances on the more reactive Ag(110) surface than on the Ag(100), Ag(111) and Au(111) substrates. This conclusion follows from the detailed molecular adsorption geometry obtained from the differential analysis of nonequivalent carbon and oxygen species (including a careful error analysis). Subsequently, the chemisorptive PTCDA/Ag(110) interaction is tuned by the co-deposition of an external alkali metal, namely K. As a consequence, the functional groups of PTCDA unbind from the surface, which, in turn, undergoes major reconstruction. In fact, the resulting nanopatterned surface consists of alternating up and down

  20. Testing a novel method for improving wayfinding by means of a P3b Virtual Reality Visual Paradigm in normal aging.

    Science.gov (United States)

    de Tommaso, Marina; Ricci, Katia; Delussi, Marianna; Montemurno, Anna; Vecchio, Eleonora; Brunetti, Antonio; Bevilacqua, Vitoantonio

    2016-01-01

    We propose a virtual reality (VR) model, reproducing a house environment, in which color modification of target places, obtainable by home automation in a real environment, was tested by means of a P3b paradigm. The target place (bathroom door) was designed to be recognized during virtual wayfinding in a realistic reproduction of a house environment. Different color and luminance conditions, easily obtained in the real environment from a remote home automation control, were applied to the target and standard places: all the doors were illuminated in white (W), and only target doors were colored with a green (G) or red (R) spotlight. Three different Virtual Environments (VE) were depicted, with the bathroom designed in the aisle (A), living room (L) and bedroom (B). EEG was recorded from 57 scalp electrodes in 10 healthy subjects in the 60-80 year age range (O-old group) and 12 normal cases in the 20-30 year age range (Y-young group). In the young group, all the target stimuli determined a significant increase in P3b amplitude on the parietal, occipital and central electrodes compared to the frequent-stimulus condition, regardless of the color of the target door. In the elderly group, the P3b obtained with the green and red colors was significantly different from that for the frequent stimulus on the parietal, occipital and central derivations, while the white stimulus did not evoke a significantly larger P3b with respect to the frequent stimulus. The modulation of P3b amplitude, obtained by color and luminance change of the target place, suggests that cortical resources able to compensate for the age-related progressive loss of cognitive performance need to be facilitated even in the normal elderly. Event-related responses obtained by virtual reality may be a reliable method to test the suitability of an environment for age-related cognitive changes.

  1. Finite-State Mean-Field Games, Crowd Motion Problems, and its Numerical Methods

    KAUST Repository

    Machado Velho, Roberto

    2017-09-10

    In this dissertation, we present two research projects, namely finite-state mean-field games and the Hughes model for the motion of crowds. In the first part, we describe finite-state mean-field games and some applications to socio-economic sciences. Examples include paradigm shifts in the scientific community and consumer choice behavior in a free market. The corresponding finite-state mean-field game models are hyperbolic systems of partial differential equations, for which we propose and validate a new numerical method. Next, we consider the dual formulation of two-state mean-field games, and we discuss numerical methods for these problems. We then depict different computational experiments, exhibiting a variety of behaviors, including shock formation, lack of invertibility, and monotonicity loss. We conclude the first part of this dissertation with an investigation of the shock structure for two-state problems. In the second part, we consider a model for the movement of crowds proposed by R. Hughes in [56] and describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an Eikonal equation with Dirichlet or Neumann data. We first establish a priori estimates for the solutions. Next, we consider radial solutions, and we identify a shock formation mechanism. Subsequently, we illustrate the existence of congestion, the breakdown of the model, and the trend to equilibrium. We also propose a new numerical method for the solution of Fokker-Planck equations and then extend it to systems of PDEs composed of a Fokker-Planck equation and a potential-type equation. Finally, we illustrate the application of this numerical method to both the Hughes model and mean-field games. We also depict cases such as the evacuation of a room and the movement of people around the Kaaba (Saudi Arabia).
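The monotonicity-based numerical idea mentioned in this line of work is to follow a flow that contracts toward the MFG solution. The toy sketch below is not the dissertation's actual scheme: it merely discretizes the flow x' = -F(x) for a generic monotone operator F, here a positive-definite linear map chosen for illustration, and converges to the zero of F.

```python
import numpy as np

def monotone_flow(F, x0, h=0.1, tol=1e-8, max_iter=10_000):
    """Explicit Euler discretization of the flow x' = -F(x).

    For a monotone, Lipschitz F (standing in here for the MFG operator),
    the flow contracts toward a zero of F for a small enough step h.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = h * F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy monotone operator: F(x) = A x - b with A symmetric positive definite
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])
x_star = monotone_flow(lambda x: A @ x - b, x0=np.zeros(2))
# x_star approximately satisfies A x = b
```

The contraction argument is the same one that makes the method attractive for MFGs: monotonicity guarantees the flow cannot cycle, so the fixed point is approached from any initial guess.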

  2. A Lattice-Misfit-Dependent Damage Model for Non-linear Damage Accumulations Under Monotonous Creep in Single Crystal Superalloys

    Science.gov (United States)

    le Graverend, J.-B.

    2018-05-01

    A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced somewhere in the creep life. Furthermore, a phenomenological model aimed at describing the evolution of the constrained lattice misfit during monotonous creep load is also formulated. The response of the lattice-misfit-dependent plasticity-coupled damage model is compared with the experimental results obtained at 140 and 160 MPa on the first-generation Ni-based single crystal superalloy MC2. The comparison reveals that the damage model performs well at 160 MPa but less well at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.

  3. Elastic K-means using posterior probability.

    Science.gov (United States)

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) that uses posterior probabilities for soft assignment, so that each data point can belong to multiple clusters fractionally, and we show the benefits of the proposed Elastic K-means. Furthermore, in many applications, pairwise relations (graph information) are available in addition to vector attributes. We therefore integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
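A minimal sketch of soft clustering with posterior-style responsibilities, in the spirit of (but not identical to) the EKM formulation described above; the inverse temperature `beta` and the farthest-point initialization are my own choices for the illustration.

```python
import numpy as np

def soft_kmeans(X, k, beta=2.0, n_iter=50, seed=0):
    """Soft k-means: each point belongs to every cluster fractionally.

    r[i, j] ∝ exp(-beta * ||x_i - c_j||^2) plays the role of a posterior
    probability (Gaussian likelihood with equal priors).
    """
    rng = np.random.default_rng(seed)
    # farthest-point initialization keeps the sketch robust
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(d2.argmax()))
    centers = X[idx].astype(float)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        logits = -beta * d2
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        r = np.exp(logits)
        r /= r.sum(axis=1, keepdims=True)             # rows sum to 1
        centers = (r.T @ X) / r.sum(axis=0)[:, None]  # responsibility-weighted means
    return centers, r

# Two well-separated blobs
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
centers, resp = soft_kmeans(X, k=2)
```

Hard k-means is the beta → ∞ limit of this scheme, in which each row of the responsibility matrix collapses to a single 1.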

  4. Semantic and phonological coding in poor and normal readers.

    Science.gov (United States)

    Vellutino, F R; Scanlon, D M; Spearing, D

    1995-02-01

    Three studies were conducted evaluating semantic and phonological coding deficits as alternative explanations of reading disability. In the first study, poor and normal readers in second and sixth grade were compared on various tests evaluating semantic development as well as on tests evaluating rapid naming and pseudoword decoding as independent measures of phonological coding ability. In a second study, the same subjects were given verbal memory and visual-verbal learning tasks using high and low meaning words as verbal stimuli and Chinese ideographs as visual stimuli. On the semantic tasks, poor readers performed below the level of the normal readers only at the sixth grade level, but, on the rapid naming and pseudoword learning tasks, they performed below the normal readers at the second as well as at the sixth grade level. On both the verbal memory and visual-verbal learning tasks, performance in poor readers approximated that of normal readers when the word stimuli were high in meaning but not when they were low in meaning. These patterns were essentially replicated in a third study that used some of the same semantic and phonological measures used in the first experiment, and verbal memory and visual-verbal learning tasks that employed word lists and visual stimuli (novel alphabetic characters) that more closely approximated those used in learning to read. It was concluded that semantic coding deficits are an unlikely cause of reading difficulties in most poor readers at the beginning stages of reading skills acquisition, but accrue as a consequence of prolonged reading difficulties in older readers. It was also concluded that phonological coding deficits are a probable cause of reading difficulties in most poor readers.

  5. Absorption of orally administered 65Zn by normal human subjects

    International Nuclear Information System (INIS)

    Aamodt, R.L.; Rumble, W.F.; Johnston, G.S.; Markley, E.J.; Henkin, R.I.

    1981-01-01

    Despite studies by several investigators of human gastrointestinal 65Zn absorption, implications of these data for evaluation of functional zinc status are unclear because limited numbers of normal subjects have been studied. To evaluate zinc absorption in normal humans, 75 subjects (31 women, 44 men, ages 18 to 84 yr) were given 10 µCi of carrier-free 65Zn orally after an overnight fast. Absorption calculated from total body retention measured 7, 14, and 21 days after administration of the tracer was 65 +/- 11% (mean +/- 1 SD), with a range from 40 to 86%. Comparison of these results with those for patients with a variety of diseases indicates that patients exhibit a wider range of absorption and, in four of six studies, decreased mean zinc absorption. These results on gastrointestinal zinc absorption in a large number of normal humans offer a basis for a clearer comparison with data from patients who exhibit abnormalities of zinc absorption.

  6. Numerical Study on Dynamic Response of a Horizontal Layered-Structure Rock Slope under a Normally Incident Sv Wave

    Directory of Open Access Journals (Sweden)

    Zhifa Zhan

    2017-07-01

    Full Text Available Several post-earthquake investigations have indicated that the slope structure plays a leading role in the stability of rock slopes under dynamic loads. In this paper, the dynamic response of a horizontal layered-structure rock slope under a harmonic Sv wave is studied by making use of the Fast Lagrangian Analysis of Continua (FLAC) method. The suitability of FLAC for studying wave transmission across rock joints is validated through comparison with analytical solutions. After parametric studies on Sv wave transmission across the horizontal layered-structure rock slope, it is found that the acceleration amplification coefficient η, defined as the ratio of the acceleration at the monitoring point to the value at the toe, increases in a wave-like manner with height along the slope surface. Meanwhile, the fluctuation weakens as the normalized joint stiffness K increases and strengthens as the normalized joint spacing ξ increases. The acceleration amplification coefficient of the slope crest, ηcrest, does not monotonically increase with increasing ξ, but decreases with increasing K. Additionally, ηcrest is more sensitive to ξ than to K. From the contour figures, it can also be seen that the contours of η exhibit a rhythmic pattern, and that the effects of ξ on the acceleration amplification coefficient are more pronounced than those of K.

  7. Mean-field models and superheavy elements

    International Nuclear Information System (INIS)

    Reinhard, P.G.; Bender, M.; Maruhn, J.A.; Frankfurt Univ.

    2001-03-01

    We discuss the performance of two widely used nuclear mean-field models, the relativistic mean-field theory (RMF) and the non-relativistic Skyrme-Hartree-Fock approach (SHF), with particular emphasis on the description of superheavy elements (SHE). We provide a short introduction to the SHF and RMF, the relations between these two approaches and the relations to other nuclear structure models, briefly review the basic properties with respect to normal nuclear observables, and finally present and discuss recent results on the binding properties of SHE computed with a broad selection of SHF and RMF parametrisations. (orig.)

  8. Diffusion-weighted MRI in prostatic lesions: Diagnostic performance of normalized ADC using normal peripheral prostatic zone as a reference

    Directory of Open Access Journals (Sweden)

    Tamer F. Taha Ali

    2018-03-01

    Full Text Available Aim of study: To evaluate the potential value of the normal peripheral zone as a reference organ for normalizing the apparent diffusion coefficient (ADC) of prostatic lesions, in order to improve the evaluation of such lesions. Patients and methods: This prospective study included 38 patients with clinical suspicion of prostate cancer (increased PSA levels (>4 ng/ml) or a hard prostate on digital rectal examination) who were scheduled to undergo a TRUS-guided biopsy. Conventional and DW-MRI were performed and the ADC was calculated. The normalized ADC (nADC) value was calculated by dividing the ADC of the lesion by the ADC of the reference site (healthy peripheral zone). DW-MRI results were compared to the results of biopsy, the ADCs and nADCs of benign and malignant lesions were compared, and receiver operating characteristic (ROC) curve analysis was performed. Results: The patients were classified by histopathology into a non-malignant group (16 patients) and a malignant group (22 patients). A significant negative correlation of both ADC and nADC with malignancy was detected. There was no significant difference between the mean ADC of the healthy peripheral prostatic zone (PZ) in benign and malignant cases (2.221 ± 0.356 versus 1.99 ± 0.538 ×10−3 mm²/s, p = 0.144). There were significant differences between benign and malignant lesions in both mean ADC and mean nADC (1.049 ± 0.217 versus 0.659 ± 0.221 ×10−3 mm²/s, p < 0.001, and 0.475 ± 0.055 versus 0.328 ± 0.044, p < 0.001, respectively). The diagnostic performance of nADC was significantly higher than that of ADC: an ADC cut-off value of 0.75 ×10−3 mm²/s and an nADC cut-off value of 0.39 significantly differentiated benign from malignant lesions with sensitivity, specificity, PPV and NPV of 86.36%, 75%, 82.61% and 80% respectively (p < 0.0001) for ADC, and 95.45%, 93.75%, 95.45% and 93.75% (p < 0.0001) for nADC. Conclusion: the diagnostic performance of nADC using the normal peripheral zone as a reference is higher than
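The normalization at the heart of the study is a single ratio. A sketch follows (the input ADC values below are made up for illustration; only the two cut-offs are taken from the abstract):

```python
def normalized_adc(lesion_adc, reference_adc):
    """nADC = lesion ADC / ADC of a healthy peripheral-zone reference ROI."""
    return lesion_adc / reference_adc

ADC_CUTOFF = 0.75    # x10^-3 mm^2/s, cut-off reported in the abstract
NADC_CUTOFF = 0.39   # dimensionless, cut-off reported in the abstract

lesion_adc, reference_adc = 0.66, 2.00   # x10^-3 mm^2/s (illustrative values)
nadc = normalized_adc(lesion_adc, reference_adc)
suspicious = (lesion_adc < ADC_CUTOFF) and (nadc < NADC_CUTOFF)
```

Dividing by a same-patient reference cancels patient- and scanner-specific scaling of the diffusion signal, which is why the ratio discriminates better than the raw ADC.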

  9. GC-Content Normalization for RNA-Seq Data

    Science.gov (United States)

    2011-01-01

    Background Transcriptome sequencing (RNA-Seq) has become the assay of choice for high-throughput studies of gene expression. However, as is the case with microarrays, major technology-related artifacts and biases affect the resulting expression measures. Normalization is therefore essential to ensure accurate inference of expression levels and subsequent analyses thereof. Results We focus on biases related to GC-content and demonstrate the existence of strong sample-specific GC-content effects on RNA-Seq read counts, which can substantially bias differential expression analysis. We propose three simple within-lane gene-level GC-content normalization approaches and assess their performance on two different RNA-Seq datasets, involving different species and experimental designs. Our methods are compared to state-of-the-art normalization procedures in terms of bias and mean squared error for expression fold-change estimation and in terms of Type I error and p-value distributions for tests of differential expression. The exploratory data analysis and normalization methods proposed in this article are implemented in the open-source Bioconductor R package EDASeq. Conclusions Our within-lane normalization procedures, followed by between-lane normalization, reduce GC-content bias and lead to more accurate estimates of expression fold-changes and tests of differential expression. Such results are crucial for the biological interpretation of RNA-Seq experiments, where downstream analyses can be sensitive to the supplied lists of genes. PMID:22177264
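A toy illustration of within-lane, gene-level GC-content normalization (a simplified binned-median rescaling; the EDASeq package's actual methods include regression-based and full-quantile variants, so treat this purely as a sketch):

```python
import numpy as np

def gc_normalize(counts, gc, n_bins=10):
    """Rescale read counts so every GC-content bin shares the lane median."""
    counts = np.asarray(counts, dtype=float)
    gc = np.asarray(gc, dtype=float)
    edges = np.quantile(gc, np.linspace(0.0, 1.0, n_bins + 1))
    bin_id = np.digitize(gc, edges[1:-1])            # bin index per gene
    lane_median = np.median(counts[counts > 0])
    out = counts.copy()
    for b in range(n_bins):
        mask = (bin_id == b) & (counts > 0)
        if mask.any():
            out[mask] *= lane_median / np.median(counts[mask])
    return out

# Toy lane whose counts rise linearly with GC content (a pure bias)
gc = np.linspace(0.3, 0.7, 1000)
counts = 50 + 200 * gc
normalized = gc_normalize(counts, gc)
```

After the rescaling, the median count is the same in every GC bin, removing the monotone GC trend that would otherwise confound fold-change estimates between samples with different GC effects.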

  10. The Study and Comparison of Irrational Beliefs in Addicted and Normal People

    Directory of Open Access Journals (Sweden)

    Hasan Aminpoor

    2011-05-01

    Full Text Available Introduction: Irrational beliefs have destructive and serious effects on individuals' behavior at home, in the work environment, and in social settings, and also carry deep emotional effects (depression, grief, self-teasing, self-reproach, and contrition). The main aim of the present study was to compare irrational beliefs in addicted and normal people. Method: The research method was causal-comparative. Given the importance of the subject and its role in individuals' tendency toward addiction, a sample of 120 persons (60 addicted people and 60 normal ones), matched for age, was selected by means of available sampling, and the Jones irrational beliefs questionnaire was administered to the selected sample. In order to analyze the data, an independent-samples t test and ANOVA were run. Results: There was a significant difference in the mean scores of irrational beliefs between the groups (addicted and normal); also, among addicted people there were significant differences in the mean scores of irrational beliefs with respect to education level and economic status. Conclusion: The mean score of irrational beliefs in addicted people is higher than in normal ones. Taking into consideration that individuals can change their behaviors and feelings through changing their beliefs, irrational beliefs should be replaced with rational ones through appropriate education.

  11. Comparison of plasma endothelin levels between osteoporotic, osteopenic and normal subjects

    Directory of Open Access Journals (Sweden)

    Biçimoğlu Ali

    2005-09-01

    Full Text Available Abstract Background It has been demonstrated that endothelins (ET) have significant roles in bone remodeling, metabolism and the physiopathology of several bone diseases. We aimed to investigate whether there was any difference between the plasma ET levels of osteoporotic patients and normal subjects. Methods 86 patients (70 women and 16 men) with a mean age of 62.6 years (range: 51–90 years) were included in this study. Patients were divided into osteoporosis, osteopenia and normal groups based on the T scores reported from DEXA evaluation, according to the recommendations of the World Health Organization. By these criteria, 19, 43 and 24 patients were normal, osteopenic and osteoporotic, respectively. The total plasma level of ET was then measured in all patients with a monoclonal-antibody-based sandwich immunoassay (EIA) method. A one-way analysis of variance test was used to compare endothelin values between normal, osteopenic and osteoporotic subjects. Results The mean total plasma endothelin level was 98.36 ± 63.96, 100.92 ± 47.2 and 99.56 ± 56.6 pg/ml in the osteoporotic, osteopenic and normal groups, respectively. The difference between groups was not significant (p > 0.05). Conclusion No significant differences in plasma ET levels among the three groups of study participants could be detected in this study.

  12. Reconstructing Normality

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov

    2012-01-01

    Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal […]. The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient

  13. Behavior of annealed type 316 stainless steel under monotonic and cyclic biaxial loading at room temperature

    International Nuclear Information System (INIS)

    Ellis, J.R.; Robinson, D.N.; Pugh, C.E.

    1978-01-01

    This paper addresses the elastic-plastic behavior of type 316 stainless steel, one of the major structural alloys used in liquid-metal fast breeder reactor components. The study was part of a continuing program to develop a structural design technology applicable to advanced reactor systems. Here, the behavior of solution-annealed material was examined through biaxial stress experiments conducted at room temperature under radial loadings (√3τ = σ) in tension-torsion stress space. The effects of both stress-limited monotonic loading and strain-limited cyclic loading on the size, shape and position of yield loci were determined, using a small-offset-strain (10 microstrain) definition of yield. In the present work, the aim was to determine the extent to which the constitutive laws previously recommended for type 304 stainless steel are applicable to type 316 stainless steel. It was concluded that, for the conditions investigated, the inelastic behaviors of the two materials are qualitatively similar. Specifically, the von Mises yield criterion provides a reasonable approximation of initial yield behavior, and the subsequent hardening behavior, at least under small-offset definitions of yield, is to first order kinematic in nature. (Auth.)

  14. Standardized uptake values of fluorine-18 fluorodeoxyglucose: the value of different normalization procedures

    International Nuclear Information System (INIS)

    Schomburg, A.; Bender, H.; Reichel, C.; Sommer, T.; Ruhlmann, J.; Kozak, B.; Biersack, H.J.

    1996-01-01

    While the evident advantages of absolute metabolic rate determinations cannot be equalled by static image analysis of fluorine-18 fluorodeoxyglucose positron emission tomographic (FDG PET) studies, various algorithms for the normalization of static FDG uptake values have been proposed. This study was performed to compare different normalization procedures in terms of their dependency on individual patient characteristics. Standardized FDG uptake values (SUVs) were calculated for liver and lung tissue in 126 patients studied with whole-body FDG PET. Uptake values were normalized for total body weight, lean body mass and body surface area. Ranges, means, medians, standard deviations and variation coefficients of these SUV parameters were calculated, and their interdependency with total body weight, lean body mass, body surface area, patient height and blood sugar levels was determined by means of regression analysis. Standardized FDG uptake values normalized for body surface area were clearly superior to SUV parameters normalized for total body weight or lean body mass. Variation and correlation coefficients of body surface area-normalized uptake values were minimal when compared with SUV parameters derived from the other normalization procedures. Normalization for total body weight resulted in uptake values still dependent on body weight and blood sugar levels, while normalization for lean body mass did not eliminate the positive correlation with lean body mass and patient height. It is concluded that normalization of FDG uptake values for body surface area is less dependent on individual patient characteristics than are FDG uptake values normalized for other parameters, and therefore appears to be preferable for FDG PET studies in oncology. (orig.)
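The normalizations compared in this study differ only in the denominator of the SUV. A sketch (the activity and patient values are illustrative; the Du Bois formula is one common BSA estimate and an assumption here, not necessarily the study's choice):

```python
def bsa_dubois(weight_kg, height_cm):
    """Body surface area in m^2 (Du Bois formula)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def suv(tissue_kbq_per_ml, injected_dose_mbq, normalizer_g):
    """Standardized uptake value: concentration / (dose / normalizer).

    Pass total body weight, lean body mass, or a BSA-derived equivalent
    (in grams) as the normalizer to obtain the corresponding SUV variant.
    """
    dose_kbq = injected_dose_mbq * 1000.0
    return tissue_kbq_per_ml / (dose_kbq / normalizer_g)

suv_bw = suv(5.0, 370.0, 70_000.0)   # body-weight-normalized SUV
bsa = bsa_dubois(70.0, 175.0)        # about 1.85 m^2
```

Because BSA grows more slowly than weight with body size, a BSA-based denominator damps the residual weight dependence that the study observed in weight-normalized SUVs.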

  15. Three-dimensional assessment of the normal Japanese glenoid and comparison with the normal French glenoid.

    Science.gov (United States)

    Mizuno, N; Nonaka, S; Ozaki, R; Yoshida, M; Yoneda, M; Walch, G

    2017-12-01

    In 2014, reverse total shoulder arthroplasty (RSA) was approved in Japan. We were concerned that the base plate might be incompatible with Japanese patients, who are generally smaller than Westerners. Therefore, we investigated the dimensions and morphology of the normal Japanese glenoid and compared them with the normal French glenoid. One hundred Japanese shoulders without glenoid lesions (50 men and 50 women) were investigated and compared with 100 French shoulders (50 men and 50 women). Computed tomography was performed with 3-dimensional image reconstruction, and images were analyzed using Glenosys software. Glenoid parameters (width, height, retroversion and inclination) were compared between Japanese and French subjects. In Japanese subjects, the mean glenoid width was 25.5 mm, height was 33.3 mm, retroversion was 2.3° and inclination was 11.6° superiorly. In French subjects, the mean glenoid width was 26.7 mm, height was 35.4 mm, retroversion was 6.0° and inclination was 10.4° superiorly. Glenoid width and height were significantly smaller in Japanese subjects than in French subjects (P=0.001 and P<0.001, respectively), and retroversion was significantly smaller in Japanese subjects than in French subjects (P<0.001). There was no significant difference in inclination. These findings will help surgeons to identify suitable patients for RSA and perform the procedure with appropriate preoperative planning. Level of evidence: IV, retrospective or historical series. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  16. Areas of normal pulmonary parenchyma on HRCT exhibit increased FDG PET signal in IPF patients

    Energy Technology Data Exchange (ETDEWEB)

    Win, Thida [Lister Hospital, Respiratory Medicine, Stevenage (United Kingdom); Thomas, Benjamin A.; Lambrou, Tryphon; Hutton, Brian F.; Endozo, Raymondo; Shortman, Robert I.; Afaq, Asim; Ell, Peter J.; Groves, Ashley M. [University College London, Institute of Nuclear Medicine, University College Hospital, London (United Kingdom); Screaton, Nicholas J. [Papworth Hospital, Radiology Department, Papworth Everard (United Kingdom); Porter, Joanna C. [University College London, Centre for Respiratory Diseases, University College Hospital, London (United Kingdom); Maher, Toby M. [Royal Brompton Hospital, Interstitial Lung Disease Unit, London (United Kingdom); Lukey, Pauline [GSK, Fibrosis DPU, Research and Development, Stevenage (United Kingdom)

    2014-02-15

    Patients with idiopathic pulmonary fibrosis (IPF) show increased PET signal at sites of morphological abnormality on high-resolution computed tomography (HRCT). The purpose of this investigation was to examine the PET signal at sites of normal-appearing lung on HRCT in IPF. Consecutive IPF patients (22 men, 3 women) were prospectively recruited. The patients underwent {sup 18}F-FDG PET/HRCT. The pulmonary imaging findings in the IPF patients were compared to the findings in a control population. Pulmonary uptake of {sup 18}F-FDG (mean SUV) was quantified at sites of morphologically normal parenchyma on HRCT. SUVs were also corrected for tissue fraction (TF). The mean SUV in IPF patients was compared with that in 25 controls (patients with lymphoma in remission or suspected paraneoplastic syndrome with normal PET/CT appearances). The pulmonary SUV (mean ± SD) uncorrected for TF was 0.48 ± 0.14 in the controls and 0.78 ± 0.24 in normal lung regions of IPF patients (p < 0.001). The TF-corrected mean SUV was 2.24 ± 0.29 in the controls and 3.24 ± 0.84 in IPF patients (p < 0.001). IPF patients have increased pulmonary uptake of {sup 18}F-FDG on PET in areas of lung with a normal morphological appearance on HRCT. This may have implications for determining disease mechanisms and treatment monitoring. (orig.)
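A sketch of a tissue-fraction correction step (the linear air/soft-tissue mixture model used here to estimate TF from CT density is my assumption, not necessarily the paper's method; the input numbers are illustrative):

```python
def tf_corrected_suv(suv_mean, mean_hu):
    """Divide a lung SUV by the tissue fraction (TF).

    TF is estimated from CT density assuming a linear air/soft-tissue
    mixture: -1000 HU (air) -> TF = 0, 0 HU (soft tissue) -> TF = 1.
    """
    tf = 1.0 + mean_hu / 1000.0
    if not 0.0 < tf <= 1.0:
        raise ValueError("mean HU outside the expected lung range")
    return suv_mean / tf

# e.g. an uncorrected SUV of 0.48 in lung with a mean density of -786 HU
corrected = tf_corrected_suv(0.48, -786.0)
```

Because aerated lung is mostly air, dividing by the tissue fraction reports uptake per unit of actual tissue rather than per voxel, which is why the TF-corrected means in the abstract are several times the uncorrected ones.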

  17. Doppler ultrasound scan during normal gestation: umbilical circulation; Ecografia Doppler en la gestacion normal: circulacion umbilical

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, T.; Sabate, J.; Martinez-Benavides, M. M.; Sanchez-Ramos, J. [Hospital Virgen Macarena. Sevilla (Spain)

    2002-07-01

To determine normal umbilical circulation patterns by means of Doppler ultrasound scan in a healthy gestating population without risk factors and with normal perinatal results, and to evaluate any modifications relative to gestational age by obtaining records kept during pregnancy. One hundred and sixteen pregnant women carrying a single fetus were studied. These women had no risk factors, and their clinical and analytical controls, as well as ultrasound scans, were all normal. A total of 193 Doppler ultrasound scans were performed between weeks 15 and 41 of gestation, with blood-flow analysis in the arteries and vein of the umbilical cord. The information obtained was correlated with parameters that evaluate fetal well-being (fetal monitoring and/or oxytocin test) and perinatal outcome (delivery type, birth weight, Apgar score). Statistical analysis was performed with the programs SPSS 6.0.1 for Windows and EPIINFO 6.0.4. With pulsed Doppler, the umbilical artery in all cases demonstrated a biphasic morphology with systolic and diastolic components and without retrograde blood flow. As gestation advanced, a progressive decrease in resistance was observed, along with an increase in blood-flow velocity during the diastolic phase. The Doppler ultrasound scan is a non-invasive method that permits the hemodynamic study of umbilical blood circulation. A knowledge of normal blood-flow signal morphology, as well as of the normal values for Doppler indices in relation to gestational age, would permit us to utilize this method in high-risk pregnancies. (Author) 30 refs.
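The "resistance" the abstract refers to is conventionally quantified with umbilical-artery Doppler indices; a minimal sketch of the standard definitions (the record does not state which indices were used):

```python
def resistance_index(psv, edv):
    """Pourcelot resistance index: (peak systolic velocity - end-diastolic
    velocity) / peak systolic velocity; falls as diastolic flow increases."""
    return (psv - edv) / psv

def pulsatility_index(psv, edv, tamv):
    """Pulsatility index: (peak systolic - end-diastolic velocity) divided
    by the time-averaged mean velocity over the cardiac cycle."""
    return (psv - edv) / tamv
```

Both indices decrease as end-diastolic velocity rises, matching the reported fall in resistance with advancing gestation.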

  18. Results of radionuclide ventriculography in normal children and adolescents

    International Nuclear Information System (INIS)

    Reich, O.; Krejcir, M.; Ruth, C.

    1989-01-01

In order to assess the range of normal values in radionuclide ventriculography, 53 normal children and adolescents were retrospectively selected. All had been examined by radionuclide angiocardiography on account of clinical and echocardiographic suspicion of congenital heart disease with a left-to-right shunt; a significant shunt was, however, excluded. In all patients, after equilibration of the radiopharmaceutical, ventricular function was examined by radionuclide ventriculography. The usual volume, time and rate characteristics were evaluated. The normal range was defined as the mean ±2 standard deviations, which is 47 to 72% for the ejection fraction of the left ventricle and 31 to 56% for the ejection fraction of the right ventricle. (author). 2 tabs., 18 refs
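The mean ± 2 SD reference range used here can be sketched directly; the reported intervals are consistent with a common SD of 6.25 percentage points (LVEF mean 59.5%, RVEF mean 43.5%):

```python
import statistics

def normal_range(values, k=2.0):
    """Reference range defined as mean ± k sample standard deviations."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return (m - k * s, m + k * s)

def range_from_summary(mean, sd, k=2.0):
    """Same range computed from reported summary statistics."""
    return (mean - k * sd, mean + k * sd)
```

`range_from_summary(59.5, 6.25)` reproduces the 47-72% LVEF interval and `range_from_summary(43.5, 6.25)` the 31-56% RVEF interval.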

  19. Dosimetric precision requirements and quantities for characterizing the response of tumors and normal tissues

    Energy Technology Data Exchange (ETDEWEB)

    Brahme, A [Karolinska Inst., Stockholm (Sweden). Dept. of Radiation Physics

    1996-08-01

Based on simple radiobiological models, the effect of the distribution of absorbed dose in therapy beams on the radiation response of tumor and normal tissue volumes is investigated. Under the assumption that the dose variation in the treated volume is small it is shown that the response of the tissue to radiation is determined mainly by the mean dose to the tumor or normal tissue volume in question. Quantitative expressions are also given for the increased probability of normal tissue complications and the decreased probability of tumor control as a function of increasing dose variations around the mean dose level to these tissues. When the dose variations are large the minimum tumor dose (to cm{sup 3} size volumes) will generally be better related to tumor control and the highest dose to significant portions of normal tissue correlates best to complications. In order not to lose more than one out of 20 curable patients (95% of highest possible treatment outcome) the required accuracy in the dose distribution delivered to the target volume should be 2.5% (1{sigma}) for a mean dose response gradient {gamma} in the range 2 - 3. For more steeply responding tumors and normal tissues even stricter requirements may be desirable. (author). 15 refs, 6 figs.
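One way to read the stated numbers (a linearized sketch, not the paper's derivation): if the dose-response curve has normalized gradient γ, a relative error δ in mean dose shifts the response probability by roughly γ·δ, so keeping the loss below 5% of the best achievable outcome with γ ≈ 2 tolerates δ ≈ 2.5%:

```python
def max_relative_dose_error(acceptable_loss, gamma):
    """Linearized estimate: change in response probability ~ gamma * (relative
    dose error), so the tolerable relative error is acceptable_loss / gamma."""
    return acceptable_loss / gamma
```

With `acceptable_loss=0.05` and `gamma=2.0` this gives 0.025, i.e. the 2.5% (1σ) accuracy requirement quoted above; steeper gradients tighten the requirement proportionally.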

  20. CT numbers of liver and spleen in normal children

    International Nuclear Information System (INIS)

    Kim, Young Kim

    2002-01-01

To determine the mean liver CT numbers, and the differences between liver and spleen CT numbers and between liver and back muscle CT numbers, in normal children, and to correlate the findings with sex and age. One hundred and five normal children aged 2-14 years underwent pre-contrast CT scanning. Mean CT numbers of the liver, spleen, and back muscles were calculated, as well as the differences in CT numbers between liver and spleen (liver-spleen CT numbers) and between liver and back muscle (liver-muscle CT numbers). The mean liver, spleen, liver-spleen, and liver-muscle CT numbers were 70.22±6.51 HU, 53.28±3.57 HU, 17.13±6.57 HU, and 11.88±5.94 HU, respectively. The difference between liver and back muscle CT numbers did not vary with age, and none of the CT numbers varied with sex. The children's mean liver CT number was 70.22±6.51 HU and the difference between liver and spleen CT numbers was 17.13±6.57 HU. Younger children had higher liver CT and liver-spleen CT numbers than older children. No CT numbers varied according to sex

  1. Areas of normal pulmonary parenchyma on HRCT exhibit increased FDG PET signal in IPF patients

    International Nuclear Information System (INIS)

    Win, Thida; Thomas, Benjamin A.; Lambrou, Tryphon; Hutton, Brian F.; Endozo, Raymondo; Shortman, Robert I.; Afaq, Asim; Ell, Peter J.; Groves, Ashley M.; Screaton, Nicholas J.; Porter, Joanna C.; Maher, Toby M.; Lukey, Pauline

    2014-01-01

Patients with idiopathic pulmonary fibrosis (IPF) show increased PET signal at sites of morphological abnormality on high-resolution computed tomography (HRCT). The purpose of this investigation was to assess the PET signal at sites of normal-appearing lung on HRCT in IPF. Consecutive IPF patients (22 men, 3 women) were prospectively recruited. The patients underwent 18F-FDG PET/HRCT. The pulmonary imaging findings in the IPF patients were compared to the findings in a control population. Pulmonary uptake of 18F-FDG (mean SUV) was quantified at sites of morphologically normal parenchyma on HRCT. SUVs were also corrected for tissue fraction (TF). The mean SUV in IPF patients was compared with that in 25 controls (patients with lymphoma in remission or suspected paraneoplastic syndrome with normal PET/CT appearances). The pulmonary SUV (mean ± SD) uncorrected for TF in the controls was 0.48 ± 0.14 and 0.78 ± 0.24 taken from normal lung regions in IPF patients (p < 0.001). The TF-corrected mean SUV in the controls was 2.24 ± 0.29 and 3.24 ± 0.84 in IPF patients (p < 0.001). IPF patients have increased pulmonary uptake of 18F-FDG on PET in areas of lung with a normal morphological appearance on HRCT. This may have implications for determining disease mechanisms and treatment monitoring. (orig.)

  2. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
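The relative deviation approach mentioned above defines the BMD as the dose at which the modeled mean response deviates from the control mean by a benchmark response fraction (BMR). A minimal bisection sketch (the model form and BMR value are illustrative, not taken from the paper):

```python
def bmd_relative_deviation(model, bmr=0.10, lo=0.0, hi=100.0, tol=1e-8):
    """Find the dose where |model(d) - model(0)| / model(0) equals `bmr`,
    assuming the relative deviation is monotone in dose on [lo, hi]."""
    m0 = model(0.0)

    def dev(d):
        return abs(model(d) - m0) / m0 - bmr

    a, b = lo, hi
    for _ in range(200):
        mid = 0.5 * (a + b)
        if dev(a) * dev(mid) <= 0.0:
            b = mid
        else:
            a = mid
        if b - a < tol:
            break
    return 0.5 * (a + b)
```

With a linear mean response m(d) = 100·(1 - 0.01·d) and BMR = 0.10, the sketch returns a BMD of 10; the "hybrid" method instead works with tail probabilities of the assumed (normal or log-normal) response distribution, which is why it is more sensitive to that assumption.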

  3. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  4. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
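The q-product of Borges referenced here is x ⊗_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) (cut off at zero), which recovers the ordinary product, and hence the usual log-normal Kapteyn process, as q → 1. A small sketch:

```python
def q_product(x, y, q):
    """Borges q-product for positive x, y: reduces to x*y as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return x * y
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    if base <= 0.0:
        return 0.0  # the q-product is defined as 0 outside its support
    return base ** (1.0 / (1.0 - q))
```

Replacing ordinary multiplication by `q_product` in a product of random factors is what deforms the limiting log-normal into the generalised distribution discussed in the record.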

  5. Normal modes and continuous spectra

    International Nuclear Information System (INIS)

    Balmforth, N.J.; Morrison, P.J.

    1994-12-01

    The authors consider stability problems arising in fluids, plasmas and stellar systems that contain singularities resulting from wave-mean flow or wave-particle resonances. Such resonances lead to singularities in the differential equations determining the normal modes at the so-called critical points or layers. The locations of the singularities are determined by the eigenvalue of the problem, and as a result, the spectrum of eigenvalues forms a continuum. They outline a method to construct the singular eigenfunctions comprising the continuum for a variety of problems

  6. Nonischemic changes in right ventricular function on exercise. Do normal volunteers differ from patients with normal coronary arteries

    International Nuclear Information System (INIS)

    Caplin, J.L.; Maltz, M.B.; Flatman, W.D.; Dymond, D.S.

    1988-01-01

Factors other than ischemia may alter right ventricular function both at rest and on exercise. Normal volunteers differ from cardiac patients with normal coronary arteries with regard to their left ventricular response to exercise. This study examined changes in right ventricular function on exercise in 21 normal volunteers and 13 patients with normal coronary arteries, using first-pass radionuclide angiography. There were large ranges of right ventricular ejection fraction in the two groups, both at rest and on exercise. Resting right ventricular ejection fraction was 40.2 +/- 10.6% (mean +/- SD) in the volunteers and 38.6 +/- 9.7% in the patients (p = not significant), and on exercise rose significantly in both groups to 46.1 +/- 9.9% and 45.8 +/- 9.7%, respectively. The difference between the groups was not significant. In both groups some subjects with high resting values showed large decreases in ejection fraction on exercise, and there were significant negative correlations between resting ejection fraction and the change on exercise: r = -0.59 (p < 0.01) in volunteers, and r = -0.66 (p < 0.05) in patients. Older volunteers tended to have lower rest and exercise ejection fractions, but there was no difference between normotensive and hypertensive patients in their rest or exercise values. In conclusion, changes in right ventricular function on exercise are similar in normal volunteers and in patients with normal coronary arteries. Some subjects show decreases in right ventricular ejection fraction on exercise which do not appear to be related to ischemia

  7. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. To assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, and the statistical relation factor R2 and the average deviation as statistical measures. The results show that the CCSNM was the best of the normalization methods compared for estimating the effect of the trainer.
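The baseline normalization methods the CCSNM is compared against are standard; minimal sketches of three of them (line base normalization is omitted since its exact form is not given in the record):

```python
def min_max(xs):
    """Min-max normalization: linearly rescale values into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def z_score(xs):
    """Z-score normalization: zero mean, unit (population) standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return [(x - m) / s for x in xs]

def decimal_scaling(xs):
    """Decimal scaling: divide by the smallest power of 10 that maps
    every value into the open interval (-1, 1)."""
    j = 0
    while max(abs(x) for x in xs) / 10 ** j >= 1.0:
        j += 1
    return [x / 10 ** j for x in xs]
```

The CCSNM differs from these in that it rescales each feature using cross-feature correlation and covariance rather than per-feature statistics alone.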

  8. Multiple imputation in the presence of non-normal data.

    Science.gov (United States)

    Lee, Katherine J; Carlin, John B

    2017-02-20

Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
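Predictive mean matching, the alternative the authors recommend, can be sketched in its simplest single-nearest-donor form; real type-1 implementations draw randomly from the k nearest donors and use a parameter draw for the missing cases' predictions:

```python
def pmm_impute(pred_obs, y_obs, pred_mis):
    """Sketch of predictive mean matching: each missing case borrows the
    observed outcome of the donor whose predicted mean is closest to its own,
    so imputed values are always actually observed values."""
    imputed = []
    for p in pred_mis:
        donor = min(range(len(pred_obs)), key=lambda i: abs(pred_obs[i] - p))
        imputed.append(y_obs[donor])
    return imputed
```

Because imputations are drawn from the observed values themselves, PMM preserves skewness and bounds of the original distribution without any explicit transformation.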

  9. Mean level signal crossing rate for an arbitrary stochastic process

    DEFF Research Database (Denmark)

    Yura, Harold T.; Hanson, Steen Grüner

    2010-01-01

The issue of the mean signal level crossing rate for various probability density functions with primary relevance for optics is discussed based on a new analytical method. This method relies on a unique transformation that transforms the probability distribution under investigation into a normal probability distribution, for which the distribution of mean level crossings is known. In general, the analytical results for the mean level crossing rate are supported and confirmed by numerical simulations. In particular, we illustrate the present method by presenting analytic expressions for the mean level...
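For a sampled signal, the empirical mean level crossing rate is simply the count of sign changes of (signal - level) per unit time; for a stationary Gaussian process, Rice's formula gives the expected rate r(u) = r(0)·exp(-u²/(2σ²)), which is the known normal-case result the transformation above maps onto. A counting sketch:

```python
def level_crossings(samples, level):
    """Count crossings of `level` by a sampled signal: the number of sign
    changes of (sample - level) between consecutive samples."""
    above = [s > level for s in samples]
    return sum(1 for a, b in zip(above, above[1:]) if a != b)
```

Dividing the count by the record duration gives the empirical crossing rate to compare against an analytic expression.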

  10. Efficacy of nebulised L-adrenaline with 3% hypertonic saline versus normal saline in bronchiolitis

    Directory of Open Access Journals (Sweden)

    Shabnam Sharmin

    2016-08-01

Background: Bronchiolitis is one of the most common respiratory diseases requiring hospitalization. Nebulized epinephrine and salbutamol therapy has been used in different centres with varying results. Objective: The objective of the study was to compare the efficacy of nebulised adrenaline diluted with 3% hypertonic saline with nebulised adrenaline diluted with normal saline in bronchiolitis. Methods: Fifty three infants and young children with bronchiolitis, age ranging from 2 months to 2 years, presenting in the emergency department of Manikganj Sadar Hospital were enrolled in the study. After initial evaluation, patients were randomized to receive either nebulized adrenaline 1.5 ml (1.5 mg) diluted with 2 ml of 3% hypertonic saline (group I) or nebulised adrenaline 1.5 ml (1.5 mg) diluted with 2 ml of normal saline (group II). Patients were evaluated again 30 minutes after nebulization. Results: Twenty eight patients in group I (hypertonic saline) and twenty five in group II (normal saline) were included in the study. After nebulization, mean respiratory rate decreased from 63.7 to 48.1 (p<.01), mean clinical severity score decreased from 8.5 to 3.5 (p<.01) and mean oxygen saturation increased from 94.7% to 96.9% (p<.01) in group I. In group II, mean respiratory rate decreased from 62.4 to 47.4 (p<.01), mean clinical severity score decreased from 7.2 to 4.1 (p<.01) and mean oxygen saturation increased from 94.7% to 96.7% (p<.01). Mean respiratory rate decreased by 16 in group I versus 14.8 (p>.05) in group II, mean clinical severity score decreased by 4.6 in group I versus 3 (p<.05) in group II, and mean oxygen saturation increased by 2.2% and 1.9% in groups I and II respectively. The difference in reduction in clinical severity score was statistically significant, though the changes in respiratory rate and oxygen saturation were not statistically significant. Conclusion: The study concluded that both nebulised adrenaline diluted with 3% hypertonic saline and

  11. Blood and plasma volumes in normal west African dwarf sheep ...

    African Journals Online (AJOL)

    Blood and plasma volumes were determined using T-1824 in 36 normal adult West African Dwarf sheep. In the rams, dry ewes, pregnant ewes and lactating ewes, the mean values for the blood volume (ml/kg body weight) were 64.08 ± 6.11, 55.74 ± 9.31, 71.46 ± 6.46 and 147.12 ± 12.79 respectively, while the mean values ...

  12. Quantifying normal ankle joint volume: An anatomic study

    Directory of Open Access Journals (Sweden)

    Draeger Reid

    2009-01-01

Background: Many therapeutic and diagnostic modalities such as intraarticular injections, arthrography and ankle arthroscopy require introduction of fluid into the ankle joint. Little data are currently available in the literature regarding the maximal volume of normal, nonpathologic, human ankle joints. The purpose of this study was to measure the volume of normal human ankle joints. Materials and Methods: A fluoroscopically guided needle was passed into nine cadaveric adult ankle joints. The needle was connected to an intracompartmental pressure measurement device. A radiopaque dye was introduced into the joint in 2 mL boluses, while pressure measurements were recorded. Fluid was injected into the joint until three consecutive pressure measurements were similar, signifying a maximal joint volume. Results: The mean maximum ankle joint volume was 20.9 ± 4.9 mL (range, 16-30 mL). The mean ankle joint pressure at maximum volume was 142.2 ± 13.8 mm Hg (range, 122-166 mm Hg). Two of the nine samples showed evidence of fluid tracking into the synovial sheath of the flexor hallucis longus tendon. Conclusion: Maximal normal ankle joint volume was found to vary between 16-30 mL. This study ascertains the communication between the ankle joint and the flexor hallucis longus tendon sheath. Exceeding the maximal ankle joint volume suggested by this study during therapeutic injections, arthrography, or arthroscopy could potentially damage the joint.

  13. Glymphatic MRI in idiopathic normal pressure hydrocephalus.

    Science.gov (United States)

    Ringstad, Geir; Vatnehol, Svein Are Sirirud; Eide, Per Kristian

    2017-10-01

The glymphatic system has in previous studies been shown as fundamental to clearance of waste metabolites from the brain interstitial space, and is proposed to be instrumental in normal ageing and brain pathology such as Alzheimer's disease and brain trauma. Assessment of glymphatic function using magnetic resonance imaging with intrathecal contrast agent as a cerebrospinal fluid tracer has so far been limited to rodents. We aimed to image cerebrospinal fluid flow characteristics and glymphatic function in humans, and applied the methodology in a prospective study of 15 idiopathic normal pressure hydrocephalus patients (mean age 71.3 ± 8.1 years, three female and 12 male) and eight reference subjects (mean age 41.1 ± 13.0 years, six female and two male) with suspected cerebrospinal fluid leakage (seven) and intracranial cyst (one). The imaging protocol included T1-weighted magnetic resonance imaging with equal sequence parameters before and at multiple time points through 24 h after intrathecal injection of the contrast agent gadobutrol at the lumbar level. All study subjects were kept in the supine position between examinations during the first day. Gadobutrol enhancement was measured at all imaging time points from regions of interest placed at predefined locations in brain parenchyma, the subarachnoid and intraventricular space, and inside the sagittal sinus. Parameters demonstrating gadobutrol enhancement and clearance in different locations were compared between idiopathic normal pressure hydrocephalus and reference subjects. A characteristic flow pattern in idiopathic normal pressure hydrocephalus was ventricular reflux of gadobutrol from the subarachnoid space followed by transependymal gadobutrol migration. At the brain surfaces, gadobutrol propagated antegradely along large leptomeningeal arteries in all study subjects, and preceded glymphatic enhancement in adjacent brain tissue, indicating a pivotal role of intracranial pulsations for glymphatic function. In

  14. Evaluation of 14 winter bread wheat genotypes in normal irrigation ...

    African Journals Online (AJOL)

    Evaluation of 14 winter bread wheat genotypes in normal irrigation and stress conditions after anthesis stage. ... African Journal of Biotechnology ... Using biplot graphic method, comparison of indices amounts and mean rating of indices for ...

  15. Fractal Dimension Of CT Images Of Normal Parotid Glands

    International Nuclear Information System (INIS)

    Lee, Sang Jin; Heo, Min Suk; You, Dong Soo

    1999-01-01

This study investigated age and sex differences in the fractal dimension of normal parotid glands in digitized CT images. Six groups, comprising 42 men and women in their 20s, 40s, and 60s and over, were selected; each group contained seven people of the same sex. The normal parotid CT images were digitized, and their fractal dimensions were calculated using the Scion Image PC program. The mean fractal dimension in males was 1.7292 (+/-0.0588) and 1.6329 (+/-0.0425) in females. The mean fractal dimension was 1.7617 in young males, 1.7328 in middle-aged males, and 1.6933 in old males; in females it was 1.6318 in the young, 1.6365 in the middle-aged, and 1.6303 in the old. There was no statistical difference in fractal dimension between the left and right parotid glands of the same subject (p>0.05). Fractal dimensions in males were decreased in the older groups (p<0.05), whereas those in females were not (p>0.05). The fractal dimension of parotid glands in digitized CT images will be useful for evaluating age and sex differences.
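The fractal dimension of a segmented structure in a digitized image is typically estimated by box counting: count the ε-sized grid boxes touched by the structure and fit the slope of log N(ε) against log(1/ε). A self-contained sketch (the record does not state the exact algorithm used by Scion Image):

```python
import math

def box_counting_dimension(points, box_sizes):
    """Estimate fractal dimension as the least-squares slope of log N(eps)
    versus log(1/eps), where N(eps) is the number of eps-sized grid boxes
    containing at least one point of the structure."""
    data = []
    for eps in box_sizes:
        boxes = {(int(x // eps), int(y // eps)) for x, y in points}
        data.append((math.log(1.0 / eps), math.log(len(boxes))))
    n = len(data)
    mx = sum(a for a, _ in data) / n
    my = sum(b for _, b in data) / n
    return (sum((a - mx) * (b - my) for a, b in data)
            / sum((a - mx) ** 2 for a, _ in data))
```

A filled 64×64 square of grid points yields a dimension of 2, as expected for a plane region; gland outlines give non-integer values between 1 and 2 like those reported above.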

  16. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    International Nuclear Information System (INIS)

    Tyson, Jon

    2009-01-01

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  17. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of a significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a normal-dose high diagnostic CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. 
The gain by the use
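The core idea of ndiNLM (here a deliberately simplified 1-D sketch, not the authors' implementation) is that patch-similarity weights are computed from the co-registered previous normal-dose image and then used to average the current low-dose image:

```python
import math

def prior_guided_nlm(noisy, prior, half_patch=1, half_search=3, h=10.0):
    """1-D sketch of nonlocal means where similarity weights come from
    patches of a co-registered prior (normal-dose) image and are applied
    to average the current noisy (low-dose) image."""
    n = len(noisy)

    def patch(img, i):
        # Patch around i with edge clamping.
        return [img[min(max(i + k, 0), n - 1)]
                for k in range(-half_patch, half_patch + 1)]

    out = []
    for i in range(n):
        pi = patch(prior, i)
        wsum, acc = 0.0, 0.0
        for j in range(max(0, i - half_search), min(n, i + half_search + 1)):
            pj = patch(prior, j)
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj)) / len(pi)
            w = math.exp(-d2 / (h * h))  # smoothing parameter h
            wsum += w
            acc += w * noisy[j]
        out.append(acc / wsum)
    return out
```

Because the weights depend on the low-noise prior rather than the noisy data, edges present in the prior are preserved while noise in the low-dose image is averaged away; the actual ndiNLM additionally adapts h from the noise relationship between the two scanning protocols.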

  18. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    Directory of Open Access Journals (Sweden)

    Eckhard Limpert

BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard way to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-normal) approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ ("times-divide"), and corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation, s*, in the form mean* x/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially improve both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
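The recommended summary mean* x/ s* is computed from the data on the log scale; a minimal sketch:

```python
import math

def multiplicative_summary(xs):
    """Geometric mean (mean*) and multiplicative standard deviation (s*)
    of positive data, with the 'mean* x/ s*' interval [mean*/s*, mean*·s*],
    the multiplicative analogue of mean ± SD."""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    m = sum(logs) / n
    s = (sum((v - m) ** 2 for v in logs) / (n - 1)) ** 0.5
    gmean, smult = math.exp(m), math.exp(s)
    return gmean, smult, (gmean / smult, gmean * smult)
```

For the data [1, 10, 100] this gives mean* = 10 and s* = 10, so the mean* x/ s* interval is [1, 100]: symmetric on the log scale, skewed on the original scale, exactly as the authors advocate.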

  19. The clinical significance of normal mammograms and normal sonograms in patients with palpable abnormalities of the breast

    International Nuclear Information System (INIS)

    Lee, Jin Hwa; Yoon, Seong Kuk; Choi, Sun Seob; Nam, Kyung Jin; Cho, Se Heon; Kim, Dae Cheol; Kim, Jung Il; Kim, Eun Kyung

    2006-01-01

    We wanted to evaluate the clinical significance of normal mammograms and normal sonograms in patients with palpable abnormalities of the breast. From Apr 2003 to Feb 2005, 107 patients with 113 palpable abnormalities who had combined normal sonographic and normal mammographic findings were retrospectively studied. The evaluated parameters included age of the patients, the clinical referrals, the distribution of the locations of the palpable abnormalities, whether there was a past surgical history, the mammographic densities and the sonographic echo patterns (purely hyperechoic fibrous tissue, mixed fibroglandular breast tissue, predominantly isoechoic glandular tissue and isoechoic subcutaneous fat tissue) at the sites of clinical concern, whether there was a change in imaging and/or the physical examination results at follow-up, and whether there were biopsy results. This study period was chosen to allow a follow-up period of at least 12 months. The patients' ages ranged from 22 to 66 years (mean age: 48.8 years), and 62 (58%) of the 107 patients were between 41 and 50 years old. The most common location of the palpable abnormalities was the upper outer portion of the breast (45%), and most of the mammographic densities were dense patterns (BI-RADS Type 3 or 4: 91%). Our cases showed a similar distribution for all the types of sonographic echo patterns. Twenty-three patients underwent biopsy; all the biopsy specimens were benign. For the 84 patients with 90 palpable abnormalities who were followed, there was no interval development of breast cancer in the areas of clinical concern. Our results suggest that we can follow up and prevent unnecessary biopsies in women with palpable abnormalities when both the mammography and ultrasonography show normal tissue, but this study was limited by its small sample size. Therefore, a larger study will be needed to better define the negative predictive value of combined normal sonographic and mammographic findings.

  20. Use of scintigraphy for the determination of mucociliary clearance rates in normal, sedated, diseased and exercised horses.

    Science.gov (United States)

    Willoughby, R A; Ecker, G L; McKee, S L; Riddolls, L J

    1991-01-01

    Mucociliary clearance rates from the trachea were determined in normal, sedated, diseased and exercised horses from scintigraphs obtained after an injection of technetium-99m sulphide colloid into the tracheal lumen. The group mean tracheal clearance rate of eight clinically normal horses during 42 trials was 2.06 +/- 0.38 cm/min. Significant between-horse differences were found (p less than 0.05). When six and seven of these horses were given xylazine and detomidine hydrochloride, respectively, mean group tracheal clearance rates dropped significantly (p less than 0.05). The decreases from each normal horse's mean tracheal clearance rate ranged from 18 to 54%. There did not appear to be a difference between the tracheal clearance rates (TCRs) of the normal horses and those with chronic respiratory disease. Postexercise evaluations were not significantly different from the pre-exercise TCRs in three clinically normal horses and three horses with chronic obstructive pulmonary disease (p greater than 0.05). This minimally invasive scintigraphic technique for determining TCRs has proved to be useful and reliable. PMID:1790485

  1. Suction caissons subjected to monotonic combined loading

    OpenAIRE

    Penzes, P.; Jensen, M.R.; Zania, Varvara

    2016-01-01

    Suction caissons are being increasingly used as offshore foundation solutions in shallow and intermediate water depths. The convenient installation method through the application of suction has rendered this type of foundation an attractive alternative to the more traditional monopile foundation for offshore wind turbines. The combined loading typically imposed on a suction caisson has led to the estimation of their bearing capacity by means of 3D failure envelopes. This study aims to analyse...

  2. A comparison with result of normalized image to different template image on statistical parametric mapping of ADHD children patients

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Dong Ho [Kyonggi University, Suwon (Korea, Republic of); Park, Soung Ock; Kwon, Soo Il [Dongnam Health College, Suwon (Korea, Republic of); Joh, Chol Woo; Yoon, Seok Nam [Medical College, Ajou University, Suwon (Korea, Republic of)

    2003-06-15

    We studied a group of 64 children with ADHD (4-15 y, mean age 8 ± 2.6 y, M/F 52/12) and a normal group of 12 children (6-7 y, mean age 9.4 ± 3.4 y, M/F 8/4), whose brain images were used to analyse blood flow differences between the normal and ADHD groups. For the analysis of childhood ADHD, we used the mean brain images of the 12 children to make a template image for the SPM99 program. For increases in blood flow (P < 0.05), images normalized to the template image supplied with SPM99 showed a significant cluster in the inter-hemispheric region and the occipital lobe, whereas images normalized to the children's template image showed clusters in the inter-hemispheric region and the parietal lobe.

  3. A comparison with result of normalized image to different template image on statistical parametric mapping of ADHD children patients

    International Nuclear Information System (INIS)

    Shin, Dong Ho; Park, Soung Ock; Kwon, Soo Il; Joh, Chol Woo; Yoon, Seok Nam

    2003-01-01

    We studied a group of 64 children with ADHD (4-15 y, mean age 8 ± 2.6 y, M/F 52/12) and a normal group of 12 children (6-7 y, mean age 9.4 ± 3.4 y, M/F 8/4), whose brain images were used to analyse blood flow differences between the normal and ADHD groups. For the analysis of childhood ADHD, we used the mean brain images of the 12 children to make a template image for the SPM99 program. For increases in blood flow (P < 0.05), images normalized to the template image supplied with SPM99 showed a significant cluster in the inter-hemispheric region and the occipital lobe, whereas images normalized to the children's template image showed clusters in the inter-hemispheric region and the parietal lobe.

  4. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)
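    The equivalent uniform dose concept this abstract links to is commonly written as the generalized EUD, EUD = (sum_i v_i d_i^a)^(1/a), where a single exponent captures the tissue-specific volume effect. A minimal sketch (the function name, parameter names, and example doses are illustrative, not from the text):

```python
def generalized_eud(doses, volumes, a):
    # Generalized equivalent uniform dose of a dose distribution:
    # doses[i] is the dose (Gy) delivered to fractional volume volumes[i]
    # (volumes should sum to 1). A large positive exponent 'a' mimics a
    # serially responding tissue (EUD driven by hot spots); 'a' near 1
    # mimics a parallel tissue (EUD driven by the mean dose).
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# Same inhomogeneous distribution, two tissue types:
serial_like = generalized_eud([40.0, 60.0], [0.5, 0.5], a=20)   # near the maximum dose
parallel_like = generalized_eud([40.0, 60.0], [0.5, 0.5], a=1)  # the mean dose, 50 Gy
```

    Varying a single exponent like this is one way such models give "efficient control over the optimum dose distribution" without a long list of dose-volume constraints.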

  5. Normalization in Lie algebras via mould calculus and applications

    Science.gov (United States)

    Paul, Thierry; Sauzin, David

    2017-11-01

    We establish Écalle's mould calculus in an abstract Lie-theoretic setting and use it to solve a normalization problem, which covers several formal normal form problems in the theory of dynamical systems. The mould formalism allows us to reduce the Lie-theoretic problem to a mould equation, the solutions of which are remarkably explicit and can be fully described by means of a gauge transformation group. The dynamical applications include the construction of Poincaré-Dulac formal normal forms for a vector field around an equilibrium point, a formal infinite-order multiphase averaging procedure for vector fields with fast angular variables (Hamiltonian or not), or the construction of Birkhoff normal forms both in classical and quantum situations. As a by-product we obtain, in the case of harmonic oscillators, the convergence of the quantum Birkhoff form to the classical one, without any Diophantine hypothesis on the frequencies of the unperturbed Hamiltonians.

  6. Effect of nifedipine on gastric emptying in normal subjects

    Energy Technology Data Exchange (ETDEWEB)

    Traube, M.; Lange, R.C.; McAllister, R.G.; McCallum, R.W.

    1985-05-01

    Nifedipine (N) inhibits calcium entry into smooth muscle cells and relaxes esophageal smooth muscle. The authors studied N's effect on gastric emptying of liquids and solids. Ten normal subjects underwent radionuclide (In-111-DTPA in water and Tc-99m-sulfur colloid tagged to chicken liver) emptying tests with and without 30 mg N given orally 20 min prior to meal ingestion. Peak plasma N levels were either 30 or 60 min after drug dosing and showed a 3-fold variation (low 145 ng/ml, high 434 ng/ml). Both mean N levels and integral concentration time values were twice as high as those obtained after 30 mg sublingual dosing in normals previously studied in our lab. The authors conclude that plasma N levels which are associated with significant esophageal motility effects do not change gastric emptying in normal subjects. The data also show that N levels are greater after oral than sublingual dosing of 30 mg in normal subjects.

  7. Effect of nifedipine on gastric emptying in normal subjects

    International Nuclear Information System (INIS)

    Traube, M.; Lange, R.C.; McAllister, R.G.; McCallum, R.W.

    1985-01-01

    Nifedipine (N) inhibits calcium entry into smooth muscle cells and relaxes esophageal smooth muscle. The authors studied N's effect on gastric emptying of liquids and solids. Ten normal subjects underwent radionuclide (In-111-DTPA in water and Tc-99m-sulfur colloid tagged to chicken liver) emptying tests with and without 30 mg N given orally 20 min prior to meal ingestion. Peak plasma N levels were either 30 or 60 min after drug dosing and showed a 3-fold variation (low 145 ng/ml, high 434 ng/ml). Both mean N levels and integral concentration time values were twice as high as those obtained after 30 mg sublingual dosing in normals previously studied in our lab. The authors conclude that plasma N levels which are associated with significant esophageal motility effects do not change gastric emptying in normal subjects. The data also show that N levels are greater after oral than sublingual dosing of 30 mg in normal subjects.

  8. Effect of metoclopramide on normal and delayed gastric emptying in gastroesophageal reflux patients

    Energy Technology Data Exchange (ETDEWEB)

    Fink, S.M.; Lange, R.C.; McCallum, R.W.

    1983-12-01

    Gastric emptying has an important role in the pathophysiology of gastroesophageal reflux disease. The effect of metoclopramide, a gastric prokinetic agent, in gastroesophageal reflux patients with normal as well as delayed emptying was investigated. Twenty-six patients with subjective and objective evidence of gastroesophageal reflux ingested an egg salad sandwich meal labeled with technetium-99m-DTPA for a baseline study, and then again on a separate day after receiving oral metoclopramide, 10 mg, 30 min prior to the test meal. The mean percent isotope remaining in the stomach after 90 min improved significantly from 70.3 ± 3.9% (SEM) to 55.2 ± 4.2% after metoclopramide. Fourteen (54%) had a basal emptying in the normal range of 34-69% retention of isotope at 90 min (mean ± 2 SD), while it was slow in 12 (46%). For those with delayed basal gastric emptying, the mean retention of 88.9 ± 2.9% at 90 min was significantly decreased by metoclopramide to 68.6 ± 6.1%. In those patients with a normal basal gastric emptying and a mean retention of 54.4 ± 2.3% at 90 min, there was also significant improvement (P less than 0.025) to 43.6 ± 3.6% after metoclopramide. These data indicate that metoclopramide increased gastric emptying in gastroesophageal reflux patients with normal as well as delayed gastric emptying. Therefore, on a patient-management level, a trial of metoclopramide is warranted in patients with gastroesophageal reflux disease and is not limited by the gastric emptying status of the patient.

  9. Effect of metoclopramide on normal and delayed gastric emptying in gastroesophageal reflux patients

    International Nuclear Information System (INIS)

    Fink, S.M.; Lange, R.C.; McCallum, R.W.

    1983-01-01

    Gastric emptying has an important role in the pathophysiology of gastroesophageal reflux disease. The effect of metoclopramide, a gastric prokinetic agent, in gastroesophageal reflux patients with normal as well as delayed emptying was investigated. Twenty-six patients with subjective and objective evidence of gastroesophageal reflux ingested an egg salad sandwich meal labeled with technetium-99m-DTPA for a baseline study, and then again on a separate day after receiving oral metoclopramide, 10 mg, 30 min prior to the test meal. The mean percent isotope remaining in the stomach after 90 min improved significantly from 70.3 ± 3.9% (SEM) to 55.2 ± 4.2% after metoclopramide. Fourteen (54%) had a basal emptying in the normal range of 34-69% retention of isotope at 90 min (mean ± 2 SD), while it was slow in 12 (46%). For those with delayed basal gastric emptying, the mean retention of 88.9 ± 2.9% at 90 min was significantly decreased by metoclopramide to 68.6 ± 6.1%. In those patients with a normal basal gastric emptying and a mean retention of 54.4 ± 2.3% at 90 min, there was also significant improvement (P less than 0.025) to 43.6 ± 3.6% after metoclopramide. These data indicate that metoclopramide increased gastric emptying in gastroesophageal reflux patients with normal as well as delayed gastric emptying. Therefore, on a patient-management level, a trial of metoclopramide is warranted in patients with gastroesophageal reflux disease and is not limited by the gastric emptying status of the patient.

  10. Derivation of Conditions for the Normal Gain Behavior of Conical Horns

    Directory of Open Access Journals (Sweden)

    Chin Yeng Tan

    2007-01-01

    Full Text Available A monotonically increasing gain-versus-frequency pattern is in general expected to be a characteristic of aperture antennas, including the smooth-wall conical horn. While optimum-gain conical horns naturally exhibit this behavior, nonoptimum horns need to meet a certain criterion: a minimum axial length for a given aperture diameter or, alternatively, a maximum aperture diameter for a given axial length. In this paper, approximate expressions are derived to determine these parameters.

  11. Perilymphatic and endolymphatic pressure in the normal guinea pig

    NARCIS (Netherlands)

    Warmerdam, TJ; Schroder, FHH; Wit, HP; Albers, FWJ

    1999-01-01

    Perilymphatic and endolymphatic pressures were measured consecutively in the normal guinea pig using a WPI 900A micropressure system. The mean (n = 10) perilymphatic and endolymphatic pressures were 2.26 and 2.31 cm H2O, respectively. There was no statistically significant difference between the two.

  12. Effect of non-normality on test statistics for one-way independent groups designs.

    Science.gov (United States)

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
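    The robust estimators named in this abstract (trimmed means and Winsorized variances) are straightforward to compute. A minimal sketch with the conventional 20% trimming proportion; the function names and the trimming default are my choices, not from the study:

```python
import statistics

def trimmed_mean(xs, prop=0.2):
    # Trimmed mean: sort, drop the lowest and highest `prop` fraction of
    # observations, and average the rest. Robust to outliers and skew.
    xs = sorted(xs)
    g = int(prop * len(xs))
    return statistics.mean(xs[g:len(xs) - g])

def winsorized_variance(xs, prop=0.2):
    # Winsorized variance: instead of dropping the tails, replace them
    # with the nearest retained values, then take the ordinary sample
    # variance. Pairs with the trimmed mean in Welch-type (Yuen) tests.
    xs = sorted(xs)
    g = int(prop * len(xs))
    w = [xs[g]] * g + xs[g:len(xs) - g] + [xs[len(xs) - g - 1]] * g
    return statistics.variance(w)
```

    For example, `trimmed_mean([1, 2, 3, 4, 100])` discards 1 and 100 and returns 3, whereas the ordinary mean (22) would be dominated by the single extreme value.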

  13. Normal-range verbal-declarative memory in schizophrenia.

    Science.gov (United States)

    Heinrichs, R Walter; Parlar, Melissa; Pinnock, Farena

    2017-10-01

    Cognitive impairment is prevalent and related to functional outcome in schizophrenia, but a significant minority of the patient population overlaps with healthy controls on many performance measures, including declarative-verbal-memory tasks. In this study, we assessed the validity, clinical, and functional implications of normal-range (NR), verbal-declarative memory in schizophrenia. Performance normality was defined using normative data for 8 basic California Verbal Learning Test (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) recall and recognition trials. Schizophrenia patients (n = 155) and healthy control participants (n = 74) were assessed for performance normality, defined as scores within 1 SD of the normative mean on all 8 trials, and assigned to normal- and below-NR memory groups. NR schizophrenia patients (n = 26) and control participants (n = 51) did not differ in general verbal ability, on a reading-based estimate of premorbid ability, across all 8 CVLT-II-score comparisons or in terms of intrusion and false-positive errors and auditory working memory. NR memory patients did not differ from memory-impaired patients (n = 129) in symptom severity, and both patient groups were significantly and similarly disabled in terms of functional status in the community. These results confirm a subpopulation of schizophrenia patients with normal, verbal-declarative-memory performance and no evidence of decline from higher premorbid ability levels. However, NR patients did not experience less severe psychopathology, nor did they show advantage in community adjustment relative to impaired patients. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
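    The study's normal-range criterion (scores within 1 SD of the normative mean on all 8 CVLT-II trials) can be stated compactly in code. A minimal sketch; the function name and example z-scores are hypothetical:

```python
def is_normal_range(z_scores, k=1.0):
    # Classify performance as normal-range (NR) only if every trial's
    # z-score (relative to normative data) lies within k SD of the mean.
    # A single trial outside the band places the patient below NR.
    return all(abs(z) <= k for z in z_scores)

# Eight hypothetical trial z-scores for one patient:
nr = is_normal_range([0.2, -0.9, 0.5, 0.0, -0.3, 0.8, -0.1, 0.4])
```

    Note the conjunctive design: requiring all eight trials in range is stricter than requiring, say, the average to be in range, which is why the NR subgroup (n = 26 of 155) is relatively small.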

  14. Diurnal glycemic profile in obese and normal weight nondiabetic pregnant women.

    Science.gov (United States)

    Yogev, Yariv; Ben-Haroush, Avi; Chen, Rony; Rosenn, Barak; Hod, Moshe; Langer, Oded

    2004-09-01

    A paucity of data exists concerning the normal glycemic profile in nondiabetic pregnancies. Using a novel approach that provides continuous measurement of blood glucose, we sought to evaluate the ambulatory daily glycemic profile in the second half of pregnancy in nondiabetic women. Fifty-seven obese and normal weight nondiabetic subjects were evaluated for 72 consecutive hours with continuous glucose monitoring by measuring interstitial glucose levels in subcutaneous tissue every 5 minutes. Subjects were instructed not to modify their lifestyle or to follow any dietary restriction. For each woman, mean and fasting blood glucose values were determined; for each meal during the study period, the first 180 minutes were analyzed. For the study group, the fasting blood glucose level was 75 +/- 12 mg/dL; the mean blood glucose level was 83.7 +/- 18 mg/dL; the postprandial peak glucose value level was 110 +/- 16 mg/dL, and the time interval that was needed to reach peak postprandial glucose level was 70 +/- 13 minutes. A similar postprandial glycemic profile was obtained for breakfast, lunch, and dinner. Obese women were characterized by a significantly higher postprandial glucose peak value, increased 1- and 2-hour postprandial glucose levels, increased time interval for glucose peak, and significantly lower mean blood glucose during the night. No difference was found in fasting and mean blood glucose between obese and nonobese subjects. Glycemic profile characterization in both obese and normal weight nondiabetic subjects provides a measure for the desired level of glycemic control in pregnancy that is complicated with diabetes mellitus.

  15. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    Science.gov (United States)

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  16. Hardness of the subchondral bone of the patella in the normal state, in chondromalacia, and in osteoarthrosis.

    Science.gov (United States)

    Björkström, S; Goldie, I F

    1982-06-01

    The hardness of bone is its property of withstanding the impact of a penetrating agent. It has been found that articular degenerative changes in, for example, the tibia (knee) are combined with a decrease in the hardness of the subchondral bone. In this investigation the hardness of subchondral bone in chondromalacia and osteoarthrosis of the patella has been analysed and compared with normal subchondral bone. Using an indentation method originally described by Brinell the hardness of the subchondral bone was evaluated in 7 normal patellae, in 20 with chondromalacia and in 33 with osteoarthrosis. A microscopic and microradiographic study of the subchondral bone was carried out simultaneously. Hardness was lowest in the normal material. The mean hardness value beneath the degenerated cartilage differed only slightly from that of the normal material, but the variation of values was increased. The hardness in bone in the chondromalacia area was lower than the hardness in bone covered by surrounding normal cartilage. The mean hardness value in bone beneath normal parts of cartilage in specimens with chondromalacia was higher than the mean hardness value of the normal material. In the microscopic and microradiographic examination it became evident that there was a relationship between trabecular structure and subchondral bone hardness; high values: coarse and solid structure; low values: slender and less regular structure.

  17. [Hollywood movies and the production of meanings about nurses].

    Science.gov (United States)

    Rambor, Angélica; Kruse, Maria Henriqueta Luce

    2007-03-01

    This article discusses the production of meanings about nurses in Hollywood films. It aimed at studying how these films have depicted nurses and how meanings are built by the stories told. The body of analysis consisted of Hollywood films, and data were collected according to Rose. The following discourse analysis categories were identified: the nurse as the hospital normalizer, the nurse as a subordinate and low-rank professional, the nurse as villain or hero, and nursing as a feminine profession.

  18. Loss of normal anagen hair in pemphigus vulgaris.

    Science.gov (United States)

    Daneshpazhooh, M; Mahmoudi, H R; Rezakhani, S; Valikhani, M; Naraghi, Z S; Mohammadi, Y; Habibi, A; Chams-Davatchi, C

    2015-07-01

    Pemphigus vulgaris (PV) is a known cause of loss of 'normal' anagen hair; that is, shedding of intact anagen hairs covered by root sheaths. However, studies on this subject are limited. To investigate anagen hair shedding in patients with PV, and ascertain its association with disease severity. In total, 96 consecutive patients with PV (new patients or patients in relapse) who were admitted to the dermatology wards of a tertiary hospital were enrolled in this study. Demographic data, PV phenotype, disease severity and presence of scalp lesions were recorded. A group of 10-20 hairs were pulled gently from different areas of the scalp (lesional and nonlesional skin) in all patients, and anagen hairs were counted. Disease severity was graded according to Harman score. Anagen hair was obtained by pull test in 59 of the 96 patients (61.5%), of whom 2 had a normal scalp. The mean ± SD anagen hair count was 5.9 ± 7.6 (range 0-31). In univariate analysis, anagen hair loss was significantly associated with disease severity, and the anagen hair count was significantly higher in the severe (mean 6.83 ± 7.89) than in the moderate (mean 1.06 ± 1.94) subgroup. In multivariate analysis, anagen hair loss (OR = 1.16, 95% CI = 1.05-1.28) remained an independent predictor of disease severity. © 2015 British Association of Dermatologists.

  19. Spike persistence and normalization in benign epilepsy with centrotemporal spikes - Implications for management.

    Science.gov (United States)

    Kim, Hunmin; Kim, Soo Yeon; Lim, Byung Chan; Hwang, Hee; Chae, Jong-Hee; Choi, Jieun; Kim, Ki Joong; Dlugos, Dennis J

    2018-05-10

    This study was performed 1) to determine the timing of spike normalization in patients with benign epilepsy with centrotemporal spikes (BECTS); 2) to identify relationships between age of seizure onset, age of spike normalization, years of spike persistence and treatment; and 3) to assess final outcomes between groups of patients with or without spikes at the time of medication tapering. Retrospective analysis of BECTS patients confirmed by clinical data, including age of onset, seizure semiology and serial electroencephalography (EEG) from diagnosis to remission. Age at spike normalization, years of spike persistence, and time of treatment onset to spike normalization were assessed. Final seizure and EEG outcome were compared between the groups with or without spikes at the time of antiepileptic drug (AED) tapering. One hundred and thirty-four patients were included. Mean age at seizure onset was 7.52 ± 2.11 years. Mean age at spike normalization was 11.89 ± 2.11 (range: 6.3-16.8) years. Mean time of treatment onset to spike normalization was 4.11 ± 2.13 (range: 0.24-10.08) years. Younger age of seizure onset was correlated with longer duration of spike persistence (r = -0.41, p < 0.001). In treated patients, spikes persisted for 4.1 ± 1.95 years, compared with 2.9 ± 1.97 years in untreated patients. No patients had recurrent seizures after AEDs were discontinued, regardless of the presence/absence of spikes at the time of AED tapering. Years of spike persistence was longer in early-onset BECTS patients. Treatment with AEDs did not shorten years of spike persistence. Persistence of spikes at the time of treatment withdrawal was not associated with seizure recurrence. Copyright © 2018 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  20. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has been sometimes ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
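    Two common forms of the sample normalization this review discusses are constant-sum (total-intensity) scaling and median fold-change scaling against a reference profile. A minimal sketch of both, under the assumption of simple per-sample intensity vectors; the function names are mine:

```python
import statistics

def total_sum_normalize(sample):
    # Constant-sum normalization: scale a sample's metabolite intensities
    # so they sum to 1, removing differences in total sample amount.
    total = sum(sample)
    return [x / total for x in sample]

def median_fold_change_normalize(sample, reference):
    # Median fold-change (probabilistic-quotient-style) normalization:
    # divide the whole sample by the median ratio of its features to a
    # reference profile, which is more robust than the total sum when a
    # few metabolites change strongly between samples.
    ratios = [s / r for s, r in zip(sample, reference) if r > 0]
    factor = statistics.median(ratios)
    return [s / factor for s in sample]
```

    The median fold-change variant is often preferred when a biological event shifts a handful of metabolites dramatically, since those features would distort a simple total-sum factor but barely move the median ratio.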