Franklin, Elda
1981-01-01
Reviews studies on the etiology of monotonism, the monotone being that type of uncertain or inaccurate singer who cannot vocally match pitches and who has trouble accurately reproducing even a familiar song. Neurological factors (amusia, right brain abnormalities), age, and sex differences are considered. (Author/SJL)
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
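The windowed-regression idea can be sketched in a few lines. This is a hypothetical simplification for illustration only, not LoLinR's actual code or selection criteria (the package also weights fit diagnostics such as skewness and autocorrelation); here the "best" window is simply the most linear one by R².

```python
def ols_slope_r2(xs, ys):
    """Slope and R^2 of a simple least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    r2 = (sxy * sxy) / (sxx * syy) if syy > 0 else 1.0
    return slope, r2

def local_linear_rate(xs, ys, min_window=5):
    """Slope of the most linear contiguous window of the series."""
    best = None
    for i in range(len(xs)):
        for j in range(i + min_window, len(xs) + 1):
            slope, r2 = ols_slope_r2(xs[i:j], ys[i:j])
            if best is None or r2 > best[0]:
                best = (r2, slope)
    return best[1]

# toy series: a linear phase (true rate 2) followed by saturation
t = [float(i) for i in range(10)]
y = [2.0 * x for x in t[:6]] + [10.5, 10.8, 11.0, 11.1]
rate = local_linear_rate(t, y)  # recovers the slope of the linear phase
```

The exhaustive window search is O(n²) fits, which is affordable for the short respirometry-style time series the package targets.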
Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data
Directory of Open Access Journals (Sweden)
Xueqin Zhou
2017-01-01
Full Text Available This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function by the kernel method. A simulation study shows that the estimates perform well.
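As a generic illustration of the kernel method mentioned above, a plain Gaussian kernel density estimate can be written directly from its definition (this is the standard KDE, not the paper's specific estimator for the base-measure density; the sample and bandwidth below are made up):

```python
import math

def kde(data, x, bandwidth):
    """Gaussian kernel density estimate of the sample, evaluated at x."""
    total = 0.0
    for xi in data:
        u = (x - xi) / bandwidth
        total += math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return total / (len(data) * bandwidth)

sample = [0.0, 0.5, 1.0, 1.5, 2.0]
density = kde(sample, 1.0, bandwidth=0.5)
```

The bandwidth controls the bias-variance trade-off of the estimate; in practice it would be chosen by a rule of thumb or cross-validation.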
Estimation of a monotone percentile residual life function under random censorship.
Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo
2013-01-01
In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its √n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Feng Qi
2014-10-01
Full Text Available The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimates of the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.
Bayesian nonparametric estimation of hazard rate in monotone Aalen model
Czech Academy of Sciences Publication Activity Database
Timková, Jana
2014-01-01
Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf
Heckman, James J; Pinto, Rodrigo
2018-01-01
This paper defines and analyzes a new monotonicity condition for the identification of counterfactuals and treatment effects in unordered discrete choice models with multiple treatments, heterogeneous agents and discrete-valued instruments. Unordered monotonicity implies and is implied by additive separability of choice of treatment equations in terms of observed and unobserved variables. These results follow from properties of binary matrices developed in this paper. We investigate conditions under which unordered monotonicity arises as a consequence of choice behavior. We characterize IV estimators of counterfactuals as solutions to discrete mixture problems.
Asymptotic estimates and exponential stability for higher-order monotone difference equations
Directory of Open Access Journals (Sweden)
Pituk Mihály
2005-01-01
Full Text Available Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.
Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.
Du, Pang; Tang, Liansheng
2009-01-30
When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
Bornkamp, Björn; Ickstadt, Katja
2009-03-01
In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited both from a computational and mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
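The key structural idea, that any non-negatively weighted mixture of distribution functions is automatically non-decreasing, can be illustrated with logistic CDFs standing in for the paper's two-sided power distributions (the weights, locations and scales below are made up):

```python
import math

def monotone_curve(x, weights, locs, scales):
    """Weighted mixture of logistic CDFs; non-decreasing whenever all
    weights are non-negative."""
    return sum(w / (1.0 + math.exp(-(x - m) / s))
               for w, m, s in zip(weights, locs, scales))

# made-up mixture: two components with weights summing to 1,
# giving a monotone curve rising in two "waves"
ys = [monotone_curve(x / 10.0, [0.3, 0.7], [0.2, 0.6], [0.05, 0.1])
      for x in range(0, 11)]
```

Monotonicity is thus guaranteed by construction rather than enforced by constraints, which is what makes the mixture representation convenient for Bayesian sampling.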
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Karátson, J.
2017-01-01
Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords : finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub
Monotone piecewise bicubic interpolation
International Nuclear Information System (INIS)
Carlson, R.E.; Fritsch, F.N.
1985-01-01
In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
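The univariate 1980 algorithm that this paper extends survives in modern libraries as PCHIP. A minimal sketch of the slope-limiting construction (an illustrative re-derivation, not the authors' code): secant slopes are averaged into node tangents, then scaled back onto the Fritsch-Carlson monotonicity region α² + β² ≤ 9 so each cubic piece stays monotone.

```python
import math

def pchip_slopes(x, y):
    """Node tangents for a monotone piecewise-cubic Hermite interpolant."""
    n = len(x)
    d = [(y[k + 1] - y[k]) / (x[k + 1] - x[k]) for k in range(n - 1)]
    m = [d[0]] + [(d[k - 1] + d[k]) / 2.0 for k in range(1, n - 1)] + [d[-1]]
    for k in range(n - 1):
        if d[k] == 0.0:
            m[k] = m[k + 1] = 0.0
        else:
            a, b = m[k] / d[k], m[k + 1] / d[k]
            s = a * a + b * b
            if s > 9.0:                    # limit tangents to preserve monotonicity
                t = 3.0 / math.sqrt(s)
                m[k], m[k + 1] = t * a * d[k], t * b * d[k]
    return m

def pchip_eval(x, y, m, xq):
    """Evaluate the cubic Hermite interpolant at xq."""
    k = max(i for i in range(len(x) - 1) if x[i] <= xq)
    h = x[k + 1] - x[k]
    t = (xq - x[k]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[k] + h10 * h * m[k] + h01 * y[k + 1] + h11 * h * m[k + 1]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.1, 0.9, 1.0]      # monotone data with a steep middle segment
ms = pchip_slopes(xs, ys)
vals = [pchip_eval(xs, ys, ms, q / 10.0) for q in range(0, 31)]
```

Unlimited cubic splines would overshoot on this data; the limited tangents keep the evaluated curve non-decreasing everywhere.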
International Nuclear Information System (INIS)
Korshunov, A D
2003-01-01
Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years
Edit Distance to Monotonicity in Sliding Windows
DEFF Research Database (Denmark)
Chan, Ho-Leung; Lam, Tak-Wah; Lee, Lap Kei
2011-01-01
Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a ...
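Offline, the edit distance to monotonicity is the stream length minus the length of the longest non-decreasing subsequence, which is exactly the quantity the sliding-window streaming algorithms approximate. A batch version using the classic patience-sorting technique:

```python
import bisect

def edit_distance_to_monotonicity(items):
    """Minimum number of removals so the remaining items are
    non-decreasing: length minus longest non-decreasing subsequence."""
    tails = []  # tails[k]: smallest possible tail of a subsequence of length k+1
    for v in items:
        i = bisect.bisect_right(tails, v)  # bisect_right keeps equal values legal
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(items) - len(tails)

stream = [1, 5, 2, 3, 7, 4]
removals = edit_distance_to_monotonicity(stream)  # drop 5 and 7, leaving 1,2,3,4
```

This runs in O(n log n) time but stores the whole stream; the paper's contribution is doing the same estimation in sublinear space over a sliding window.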
Estimating the Heading Direction Using Normal Flow
1994-01-01
Extraction-garbled abstract; the recoverable fragments concern kinetic stabilization under the assumption that optic flow or correspondence is known with some uncertainty, and cite Faugeras and Maybank (1990), Koenderink and van Doorn (1975), Maybank (1985), and S. Maybank, "Motion from point matches: multiplicity of solutions," Int'l J. Computer Vision 4.
Normal estimation for pointcloud using GPU based sparse tensor voting
Liu , Ming; Pomerleau , François; Colas , Francis; Siegwart , Roland
2012-01-01
Normal estimation is the basis for most applications using pointclouds, such as segmentation. However, it remains a challenging problem with respect to computational complexity and observation noise. In this paper, we propose a normal estimation method for pointclouds using results from tensor voting. Compared with other approaches, we show that it has a smaller estimation error. Moreover, by varying the voting kernel size, we find it is a flexible approach for structure extraction...
Czech Academy of Sciences Publication Activity Database
Jeřábek, Emil
2012-01-01
Roč. 58, č. 3 (2012), s. 177-187 ISSN 0942-5616 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional support: RVO:67985840 Keywords : proof complexity * monotone sequent calculus Subject RIV: BA - General Mathematics Impact factor: 0.376, year: 2012 http://onlinelibrary.wiley.com/doi/10.1002/malq.201020071/full
Penalized Maximum Likelihood Estimation for univariate normal mixture distributions
International Nuclear Information System (INIS)
Ridolfi, A.; Idier, J.
2001-01-01
Due to singularities of the likelihood function, the maximum likelihood approach to estimating the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which remains explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
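A minimal sketch of the idea (an illustrative re-implementation, not the authors' exact algorithm): an inverted-gamma prior IG(a, b) on each variance turns the M-step variance update into a ratio with 2b added to the numerator and 2(a + 1) to the denominator, which keeps the estimates away from the zero-variance singularities of the plain likelihood.

```python
import math, random

def normal_pdf(x, mu, s2):
    """Density of N(mu, s2) at x."""
    return math.exp(-(x - mu) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

def penalized_em(data, iters=200, a=2.0, b=0.5):
    """EM for a two-component normal mixture with an IG(a, b) penalty
    on each variance (MAP-style M-step, variances stay positive)."""
    mu = [min(data), max(data)]
    s2 = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibilities of each component
        r = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], s2[k]) for k in (0, 1)]
            tot = p[0] + p[1]
            r.append([p[0] / tot, p[1] / tot])
        # M-step: weighted updates; the 2*b and 2*(a+1) penalty terms
        # bound each variance estimate away from zero
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(data)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
            sse = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, data))
            s2[k] = (sse + 2.0 * b) / (nk + 2.0 * (a + 1.0))
    return w, mu, s2

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(5.0, 1.0) for _ in range(100)]
weights, means, variances = penalized_em(data)
```

Even if one component is attracted to a single data point, the penalty floor on the variance prevents the likelihood from diverging, which is the property the paper demonstrates.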
Monotonicity and bounds on Bessel functions
Directory of Open Access Journals (Sweden)
Larry Landau
2000-07-01
Full Text Available I survey my recent results on monotonicity with respect to order of general Bessel functions, which follow from a new identity and lead to best-possible uniform bounds. Applications may be made to the "spreading of the wave packet" for a free quantum particle on a lattice and to estimates for perturbative expansions.
Percentile estimation using the normal and lognormal probability distribution
International Nuclear Information System (INIS)
Bement, T.R.
1980-01-01
Implicitly or explicitly percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution
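The two competing percentile estimates can be written down directly. Here the 95th percentile of a truly lognormal sample is estimated under both assumptions (illustrative simulated values, not the survey data); the mismatched normal assumption gives a visibly different answer, which is the consequence the Monte Carlo study quantifies.

```python
import math, random, statistics

Z95 = 1.6449  # approximate standard-normal 95th-percentile quantile

def percentile_normal(data, z=Z95):
    """95th percentile assuming the data themselves are normal."""
    return statistics.mean(data) + z * statistics.stdev(data)

def percentile_lognormal(data, z=Z95):
    """95th percentile assuming log(data) is normal."""
    logs = [math.log(x) for x in data]
    return math.exp(statistics.mean(logs) + z * statistics.stdev(logs))

random.seed(1)
sample = [math.exp(random.gauss(0.0, 0.5)) for _ in range(2000)]  # lognormal data
p_norm = percentile_normal(sample)      # wrong assumption for this sample
p_lnorm = percentile_lognormal(sample)  # matching assumption
```

For this simulation the true 95th percentile is exp(0.5 · 1.6449) ≈ 2.28; the lognormal-based estimate lands near it, while the normal-based estimate is biased low.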
Matching by Monotonic Tone Mapping.
Kovacs, Gyorgy
2018-06-01
In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative to conventional measures in problems where the possible tone mappings are close to monotonic.
BIMOND3, Monotone Bivariate Interpolation
International Nuclear Information System (INIS)
Fritsch, F.N.; Carlson, R.E.
2001-01-01
1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation
Directory of Open Access Journals (Sweden)
Namyong Kim
2016-06-01
Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized by the power of the input entropy, which is estimated recursively to reduce computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and faster convergence than the original MEE algorithm. At the same convergence speed, its steady-state MSE improvement is above 3 dB.
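The normalization device is the same one used in NLMS-style adaptive filters. A sketch with a one-tap filter and a recursively estimated input power (plain squared error stands in for the MEE cost, which this simplification omits; all parameter values are illustrative):

```python
import random

def adapt(xs, ds, mu=0.1, beta=0.9, eps=1e-8):
    """One-tap adaptive filter whose step size is normalized by a
    recursively estimated input power, as in NLMS."""
    w, power = 0.0, 0.0
    for x, d in zip(xs, ds):
        power = beta * power + (1.0 - beta) * x * x   # recursive power estimate
        e = d - w * x                                  # a priori error
        w += (mu / (power + eps)) * e * x              # normalized update
    return w

random.seed(3)
xs = [random.gauss(0.0, 1.0) for _ in range(500)]
ds = [2.0 * x for x in xs]          # unknown system: a pure gain of 2
w_hat = adapt(xs, ds)
```

The running power estimate replaces the per-sample norm computation of classic NLMS, which is the computational saving the abstract refers to.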
Optimal Monotone Drawings of Trees
He, Dayu; He, Xin
2016-01-01
A monotone drawing of a graph G is a straight-line drawing of G such that, for every pair of vertices u,w in G, there exists a path P_{uw} in G that is monotone in some direction l_{uw}. (Namely, the order of the orthogonal projections of the vertices of P_{uw} on l_{uw} is the same as the order in which they appear in P_{uw}.) The problem of finding monotone drawings for trees has been studied in several recent papers. The main focus is to reduce the size of the drawing. Currently, the smallest drawi...
Radiographic heart-volume estimation in normal cats
International Nuclear Information System (INIS)
Ahlberg, N.E.; Hansson, K.; Svensson, L.; Iwarsson, K.
1989-01-01
Heart volume mensuration was evaluated on conventional radiographs from eight normal cats in different body positions using computed tomography (CT). Heart volumes were calculated from orthogonal thoracic radiographs in ventral and dorsal recumbency and from radiographs exposed with a vertical X-ray beam in dorsal and lateral recumbency using the formula for an ellipsoid body. Heart volumes were also estimated with CT in ventral, dorsal, right lateral and left lateral recumbency. No differences between heart volumes from CT in ventral recumbency and those from CT in right and left lateral recumbency were seen. In dorsal recumbency, however, significantly lower heart volumes were obtained. Heart volumes from CT in ventral recumbency were similar to those from radiographs in ventral and dorsal recumbency and dorsal/left lateral recumbency. Close correlation was also demonstrated between heart volumes from radiographs in dorsal/left lateral recumbency and body weights of the eight cats
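The "formula for an ellipsoid body" referred to is presumably the standard ellipsoid volume in terms of three orthogonal diameters, V = (π/6)·L·W·D. A trivial sketch with made-up dimensions (not measurements from the study):

```python
import math

def ellipsoid_volume(length_cm, width_cm, depth_cm):
    """Ellipsoid volume from three orthogonal diameters: (pi/6) * L * W * D."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

v = ellipsoid_volume(6.0, 4.0, 4.0)  # hypothetical feline heart diameters, in cm
```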
Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.
Directory of Open Access Journals (Sweden)
Umair Khalil
Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, Cook's test statistics are oversized. Researchers have found that using conventional tests is dangerous, though the best performer among them is an HCCME. The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices has been derived, and results are reported for various sample sizes in which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained through nonlinear least-squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution for various sample sizes.
Monotonicity of social welfare optima
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Østerdal, Lars Peter Raahave
2010-01-01
This paper considers the problem of maximizing social welfare subject to participation constraints. It is shown that for an income allocation method that maximizes a social welfare function there is a monotonic relationship between the incomes allocated to individual agents in a given coalition...
Sass, D. A.; Schmitt, T. A.; Walker, C. M.
2008-01-01
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
International Nuclear Information System (INIS)
Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow
2013-01-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
Generalized monotone operators in Banach spaces
International Nuclear Information System (INIS)
Nanda, S.
1988-07-01
The concept of F-monotonicity was first introduced by Kato and this generalizes the notion of monotonicity introduced by Minty. The purpose of this paper is to define various types of F-monotonicities and discuss the relationships among them. (author). 6 refs
Learning normalized inputs for iterative estimation in medical image segmentation.
Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel
2018-02-01
In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of a FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark, when compared to other 2D methods. We improve segmentation results on CT images of liver lesions, when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline on a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantisation of monotonic twist maps
International Nuclear Information System (INIS)
Boasman, P.A.; Smilansky, U.
1993-08-01
Using an approach suggested by Moser, classical Hamiltonians are generated that provide an interpolating flow to the stroboscopic motion of maps with a monotonic twist condition. The quantum properties of these Hamiltonians are then studied in analogy with recent work on the semiclassical quantization of systems based on Poincaré surfaces of section. For the generalized standard map, the correspondence with the usual classical and quantum results is shown, and the advantages of the quantum Moser Hamiltonian demonstrated. The same approach is then applied to the free motion of a particle on a 2-torus, and to the circle billiard. A natural quantization condition based on the eigenphases of the unitary time-development operator is applied, leaving the exact eigenvalues of the torus, but only the semiclassical eigenvalues for the billiard; an explanation for this failure is proposed. It is also seen how iterating the classical map commutes with the quantization. (authors)
International Nuclear Information System (INIS)
Tyson, Jon
2009-01-01
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
Statistical analysis of sediment toxicity by additive monotone regression splines
Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.
2002-01-01
Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this
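A related but simpler monotone least-squares tool, isotonic regression by the pool-adjacent-violators algorithm (PAVA), shows the same phenomenon of thresholds emerging automatically as flat steps in the fit (this is an illustration of the idea, not the paper's spline method):

```python
def isotonic_fit(y):
    """Non-decreasing least-squares fit to y (PAVA, unit weights)."""
    merged = []  # stack of [block mean, block size]
    for v in y:
        merged.append([float(v), 1])
        # pool adjacent blocks while they violate monotonicity
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, n2 = merged.pop()
            m1, n1 = merged.pop()
            merged.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    out = []
    for mean, size in merged:
        out.extend([mean] * size)
    return out

fit = isotonic_fit([1.0, 3.0, 2.0, 4.0, 4.0, 6.0])  # the 3, 2 pair pools to 2.5
```

Flat runs in the fitted values mark dose ranges with no detectable effect change, which is how threshold-like behavior falls out of a monotone fit without being specified in advance.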
Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation
Directory of Open Access Journals (Sweden)
Jaehoon Jung
2016-01-01
Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.
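Once the camera is calibrated, the height-from-image step reduces to similar triangles in the pinhole model. A toy version with made-up calibration values (not the paper's method, which additionally handles the automatic calibration and error correction):

```python
def estimate_height(pixel_height, focal_px, distance_m):
    """Pinhole similar triangles: metric height = pixel height * Z / f,
    for an object at distance Z from a camera of focal length f pixels."""
    return pixel_height * distance_m / focal_px

# hypothetical numbers: a person spanning 350 px, 5 m from a 1000 px-focal camera
h = estimate_height(pixel_height=350, focal_px=1000.0, distance_m=5.0)
```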
On the size of monotone span programs
Nikov, V.S.; Nikova, S.I.; Preneel, B.; Blundo, C.; Cimato, S.
2005-01-01
Span programs provide a linear algebraic model of computation. Monotone span programs (MSP) correspond to linear secret sharing schemes. This paper studies the properties of monotone span programs related to their size. Using the results of van Dijk (connecting codes and MSPs) and a construction for
Estimating structural equation models with non-normal variables by using transformations
Montfort, van K.; Mooijaart, A.; Meijerink, F.
2009-01-01
We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample
Asymptotic normality of kernel estimator of ψ-regression function for functional ergodic data
Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader
2016-01-01
In this paper we consider the problem of the estimation of the $\\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, together with the statistical correlation factor R2 and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
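The baseline normalization methods named in the abstract above can be sketched as follows; the CCSNM itself is not specified in the abstract, so only the standard comparison methods are shown (a minimal illustration, not the study's implementation):

```python
import numpy as np

def min_max(x):
    # Rescale values linearly to the interval [0, 1].
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    # Center to zero mean and scale to unit variance.
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    # Divide by the smallest power of 10 exceeding max |x|.
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / 10**j

x = np.array([120.0, 85.0, 240.0, 60.0])
print(min_max(x))            # values in [0, 1]
print(round(z_score(x).mean(), 10))  # 0.0
print(decimal_scaling(x))    # |values| < 1
```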
Energy Technology Data Exchange (ETDEWEB)
Kim, Woo Hyoung; Kim, Chang Guhn; Kim, Dae Weung [Wonkwang Univ. School of Medicine, Iksan (Korea, Republic of)
2012-09-15
Standardized uptake values (SUVs) normalized by lean body mass (LBM) determined by CT were compared with those normalized by LBM estimated using predictive equations (PEs) in normal liver, spleen, and aorta using 18F-FDG PET/CT. Fluorine-18 fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) was conducted on 453 patients. LBM determined by CT was defined in 3 ways (LBM-CT1-3). Five PEs were used for comparison (LBM-PE1-5). Tissue SUV normalized by LBM (SUL) was calculated using LBM from each method (SUL-CT1-3, SUL-PE1-5). Agreement between methods was assessed by Bland-Altman analysis. Percentage difference and percentage error were also calculated. For all liver SUL-CTs vs. liver SUL-PEs except liver SUL-PE3, the range of biases, SDs of percentage difference, and percentage errors were -0.17-0.24 SUL, 6.15-10.17%, and 25.07-38.91%, respectively. For liver SUL-CTs vs. liver SUL-PE3, the corresponding figures were 0.47-0.69 SUL, 10.90-11.25%, and 50.85-51.55%, respectively, showing the largest percentage errors and positive biases. Irrespective of the magnitudes of the biases, large percentage errors of 25.07-51.55% were observed between liver SUL-CT1-3 and liver SUL-PE1-5. The results of the spleen and aorta SUL-CT and SUL-PE comparisons were almost identical to those for the liver. The present study demonstrated substantial errors in individual SUL-PEs compared with SUL-CTs as a reference value. Normalization of SUV by LBM determined by CT rather than by PEs may be a useful approach to reduce errors in individual SUL-PEs.
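The quantities compared in this record can be sketched briefly. The study's five specific predictive equations are not given in the abstract, so the James formula is used here purely as an illustrative LBM predictor:

```python
import numpy as np

def lbm_james(weight_kg, height_cm, male=True):
    # James predictive equation for lean body mass (an illustrative
    # choice; the study's five PEs are not specified in the abstract).
    h, w = height_cm, weight_kg
    if male:
        return 1.10 * w - 128.0 * (w / h) ** 2
    return 1.07 * w - 148.0 * (w / h) ** 2

def sul(suv_bw, weight_kg, lbm_kg):
    # SUV is conventionally normalized by body weight; rescale to LBM.
    return suv_bw * lbm_kg / weight_kg

def pct_difference(a, b):
    # Bland-Altman style percentage difference between paired methods.
    return 100.0 * (a - b) / ((a + b) / 2.0)

lbm = lbm_james(80.0, 175.0, male=True)
print(round(lbm, 1))               # 61.3
print(round(sul(2.5, 80.0, lbm), 2))  # 1.91
```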
Doss, Hani; Tan, Aixin
2014-09-01
In the classical biased sampling problem, we have k densities π_1(·), …, π_k(·), each known up to a normalizing constant, i.e. for l = 1, …, k, π_l(·) = ν_l(·)/m_l, where ν_l(·) is a known function and m_l is an unknown constant. For each l, we have an iid sample from π_l, and the problem is to estimate the ratios m_l/m_s for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the π_l's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
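In the simplest two-density iid case described above, the ratio of normalizing constants follows from the identity E_{π_2}[ν_1(X)/ν_2(X)] = m_1/m_2; a minimal Monte Carlo sketch (Gaussian example chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unnormalized densities: nu1 is an unnormalized N(0, 1) and
# nu2 an unnormalized N(0, 4), so m1 = sqrt(2*pi), m2 = sqrt(8*pi)
# and the true ratio m1/m2 is 0.5.
nu1 = lambda x: np.exp(-x**2 / 2.0)
nu2 = lambda x: np.exp(-x**2 / 8.0)

# iid draws from pi2 = N(0, 4)
x = rng.normal(0.0, 2.0, size=200_000)

# E_{pi2}[nu1(X)/nu2(X)] = m1/m2
ratio_hat = np.mean(nu1(x) / nu2(x))
print(round(ratio_hat, 2))  # ≈ 0.5
```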
Estimation of value at risk and conditional value at risk using normal mixture distributions model
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
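VaR and CVaR under a two-component normal mixture can be computed by simulation, as sketched below. The mixture parameters here are illustrative assumptions, not the paper's fitted values for the FBMKLCI:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-component normal mixture for monthly returns:
# a calm component and a rarer, volatile component (assumed values).
w, mu, sigma = [0.8, 0.2], [0.01, -0.02], [0.03, 0.08]

# Simulate returns from the mixture.
comp = rng.choice(2, size=500_000, p=w)
r = rng.normal(np.take(mu, comp), np.take(sigma, comp))

alpha = 0.05
var = -np.quantile(r, alpha)   # 95% value at risk (loss convention)
cvar = -r[r <= -var].mean()    # expected loss beyond the VaR level
print(var < cvar)              # True: CVaR always exceeds VaR
```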
A New Family of Consistent and Asymptotically-Normal Estimators for the Extremal Index
Directory of Open Access Journals (Sweden)
Jose Olmo
2015-08-01
Full Text Available The extremal index (θ) is the key parameter for extending extreme value theory results from i.i.d. to stationary sequences. One important property of this parameter is that its inverse determines the degree of clustering in the extremes. This article introduces a novel interpretation of the extremal index as a limiting probability characterized by two Poisson processes, and a simple family of estimators derived from this new characterization. Unlike most estimators for θ in the literature, this estimator is consistent, asymptotically normal and very stable across partitions of the sample. Further, we show in an extensive simulation study that this estimator outperforms the logs, blocks and runs estimation methods in finite samples. Finally, we apply this new estimator to test for clustering of extremes in monthly time series of unemployment growth and inflation rates and conclude that runs of large unemployment rates are more prolonged than periods of high inflation.
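One of the benchmark methods mentioned above, the classical runs estimator of the extremal index, can be sketched in a few lines (the article's own Poisson-process estimator is not reproduced here). For i.i.d. data there is no clustering, so θ should be close to 1:

```python
import numpy as np

def runs_estimator(x, u, r=1):
    # Runs estimator of the extremal index: the fraction of exceedances
    # of threshold u that are followed by at least r non-exceedances
    # (i.e. that terminate a cluster of extremes).
    exc = x > u
    n = len(x)
    numer = 0
    for i in range(n - r):
        if exc[i] and not exc[i + 1:i + 1 + r].any():
            numer += 1
    denom = exc[:n - r].sum()
    return numer / denom if denom else np.nan

rng = np.random.default_rng(2)
iid = rng.normal(size=100_000)
theta = runs_estimator(iid, np.quantile(iid, 0.98), r=1)
print(round(theta, 1))  # ≈ 1 for iid data (no clustering)
```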
Directory of Open Access Journals (Sweden)
Chakkrid Klin-eam
2009-01-01
Full Text Available We prove strong convergence theorems for finding a common element of the zero point set of a maximal monotone operator and the fixed point set of a hemirelatively nonexpansive mapping in a Banach space by using monotone hybrid iteration method. By using these results, we obtain new convergence results for resolvents of maximal monotone operators and hemirelatively nonexpansive mappings in a Banach space.
Monte Carlo comparison of four normality tests using different entropy estimates
Czech Academy of Sciences Publication Activity Database
Esteban, M. D.; Castellanos, M. E.; Morales, D.; Vajda, Igor
2001-01-01
Roč. 30, č. 4 (2001), s. 761-785 ISSN 0361-0918 R&D Projects: GA ČR GA102/99/1137 Institutional research plan: CEZ:AV0Z1075907 Keywords : test of normality * entropy test and entropy estimator * table of critical values Subject RIV: BD - Theory of Information Impact factor: 0.153, year: 2001
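The entropy-based normality testing referenced in the record above rests on comparing an entropy estimate against the normal maximum-entropy bound; a minimal sketch using the classical Vasicek spacing estimator (the paper's four specific tests are not reproduced here):

```python
import numpy as np

def vasicek_entropy(x, m=None):
    # Vasicek spacing-based estimator of differential entropy:
    # mean of log(n/(2m) * (X_(i+m) - X_(i-m))) over the order statistics.
    n = len(x)
    if m is None:
        m = int(np.sqrt(n)) // 2 + 1
    xs = np.sort(x)
    upper = xs[np.minimum(np.arange(n) + m, n - 1)]  # X_(i+m), clipped
    lower = xs[np.maximum(np.arange(n) - m, 0)]      # X_(i-m), clipped
    return np.mean(np.log(n / (2 * m) * (upper - lower)))

rng = np.random.default_rng(3)
x = rng.normal(size=5000)
h_hat = vasicek_entropy(x)
# The normal law maximizes entropy at fixed variance: 0.5*ln(2*pi*e*var).
h_normal = 0.5 * np.log(2 * np.pi * np.e * x.var())
print(h_hat <= h_normal + 0.05)  # near-maximal entropy -> looks normal
```

An entropy-based normality test rejects when h_hat falls well below h_normal, since a large gap indicates departure from normality.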
Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.
Nguyen, Hien D; Wood, Ian A
2016-04-01
Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
Multipartite classical and quantum secrecy monotones
International Nuclear Information System (INIS)
Cerf, N.J.; Massar, S.; Schneider, S.
2002-01-01
In order to study multipartite quantum cryptography, we introduce quantities which vanish on product probability distributions, and which can only decrease if the parties carry out local operations or public classical communication. These 'secrecy monotones' therefore measure how much secret correlation is shared by the parties. In the bipartite case we show that the mutual information is a secrecy monotone. In the multipartite case we describe two different generalizations of the mutual information, both of which are secrecy monotones. The existence of two distinct secrecy monotones allows us to show that in multipartite quantum cryptography the parties must make irreversible choices about which multipartite correlations they want to obtain. Secrecy monotones can be extended to the quantum domain and are then defined on density matrices. We illustrate this generalization by considering tripartite quantum cryptography based on the Greenberger-Horne-Zeilinger state. We show that before carrying out measurements on the state, the parties must make an irreversible decision about what probability distribution they want to obtain.
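The bipartite secrecy monotone identified above, the mutual information, is straightforward to compute for a discrete joint distribution, and its defining property (vanishing on product distributions) can be checked directly:

```python
import numpy as np

def mutual_information(p_xy):
    # I(X;Y) in bits for a joint probability table; it vanishes exactly
    # on product distributions, as a secrecy monotone should.
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

# Perfectly correlated bits: one shared secret bit.
corr = np.array([[0.5, 0.0], [0.0, 0.5]])
# Independent uniform bits: no secret correlation.
prod = np.array([[0.25, 0.25], [0.25, 0.25]])
print(mutual_information(corr))  # 1.0
print(mutual_information(prod))  # 0.0
```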
Sandberg, Mattias
2015-01-07
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2017-01-01
This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable than traditional sample covariance estimates for high dimensional problems with a limited number of secondary data samples. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
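The regularized Tyler estimator discussed above is defined as the fixed point of a shrinkage-regularized iteration; a minimal sketch (the paper's asymptotically optimal choice of ρ is not reproduced, so ρ is set arbitrarily here):

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    # Fixed-point iteration for the regularized Tyler estimator (RTE):
    # S <- (1-rho) * (p/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i) + rho*I
    # The rho*I term forces every eigenvalue of S above rho.
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        q = np.einsum('ij,jk,ik->i', X, Sinv, X)  # x_i^T S^{-1} x_i
        S_new = (1 - rho) * (p / n) * (X / q[:, None]).T @ X + rho * np.eye(p)
        if np.abs(S_new - S).max() < tol:
            return S_new
        S = S_new
    return S

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
S = regularized_tyler(X, rho=0.3)
# Eigenvalues are bounded below by rho by construction.
print(S.shape, np.linalg.eigvalsh(S).min() > 0.3 - 1e-9)
```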
Estimation of serum ferritin for normal subject living in Khartoum area
International Nuclear Information System (INIS)
Eltayeb, E.A; Khangi, F.A.; Satti, G.M.; Abu Salab, A.
2003-01-01
This study was conducted with a main objective: the estimation of the serum ferritin level in normal subjects in the Khartoum area. To fulfil this objective, two hundred and sixty symptom-free subjects were included in the study, 103 of them males, aged 15 to 45 years. Serum ferritin was determined by radioimmunoassay (RIA). It was found that the mean concentration of the males' serum ferritin was much higher than that of the females' (p<0.001). (Author)
Zarrouk, Fayçal; Bouhlel, Ezdine; Feki, Youssef; Amri, Mohamed; Shephard, Roy J
2009-01-01
Our aim was to test the normality of physical activity patterns and energy expenditures in normal weight and overweight primary school students. Heart rate estimates of total daily energy expenditure (TEE), active energy expenditure (AEE), and activity patterns were made over 3 consecutive school days in healthy middle-class Tunisian children (46 boys, 44 girls; median age (25th-75th percentile) 9.2 (8.8-9.9) years). Our cross-section included 52 students with a normal body mass index (BMI) and 38 who exceeded age-specific BMI limits. TEE, AEE and overall physical activity level (PAL) were not different between overweight children and those with a normal BMI [median values (25th-75th) 9.20 (8.20-9.84) vs. 8.88 (7.42-9.76) MJ/d; 3.56 (2.59-4.22) vs. 3.85 (2.77-4.78) MJ/d; and 1.74 (1.54-2.04) vs. 1.89 (1.66-2.15), respectively]. Physical activity intensities (PAI) were expressed as percentages of the individual's heart rate reserve (%HRR). The median PAI for the entire day (PAI24) and for the waking part of the day (PAIw) were lower in overweight than in normal weight individuals [16.3 (14.2-18.9) vs. 20.6 (17.9-22.3) %HRR]; normal weight children spend more time in moderate activity and less time in sedentary pursuits than overweight children.
New concurrent iterative methods with monotonic convergence
Energy Technology Data Exchange (ETDEWEB)
Yao, Qingchuan [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
This paper proposes new concurrent iterative methods that use no derivatives for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. The corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application in solving symmetric eigenproblems.
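The paper's own monotone variants are not reproduced in this record, but the classical derivative-free concurrent iteration of the same family, the Weierstrass/Durand-Kerner method, illustrates the idea of finding all zeros simultaneously without derivatives:

```python
import numpy as np

def durand_kerner(coeffs, n_iter=100, tol=1e-12):
    # Weierstrass/Durand-Kerner: a derivative-free iteration that refines
    # approximations to all roots of a monic polynomial at once via
    #   z_i <- z_i - p(z_i) / prod_{j != i} (z_i - z_j)
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                       # make monic
    n = len(c) - 1
    z = (0.4 + 0.9j) ** np.arange(n)   # standard distinct starting points
    for _ in range(n_iter):
        p = np.polyval(c, z)
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, 1.0)    # exclude the j == i factor
        delta = p / diff.prod(axis=1)
        z = z - delta
        if np.abs(delta).max() < tol:
            break
    return z

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
roots = np.sort_complex(durand_kerner([1, -6, 11, -6]))
print(np.round(roots.real, 6))  # [1. 2. 3.]
```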
Siozopoulos, Achilleas; Thomaidis, Vasilios; Prassopoulos, Panos; Fiska, Aliki
2018-02-01
Literature includes a number of studies using structural MRI (sMRI) to determine the volume of the amygdala, which is modified in various pathologic conditions. The reported values vary widely, mainly because of different anatomical approaches to the complex. This study aims at estimating the normal amygdala volume from sMRI scans using a recent anatomical definition described in a study based on post-mortem material. The amygdala volume has been calculated in 106 healthy subjects, using sMRI and anatomy-based segmentation. The resulting volumes have been analyzed for differences related to hemisphere, sex, and age. The mean amygdalar volume was estimated at 1.42 cm³. The mean right amygdala volume was found to be larger than the left, but the difference for the raw values was within the limits of the method error. No intersexual differences or age-related alterations have been observed. The study provides a method for determining the boundaries of the amygdala in sMRI scans based on recent anatomical considerations, and an estimation of the mean normal amygdala volume from a quite large number of scans for future use in comparative studies.
Slope Estimation during Normal Walking Using a Shank-Mounted Inertial Sensor
Directory of Open Access Journals (Sweden)
Juan C. Álvarez
2012-08-01
Full Text Available In this paper we propose an approach for the estimation of the slope of the walking surface during normal walking, using a body-worn sensor composed of a biaxial accelerometer and a uniaxial gyroscope attached to the shank. It builds upon a state-of-the-art technique that was successfully used to estimate walking velocity from stride data, but did not work when used to estimate the slope of the walking surface. As claimed by the authors, the reason was that it did not take into account the actual inclination of the shank of the stance leg at the beginning of the stride (mid stance). In this paper, inspired by the biomechanical characteristics of human walking, we propose to solve this issue by using the accelerometer as a tilt sensor, assuming that at mid stance it measures only the gravity acceleration. Results from a set of experiments involving several users walking at different inclinations on a treadmill confirm the feasibility of our approach. A statistical analysis of the slope estimations shows, first, that the technique is capable of distinguishing the different slopes of the walking surface for every subject. It reports a global RMS error (per-unit difference between actual and estimated inclination of the walking surface) of 0.05 for each stride identified in the experiments, which can be reduced to 0.03 with subject-specific calibration and post-processing procedures by means of averaging techniques.
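The key step described above, using the accelerometer as a tilt sensor at mid stance, amounts to recovering the shank inclination from the gravity vector's projection onto the two sensing axes. A minimal sketch (the axis naming and sign convention here are illustrative assumptions):

```python
import numpy as np

def shank_tilt_deg(a_forward, a_along):
    # At mid stance the accelerometer is assumed to measure gravity only,
    # so the tilt of the shank follows from the two measured components.
    # a_forward: component on the axis perpendicular to the shank,
    # a_along:   component on the axis along the shank (sign convention
    # is illustrative).
    return np.degrees(np.arctan2(a_forward, a_along))

# Shank aligned with gravity: a = (0, 1 g) -> 0 degrees of tilt.
print(shank_tilt_deg(0.0, 1.0))  # 0.0
# Shank tilted 10 degrees: gravity projects onto both axes.
a_f = np.sin(np.radians(10.0))
a_a = np.cos(np.radians(10.0))
print(round(shank_tilt_deg(a_f, a_a), 1))  # 10.0
```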
Directory of Open Access Journals (Sweden)
Edmond Zahedi
2015-01-01
Full Text Available The feasibility of a novel system to reliably estimate the normalized central blood pressure (CBPN) from the radial photoplethysmogram (PPG) is investigated. Right-wrist radial blood pressure and left-wrist PPG were simultaneously recorded on five different days. An industry-standard applanation tonometer was employed for recording radial blood pressure. The CBP waveform was amplitude-normalized to determine CBPN. A total of fifteen second-order autoregressive models with exogenous input were investigated using system identification techniques. Among these 15 models, the model producing the lowest coefficient of variation (CV) of the fitness during the five days was selected as the reference model. Results show that the proposed model is able to faithfully reproduce CBPN (mean fitness = 85.2% ± 2.5%) from the radial PPG for all 15 segments during the five recording days. The low CV value of 3.35% suggests a stable model valid for different recording days.
An estimation of population doses from a nuclear power plant during normal operation
International Nuclear Information System (INIS)
Nowicki, K.
1975-07-01
A model is presented for estimation of the potential submersion and inhalation radiation doses to people located within a distance of 1000 km from a nuclear power plant during normal operation. The model was used to calculate doses for people living 200-1000 km from a hypothetical nuclear power facility sited near the geographical centre of Denmark. Two kinds of sources are considered for this situation: unit release of 15 isotopes of noble gases and iodines, and effluent releases from two types of 1000 MWe Light Water Power Reactors: PWR and BWR. Parameter variations were made and analyzed in order to obtain a better understanding of the mechanisms of the model. (author)
Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo
2017-01-01
The main objective of our work is to perform an in-depth analysis of the structural features of normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris densities with circular Regions of Interest (ROIs) of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We then characterized the textural features of choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density, respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
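The density measurement described above, the fraction of white pixels inside a circular ROI of a binarized angiogram, can be sketched directly. The synthetic image below is an assumption for illustration, seeded to match the reported mean density of 0.52:

```python
import numpy as np

def roi_density(img, center, radius_px):
    # Fraction of white pixels inside a circular ROI of a binary image,
    # as used to measure subfoveal choriocapillaris density.
    yy, xx = np.indices(img.shape)
    mask = (yy - center[0])**2 + (xx - center[1])**2 <= radius_px**2
    return img[mask].mean()

rng = np.random.default_rng(5)
# Synthetic binary "angiogram" with 52% white pixels (illustrative,
# matching the reported mean density).
img = (rng.random((400, 400)) < 0.52).astype(float)
d = roi_density(img, center=(200, 200), radius_px=150)
print(round(d, 2))  # ≈ 0.52
```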
A note on monotone real circuits
Czech Academy of Sciences Publication Activity Database
Hrubeš, Pavel; Pudlák, Pavel
2018-01-01
Roč. 131, March (2018), s. 15-19 ISSN 0020-0190 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : computational complexity * monotone real circuit * Karchmer-Wigderson game Subject RIV: BA - General Mathematics OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.748, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020019017301965?via%3Dihub
Directory of Open Access Journals (Sweden)
Rodrigo Moura Pereira
2016-06-01
Full Text Available Large farmland areas and knowledge of the interaction between solar radiation and vegetation canopies have increased the use of data from orbital remote sensors in sugarcane monitoring. However, the constituents of the atmosphere affect the reflectance values obtained by imaging sensors. This study aimed at improving a sugarcane Leaf Area Index (LAI) estimation model based on the Normalized Difference Vegetation Index (NDVI) subjected to atmospheric correction. The model generated by the NDVI with atmospheric correction showed the best results (R² = 0.84; d = 0.95; MAE = 0.44; RMSE = 0.55), in relation to the other models compared. LAI estimation with this model, during the sugarcane plant cycle, reached a maximum of 4.8 at the vegetative growth phase and 2.3 at the end of the maturation phase. Thus, the use of atmospheric correction to estimate the sugarcane LAI is recommended, since this procedure increases the correlations between the LAI estimated by image and by plant parameters.
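The NDVI on which the LAI model above is built is a simple band ratio; a minimal sketch, where the linear LAI ~ NDVI coefficients and the reflectance values are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from (atmospherically
    # corrected) near-infrared and red surface reflectances.
    return (nir - red) / (nir + red)

def lai_from_ndvi(v, a=6.0, b=-0.5):
    # Illustrative linear LAI ~ NDVI model; coefficients a, b are
    # assumptions, not the values fitted in the study.
    return a * v + b

nir, red = 0.45, 0.08   # example reflectances for a dense canopy
v = ndvi(nir, red)
print(round(v, 3))                 # 0.698
print(round(lai_from_ndvi(v), 2))  # 3.69
```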
International Nuclear Information System (INIS)
Lee, Myung Uk
1979-01-01
The radiological measurement of the interpedicular distance using a routine antero-posterior view of the spine gives important clinical criteria for evaluating intraspinal tumors and stenosis of the spinal canal, and aids in the diagnosis of these lesions. In 1934 Elsberg and Dyke reported values of the interpedicular distance as determined on roentgenograms of the spine of white adults, and in 1968 Song prepared normal values of the interpedicular distance for Korean adults. The present investigation was undertaken to provide normal interpedicular distances for Korean teenagers. The author observed the antero-posterior films of the spine of 200 normal teenagers, 100 male and 100 female. The normal values of the interpedicular distance of Korean teenagers were obtained, as well as the 90% tolerance range for clinical use. In the statistical analysis, significant differences were noted between males and females and between age groups. Average male measurements were consistently larger than female measurements by about 1 mm, and growth of the spinal canal appeared to continue through the teenage years.
Directory of Open Access Journals (Sweden)
Thomson Peter C
2003-05-01
Full Text Available Abstract To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
Estimation of normal chromium-51 ethylene diamine tetra-acetic acid clearance in children
International Nuclear Information System (INIS)
Piepsz, A.; Pintelon, H.; Ham, H.R.
1994-01-01
In order to estimate the normal range of chromium-51 ethylene diamine tetra-acetic acid (EDTA) clearance in children, we selected a series of 256 patients with past or present urinary tract infection who showed, at the time of the clearance determination, normal technetium-99m dimercaptosuccinic acid (DMSA) scintigraphy and normal left-to-right DMSA relative uptake. The clearance was calculated by means of either the simplified second exponential method or the 120-min single blood sample; Chantler's correction was used in order to correct for having neglected the first exponential. There was a progressive increase in clearance from the first weeks of life (mean value around 1 month: 55 ml/min/1.73 m²), with a plateau at around 18 months. Between 2 and 17 years of age, the clearance values remained constant, with a mean value of 114 ml/min/1.73 m² (SD: 24 ml/min); this is similar to the level described for inulin clearance. No significant differences were observed between boys and girls, or between clearance values calculated with one or with two blood samples. Taking into account the hour of intravenous injection of the tracer, we did not observe any influence of the lunchtime meal on the distribution of the 51Cr-EDTA clearance values. (orig.)
Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods
Hundsdorfer, W.; Mozartova, A.; Spijker, M. N.
2011-01-01
In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many linear multistep methods of practical interest are included in the theory.
Directory of Open Access Journals (Sweden)
YOU Haotian
2018-02-01
Full Text Available The intensity data of airborne light detection and ranging (LiDAR) are affected by many factors during the acquisition process. Effective quantification and normalization of each factor's effect is of great significance for the normalization and application of LiDAR intensity data. In this paper, the LiDAR data were normalized for range, for angle of incidence, and for both range and angle of incidence, based on the radar equation. Two metrics, canopy intensity sum and ratio of intensity, were then extracted and used to estimate forest LAI, with the aim of quantifying the effects of intensity normalization on forest LAI estimation. Range intensity normalization improved the accuracy of forest LAI estimation, whereas angle-of-incidence normalization did not improve the accuracy and made the results worse. Although intensity data normalized for both range and incidence angle improved the accuracy, the improvement was smaller than that from range normalization alone. Meanwhile, the differences between forest LAI estimates from raw and normalized intensity data were relatively large for the canopy intensity sum metric but relatively small for the ratio-of-intensity metric. The results demonstrate that the effect of intensity normalization on forest LAI estimation depends on the affecting factor chosen, and the level of influence is closely related to the characteristics of the metrics used. The appropriate method of intensity normalization should therefore be chosen according to the characteristics of the metrics used in future research, to avoid the wasted cost and reduced estimation accuracy caused by introducing inappropriate affecting factors into intensity normalization.
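The range normalization described in this record is commonly derived from the radar equation: for an extended target, received power falls off as 1/R², so raw intensity is scaled to a common reference range. A minimal sketch (the reference range value is an illustrative assumption, not from the paper):

```python
def range_normalize(intensity, r, r_ref=1000.0):
    # Radar-equation range correction for an extended target: received
    # power falls off as 1/R^2, so raw intensity is scaled by (R/R_ref)^2
    # to a common reference range. r_ref = 1000 m is a hypothetical choice.
    return intensity * (r / r_ref) ** 2

# A return at twice the reference range is scaled up by a factor of 4.
print(range_normalize(100.0, 2000.0))  # 400.0
```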
Assessment of ANN and SVM models for estimating normal direct irradiation (Hb)
International Nuclear Information System (INIS)
Santos, Cícero Manoel dos; Escobedo, João Francisco; Teramoto, Érico Tadao; Modenese Gorla da Silva, Silvia Helena
2016-01-01
Highlights: • The performance of SVM and ANN in estimating normal direct irradiation (Hb) was evaluated. • 12 models using different input variables are developed (hourly and daily partitions). • The most relevant input variables for DNI are kt, Hsc and insolation ratio (r′ = n/N). • Support Vector Machine (SVM) provides accurate estimates and outperforms the Artificial Neural Network (ANN). - Abstract: This study evaluates the estimation of hourly and daily normal direct irradiation (Hb) using machine learning techniques (ML): Artificial Neural Network (ANN) and Support Vector Machine (SVM). Time series of different meteorological variables measured over thirteen years in Botucatu were used for training and validating ANN and SVM. Seven different sets of input variables were tested and evaluated, which were chosen based on statistical models reported in the literature. Relative Mean Bias Error (rMBE), Relative Root Mean Square Error (rRMSE), determination coefficient (R²) and Willmott's "d" index were used to evaluate ANN and SVM models. When compared to statistical models which use the same set of input variables (R² between 0.22 and 0.78), ANN and SVM show higher values of R² (hourly models between 0.52 and 0.88; daily models between 0.42 and 0.91). Considering the input variables, atmospheric transmissivity of global radiation (kt), integrated solar constant (Hsc) and insolation ratio (n/N, where n is sunshine duration and N is photoperiod) were the most relevant in ANN and SVM models. The rMBE and rRMSE values in the two time partitions of SVM models are lower than those obtained with ANN. Hourly ANN and SVM models have higher rRMSE values than daily models. Optimal performance with hourly models was obtained with ANN4h (rMBE = 12.24%, rRMSE = 23.99% and "d" = 0.96) and SVM4h (rMBE = 1.75%, rRMSE = 20.10% and "d" = 0.96). Optimal performance with daily models was obtained with ANN2d (rMBE = −3.09%, rRMSE = 18.95% and "d" = 0
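The evaluation metrics this record relies on (rMBE, rRMSE, R², Willmott's "d") can be computed directly; a minimal sketch using one common definition of each (the paper's exact formulas may differ slightly):

```python
import math

def metrics(obs, est):
    # obs: measured values; est: model estimates (equal-length sequences)
    n = len(obs)
    mean_obs = sum(obs) / n
    mbe = sum(e - o for o, e in zip(obs, est)) / n          # mean bias error
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    ss_res = sum((e - o) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1 - ss_res / ss_tot                                # determination coefficient
    # Willmott's index of agreement "d"
    d = 1 - ss_res / sum((abs(e - mean_obs) + abs(o - mean_obs)) ** 2
                         for o, e in zip(obs, est))
    return {"rMBE": 100 * mbe / mean_obs, "rRMSE": 100 * rmse / mean_obs,
            "R2": r2, "d": d}
```

Relative (percentage) forms divide by the mean of the observations, as is usual for irradiation studies.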
Testing Manifest Monotonicity Using Order-Constrained Statistical Inference
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas
2013-01-01
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…
Estimation of polyclonal IgG4 hybrids in normal human serum.
Young, Elizabeth; Lock, Emma; Ward, Douglas G; Cook, Alexander; Harding, Stephen; Wallis, Gregg L F
2014-07-01
The in vivo or in vitro formation of IgG4 hybrid molecules, wherein the immunoglobulins have exchanged half molecules, has previously been reported under experimental conditions. Here we estimate the incidence of polyclonal IgG4 hybrids in normal human serum and comment on the existence of IgG4 molecules with different immunoglobulin light chains. Polyclonal IgG4 was purified from pooled or individual donor human sera and sequentially fractionated using light-chain affinity and size exclusion chromatography. Fractions were analysed by SDS-PAGE, immunoblotting, ELISA, immunodiffusion and matrix-assisted laser-desorption mass spectrometry. Polyclonal IgG4 purified from normal serum contained IgG4κ, IgG4λ and IgG4κ/λ molecules. Size exclusion chromatography showed that IgG4 was principally present in monomeric form (150 000 MW). SDS-PAGE, immunoblotting and ELISA showed the purity of the three IgG4 samples. Immunodiffusion, light-chain sandwich ELISA and mass spectrometry demonstrated that both κ and λ light chains were present on only the IgG4κ/λ molecules. The amounts of IgG4κ/λ hybrid molecules ranged from 21 to 33% from the five sera analysed. Based on the molecular weight these molecules were formed of two IgG4 heavy chains plus one κ and one λ light chain. Polyclonal IgG (IgG4-depleted) was similarly fractionated according to light-chain specificity. No evidence of hybrid IgG κ/λ antibodies was observed. These results indicate that hybrid IgG4κ/λ antibodies compose a substantial portion of IgG4 from normal human serum. © 2014 John Wiley & Sons Ltd.
Monotone Comparative Statics for the Industry Composition
DEFF Research Database (Denmark)
Laugesen, Anders Rosenstand; Bache, Peter Arendorf
2015-01-01
We let heterogeneous firms face decisions on a number of complementary activities in a monopolistically-competitive industry. The endogenous level of competition and selection regarding entry and exit of firms introduces a wedge between monotone comparative statics (MCS) at the firm level and MCS… for the industry composition. The latter phenomenon is defined as first-order stochastic dominance shifts in the equilibrium distributions of all activities across active firms. We provide sufficient conditions for MCS at both levels of analysis and show that we may have either type of MCS without the other…
Wedemeyer, Gary A.; Nelson, Nancy C.
1975-01-01
Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
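The Gaussian and percentile-estimate approaches compared in this record can be sketched side by side; the 1.96-SD multiplier and 2.5/97.5 percentile cutoffs are the conventional 95% choices, not necessarily the study's exact settings:

```python
import statistics

def gaussian_range(values, k=1.96):
    # Normal range as mean ± k*SD; assumes the data are Gaussian.
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return m - k * s, m + k * s

def percentile_range(values, lo=2.5, hi=97.5):
    # Distribution-free normal range from sample percentiles, with
    # linear interpolation between order statistics.
    xs = sorted(values)
    def pct(p):
        idx = (len(xs) - 1) * p / 100
        i, frac = int(idx), idx - int(idx)
        return xs[i] if frac == 0 else xs[i] + frac * (xs[i + 1] - xs[i])
    return pct(lo), pct(hi)
```

For roughly Gaussian data the two ranges agree closely, which matches the study's finding.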
Earth's Outer Core Properties Estimated Using Bayesian Inversion of Normal Mode Eigenfrequencies
Irving, J. C. E.; Cottaar, S.; Lekic, V.
2016-12-01
The outer core is arguably Earth's most dynamic region, and consists of an iron-nickel liquid with an unknown combination of lighter alloying elements. Frequencies of Earth's normal modes provide the strongest constraints on the radial profiles of compressional wavespeed, VΦ, and density, ρ, in the outer core. Recent great earthquakes have yielded new normal mode measurements; however, mineral physics experiments and calculations are often compared to the Preliminary reference Earth model (PREM), which is 35 years old and does not provide uncertainties. Here we investigate the thermo-elastic properties of the outer core using Earth's free oscillations and a Bayesian framework. To estimate radial structure of the outer core and its uncertainties, we choose to exploit recent datasets of normal mode centre frequencies. Under the self-coupling approximation, centre frequencies are unaffected by lateral heterogeneities in the Earth, for example in the mantle. Normal modes are sensitive to both VΦ and ρ in the outer core, with each mode's specific sensitivity depending on its eigenfunctions. We include a priori bounds on outer core models that ensure compatibility with measurements of mass and moment of inertia. We use Bayesian Markov chain Monte Carlo techniques to explore different choices in parameterizing the outer core, each of which represents different a priori constraints. We test how results vary (1) assuming a smooth polynomial parametrization, (2) allowing for structure close to the outer core's boundaries, (3) assuming an Equation-of-State and adiabaticity and inverting directly for thermo-elastic parameters. In the second approach we recognize that the outer core may have distinct regions close to the core-mantle and inner core boundaries and investigate models which parameterize the well mixed outer core separately from these two layers. In the last approach we seek to map the uncertainties directly into thermo-elastic parameters including the bulk
The Monotonicity Puzzle: An Experimental Investigation of Incentive Structures
Directory of Open Access Journals (Sweden)
Jeannette Brosig
2010-05-01
Full Text Available Non-monotone incentive structures, which according to theory are able to induce optimal behavior, are often regarded as empirically less relevant for labor relationships. We compare the performance of a theoretically optimal non-monotone contract with a monotone one under controlled laboratory conditions. Implementing some features relevant to real-world employment relationships, our paper demonstrates that, in fact, the frequency of income-maximizing decisions made by agents is higher under the monotone contract. Although this observed behavior does not change the superiority of the non-monotone contract for principals, they do not choose this contract type to a significant degree. This is what we call the monotonicity puzzle. Detailed investigation of the decisions provides a clue for solving the puzzle and a possible explanation for the popularity of monotone contracts.
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
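The iterative maximum-likelihood procedure analyzed in this record is, in modern terms, an EM-type iteration for normal mixtures. A minimal univariate two-component sketch (starting values and iteration count are illustrative choices, not from the paper):

```python
import math

def em_normal_mixture(data, iters=200):
    # EM for a two-component 1-D Gaussian mixture. Starting values are
    # rough quantile-based guesses (a hypothetical initialization).
    xs = sorted(data)
    n = len(xs)
    mu1, mu2 = xs[n // 4], xs[3 * n // 4]
    s1 = s2 = (xs[-1] - xs[0]) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        resp = []
        for x in xs:
            p1 = w * math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = (1 - w) * math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / s2
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and standard deviations
        r1 = sum(resp)
        w = r1 / n
        mu1 = sum(r * x for r, x in zip(resp, xs)) / r1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / (n - r1)
        s1 = math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, xs)) / r1)
        s2 = math.sqrt(sum((1 - r) * (x - mu2) ** 2
                           for r, x in zip(resp, xs)) / (n - r1))
    return w, (mu1, s1), (mu2, s2)
```

As the abstract notes, convergence is local: with well-separated components and reasonable starting values the iteration settles on the consistent estimate.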
Chen, Baojiang; Qin, Jing
2014-05-10
In statistical analysis, a regression model is needed if one is interested in the relationship between a response variable and covariates. The response may depend on some unknown function of a covariate; if one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
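The pool-adjacent-violators algorithm this record builds on is short enough to sketch in full; this is the standard least-squares nondecreasing fit, not the paper's augmented estimator:

```python
def pava(y, weights=None):
    # Pool-adjacent-violators algorithm for nondecreasing isotonic
    # regression: returns the weighted least-squares monotone fit to y.
    w = list(weights) if weights else [1.0] * len(y)
    # Each block holds [fitted value, total weight, number of points pooled].
    merged = []
    for yi, wi in zip(y, w):
        merged.append([yi, wi, 1])
        # Pool adjacent blocks while they violate monotonicity.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2, c2 = merged.pop()
            v1, w1, c1 = merged.pop()
            merged.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    out = []
    for v, _, c in merged:
        out.extend([v] * c)
    return out
```

Each violating pair is replaced by its weighted mean, so the output is nondecreasing and minimizes the weighted sum of squared deviations.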
Generalized convexity, generalized monotonicity recent results
Martinez-Legaz, Juan-Enrique; Volle, Michel
1998-01-01
A function is convex if its epigraph is convex. This geometrical structure has very strong implications in terms of continuity and differentiability. Separation theorems lead to optimality conditions and duality for convex problems. A function is quasiconvex if its lower level sets are convex. Here again, the geometrical structure of the level sets implies some continuity and differentiability properties for quasiconvex functions. Optimality conditions and duality can be derived for optimization problems involving such functions as well. Over a period of about fifty years, quasiconvex and other generalized convex functions have been considered in a variety of fields including economics, management science, engineering, probability and applied sciences, in accordance with the needs of particular applications. During the last twenty-five years, an increase of research activities in this field has been witnessed. More recently generalized monotonicity of maps has been studied. It relates to generalized conve...
Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients
Matevosyan, Norayr; Petrosyan, Arshak
2010-01-01
In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2)155 (2002)] and Caffarelli and Kenig [Amer. J. Math.120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u
Type monotonic allocation schemes for multi-glove games
Brânzei, R.; Solymosi, T.; Tijs, S.H.
2007-01-01
Multiglove markets and corresponding games are considered. For this class of games we introduce the notion of type monotonic allocation scheme. Allocation rules for multiglove markets based on weight systems are introduced and characterized. These allocation rules generate type monotonic allocation schemes for multiglove games and are also helpful in proving that each core element of the corresponding game is extendable to a type monotonic allocation scheme. The T-value turns out to generate a ty...
Stability of dynamical systems on the role of monotonic and non-monotonic Lyapunov functions
Michel, Anthony N; Liu, Derong
2015-01-01
The second edition of this textbook provides a single source for the analysis of system models represented by continuous-time and discrete-time, finite-dimensional and infinite-dimensional, and continuous and discontinuous dynamical systems. For these system models, it presents results which comprise the classical Lyapunov stability theory involving monotonic Lyapunov functions, as well as corresponding contemporary stability results involving non-monotonic Lyapunov functions. Specific examples from several diverse areas are given to demonstrate the applicability of the developed theory to many important classes of systems, including digital control systems, nonlinear regulator systems, pulse-width-modulated feedback control systems, and artificial neural networks. The authors cover the following four general topics: - Representation and modeling of dynamical systems of the types described above - Presentation of Lyapunov and Lagrange stability theory for dynamical sy...
Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods
Hundsdorfer, W.
2011-04-29
In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many linear multistep methods of practical interest are included in the theory. Moreover, it will be shown that for such methods monotonicity can still be valid with suitable Runge-Kutta starting procedures. Restrictions on the stepsizes are derived that are not only sufficient but also necessary for these boundedness and monotonicity properties. © 2011 Springer Science+Business Media, LLC.
Directory of Open Access Journals (Sweden)
Xianglin Meng
2018-03-01
Full Text Available The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge posed by the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel fast method based on bi-linear interpolation is reported for normal vector estimation of LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a bi-linear interpolation of the normal vectors of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors of, and calculate normal vectors for, the interpolation nodes, which are usually far fewer than the points in the cloud. The experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, with an average deviation of less than 0.01 mm.
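The bi-linear interpolation at the heart of this method can be sketched on a unit cell; applying it component-wise to corner-node normals and renormalizing is one plausible reading of the approach, not the authors' code:

```python
def bilinear(f00, f10, f01, f11, u, v):
    # Bilinear interpolation on a unit cell: f00..f11 are values at the
    # four corners, (u, v) in [0, 1]^2 are the local coordinates.
    return (f00 * (1 - u) * (1 - v) + f10 * u * (1 - v)
            + f01 * (1 - u) * v + f11 * u * v)

def interp_normal(n00, n10, n01, n11, u, v):
    # Interpolate a unit normal from four corner normals, component-wise,
    # then renormalize so the result is again a unit vector.
    nx, ny, nz = (bilinear(a, b, c, d, u, v)
                  for a, b, c, d in zip(n00, n10, n01, n11))
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)
```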
Directory of Open Access Journals (Sweden)
Yerriswamy Wooluru
2016-06-01
Full Text Available Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled, and indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than Clements' method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness ≤ 1.5).
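The conventional and percentile-based indices contrasted in this record can be sketched directly; the percentile version follows the Clements idea of replacing 6σ with the empirical 0.135th-99.865th percentile spread (a simplified empirical-quantile sketch, not Clements' full curve-fitting procedure):

```python
import statistics

def cp_cpk(data, lsl, usl):
    # Conventional indices; assume a normal, in-control process.
    m = statistics.mean(data)
    s = statistics.stdev(data)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - m, m - lsl) / (3 * s)
    return cp, cpk

def percentile_cp(data, lsl, usl):
    # Surrogate index: replace 6*sigma with the empirical spread between
    # the 0.135th and 99.865th percentiles, so normality is not assumed.
    xs = sorted(data)
    def q(p):
        idx = (len(xs) - 1) * p
        i = int(idx)
        return xs[i] + (idx - i) * (xs[min(i + 1, len(xs) - 1)] - xs[i])
    return (usl - lsl) / (q(0.99865) - q(0.00135))
```

For normal data the two agree; for skewed data the percentile version tracks the actual tail behavior.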
Wenying, Wei; Jinyu, Han; Wen, Xu
2004-01-01
The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization ΔvapH(Tb) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of ΔvapH(Tb) is 1.16, which shows that the present method demonstrates significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.
Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A
1988-12-01
Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.
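The prediction equations described in this record are linear regressions of body density on the log of the summed skinfolds, with density then converted to percent fat; Siri's equation is the standard conversion, while the regression coefficients below are purely illustrative:

```python
def fit_density_model(log_sums, densities):
    # Least-squares fit of body density D = a + b * log10(sum of skinfolds);
    # b is typically negative. Coefficients depend entirely on the sample.
    n = len(log_sums)
    mx = sum(log_sums) / n
    my = sum(densities) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(log_sums, densities))
         / sum((x - mx) ** 2 for x in log_sums))
    return my - b * mx, b  # intercept a, slope b

def siri_percent_fat(density):
    # Siri's equation: whole-body density (g/mL) to percent body fat.
    return 495.0 / density - 450.0
```

The abstract's finding follows from this pipeline: cystic fibrosis raises density at a given skinfold sum, so the density-based fat estimate is biased low even when skinfolds are normal.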
Rates of convergence and asymptotic normality of curve estimators for ergodic diffusion processes
J.H. van Zanten (Harry)
2000-01-01
textabstractFor ergodic diffusion processes, we study kernel-type estimators for the invariant density, its derivatives and the drift function. We determine rates of convergence and find the joint asymptotic distribution of the estimators at different points.
2009-01-01
Abstract A kernel estimator of the conditional quantile is defined for a scalar response variable given a covariate taking values in a semi-metric space. The approach generalizes the median's L1-norm estimator. The almost complete consistency and asymptotic normality are stated.
Monotone measures of ergodicity for Markov chains
Directory of Open Access Journals (Sweden)
J. Keilson
1998-01-01
Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.
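The monotone approach to ergodicity can be illustrated with one well-known monotone measure, the total variation distance to the stationary distribution, which is non-increasing under any stochastic matrix (a generic illustration, not the specific norms of the paper):

```python
def tv_distance(p, q):
    # Total variation distance between two finite distributions.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def step(dist, P):
    # One step of the chain: row vector times transition matrix.
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
```

Iterating `step` from any starting distribution, the distance to the stationary distribution never increases and, for an ergodic chain, decays to zero.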
Estimation of the Mean of a Univariate Normal Distribution When the Variance is not Known
Danilov, D.L.; Magnus, J.R.
2002-01-01
We consider the problem of estimating the first k coefficients in a regression equation with k + 1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We investigate properties of this estimator in
Estimation of the mean of a univariate normal distribution when the variance is not known
Danilov, Dmitri
2005-01-01
We consider the problem of estimating the first k coefficients in a regression equation with k+1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We generalize this estimator to the case
DEFF Research Database (Denmark)
Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K
2009-01-01
exponentially distributed, representing background noise and signal, respectively. Using a saddle-point approximation, Ritchie and others (2007) found normexp to be the best background correction method for 2-color microarray data. This article develops the normexp method further by improving the estimation… is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data…
Logarithmically completely monotonic functions involving the Generalized Gamma Function
Directory of Open Access Journals (Sweden)
Faton Merovci
2010-12-01
Full Text Available By a simple approach, two classes of functions involving generalizations of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.
Logarithmically completely monotonic functions involving the Generalized Gamma Function
Faton Merovci; Valmir Krasniqi
2010-01-01
By a simple approach, two classes of functions involving generalizations of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.
Testing manifest monotonicity using order-constrained statistical inference
Tijmstra, J.; Hessen, D.J.; van der Heijden, P.G.M.; Sijtsma, K.
2013-01-01
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest
2015-01-01
The recent availability of high frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high frequency data are generally affected by noise-microstructure effects. We address this issue by using the Fourier estimator of instantaneous volatility introduced in Malliavin and Mancino 2002. We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis on high frequency data (U.S. S&P500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617
DEFF Research Database (Denmark)
Sørensen, Flemming Brandt; Müller, J
1990-01-01
Carcinoma in situ of the testis may appear many years prior to the development of an invasive tumour. Using point-sampled intercepts, base-line data concerning unbiased stereological estimates of the volume-weighted mean nuclear volume (nuclear vV) were obtained in 50 retrospective serial… testicular biopsies from 10 patients with carcinoma in situ. All but two patients eventually developed an invasive growth. Testicular biopsies from 10 normal adult individuals and five prepubertal boys were included as controls. Nuclear vV in testicular carcinoma in situ was significantly larger than… that of morphologically normal spermatogonia (2P = 1.0 x 10(-19)), with only minor overlap. Normal spermatogonia from controls had, on average, smaller nuclear vV than morphologically normal spermatogonia in biopsies with ipsi- or contra-lateral carcinoma in situ (2P = 5.2 x 10(-3)). No difference in nuclear vV was found…
Estimating Subglottal Pressure from Neck-Surface Acceleration during Normal Voice Production
Fryd, Amanda S.; Van Stan, Jarrad H.; Hillman, Robert E.; Mehta, Daryush D.
2016-01-01
Purpose: The purpose of this study was to evaluate the potential for estimating subglottal air pressure using a neck-surface accelerometer and to compare the accuracy of predicting subglottal air pressure relative to predicting acoustic sound pressure level (SPL). Method: Indirect estimates of subglottal pressure (Psg') were obtained…
Pattern Matching Framework to Estimate the Urgency of Off-Normal Situations in NPPs
Energy Technology Data Exchange (ETDEWEB)
Shin, Jin Soo; Park, Sang Jun; Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyo Jin; Park, Soon Yeol [Korea Hydro and Nuclear Power, Yeonggwang (Korea, Republic of)
2010-10-15
According to power plant operators, skilled operators can recognize off-normal situations from an incipient stage quite well and anticipate the possibility of upcoming trips, even though it is difficult to clarify the cause of the off-normal situation. From the interviews, we could confirm the feasibility of two assumptions for the diagnosis of off-normal conditions: one is that we can predict whether an accidental shutdown will happen if we observe the early stage when an off-normal situation starts to grow; the other is that observation at the early stage can provide the remaining time to a trip as well as the cause of the off-normal situation. For this purpose, the development of on-line monitoring systems using various data processing techniques in nuclear power plants (NPPs) has received increasing attention and has become an important contributor to improved performance and economics. Many studies have suggested diagnostic methodologies. One representative method uses distance-based discrimination as a similarity measure, for example the Euclidean distance. A variety of artificial intelligence techniques, such as neural networks, have been developed as well. In addition, some of these methodologies reduce the data dimensions to work more effectively. While sharing the same motivation as these previous achievements, this study proposes non-parametric pattern matching techniques to reduce the uncertainty arising from the selection of models and modeling processes. This can be characterized by the following two aspects: first, instead of considering only a few typical scenarios, as in most studies, this study uses the entire set of off-normal situations anticipated in NPPs, created with a full-scope simulator; second, many of the existing researches adopted the process of forming a diagnosis model, a so-called training technique or a parametric
Pattern Matching Framework to Estimate the Urgency of Off-Normal Situations in NPPs
International Nuclear Information System (INIS)
Shin, Jin Soo; Park, Sang Jun; Heo, Gyun Young; Park, Jin Kyun; Kim, Hyo Jin; Park, Soon Yeol
2010-01-01
According to power plant operators, skilled operators can recognize off-normal situations from an incipient stage quite well and anticipate the possibility of upcoming trips, even though it is difficult to clarify the cause of the off-normal situation. From the interviews, we could confirm the feasibility of two assumptions for the diagnosis of off-normal conditions: one is that we can predict whether an accidental shutdown will happen if we observe the early stage when an off-normal situation starts to grow; the other is that observation at the early stage can provide the remaining time to a trip as well as the cause of the off-normal situation. For this purpose, the development of on-line monitoring systems using various data processing techniques in nuclear power plants (NPPs) has received increasing attention and has become an important contributor to improved performance and economics. Many studies have suggested diagnostic methodologies. One representative method uses distance-based discrimination as a similarity measure, for example the Euclidean distance. A variety of artificial intelligence techniques, such as neural networks, have been developed as well. In addition, some of these methodologies reduce the data dimensions to work more effectively. While sharing the same motivation as these previous achievements, this study proposes non-parametric pattern matching techniques to reduce the uncertainty arising from the selection of models and modeling processes. This can be characterized by the following two aspects: first, instead of considering only a few typical scenarios, as in most studies, this study uses the entire set of off-normal situations anticipated in NPPs, created with a full-scope simulator; second, many of the existing researches adopted the process of forming a diagnosis model, a so-called training technique or a parametric
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions, which meet the requirements of lower complexity and larger average gray value, are selected to calculate the final illuminant direction according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we intercept the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch this range onto the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face and reflective surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
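The histogram-interception-and-stretching step described above can be illustrated with a minimal percentile-clip sketch; the percentile thresholds are assumptions, not the paper's values.

```python
import numpy as np

def stretch_gray_levels(img, low_pct=1, high_pct=99, out_max=255):
    """Intercept the values at both ends of the histogram (percentile clip),
    then stretch the remaining gray-level range onto the full dynamic range
    of the display device."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    clipped = np.clip(img, lo, hi)
    return ((clipped - lo) / (hi - lo) * out_max).astype(np.uint8)

img = np.linspace(40.0, 90.0, 10000).reshape(100, 100)  # dull low-contrast image
out = stretch_gray_levels(img)
```

After stretching, the darkest retained pixels map to 0 and the brightest to 255, using the display's full dynamic range.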
Directory of Open Access Journals (Sweden)
Jizheng Yi
Full Text Available Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions, which meet the requirements of lower complexity and larger average gray value, are selected to calculate the final illuminant direction according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we intercept the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch this range onto the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face and reflective surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
Sandberg, Mattias
2015-01-01
log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high-frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible
Normalization and gene p-value estimation: issues in microarray data processing.
Fundel, Katrin; Küffner, Robert; Aigner, Thomas; Zimmer, Ralf
2008-05-28
Numerous methods exist for basic processing, e.g. normalization, of microarray gene expression data. These methods have an important effect on the final analysis outcome. Therefore, it is crucial to select methods appropriate for a given dataset in order to assure the validity and reliability of expression data analysis. Furthermore, biological interpretation requires expression values for genes, which are often represented by several spots or probe sets on a microarray. How best to integrate spot/probe set values into gene values has so far been a somewhat neglected problem. We present a case study comparing different between-array normalization methods with respect to the identification of differentially expressed genes. Our results show that it is feasible and necessary to use prior knowledge on gene expression measurements to select an adequate normalization method for the given data. Furthermore, we provide evidence that combining spot/probe set p-values into gene p-values for detecting differentially expressed genes has advantages over combining expression values for spots/probe sets into gene expression values. The comparison of different methods suggests using Stouffer's method for this purpose. The study was conducted on gene expression experiments investigating human joint cartilage samples of osteoarthritis-related groups: a cDNA microarray (83 samples, four groups) and an Affymetrix (26 samples, two groups) data set. The apparently straightforward steps of gene expression data analysis, e.g. between-array normalization and detection of differentially regulated genes, can be accomplished by numerous different methods. We analyzed multiple methods and the possible effects, and thereby demonstrate the importance of the single decisions taken during data processing. We give guidelines for evaluating normalization outcomes. An overview of these effects via appropriate measures and plots compared to prior knowledge is essential for the biological
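Stouffer's method, recommended above for combining spot/probe-set p-values into a gene-level p-value, can be sketched in a few lines; the example p-values are invented.

```python
from statistics import NormalDist

def stouffer(p_values):
    """Combine per-spot p-values into one gene-level p-value (Stouffer's
    method): map each one-sided p to a z-score, sum, rescale by sqrt(k),
    and map back to a p-value."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1.0 - p) for p in p_values) / len(p_values) ** 0.5
    return 1.0 - nd.cdf(z)

# three spots measuring the same gene, each individually borderline
p_gene = stouffer([0.04, 0.06, 0.05])
```

Three borderline spots combine into a clearly significant gene-level p-value, which is the appeal of combining evidence on the p-value scale rather than averaging expression values.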
Strong monotonicity in mixed-state entanglement manipulation
International Nuclear Information System (INIS)
Ishizaka, Satoshi
2006-01-01
A strong entanglement monotone, which never increases under local operations and classical communications (LOCC), restricts quantum entanglement manipulation more strongly than the usual monotone since the usual one does not increase on average under LOCC. We propose strong monotones in mixed-state entanglement manipulation under LOCC. These are related to the decomposability and one-positivity of an operator constructed from a quantum state, and reveal geometrical characteristics of entangled states. These are lower bounded by the negativity or generalized robustness of entanglement.
Monotonicity-based electrical impedance tomography for lung imaging
Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun
2018-04-01
This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to the lung ventilation can be viewed as either semi-positive or semi-negative definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.
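The semi-definiteness property underlying the monotonicity constraint can be checked numerically from the signs of a matrix's eigenvalues. The toy matrices below are invented stand-ins, not actual EIT current-voltage data.

```python
import numpy as np

def definiteness(M, tol=1e-10):
    """Classify a (symmetrized) matrix as semi-positive, semi-negative, or
    indefinite from the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(0.5 * (M + M.T))
    if np.all(w >= -tol):
        return "semi-positive"
    if np.all(w <= tol):
        return "semi-negative"
    return "indefinite"

# invented toy matrix standing in for a time-differential data operator
inhale = np.array([[2.0, 0.5], [0.5, 1.0]])
label = definiteness(inhale)
```

A monotonicity constraint of the kind used in the paper amounts to requiring every admissible time-difference operator to fall in one of the two semi-definite classes.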
International Nuclear Information System (INIS)
Zvereva, S.V.; Mutovina, G.R.; Khandogina, E.K.; Marchenko, L.F.; Neudakhin, E.V.; Artamonov, R.G.; Akif'ev, A.P.
1993-01-01
In studying the radioprotective action of natural and synthesised antioxidants, a decreased yield of chromosome aberrations with respect to untreated cells was noted in normal cells irradiated in phase G1, whereas no radioprotective effect was found in cells irradiated in G0. The addition of antioxidants to cell cultures from patients with Turner's syndrome did not change their radiosensitivity. No adaptive response was induced in lymphocytes from patients with Down's syndrome cultivated with vitamin E.
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals, and it clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
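A confidence interval based on the asymmetric distribution of the product can be sketched by Monte Carlo simulation of the two coefficients (rather than the exact product distribution); the coefficients and standard errors below are invented.

```python
import random

def product_ci(a, se_a, b, se_b, n=200_000, alpha=0.05, seed=1):
    """Monte Carlo confidence interval for an indirect effect a*b based on
    the asymmetric distribution of the product of two normal variables."""
    rng = random.Random(seed)
    prods = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b) for _ in range(n))
    lo = prods[int(alpha / 2 * n)]
    hi = prods[int((1 - alpha / 2) * n) - 1]
    return lo, hi

# invented path coefficients and standard errors
lo, hi = product_ci(a=0.4, se_a=0.1, b=0.3, se_b=0.1)
```

Unlike a symmetric normal-theory interval a*b ± z*SE, the simulated interval inherits the skew of the product distribution, which is the source of the coverage differences studied in the article.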
Normalization Ridge Regression in Practice II: The Estimation of Multiple Feedback Linkages.
Bulcock, J. W.
The use of the two-stage least squares (2SLS) procedure for estimating nonrecursive social science models is often impractical when multiple feedback linkages are required, because 2SLS is extremely sensitive to multicollinearity. The standard statistical solution to the multicollinearity problem is a biased, variance-reduced procedure…
Goegebeur, Y.; de Boeck, P.; Molenberghs, G.
2010-01-01
The local influence diagnostics, proposed by Cook (1986), provide a flexible way to assess the impact of minor model perturbations on key model parameters’ estimates. In this paper, we apply the local influence idea to the detection of test speededness in a model describing nonresponse in test data,
International Nuclear Information System (INIS)
Wang, Jingjing; Redmond, Stephen J; Narayanan, Michael R; Wang, Ning; Lovell, Nigel H; Voleno, Matteo; Cerutti, Sergio
2012-01-01
Energy expenditure (EE) is an important parameter in the assessment of physical activity. Most reliable techniques for EE estimation are too impractical for deployment in unsupervised free-living environments; those which do prove practical for unsupervised use often poorly estimate EE when the subject is working to change their altitude by walking up or down stairs or inclines. This study evaluates the augmentation of a standard triaxial accelerometry waist-worn wearable sensor with a barometric pressure sensor (as a surrogate measure for altitude) to improve EE estimates, particularly when the subject is ascending or descending stairs. Using a number of features extracted from the accelerometry and barometric pressure signals, a state space model is trained for EE estimation. An activity classification algorithm is also presented, and this activity classification output is also investigated as a model input parameter when estimating EE. This EE estimation model is compared against a similar model which solely utilizes accelerometry-derived features. A protocol (comprising lying, sitting, standing, walking, walking up stairs, walking down stairs and transitioning between activities) was performed by 13 healthy volunteers (8 males and 5 females; age: 23.8 ± 3.7 years; weight: 70.5 ± 14.9 kg), whose instantaneous oxygen uptake was measured by means of an indirect calorimetry system (K4b2, COSMED, Italy). Activity classification improves from 81.65% to 90.91% when including barometric pressure information; when analyzing walking activities alone the accuracy increases from 70.23% to 98.54%. Using features derived from both accelerometry and barometry signals, combined with features relating to the activity classification in a state space model, resulted in a V̇O2 estimation bias of −0.00095 and precision (1.96SD) of 3.54 ml min⁻¹ kg⁻¹. Using only accelerometry features gives a relatively worse performance, with a bias of −0.09 and precision (1.96SD
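The barometric-pressure-to-altitude surrogate can be sketched with the international barometric formula under a standard-atmosphere assumption; the study's actual feature extraction may differ.

```python
def altitude_m(pressure_hpa, p0_hpa=1013.25):
    """Approximate altitude above sea level from barometric pressure using
    the international barometric formula (standard-atmosphere assumption)."""
    return 44330.0 * (1.0 - (pressure_hpa / p0_hpa) ** (1.0 / 5.255))

# a pressure drop of ~0.4 hPa corresponds to roughly one storey of ascent
delta_m = altitude_m(1009.0) - altitude_m(1009.4)
```

In practice only the short-term altitude *change* matters for detecting stair ascent and descent, so the reference pressure p0 need not be calibrated precisely.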
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
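The Poisson log-normal abundance model underlying the method can be sketched as follows. This illustrates only the sampling model, not the modified likelihood itself, and the parameters are invented.

```python
import math
import random

def sample_community(n_species, mu, sigma, seed=42):
    """Draw species counts from a Poisson log-normal abundance model:
    each species' mean abundance lambda is log-normal, and its observed
    count is Poisson(lambda)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_species):
        lam = math.exp(rng.gauss(mu, sigma))
        # inverse-transform sampling of a Poisson variate (fine for modest lambda)
        u, k = rng.random(), 0
        p = math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        counts.append(k)
    return counts

counts = sample_community(100, mu=1.0, sigma=1.0)
observed_richness = sum(c > 0 for c in counts)  # species seen at least once
```

The gap between `observed_richness` and the true 100 species is exactly the sampling-intensity problem the abstract describes: rare species in the left tail of the abundance distribution go undetected.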
A Mathematical Model for Non-monotonic Deposition Profiles in Deep Bed Filtration Systems
DEFF Research Database (Denmark)
Yuan, Hao; Shapiro, Alexander
2011-01-01
A mathematical model for suspension/colloid flow in porous media and non-monotonic deposition is proposed. It accounts for the migration of particles associated with the pore walls via the second energy minimum (surface associated phase). The surface associated phase migration is characterized by advection and diffusion/dispersion. The proposed model is able to produce a non-monotonic deposition profile. A set of methods for estimating the modeling parameters is provided in the case of minimal particle release. The estimation can be easily performed with available experimental information. The numerical modeling results agree closely with the experimental observations, which proves the ability of the model to capture a non-monotonic deposition profile in practice. An additional equation describing a mobile population behaving differently from the injected population seems to be a sufficient…
Estimation of normal hydration in dialysis patients using whole body and calf bioimpedance analysis.
Zhu, Fansan; Kotanko, Peter; Handelman, Garry J; Raimann, Jochen G; Liu, Li; Carter, Mary; Kuhlmann, Martin K; Seibert, Eric; Leonard, Edward F; Levin, Nathan W
2011-07-01
Prescription of an appropriate dialysis target weight (dry weight) requires accurate evaluation of the degree of hydration. The aim of this study was to investigate whether a state of normal hydration (DW(cBIS)) as defined by calf bioimpedance spectroscopy (cBIS) and conventional whole body bioimpedance spectroscopy (wBIS) could be characterized in hemodialysis (HD) patients and normal subjects (NS). wBIS and cBIS were performed in 62 NS (33 m/29 f) and 30 HD patients (16 m/14 f) pre- and post-dialysis treatments to measure extracellular resistance and fluid volume (ECV) by the whole body and calf bioimpedance methods. Normalized calf resistivity (ρ(N,5)) was defined as resistivity at 5 kHz divided by the body mass index. The ratio of wECV to total body water (wECV/TBW) was calculated. Measurements were made at baseline (BL) and at DW(cBIS) following the progressive reduction of post-HD weight over successive dialysis treatments until the curve of calf extracellular resistance flattened (stabilization) and ρ(N,5) was in the range of NS. Blood pressures were measured pre- and post-HD treatment. ρ(N,5) in males and females differed significantly in NS. In patients, ρ(N,5) notably increased with progressive decrease in body weight, and systolic blood pressure significantly decreased pre- and post-HD between BL and DW(cBIS) respectively. Although wECV/TBW decreased between BL and DW(cBIS), the percentage of change in wECV/TBW was significantly less than that in ρ(N,5) (−5.21 ± 3.2% versus 28 ± 27%, p < 0.001). The conventional whole body technique using wECV/TBW was less sensitive than ρ(N,5) to differences in body hydration between BL and DW(cBIS).
Risk-Sensitive Control with Near Monotone Cost
International Nuclear Information System (INIS)
Biswas, Anup; Borkar, V. S.; Suresh Kumar, K.
2010-01-01
The infinite horizon risk-sensitive control problem for non-degenerate controlled diffusions is analyzed under a 'near monotonicity' condition on the running cost that penalizes large excursions of the process.
An Examination of Cooper's Test for Monotonic Trend
Hsu, Louis
1977-01-01
A statistic for testing monotonic trend that has been presented in the literature is shown not to be the binomial random variable it is contended to be, but rather it is linearly related to Kendall's tau statistic. (JKS)
A Survey on Operator Monotonicity, Operator Convexity, and Operator Means
Directory of Open Access Journals (Sweden)
Pattrawut Chansangiam
2015-01-01
Full Text Available This paper is an expository survey devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. Various characterizations of such functions are given from the viewpoint of differential analysis, in terms of matrices of divided differences. From the viewpoint of operator inequalities, various characterizations and the relationship between operator monotonicity and operator convexity are given by Hansen and Pedersen. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory asserts the correspondence between operator monotone functions and operator means.
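Operator monotonicity can be checked numerically on small examples: the square root preserves the matrix order A ⪯ B, whereas squaring does not. The matrices below are invented for illustration.

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """True if the symmetric matrix M is positive semi-definite."""
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) >= -tol))

def sqrtm_sym(M):
    """Square root of a symmetric PSD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

A = np.array([[1.0, 0.3], [0.3, 1.0]])
B = A + np.array([[1.0, 0.2], [0.2, 0.5]])     # B - A is PSD, so A <= B
sqrt_preserves_order = is_psd(sqrtm_sym(B) - sqrtm_sym(A))

# contrast: t -> t^2 is NOT operator monotone
C = np.array([[1.0, 1.0], [1.0, 1.0]])
D = C + np.array([[1.0, 0.0], [0.0, 0.0]])     # C <= D
square_preserves_order = is_psd(D @ D - C @ C)
```

That t ↦ √t is operator monotone while t ↦ t² is not is the classical motivating example for Löwner's theory.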
Directory of Open Access Journals (Sweden)
Feng Liu
2017-10-01
Full Text Available Abstract Background It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. Methods The planned analysis for the phase 2a trial for GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid “S”-shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax model and the NDLM model; both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose response curves: linear, Emax model, U-shaped model, and flat response. Results It is shown that the NDLM method is flexible and can handle a wide variety of dose responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with a higher probability of selecting ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response
Liu, Feng; Walters, Stephen J; Julious, Steven A
2017-10-02
It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial for GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax model and the NDLM model; both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose response curves: linear, Emax model, U-shaped model, and flat response. It is shown that the NDLM method is flexible and can handle a wide variety of dose responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with a higher probability of selecting ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response follows a placebo like curve, an Emax like curve, or log
Completely monotonic functions related to logarithmic derivatives of entire functions
DEFF Research Database (Denmark)
Pedersen, Henrik Laurberg
2011-01-01
The logarithmic derivative l(x) of an entire function of genus p and having only non-positive zeros is represented in terms of a Stieltjes function. As a consequence, (−1)^p (x^m l(x))^(m+p) is a completely monotonic function for all m ≥ 0. This generalizes earlier results on the complete monotonicity of functions related to Euler's psi-function. Applications to Barnes' multiple gamma functions are given.
Monotonic Loading of Circular Surface Footings on Clay
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Barari, Amin
2011-01-01
Appropriate modeling of offshore foundations under monotonic loading is a significant challenge in geotechnical engineering. This paper reports experimental and numerical analyses, specifically investigating the response of circular surface footings during monotonic loading and their elastoplastic behavior during reloading. By using the findings presented in this paper, it is possible to extend the model to simulate the vertical load-displacement response of offshore bucket foundations.
On-line learning of non-monotonic rules by simple perceptron
Inoue, Jun-ichi; Nishimori, Hidetoshi; Kabashima, Yoshiyuki
1997-01-01
We study the generalization ability of a simple perceptron which learns unlearnable rules. The rules are presented by a teacher perceptron with a non-monotonic transfer function. The student is trained in the on-line mode. The asymptotic behaviour of the generalization error is estimated under various conditions. Several learning strategies are proposed and improved to obtain the theoretical lower bound of the generalization error.
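The setting can be sketched as an on-line perceptron student trained on labels from a non-monotonic (reversed-wedge-like) teacher, with the generalization error estimated by Monte Carlo. The transfer function and all parameters below are assumptions for illustration, not the paper's exact model.

```python
import math
import random

def teacher(x, w_t, a=0.5):
    """Reversed-wedge-like teacher: the label changes sign more than once
    as the field h grows, so no simple perceptron can realize the rule."""
    h = sum(wi * xi for wi, xi in zip(w_t, x)) / math.sqrt(len(x))
    return 1 if (-a < h < 0) or (h >= a) else -1

def train_online(n=50, steps=4000, eta=0.05, seed=0):
    rng = random.Random(seed)
    w_t = [rng.gauss(0, 1) for _ in range(n)]   # teacher weights
    w = [0.0] * n                               # student weights
    for _ in range(steps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = teacher(x, w_t)
        if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]  # perceptron rule
    # Monte Carlo estimate of the generalization error
    trials, errs = 2000, 0
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(n)]
        student = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
        errs += student != teacher(x, w_t)
    return errs / trials

gen_err = train_online()
```

Because the rule is unlearnable by any linear separator, the generalization error stays bounded away from zero no matter how long the student trains, which is the regime the paper analyzes.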
Directory of Open Access Journals (Sweden)
Yunpeng Song
2015-03-01
Full Text Available Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke’s law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.
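The Hooke's-law step of the calibration reduces to dividing the balance force by the measured deflection; the readings below are invented for illustration.

```python
def spring_constant(forces_nN, deflections_nm):
    """Estimate the normal spring constant via Hooke's law, k = F / delta,
    from paired balance force (nN) and cantilever deflection (nm) readings;
    nN/nm equals N/m."""
    ks = [f / d for f, d in zip(forces_nN, deflections_nm)]
    k_mean = sum(ks) / len(ks)
    spread = max(ks) - min(ks)          # crude repeatability indicator
    return k_mean, spread

# invented readings taken while the cantilever presses on the balance pan
k, spread = spring_constant([2.0, 4.1, 5.9, 8.2], [50.0, 101.0, 149.0, 205.0])
```

A full uncertainty budget, as in the paper, would also fold in the optical-lever sensitivity and balance calibration uncertainties rather than just the spread of repeated readings.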
Moduli and Characteristics of Monotonicity in Some Banach Lattices
Directory of Open Access Journals (Sweden)
Miroslav Krbec
2010-01-01
Full Text Available First the characteristic of monotonicity of any Banach lattice X is expressed in terms of the left limit of the modulus of monotonicity of X at the point 1. It is also shown that for Köthe spaces the classical characteristic of monotonicity is the same as the characteristic of monotonicity corresponding to another modulus of monotonicity, δ^m,E. The characteristics of monotonicity of Orlicz function spaces and Orlicz sequence spaces equipped with the Luxemburg norm are calculated. In the function-space case the characteristic is expressed in terms of the generating Orlicz function only, but in the sequence case the formula is not so direct. Three examples show why such a direct formula is hardly possible in the sequence case. Some other auxiliary and complementary results are also presented. By the results of Betiuk-Pilarska and Prus (2008), which establish that Banach lattices X with ε0,m(X) < 1 and the weak orthogonality property have the weak fixed point property, our results are related to fixed point theory (Kirk and Sims (2001)).
Specific non-monotonous interactions increase persistence of ecological networks.
Yan, Chuan; Zhang, Zhibin
2014-03-22
The relationship between stability and biodiversity has long been debated in ecology due to opposing empirical observations and theoretical predictions. Species interaction strength is often assumed to be monotonically related to population density, but the effects on stability of ecological networks of non-monotonous interactions that change signs have not been investigated previously. We demonstrate that for four kinds of non-monotonous interactions, shifting signs to negative or neutral interactions at high population density increases persistence (a measure of stability) of ecological networks, while for the other two kinds of non-monotonous interactions shifting signs to positive interactions at high population density decreases persistence of networks. Our results reveal a novel mechanism of network stabilization caused by specific non-monotonous interaction types through either increasing stable equilibrium points or reducing unstable equilibrium points (or both). These specific non-monotonous interactions may be important in maintaining stable and complex ecological networks, as well as other networks such as genes, neurons, the internet and human societies.
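A sign-shifting interaction of the kind analyzed above can be sketched as an interaction strength that turns from positive to negative beyond a density threshold; the linear form and numbers are assumptions, not from the study.

```python
def interaction_strength(density, alpha=0.8, threshold=5.0):
    """Non-monotonous interspecific interaction: positive (facilitative) at
    low population density, shifting sign to negative (competitive) once
    density exceeds the threshold."""
    return alpha * (1.0 - density / threshold)

low = interaction_strength(1.0)    # below threshold: positive interaction
high = interaction_strength(8.0)   # above threshold: sign has flipped
```

Embedding such density-dependent coefficients in a community dynamics model is what creates the extra stable equilibria the abstract describes.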
Estimation of normal hydration in dialysis patients using whole body and calf bioimpedance analysis
International Nuclear Information System (INIS)
Zhu, Fansan; Kotanko, Peter; Handelman, Garry J; Raimann, Jochen G; Liu, Li; Carter, Mary; Kuhlmann, Martin K; Seibert, Eric; Levin, Nathan W; Leonard, Edward F
2011-01-01
Prescription of an appropriate dialysis target weight (dry weight) requires accurate evaluation of the degree of hydration. The aim of this study was to investigate whether a state of normal hydration (DW(cBIS)) as defined by calf bioimpedance spectroscopy (cBIS) and conventional whole body bioimpedance spectroscopy (wBIS) could be characterized in hemodialysis (HD) patients and normal subjects (NS). wBIS and cBIS were performed in 62 NS (33 m/29 f) and 30 HD patients (16 m/14 f) pre- and post-dialysis treatments to measure extracellular resistance and fluid volume (ECV) by the whole body and calf bioimpedance methods. Normalized calf resistivity (ρ(N,5)) was defined as resistivity at 5 kHz divided by the body mass index. The ratio of wECV to total body water (wECV/TBW) was calculated. Measurements were made at baseline (BL) and at DW(cBIS) following the progressive reduction of post-HD weight over successive dialysis treatments until the curve of calf extracellular resistance flattened (stabilization) and ρ(N,5) was in the range of NS. Blood pressures were measured pre- and post-HD treatment. ρ(N,5) in males and females differed significantly in NS. In patients, ρ(N,5) notably increased with progressive decrease in body weight, and systolic blood pressure significantly decreased pre- and post-HD between BL and DW(cBIS) respectively. Although wECV/TBW decreased between BL and DW(cBIS), the percentage of change in wECV/TBW was significantly less than that in ρ(N,5) (−5.21 ± 3.2% versus 28 ± 27%, p < 0.001). This establishes the use of ρ(N,5) as a new comparator allowing a clinician to incrementally monitor removal of extracellular fluid from patients over the course of dialysis treatments. The conventional whole body technique using wECV/TBW was less sensitive than the use of ρ(N,5) to measure differences in body hydration between BL and DW(cBIS).
Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients
Matevosyan, Norayr
2010-10-21
In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2) 155 (2002)] and Caffarelli and Kenig [Amer. J. Math. 120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u± ≥ 0, Lu± ≥ −1, u+ · u− = 0 in an infinite strip (global version) or a finite parabolic cylinder (localized version), where L is a uniformly parabolic operator Lu = L_{A,b,c}u := div(A(x,s)∇u) + b(x,s) · ∇u + c(x,s)u − ∂_s u with double Dini continuous A and uniformly bounded b and c. We also prove the elliptic counterpart of this estimate. This closes the gap between the known conditions in the literature (both in the elliptic and parabolic case) imposed on u± in order to obtain an almost monotonicity estimate. At the end of the paper, we demonstrate how to use this new almost monotonicity formula to prove the optimal C^{1,1}-regularity in a fairly general class of quasi-linear obstacle-type free boundary problems. © 2010 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
N. A. Siddiqui
2011-06-01
Full Text Available Underground concrete barriers are frequently used to protect strategic structures, like nuclear power plants (NPPs), that lie deep under the soil against any possible high-velocity missile impact. For a given range and type of missile (or projectile), it is of paramount importance to examine the reliability of underground concrete barriers under the expected uncertainties in the missile, concrete, and soil parameters. In this paper, a simple procedure for the reliability assessment of underground concrete barriers against normal missile impact is presented using the First Order Reliability Method (FORM). The presented procedure is illustrated by applying it to a concrete barrier that lies at a certain depth in the soil. Some parametric studies are also conducted to obtain the design values which make the barrier as reliable as desired.
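For the special case of a linear limit state g = R − S with independent normal resistance R and load effect S, the FORM reliability index has a closed form and Pf = Φ(−β). The sketch below uses invented barrier statistics, not values from the paper.

```python
from statistics import NormalDist

def reliability_index(mu_r, sd_r, mu_s, sd_s):
    """Reliability index beta and failure probability Pf for the linear
    limit state g = R - S with independent normal R (resistance) and
    S (load effect); for this case FORM is exact and Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / (sd_r ** 2 + sd_s ** 2) ** 0.5
    return beta, NormalDist().cdf(-beta)

# invented barrier statistics: perforation resistance vs. impact demand
beta, pf = reliability_index(mu_r=500.0, sd_r=50.0, mu_s=300.0, sd_s=60.0)
```

For the non-linear, non-normal limit states of a real missile-impact problem, FORM instead iterates to the design point in standard normal space (e.g. the Hasofer-Lind-Rackwitz-Fiessler scheme), but the closed form above conveys the idea.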
DEFF Research Database (Denmark)
Franco, Antonio; Trapp, Stefan
2008-01-01
The sorption of organic electrolytes to soil was investigated. A dataset consisting of 164 electrolytes, composed of 93 acids, 65 bases, and six amphoters, was collected from literature and databases. The partition coefficient log KOW of the neutral molecule and the dissociation constant pKa were … calculated by the software ACD/Labs®. The Henderson-Hasselbalch equation was applied to calculate dissociation. Regressions were developed to predict separately for the neutral and the ionic molecule species the distribution coefficient (Kd) normalized to organic carbon (KOC) from log KOW and pKa. The log … KOC of strong acids (pKa correlated to these parameters. The regressions derived for weak acids and bases (undissociated at environmental pH) were similar. The highest sorption was found for strong bases (pKa > 7.5), probably due to electrical interactions. Nonetheless, their log KOC …
Energy efficiency estimation of a steam powered LNG tanker using normal operating data
Directory of Open Access Journals (Sweden)
Sinha Rajendra Prasad
2016-01-01
Full Text Available A ship’s energy efficiency performance is generally estimated by conducting special sea trials of a few hours under very controlled environmental conditions of calm sea, standard draft and optimum trim. This indicator is then used as the benchmark for future reference of the ship’s Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions far removed from the original sea trial conditions, and therefore comparing energy performance with the benchmark performance indicator is not truly valid. In such situations a higher fuel consumption reading from the ship fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low-load operations or incorrect trim [1]. Thus a better and more accurate approach to determine the energy efficiency of the ship attributable only to main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship’s fuel consumption [2]. The author in this paper identifies parameters of a suitable filter to be used on the daily report data of a typical LNG tanker of 33000 kW shaft power to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data has then been used to estimate the actual fuel efficiency of the ship and compared with the sea trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than those obtained from the monthly mini trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship, thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is
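The filtering idea described above can be sketched as a simple threshold filter over daily report records; the field names and threshold values below are illustrative assumptions, not the paper's actual filter parameters:

```python
# Hypothetical daily-report records; fields and values are illustrative only.
daily_reports = [
    {"speed_kn": 19.2, "wind_bf": 3, "load_pct": 92, "fuel_t": 148.0},
    {"speed_kn": 12.1, "wind_bf": 7, "load_pct": 55, "fuel_t": 131.0},  # heavy weather
    {"speed_kn": 18.8, "wind_bf": 4, "load_pct": 90, "fuel_t": 150.5},
    {"speed_kn": 10.4, "wind_bf": 2, "load_pct": 35, "fuel_t":  96.0},  # low-load ops
]

def is_standard(rec, min_speed=15.0, max_wind=5, min_load=80):
    """Keep only days resembling steady, sea-trial-like steaming."""
    return (rec["speed_kn"] >= min_speed
            and rec["wind_bf"] <= max_wind
            and rec["load_pct"] >= min_load)

filtered = [r for r in daily_reports if is_standard(r)]
mean_fuel = sum(r["fuel_t"] for r in filtered) / len(filtered)
```

Only the two "standard" days survive the filter, so the mean daily fuel consumption is computed from comparable operating conditions rather than from the raw log.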
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.
Savalei, Victoria; Rhemtulla, Mijke
2017-08-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
Concentrations of proanthocyanidins in common foods and estimations of normal consumption.
Gu, Liwei; Kelm, Mark A; Hammerstone, John F; Beecher, Gary; Holden, Joanne; Haytowitz, David; Gebhardt, Susan; Prior, Ronald L
2004-03-01
Proanthocyanidins (PAs) have been shown to have potential health benefits. However, no data exist concerning their dietary intake. Therefore, PAs in common and infant foods from the U.S. were analyzed. On the bases of our data and those from the USDA's Continuing Survey of Food Intakes by Individuals (CSFII) of 1994-1996, the mean daily intake of PAs in the U.S. population (>2 y old) was estimated to be 57.7 mg/person. Monomers, dimers, trimers, and those above trimers contribute 7.1, 11.2, 7.8, and 73.9% of total PAs, respectively. The major sources of PAs in the American diet are apples (32.0%), followed by chocolate (17.9%) and grapes (17.8%). The 2- to 5-y-old age group (68.2 mg/person) and men >60 y old (70.8 mg/person) consume more PAs daily than other groups because they consume more fruit. The daily intake of PAs for 4- to 6-mo-old and 6- to 10-mo-old infants was estimated to be 1.3 mg and 26.9 mg, respectively, based on the recommendations of the American Academy of Pediatrics. This study supports the concept that PAs account for a major fraction of the total flavonoids ingested in Western diets.
Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J
Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) one, in which the activation of motor units is carried out by electrical stimulation and the estimate is obtained by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to provide a comprehensible view of the methodology of the Poisson MUNE technique and to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers, with ages ranging from 10 to 88 years, were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for all of them was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group. The MUNE showed a stronger correlation with age than the CMAP amplitude did (0.5002 and 0.4142, respectively), in keeping with the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.
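The Poisson idea behind statistical MUNE can be sketched in a few lines: if the number of units activated at a fixed submaximal stimulus is Poisson, the mean single-unit amplitude can be estimated as variance/mean of the scaled responses, and MUNE as the maximal CMAP over that unit size. The data and estimator details below are illustrative assumptions, not the Viking IV implementation:

```python
def poisson_mune(responses_mv, cmap_max_mv):
    """Statistical (Poisson) MUNE sketch: for Poisson-distributed unit
    counts, variance == mean * single-unit amplitude, so the unit size
    is variance/mean of the submaximal responses."""
    n = len(responses_mv)
    mean = sum(responses_mv) / n
    var = sum((x - mean) ** 2 for x in responses_mv) / n
    unit_amp = var / mean
    return cmap_max_mv / unit_amp

# Synthetic submaximal CMAP responses (mV) and a maximal CMAP of 8 mV
responses = [0.45, 0.60, 0.50, 0.55, 0.40, 0.65, 0.50, 0.55]
estimate = poisson_mune(responses, 8.0)
```

In practice many response scans at several stimulus levels are pooled, and outlier increments are excluded before the variance/mean ratio is taken.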
[Estimation of the atrioventricular time interval by pulse Doppler in the normal fetal heart].
Hamela-Olkowska, Anita; Dangel, Joanna
2009-08-01
To assess normative values of the fetal atrioventricular (AV) time interval by pulsed-wave Doppler methods on the 5-chamber view. Fetal echocardiography exams were performed using an Acuson Sequoia 512 in 140 singleton fetuses at 18 to 40 weeks of gestation with sinus rhythm and normal cardiac and extracardiac anatomy. Pulsed Doppler derived AV intervals were measured from the left ventricular inflow/outflow view using a transabdominal convex 3.5-6 MHz probe. The values of the AV time interval ranged from 100 to 150 ms (mean 123 +/- 11.2). The AV interval was negatively correlated with the heart rate and with gestational age (p=0.007). However, within the same subgroup of fetal heart rate there was no relation between AV intervals and gestational age. Therefore, the AV intervals showed only the heart rate dependence. The 95th percentiles of AV intervals according to FHR ranged from 135 to 148 ms. 1. The AV interval duration was negatively correlated with the heart rate. 2. Measurement of the AV time interval is easy to perform and has good reproducibility. It may be used for fetal heart block screening in anti-Ro and anti-La positive pregnancies. 3. Normative values established in the study may help obstetricians in assessing fetal abnormalities of AV conduction.
Automated estimation of abdominal effective diameter for body size normalization of CT dose.
Cheng, Phillip M
2013-06-01
Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
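A common definition of effective diameter, of which the paper's heuristic is a variant, is the diameter of a circle with the same area as the patient cross-section. A minimal sketch on an invented toy pixel mask (the real method must first segment the patient from table and air):

```python
import math

def effective_diameter_cm(mask, pixel_mm=0.75):
    """Effective diameter of a CT cross-section: the diameter of the
    circle whose area equals the patient region (mask of 0/1 pixels)."""
    area_mm2 = sum(sum(row) for row in mask) * pixel_mm ** 2
    return 2.0 * math.sqrt(area_mm2 / math.pi) / 10.0  # mm -> cm

# Toy mask: a 400 x 400 block of "patient" pixels (illustrative only)
mask = [[1] * 400 for _ in range(400)]
d = effective_diameter_cm(mask)
```

With real images the mask would come from thresholding and connected-component analysis on the CT slice, and the pixel spacing from the DICOM header.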
Exact, time-independent estimation of clone size distributions in normal and mutated cells.
Roshan, A; Jones, P H; Greenman, C D
2014-10-06
Biological tools such as genetic lineage tracing, three-dimensional confocal microscopy and next-generation DNA sequencing are providing new ways to quantify the distribution of clones of normal and mutated cells. Understanding population-wide clone size distributions in vivo is complicated by multiple cell types within observed tissues, and overlapping birth and death processes. This has led to the increased need for mathematically informed models to understand their biological significance. Standard approaches usually require knowledge of clonal age. We show that modelling on clone size independent of time is an alternative method that offers certain analytical advantages; it can help parametrize these models, and obtain distributions for counts of mutated or proliferating cells, for example. When applied to a general birth-death process common in epithelial progenitors, this takes the form of a gambler's ruin problem, the solution of which relates to counting Motzkin lattice paths. Applying this approach to mutational processes, alternative, exact, formulations of classic Luria-Delbrück-type problems emerge. This approach can be extended beyond neutral models of mutant clonal evolution. Applications of these approaches are twofold. First, we resolve the probability of progenitor cells generating proliferating or differentiating progeny in clonal lineage tracing experiments in vivo or cell culture assays where clone age is not known. Second, we model mutation frequency distributions that deep sequencing of subclonal samples produce.
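The Motzkin lattice-path counts that appear in the gambler's-ruin solution satisfy a simple recurrence; a short sketch:

```python
def motzkin(n_max):
    """First n_max+1 Motzkin numbers via the standard recurrence
    M(0) = 1, M(n+1) = M(n) + sum_{k=0}^{n-1} M(k) * M(n-1-k)."""
    m = [1]
    for n in range(n_max):
        m.append(m[n] + sum(m[k] * m[n - 1 - k] for k in range(n)))
    return m

first_six = motzkin(5)  # counts of Motzkin paths of length 0..5
```

M(n) counts lattice paths from (0, 0) to (n, 0) with steps up, down, and flat that never dip below the axis, which is the combinatorial object underlying the birth-death path counting mentioned above.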
International Nuclear Information System (INIS)
Stenmark, Matthew H.; Cao, Yue; Wang, Hesheng; Jackson, Andrew; Ben-Josef, Edgar; Ten Haken, Randall K.; Lawrence, Theodore S.; Feng, Mary
2014-01-01
Purpose: To estimate the limit of functional liver reserve for safe application of hepatic irradiation using changes in indocyanine green, an established assay of liver function. Materials and methods: From 2005 to 2011, 60 patients undergoing hepatic irradiation were enrolled in a prospective study assessing the plasma retention fraction of indocyanine green at 15 min (ICG-R15) prior to, during (at 60% of planned dose), and after radiotherapy (RT). The limit of functional liver reserve was estimated from the damage fraction of functional liver (DFL) post-RT [1 − (ICG-R15 pre-RT/ICG-R15 post-RT)] where no toxicity was observed, using a beta distribution function. Results: Of 48 evaluable patients, 3 (6%) developed RILD, all within 2.5 months of completing RT. The mean ICG-R15 for non-RILD patients pre-RT, during RT and 1 month post-RT was 20.3% (SE 2.6), 22.0% (3.0), and 27.5% (2.8), and for RILD patients was 6.3% (4.3), 10.8% (2.7), and 47.6% (8.8). RILD was observed at post-RT damage fractions of ⩾78%. Both DFL assessed by during-RT ICG and MLD predicted DFL post-RT (p < 0.0001). Limiting the post-RT DFL to 50% predicted a 99% probability of a true complication rate <15%. Conclusion: The DFL as assessed by changes in ICG during treatment serves as an early indicator of a patient's tolerance to hepatic irradiation.
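The damage-fraction formula in the abstract is directly computable; a minimal sketch using the mean ICG-R15 values reported above:

```python
def damage_fraction(icg_r15_pre, icg_r15_post):
    """Damage fraction of functional liver, DFL = 1 - (pre/post),
    from the ICG 15-min retention before and after RT."""
    return 1.0 - icg_r15_pre / icg_r15_post

# Mean values reported in the abstract
dfl_rild = damage_fraction(6.3, 47.6)      # RILD patients
dfl_no_rild = damage_fraction(20.3, 27.5)  # non-RILD patients
```

With these group means the RILD patients sit above the ⩾78% damage-fraction threshold reported for toxicity, while the non-RILD group falls well below the proposed 50% limit.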
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
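The step-size-1 case of the successive-approximations procedure above can be read as the familiar EM fixed-point iteration for normal mixtures. A minimal hedged sketch for two components, simplified to fixed unit variances and equal weights (the papers treat the general mixture problem):

```python
import math
import random

def em_two_normals(xs, iters=200):
    """EM fixed-point iteration (the step-size-1 case) for a two-component
    normal mixture, estimating only the means; unit variances and equal
    weights are assumed to keep the sketch short."""
    m1, m2 = min(xs), max(xs)  # crude initialisation
    for _ in range(iters):
        r = []  # responsibility of component 1 for each point
        for x in xs:
            p1 = math.exp(-0.5 * (x - m1) ** 2)
            p2 = math.exp(-0.5 * (x - m2) ** 2)
            r.append(p1 / (p1 + p2))
        m1 = sum(ri * x for ri, x in zip(r, xs)) / sum(r)
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / sum(1 - ri for ri in r)
    return m1, m2

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 1.0) for _ in range(300)]
m1, m2 = em_two_normals(data)
```

With well-separated components, as here, the iteration converges quickly; the papers' result concerns exactly how the local convergence rate depends on that separation and on the step size.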
de Almeida, Maurício Liberal; Saatkamp, Cassiano Junior; Fernandes, Adriana Barrinha; Pinheiro, Antonio Luiz Barbosa; Silveira, Landulfo
2016-09-01
Urea and creatinine are commonly used as biomarkers of renal function. Abnormal concentrations of these biomarkers are indicative of pathological processes such as renal failure. This study aimed to develop a model based on Raman spectroscopy to estimate the concentration values of urea and creatinine in human serum. Blood sera from 55 clinically normal subjects and 47 patients with chronic kidney disease undergoing dialysis were collected, and concentrations of urea and creatinine were determined by spectrophotometric methods. A Raman spectrum was obtained with a high-resolution dispersive Raman spectrometer (830 nm). A spectral model was developed based on partial least squares (PLS), where the concentrations of urea and creatinine were correlated with the Raman features. Principal components analysis (PCA) was used to discriminate dialysis patients from normal subjects. The PLS model showed r = 0.97 and r = 0.93 for urea and creatinine, respectively. The root mean square errors of cross-validation (RMSECV) for the model were 17.6 and 1.94 mg/dL, respectively. PCA showed high discrimination between dialysis and normality (95 % accuracy). The Raman technique was able to determine the concentrations with low error and to discriminate dialysis from normal subjects, consistent with a rapid and low-cost test.
Age- and sex-dependent model for estimating radioiodine dose to a normal thyroid
International Nuclear Information System (INIS)
Killough, G.G.; Eckerman, K.F.
1985-01-01
This paper describes the derivation of an age- and sex-dependent model of radioiodine dosimetry in the thyroid and the application of the model to estimating the thyroid dose for each of 4215 patients who were exposed to ¹³¹I in diagnostic and therapeutic procedures. The model was made to conform to these data requirements by the use of age-specific estimates of the biological half-time of iodine in the thyroid and an age- and sex-dependent representation of the mass of the thyroid. Also, it was assumed that the thyroid burden reached its maximum 24 hours after administration (the ¹³¹I dose is not critically sensitive to this assumption). The metabolic model is of the form A(t) = K[exp(−μ₁t) − exp(−μ₂t)] (μCi), where μᵢ = λᵣ + λᵢᵇ (i = 1, 2), λᵣ is the radiological decay-rate coefficient, and the λᵢᵇ are biological removal-rate coefficients. The values of λᵢᵇ are determined by solving a nonlinear equation that depends on assumptions about the time of maximum uptake and the eventual biological loss rate (through which age dependence enters). The value of K may then be calculated from knowledge of the uptake at a particular time. The dosimetric S-factor (rad/μCi-day) is based on specific absorbed fractions for photons of energy ranging from 0.01 to 4.0 MeV for thyroid masses from 1.29 to 19.6 g; the functional form of the S-factor also involves the thyroid mass explicitly, through which the dependence on age and sex enters. An analysis of the sensitivity of the model to uncertainties in the thyroid mass and the biological removal rate for several age groups is reported. The model could prove useful in the dosimetry of very short-lived radioiodines. Tables of age- and sex-dependent coefficients are provided to enable readers to make their own calculations. 12 refs., 5 figs., 4 tabs
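The two-exponential retention model is easy to evaluate once the rate coefficients are fixed; a minimal sketch with invented coefficients (illustrative values, not the paper's age-specific fits):

```python
import math

def activity(t_days, k, mu1, mu2):
    """Thyroid burden A(t) = K [exp(-mu1 t) - exp(-mu2 t)] (uCi)."""
    return k * (math.exp(-mu1 * t_days) - math.exp(-mu2 * t_days))

def k_from_uptake(a_obs, t_obs, mu1, mu2):
    """Solve for K given one observed burden a_obs at time t_obs,
    as described in the abstract."""
    return a_obs / (math.exp(-mu1 * t_obs) - math.exp(-mu2 * t_obs))

# Illustrative rate coefficients (1/day), not fitted values:
mu1, mu2 = 0.095, 2.0  # slow clearance and fast uptake components
k = k_from_uptake(a_obs=5.0, t_obs=1.0, mu1=mu1, mu2=mu2)  # uptake at 24 h
```

The burden is zero at t = 0, rises to a maximum near 24 h for coefficients like these, and then decays with the slower rate, matching the qualitative shape the model assumes.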
International Nuclear Information System (INIS)
Veroustraete, F.; Patyn, J.; Myneni, R.B.
1996-01-01
The evaluation and prediction of changes in carbon dynamics at the ecosystem level is a key issue in studies of global change. An operational concept for the determination of carbon fluxes for the Belgian territory is the goal of the presented study. The approach is based on the integration of remotely sensed data into ecosystem models in order to evaluate photosynthetic assimilation and net ecosystem exchange (NEE). Remote sensing can be developed as an operational tool to determine the fraction of absorbed photosynthetically active radiation (fPAR). A review of the methodological approach of mapping fPAR dynamics at the regional scale by means of NOAA-11 AVHRR/2 data for the year 1990 is given. The processing sequence from raw radiance values to fPAR is presented. An interesting aspect of incorporating remote sensing derived fPAR in ecosystem models is the potential for modeling actual as opposed to potential vegetation. Further work should prove whether the concepts presented and the assumptions made in this study are valid. Complex ecosystem models with a highly predictive value for a specific ecosystem are generally not suitable for global or regional applications, since they require a substantial set of ancillary data that becomes increasingly larger with increasing complexity of the model. The ideal model for our purpose is one that is simple enough to be used in global scale modeling and that can be adapted for different ecosystems or vegetation types. The fPAR during the growing season determines in part net photosynthesis and phytomass production (Ruimy, 1995). Remotely measured red and near-infrared spectral reflectances can be used to estimate fPAR. Therefore, a possible approach is to estimate net photosynthesis, phytomass, and NEE from a combination of satellite data and an ecosystem model that includes carbon dynamics. It has to be stated that some parts of the work presented in this
Cannon, Alex
2017-04-01
Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. However, it is fundamentally a
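The Mann-Kendall test mentioned above is built from the pairwise-sign statistic S; a minimal single-station sketch without the tie correction (real rainfall series need the tie-adjusted variance and serial-correlation handling):

```python
import math

def mann_kendall(xs):
    """Mann-Kendall test for a monotonic trend (no tie correction).
    Returns the S statistic and its normal-approximation z score."""
    n = len(xs)
    s = sum((xs[j] > xs[i]) - (xs[j] < xs[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series: every pair concordant, so S = n(n-1)/2
s, z = mann_kendall(list(range(20)))
```

The regional variant pools S and its variance across stations; the monotone quantile regression approach discussed above goes further by also estimating the shape of the trend, which the sign-based test cannot do.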
Estimating Recovery Failure Probabilities in Off-normal Situations from Full-Scope Simulator Data
Energy Technology Data Exchange (ETDEWEB)
Kim, Yochan; Park, Jinkyun; Kim, Seunghwan; Choi, Sun Yeong; Jung, Wondea [Korea Atomic Research Institute, Daejeon (Korea, Republic of)
2016-10-15
As part of this effort, KAERI developed the Human Reliability data EXtraction (HuREX) framework and is collecting full-scope simulator-based human reliability data into the OPERA (Operator PErformance and Reliability Analysis) database. In this study, continuing the series of estimation studies on HEPs and PSF effects, recovery failure probabilities (RFPs), significant inputs for a quantitative HRA, were produced from the OPERA database. Unsafe acts can occur at any time in safety-critical systems, and operators often manage the systems by discovering their errors and eliminating or mitigating them. To model recovery processes or recovery strategies, several studies have categorized recovery behaviors. Because recent human error trends need to be considered during a human reliability analysis, the work of Jang et al. can be seen as an essential data collection effort. However, since those empirical results regarding soft controls were produced in a controlled laboratory environment with student participants, it is necessary to analyze a wider range of operator behaviors using full-scope simulators. This paper presents statistics on human error recovery behaviors obtained from full-scope simulations in which on-site operators participated. In this study, recovery effects by shift changes or technical support centers were not considered owing to a lack of simulation data.
Human neural tuning estimated from compound action potentials in normal hearing human volunteers
Verschooten, Eric; Desloovere, Christian; Joris, Philip X.
2015-12-01
The sharpness of cochlear frequency tuning in humans is debated. Evoked otoacoustic emissions and psychophysical measurements suggest sharper tuning in humans than in laboratory animals [15], but this is disputed based on comparisons of behavioral and electrophysiological measurements across species [14]. Here we used evoked mass potentials to electrophysiologically quantify tuning (Q10) in humans. We combined a notched-noise forward masking paradigm [9] with the recording of trans-tympanic compound action potentials (CAP) from masked probe tones in awake humans and anesthetized monkeys (Macaca mulatta). We compare our results to data obtained with the same paradigm in cat and chinchilla [16], and find that CAP Q10 values in human are ˜1.6x higher than in cat and chinchilla and ˜1.3x higher than in monkey. To estimate the frequency tuning of single auditory nerve fibers (ANFs) in humans, we derive conversion functions from ANFs in cat, chinchilla, and monkey and apply these to the human CAP measurements. The data suggest that sharp cochlear tuning is a feature of old-world primates.
Suction caissons subjected to monotonic combined loading
DEFF Research Database (Denmark)
Penzes, P.; Jensen, M.R.; Zania, Varvara
2016-01-01
Suction caissons are being increasingly used as offshore foundation solutions in shallow and intermediate water depths. The convenient installation method through the application of suction has rendered this type of foundation as an attractive alternative to the more traditional monopile foundation...... for offshore wind turbines. The combined loading imposed typically to a suction caisson has led to the estimation of their bearing capacity by means of 3D failure envelopes. This study aims to analyse the behaviour of suction caissons for offshore wind turbines subjected to combined loading. Finite element...
Age- and sex-dependent model for estimating radioiodine dose to a normal thyroid
International Nuclear Information System (INIS)
Killough, G.G.; Eckerman, K.F.
1986-01-01
This paper describes the derivation of an age- and sex-dependent model of radioiodine dosimetry in the thyroid and the application of the model to estimating the thyroid dose for each of 4215 patients who were exposed to ¹³¹I in diagnostic and therapeutic procedures. In most cases, the available data consisted of the patient's age at the time of administration, the patient's sex, the quantity of activity administered, the clinically determined uptake of radioiodine by the thyroid, and the time after administration at which the uptake was determined. The metabolic model is of the form A(t) = K[exp(−μ₁t) − exp(−μ₂t)] (μCi), where μᵢ = λᵣ + λᵢᵇ (i = 1, 2), λᵣ is the radiological decay-rate coefficient, and the λᵢᵇ are biological removal-rate coefficients. The values of λᵢᵇ are determined by solving a nonlinear equation that depends on assumptions about the time of maximum uptake and the eventual biological loss rate (through which age dependence enters). The value of K may then be calculated from knowledge of the uptake at a particular time. The dosimetric S-factor (rad/μCi-day) is based on specific absorbed fractions for photons of energy ranging from 0.01 to 4.0 MeV for thyroid masses from 1.29 to 19.6 g; the functional form of the S-factor also involves the thyroid mass explicitly, through which the dependence on age and sex enters. An analysis of the sensitivity of the model to uncertainties in the thyroid mass and the biological removal rate for several age groups is reported. 12 references, 5 figures, 5 tables
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a ten-year Normalized Difference Vegetation Index (NDVI) time series. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The PF-based nonlinear estimation is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF) and therefore linearize the harmonic model. As these parameters are representative of a given land cover, their applicability to near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated against high spatial resolution change maps of the given regions.
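A particle filter is beyond a short sketch, but the hidden parameters of the harmonic NDVI model (mean, seasonal amplitude, phase) can be illustrated with a plain least-squares first-harmonic fit on synthetic data. This is a deliberately simplified stand-in for the paper's PF estimator, with invented series parameters:

```python
import math

def first_harmonic(series, period):
    """Least-squares first-harmonic fit x(t) ~ a + b cos(wt) + c sin(wt)
    for evenly spaced samples covering whole periods (w = 2*pi/period);
    in that case the projections below are the exact LS solution."""
    n = len(series)
    w = 2.0 * math.pi / period
    a = sum(series) / n
    b = 2.0 / n * sum(x * math.cos(w * t) for t, x in enumerate(series))
    c = 2.0 / n * sum(x * math.sin(w * t) for t, x in enumerate(series))
    return a, b, c

# Synthetic 10-year NDVI-like series, 46 composites/year (8-day cadence)
period = 46
ndvi = [0.4 + 0.25 * math.cos(2 * math.pi * t / period - 1.0)
        for t in range(10 * period)]
a, b, c = first_harmonic(ndvi, period)
amp = math.hypot(b, c)     # seasonal amplitude
phase = math.atan2(c, b)   # seasonal phase (radians)
```

The fitted mean, amplitude, and phase recover the generating values exactly on this noise-free series; the PF's advantage over such batch fits (and over the EKF) lies in tracking these parameters sequentially through noise and abrupt change.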
Information flow in layered networks of non-monotonic units
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.
International Nuclear Information System (INIS)
Tyagi, K.; Jain, S.C.; Jain, P.C.
2001-01-01
ICRP Publications 53, 62 and 80 give organ dose coefficients and effective doses to ICRP Reference Man and Child from established nuclear medicine procedures. However, an average Indian adult differs significantly from the ICRP Reference Man as regards anatomical, physiological and metabolic characteristics, and is also considered to have different tissue weighting factors (called here risk factors). The masses of total body and most organs are significantly lower for the Indian adult than for his ICRP counterpart (e.g. body mass 52 and 70 kg respectively). Similarly, the risk factors are lower by 20-30% for 8 out of the 13 organs and 30-60% higher for 3 organs. In the present study, available anatomical data of Indians and their risk factors have been utilised to estimate the radiation doses from administration of commonly used 99mTc-labelled radiopharmaceuticals under normal and certain pathological conditions. The following pathological conditions have been considered for phosphates/phosphonates - high bone uptake and severely impaired kidney function; IDA - parenchymal liver disease, occlusion of cystic duct, and occlusion of bile duct; DTPA - abnormal renal function; large colloids - early to intermediate diffuse parenchymal liver disease, intermediate to advanced parenchymal liver disease; small colloids - early to intermediate parenchymal liver disease, intermediate to advanced parenchymal liver disease; and MAG3 - abnormal renal function, acute unilateral renal blockage. The estimated 'effective doses' to Indian adults are 14-21% greater than the ICRP value from administration of the same activity of radiopharmaceutical under normal physiological conditions based on anatomical considerations alone, because of the smaller organ masses for the Indian; for some pathological conditions the effective doses are 11-22% more. When tissue risk factors are considered in addition to anatomical considerations, the estimated effective doses are still found to be
Energy Technology Data Exchange (ETDEWEB)
Blackwell, L.H.; Ledney, G.D.
1982-07-01
Nucleated bone marrow cell numbers in normal and polycythemic mice were determined using ³H-thymidine (³H-TdR). The cellularities were estimated by extrapolating the exponential disappearance of labeled cells after a single injection of ³H-TdR to the time of injection. Dermestid beetles (Anthrenus piceus) were used to prepare tissue-free skeletons labeled with ³H-TdR. The correlation between tritium activity in bone marrow DNA and tritium derived from the combusted skeleton was determined. The total skeletal cellularity determined by isotope dilution analysis in both normal and polycythemic mice was 2.6 x 10⁸ cells/mouse or 17.6 x 10⁹ cells/kg body weight. Although the red cell component of the marrow was reduced in the polycythemic mouse, the total numbers of nucleated cells in both types of animals were similar. The differential distribution of cells in the polycythemic animal showed a twofold increase in granulocytic cells, which may explain the identical nucleated cell count in normal and in polycythemic mice.
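The extrapolation step described (fitting the exponential disappearance of label back to the injection time) amounts to a log-linear fit. A minimal sketch with illustrative names and synthetic data:

```python
import numpy as np

def extrapolate_to_injection(t_days, counts):
    """Fit log(counts) = log(C0) - k*t and return (C0, k):
    C0 is the extrapolated labeled-cell number at the time of injection."""
    slope, intercept = np.polyfit(t_days, np.log(counts), 1)
    return float(np.exp(intercept)), float(-slope)

# Synthetic disappearance data: C0 = 2.6e8 labeled cells, 10%/day loss rate
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
counts = 2.6e8 * np.exp(-0.1 * t)
c0_hat, k_hat = extrapolate_to_injection(t, counts)
```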
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques governed under the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method must clearly understand the population data trend, since the method does not account for outliers, which may cause apparent non-normality and make classical methods break down entirely. There exist robust parameter estimates that provide satisfactory results when the data contain outliers as well as when the data are free of them: among these are the robust location and scale estimates called Shamos-Bickel (SB) and Hodges-Lehmann (HL), used here in place of the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method and allows the robustness of the T-Method itself to be analyzed. However, the higher-sample-size case study shows that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error difference compared with the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers pose little risk, the T-Method performs better, and that with a higher sample size containing extreme outliers the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, the standard T-Method normalization gives satisfactory results, and it is not worthwhile to adapt HL and SB (or the normal mean and standard deviation) into it, since they provide only a minimal change in percentage error. Normalization using the T-Method is still considered to carry a lower risk from the effect of outliers.
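For reference, the Hodges-Lehmann location estimate and the Shamos scale estimate mentioned above can be computed directly. Definitions vary slightly across the literature (e.g. whether self-pairs are included); the pairwise i < j form below is one common convention:

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Robust location: median of all pairwise averages (i < j)."""
    return float(np.median([(a + b) / 2.0 for a, b in combinations(x, 2)]))

def shamos(x):
    """Robust scale (up to a consistency constant): median of all
    pairwise absolute differences (i < j)."""
    return float(np.median([abs(a - b) for a, b in combinations(x, 2)]))

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one extreme outlier
```

In the scheme the abstract describes, these two statistics would replace the mean and standard deviation in the T-Method normalization stage.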
International Nuclear Information System (INIS)
Storto, G.; Gallicchio, R.; Maddalena, F.; Pellegrino, T.; Petretta, M.; Fiumara, G.; Cuocolo, A.
2015-01-01
Patients with hypertension may exhibit abnormal vasodilator capacity during pharmacological vasodilatation. We assessed coronary flow reserve (CFR) by sestamibi imaging in hypertensive patients with normal coronary vessels. Twenty-five patients with untreated mild essential hypertension and normal coronary vessels and 10 control subjects underwent dipyridamole-rest Tc-99m sestamibi imaging. Myocardial blood flow (MBF) was estimated by measuring first transit counts in the pulmonary artery and myocardial counts from tomographic images. CFR was expressed as the ratio of stress to rest MBF. Coronary vascular resistances (CVR) were computed as the ratio between mean arterial pressure and MBF. Estimated MBF at rest was not different in patients and controls (1.11±0.59 vs. 1.14±0.28 counts/pixel/s; P=0.87). Conversely, stress MBF was lower in patients than in controls (1.55±0.47 vs. 2.68±0.53 counts/pixel/s; P<0.001). Thus, CFR was reduced in patients compared to controls (1.61±0.58 vs. 2.43±0.62; P<0.001). Rest and stress CVR values were higher in patients (P<0.001), while stress-induced changes in CVR were not different (P=0.08) between patients (-51%) and controls (-62%). In the overall study population, a significant relation between CFR and stress-induced changes in CVR was observed (r=-0.86; P<0.001). Sestamibi imaging may detect impaired coronary vascular function in response to dipyridamole in patients with untreated mild essential hypertension and normal coronary arteries. A mild increase in arterial blood pressure does not affect baseline MBF, but impairs coronary reserve due to the amplified resting coronary resistances.
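The two derived quantities used above are simple ratios, sketched below with the group-mean MBF values from the abstract. Note that the reported group CFR values (1.61 and 2.43) are means of per-patient ratios, so they need not equal the ratio of the group-mean MBFs computed here; the function names are ours:

```python
def coronary_flow_reserve(mbf_stress, mbf_rest):
    """CFR: ratio of stress to rest myocardial blood flow."""
    return mbf_stress / mbf_rest

def coronary_vascular_resistance(mean_arterial_pressure, mbf):
    """CVR: mean arterial pressure divided by myocardial blood flow."""
    return mean_arterial_pressure / mbf

# Group-mean MBF values from the abstract (counts/pixel/s)
cfr_from_means_controls = coronary_flow_reserve(2.68, 1.14)
cfr_from_means_patients = coronary_flow_reserve(1.55, 1.11)
```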
Iterates of piecewise monotone mappings on an interval
Preston, Chris
1988-01-01
Piecewise monotone mappings on an interval provide simple examples of discrete dynamical systems whose behaviour can be very complicated. These notes are concerned with the properties of the iterates of such mappings. The material presented can be understood by anyone who has had a basic course in (one-dimensional) real analysis. The account concentrates on the topological (as opposed to the measure theoretical) aspects of the theory of piecewise monotone mappings. As well as offering an elementary introduction to this theory, these notes also contain a more advanced treatment of the problem of classifying such mappings up to topological conjugacy.
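A minimal example of such a mapping is the tent map, which is piecewise monotone with one increasing and one decreasing branch; even short orbits already show the variety of behaviour (periodic points, eventually fixed points) these notes study. The helper names are ours:

```python
def tent(x, a=2.0):
    """Tent map on [0, 1]: increasing on [0, 1/2], decreasing on [1/2, 1]."""
    return a * x if x < 0.5 else a * (1.0 - x)

def orbit(x0, n, f=tent):
    """x0 together with its first n iterates under f."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs
```

For instance, x0 = 0.25 reaches the fixed point 0 after three iterations, while nearby starting points have very different fates, the hallmark of complicated dynamics.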
Pachhai, S.; Masters, G.; Laske, G.
2017-12-01
Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for big events as used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need of regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
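The core requirement the paper discusses, that the process noise driving a monotonic degradation state must never let the state "heal", can be illustrated with a one-line prediction step using strictly positive (here lognormal) noise. This is an illustrative sketch under our own parameter choices, not the paper's optimal unbiased formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(particles, dt=1.0, c=0.01):
    """One prediction step for a monotonic degradation state.
    Deterministic growth plus *nonnegative* (lognormal) process noise,
    so no particle can ever decrease."""
    noise = rng.lognormal(mean=-2.0, sigma=0.5, size=particles.shape)
    return particles + c * particles * dt + c * noise

particles = np.full(1000, 1.0)   # initial damage state for every particle
trajectory = [particles.copy()]
for _ in range(10):
    particles = propagate(particles)
    trajectory.append(particles.copy())
```

A zero-mean Gaussian process noise, by contrast, would let individual particles shrink, which is exactly the pathology the paper argues degrades state estimation and remaining-life prediction for monotonic dynamics.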
International Nuclear Information System (INIS)
Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.
2011-01-01
Highlights: → New semi-empirical models for predicting clear sky irradiance were developed. → The proposed models compare favorably with other empirical models. → Performance of proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear sky conditions in the tropics. The models are based on a one-year period of clear sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) located in the North of the country, Nakhon Pathom (13.82°N, 100.04°E) in the Centre and Songkhla (7.20°N, 100.60°E) in the South. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The data of Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for the data periods which were not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the Northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both global and direct normal irradiances. The performance of the models was also compared with that of other models. The performance of the models compared favorably with that of empirical models. Additionally, the accuracy of irradiances predicted from the proposed model are comparable with that obtained from some
The regularized monotonicity method: detecting irregular indefinite inclusions
DEFF Research Database (Denmark)
Garde, Henrik; Staboulis, Stratos
2018-01-01
inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...
Generalized monotonicity from global minimization in fourth-order ODEs
M.A. Peletier (Mark)
2000-01-01
textabstractWe consider solutions of the stationary Extended Fisher-Kolmogorov equation with general potential that are global minimizers of an associated variational problem. We present results that relate the global minimization property to a generalized concept of monotonicity of the solutions.
Monotone difference schemes for weakly coupled elliptic and parabolic systems
P. Matus (Piotr); F.J. Gaspar Lorenz (Franscisco); L. M. Hieu (Le Minh); V.T.K. Tuyen (Vo Thi Kim)
2017-01-01
textabstractThe present paper is devoted to the development of the theory of monotone difference schemes, approximating the so-called weakly coupled system of linear elliptic and quasilinear parabolic equations. Similarly to the scalar case, the canonical form of the vector-difference schemes is
Pathwise duals of monotone and additive Markov processes
Czech Academy of Sciences Publication Activity Database
Sturm, A.; Swart, Jan M.
-, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf
Interval Routing and Minor-Monotone Graph Parameters
Bakker, E.M.; Bodlaender, H.L.; Tan, R.B.; Leeuwen, J. van
2006-01-01
We survey a number of minor-monotone graph parameters and their relationship to the complexity of routing on graphs. In particular we compare the interval routing parameters κslir(G) and κsir(G) with Colin de Verdière's graph invariant μ(G) and its variants λ(G) and κ(G). We show that for all the
On monotonic solutions of an integral equation of Abel type
International Nuclear Information System (INIS)
Darwish, Mohamed Abdalla
2007-08-01
We present an existence theorem of monotonic solutions for a quadratic integral equation of Abel type in C[0, 1]. The famous Chandrasekhar's integral equation is considered as a special case. The concept of measure of noncompactness and a fixed point theorem due to Darbo are the main tools in carrying out our proof. (author)
POLARIZED LINE FORMATION IN NON-MONOTONIC VELOCITY FIELDS
Energy Technology Data Exchange (ETDEWEB)
Sampoorna, M.; Nagendra, K. N., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, Bengaluru 560034 (India)
2016-12-10
For a correct interpretation of the observed spectro-polarimetric data from astrophysical objects such as the Sun, it is necessary to solve the polarized line transfer problems taking into account a realistic temperature structure, the dynamical state of the atmosphere, a realistic scattering mechanism (namely, the partial frequency redistribution—PRD), and the magnetic fields. In a recent paper, we studied the effects of monotonic vertical velocity fields on linearly polarized line profiles formed in isothermal atmospheres with and without magnetic fields. However, in general the velocity fields that prevail in dynamical atmospheres of astrophysical objects are non-monotonic. Stellar atmospheres with shocks, multi-component supernova atmospheres, and various kinds of wave motions in solar and stellar atmospheres are examples of non-monotonic velocity fields. Here we present studies on the effect of non-relativistic non-monotonic vertical velocity fields on the linearly polarized line profiles formed in semi-empirical atmospheres. We consider a two-level atom model and PRD scattering mechanism. We solve the polarized transfer equation in the comoving frame (CMF) of the fluid using a polarized accelerated lambda iteration method that has been appropriately modified for the problem at hand. We present numerical tests to validate the CMF method and also discuss the accuracy and numerical instabilities associated with it.
Modelling Embedded Systems by Non-Monotonic Refinement
Mader, Angelika H.; Marincic, J.; Wupper, H.
2008-01-01
This paper addresses the process of modelling embedded sys- tems for formal verification. We propose a modelling process built on non-monotonic refinement and a number of guidelines. The outcome of the modelling process is a model, together with a correctness argument that justifies our modelling
Dondurur, Derman
2005-11-01
The Normalized Full Gradient (NFG) method was proposed in the mid 1960s and was generally used for the downward continuation of the potential field data. The method eliminates the side oscillations which appeared on the continuation curves when passing through anomalous body depth. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Some experiments were performed on the theoretical Slingram model anomalies in a free space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from NFG fields of the theoretical anomalies that the NFG sections yield the depth information of top of the conductor at low harmonic numbers. The NFG sections consisted of two main local maxima located at both sides of the central negative Slingram anomalies. It is concluded that these two maxima also locate the maximum anomaly gradient points, which indicates the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
International Nuclear Information System (INIS)
Liu, Fu-dong; Chen, Lu; Pan, Zi-qiang; Liu, Sen-lin; Chen, Ling; Wang, Chun-hong
2017-01-01
Due to the improvement of production technology and the adjustment of the energy structure, as well as the closure or merger of town-owned and privately owned coal mines (TPCM) under national policy, the number of underground miners in China has changed compared with 2004, and the collective dose and normalized collective dose for the different types of coal mine have changed with it. In this paper, according to the radiation exposure under different ventilation conditions and the annual output, the coal mines in China are divided into three types: national key coal mines (NKCM), state-owned local coal mines (SLCM) and TPCM. The number of underground coal miners, the collective dose and the normalized collective dose are estimated based on surveys of the annual output and production efficiency of raw coal in 2005-2014. The estimated total number of underground coal miners in China is 5.1 million in 2005-2009, comprising 1 million for NKCM, 0.9 million for SLCM and 3.2 million for TPCM. There are in total 4.7 million underground coal miners in 2010-2014: 1.4 million for NKCM, 1.2 million for SLCM and 2.1 million for TPCM. The collective dose in 2005-2009 is 11 335 man·Sv·y⁻¹, comprising 280, 495 and 10 560 man·Sv·y⁻¹ for NKCM, SLCM and TPCM respectively. For 2010-2014, the total is 7982 man·Sv·y⁻¹, with 392, 660 and 6930 man·Sv·y⁻¹ for the respective types of coal mine. Therefore, the main contributor to the collective dose is TPCM. The normalized collective dose in 2005-2009 is 0.0025, 0.015 and 0.117 man·Sv per 10 kt for NKCM, SLCM and TPCM, respectively. For 2010-2014, the values are 0.0018, 0.010 and 0.107 man·Sv per 10 kt for the respective types of coal mine. The normalized collective dose has decreased year by year. (authors)
Monotonicity properties of keff with shape change and with nesting
International Nuclear Information System (INIS)
Arzhanov, V.
2002-01-01
It was found that, contrary to expectations based on physical intuition, keff can both increase and decrease when changing the shape of an initially regular critical system while preserving its volume. Physical intuition would only allow for a decrease of keff when the surface/volume ratio increases. The unexpected behaviour of increasing keff was found through numerical investigation. For a convincing demonstration of the possibility of the non-monotonic behaviour, a simple geometrical proof was constructed. This latter proof, in turn, is based on the assumption that keff can only increase (or stay constant) in the case of nesting, i.e. when adding extra volume to a system. Since we found no formal proof of the nesting theorem for the general case, we close the paper with a simple formal proof of the monotonic behaviour of keff under nesting.
A Hybrid Approach to Proving Memory Reference Monotonicity
Oancea, Cosmin E.
2013-01-01
Array references indexed by non-linear expressions or subscript arrays represent a major obstacle to compiler analysis and to automatic parallelization. Most previous proposed solutions either enhance the static analysis repertoire to recognize more patterns, to infer array-value properties, and to refine the mathematical support, or apply expensive run time analysis of memory reference traces to disambiguate these accesses. This paper presents an automated solution based on static construction of access summaries, in which the reference non-linearity problem can be solved for a large number of reference patterns by extracting arbitrarily-shaped predicates that can (in)validate the reference monotonicity property and thus (dis)prove loop independence. Experiments on six benchmarks show that our general technique for dynamic validation of the monotonicity property can cover a large class of codes, incurs minimal run-time overhead and obtains good speedups. © 2013 Springer-Verlag.
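The monotonicity property at the heart of this approach can be checked at run time with a single linear scan of the subscript array. This is a deliberately simplified sketch (the paper's access summaries handle far more general, arbitrarily-shaped predicates); the function name is ours:

```python
def is_strictly_monotonic(idx):
    """True if the subscript array is strictly increasing or strictly decreasing.
    Strictly monotone subscripts touch each array element at most once, so
    accesses from different loop iterations cannot alias and the loop may be
    run in parallel."""
    inc = all(a < b for a, b in zip(idx, idx[1:]))
    dec = all(a > b for a, b in zip(idx, idx[1:]))
    return inc or dec
```

In the paper's setting this kind of check is the cheap dynamic fallback: when static analysis cannot prove monotonicity of an access summary, validating it against the actual subscripts at run time still licenses parallel execution.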
Rational functions with maximal radius of absolute monotonicity
Loczi, Lajos
2014-05-19
We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.
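For the polynomial special case (e.g. stability polynomials of explicit Runge-Kutta methods), the defining condition, all derivatives nonnegative at z = -r, can be bracketed numerically by bisection. This sketch is our own illustration, not the paper's algebraic technique, and rational functions would need the same derivative conditions with a different evaluation:

```python
import numpy as np

def abs_monotonic_at(coeffs, r, tol=1e-12):
    """True if every derivative of the polynomial (coeffs in ascending powers)
    is nonnegative at z = -r, i.e. the polynomial is absolutely monotonic there."""
    p = np.polynomial.Polynomial(coeffs)
    for _ in range(len(coeffs)):
        if p(-r) < -tol:
            return False
        p = p.deriv()
    return True

def radius_abs_monotonicity(coeffs, hi=10.0, iters=60):
    """Bisection for the largest r in [0, hi] with absolute monotonicity at -r."""
    if not abs_monotonic_at(coeffs, 0.0):
        return 0.0
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if abs_monotonic_at(coeffs, mid):
            lo = mid
        else:
            hi = mid
    return lo

# 1 + z + z^2/2: stability polynomial of the classical 2-stage, order-2 SSP method
R = radius_abs_monotonicity([1.0, 1.0, 0.5])
```

Here the binding constraint is the first derivative, 1 - r >= 0, so the bisection converges to R = 1, the well-known SSP coefficient of this method.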
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function
Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng
2008-01-01
The function 1/x^2 - e^(-x)/(1 - e^(-x))^2 for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) - e^((a-1)t)) for a…
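The claimed monotonicity is easy to sanity-check numerically (a spot check at a few points, of course not a proof):

```python
import math

def f(x):
    """f(x) = 1/x^2 - e^(-x) / (1 - e^(-x))^2, claimed strictly decreasing for x > 0."""
    return 1.0 / x**2 - math.exp(-x) / (1.0 - math.exp(-x))**2

values = [f(x) for x in (0.5, 1.0, 2.0, 4.0, 8.0)]
```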
Sampling from a Discrete Distribution While Preserving Monotonicity.
1982-02-01
in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and E[X] comparisons on average, which may prove... limitations that deserve attention: a. In general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method... uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new, having been
On a strong law of large numbers for monotone measures
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.
2013-01-01
Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf
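The Choquet integral with respect to a monotone measure (capacity), central to this line of results, can be computed with the standard level-set formula. The quadratic capacity below is just an example of a genuinely non-additive monotone set function; all names are ours:

```python
def choquet(values, capacity):
    """Choquet integral of a nonnegative function given as {element: value},
    with respect to a monotone set function `capacity` defined on frozensets."""
    items = sorted(values, key=values.get)       # elements in ascending order of value
    total, prev = 0.0, 0.0
    for i, item in enumerate(items):
        level_set = frozenset(items[i:])         # elements whose value >= current value
        total += (values[item] - prev) * capacity(level_set)
        prev = values[item]
    return total

# Example: distorted counting measure on a 3-element space
mu = lambda A: (len(A) / 3.0) ** 2
result = choquet({"a": 1.0, "b": 2.0, "c": 3.0}, mu)
```

For this example the level sets contribute 1·1 + 1·(2/3)² + 1·(1/3)² = 14/9, strictly less than the arithmetic mean-based (additive) integral 2, reflecting the sub-additivity of this particular capacity.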
Monotonous braking of high energy hadrons in nuclear matter
International Nuclear Information System (INIS)
Strugalski, Z.
1979-01-01
Propagation of high energy hadrons in nuclear matter is discussed. The possibility of the existence of monotonous energy losses of hadrons in nuclear matter is considered. Experimental facts in favour of this hypothesis, such as data on pion-nucleus interactions (proton emission spectra and proton multiplicity distributions in these interactions) and other data, are presented. The investigated phenomenon is characterized in more detail in the framework of the hypothesis.
International Nuclear Information System (INIS)
Grant, T.F.; Harris, M.S.
1989-01-01
The Nuclear Regulatory Commission's TMI Action Plan calls for a long-term plan to upgrade operating procedures in nuclear power plants. The scope of Generic Issue Human Factors 4.4, which stems from this requirement, includes the recommendation of improvements in nuclear power plant normal and abnormal operating procedures (NOPs and AOPs) and the implementation of appropriate regulatory action. This paper will describe the objectives, methodologies, and results of a Battelle-conducted value impact assessment to determine the costs and benefits of having the NRC implement regulatory action that would specify requirements for the preparation of acceptable NOPs and AOPs by the Commission's nuclear power plant licensees. The results of this value impact assessment are expressed in terms of ten cost/benefit attributes that can be affected by the NRC regulatory action. Five of these attributes require the calculation of change in public risk that could be expected to result from the action which, in this case, required determining the safety significance of NOPs and AOPs. In order to estimate this safety significance, a multi-step methodology was created that relies on an existing Probabilistic Risk Assessment (PRA) to provide a quantitative framework for modeling the role of operating procedures. The purpose of this methodology is to determine what impact the improvement of NOPs and AOPs would have on public health and safety
Energy Technology Data Exchange (ETDEWEB)
Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung [Dept. of Radiology and Research Institute of Radiological Science, Severance Children's Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Kim, Ji Eun [Biostatistics Collaboration Unit, Yonsei University College of Medicine, Seoul (Korea, Republic of)]
2013-08-15
To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate the renal sizes using the anthropometric measurements. We examined 794 Korean children under 18 years of age, including a total of 394 boys and 400 girls without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 x L x D x W (cm³). Anthropometric indices including height (cm), weight (kg) and body mass index (kg/m²) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate the renal length and the volume with those anthropometric indices that were most strongly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R², 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R², 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R², 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 x Height (± 1.135) and = 2.374 + 0.047 x Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 x Weight (± 15.920) and = 7.303 + 1.532 x Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established from Korean children and simple equations between them have been developed for use in clinical practice.
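The regression equations reported above translate directly into code. The coefficients are taken verbatim from the abstract; the function names, and the restriction to point estimates without the ± 95% ranges, are ours:

```python
def renal_length_cm(height_cm, side):
    """Estimated renal length (cm) from the study's regression on height.
    Right kidney: 2.383 + 0.045 x Height; left kidney: 2.374 + 0.047 x Height."""
    if side == "right":
        return 2.383 + 0.045 * height_cm
    return 2.374 + 0.047 * height_cm

def renal_volume_cm3(weight_kg, side):
    """Estimated renal volume (cm^3) from the study's regression on weight.
    Right kidney: 7.941 + 1.246 x Weight; left kidney: 7.303 + 1.532 x Weight."""
    if side == "right":
        return 7.941 + 1.246 * weight_kg
    return 7.303 + 1.532 * weight_kg
```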
International Nuclear Information System (INIS)
Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung; Kim, Ji Eun
2013-01-01
To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate the renal sizes using the anthropometric measurements. We examined 794 Korean children under 18 years of age, including a total of 394 boys and 400 girls without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 x L x D x W (cm 3 ). Anthropometric indices including height (cm), weight (kg) and body mass index (kg/m 2 ) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate the renal length and the volume from those anthropometric indices that were most strongly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R2, 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R2, 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R2, 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 x Height (± 1.135) and = 2.374 + 0.047 x Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 x Weight (± 15.920) and = 7.303 + 1.532 x Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established from Korean children, and simple equations between them have been developed for use in clinical practice.
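The regression equations reported in the abstract above can be applied directly. A minimal sketch (the function names are ours, not from the paper; coefficients and 95% ranges are taken verbatim from the abstract):

```python
def estimate_renal_length_cm(height_cm, side="right"):
    """Estimated renal length (cm) from patient height (cm)."""
    if side == "right":
        return 2.383 + 0.045 * height_cm   # 95% range: +/- 1.135
    return 2.374 + 0.047 * height_cm       # left kidney, +/- 1.173

def estimate_renal_volume_cm3(weight_kg, side="right"):
    """Estimated renal volume (cm^3) from patient weight (kg)."""
    if side == "right":
        return 7.941 + 1.246 * weight_kg   # 95% range: +/- 15.920
    return 7.303 + 1.532 * weight_kg       # left kidney, +/- 18.704

def ellipsoid_volume_cm3(length, depth, width):
    """Ultrasound kidney volume as in the study: 0.523 x L x D x W."""
    return 0.523 * length * depth * width
```

For example, a 120 cm tall child would have an estimated right renal length of about 7.8 cm under these equations.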
Semiparametric approach for non-monotone missing covariates in a parametric regression model
Sinha, Samiran
2014-02-26
Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism, where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.
Directory of Open Access Journals (Sweden)
Chowdhury Mohammad SR
2000-01-01
Full Text Available Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We shall use the concept of escaping sequences introduced by Border (Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we shall obtain existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.
Commutative $C^*$-algebras and $\\sigma$-normal morphisms
de Jeu, Marcel
2003-01-01
We prove in an elementary fashion that the image of a commutative monotone $\\sigma$-complete $C^*$-algebra under a $\\sigma$-normal morphism is again monotone $\\sigma$-complete and give an application of this result in spectral theory.
A non-parametric test for partial monotonicity in multiple regression
van Beek, M.; Daniëls, H.A.M.
Partial positive (negative) monotonicity in a dataset is the property that an increase in an independent variable, ceteris paribus, generates an increase (decrease) in the dependent variable. A test for partial monotonicity in datasets could (1) increase model performance if monotonicity may be
In some symmetric spaces monotonicity properties can be reduced to the cone of rearrangements
Czech Academy of Sciences Publication Activity Database
Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav
2016-01-01
Roč. 90, č. 1 (2016), s. 249-261. ISSN 0001-9054. Institutional support: RVO:67985840. Keywords: symmetric spaces * K-monotone symmetric Banach spaces * strict monotonicity * lower local uniform monotonicity. Subject RIV: BA - General Mathematics. Impact factor: 0.826, year: 2016. http://link.springer.com/article/10.1007%2Fs00010-015-0379-6
Touyarou, Peio; Sulmont-Rossé, Claire; Gagnaire, Aude; Issanchou, Sylvie; Brondel, Laurent
2012-04-01
This study aimed to observe the influence of the monotonous consumption of two types of fibre-enriched bread at breakfast on hedonic liking for the bread, subsequent hunger and energy intake. Two groups of unrestrained normal-weight participants were given either white sandwich bread (WS) or multigrain sandwich bread (MG) at breakfast (the sensory properties of the WS were more similar to the usual bread eaten by the participants than those of the MG). In each group, two 15-day cross-over conditions were set up. During the experimental condition, the usual breakfast of each participant was replaced by an isocaloric portion of plain bread (WS or MG). During the control condition, participants consumed only 10 g of the corresponding bread and completed their breakfast with other foods they wanted. The results showed that bread appreciation did not change over exposure, even in the experimental condition. Hunger was lower in the experimental condition than in the control condition. Compared to the corresponding control condition, the consumption of WS decreased energy intake in the experimental condition, while the consumption of MG did not. In conclusion, a monotonous breakfast composed solely of a fibre-enriched bread may decrease subsequent hunger and, when similar to a familiar bread, food intake. Copyright © 2011. Published by Elsevier Ltd.
Energy Technology Data Exchange (ETDEWEB)
Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki [and others]
1997-04-01
The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during a seismic event. The tensile test and the fracture toughness test were conducted for base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch.120, were employed for quasi-static monotonic, dynamic monotonic and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled to apply a fully reversed load (R=-1). The pipe specimens with a circumferential through-wall crack were subjected to four-point bending load at 300°C in air. The Japanese STS410 carbon steel pipe material was found to have high toughness under dynamic loading conditions in the CT fracture toughness test. As the results of the pipe fracture tests show, the maximum moment to pipe fracture under dynamic monotonic and cyclic loading conditions could be estimated by the plastic collapse criterion, and the effect of dynamic monotonic and cyclic loading on the maximum moment to pipe fracture of the STS410 carbon steel pipe was small. The STS410 carbon steel pipe seemed to be less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.
International Nuclear Information System (INIS)
Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki
1997-01-01
The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during a seismic event. The tensile test and the fracture toughness test were conducted for base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch.120, were employed for quasi-static monotonic, dynamic monotonic and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled to apply a fully reversed load (R=-1). The pipe specimens with a circumferential through-wall crack were subjected to four-point bending load at 300°C in air. The Japanese STS410 carbon steel pipe material was found to have high toughness under dynamic loading conditions in the CT fracture toughness test. As the results of the pipe fracture tests show, the maximum moment to pipe fracture under dynamic monotonic and cyclic loading conditions could be estimated by the plastic collapse criterion, and the effect of dynamic monotonic and cyclic loading on the maximum moment to pipe fracture of the STS410 carbon steel pipe was small. The STS410 carbon steel pipe seemed to be less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.
Non-monotonic behaviour in relaxation dynamics of image restoration
International Nuclear Information System (INIS)
Ozeki, Tomoko; Okada, Masato
2003-01-01
We have investigated the relaxation dynamics of image restoration through a Bayesian approach. The relaxation dynamics is much faster at zero temperature than at the Nishimori temperature, where the pixel-wise error rate is minimized in equilibrium. At low temperature, we observed non-monotonic development of the overlap. We suggest that the optimal performance is realized through premature termination of the relaxation processes in the case of the infinite-range model. We also performed Markov chain Monte Carlo simulations to clarify the underlying mechanism of the non-trivial behaviour at low temperature by checking the local field distributions of each pixel.
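The Bayesian restoration dynamics described above can be illustrated with a toy finite-temperature Metropolis sampler for a binary (±1) image under an Ising-type prior. This is a generic sketch of the setting, not the authors' infinite-range model; the coupling `beta`, field `h` and sweep count are illustrative choices of ours:

```python
import math
import random

def restore(noisy, beta=1.0, h=1.0, sweeps=20, seed=0):
    """Toy Metropolis sampler for binary (+/-1) image restoration.
    Posterior ~ exp(beta * sum_<ij> s_i s_j + h * sum_i s_i * y_i),
    i.e. a smoothness prior plus a data-fidelity term on the noisy image y."""
    rng = random.Random(seed)
    H, W = len(noisy), len(noisy[0])
    s = [row[:] for row in noisy]          # initialise at the observed data
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                # sum of the 4-neighbourhood spins (missing neighbours at edges)
                nb = sum(s[x][y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= x < H and 0 <= y < W)
                # energy change for flipping pixel (i, j)
                dE = 2 * s[i][j] * (beta * nb + h * noisy[i][j])
                if dE <= 0 or rng.random() < math.exp(-dE):
                    s[i][j] = -s[i][j]
    return s

# A single flipped pixel in an otherwise uniform image is typically repaired:
noisy = [[1] * 4 for _ in range(4)]
noisy[1][1] = -1
restored = restore(noisy, beta=4.0, h=0.5, sweeps=5, seed=1)
```

Lowering `beta`/`h` corresponds to sampling at a higher temperature, where the relaxation behaviour the abstract discusses becomes relevant.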
An iterative method for nonlinear demiclosed monotone-type operators
International Nuclear Information System (INIS)
Chidume, C.E.
1991-01-01
It is proved that a well-known fixed point iteration scheme which has been used for approximating solutions of certain nonlinear demiclosed monotone-type operator equations in Hilbert spaces remains applicable in real Banach spaces with property (U, α, m+1, m). These Banach spaces include the L^p spaces, p ∈ [2,∞]. An application of our results to the approximation of a solution of a certain linear operator equation in this general setting is also given. (author). 19 refs
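The fixed point iteration scheme referred to above is of Krasnoselskii–Mann type. A minimal sketch in the Hilbert-space case (here R^2; the averaged operator, step size and test map are our illustrative choices, not from the paper):

```python
import numpy as np

def mann_iterate(T, x0, alpha=0.5, tol=1e-10, max_iter=10000):
    """Krasnoselskii-Mann iteration x_{n+1} = (1 - alpha) x_n + alpha T(x_n).
    For a nonexpansive T with a fixed point, the averaged iterates converge
    even when plain iteration of T would not (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: T(x) = Ax + b with A a rotation (an isometry, hence nonexpansive).
# Plain iteration of T spirals forever; the averaged iteration converges.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
b = np.array([1.0, 1.0])
x_star = mann_iterate(lambda v: A @ v + b, [5.0, -3.0])  # fixed point (0, 1)
```

The averaging weight `alpha` plays the role of the relaxation parameter in the abstract's scheme.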
Using exogenous variables in testing for monotonic trends in hydrologic time series
Alley, William M.
1988-01-01
One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
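The two-stage procedure described above (regression on the exogenous variable, then a Kendall trend test on the residuals) can be sketched as follows. All variable names and the synthetic data are illustrative; as the abstract notes, this plain two-stage form tends to underestimate the trend relative to the adjusted variable Kendall test:

```python
import numpy as np

def kendall_s(t, r):
    """Kendall S statistic: concordant minus discordant pairs of (t_i, r_i).
    Positive values indicate an increasing monotonic trend."""
    n = len(t)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(t[j] - t[i]) * np.sign(r[j] - r[i])
    return s

# Stage 1: regress the series on the exogenous variable (e.g. precipitation).
rng = np.random.default_rng(42)
t = np.arange(80, dtype=float)                           # time index
x = rng.normal(size=80)                                  # exogenous variable
y = 3.0 * x + 0.05 * t + rng.normal(scale=0.5, size=80)  # series with a weak trend
slope, intercept = np.polyfit(x, y, 1)

# Stage 2: test the residuals for a monotonic trend in time.
resid = y - (slope * x + intercept)
s_stat = kendall_s(t, resid)
```

A large positive `s_stat` indicates the trend survives in the residuals after the exogenous effect is removed.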
Ergodic averages for monotone functions using upper and lower dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods, and we also discuss various types of other models for which our methods apply.
Ergodic averages for monotone functions using upper and lower dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2007-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods, and we also discuss various types of other models for which our methods apply.
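The sandwiching idea above can be illustrated on a toy monotone chain: run an upper chain from the top state and a lower chain from the bottom state with common random numbers, and average a monotone function along both. The chain, the function and all parameters below are our illustrative choices, not the models from the paper:

```python
import random

def update(state, u, K):
    """Monotone random-walk update on {0,...,K} driven by a shared uniform u:
    down if u < 0.4, up if u > 0.6, else stay (reflecting at the ends).
    Monotone in the state, so the order lower <= upper is preserved."""
    if u < 0.4:
        return max(state - 1, 0)
    if u > 0.6:
        return min(state + 1, K)
    return state

def sandwich_means(f, K=10, n=100000, seed=7):
    """Ergodic averages of a monotone f along coupled lower/upper dominating
    chains; the two averages bracket the stationary mean of f."""
    rng = random.Random(seed)
    lo, hi = 0, K                 # lower chain at the bottom, upper at the top
    s_lo = s_hi = 0.0
    for _ in range(n):
        u = rng.random()          # common random number for both chains
        lo, hi = update(lo, u, K), update(hi, u, K)
        s_lo += f(lo)
        s_hi += f(hi)
    return s_lo / n, s_hi / n

m_lo, m_hi = sandwich_means(lambda s: s, K=10)
```

Once the chains coalesce the two averages agree, and no burn-in decision is needed, which is the point made in the abstract.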
Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks
Anseán, David; Otero, José; Couso, Inés
2017-01-01
A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open circuit voltage curve. This last model is implemented through monotonic neural networks and estimates over-potentials arising from the evolution in time of the Lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, this being particularly true when the charge or discharge currents are between moderate and high. The accuracy of the neural model has been compared to different alternatives, including data-driven statistical models, first principle-based models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well established first-principle models. The algorithms have been validated with automotive LiFePO4 cells. PMID:29267219
Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks
Directory of Open Access Journals (Sweden)
Luciano Sánchez
2017-12-01
Full Text Available A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open circuit voltage curve. This last model is implemented through monotonic neural networks and estimates over-potentials arising from the evolution in time of the Lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, this being particularly true when the charge or discharge currents are between moderate and high. The accuracy of the neural model has been compared to different alternatives, including data-driven statistical models, first principle-based models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well established first-principle models. The algorithms have been validated with automotive LiFePO4 cells.
Experimental quantum control landscapes: Inherent monotonicity and artificial structure
International Nuclear Information System (INIS)
Roslund, Jonathan; Rabitz, Herschel
2009-01-01
Unconstrained searches over quantum control landscapes are theoretically predicted to generally exhibit trap-free monotonic behavior. This paper makes an explicit experimental demonstration of this intrinsic monotonicity for two controlled quantum systems: frequency unfiltered and filtered second-harmonic generation (SHG). For unfiltered SHG, the landscape is randomly sampled and interpolation of the data is found to be devoid of landscape traps up to the level of data noise. In the case of narrow-band-filtered SHG, trajectories are taken on the landscape to reveal a lack of traps. Although the filtered SHG landscape is trap free, it exhibits a rich local structure. A perturbation analysis around the top of these landscapes provides a basis to understand their topology. Despite the inherent trap-free nature of the landscapes, practical constraints placed on the controls can lead to the appearance of artificial structure arising from the resultant forced sampling of the landscape. This circumstance and the likely lack of knowledge about the detailed local landscape structure in most quantum control applications suggests that the a priori identification of globally successful (un)constrained curvilinear control variables may be a challenging task.
Positivity and monotonicity properties of C0-semigroups. Pt. 1
International Nuclear Information System (INIS)
Bratteli, O.; Kishimoto, A.; Robinson, D.W.
1980-01-01
If exp(-tH), exp(-tK) are self-adjoint, positivity-preserving, contraction semigroups on a Hilbert space H = L²(X;dμ), we write e^(-tH) ≥ e^(-tK) ≥ 0 (*) whenever exp(-tH) - exp(-tK) is positivity preserving for all t ≥ 0, and then we characterize the class of positive functions for which (*) always implies e^(-tf(H)) ≥ e^(-tf(K)) ≥ 0. This class consists of the f ∈ C^∞(0,∞) with (-1)^n f^(n+1)(x) ≥ 0, x ∈ (0,∞), n = 0, 1, 2, ... In particular it contains the class of monotone operator functions. Furthermore, if exp(-tH) is L^p(X;dμ)-contractive for all p ∈ [1,∞] and all t > 0 (or, equivalently, for p = ∞ and t > 0), then exp(-tf(H)) has the same property. Various applications to monotonicity properties of Green's functions are given. (orig.)
Theoretical and experimental study of non-monotonous effects
International Nuclear Information System (INIS)
Delforge, J.
1977-01-01
In recent years, the study of the effects of low dose rates has expanded considerably, especially in connection with current problems concerning the environment and health physics. After precisely defining the different types of non-monotonous effect that may be encountered, the main known experimental results for each are indicated, as well as the principal consequences that may be expected. One example is radiotherapy, where irradiation conditions may be found such that the ratio of destructive action on malignant cells to that on healthy cells is significantly improved. In the second part of the report, the appearance of these phenomena, especially at low dose rates, is explained. For this purpose, the theory of transformation systems of P. Delattre is used as a theoretical framework. With the help of a specific example, it is shown that non-monotonous effects are frequently encountered, especially when the overall effect observed is actually the sum of several different elementary effects (e.g. in survival curves, where death may be due to several different causes), or when the objects studied possess inherent kinetics not limited to restoration phenomena alone (e.g. the cellular cycle). [fr]
The Monotonic Lagrangian Grid for Rapid Air-Traffic Evaluation
Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay
2010-01-01
The Air Traffic Monotonic Lagrangian Grid (ATMLG) is presented as a tool to evaluate new air traffic system concepts. The model, based on an algorithm called the Monotonic Lagrangian Grid (MLG), can quickly sort, track, and update positions of many aircraft, both on the ground (at airports) and in the air. The underlying data structure is based on the MLG, which is used for sorting and ordering positions and other data needed to describe N moving bodies and their interactions. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. Recent upgrades to ATMLG include adding blank place-holders within the MLG data structure, which makes it possible to dynamically change the MLG size and also improves the quality of the MLG grid. Additional upgrades include adding FAA flight plan data, such as way-points and arrival and departure times from the Enhanced Traffic Management System (ETMS), and combining the MLG with the state-of-the-art strategic and tactical conflict detection and resolution algorithms from the NASA-developed Stratway software. In this paper, we present results from our early efforts to couple ATMLG with the Stratway software, and we demonstrate that it can be used to quickly simulate air traffic flow for a very large ETMS dataset.
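The core MLG property described above (spatial neighbours sit in nearby array slots) can be sketched for a static 2-D point set: sort all points by y into rows, then sort each row by x. This is a simplified illustration of the data structure, not the ATMLG implementation; the grid dimensions and point set are our choices:

```python
import numpy as np

def mlg_sort_2d(pos, nx, ny):
    """Arrange N = nx*ny points into a 2-D Monotonic Lagrangian Grid:
    grid[i, j] is monotone in y across rows and in x within each row,
    so points close in space occupy nearby array slots."""
    pos = np.asarray(pos, dtype=float)
    rows = pos[np.argsort(pos[:, 1])]        # order all points by y
    grid = rows.reshape(ny, nx, 2)           # cut into ny contiguous rows
    for i in range(ny):
        grid[i] = grid[i][np.argsort(grid[i, :, 0])]  # order each row by x
    return grid

# 16 random "aircraft" positions arranged into a 4 x 4 MLG
pts = np.random.default_rng(0).random((16, 2))
g = mlg_sort_2d(pts, 4, 4)
```

Nearest-neighbour queries then only need to inspect adjacent array slots, which is what gives the O(N) interaction scaling mentioned in the abstract.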
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
Jacobs, Rianne; Bekker, Andriëtte A; van der Voet, Hilko; Ter Braak, Cajo J F
2015-01-01
Estimating the risk, P(X > Y), in probabilistic environmental risk assessment of nanoparticles is a problem when confronted by potentially small risks and small sample sizes of the exposure concentration X and/or the effect concentration Y. This is illustrated in the motivating case study of aquatic risk assessment of nano-Ag. A non-parametric estimator based on data alone is not sufficient as it is limited by sample size. In this paper, we investigate the maximum gain possible when making strong parametric assumptions as opposed to making no parametric assumptions at all. We compare maximum likelihood and Bayesian estimators with the non-parametric estimator and study the influence of sample size and risk on the (interval) estimators via simulation. We found that the parametric estimators enable us to estimate and bound the risk for smaller sample sizes and small risks. Also, the Bayesian estimator outperforms the maximum likelihood estimators in terms of coverage and interval lengths and is, therefore, preferred in our motivating case study.
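The contrast drawn above between the non-parametric estimator and a strongly parametric one can be sketched as follows (function names are ours; the normal model is one possible parametric assumption, not necessarily the one used in the paper):

```python
import math
import numpy as np

def risk_nonparametric(x, y):
    """Empirical P(X > Y): the fraction of all (exposure, effect) pairs
    with x > y. Cannot resolve risks below 1 / (n_x * n_y)."""
    x, y = np.asarray(x), np.asarray(y)
    return (x[:, None] > y[None, :]).mean()

def risk_normal(x, y):
    """Parametric P(X > Y) assuming independent normal X and Y:
    Phi((mu_x - mu_y) / sqrt(sd_x^2 + sd_y^2)). Can extrapolate to very
    small risks, at the price of the distributional assumption."""
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    z = (mx - my) / math.sqrt(vx + vy)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With small samples and well-separated exposure and effect concentrations, the empirical estimate collapses to exactly 0 while the parametric estimate still returns a small positive risk, which is the phenomenon the abstract investigates.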
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
DEFF Research Database (Denmark)
Nunes, Daniel; Cruz, Tomás L; Jespersen, Sune N
2017-01-01
available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures – such as axons and extra-axonal spaces, which we here used in a simple model for the microstructure – and that, for axons parallel to the main magnetic field... When the quantitative results are compared against ground-truth histology, they seem to reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). As well, the extra-axonal fraction can be estimated. The results suggest that our model is oversimplified, yet at the same time evidencing...
Multipartite entangled quantum states: Transformation, Entanglement monotones and Application
Cui, Wei
Entanglement is one of the fundamental features of quantum information science. Though bipartite entanglement has been analyzed thoroughly in theory and shown to be an important resource in quantum computation and communication protocols, the theory of entanglement shared between more than two parties, which is called multipartite entanglement, is still not complete. Specifically, the classification of multipartite entanglement and the transformation property between different multipartite states by local operations and classical communication (LOCC) are two fundamental questions in the theory of multipartite entanglement. In this thesis, we present results related to the LOCC transformation between multipartite entangled states. Firstly, we investigate the bounds on the LOCC transformation probability between multipartite states, especially the GHZ class states. By analyzing the involvement of 3-tangle and other entanglement measures under weak two-outcome measurement, we derive explicit upper and lower bounds on the transformation probability between GHZ class states. After that, we also analyze the transformation between N-party W type states, which is a special class of multipartite entangled states that has an explicit unique expression and a set of analytical entanglement monotones. We present a necessary and sufficient condition for a known upper bound of transformation probability between two N-party W type states to be achieved. We also further investigate a novel entanglement transformation protocol, the random distillation, which transforms multipartite entanglement into bipartite entanglement shared by a non-deterministic pair of parties. We find upper bounds for the random distillation protocol for general N-party W type states and find the condition for the upper bounds to be achieved. What is surprising is that the upper bounds correspond to entanglement monotones that can be increased by Separable Operators (SEP), which gives the first set of
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated, and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures
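Two of the estimator families computed by ROBUST, a trimmed mean (an L-estimate of location) and the median absolute deviation (a robust scale estimate), can be sketched as follows; the function names and trimming proportion are ours, not from the program:

```python
import statistics

def trimmed_mean(data, prop=0.1):
    """Mean after dropping a proportion `prop` of the smallest and the
    largest values: one of the robust L-estimates of location."""
    xs = sorted(data)
    k = int(len(xs) * prop)
    core = xs[k:len(xs) - k] if k else xs
    return sum(core) / len(core)

def mad(data):
    """Median absolute deviation from the median, a robust scale estimate
    largely unaffected by a few outliers."""
    m = statistics.median(data)
    return statistics.median(abs(x - m) for x in data)
```

On data with a gross outlier, e.g. [9.8, 9.9, 10.0, 10.1, 10.2, 55.0], the arithmetic mean is pulled far above 10, while the trimmed mean and MAD remain close to the bulk of the data, which is the point of including both robust and classical estimators side by side.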
Bulcock, J. W.; And Others
Multicollinearity refers to the presence of highly intercorrelated independent variables in structural equation models, that is, models estimated by using techniques such as least squares regression and maximum likelihood. There is a problem of multicollinearity in both the natural and social sciences where theory formulation and estimation is in…
DEFF Research Database (Denmark)
Larsen, Karen B
2017-01-01
abnormal development. Furthermore, many studies of brain cell numbers have employed biased counting methods, whereas innovations in stereology during the past 20-30 years enable reliable and efficient estimates of cell numbers. However, estimates of cell volumes and densities in fetal brain samples...
DEFF Research Database (Denmark)
Jepsen, Morten Løve; Dau, Torsten
To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker is transformed more linearly, the ratio between the slopes of growth of masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O-function between linear processing at low levels and compressive processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O-function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly...
Sampling dynamics: an alternative to payoff-monotone selection dynamics
DEFF Research Database (Denmark)
Berkemer, Rainer
payoff-monotone nor payoff-positive, which has interesting consequences. This can be demonstrated by application to the travelers dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium, which is Pareto inefficient. Especially when the travelers have many... of the standard game theory result. Both analytical tools and agent-based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized travelers dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter (k), which indicates the number of samples an agent would evaluate before fixing his decision. The special case (k=1) can be treated analytically. The stationary points of the dynamics must be sampling equilibria, and one can calculate that for m>3 there will be an interior solution in addition...
Modeling non-monotonic properties under propositional argumentation
Wang, Geng; Lin, Zuoquan
2013-03-01
In the field of knowledge representation, argumentation is usually considered as an abstract framework for nonclassical logic. In this paper, however, we present a propositional argumentation framework, which can be used to more closely simulate real-world argumentation. We thereby argue that, under a dialectical argumentation game, we can allow non-monotonic reasoning even under classical logic. We introduce two methods for gaining non-monotonicity: one by assigning plausibility to arguments, the other by adding "exceptions", which are similar to defaults. Furthermore, we give an alternative definition for propositional argumentation using argumentative models, which is highly related to the previous reasoning method but comes with a simple algorithm for calculation.
Monotonic childhoods: representations of otherness in research writing
Directory of Open Access Journals (Sweden)
Denise Marcos Bussoletti
2011-12-01
Full Text Available This paper is part of a doctoral thesis entitled “Monotonic childhoods – a rhapsody of hope”. It follows the perspective of a critical psychosocial and cultural study, and aims at discussing the other’s representation in research writing, electing childhood as an allegorical and reflective place. It takes into consideration, by means of analysis, the drawings and poems of children from the Terezin ghetto during the Second World War. The work is mostly based on Serge Moscovici’s Social Representation Theory, but it is also in constant dialogue with other theories and knowledge fields, especially Walter Benjamin’s and Mikhail Bakhtin’s contributions. At the end, the paper supports the thesis that conceives poetics as one of the translation axes of childhood cultures.
Convex analysis and monotone operator theory in Hilbert spaces
Bauschke, Heinz H
2017-01-01
This reference text, now in its second edition, offers a modern unifying presentation of three basic areas of nonlinear analysis: convex analysis, monotone operator theory, and the fixed point theory of nonexpansive operators. Taking a unique comprehensive approach, the theory is developed from the ground up, with the rich connections and interactions between the areas as the central focus, and it is illustrated by a large number of examples. The Hilbert space setting of the material offers a wide range of applications while avoiding the technical difficulties of general Banach spaces. The authors have also drawn upon recent advances and modern tools to simplify the proofs of key results, making the book more accessible to a broader range of scholars and users. Combining a strong emphasis on applications with exceptionally lucid writing and an abundance of exercises, this text is of great value to a large audience including pure and applied mathematicians as well as researchers in engineering, data science, ma...
Expert system for failures detection and non-monotonic reasoning
International Nuclear Information System (INIS)
Assis, Abilio de; Schirru, Roberto
1997-01-01
This paper presents the development of a shell, named TIGER, intended to serve as an environment for developing expert systems for fault diagnosis in complex industrial plants. A knowledge representation model and an inference engine based on non-monotonic reasoning have been developed in order to provide flexibility in the representation of complex plants as well as the performance needed to satisfy real-time constraints. TIGER is able to provide both the fault that occurred and a hierarchical view of the several causes that led to the fault. As a validation of the developed shell, a monitoring system for the critical safety functions of Angra-1 has been developed. 7 refs., 7 figs., 2 tabs
Monotonicity of fitness landscapes and mutation rate control.
Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G
2016-12-01
A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and show that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distributions are usually severely skewed by the presence of hot spots in contaminated sites, which causes difficulties for accurate geostatistical data transformation. Three typical normal-distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation with normally transformed data of benzo(b)fluoranthene in a large-scale coking-plant-contaminated site in north China. All three methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross-validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, making it more accurate than the other two models. Based on the Johnson ordinary kriging prediction map, the largest prediction standard errors occurred in the area with fewer sampling points and in the area with high contamination levels. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation boundary determination.
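A Box-Cox transform and its effect on skewness can be illustrated with a minimal sketch; the data below are synthetic stand-ins for hot-spot concentrations, not values from the study:

```python
import math

def skewness(xs):
    """Population-form sample skewness."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

def box_cox(xs, lam):
    """Box-Cox transform; reduces to the natural log when lam == 0."""
    if lam == 0:
        return [math.log(x) for x in xs]
    return [(x ** lam - 1) / lam for x in xs]

# Synthetic right-skewed 'hot spot' concentrations
data = [1, 1, 2, 2, 3, 3, 4, 5, 8, 15, 40, 120]
raw_skew = skewness(data)
log_skew = skewness(box_cox(data, 0))
print(log_skew < raw_skew)  # → True: the transform reduces skewness
```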
Log-supermodularity of weight functions and the loading monotonicity of weighted insurance premiums
Hristo S. Sendov; Ying Wang; Ricardas Zitikis
2010-01-01
The paper is motivated by a problem concerning the monotonicity of insurance premiums with respect to their loading parameter: the larger the parameter, the larger the insurance premium is expected to be. This property, usually called loading monotonicity, is satisfied by premiums that appear in the literature. The increased interest in constructing new insurance premiums has raised a question as to what weight functions would produce loading-monotonic premiums. In this paper we demonstrate a...
Yim, Ji-Hye; Yun, Jung Mi; Kim, Ji Young; Nam, Seon Young; Kim, Cha Soon
2017-11-01
Low-dose radiation has various biological effects such as adaptive responses, low-dose hypersensitivity, as well as beneficial effects. However, little is known about the particular proteins involved in these effects. Here, we sought to identify low-dose radiation-responsive phosphoproteins in normal fibroblast cells. We assessed genomic instability and proliferation of fibroblast cells after γ-irradiation by γ-H2AX foci and micronucleus formation analyses and BrdU incorporation assay, respectively. We screened fibroblast cells 8 h after low-dose (0.05 Gy) γ-irradiation using Phospho Explorer Antibody Microarray and validated two differentially expressed phosphoproteins using Western blotting. Cell proliferation proceeded normally in the absence of genomic instability after low-dose γ-irradiation. Phospho antibody microarray analysis and Western blotting revealed increased expression of two phosphoproteins, phospho-NFκB (Ser536) and phospho-P70S6K (Ser418), 8 h after low-dose radiation. Our findings suggest that low-dose radiation of normal fibroblast cells activates the expression of phospho-NFκB (Ser536) and phospho-P70S6K (Ser418) in the absence of genomic instability. Therefore, these proteins may be involved in DNA damage repair processes.
International Nuclear Information System (INIS)
Bengel, F.M.; Nekolla, S.; Schwaiger, M.; Ungerer, M.
2000-01-01
We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with 11C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A ''stroke work index'' (SWI) was calculated by: SWI = systolic blood pressure x stroke volume / body surface area. To estimate myocardial efficiency, a ''work-metabolic index'' (WMI) was then obtained as follows: WMI = SWI x heart rate / k(mono), where k(mono) is the washout constant for 11C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19%±10% and end-diastolic volume was 92±28 ml/m² (vs 64%±7% and 55±8 ml/m² in normals, P<0.001). SWI (in mmHg x ml/m²) and WMI (in 10⁶ mmHg x ml/m²; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
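The two indices defined in the abstract are simple arithmetic and can be sketched directly; the numerical inputs below are hypothetical, not patient data from the study:

```python
def stroke_work_index(sbp_mmhg, stroke_volume_ml, bsa_m2):
    """SWI = systolic blood pressure x stroke volume / body surface area."""
    return sbp_mmhg * stroke_volume_ml / bsa_m2

def work_metabolic_index(swi, heart_rate_bpm, k_mono):
    """WMI = SWI x heart rate / k(mono), with k(mono) the 11C-acetate
    washout constant from mono-exponential fitting."""
    return swi * heart_rate_bpm / k_mono

# Hypothetical subject: SBP 120 mmHg, stroke volume 70 ml, BSA 1.8 m²,
# heart rate 70 bpm, k(mono) 0.06 min⁻¹ (all illustrative values).
swi = stroke_work_index(120, 70, 1.8)
wmi = work_metabolic_index(swi, 70, 0.06)
print(swi, wmi)
```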
A System of Generalized Variational Inclusions Involving a New Monotone Mapping in Banach Spaces
Directory of Open Access Journals (Sweden)
Jinlin Guan
2013-01-01
Full Text Available We introduce a new monotone mapping in Banach spaces, which is an extension of the -monotone mapping studied by Nazemi (2012), and we generalize the variational inclusion involving the -monotone mapping. Based on the new monotone mapping, we propose a new proximal mapping, which combines the proximal mapping studied by Nazemi (2012) with the mapping studied by Lan et al. (2011), and show its Lipschitz continuity. Based on the new proximal mapping, we give an iterative algorithm. Furthermore, we prove the convergence of the iterative sequences generated by the algorithm under some appropriate conditions. Our results improve and extend corresponding ones announced by many others.
Obliquely Propagating Non-Monotonic Double Layer in a Hot Magnetized Plasma
International Nuclear Information System (INIS)
Kim, T.H.; Kim, S.S.; Hwang, J.H.; Kim, H.Y.
2005-01-01
An obliquely propagating non-monotonic double layer is investigated in a hot magnetized plasma, which consists of a positively charged hot ion fluid and trapped, as well as free, electrons. A model equation (a modified Korteweg-de Vries equation) is derived by the usual reductive perturbation method from a set of basic hydrodynamic equations. A time-stationary obliquely propagating non-monotonic double layer solution is obtained in a hot magnetized plasma. This solution is an analytic extension of the monotonic double layer and the solitary hole. The effects of obliqueness, external magnetic field and ion temperature on the properties of the non-monotonic double layer are discussed.
Lee, Si Hyung; Kwak, Seung Woo; Kang, Eun Min; Kim, Gyu Ah; Lee, Sang Yeop; Bae, Hyoung Won; Seong, Gong Je; Kim, Chan Yun
2016-01-01
Background To investigate the association between estimated trans-lamina cribrosa pressure difference (TLCPD) and prevalence of normal tension glaucoma (NTG) with low-teen and high-teen intraocular pressure (IOP) using a population-based study design. Methods A total of 12,743 adults (≥ 40 years of age) who participated in the Korean National Health and Nutrition Examination Survey (KNHANES) from 2009 to 2012 were included. Using a previously developed formula, cerebrospinal fluid pressure (C...
DEFF Research Database (Denmark)
Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon
2014-01-01
An on-state collector-emitter voltage (Vce) measurement, and thereby an estimation of the spatially averaged temperature of a high-power IGBT module, is presented while the power converter is in operation. The proposed measurement circuit is able to measure both the high- and low-side IGBT and anti-parallel diode...
International Nuclear Information System (INIS)
Lugt, G. van der; Wijker, H.; Kema, N.V.
1977-01-01
In the Netherlands, discussions are going on about the installation of three nuclear power plants, leading, together with the two existing plants, to a total capacity of 3500 MWe. To gain an impression of the radiological impact of this program, calculations were carried out concerning the population doses due to the discharge of radioactivity from the plants during normal operation. The discharge via the ventilation stack gives doses due to noble gases, halogens and particulate material. The population dose due to the halogens in the grass-milk-man chain is estimated using the real distribution of grassland around the reactor sites. It could be concluded that the population dose due to the contamination of crops and fruit is negligible. A conservative estimate is made for the dose due to the discharge of tritium. The population dose due to the discharge into the cooling water is calculated using the following pathways: drinking water; consumption of fish; consumption of meat from animals fed with fish products. The individual doses caused by the normal discharge of a 1000 MWe plant appeared to be very low, mostly below 1 mrem/year. The population dose is in the order of some tens of man-rem. The total dose of the 5 nuclear power plants to the Dutch population is not more than 70 man-rem. Using a linear dose-effect relationship, the health effects on the population are estimated and compared with their normal frequency.
Bedogni, Giorgio; Bertoli, Simona; Leone, Alessandro; De Amicis, Ramona; Lucchetti, Elisa; Agosti, Fiorenza; Marazzi, Nicoletta; Battezzati, Alberto; Sartorio, Alessandro
2017-11-24
We cross-validated 28 equations to estimate resting energy expenditure (REE) in a very large sample of adults with overweight or obesity. 14,952 Caucasian men and women with overweight or obesity and 1,498 with normal weight were studied. REE was measured using indirect calorimetry and estimated using two meta-regression equations and 26 other equations. The correct classification fraction (CCF) was defined as the fraction of subjects whose estimated REE was within 10% of measured REE. The highest CCF was 79%, 80%, 72%, 64%, and 63% in subjects with normal weight, overweight, class 1 obesity, class 2 obesity, and class 3 obesity, respectively. The Henry weight and height and Mifflin equations performed equally well, with CCFs of 77% vs. 77% for subjects with normal weight, 80% vs. 80% for those with overweight, 72% vs. 72% for those with class 1 obesity, 64% vs. 63% for those with class 2 obesity, and 61% vs. 60% for those with class 3 obesity. The Sabounchi meta-regression equations offered an improvement over the above equations only for class 3 obesity (63%). The accuracy of REE equations decreases with increasing values of body mass index. The Henry weight and height and Mifflin equations are similarly accurate, and the Sabounchi equations offer an improvement only in subjects with class 3 obesity. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
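One of the candidates compared above, the widely published Mifflin-St Jeor equation, together with the paper's within-10% classification criterion, can be sketched as follows; the subject values are hypothetical:

```python
def mifflin_st_jeor(weight_kg, height_cm, age_yr, male):
    """Mifflin-St Jeor resting energy expenditure in kcal/day:
    10*W + 6.25*H - 5*A, plus 5 for men or minus 161 for women."""
    ree = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return ree + (5.0 if male else -161.0)

def correctly_classified(estimated_ree, measured_ree):
    """The paper's CCF criterion: estimate within 10% of measurement."""
    return abs(estimated_ree - measured_ree) <= 0.10 * measured_ree

# Hypothetical 40-year-old man, 90 kg, 175 cm, with a (made-up)
# measured REE of 1900 kcal/day.
est = mifflin_st_jeor(90, 175, 40, male=True)
print(est)                              # → 1798.75
print(correctly_classified(est, 1900))  # → True
```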
Surfactants non-monotonically modify the onset of Faraday waves
Strickland, Stephen; Shearer, Michael; Daniels, Karen
2017-11-01
When a water-filled container is vertically vibrated, subharmonic Faraday waves emerge once the driving from the vibrations exceeds viscous dissipation. In the presence of an insoluble surfactant, a viscous boundary layer forms at the contaminated surface to balance the Marangoni and Boussinesq stresses. For linear gravity-capillary waves in an undriven fluid, the surfactant-induced boundary layer increases the amount of viscous dissipation. In our analysis and experiments, we consider whether similar effects occur for nonlinear Faraday (gravity-capillary) waves. Assuming a finite-depth, infinite-breadth, low-viscosity fluid, we derive an analytic expression for the onset acceleration up to second order in ε = √(1/Re). This expression allows us to include fluid depth and driving frequency as parameters, in addition to the Marangoni and Boussinesq numbers. For millimetric fluid depths and driving frequencies of 30 to 120 Hz, our analysis recovers prior numerical results and agrees with our measurements of NBD-PC surfactant on DI water. In both cases, the onset acceleration increases non-monotonically as a function of the Marangoni and Boussinesq numbers. For shallower systems, our model predicts that surfactants could decrease the onset acceleration. Supported by grant DMS-0968258.
Dynamical zeta functions for piecewise monotone maps of the interval
Ruelle, David
2004-01-01
Consider a space M, a map f: M → M, and a function g: M → ℂ. The formal power series ζ(z) = exp Σ_{m=1}^∞ (z^m/m) Σ_{x ∈ Fix f^m} ∏_{k=0}^{m−1} g(f^k x) yields an example of a dynamical zeta function. Such functions have unexpected analytic properties and interesting relations to the theory of dynamical systems, statistical mechanics, and the spectral theory of certain operators (transfer operators). The first part of this monograph presents a general introduction to this subject. The second part is a detailed study of the zeta functions associated with piecewise monotone maps of the interval [0,1]. In particular, Ruelle gives a proof of a generalized form of the Baladi-Keller theorem relating the poles of ζ(z) and the eigenvalues of the transfer operator. He also proves a theorem expressing the largest eigenvalue of the transfer operator in terms of the ergodic properties of (M, f, g).
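For intuition, the series can be evaluated for the simplest piecewise monotone example. The sketch below assumes the doubling map f(x) = 2x mod 1 with constant weight g ≡ c: f^m then has 2^m − 1 fixed points, the inner sum is (2^m − 1)c^m, and the series sums in closed form to ζ(z) = (1 − cz)/(1 − 2cz):

```python
import math

def zeta_truncated(z, c, terms=60):
    """Truncated dynamical zeta series for the doubling map with g = c:
    zeta(z) = exp( sum_m (z^m / m) * (2^m - 1) * c^m )."""
    s = sum((z ** m / m) * (2 ** m - 1) * c ** m
            for m in range(1, terms + 1))
    return math.exp(s)

def zeta_closed(z, c):
    """Closed form (1 - c z)/(1 - 2 c z), valid for |2 c z| < 1."""
    return (1 - c * z) / (1 - 2 * c * z)

z, c = 0.3, 0.5  # |2cz| = 0.3, well inside the radius of convergence
print(abs(zeta_truncated(z, c) - zeta_closed(z, c)) < 1e-12)  # → True
```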
The resource theory of quantum reference frames: manipulations and monotones
International Nuclear Information System (INIS)
Gour, Gilad; Spekkens, Robert W
2008-01-01
Every restriction on quantum operations defines a resource theory, determining how quantum states that cannot be prepared under the restriction may be manipulated and used to circumvent the restriction. A superselection rule (SSR) is a restriction that arises through the lack of a classical reference frame and the states that circumvent it (the resource) are quantum reference frames. We consider the resource theories that arise from three types of SSRs, associated respectively with lacking: (i) a phase reference, (ii) a frame for chirality, and (iii) a frame for spatial orientation. Focusing on pure unipartite quantum states (and in some cases restricting our attention even further to subsets of these), we explore single-copy and asymptotic manipulations. In particular, we identify the necessary and sufficient conditions for a deterministic transformation between two resource states to be possible and, when these conditions are not met, the maximum probability with which the transformation can be achieved. We also determine when a particular transformation can be achieved reversibly in the limit of arbitrarily many copies and find the maximum rate of conversion. A comparison of the three resource theories demonstrates that the extent to which resources can be interconverted decreases as the strength of the restriction increases. Along the way, we introduce several measures of frameness and prove that these are monotonically non-increasing under various classes of operations that are permitted by the SSR
The Marotto Theorem on planar monotone or competitive maps
International Nuclear Information System (INIS)
Yu Huang
2004-01-01
In 1978, Marotto generalized Li-Yorke's criterion for chaos from one-dimensional to n-dimensional discrete dynamical systems, showing that the existence of a non-degenerate snap-back repeller implies chaos in the sense of Li-Yorke. This theorem is very useful in predicting and analyzing discrete chaos in multi-dimensional dynamical systems. However, it is well known that there is an error in the conditions of the original Marotto theorem, and several authors have tried to correct it in different ways. Chen, Hsu and Zhou pointed out that verifying the 'non-degeneracy' of a snap-back repeller is in general the most difficult step, and expected, 'almost beyond reasonable doubt', that the existence of only a degenerate snap-back repeller still implies chaos, which they posed as a conjecture. In this paper, we give necessary and sufficient conditions for chaos in the sense of Li-Yorke for planar monotone or competitive discrete dynamical systems and solve the Chen-Hsu-Zhou conjecture for such systems.
The Monotonic Lagrangian Grid for Fast Air-Traffic Evaluation
Alexandrov, Natalia; Kaplan, Carolyn; Oran, Elaine; Boris, Jay
2010-01-01
This paper describes the continued development of a dynamic air-traffic model, ATMLG, intended for rapid evaluation of rules and methods to control and optimize transport systems. The underlying data structure is based on the Monotonic Lagrangian Grid (MLG), which is used for sorting and ordering positions and other data needed to describe N moving bodies, and their interactions. In ATMLG, the MLG is combined with algorithms for collision avoidance and updating aircraft trajectories. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. In this paper, we use ATMLG to examine how the ability to maintain a required separation between aircraft decreases as the number of aircraft in the volume increases. This requires keeping track of the primary and subsequent collision avoidance maneuvers necessary to maintain a five mile separation distance between all aircraft. Simulation results show that the number of collision avoidance moves increases exponentially with the number of aircraft in the volume.
Directory of Open Access Journals (Sweden)
N. Tangdamrongsub
2018-03-01
Full Text Available An accurate estimation of soil moisture and groundwater is essential for monitoring the availability of water supply in domestic and agricultural sectors. In order to improve water storage estimates, previous studies assimilated terrestrial water storage variation (ΔTWS) derived from the Gravity Recovery and Climate Experiment (GRACE) into land surface models (LSMs). However, the GRACE-derived ΔTWS was generally computed from the high-level products (e.g. time-variable gravity fields, i.e. level 2, and land grid from the level 3 product). The gridded data products are subject to several drawbacks, such as signal attenuation and/or distortion caused by a posteriori filters and a lack of error covariance information. The post-processing of GRACE data might lead to undesired alteration of the signal and its statistical properties. This study uses the GRACE least-squares normal equation data to exploit the GRACE information rigorously and negate these limitations. Our approach combines GRACE's least-squares normal equation (obtained from the ITSG-Grace2016 product) with the results from the Community Atmosphere Biosphere Land Exchange (CABLE) model to improve soil moisture and groundwater estimates. This study demonstrates, for the first time, the importance of using the GRACE raw data. The GRACE-combined (GC) approach is developed for optimal least-squares combination, and the approach is applied to estimate soil moisture and groundwater over 10 Australian river basins. The results are validated against satellite soil moisture observations and in situ groundwater data. Compared to CABLE, we demonstrate that the GC approach delivers evident improvement of water storage estimates, consistently across all basins, yielding better agreement on seasonal and inter-annual timescales. Significant improvement is found in groundwater storage, while marginal improvement is observed in surface soil moisture estimates.
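The core of such a combination, adding the normal equations of independent least-squares problems before solving, can be sketched with toy 2x2 systems; the matrices below are illustrative, not real GRACE or CABLE quantities:

```python
# Two least-squares problems yield normal equations N1 x = b1 and
# N2 x = b2; the combined solution is x = (N1 + N2)^{-1} (b1 + b2).

def solve2(N, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    x0 = (b[0] * N[1][1] - b[1] * N[0][1]) / det
    x1 = (N[0][0] * b[1] - N[1][0] * b[0]) / det
    return [x0, x1]

def combine_normals(N1, b1, N2, b2):
    """Stack information by adding normal matrices and right-hand sides."""
    N = [[N1[i][j] + N2[i][j] for j in range(2)] for i in range(2)]
    b = [b1[i] + b2[i] for i in range(2)]
    return solve2(N, b)

N1, b1 = [[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0]   # 'observation' normals
N2, b2 = [[2.0, 0.0], [0.0, 2.0]], [4.0, 2.0]   # 'model' normals
x = combine_normals(N1, b1, N2, b2)
print(x)
```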
Kris, M G; Yeh, S D; Gralla, R J; Young, C W
1986-01-01
To develop an additional method for the measurement of gastric emptying in supine subjects, 10 normal subjects were given a test meal containing 99Tc-labelled scrambled egg as the "solid" phase marker and 111In in tapwater as the marker for the "liquid" phase. The mean time for emptying 50% of a phase (t1/2) was 85 min for the "solid" phase and 29 min for the "liquid" phase. Three individuals were restudied, with a mean difference between the two determinations of 10.8% for the "solid" phase and 6.5% for the "liquid" phase. Twenty-six additional studies have been attempted and successfully completed in symptomatic patients with advanced cancer. This method provides a simple and reproducible procedure for the determination of gastric emptying that yields results similar to those reported for other test meals and can be used in debilitated patients.
International Nuclear Information System (INIS)
Moroz, J.; Regieli, A.; Karski, J.; Witkowska, R.; Golabek, A.
1982-01-01
Two modifications of a radioimmunoassay of pregnancy-specific beta-1-glycoprotein (SP-1) are described, which differ in their sensitivity and assay duration and thus in the possibility of their clinical application. Using these methods, the concentration of SP-1 was determined in 180 serum samples from healthy women in different periods of normal pregnancy, from 15 non-pregnant women, and from 16 healthy men, as well as in 20 samples of amniotic fluid and 15 samples of umbilical vein blood. The described technique of SP-1 radioimmunoassay is useful for assessing the concentration of this protein in the serum of pregnant women throughout pregnancy. Selection of the proper modification of the method makes it possible to adapt its sensitivity and assay time to clinical needs. (author)
International Nuclear Information System (INIS)
Moulopoulos, S.; Mantzos, J.; Gyftaki, E.; Kesse-Elias, M.; Alevizou-Terzaki, V.; Souli-Tsimili, E.
1978-01-01
A method is described for measuring the total serum folate binding capacity (TBC) after treating the serum with urea at pH 5.5, the unsaturated serum folate binding capacity (UBC) being determined without treatment with urea. The method was applied to 50 normal controls and 20 patients with homozygous β-thalassaemia. The results show an increase in folate binding capacity after treating the serum with urea in all cases studied. There is no correlation between serum folic acid level and total or unsaturated folate binding capacity or per cent saturation. The method described is a simple and rapid one for screening the different groups studied for saturated and unsaturated specific folate-binding proteins. (author)
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.
Tijs, S.H.; Moretti, S.; Brânzei, R.; Norde, H.W.
2005-01-01
A new way is presented to define, for minimum cost spanning tree (mcst-) games, the irreducible core, which was introduced by Bird in 1976. The Bird core correspondence turns out to have interesting monotonicity and additivity properties, and each stable cost monotonic allocation rule for mcst-problems …
An analysis of the stability and monotonicity of a kind of control models
Directory of Open Access Journals (Sweden)
LU Yifa
2013-06-01
Full Text Available The stability and monotonicity of control systems with parameters are considered. Using the iterative relationship of the coefficients of the characteristic polynomials and the Mathematica software, some sufficient conditions for the monotonicity and stability of such systems are given.
A simple algorithm for computing positively weighted straight skeletons of monotone polygons
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-01-01
We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri
An alternative approach for estimation of the first excursion probability of any system is based on calculating the evolution of the Probability Density Function (PDF) of the process and integrating it on the specified domain; this provides the most accurate results among the three classes of methods. The solution of the Fokker-Planck-Kolmogorov (FPK) equation for systems governed by a stochastic differential equation driven by Gaussian white noise will give the sought time variation of the probability density function. However, the analytical solution of the FPK is available for only a few dynamic systems. The introduced method likewise describes the evolution of the PDF of a stochastic process, hence it is an alternative to the FPK. The considerable advantage of the introduced method over the FPK is that its solution does not require high computational cost, which extends its range of applicability to high-order structural dynamic problems.
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.
2013-01-01
Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is addressed in the present paper by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-mega-watt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels, is investigated through the scheme of extreme value distribution instead of other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...
Boyte, Stephen; Wylie, Bruce K.; Rigge, Matthew B.; Dahal, Devendra
2018-01-01
Data fused from distinct but complementary satellite sensors mitigate tradeoffs that researchers make when selecting between spatial and temporal resolutions of remotely sensed data. We integrated data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor aboard the Terra satellite and the Operational Land Imager sensor aboard the Landsat 8 satellite into four regression-tree models and applied those data to a mapping application. This application produced downscaled maps that utilize the 30-m spatial resolution of Landsat in conjunction with daily acquisitions of MODIS normalized difference vegetation index (NDVI) that are composited and temporally smoothed. We produced four weekly, atmospherically corrected, and nearly cloud-free, downscaled 30-m synthetic MODIS NDVI predictions (maps) built from these models. Model results were strong with R2 values ranging from 0.74 to 0.85. The correlation coefficients (r ≥ 0.89) were strong for all predictions when compared to corresponding original MODIS NDVI data. Downscaled products incorporated into independently developed sagebrush ecosystem models yielded mixed results. The visual quality of the downscaled 30-m synthetic MODIS NDVI predictions were remarkable when compared to the original 250-m MODIS NDVI. These 30-m maps improve knowledge of dynamic rangeland seasonal processes in the central Great Basin, United States, and provide land managers improved resource maps.
International Nuclear Information System (INIS)
Killough, G.G.; Eckerman, K.F.
1986-09-01
This report describes the derivation of an age- and sex-dependent model of radioiodine dosimetry in the thyroid and the application of the model to estimating the thyroid dose for each of 4215 patients who were exposed to 131I in diagnostic and therapeutic procedures. In most cases, the data available consisted of the patient's age at the time of administration, the patient's sex, the quantity of activity administered, the clinically determined uptake of radioiodine by the thyroid, and the time after administration at which the uptake was determined. The model was made to conform to these data requirements by the use of age-specific estimates of the biological half-time of iodine in the thyroid and an age- and sex-dependent representation of the mass of the thyroid. Also, it was assumed that the thyroid burden was maximum at 24 hours after administration (the 131I dose is not critically sensitive to this assumption). The metabolic model is of the form A(t) = K × (exp(−μ₁t) − exp(−μ₂t)) μCi, where μ_i = λ_r + λ_i^b (i = 1, 2), λ_r is the radiological decay-rate coefficient, and the λ_i^b are biological removal-rate coefficients. The values of λ_i^b are determined by solving a nonlinear equation that depends on assumptions about the time of maximum uptake and the eventual biological loss rate (through which age dependence enters). An addendum (Appendix C) extends the method to other radioiodines and gives age- and sex-dependent dose conversion factors for most isotopes
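The two-exponential retention model above is simple to evaluate directly. The scale K and rate coefficients below are illustrative placeholders, not values derived from the report's patient data:

```python
import math

def thyroid_activity(t_hours, K=1.0, mu1=0.001, mu2=0.3):
    """Two-exponential retention model A(t) = K*(exp(-mu1*t) - exp(-mu2*t)).

    K (in uCi) and the rate coefficients mu_i = lambda_r + lambda_i_b
    (in 1/h) are hypothetical example values, chosen only so that the
    burden peaks on the order of a day after administration.
    """
    return K * (math.exp(-mu1 * t_hours) - math.exp(-mu2 * t_hours))

# The burden rises from zero, peaks, then decays; the report assumes the
# maximum occurs near 24 h after administration.
peak_t = max(range(0, 200), key=thyroid_activity)
```

With these placeholder rates the coarse grid search puts the peak at roughly 19 h, consistent with a maximum on the order of a day.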
Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks
Directory of Open Access Journals (Sweden)
Heng-you Lan
2013-01-01
Full Text Available We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operator associated with the relatively A-maximal m-relaxed monotone operator and the new generalized Yosida approximations based on relatively A-maximal m-relaxed monotonicity framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions on (relatively maximal monotone mappings in Hilbert as well as Banach space and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.
Directory of Open Access Journals (Sweden)
José Guilherme Cecatti
2003-02-01
Full Text Available OBJECTIVE: to evaluate the agreement between ultrasound-estimated fetal weight (EFW) and neonatal weight, the performance of the normal EFW-for-gestational-age curve in the diagnosis of fetal/neonatal weight deviations, and associated factors. METHODS: 186 pregnant women attended from November 1998 to January 2000 participated in the study, with ultrasound evaluation up to 3 days before delivery, determination of the EFW and the amniotic fluid index, and delivery at the institution. The EFW was calculated and classified according to the curve of normal EFW values as: small for gestational age (SGA), adequate for gestational age (AGA), and large for gestational age (LGA). The same classification was applied to neonatal weight. The variability of the measurements and the degree of linear correlation between EFW and neonatal weight were calculated, as well as the sensitivity, specificity, and predictive values of the normal EFW curve for the diagnosis of neonatal weight deviations. RESULTS: the difference between EFW and neonatal weight ranged from −540 to +594 g, with a mean of +47.1 g, and the two measurements showed a linear correlation coefficient of 0.94. The normal EFW curve had a sensitivity of 100% and a specificity of 90.5% in detecting SGA at birth, and of 94.4% and 92.8%, respectively, in detecting LGA, although the positive predictive values were low for both. CONCLUSIONS: the ultrasound estimate of fetal weight agreed with neonatal weight, overestimating it by only about 47 g on average, and the EFW curve performed well in the diagnostic screening of SGA and LGA newborns.
Local Monotonicity and Isoperimetric Inequality on Hypersurfaces in Carnot groups
Directory of Open Access Journals (Sweden)
Francesco Paolo Montefalcone
2010-12-01
Full Text Available Let G be a k-step Carnot group of homogeneous dimension Q. We shall present some of the results recently obtained in [32] and, in particular, an intrinsic isoperimetric inequality for a C²-smooth compact hypersurface S with boundary ∂S. We stress that S and ∂S are endowed with homogeneous measures of dimension n−1 and n−2, respectively, which are actually equivalent to the intrinsic (Q−1)-dimensional and (Q−2)-dimensional Hausdorff measures with respect to a given homogeneous metric ρ on G. This result generalizes a classical inequality, involving the mean curvature of the hypersurface, proven independently by Michael and Simon [29] and Allard [1]. One may also deduce some related Sobolev-type inequalities. The strategy of the proof is inspired by the classical one and will be discussed in the first section. After recalling some preliminary notions about Carnot groups, we shall begin by proving a linear isoperimetric inequality. The second step is a local monotonicity formula. Then we achieve the proof by a covering argument. We stress, however, that there are many differences due to our non-Euclidean setting. Some of the tools developed ad hoc are, in order, a "blow-up" theorem, which holds true also for characteristic points, and a smooth coarea formula for the HS-gradient. Other tools are the horizontal integration by parts formula and the first variation formula for the H-perimeter, already developed in [30, 31] and then generalized to hypersurfaces having non-empty characteristic set in [32]. These results can be useful in the study of minimal and constant horizontal mean curvature hypersurfaces in Carnot groups.
DEFF Research Database (Denmark)
Mocroft, Amanda; Lundgren, Jens D; Ross, Michael
2016-01-01
BACKGROUND: Whether or not the association between some antiretrovirals used in HIV infection and chronic kidney disease is cumulative is a controversial topic, especially in patients with initially normal renal function. In this study, we aimed to investigate the association between duration of exposure to antiretrovirals and the development of chronic kidney disease in people with initially normal renal function, as measured by estimated glomerular filtration rate (eGFR). METHODS: In this prospective international cohort study, HIV-positive adult participants (aged ≥16 years) from the D:A:D study (based in Europe, the USA, and Australia) with first eGFR greater than 90 mL/min per 1·73 m² were followed from baseline (first eGFR measurement after Jan 1, 2004) until the occurrence of one of the following: chronic kidney disease; last eGFR measurement; Feb 1, 2014; or final visit plus 6 months.
International Nuclear Information System (INIS)
Naderi, S Mehdizadeh; Karimipourfard, M; Lotfalizadeh, F; Zamani, E; Molaeimanesh, Z; Sadeghi, M; Sina, S; Faghihi, R; Entezarmahdi, M
2015-01-01
Purpose: I-131 is one of the most frequently used radionuclides in nuclear medicine departments. The radiation workers who manipulate the unsealed, radiotoxic iodine should be monitored for internal contamination. In this study a protocol was established for estimating the I-131 activity absorbed in the thyroid glands of nuclear medicine staff in normal working conditions and also in accidents. Methods: I-131 with an activity of 10 μCi was injected inside the thyroid gland of a home-made anthropomorphic neck phantom. The phantom is made up of PMMA as soft tissue and aluminium as bone. The dose rate at different distances from the surface of the neck phantom was measured using a scintillator detector for a duration of two months. Then, calibration factors were obtained for converting the dose rate at each distance to the iodine activity inside the thyroid. Results: According to the results of this study, the calibration factors for converting the dose rates (nSv/h) at distances of 0 cm, 1 cm, 6 cm, 11 cm, and 16 cm to the activity (kBq) inside the thyroid were found to be 0.03, 0.04, 0.14, 0.29, and 0.49, respectively. Conclusion: This method can be effectively used for quick estimation of the I-131 concentration inside the thyroid of the staff for daily checks in normal working conditions and also in accidents.
Directory of Open Access Journals (Sweden)
Si Hyung Lee
Full Text Available To investigate the association between estimated trans-lamina cribrosa pressure difference (TLCPD) and the prevalence of normal tension glaucoma (NTG) with low-teen and high-teen intraocular pressure (IOP) using a population-based study design. A total of 12,743 adults (≥ 40 years of age) who participated in the Korean National Health and Nutrition Examination Survey (KNHANES) from 2009 to 2012 were included. Using a previously developed formula, cerebrospinal fluid pressure (CSFP) in mmHg was estimated as CSFP = 0.55 × body mass index (kg/m²) + 0.16 × diastolic blood pressure (mmHg) − 0.18 × age (years) − 1.91. TLCPD was calculated as IOP − CSFP. The NTG subjects were divided into two groups according to IOP level: low-teen NTG (IOP ≤ 15 mmHg) and high-teen NTG (15 mmHg < IOP ≤ 21 mmHg) groups. The association between TLCPD and the prevalence of NTG was assessed in the low- and high-teen IOP groups. In the normal population (n = 12,069), the weighted mean estimated CSFP was 11.69 ± 0.04 mmHg and the weighted mean TLCPD 2.31 ± 0.06 mmHg. Significantly higher TLCPD (p < 0.001; 6.48 ± 0.27 mmHg) was found in the high-teen NTG group compared with the normal group. On the other hand, there was no significant difference in TLCPD between normal and low-teen NTG subjects (p = 0.395; 2.31 ± 0.06 vs. 2.11 ± 0.24 mmHg). Multivariate logistic regression analysis revealed that TLCPD was significantly associated with the prevalence of NTG in the high-teen IOP group (p = 0.006; OR: 1.09; 95% CI: 1.02, 1.15), but not the low-teen IOP group (p = 0.636). Instead, the presence of hypertension was significantly associated with the prevalence of NTG in the low-teen IOP group (p < 0.001; OR: 1.65; 95% CI: 1.26, 2.16). TLCPD was significantly associated with the prevalence of NTG in high-teen IOP subjects, but not low-teen IOP subjects, in whom hypertension may be more closely associated. This study suggests that the underlying mechanisms may differ between low-teen and high-teen NTG patients.
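The CSFP and TLCPD formulas quoted in the abstract are simple enough to compute directly. The example inputs below are hypothetical, not study data:

```python
def estimated_csfp(bmi_kg_m2, dbp_mmhg, age_years):
    """Estimated cerebrospinal fluid pressure (mmHg) from the formula used
    in the study: 0.55*BMI + 0.16*DBP - 0.18*age - 1.91."""
    return 0.55 * bmi_kg_m2 + 0.16 * dbp_mmhg - 0.18 * age_years - 1.91

def tlcpd(iop_mmhg, bmi_kg_m2, dbp_mmhg, age_years):
    """Trans-lamina cribrosa pressure difference = IOP - CSFP."""
    return iop_mmhg - estimated_csfp(bmi_kg_m2, dbp_mmhg, age_years)

# Hypothetical subject: BMI 24 kg/m2, DBP 80 mmHg, age 50 y, IOP 14 mmHg.
csfp = estimated_csfp(24, 80, 50)   # 15.09 mmHg
diff = tlcpd(14, 24, 80, 50)        # 14 - 15.09 = -1.09 mmHg
```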
International Nuclear Information System (INIS)
Lepretre, C.; Millard, A.; Nahas, G.
1989-01-01
The structural analysis of reinforced concrete structures is usually performed either by means of simplified methods of the strength-of-materials type (global methods), or by means of detailed methods of the continuum-mechanics type (local methods). For this second type, some constitutive models are available for concrete and rebars in a certain number of finite element systems. These models are often validated on simple homogeneous tests. Therefore, it is important to appraise the validity of the results when applying them to the analysis of a reinforced concrete structure, in order to be able to make correct predictions of the actual behaviour, under normal and faulty conditions. For this purpose, some tests have been performed at I.N.S.A. de Lyon on reinforced concrete beams subjected to monotonic and cyclic loadings, in order to generate reference solutions to be compared with the numerical predictions given by two finite element systems: - CASTEM, developed by C.E.A./D.E.M.T. - ELEFINI, developed by I.N.S.A. de Lyon
Multistability and gluing bifurcation to butterflies in coupled networks with non-monotonic feedback
International Nuclear Information System (INIS)
Ma Jianfu; Wu Jianhong
2009-01-01
Neural networks with a non-monotonic activation function have been proposed to increase their capacity for memory storage and retrieval, but there is still a lack of rigorous mathematical analysis and detailed discussions of the impact of time lag. Here we consider a two-neuron recurrent network. We first show how supercritical pitchfork bifurcations and a saddle-node bifurcation lead to the coexistence of multiple stable equilibria (multistability) in the instantaneous updating network. We then study the effect of time delay on the local stability of these equilibria and show that four equilibria lose their stability at a certain critical value of time delay, and Hopf bifurcations of these equilibria occur simultaneously, leading to multiple coexisting periodic orbits. We apply centre manifold theory and normal form theory to determine the direction of these Hopf bifurcations and the stability of bifurcated periodic orbits. Numerical simulations show very interesting global patterns of periodic solutions as the time delay is varied. In particular, we observe that these four periodic solutions are glued together along the stable and unstable manifolds of saddle points to develop a butterfly structure through a complicated process of gluing bifurcations of periodic solutions
Directory of Open Access Journals (Sweden)
Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D
2015-01-01
Full Text Available DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements.
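The LMSO idea, a least-median-of-squares fit constrained to pass through the origin, can be sketched with a simple candidate-slope search. This is an illustrative approximation with names of my choosing, not the authors' algorithm:

```python
import statistics

def lmso_slope(x, y):
    """Approximate least-median-of-squares fit of y = b*x forced through
    the origin: try each pointwise slope y_i/x_i as a candidate and keep
    the one minimising the median squared residual. The median criterion
    makes the fit robust to asymmetric outliers, unlike least squares."""
    candidates = [yi / xi for xi, yi in zip(x, y) if xi != 0]

    def median_sq_resid(b):
        return statistics.median((yi - b * xi) ** 2 for xi, yi in zip(x, y))

    return min(candidates, key=median_sq_resid)

# One gross outlier (last point) does not drag the slope away from 2.
dpop = [1.0, 2.0, 3.0, 4.0, 10.0]
ppv = [2.0, 4.0, 6.0, 8.0, 50.0]
slope = lmso_slope(dpop, ppv)   # robust slope: 2.0
```

The LMSO-N normalization described in the abstract would then rescale DPOP using the fitted gradient; the exact orientation of that rescaling is not specified here.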
Directory of Open Access Journals (Sweden)
Mervan Pašić
2016-10-01
Full Text Available We study non-monotone positive solutions of the second-order linear differential equations $(p(t)x')' + q(t)x = e(t)$, with positive $p(t)$ and $q(t)$. For the first time, some criteria as well as the existence and nonexistence of non-monotone positive solutions are proved in the framework of some properties of solutions $\theta(t)$ of the corresponding integrable linear equation $(p(t)\theta')' = e(t)$. The main results are illustrated by many examples dealing with equations which allow exact non-monotone positive solutions, not necessarily periodic. Finally, we pose some open questions.
International Nuclear Information System (INIS)
Nagel, T.; Shao, H.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O.
2014-01-01
Highlights: • Detailed analysis of cyclic and monotonic loading of thermochemical heat stores. • Fully coupled reactive heat and mass transport. • Reaction kinetics can be simplified in systems limited by heat transport. • Operating lines valid during monotonic and cyclic loading. • Local integral degree of conversion to capture heterogeneous material usage. - Abstract: Thermochemical reactions can be employed in heat storage devices. The choice of suitable reactive material pairs involves a thorough kinetic characterisation by, e.g., extensive thermogravimetric measurements. Before testing a material on a reactor level, simulations with models based on the Theory of Porous Media can be used to establish its suitability. The extent to which the accuracy of the kinetic model influences the results of such simulations is unknown yet fundamental to the validity of simulations based on chemical models of differing complexity. In this article we therefore compared simulation results on the reactor level based on an advanced kinetic characterisation of a calcium oxide/hydroxide system to those obtained by a simplified kinetic model. Since energy storage is often used for short term load buffering, the internal reactor behaviour is analysed under cyclic partial loading and unloading in addition to full monotonic charge/discharge operation. It was found that the predictions by both models were very similar qualitatively and quantitatively in terms of thermal power characteristics, conversion profiles, temperature output, reaction duration and pumping powers. Major differences were, however, observed for the reaction rate profiles themselves. We conclude that for systems not limited by kinetics the simplified model seems sufficient to estimate the reactor behaviour. The degree of material usage within the reactor was further shown to strongly vary under cyclic loading conditions and should be considered when designing systems for certain operating regimes
Logarithmically complete monotonicity of a function related to the Catalan-Qi function
Directory of Open Access Journals (Sweden)
Qi Feng
2016-08-01
Full Text Available In the paper, the authors find necessary and sufficient conditions such that a function related to the Catalan-Qi function, which is an alternative generalization of the Catalan numbers, is logarithmically completely monotonic.
Monotone matrix transformations defined by the group inverse and simultaneous diagonalizability
International Nuclear Information System (INIS)
Bogdanov, I I; Guterman, A E
2007-01-01
Bijective linear transformations of the matrix algebra over an arbitrary field that preserve simultaneous diagonalizability are characterized. This result is used for the characterization of bijective linear monotone transformations. Bibliography: 28 titles.
Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2013-01-01
In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
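The scheme described can be sketched in a few lines, with the simplification that the local-search step of full MBH is omitted; names, parameters, and the test function are illustrative, not the flight-dynamics code:

```python
import math
import random

def mbh(objective, x0, step=0.3, iters=2000, seed=1):
    """Monotonic basin hopping sketch: perturb the incumbent with draws
    from a long-tailed Cauchy distribution and accept only improvements
    (the 'monotonic' acceptance rule). Full MBH would also run a local
    search after each hop; that step is omitted here for brevity."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), objective(x0)
    for _ in range(iters):
        # Cauchy draw via inverse CDF: step * tan(pi * (u - 0.5)).
        trial = [xi + step * math.tan(math.pi * (rng.random() - 0.5))
                 for xi in best_x]
        f = objective(trial)
        if f < best_f:               # monotonic acceptance: never go uphill
            best_x, best_f = trial, f
    return best_x, best_f

# A standard multimodal test function; global minimum 0 at the origin.
rastrigin = lambda x: sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10
                          for xi in x)
```

The long tail of the Cauchy distribution mixes many small local hops with occasional large jumps, which is exactly the faster, more thorough search behaviour the abstract attributes to these distributions.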
Critical undrained shear strength of sand-silt mixtures under monotonic loading
Directory of Open Access Journals (Sweden)
Mohamed Bensoula
2014-07-01
Full Text Available This study uses experimental triaxial tests with monotonic loading to develop empirical relationships to estimate undrained critical shear strength. The effect of the fines content on undrained shear strength is analyzed for different density states. The parametric analysis indicates that, based on the soil void ratio and fines content properties, the undrained critical shear strength first increases and then decreases as the proportion of fines increases, which demonstrates the influence of fines content on a soil's vulnerability to liquefaction. A series of monotonic undrained triaxial tests were performed on reconstituted saturated sand-silt mixtures. Beyond 30% fines content, a fraction of the silt participates in the soil's skeleton chain force. In this context, the concept of the equivalent intergranular void ratio may be an appropriate parameter to express the critical shear strength of the studied soil. This parameter is able to control the undrained shear strength of non-plastic silt and sand mixtures with different densities.
Monotone methods for solving a boundary value problem of second order discrete system
Directory of Open Access Journals (Sweden)
Wang Yuan-Ming
1999-01-01
Full Text Available A new concept of a pair of upper and lower solutions is introduced for a boundary value problem of second order discrete system. A comparison result is given. An existence theorem for a solution is established in terms of upper and lower solutions. A monotone iterative scheme is proposed, and the monotone convergence rate of the iteration is compared and analyzed. The numerical results are given.
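A minimal numerical sketch of such a monotone iterative scheme (the model problem, helper names, and parameters are of my choosing, not the paper's): starting from the lower solution u ≡ 0 of the discrete BVP −(u[i−1] − 2u[i] + u[i+1])/h² = f(u[i]) with zero boundary values and f nonnegative and nondecreasing, the iterates increase monotonically toward a solution:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def monotone_iteration(f, n=20, steps=8):
    """Monotone iterative scheme for the discrete BVP
        -(u[i-1] - 2u[i] + u[i+1]) / h**2 = f(u[i]),  u[0] = u[n] = 0,
    on a uniform grid over [0, 1], starting from the lower solution u = 0.
    Each step solves the linear problem with f frozen at the previous
    iterate; for f >= 0 and nondecreasing the iterates increase."""
    h = 1.0 / n
    u = [0.0] * (n - 1)                  # interior unknowns only
    iterates = [u]
    for _ in range(steps):
        d = [h * h * f(ui) for ui in u]  # frozen right-hand side
        u = thomas([-1.0] * (n - 1), [2.0] * (n - 1), [-1.0] * (n - 1), d)
        iterates.append(u)
    return iterates
```

Printing successive iterates shows the monotone convergence rate that the abstract analyzes: each sweep raises the profile, with rapidly shrinking increments.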
Global Attractivity Results for Mixed-Monotone Mappings in Partially Ordered Complete Metric Spaces
Directory of Open Access Journals (Sweden)
Kalabušić S
2009-01-01
Full Text Available We prove fixed point theorems for mixed-monotone mappings in partially ordered complete metric spaces which satisfy a weaker contraction condition than the classical Banach contraction condition for all points that are related by the given ordering. We also give a global attractivity result for all solutions of the difference equation $x_{n+1} = f(x_n, x_{n-1})$, where $f$ satisfies mixed-monotone conditions with respect to the given ordering.
Reduction theorems for weighted integral inequalities on the cone of monotone functions
International Nuclear Information System (INIS)
Gogatishvili, A; Stepanov, V D
2013-01-01
This paper surveys results related to the reduction of integral inequalities involving positive operators in weighted Lebesgue spaces on the real semi-axis and valid on the cone of monotone functions, to certain more easily manageable inequalities valid on the cone of non-negative functions. The case of monotone operators is new. As an application, a complete characterization for all possible integrability parameters is obtained for a number of Volterra operators. Bibliography: 118 titles
Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables
Chikalov, Igor
2013-01-01
In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.
Directory of Open Access Journals (Sweden)
Heinz Werner Höppel
2012-02-01
Full Text Available The monotonic and cyclic deformation behavior of the ultrafine-grained metastable austenitic steel AISI 304L, produced by severe plastic deformation, was investigated. Under monotonic loading, the martensitic phase transformation in the ultrafine-grained state is strongly favored. Under cyclic loading, the martensitic transformation behavior is similar to that of the coarse-grained condition, but the cyclic stress response is three times larger for the ultrafine-grained condition.
International Nuclear Information System (INIS)
Duan Shukai; Liao Xiaofeng
2007-01-01
A new chaotic delayed neuron model with a non-monotonously increasing transfer function, called the chaotic Liao delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods of circuit design, especially for circuits with a time-delay unit and a non-monotonously increasing activation unit, are considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.
A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data
Sang, Yan-Fang; Sun, Fubao; Singh, Vijay P.; Xie, Ping; Sun, Jian
2018-01-01
The hydroclimatic process is changing non-monotonically, and identifying its trends is a great challenge. Building on discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach on two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961 to 2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, as well as the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than about 30 years (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann-Kendall test. In comparison, the DWS approach overcame this problem and detected significant non-monotonic trends at 380 stations, which helps in understanding and interpreting the spatiotemporal variability of the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the proposed DWS approach has the potential for wide use in the hydrological and climate sciences.
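For comparison with the Mann-Kendall (MK) test mentioned above, here is a minimal MK sketch using the no-ties variance formula. Because the S statistic accumulates pairwise signs over the whole series, a rise followed by a symmetric fall cancels out, which is why MK can miss non-monotonic trends:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (sketch, no-ties variance).
    Returns the S statistic and the standardized Z score;
    |Z| > 1.96 indicates a significant monotonic trend at the 5% level."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under H0, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z
```

A strictly increasing series is flagged as significant, while a tent-shaped (up-then-down) series of the same length is not.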
Kang, Hyeon-Ah; Su, Ya-Hui; Chang, Hua-Hua
2018-03-08
A monotone relationship between a true score (τ) and a latent trait level (θ) has been a key assumption for many psychometric applications. The monotonicity property in dichotomous response models is evident as a result of a transformation via a test characteristic curve. Monotonicity in polytomous models, in contrast, is not immediately obvious because item response functions are determined by a set of response category curves, which are conceivably non-monotonic in θ. The purpose of the present note is to demonstrate strict monotonicity in ordered polytomous item response models. Five models that are widely used in operational assessments are considered for proof: the generalized partial credit model (Muraki, 1992, Applied Psychological Measurement, 16, 159), the nominal model (Bock, 1972, Psychometrika, 37, 29), the partial credit model (Masters, 1982, Psychometrika, 47, 147), the rating scale model (Andrich, 1978, Psychometrika, 43, 561), and the graded response model (Samejima, 1972, A general model for free-response data (Psychometric Monograph no. 18). Psychometric Society, Richmond). The study asserts that the item response functions in these models strictly increase in θ and thus there exists strict monotonicity between τ and θ under certain specified conditions. This conclusion validates the practice of customarily using τ in place of θ in applied settings and provides theoretical grounds for one-to-one transformations between the two scales. © 2018 The British Psychological Society.
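The monotonicity claim can be checked numerically for the generalized partial credit model, one of the five models considered. The discrimination and step parameters below are arbitrary illustrative values:

```python
import math

def gpcm_expected_score(theta, a, b):
    """Expected item score under the generalized partial credit model (sketch).
    Category probabilities are P_k proportional to exp(sum_{v<=k} a*(theta - b_v)),
    k = 0..m, with the empty sum for k = 0 taken as 0."""
    num = [1.0]  # exp(0) for category 0
    cum = 0.0
    for bv in b:
        cum += a * (theta - bv)
        num.append(math.exp(cum))
    z = sum(num)
    return sum(k * p / z for k, p in enumerate(num))
```

On a grid of trait levels, the expected score increases strictly in theta and stays within the score range, consistent with the strict monotonicity result asserted in the note.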
Baston, David S; Denison, Michael S
2011-02-15
The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e. the Ah receptor (AhR)) allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX response to PCDD/Fs from sediment and soil extracts and not only report the occurrence of superinduction of the CALUX bioassay, but we describe a mechanistically based approach for normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. Copyright © 2010 Elsevier B.V. All rights reserved.
Fukushima, Taku; Hasegawa, Hideyuki; Kanai, Hiroshi
2011-07-01
Red blood cell (RBC) aggregation, as one of the determinants of blood viscosity, plays an important role in blood rheology, including the condition of blood. RBC aggregation is induced by the adhesion of RBCs when the electrostatic repulsion between RBCs weakens owing to increases in protein and saturated fatty acid levels in blood; excessive RBC aggregation leads to various circulatory diseases. This study was conducted to establish a noninvasive quantitative method for the assessment of RBC aggregation. The power spectrum of ultrasonic RF echoes from non-aggregating RBCs, which shows the frequency property of scattering, exhibits Rayleigh behavior. On the other hand, ultrasonic RF echoes from aggregating RBCs contain reflection components, which have no frequency dependence. By dividing the measured power spectrum of echoes from RBCs in the lumen by that of echoes from a posterior wall of the vein in the dorsum manus, the attenuation property of the propagating medium and the frequency responses of the transmitting and receiving transducers are removed from the former spectrum. RBC aggregation was assessed by the diameter of a scatterer, which was estimated by minimizing the squared difference between the measured normalized power spectrum and the theoretical power spectrum. In basic experiments, spherical scatterers with diameters of 5, 11, 15, and 30 µm were measured, and the estimated scatterer diameters were close to the actual diameters. Furthermore, the transient change in scatterer diameter was measured in an in vivo experiment on a 24-year-old healthy male during avascularization using a cuff. The estimated diameters (12-22 µm) of RBCs during avascularization were larger than the diameters (4-8 µm) at rest and after recirculation. These results show the potential of the proposed method for noninvasive assessment of RBC aggregation.
A General Model for Repeated Audit Controls Using Monotone Subsampling
Raats, V.M.; van der Genugten, B.B.; Moors, J.J.A.
2002-01-01
In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories.To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations.In this paper a model is presented for a general repeated audit control system, where k subsequent auditors classify elements into r categories.Two different sub-sampling procedures will be discussed, named 'stra...
Transient Monotonic and Cyclic Load Effects on Mono Bucket Foundations
DEFF Research Database (Denmark)
Nielsen, Søren Dam
Today, 80 % of all European offshore wind turbines are installed on monopiles. A cost-effective alternative to the monopile is the mono bucket foundation. For an offshore wind turbine foundation in open seas, the dominant load often comes from waves. During storms, large waves are formed ... the foundation is sucked to the seabed, creating extra capacity during the impact. Over the lifetime of an offshore wind turbine, the foundation will be hit by millions of waves. Each wave might lead to a permanent rotation of the foundation. Therefore, it is important to be able to estimate the total deformation ...
Failure mechanisms of closed-cell aluminum foam under monotonic and cyclic loading
International Nuclear Information System (INIS)
Amsterdam, E.; De Hosson, J.Th.M.; Onck, P.R.
2006-01-01
This paper concentrates on the differences in failure mechanisms of Alporas closed-cell aluminum foam under either monotonic or cyclic loading. The emphasis lies on aspects of crack nucleation and crack propagation in relation to the microstructure. The cell wall material consists of Al dendrites and an interdendritic network of Al₄Ca and Al₂₂CaTi₂ precipitates. In situ scanning electron microscopy monotonic tensile tests were performed on small samples to study crack nucleation and propagation. Digital image correlation was employed to map the strain in the cell wall on the characteristic microstructural length scale. Monotonic tensile tests and tension-tension fatigue tests were performed on larger samples to observe the overall fracture behavior and crack path in monotonic and cyclic loading. The crack nucleation and propagation paths in both loading conditions are revealed, and it can be concluded that during monotonic tension cracks nucleate in and propagate partly through the Al₄Ca interdendritic network, whereas under cyclic loading cracks nucleate and propagate through the Al dendrites.
Waldhäusl, W K; Bratusch-Marrain, P R; Francesconi, M; Nowotny, P; Kiss, A
1982-01-01
This study examines the feasibility of deriving the 24-h insulin requirement of insulin-dependent diabetic patients who were devoid of any endogenous insulin release (IDD) from the insulin-production rate (IPR) of healthy man (basal, 17 mU/min; stimulated, 1.35 U/12.5 g glucose). To this end, continuous intravenous insulin infusion (CIVII) was initiated at a precalculated rate of 41.2 +/- 4.6 (SD) U/24 h in IDD (N = 12). Blood glucose profiles were compared with those obtained during intermittent subcutaneous (s.c.) insulin therapy (IIT) and those of healthy controls (N = 7). Regular insulin (Hoechst CS) was infused with an adapted Mill Hill Infuser at a basal infusion rate of 1.6 U/h (6:00 a.m. to 8:00 p.m.) and of 0.8 U/h from 8:00 p.m. to 6:00 a.m. Preprandial insulin (3.2-6.4 U) was added for breakfast, lunch, and dinner. Daily individual food intake totaled 7688 +/- 784 kJ (1836 +/- 187 kcal)/24 h, including 184 +/- 37 g of glucose. Proper control of blood glucose (BG) (mean BG 105 +/- 10 mg/dl; mean amplitude of glycemic excursions 54 +/- 18 mg/dl; and 1-h postprandial BG levels not exceeding 160 mg/dl) and of plasma concentrations of beta-hydroxybutyrate and lactate was maintained by 41.4 +/- 4.4 U insulin/24 h. Although BG values only approximated the upper normal range as seen in healthy controls, they were well within the range reported by others during CIVII. Therefore, we conclude that in adult IDD completely devoid of endogenous insulin (1) the IPR of normal man can be used during CIVII as an estimate of the patient's minimal insulin requirement per 24 h, and (2) this approach allows for a blood glucose profile close to the upper range of a normal control group. Thus, deriving a patient's daily insulin dose from the insulin production rate of healthy man may provide an additional experimental protocol that aids in making general calculations of the necessary insulin dose instead of using trial and error or a closed-loop insulin infusion system.
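A back-of-envelope check of the stated infusion schedule (assuming the 06:00-20:00 / 20:00-06:00 split given above, i.e. 14 h at the higher basal rate and 10 h at the lower one, plus three preprandial boluses in the stated range) shows how the delivered ~41 U/24 h arises:

```python
# Basal delivery per the stated rates: 1.6 U/h for 14 h plus 0.8 U/h for 10 h.
basal = 1.6 * 14 + 0.8 * 10          # 22.4 + 8.0 = 30.4 U

# Three preprandial boluses, each in the stated 3.2-6.4 U range.
meals_low, meals_high = 3 * 3.2, 3 * 6.4   # 9.6 to 19.2 U

total_low, total_high = basal + meals_low, basal + meals_high
# The reported 41.4 U/24 h falls inside this 40.0-49.6 U window.
```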
International Nuclear Information System (INIS)
Breuckmann, F.; Buhr, C.; Maderwald, S.; Bruder, O.; Schlosser, T.; Nassenstein, K.; Erbel, R.; Barkhausen, J.
2011-01-01
An increased normalized gadolinium accumulation (NGA) in the myocardium during early washout has been used for the diagnosis of acute myocarditis (AM). Because the pharmacokinetics of contrast agents are complex, time-related changes in NGA after contrast injection are likely. Since knowledge of the time-related changes of NGA may improve the diagnostic accuracy of MR, our study aimed to estimate the time course of NGA after contrast injection in patients as well as in healthy volunteers. An ECG-triggered inversion recovery SSFP sequence with incrementally increasing inversion times was acquired repetitively over the 15 minutes after injection of 0.2 Gd-DTPA per kg body weight in a 4-chamber view in 15 patients with AM and 20 volunteers. The T1 relaxation times and the longitudinal relaxation rates (R1) of the myocardium and skeletal musculature were calculated for each point in time after contrast injection. The time course of NGA was estimated based on the linear relationship between R1 and tissue Gd concentration. NGA decreased over time in the form of a negative power function in patients with AM and in healthy controls. NGA in AM tended to be higher than in controls (p > 0.05). NGA changes rapidly after contrast injection, which must be considered when measuring NGA. Although we observed a trend towards higher NGA values in patients with AM, with a maximum difference one minute after contrast injection, NGA did not allow us to differentiate patients with AM from healthy volunteers, because the observed differences did not reach a level of significance. (orig.)
Firstenberg, M. S.; Vandervoort, P. M.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.
2000-01-01
OBJECTIVES: We hypothesized that color M-mode (CMM) images could be used to solve the Euler equation, yielding regional pressure gradients along the scanline, which could then be integrated to yield the unsteady Bernoulli equation and estimate noninvasively both the convective and inertial components of the transmitral pressure difference. BACKGROUND: Pulsed and continuous wave Doppler velocity measurements are routinely used clinically to assess the severity of stenotic and regurgitant valves. However, only the convective component of the pressure gradient is measured, thereby neglecting the contribution of inertial forces, which may be significant, particularly for nonstenotic valves. Color M-mode provides a spatiotemporal representation of flow across the mitral valve. METHODS: In eight patients undergoing coronary artery bypass grafting, high-fidelity left atrial and ventricular pressure measurements were obtained synchronously with transmitral CMM digital recordings. The instantaneous diastolic transmitral pressure difference was computed from the M-mode spatiotemporal velocity distribution using the unsteady flow form of the Bernoulli equation and was compared to the catheter measurements. RESULTS: From 56 beats in 16 hemodynamic stages, inclusion of the inertial term ((Δp_I)max = 1.78 ± 1.30 mm Hg) in the noninvasive pressure difference calculation significantly increased the temporal correlation with catheter-based measurement (r = 0.35 ± 0.24 vs. 0.81 ± 0.15). CONCLUSIONS: Inertial forces are significant components of the maximal pressure drop across the normal mitral valve. These can be accurately estimated noninvasively using CMM recordings of transmitral flow, which should improve the understanding of diastolic filling and function of the heart.
Search for scalar-tensor gravity theories with a non-monotonic time evolution of the speed-up factor
Energy Technology Data Exchange (ETDEWEB)
Navarro, A [Dept Fisica, Universidad de Murcia, E30071-Murcia (Spain); Serna, A [Dept Fisica, Computacion y Comunicaciones, Universidad Miguel Hernandez, E03202-Elche (Spain); Alimi, J-M [Lab. de l' Univers et de ses Theories (LUTH, CNRS FRE2462), Observatoire de Paris-Meudon, F92195-Meudon (France)
2002-08-21
We present a method to detect, in the framework of scalar-tensor gravity theories, the existence of stationary points in the time evolution of the speed-up factor. An attractive aspect of this method is that, once the particular scalar-tensor theory has been specified, the stationary points are found through a simple algebraic equation which does not contain any integration. By applying this method to the three classes of scalar-tensor theories defined by Barrow and Parsons, we have found several new cosmological models with a non-monotonic evolution of the speed-up factor. The physical interest of these models is that, as previously shown by Serna and Alimi, they predict the observed primordial abundance of light elements for a very wide range of baryon density. These models are then consistent with recent CMB and Lyman-α estimates of the baryon content of the universe.
A locally adaptive normal distribution
DEFF Research Database (Denmark)
Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren
2016-01-01
The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to replace this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density ... entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models ...
Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods
Mozartova, A.; Savostianov, I.; Hundsdorfer, W.
2015-01-01
© 2014 Elsevier B.V. All rights reserved. One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.
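The distinction between a linear multistep method and its one-leg twin can be made concrete with the explicit two-step Adams-Bashforth pair (an illustrative example; the paper's detailed results cover explicit two-step methods generally). For linear f the two coincide; for nonlinear f they differ, which is where their monotonicity stepsize coefficients can part ways:

```python
import math

def ab2_linear(f, y0, y1, h, steps):
    """Explicit two-step Adams-Bashforth (linear multistep):
       y_{n+2} = y_{n+1} + h*(3/2 f(y_{n+1}) - 1/2 f(y_n)).
    f is evaluated at each past value separately."""
    ys = [y0, y1]
    for _ in range(steps):
        ys.append(ys[-1] + h * (1.5 * f(ys[-1]) - 0.5 * f(ys[-2])))
    return ys

def ab2_one_leg(f, y0, y1, h, steps):
    """One-leg twin of the same method: f is evaluated once, at the
    linear combination of past values:
       y_{n+2} = y_{n+1} + h*f(3/2 y_{n+1} - 1/2 y_n)."""
    ys = [y0, y1]
    for _ in range(steps):
        ys.append(ys[-1] + h * f(1.5 * ys[-1] - 0.5 * ys[-2]))
    return ys
```

The one-leg variant needs to store only past solution values, not past f-evaluations, which is the storage advantage the abstract mentions.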
Energy Technology Data Exchange (ETDEWEB)
Erol, V. [Department of Computer Engineering, Institute of Science, Okan University, Istanbul (Turkey); Netas Telecommunication Inc., Istanbul (Turkey)
2016-04-21
Entanglement has been studied extensively for understanding the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well-known monotones for quantifying entanglement, such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. The study of these monotones has been a hot topic in quantum information [1-7], in order to understand the role of entanglement in this discipline. It can be observed that from any arbitrary quantum pure state a mixed state can be obtained. A natural generalization of this observation is to consider local operations and classical communication (LOCC) transformations between general pure states of two parties. Although this question is a little more difficult, a complete solution has been developed using the mathematical framework of majorization theory [8]. In this work, we analyze the relation between the entanglement monotones concurrence and negativity with respect to majorization for general two-level quantum systems of two particles.
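The two monotones compared in this work can be computed directly for two-qubit states. A minimal numpy sketch using the standard definitions (not the authors' code):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix:
    C = max(0, l1 - l2 - l3 - l4), where l_i are the decreasingly ordered
    square roots of the eigenvalues of rho * (sy x sy) rho* (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    R = rho @ flip @ rho.conj() @ flip
    lam = np.sqrt(np.clip(np.sort(np.linalg.eigvals(R).real)[::-1], 0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def negativity(rho):
    """Negativity: sum of |negative eigenvalues| of the partial transpose
    of rho with respect to the second qubit."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    ev = np.linalg.eigvalsh(pt)
    return float(-ev[ev < 0].sum())
```

For a Bell state the concurrence is 1 and the negativity is 1/2; both vanish on product states.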
Directory of Open Access Journals (Sweden)
Fabio Rueda Calier
2016-01-01
Full Text Available The estimation of sugar cane productivity is very important for the Colombian economy. In this investigation, the Net Primary Production (NPP) model of Kumar & Monteith is applied at regional scale, analyzing spatiotemporal behavior with geomatic techniques and characterizing the edaphoclimatic environment. Field surveys were also conducted to acquire physiological information on the plants evaluated and on the soil conditions of the plantation under study. The acquired data were processed in ArcGIS 10.1, yielding a series of thematic maps of the spatiotemporal distribution of the soil and biophysical characteristics of the plantation. The variables fPAR, PAR and EUR were calculated from the Kumar & Monteith efficiency model: the fraction of absorbed photosynthetically active radiation was derived via remote sensing from the Normalized Difference Vegetation Index (NDVI), and the incident photosynthetically active radiation was recorded by ground sensors. Chemical and physical soil properties were determined in laboratory tests in order to relate the edaphoclimatic conditions to the biophysical variables associated with sugar cane biomass gain for panela production. The information integrated in a Geographic Information System (GIS), together with the edaphic and climatic data recorded in the field, shows the behavior of the plantation as it develops.
Understanding the monotonous life of open vent mafic volcanoes
Costa Rodriguez, F.; Ruth, D. C. S.; Bornas, M.; Rivera, D. J. V. I.
2016-12-01
Mafic open vent volcanoes display prominent degassing plumes during quiescence but also erupt frequently, every few months or years. Their small and mildly explosive eruptions (volatile contents indicate that the magma reservoir system extends at least to 5 km depth. Mg/Fe pyroxene zoning and diffusion modeling suggest that mafic magma intrusion into a shallow, crystal-rich and more evolved reservoir has occurred repeatedly. The time scale for this process is the same for all 9 events, starting about 2 years prior to and continuing up to eruption. We estimate the relative proportions of injecting to resident magma, which vary from about 0.2 to 0.7, probably reflecting the local crystal-melt interaction during intrusion. The near-constant magma composition is probably the result of buffering of new incoming magma by a crystal-rich upper reservoir, and erupted magmas are physical mixtures. However, we do not find evidence of large-scale crystal recycling from one eruption to another, implying a resetting of the system after each event. The recurrent eruptions and intrusions could be driven by the near-continuous degassing of the volcano, which induces a mass imbalance that leads to magma movement from depth to the shallow system [e.g., 1]. [1] Girona et al. (2016). Science Reports doi:10.1038/srep18212
Monotone numerical methods for finite-state mean-field games
Gomes, Diogo A.; Saude, Joao
2017-01-01
Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in a MFG modeling the paradigm-shift problem.
Existence, uniqueness, monotonicity and asymptotic behaviour of travelling waves for epidemic models
International Nuclear Information System (INIS)
Hsu, Cheng-Hsiung; Yang, Tzi-Sheng
2013-01-01
The purpose of this work is to investigate the existence, uniqueness, monotonicity and asymptotic behaviour of travelling wave solutions for a general epidemic model arising from the spread of an epidemic by oral-faecal transmission. First, we apply Schauder's fixed point theorem, combined with a pair of super- and subsolutions, to derive the existence of positive monotone monostable travelling wave solutions. Then, applying Ikehara's theorem, we determine the exponential rates at which the travelling wave solutions converge to two different equilibria as the moving coordinate tends to positive and negative infinity, respectively. Finally, using the sliding method, we prove the uniqueness result provided the travelling wave solutions satisfy some boundedness conditions. (paper)
Masuyama, Hiroyuki
2014-01-01
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
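The augmented-truncation idea can be illustrated with a block size of 1, i.e. an ordinary monotone birth-death chain whose stationary distribution is known in closed form. The transition probabilities below are illustrative assumptions:

```python
import numpy as np

def truncated_stationary(n, p=0.3, q=0.5):
    """Augmented truncation (sketch) of a birth-death chain on {0,1,2,...}
    with up-probability p and down-probability q (p < q, so the chain is
    positive recurrent with geometric stationary distribution (1-r) r^i,
    r = p/q). The chain is truncated to {0,...,n-1}; the probability mass
    that would leave the truncated set is absorbed into the last state's
    diagonal entry, keeping each row stochastic."""
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = q
        if i + 1 < n:
            P[i, i + 1] = p
        P[i, i] = 1.0 - P[i].sum()  # absorbs the truncated mass at i = n-1
    # stationary vector: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

The error against the exact geometric stationary distribution shrinks as the truncation level grows, mirroring the error bounds discussed in the abstract.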
International Nuclear Information System (INIS)
Lee, Tsair-Fwu; Chao, Pei-Ju; Wang, Hung-Yu; Hsu, Hsuan-Chih; Chang, PaoShu; Chen, Wen-Cheng
2012-01-01
With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson's chi-squared test, Nagelkerke's R², the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson's chi-squared test was used to test the goodness of fit and association. Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in a 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with an NPV of 100%, using either the QoL or SEF dataset. Our study shows the agreement between the NTCP parameter modeling based on SEF and
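With the volume parameter n = 1, the LKB model reduces the parotid dose distribution to its mean dose, and the NTCP is a probit (cumulative normal) function of that dose. A sketch using the SEF-fitted parameters quoted above:

```python
import math

def lkb_ntcp(dose_gy, td50=43.6, m=0.18):
    """LKB NTCP for uniform irradiation (with volume parameter n = 1 the
    generalized equivalent uniform dose reduces to the mean dose):
        NTCP = Phi((D - TD50) / (m * TD50)),
    where Phi is the standard normal CDF. Default parameter values are the
    SEF-fitted estimates quoted in the abstract (TD50 = 43.6 Gy, m = 0.18)."""
    t = (dose_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

At D = TD50 the model returns 0.5 by construction, and the complication probability increases monotonically with dose.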
2012-01-01
Background With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with a NPV of 100%, using either the QoL or SEF dataset. Conclusions Our study shows the agreement between the NTCP
Directory of Open Access Journals (Sweden)
Lee Tsair-Fwu
2012-12-01
Bain, Peter A; Kumar, Anupama
2014-08-01
Predicting the effects of mixtures of environmental micropollutants is a priority research area. In this study, the cytotoxicity of ten pharmaceuticals to the rainbow trout cell line RTG-2 was determined using the neutral red uptake assay. Fluoxetine (FL), propranolol (PPN), and diclofenac (DCF) were selected for further study as binary mixtures. Biphasic concentration-response relationships were observed in cells exposed to FL and PPN. In the case of PPN, microscopic examination revealed lysosomal swelling indicative of direct uptake and accumulation of the compound. Three equations describing non-monotonic concentration-response relationships were evaluated and one was found to consistently provide more accurate estimates of the median and 10% effect concentrations compared with a sigmoidal concentration-response model. Predictive modeling of the effects of binary mixtures of FL, PPN, and DCF was undertaken using an implementation of the concentration addition (CA) conceptual model incorporating non-monotonic concentration-response relationships. The cytotoxicity of all three binary combinations could be adequately predicted using CA, suggesting that the toxic mode of action in RTG-2 cells is unrelated to the therapeutic mode of action of these compounds. The approach presented here is widely applicable to the study of mixture toxicity in cases where non-monotonic concentration-response relationships are observed. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
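Concentration addition itself is straightforward once each component has an invertible concentration-response curve: the predicted mixture effect E solves Σᵢ cᵢ/EC(E, i) = 1, i.e. the toxic units sum to one. The sketch below assumes ordinary monotonic Hill curves for brevity; the refinement in this study is precisely to substitute fitted non-monotonic concentration-response functions for `hill_inverse`. All names and parameters are illustrative, not from the paper.

```python
def hill_inverse(effect, ec50, slope):
    """Concentration producing fractional effect E under a monotonic
    Hill curve E = c^h / (c^h + EC50^h)."""
    return ec50 * (effect / (1.0 - effect)) ** (1.0 / slope)

def ca_mixture_effect(concs, ec50s, slopes, tol=1e-10):
    """Concentration-addition prediction: the mixture effect E solves
    sum_i c_i / EC_{E,i} = 1 (toxic units sum to one); solved by
    bisection, since the toxic-unit sum decreases monotonically in E."""
    def toxic_units(e):
        return sum(c / hill_inverse(e, k, h)
                   for c, k, h in zip(concs, ec50s, slopes))
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:
            lo = mid   # assumed effect level is still too low
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a sanity check, a single compound dosed at its EC50, or two identical compounds each at half their EC50, both yield a predicted effect of 0.5.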
International Nuclear Information System (INIS)
Gaihede, Michael; Liao Donghua; Gregersen, Hans
2007-01-01
The quasi-static elastic properties of the tympanic membrane system can be described by the areal modulus of elasticity determined by a middle ear model. The response of the tympanic membrane to quasi-static pressure changes is determined by its elastic properties. Several clinical problems are related to these, but studies are few and mostly not comparable. The elastic properties of membranes can be described by the areal modulus, and these may also be susceptible to age-related changes reflected by changes in the areal modulus. The areal modulus is determined by the relationship between membrane tension and change of the surface area relative to the undeformed surface area. A middle ear model determined the tension-strain relationship in vivo based on data from experimental pressure-volume deformations of the human tympanic membrane system. The areal modulus was determined in both a younger (n = 10) and an older (n = 10) group of normal subjects. The areal modulus for lateral and medial displacement of the tympanic membrane system was smaller in the older group (mean = 0.686 and 0.828 kN m⁻¹, respectively) compared to the younger group (mean = 1.066 and 1.206 kN m⁻¹, respectively), though not significantly (2p = 0.10 and 0.11, respectively). Based on the model the areal modulus was established describing the summated elastic properties of the tympanic membrane system. Future model improvements include exact determination of the tympanic membrane area accounting for its shape via 3D finite element analyses. In vivo estimates of Young's modulus in this study were a factor 2-3 smaller than previously found in vitro. No significant age-related differences were found in the elastic properties as expressed by the areal modulus.
Energy Technology Data Exchange (ETDEWEB)
Gaihede, Michael [Department of Otolaryngology, Head and Neck Surgery, Aalborg Hospital, Aarhus University Hospital, Aalborg (Denmark); Liao Donghua [Centre of Excellence in Visceral Biomechanics and Pain, Aalborg Hospital, Aarhus University Hospital, Aalborg (Denmark); Gregersen, Hans [Centre of Excellence in Visceral Biomechanics and Pain, Aalborg Hospital, Aarhus University Hospital, Aalborg (Denmark)
2007-02-07
Luo, Shunlong; Sun, Yuan
2017-08-01
Quantifications of coherence have been intensively studied in recent years in the context of completely decoherent operations (i.e., von Neumann measurements or, equivalently, orthonormal bases). Here we investigate partial coherence (i.e., coherence in the context of partially decoherent operations such as Lüders measurements). A bona fide measure of partial coherence is introduced. As an application, we address the monotonicity problem of K-coherence (a quantifier for coherence in terms of Wigner-Yanase skew information) [Girolami, Phys. Rev. Lett. 113, 170401 (2014), 10.1103/PhysRevLett.113.170401], which is introduced to realize a measure of coherence as axiomatized by Baumgratz, Cramer, and Plenio [Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. Since K-coherence fails to meet the necessary requirement of monotonicity under incoherent operations, it is desirable to remedy this monotonicity problem. We show that if we modify the original measure by taking skew information with respect to the spectral decomposition of an observable, rather than the observable itself, as a measure of coherence, then the problem disappears, and the resultant coherence measure satisfies monotonicity. Some concrete examples are discussed and related open issues are indicated.
On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility
Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini
2008-01-01
We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.
Monotonous property of non-oscillations of the damped Duffing's equation
International Nuclear Information System (INIS)
Feng Zhaosheng
2006-01-01
In this paper, we present a qualitative study of the damped Duffing equation by means of the qualitative theory of planar systems. Under certain parametric conditions, the monotonous property of the bounded non-oscillations is obtained. Explicit exact solutions are obtained by a direct method, and an application of this approach to a reaction-diffusion equation is presented.
A note on profit maximization and monotonicity for inbound call centers
Koole, G.M.; Pot, S.A.
2011-01-01
We consider an inbound call center with a fixed reward per call and communication and agent costs. By controlling the number of lines and the number of agents, we can maximize the profit. Abandonments are included in our performance model. Monotonicity results for the maximization problem are
DEFF Research Database (Denmark)
Garde, Henrik
2018-01-01
. For a fair comparison, exact matrix characterizations are used when probing the monotonicity relations to avoid errors from numerical solution to PDEs and numerical integration. Using a special factorization of the Neumann-to-Dirichlet map also makes the non-linear method as fast as the linear method...
ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2014-01-01
Vol. 50, No. 5 (2014), pp. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords: Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf
Monotonic Set-Extended Prefix Rewriting and Verification of Recursive Ping-Pong Protocols
DEFF Research Database (Denmark)
Delzanno, Giorgio; Esparza, Javier; Srba, Jiri
2006-01-01
of messages) some verification problems become decidable. In particular we give an algorithm to decide control state reachability, a problem related to security properties like secrecy and authenticity. The proof is via a reduction to a new prefix rewriting model called Monotonic Set-extended Prefix rewriting...
A note on monotone solutions for a nonconvex second-order functional differential inclusion
Directory of Open Access Journals (Sweden)
Aurelian Cernea
2011-12-01
Full Text Available The existence of monotone solutions for a second-order functional differential inclusion with Carathéodory perturbation is obtained in the case when the multifunction that defines the inclusion is upper semicontinuous, compact valued, and contained in the Fréchet subdifferential of a φ-convex function of order two.
Directory of Open Access Journals (Sweden)
Boubakari Ibrahimou
2013-01-01
maximal monotone with and . Using the topological degree theory developed by Kartsatos and Quarcoo we study the eigenvalue problem where the operator is a single-valued of class . The existence of continuous branches of eigenvectors of infinite length then could be easily extended to the case where the operator is multivalued and is investigated.
Characteristic of monotonicity of Orlicz function spaces equipped with the Orlicz norm
Czech Academy of Sciences Publication Activity Database
Foralewski, P.; Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav
2013-01-01
Vol. 53, No. 2 (2013), pp. 421-432 ISSN 0373-8299 R&D Projects: GA ČR GAP201/10/1920 Institutional support: RVO:67985840 Keywords: Orlicz space * Köthe space * characteristic of monotonicity Subject RIV: BA - General Mathematics
Non-monotonic reasoning in conceptual modeling and ontology design: A proposal
CSIR Research Space (South Africa)
Casini, G
2013-06-01
Full Text Available 2nd International Workshop on Ontologies and Conceptual Modeling (Onto.Com 2013), Valencia, Spain, 17-21 June 2013. Non-monotonic reasoning in conceptual modeling and ontology design: A proposal. Giovanni Casini and Alessandro Mosca.
CFD simulation of simultaneous monotonic cooling and surface heat transfer coefficient
International Nuclear Information System (INIS)
Mihálka, Peter; Matiašovský, Peter
2016-01-01
The monotonic heating regime method for determining thermal diffusivity is based on the analysis of an unsteady-state (stabilised) thermal process characterised by independence of the space-time temperature distribution from the initial conditions. In the first kind of monotonic regime, a sample of simple geometry is heated/cooled at constant ambient temperature. Determining the thermal diffusivity requires measuring the rate of temperature change and simultaneously determining the first eigenvalue. According to the characteristic equation, the first eigenvalue is a function of the Biot number, defined by the surface heat transfer coefficient and the thermal conductivity of the analysed material. Knowing the surface heat transfer coefficient and the first eigenvalue, the thermal conductivity can be determined. The surface heat transfer coefficient during the monotonic regime can be determined by continuous measurement of the long-wave radiation heat flow and photoelectric measurement of the air refractive index gradient in a boundary layer. A CFD simulation of the cooling process was carried out to analyse the local convective and radiative heat transfer coefficients in more detail. The influence of ambient air flow was analysed. The obtained eigenvalues and corresponding surface heat transfer coefficient values enable the thermal conductivity of the analysed specimen to be determined together with its thermal diffusivity during a monotonic heating regime.
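For a plate-shaped specimen, the characteristic equation mentioned above is μ·tan μ = Bi, whose first root μ₁ lies in (0, π/2); the regular-regime identity a = m·L²/μ₁² then links the measured cooling rate m and half-thickness L to the thermal diffusivity a. The sketch below assumes plate geometry; the function names and the specific relation used are illustrative, not taken from this record.

```python
import math

def first_eigenvalue(biot, tol=1e-12):
    """First root of the plate characteristic equation mu*tan(mu) = Bi.
    The root lies in (0, pi/2), where the left side is increasing,
    so plain bisection is safe."""
    lo, hi = 1e-12, math.pi / 2.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * math.tan(mid) > biot:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def thermal_diffusivity(cooling_rate, half_thickness, biot):
    """Regular-regime relation a = m * L**2 / mu1**2 for a plate of
    half-thickness L cooled at rate m (illustrative form)."""
    mu1 = first_eigenvalue(biot)
    return cooling_rate * half_thickness ** 2 / mu1 ** 2
```

The classical tabulated value μ₁ ≈ 0.8603 for Bi = 1 provides a quick check, and μ₁ grows monotonically from 0 toward π/2 as Bi increases.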
Alternans by non-monotonic conduction velocity restitution, bistability and memory
International Nuclear Information System (INIS)
Kim, Tae Yun; Hong, Jin Hee; Heo, Ryoun; Lee, Kyoung J
2013-01-01
Conduction velocity (CV) restitution is a key property that characterizes any medium supporting traveling waves. It reflects not only the dynamics of the individual constituents but also the coupling mechanism that mediates their interaction. Recent studies have suggested that cardiac tissues, which have a non-monotonic CV-restitution property, can support alternans, a period-2 oscillatory response of periodically paced cardiac tissue. This study finds that single-hump, non-monotonic, CV-restitution curves are a common feature of in vitro cultures of rat cardiac cells. We also find that the Fenton–Karma model, one of the well-established mathematical models of cardiac tissue, supports a very similar non-monotonic CV restitution in a physiologically relevant parameter regime. Surprisingly, the mathematical model as well as the cell cultures support bistability and show cardiac memory that tends to work against the generation of an alternans. Bistability was realized by adopting two different stimulation protocols, ‘S1S2’, which produces a period-1 wave train, and ‘alternans-pacing’, which favors a concordant alternans. Thus, we conclude that the single-hump non-monotonicity in the CV-restitution curve is not sufficient to guarantee a cardiac alternans, since cardiac memory interferes and the way the system is paced matters.
On the Monotonicity and Log-Convexity of a Four-Parameter Homogeneous Mean
Directory of Open Access Journals (Sweden)
Yang Zhen-Hang
2008-01-01
Full Text Available Abstract A four-parameter homogeneous mean is defined by another approach. The criterion for its monotonicity and logarithmic convexity is presented, and three refined chains of inequalities for two-parameter mean values are deduced, which contain many new and classical inequalities for means.
On utilization bounds for a periodic resource under rate monotonic scheduling
Renssen, van A.M.; Geuns, S.J.; Hausmans, J.P.H.M.; Poncin, W.; Bril, R.J.
2009-01-01
This paper revisits utilization bounds for a periodic resource under the rate monotonic (RM) scheduling algorithm. We show that the existing utilization bound, as presented in [8, 9], is optimistic. We subsequently show that by viewing the unavailability of the periodic resource as a deferrable
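For orientation, the classical dedicated-processor counterpart of the bounds this note revisits is the Liu-Layland test: n periodic tasks are rate-monotonic schedulable if their total utilization does not exceed n(2^(1/n) − 1). The paper's point is that the analogous bound for a periodic (partially available) resource must be derived more carefully; the sketch below covers only the classical sufficient test, with illustrative names.

```python
def rm_utilization_bound(n):
    """Liu-Layland bound: n periodic tasks on a dedicated processor are
    rate-monotonic schedulable if total utilization <= n*(2**(1/n) - 1)."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def rm_schedulable(tasks):
    """Sufficient (not necessary) test; tasks = [(exec_time, period), ...]."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# Three tasks with total utilization 0.55 pass the n = 3 bound (~0.780).
ok = rm_schedulable([(1.0, 4.0), (1.0, 5.0), (1.0, 10.0)])
```

The bound decreases monotonically toward ln 2 ≈ 0.693 as n grows, which is why the test is only sufficient: task sets above the bound may still be schedulable.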
Directory of Open Access Journals (Sweden)
San-Yang Liu
2014-01-01
Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
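The hyperplane projection step of Solodov and Svaiter referenced above combines a backtracking search along a descent direction with a projection of the iterate onto the hyperplane through the trial point. A minimal sketch with the plain residual direction d = −F(x) follows (the methods in the record combine this with conjugate gradient directions; the names, constants, and test problem here are illustrative):

```python
import numpy as np

def hyperplane_projection_solve(F, x0, sigma=1e-4, beta=0.5,
                                tol=1e-8, max_iter=1000):
    """Solodov-Svaiter hyperplane projection method for monotone F(x) = 0,
    using the plain residual direction d = -F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        d = -fx
        # backtrack until -F(x + a*d) . d >= sigma * a * ||d||^2
        alpha = 1.0
        while -(F(x + alpha * d) @ d) < sigma * alpha * (d @ d):
            alpha *= beta
        z = x + alpha * d
        fz = F(z)
        # project x onto the hyperplane {y : F(z) . (y - z) = 0},
        # which separates the iterate from the solution set
        x = x - ((fz @ (x - z)) / (fz @ fz)) * fz
    return x

# Componentwise monotone map F(x) = x + x^3 with unique root at 0.
root = hyperplane_projection_solve(lambda v: v + v ** 3, [2.0, -1.0])
```

No derivatives of F are needed, which is why this scheme extends naturally to large-scale nonsmooth monotone equations as described in the abstract.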
A Min-max Relation for Monotone Path Systems in Simple Regions
DEFF Research Database (Denmark)
Cameron, Kathleen
1996-01-01
A monotone path system (MPS) is a finite set of pairwise disjoint paths (polygonal arcs) in the plane such that every horizontal line intersects each of the paths in at most one point. We consider a simple polygon in the xy-plane which bounds the simple polygonal (closed) region D. Let T and B be two...
Monotonicity of the von Neumann entropy expressed as a function of Rényi entropies
Fannes, Mark
2013-01-01
The von Neumann entropy of a density matrix of dimension d, expressed in terms of the first d-1 integer order Rényi entropies, is monotonically increasing in Rényi entropies of even order and decreasing in those of odd order.
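The quantities involved are easy to compute for a given spectrum: S_α = (1/(1−α)) log Σᵢ pᵢ^α, with the von Neumann (Shannon) entropy as the α → 1 limit, and S_α non-increasing in α for any fixed spectrum. The sketch below only illustrates these definitions numerically; it is not the paper's monotonicity argument, and the example spectrum is arbitrary.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Order-alpha Renyi entropy of a probability spectrum p (natural log)."""
    p = np.asarray(p, dtype=float)
    if alpha == 1.0:  # von Neumann / Shannon limit
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# For any spectrum, S_alpha is non-increasing in alpha:
spectrum = [0.5, 0.3, 0.2]
s_half, s_vn, s_two = (renyi_entropy(spectrum, a) for a in (0.5, 1.0, 2.0))
```

On a maximally mixed spectrum, all orders coincide at log d, another quick consistency check.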
Suzuki, K.; Takayama, T.; Fujii, T.; Yamamoto, K.
2014-12-01
Many geologists have discussed slope instability caused by gas-hydrate dissociation, which could create mobile fluid in the pore space of sediments. However, the physical property changes caused by gas-hydrate dissociation would not be so simple. Moreover, natural gas production from a gas-hydrate reservoir by the depressurization method is a completely different phenomenon from dissociation processes in nature, because it does not cause excess pore pressure, even though gas and water are present. Hence, in all cases, the physical properties of gas-hydrate-bearing sediments and of their cover sediments are quite important for considering these phenomena and for simulating them during gas-hydrate dissociation periods. The Daini-Atsumi knoll, the first offshore gas-production test site for gas hydrate, is partially covered by slumps. Fortunately, one of them was penetrated by both a Logging-While-Drilling (LWD) hole and a pressure-coring hole. From the LWD data analyses and core analyses, we have determined the density structure of the sediments from the seafloor to the Bottom Simulating Reflector (BSR). The results are as follows. ・The semi-confined slump showed relatively high density. This would be explained by over-consolidation resulting from layer-parallel compression caused by slumping. ・The bottom sequence of the slump has relatively high-density zones. This would be explained by shear-induced compaction along the slide plane. ・Density below the slump tends to increase with depth. It is reasonable that sediments below the slump deposit have been compacting under normal consolidation. ・Several kinds of log data for estimating the physical properties of the gas-hydrate reservoir sediments have been obtained. These will be useful for geological model construction from the seafloor to the BSR. We can use these results to build geological models not only for slope instability at slumping, but also for slope stability during the depressurized period of gas production.
International Nuclear Information System (INIS)
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-01-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
DEFF Research Database (Denmark)
Foglia, Aligi; Gottardi, Guido; Govoni, Laura
2015-01-01
The response of bucket foundations on sand subjected to planar monotonic and cyclic loading is investigated in the paper. Thirteen monotonic and cyclic laboratory tests on a skirted footing model having a 0.3 m diameter and embedment ratio equal to 1 are presented. The loading regime reproduces t...
Energy Technology Data Exchange (ETDEWEB)
Tedjasari, R S; Lubis, E [Radioactive-Waste Management Technology Centre, National Atomic Energy Agency of Indonesia(Indonesia)
1996-07-01
Thyroid dose estimation for Radioisotope Production Centre workers using WBC and calculation based on the I-131 concentration in the working area has been carried out. The aim of this research is to establish the relation between the WBC results and the calculation using the I-131 concentration in the working area. The results indicate differences in the range of 3.2% to 53.2%. These differences arise because the parameters which influence the calculation are not accurate. The results also indicate that dose estimation using WBC is relatively better and more accurate, but requires certain information about the time of intake.
Thermal effects on the enhanced ductility in non-monotonic uniaxial tension of DP780 steel sheet
Majidi, Omid; Barlat, Frederic; Korkolis, Yannis P.; Fu, Jiawei; Lee, Myoung-Gyu
2016-11-01
To understand the material behavior during non-monotonic loading, uniaxial tension tests were conducted in three modes, namely, monotonic loading, loading with periodic relaxation, and periodic loading-unloading-reloading, at different strain rates (0.001/s to 0.01/s). In this study, the temperature gradient developing during each test and its contribution to increasing the apparent ductility of DP780 steel sheets were considered. In order to assess the influence of temperature, isothermal uniaxial tension tests were also performed at three temperatures (298 K, 313 K and 328 K (25 °C, 40 °C and 55 °C)). A digital image correlation system coupled with infrared thermography was used in the experiments. The results show that the non-monotonic loading modes increased the apparent ductility of the specimens. It was observed that, compared with monotonic loading, the temperature gradient became more uniform when a non-monotonic loading was applied.
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2014-01-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results s...
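The record does not reproduce the new equation, but a classical correlation built from exactly the same inputs (normal boiling temperature, critical temperature, critical pressure) is Riedel's, shown here for orientation; it is typically accurate to within a few percent. This is not the authors' model, and the water data below are standard literature values used only as a check.

```python
from math import log

R = 8.314  # gas constant, J mol^-1 K^-1

def riedel_hvap(tb, tc, pc_bar):
    """Riedel correlation for the enthalpy of vaporization at the normal
    boiling point, in J/mol; tb and tc in K, pc_bar in bar."""
    return 1.093 * R * tb * (log(pc_bar) - 1.013) / (0.930 - tb / tc)

# Water (Tb = 373.15 K, Tc = 647.1 K, Pc = 220.64 bar): about 42 kJ/mol
# versus the measured ~40.7 kJ/mol, i.e. within a few percent.
hv_water = riedel_hvap(373.15, 647.1, 220.64) / 1000.0  # kJ/mol
```

Correlations of this family work because T_b/T_c and ln P_c together encode most of the information a corresponding-states estimate of the vaporization enthalpy needs.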
Carpenter, Donald A.
2008-01-01
Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2012-01-01
uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known the resulting...... are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented
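One practical consequence of the monotonic/non-monotonic distinction in this record: the interval image of a monotonic function is obtained exactly from its two endpoints, while a non-monotonic function requires searching the interior of the interval. A small sketch under that assumption (function names are illustrative, not from the paper):

```python
def interval_image_monotone(f, lo, hi):
    """Exact image of [lo, hi] under a monotonic f: endpoints suffice."""
    a, b = f(lo), f(hi)
    return (a, b) if a <= b else (b, a)

def interval_image_grid(f, lo, hi, steps=1000):
    """For non-monotonic f: dense sampling gives an inner estimate
    that converges to the true image as steps grows."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [f(x) for x in xs]
    return (min(ys), max(ys))
```

For example, x³ on [−2, 3] maps exactly to [−8, 27] from endpoints alone, whereas x² on [−1, 2] has its minimum at the interior point 0, which endpoint evaluation would miss.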
Monotonic and Cyclic Behavior of DIN 34CrNiMo6 Tempered Alloy Steel
Directory of Open Access Journals (Sweden)
Ricardo Branco
2016-04-01
Full Text Available This paper aims at studying the monotonic and cyclic plastic deformation behavior of DIN 34CrNiMo6 high strength steel. Monotonic and low-cycle fatigue tests are conducted in ambient air, at room temperature, using standard 8-mm diameter specimens. The former tests are carried out under position control with constant displacement rate. The latter are performed under fully-reversed strain-controlled conditions, using the single-step test method, with strain amplitudes lying between ±0.4% and ±2.0%. After the tests, the fracture surfaces are examined by scanning electron microscopy in order to characterize the surface morphologies and identify the main failure mechanisms. Regardless of the strain amplitude, a softening behavior was observed throughout the entire life. Total strain energy density, defined as the sum of both tensile elastic and plastic strain energies, was revealed to be an adequate fatigue damage parameter for short and long lives.
Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method
Higueras, Inmaculada
2018-02-14
Perturbed Runge–Kutta methods (also referred to as downwind Runge–Kutta methods) can guarantee monotonicity preservation under larger step sizes relative to their traditional Runge–Kutta counterparts. In this paper we study the question of how to optimally perturb a given method in order to increase the radius of absolute monotonicity (a.m.). We prove that for methods with zero radius of a.m., it is always possible to give a perturbation with positive radius. We first study methods for linear problems and then methods for nonlinear problems. In each case, we prove upper bounds on the radius of a.m., and provide algorithms to compute optimal perturbations. We also provide optimal perturbations for many known methods.
Directory of Open Access Journals (Sweden)
SAEID ZAHEDI VAHID
2013-08-01
Full Text Available Steel extended end-plate connections are commonly used in rigid steel frames due to their good ductility and capacity for energy dissipation. This connection system is recommended for wide use in special moment-resisting frames subjected to vertical monotonic and cyclic loads. However, improper design of a beam-to-column connection can lead to collapses and fatalities. Therefore, extensive study of beam-to-column connection design must be carried out, particularly when the connection is exposed to cyclic loading. This paper presents a Finite Element Analysis (FEA) approach as an alternative method for studying the behavior of such connections. The performance of castellated beam-column end-plate connections up to failure was investigated under monotonic and cyclic loading in the vertical and horizontal directions. The study was carried out through a finite element analysis using the multi-purpose software package LUSAS. The effects of the geometry and location of the openings were also investigated.
Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method
Higueras, Inmaculada; Ketcheson, David I.; Kocsis, Tihamér A.
2018-01-01
Directory of Open Access Journals (Sweden)
Jian Ding
2014-01-01
Full Text Available This paper addresses the problem of P-type iterative learning control for a class of multiple-input multiple-output linear discrete-time systems, with the aim of developing a robust, monotonically convergent control law design over a finite frequency range. It is shown that the 2-D iterative learning control process can be represented as a 1-D state-space model regardless of relative degree. By applying the generalized Kalman-Yakubovich-Popov lemma, the monotonic convergence conditions can be described with the help of linear matrix inequality techniques, and formulas for the control gain matrix design can be developed. An extension to robust control law design for systems with structured and polytopic-type uncertainties is also considered. Two numerical examples are provided to validate the feasibility and effectiveness of the proposed method.
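The P-type update itself is a single line, u_{k+1}(t) = u_k(t) + Γ e_k(t), applied trial after trial; for a relative-degree-one system, convergence requires |1 − Γcb| < 1. A scalar sketch follows: the system, gains, and function names are all illustrative, and it demonstrates only plain P-type convergence, not the paper's LMI-based robust design.

```python
import numpy as np

def p_type_ilc(a, b, c, u0, y_ref, gamma, trials=100):
    """P-type ILC for a scalar system x[t+1] = a*x[t] + b*u[t], with the
    output attributed to u[t] taken one step later (relative degree one).
    Trial update: u_{k+1}[t] = u_k[t] + gamma * e_k[t]."""
    y_ref = np.asarray(y_ref, dtype=float)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(trials):
        x, y = 0.0, np.zeros_like(y_ref)
        for t in range(len(y_ref)):
            x = a * x + b * u[t]   # state after applying u[t]
            y[t] = c * x           # output attributed to u[t]
        e = y_ref - y
        u = u + gamma * e          # converges if |1 - gamma*c*b| < 1
    return u, float(np.max(np.abs(e)))
```

With a = 0.5, b = c = 1 and Γ = 0.8, the contraction factor is |1 − 0.8| = 0.2, so the tracking error collapses to machine precision within a modest number of trials.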
Non-monotonic relationships between emotional arousal and memory for color and location.
Boywitt, C Dennis
2015-01-01
Recent research points to the decreased diagnostic value of subjective retrieval experience for memory accuracy for emotional stimuli. While for neutral stimuli rich recollective experiences are associated with better context memory than merely familiar memories, this association appears questionable for emotional stimuli. The present research tested the implicit assumption that the effect of emotional arousal on memory is monotonic, that is, steadily increasing (or decreasing) with increasing arousal. In two experiments emotional arousal was manipulated in three steps using emotional pictures, and subjective retrieval experience as well as context memory were assessed. The results show an inverted U-shaped relationship between arousal and recognition memory, but for context memory and retrieval experience the relationship was more complex. For frame color, context memory decreased linearly, while for spatial location it followed the inverted U-shaped function. The complex, non-monotonic relationships between arousal and memory are discussed as possible explanations for earlier divergent findings.
Risk-Sensitive Control of Pure Jump Process on Countable Space with Near Monotone Cost
International Nuclear Information System (INIS)
Suresh Kumar, K.; Pal, Chandan
2013-01-01
In this article, we study a risk-sensitive control problem with a controlled continuous-time pure jump process on a countable space as state dynamics. We prove a multiplicative dynamic programming principle, and elliptic and parabolic Harnack's inequalities. Using the multiplicative dynamic programming principle and the Harnack's inequalities, we prove the existence and a characterization of the optimal risk-sensitive control under the near monotone condition.
Asian Option Pricing with Monotonous Transaction Costs under Fractional Brownian Motion
Directory of Open Access Journals (Sweden)
Di Pan
2013-01-01
Full Text Available Geometric-average Asian option pricing model with monotonous transaction cost rate under fractional Brownian motion was established. The method of partial differential equations was used to solve this model and the analytical expressions of the Asian option value were obtained. The numerical experiments show that Hurst exponent of the fractional Brownian motion and transaction cost rate have a significant impact on the option value.
Diagnosis of constant faults in iteration-free circuits over monotone basis
Alrawaf, Saad Abdullah; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2014-01-01
We show that for each iteration-free combinatorial circuit S over a basis B containing only monotone Boolean functions with at most five variables, there exists a decision tree for diagnosis of constant faults on inputs of gates with depth at most 7L(S), where L(S) is the number of gates in S. © 2013 Elsevier B.V. All rights reserved.
Inelastic behavior of materials and structures under monotonic and cyclic loading
Brünig, Michael
2015-01-01
This book presents studies on the inelastic behavior of materials and structures under monotonic and cyclic loads. It focuses on the description of new effects like purely thermal cycles or cases of non-trivial damages. The various models are based on different approaches and methods and scaling aspects are taken into account. In addition to purely phenomenological models, the book also presents mechanisms-based approaches. It includes contributions written by leading authors from a host of different countries.
Reduction theorems for weighted integral inequalities on the cone of monotone functions
Czech Academy of Sciences Publication Activity Database
Gogatishvili, Amiran; Stepanov, V.D.
2013-01-01
Roč. 68, č. 4 (2013), s. 597-664 ISSN 0036-0279 R&D Projects: GA ČR GA201/08/0383; GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords : weighted Lebesgue space * cone of monotone functions * duality principle Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2013 http://iopscience.iop.org/0036-0279/68/4/597
ASPMT(QS): Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories
Wałęga, Przemysław Andrzej; Bhatt, Mehul; Schultz, Carl
2015-01-01
The systematic modelling of dynamic spatial systems [9] is a key requirement in a wide range of application areas such as commonsense cognitive robotics, computer-aided architecture design, and dynamic geographic information systems. We present ASPMT(QS), a novel approach and fully-implemented prototype for non-monotonic spatial reasoning, a crucial requirement within dynamic spatial systems, based on Answer Set Programming Modulo Theories (ASPMT). ASPMT(QS) consists of a (qualitative) s...
Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories
Wałęga, Przemysław Andrzej; Schultz, Carl; Bhatt, Mehul
2016-01-01
The systematic modelling of dynamic spatial systems is a key requirement in a wide range of application areas such as commonsense cognitive robotics, computer-aided architecture design, and dynamic geographic information systems. We present ASPMT(QS), a novel approach and fully-implemented prototype for non-monotonic spatial reasoning, a crucial requirement within dynamic spatial systems, based on Answer Set Programming Modulo Theories (ASPMT). ASPMT(QS) consists of a (qualitative) spatial re...
The non-monotonic shear-thinning flow of two strongly cohesive concentrated suspensions
Buscall, Richard; Kusuma, Tiara E.; Stickland, Anthony D.; Rubasingha, Sayuri; Scales, Peter J.; Teo, Hui-En; Worrall, Graham L.
2014-01-01
The behaviour in simple shear of two concentrated and strongly cohesive mineral suspensions showing highly non-monotonic flow curves is described. Two rheometric test modes were employed, controlled stress and controlled shear-rate. In controlled stress mode the materials showed runaway flow above a yield stress, which, for one of the suspensions, varied substantially in value and seemingly at random from one run to the next, such that the up flow-curve appeared to be quite irreproducible. Th...
Diagnosis of constant faults in iteration-free circuits over monotone basis
Alrawaf, Saad Abdullah
2014-03-01
We show that for each iteration-free combinatorial circuit S over a basis B containing only monotone Boolean functions with at most five variables, there exists a decision tree for diagnosis of constant faults on inputs of gates with depth at most 7L(S), where L(S) is the number of gates in S. © 2013 Elsevier B.V. All rights reserved.
Lagarde, Fabien; Beausoleil, Claire; Belcher, Scott M; Belzunces, Luc P; Emond, Claude; Guerbet, Michel; Rousselle, Christophe
2015-01-01
International audience; Experimental studies investigating the effects of endocrine disruptors frequently identify potential unconventional dose-response relationships called non-monotonic dose-response (NMDR) relationships. Standardized approaches for investigating NMDR relationships in a risk assessment context are missing. The aim of this work was to develop criteria for assessing the strength of NMDR relationships. A literature search was conducted to identify published studies that repor...
Isochronous relaxation curves for type 304 stainless steel after monotonic and cyclic strain
International Nuclear Information System (INIS)
Swindeman, R.W.
1978-01-01
Relaxation tests to 100 hr were performed on type 304 stainless steel in the temperature range 480 to 650 °C and were used to develop isochronous relaxation curves. Behavior after monotonic and cyclic strain was compared. Relaxation differed only slightly as a consequence of the type of previous strain, provided that plastic flow preceded the relaxation period. We observed that the short-time relaxation behavior did not manifest strong heat-to-heat variation in creep strength.
Non-monotonic effect of growth temperature on carrier collection in SnS solar cells
International Nuclear Information System (INIS)
Chakraborty, R.; Steinmann, V.; Mangan, N. M.; Brandt, R. E.; Poindexter, J. R.; Jaramillo, R.; Mailoa, J. P.; Hartman, K.; Polizzotti, A.; Buonassisi, T.; Yang, C.; Gordon, R. G.
2015-01-01
We quantify the effects of growth temperature on material and device properties of thermally evaporated SnS thin-films and test structures. Grain size, Hall mobility, and majority-carrier concentration monotonically increase with growth temperature. However, the charge collection as measured by the long-wavelength contribution to short-circuit current exhibits a non-monotonic behavior: the collection decreases with increased growth temperature from 150 °C to 240 °C and then recovers at 285 °C. Fits to the experimental internal quantum efficiency using an opto-electronic model indicate that the non-monotonic behavior of charge-carrier collection can be explained by a transition from drift- to diffusion-assisted components of carrier collection. The results show a promising increase in the extracted minority-carrier diffusion length at the highest growth temperature of 285 °C. These findings illustrate how coupled mechanisms can affect early-stage device development, highlighting the critical role of direct materials property measurements and simulation.
International Nuclear Information System (INIS)
Kimberly, David A.; Salice, Christopher J.
2015-01-01
Generally, ecotoxicologists rely on short-term tests that assume populations to be static. Conversely, natural populations may be exposed to the same stressors for many generations, which can alter tolerance to the same (or other) stressors. The objective of this study was to improve our understanding of how multigenerational stressors alter life history traits and stressor tolerance. After continuously exposing Daphnia magna to cadmium for 120 days, we assessed life history traits and conducted a challenge at higher temperature and cadmium concentrations. Predictably, individuals exposed to cadmium showed an overall decrease in reproductive output compared to controls. Interestingly, control D. magna were the most tolerant to novel cadmium, followed by those exposed to high cadmium. Our data suggest that long-term exposure to cadmium alters tolerance traits in a non-monotonic way. Because we observed effects one generation removed from cadmium, transgenerational effects may be possible as a result of multigenerational exposure. - Highlights: • Daphnia magna exposed to cadmium for 120 days. • D. magna exposed to cadmium had decreased reproductive output. • Control D. magna were most cadmium tolerant to novel cadmium stress. • Long-term exposure to cadmium alters tolerance traits in a non-monotonic way. • Transgenerational effects observed as a result of multigenerational exposure. - Adverse effects of long-term cadmium exposure persist into cadmium-free conditions, as seen by non-monotonic responses when exposed to novel stress one generation removed.
International Nuclear Information System (INIS)
Zhou, Fang; Wei, Huajiang; Guo, Zhouyi; Ye, Xiangping; Hu, Kun; Wu, Guoyong; Yang, Hongqin; Xie, Shusen; He, Yonghong
2015-01-01
In this work, the potential use of nanoparticles as contrast agents by using spectral domain optical coherence tomography (SD-OCT) in liver tissue was demonstrated. Gold nanoparticles (average size of 25 and 70 nm), were studied in human normal and cancerous liver tissues in vitro, respectively. Each sample was monitored with SD-OCT functional imaging for 240 min. Continuous OCT monitoring showed that, after application of gold nanoparticles, the OCT signal intensities of normal liver and cancerous liver tissue both increase with time, and the larger nanoparticles tend to produce a greater signal enhancement in the same type of tissue. The results show that the values of attenuation coefficients have significant differences between normal liver tissue and cancerous liver tissue. In addition, 25 nm gold nanoparticles allow higher penetration depth than 70 nm gold nanoparticles in liver tissues. (paper)
Directory of Open Access Journals (Sweden)
Babak Mehmandoust
2014-03-01
Full Text Available The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K).
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2014-03-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).
Scaling laws for dislocation microstructures in monotonic and cyclic deformation of fcc metals
International Nuclear Information System (INIS)
Kubin, L.P.; Sauzay, M.
2011-01-01
This work reviews and critically discusses the current understanding of two scaling laws, which are ubiquitous in the modeling of monotonic plastic deformation in face-centered cubic metals. A compilation of the available data allows extending the domain of application of these scaling laws to cyclic deformation. The strengthening relation tells that the flow stress is proportional to the square root of the average dislocation density, whereas the similitude relation assumes that the flow stress is inversely proportional to the characteristic wavelength of dislocation patterns. The strengthening relation arises from short-range reactions of non-coplanar segments and applies all through the first three stages of the monotonic stress vs. strain curves. The value of the proportionality coefficient is calculated and simulated in good agreement with the bulk of experimental measurements published since the beginning of the 1960s. The physical origin of what is called similitude is not understood and the related coefficient is not predictable. Its value is determined from a review of the experimental literature. The generalization of these scaling laws to cyclic deformation is carried out on the basis of a large collection of experimental results on single and polycrystals of various materials and on different microstructures. Surprisingly, for persistent slip bands (PSBs), both the strengthening and similitude coefficients appear to be more than two times smaller than the corresponding monotonic values, whereas their ratio is the same as in monotonic deformation. The similitude relation is also checked in cell structures and in labyrinth structures. Under low cyclic stresses, the strengthening coefficient is found even lower than in PSBs. A tentative explanation is proposed for the differences observed between cyclic and monotonic deformation. Finally, the influence of cross-slip on the temperature dependence of the saturation stress of PSBs is discussed in some detail.
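The strengthening relation described above is the Taylor relation, tau = alpha * mu * b * sqrt(rho). A minimal numeric illustration, using assumed representative values for copper (the specific numbers are not taken from the review):

```python
import math

# Taylor strengthening relation: flow stress proportional to the square
# root of dislocation density.  All constants below are assumed,
# order-of-magnitude values for copper; illustration only.
alpha = 0.35            # strengthening coefficient (dimensionless)
mu = 42e9               # shear modulus, Pa
b = 0.256e-9            # Burgers vector magnitude, m
rhos = [1e12, 1e14, 1e16]            # dislocation densities, m^-2
taus = [alpha * mu * b * math.sqrt(rho) for rho in rhos]
for rho, tau in zip(rhos, taus):
    print(f"rho = {rho:.0e} m^-2  ->  tau = {tau / 1e6:6.1f} MPa")
```

A hundredfold increase in dislocation density raises the flow stress tenfold, the square-root signature the review discusses.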
Masuyama, Hiroyuki
2015-01-01
This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...
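The truncation scheme in the abstract can be illustrated on a scalar stand-in for a block-monotone chain: a monotone birth-death chain whose exact stationary distribution is geometric. The sketch below, an assumption-laden illustration (not the paper's bound), folds the probability mass that would leave {0,...,M} into the last retained state and measures the total variation distance to the exact stationary vector.

```python
import numpy as np

def truncated_stationary(M, p=0.3, q=0.6):
    """Stationary vector of the last-column-augmented truncation at level M."""
    P = np.zeros((M + 1, M + 1))
    P[0, 0], P[0, 1] = 1 - p, p
    for i in range(1, M):
        P[i, i - 1], P[i, i], P[i, i + 1] = q, 1 - p - q, p
    P[M, M - 1], P[M, M] = q, 1 - q          # up-jump mass folded into state M
    w, v = np.linalg.eig(P.T)                # stationary = left Perron vector
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

rho = 0.5                                    # p/q: exact chain is geometric
tvs = []
for M in (5, 10, 20):
    pi_hat = truncated_stationary(M)
    exact = (1 - rho) * rho ** np.arange(M + 1)
    tail = rho ** (M + 1)                    # exact mass beyond the truncation
    tvs.append(0.5 * (np.abs(pi_hat - exact).sum() + tail))
    print(f"M = {M:2d}  TV distance = {tvs[-1]:.2e}")
```

As the truncation level grows, the total variation distance shrinks geometrically, the quantity the paper's upper bound controls.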
Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.
2003-01-01
The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian systems by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
Voorspoels, Wouter; Navarro, Daniel J; Perfors, Amy; Ransom, Keith; Storms, Gert
2015-09-01
A robust finding in category-based induction tasks is for positive observations to raise the willingness to generalize to other categories while negative observations lower the willingness to generalize. This pattern is referred to as monotonic generalization. Across three experiments we find systematic non-monotonicity effects, in which negative observations raise the willingness to generalize. Experiments 1 and 2 show that this effect emerges in hierarchically structured domains when a negative observation from a different category is added to a positive observation. They also demonstrate that this is related to a specific kind of shift in the reasoner's hypothesis space. Experiment 3 shows that the effect depends on the assumptions that the reasoner makes about how inductive arguments are constructed. Non-monotonic reasoning occurs when people believe the facts were put together by a helpful communicator, but monotonicity is restored when they believe the observations were sampled randomly from the environment. Copyright © 2015 Elsevier Inc. All rights reserved.
Lv Yu-Pei; Sun Tian-Chuan; Chu Yu-Ming
2011-01-01
Abstract We prove that the function F_{α,β}(x) = x^α Γ^β(x)/Γ(βx) is strictly logarithmically completely monotonic on (0, ∞) if and only if (α, β) ∈ {(α, β) : β > 0, β ≥ 2α + 1, β ≥ α + 1} \ {(α, β) : α = 0, β = 1}, and that [F_{α,β}(x)]^{-1} is strictly logarithmically completely monotonic on (0, ∞) if and only if (α, β ...
Some completely monotonic properties for the $(p,q )$-gamma function
Krasniqi, Valmir; Merovci, Faton
2014-01-01
We define the $\Gamma_{p,q}$ function, a generalization of the $\Gamma$ function, and the $\psi_{p,q}$-analogue of the psi function as the log derivative of $\Gamma_{p,q}$. For the $\Gamma_{p,q}$ function, some properties related to convexity, log-convexity and completely monotonic functions are given. Some properties of the $\psi_{p,q}$-analogue of the $\psi$ function are also established. As an application, when $p\to \infty, q\to 1$, we obtain all results of \cite{Valmir1} and \cite{SHA}.
Renormalization in charged colloids: non-monotonic behaviour with the surface charge
International Nuclear Information System (INIS)
Haro-Perez, C; Quesada-Perez, M; Callejas-Fernandez, J; Schurtenberger, P; Hidalgo-Alvarez, R
2006-01-01
The static structure factor S(q) is measured for a set of deionized latex dispersions with different numbers of ionizable surface groups per particle and similar diameters. For a given volume fraction, the height of the main peak of S(q), which is a direct measure of the spatial ordering of latex particles, does not increase monotonically with the number of ionizable groups. This behaviour cannot be described using the classical renormalization scheme based on the cell model. We analyse our experimental data using a renormalization model based on the jellium approximation, which predicts the weakening of the spatial order for moderate and large particle charges. (letter to the editor)
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
International Nuclear Information System (INIS)
Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George
2012-01-01
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
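A standard family of differentiable approximations that converges monotonically to the (nonsmooth) minimum is the log-sum-exp "soft minimum". The sketch below illustrates the kind of convergent approximation the abstract describes; it is not necessarily the authors' exact construction.

```python
import math

def soft_min(xs, p):
    """Differentiable under-approximation of min(xs); increases to min as p grows."""
    m = min(xs)                          # shift for numerical stability
    return m - math.log(sum(math.exp(-p * (x - m)) for x in xs)) / p

xs = [1.0, 2.0, 4.0]
approx = [soft_min(xs, p) for p in (1, 2, 4, 8, 16, 32)]
print(approx)    # increases toward min(xs) = 1.0 from below
```

Since the sum inside the logarithm is at least 1 and decreases in p, the approximation always sits below the true minimum and rises monotonically toward it, exactly the convergence pattern exploited in the paper's sufficient conditions.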
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Energy Technology Data Exchange (ETDEWEB)
Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)
2012-12-15
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
Boski, Marcin; Paszke, Wojciech
2015-11-01
This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to a limited frequency range design specification. A new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
Monotonicity Conditions for Multirate and Partitioned Explicit Runge-Kutta Schemes
Hundsdorfer, Willem
2013-01-01
Multirate schemes for conservation laws or convection-dominated problems seem to come in two flavors: schemes that are locally inconsistent, and schemes that lack mass-conservation. In this paper these two defects are discussed for one-dimensional conservation laws. Particular attention will be given to monotonicity properties of the multirate schemes, such as maximum principles and the total variation diminishing (TVD) property. The study of these properties will be done within the framework of partitioned Runge-Kutta methods. It will also be seen that the incompatibility of consistency and mass-conservation holds for ‘genuine’ multirate schemes, but not for general partitioned methods.
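The TVD property discussed in the abstract can be checked directly on the simplest single-rate case: first-order upwind advection on a periodic grid under a CFL number below one. This is a minimal sketch of the monotonicity property itself, not one of the paper's multirate schemes.

```python
import numpy as np

def total_variation(u):
    """Total variation of a periodic grid function."""
    return float(np.sum(np.abs(np.diff(u))) + abs(u[0] - u[-1]))

nx, a, cfl = 100, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a                 # CFL <= 1 guarantees the TVD property
x = np.arange(nx) * dx
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # discontinuous initial profile
tvs = [total_variation(u)]
for _ in range(50):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference (a > 0)
    tvs.append(total_variation(u))
print(tvs[0], tvs[-1])            # total variation never increases
```

The initial square wave has total variation 2, and every upwind step leaves the total variation unchanged or smaller, which is the maximum-principle behavior the partitioned Runge-Kutta framework is designed to preserve.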
Uniform persistence and upper Lyapunov exponents for monotone skew-product semiflows
International Nuclear Information System (INIS)
Novo, Sylvia; Obaya, Rafael; Sanz, Ana M
2013-01-01
Several results of uniform persistence above and below a minimal set of an abstract monotone skew-product semiflow are obtained. When the minimal set has a continuous separation the results are given in terms of the principal spectrum. In the case that the semiflow is generated by the solutions of a family of non-autonomous differential equations of ordinary, delay or parabolic type, the former results are strongly improved. A method of calculus of the upper Lyapunov exponent of the minimal set is also determined. (paper)
Oscillation of Nonlinear Delay Differential Equation with Non-Monotone Arguments
Directory of Open Access Journals (Sweden)
Özkan Öcalan
2017-07-01
Full Text Available Consider the first-order nonlinear retarded differential equation $$x^{\prime}(t) + p(t)f\left(x\left(\tau(t)\right)\right) = 0, \quad t \geq t_{0}$$ where $p(t)$ and $\tau(t)$ are functions of positive real numbers such that $\tau(t) \leq t$ for $t \geq t_{0}$, and $\lim_{t\rightarrow\infty}\tau(t) = \infty$. Under the assumption that the retarded argument is non-monotone, new oscillation results are given. An example illustrating the result is also given.
Application of non-monotonic logic to failure diagnosis of nuclear power plant
International Nuclear Information System (INIS)
Takahashi, M.; Kitamura, M.; Sugiyama, K.
1989-01-01
A prototype diagnosis system for nuclear power plants was developed based on Truth Maintenance Systems (TMS) and Dempster-Shafer probability theory. The purpose of this paper is to establish a basic technique for a more intelligent, man-computer cooperative diagnosis system. The developed system is capable of carrying out diagnostic inference under imperfect observation conditions with the help of the proposed belief revision procedure with TMS and the systematic uncertainty treatment with Dempster-Shafer theory. The usefulness and potential of the present non-monotonic logic were demonstrated through simulation experiments.
Effect of meal glycemic load and caffeine consumption on prolonged monotonous driving performance.
Bragg, Christopher; Desbrow, Ben; Hall, Susan; Irwin, Christopher
2017-11-01
Monotonous driving involves low levels of stimulation and high levels of repetition and is essentially an exercise in sustained attention and vigilance. The aim of this study was to determine the effects of consuming a high or low glycemic load meal on prolonged monotonous driving performance. The effect of consuming caffeine with a high glycemic load meal was also examined. Ten healthy, non-diabetic participants (7 males, age 51 ± 7 yrs, mean ± SD) completed a repeated-measures investigation involving 3 experimental trials. On separate occasions, participants were provided one of three treatments prior to undertaking a 90 min computer-based simulated drive. The 3 treatment conditions involved consuming: (1) a low glycemic load meal + placebo capsules (LGL), (2) a high glycemic load meal + placebo capsules (HGL) and (3) a high glycemic load meal + caffeine capsules (3 mg kg⁻¹ body weight) (CAF). Measures of driving performance included lateral (standard deviation of lane position (SDLP), average lane position (AVLP), total number of lane crossings (LC)) and longitudinal (average speed (AVSP) and standard deviation of speed (SDSP)) vehicle control parameters. Blood glucose levels, plasma caffeine concentrations and subjective ratings of sleepiness, alertness, mood, hunger and simulator sickness were also collected throughout each trial. No difference in either lateral or longitudinal vehicle control parameters or subjective ratings was observed between the HGL and LGL treatments. A significant reduction in SDLP (0.36 ± 0.20 m vs 0.41 ± 0.19 m, p = 0.004) and LC (34.4 ± 31.4 vs 56.7 ± 31.5, p = 0.018) was observed in the CAF trial compared to the HGL trial. However, no differences in AVLP, AVSP and SDSP or subjective ratings were detected between these two trials (p > 0.05). Altering the glycemic load of a breakfast meal had no effect on measures of monotonous driving performance in non-diabetic adults. Individuals planning to undertake a prolonged monotonous drive following consumption of a
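The lateral-control metrics named in the abstract are straightforward to compute from a lane-position trace. The sketch below runs on a simulated, hypothetical trace (random walk, assumed lane geometry), not the study's data:

```python
import numpy as np

# SDLP (standard deviation of lane position) and lane crossings from a
# hypothetical lane-position trace; all parameters below are assumptions.
rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(0.0, 0.02, 5400))   # 90 min sampled at 1 Hz, metres
pos -= pos.mean()                              # centre the trace
half_lane = 1.8                                # assumed half lane width, m
sdlp = float(pos.std())                        # SDLP
outside = np.abs(pos) > half_lane
crossings = int(np.count_nonzero(outside[1:] & ~outside[:-1]))  # exits begun
print(f"SDLP = {sdlp:.2f} m, lane crossings = {crossings}")
```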
A note on monotonically star Lindelöf spaces | Song | Quaestiones ...
African Journals Online (AJOL)
A space X is monotonically star Lindelöf if one can assign to each open cover U a subspace s(U) ⊆ X, called a kernel, such that s(U) is a Lindelöf subset of X, st(s(U); U) = X, and if V refines U then s(U) ⊆ s(V), where st(s(U); U) = ∪ {U ∈ U : U ∩ s(U) ≠ ∅}. In this paper, we investigate the relationship between ...
DEFF Research Database (Denmark)
Gaihede, Michael Lyhne; Donghua, Liao; Gregersen, H.
2007-01-01
The quasi-static elastic properties of the tympanic membrane system can be described by the areal modulus of elasticity determined by a middle ear model. The response of the tympanic membrane to quasi-static pressure changes is determined by its elastic properties. Several clinical problems are r...... finite element analyses. In vivo estimates of Young's modulus in this study were a factor 2-3 smaller than previously found in vitro. No significant age-related differences were found in the elastic properties as expressed by the areal modulus....
Mejias, Jorge F; Payeur, Alexandre; Selin, Erik; Maler, Leonard; Longtin, André
2014-01-01
The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry, also known as "open-loop feedback", which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
Directory of Open Access Journals (Sweden)
Jorge F Mejias
2014-02-01
Full Text Available The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry, also known as 'open-loop feedback', which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
Non-monotonic wetting behavior of chitosan films induced by silver nanoparticles
Energy Technology Data Exchange (ETDEWEB)
Praxedes, A.P.P.; Webler, G.D.; Souza, S.T. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Ribeiro, A.S. [Instituto de Química e Biotecnologia, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Fonseca, E.J.S. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Oliveira, I.N. de, E-mail: italo@fis.ufal.br [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil)
2016-05-01
Highlights: • The addition of silver nanoparticles modifies the morphology of chitosan films. • Metallic nanoparticles can be used to control wetting properties of chitosan films. • The contact angle shows a non-monotonic dependence on the silver concentration. - Abstract: The present work is devoted to the study of structural and wetting properties of chitosan-based films containing silver nanoparticles. In particular, the effects of silver concentration on the morphology of chitosan films are characterized by different techniques, such as atomic force microscopy (AFM), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). By means of dynamic contact angle measurements, we study the modification of the surface properties of chitosan-based films due to the addition of silver nanoparticles. The results are analyzed in the light of the molecular-kinetic theory, which describes wetting phenomena in terms of the statistical dynamics of the displacement of liquid molecules on a solid substrate. Our results show that the wetting properties of chitosan-based films are highly sensitive to the fraction of silver nanoparticles, with the equilibrium contact angle exhibiting a non-monotonic behavior.
Psychophysiological responses to short-term cooling during a simulated monotonous driving task.
Schmidt, Elisabeth; Decke, Ralf; Rasshofer, Ralph; Bullinger, Angelika C
2017-07-01
For drivers on monotonous routes, cognitive fatigue causes discomfort and poses an important risk for traffic safety. Countermeasures against this type of fatigue are required, and thermal stimulation is one intervention method. Surprisingly, there are hardly any studies available that measure the effect of cooling while driving. Hence, to better understand the effect of short-term cooling on the perceived sleepiness of car drivers, a driving simulator study (n = 34) was conducted in which physiological and vehicular data from cooling and control conditions were compared. The evaluation of the study showed that cooling applied during a monotonous drive increased the alertness of the car driver. Sleepiness ratings were significantly lower for the cooling condition. Furthermore, significant pupillary and electrodermal responses were physiological indicators of increased sympathetic activation. In addition, better driving performance was observed during cooling. In conclusion, the study shows that cooling generally has a positive short-term effect on drivers' wakefulness, with a cooling period of 3 min delivering the best results. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Abgrall, Remi; Mezine, Mohamed
2003-01-01
The aim of this paper is to construct upwind residual distribution schemes for the time accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of the scalar advection equation and of the compressible Euler equations, both in two space dimensions. The first version of the scheme is shown to be, at least in its first order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method.
Pettersson, Per
2013-05-01
The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and for some cases the accuracy is higher for stochastic Galerkin provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady-state. © 2013 Elsevier B.V.
Hamid, Nubailah Abd; Ibrahim, Azmi; Adnan, Azlan; Ismail, Muhammad Hussain
2018-05-01
This paper discusses the superelastic behavior of the shape memory alloy NiTi when used as reinforcement in concrete beams. The ability of NiTi to recover and reduce permanent deformations of concrete beams was investigated: small-scale concrete beams with NiTi reinforcement and a conventionally reinforced concrete (RC) control beam, simply supported, were experimentally tested under monotonic loads. The aim is to highlight the ability of SMA bars to recover and reduce permanent deformations of concrete flexural members. The size of the control beam is 125 mm × 270 mm × 1000 mm, with three 12 mm diameter bars as main compression reinforcement, three 12 mm bars as tension or hanger bars, and 6 mm diameter bars at 100 mm c/c as shear reinforcement. In the hybrid beams, a minimal provision of 200 mm of 12.7 mm superelastic shape memory alloy bars was employed to replace the steel rebar at the critical region of the beam. In conclusion, combining the SMA bar with high-strength steel in the conventional reinforcement gave the SMA beam improved performance in terms of crack recovery and deformation. The use of NiTi hybridized with steel can therefore substantially diminish earthquake risk and reduce the associated post-earthquake costs.
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, making such data sets members of the "big data" category. Therefore, a fast and effective inversion method is needed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method; however, the conventional conjugate gradient method takes a long time to complete data processing, so a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Here, the inversion is formulated by incorporating regularizing constraints, and a non-monotone gradient-descent method is introduced to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data were used to demonstrate the practical value of this new fast inversion method.
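The record does not spell out the algorithm's details. As a sketch of the general idea of non-monotone gradient descent, here is a Barzilai-Borwein gradient method with a Grippo-style non-monotone line search (a trial step is accepted if it improves on the worst of the last few objective values, so the objective need not decrease at every iteration), applied to a small quadratic test problem. All parameter values and the test problem are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nonmonotone_gd(A, b, x0, max_iter=500, memory=10, c1=1e-4, tol=1e-8):
    """Barzilai-Borwein gradient descent with a Grippo-style non-monotone
    line search for f(x) = 0.5*x'Ax - b'x: a trial step is accepted if it
    improves on the *worst* of the last `memory` objective values."""
    f = lambda z: 0.5 * z @ A @ z - b @ z
    x = np.asarray(x0, dtype=float).copy()
    g = A @ x - b
    alpha = 1.0
    hist = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g
        t = alpha
        fmax = max(hist[-memory:])            # non-monotone reference value
        while f(x + t * d) > fmax + c1 * t * (g @ d):
            t *= 0.5                          # backtrack until acceptable
        x_new = x + t * d
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0  # BB1 step length
        x, g = x_new, g_new
        hist.append(f(x))
    return x, hist

A = np.array([[3.0, 1.0], [1.0, 2.0]])        # small SPD test problem
b = np.array([1.0, 1.0])
x_opt, hist = nonmonotone_gd(A, b, np.zeros(2))
```

The non-monotone acceptance rule is what lets the BB step lengths act at full strength; with a strictly monotone Armijo rule, many BB steps would be truncated and convergence would slow down.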
Pettersson, Per; Doostan, Alireza; Nordström, Jan
2013-01-01
The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and for some cases the accuracy is higher for stochastic Galerkin provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady-state. © 2013 Elsevier B.V.
Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps
Qin, Wen-Xin; Wang, Ya-Nan
2018-06-01
A non-exact monotone twist map φ̄_F is the composition of an exact monotone twist map φ̄ with generating function H and a vertical translation V_F with V_F(x, y) = (x, y - F). We show in this paper that for each ω ∈ R, there exists a critical value F_d(ω) ≥ 0, depending on H and ω, such that for 0 ≤ F ≤ F_d(ω) the non-exact twist map φ̄_F has an invariant Denjoy minimal set with irrational rotation number ω lying on a Lipschitz graph, or Birkhoff (p, q)-periodic orbits for rational ω = p/q. As in Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value F = F_d(ω), the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.
The behavior of welded joint in steel pipe members under monotonic and cyclic loading
International Nuclear Information System (INIS)
Chang, Kyong-Ho; Jang, Gab-Chul; Shin, Young-Eui; Han, Jung-Guen; Kim, Jong-Min
2006-01-01
Most steel pipe members are joined by welding. The residual stress and weld metal in a welded joint influence the behavior of steel pipes. Therefore, to accurately predict the behavior of steel pipes with a welded joint, the influence of welding residual stress and weld metal on the behavior of steel pipe must be investigated. In this paper, the residual stress of steel pipes with a welded joint was investigated using a three-dimensional non-steady heat conduction analysis and a three-dimensional thermal elastic-plastic analysis. Based on the results of monotonic and cyclic loading tests, a hysteresis model for weld metal was formulated; the model, proposed by the authors, was applied in a three-dimensional finite element analysis. To investigate the influence of a welded joint in steel pipes under monotonic and cyclic loading, a three-dimensional finite element analysis considering the proposed model and residual stress was carried out. The influence of a welded joint on the behavior of steel pipe members was investigated by comparing the analytical results for steel pipes with and without a welded joint.
Evaluation of the Monotonic Lagrangian Grid and Lat-Long Grid for Air Traffic Management
Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay
2011-01-01
The Air Traffic Monotonic Lagrangian Grid (ATMLG) is used to simulate a 24 hour period of air traffic flow in the National Airspace System (NAS). During this time period, there are 41,594 flights over the United States, and the flight plan information (departure and arrival airports and times, and waypoints along the way) is obtained from a Federal Aviation Administration (FAA) Enhanced Traffic Management System (ETMS) dataset. Two simulation procedures are tested and compared: one based on the Monotonic Lagrangian Grid (MLG), and the other based on the stationary Latitude-Longitude (Lat-Long) grid. Simulating one full day of air traffic over the United States required the following amounts of CPU time on a single processor of an SGI Altix: 88 s for the MLG method, and 163 s for the Lat-Long grid method. We present a discussion of the amount of CPU time required for each of the simulation processes (updating aircraft trajectories, sorting, conflict detection and resolution (CD&R), etc.), and show that the main advantage of the MLG method is that it is a general sorting algorithm that can sort on multiple properties. We discuss how many MLG neighbors must be considered in the separation assurance procedure in order to ensure a five-mile separation buffer between aircraft, and we investigate the effect of removing waypoints from aircraft trajectories. When aircraft choose their own trajectory, there are more flights with shorter duration times and fewer CD&R maneuvers, resulting in significant fuel savings.
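The sorting-based character of the MLG can be sketched in a minimal 2-D form (an illustration of the construction idea only; the ATMLG itself sorts on more properties and maintains the grid dynamically): sort objects by one coordinate, partition them into columns, and sort each column by the second coordinate. The grid indices are then monotone in position.

```python
import random

def build_mlg(points, n1, n2):
    """Minimal 2-D Monotonic Lagrangian Grid: grid[i][j] is monotone in x
    across columns (every point in column i lies at or left of every point
    in column i+1) and monotone in y within each column."""
    assert len(points) == n1 * n2
    by_x = sorted(points, key=lambda p: p[0])          # sort all points by x
    return [sorted(by_x[i * n2:(i + 1) * n2], key=lambda p: p[1])
            for i in range(n1)]                        # then each column by y

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(6 * 4)]
grid = build_mlg(pts, 6, 4)
```

Because spatially close objects end up with close grid indices, neighbor searches (such as the separation checks mentioned above) only need to scan a small index window instead of all objects.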
Monotonic and fatigue deformation of Ni--W directionally solidified eutectic
International Nuclear Information System (INIS)
Garmong, G.; Williams, J.C.
1975-01-01
Unlike many eutectic composites, the Ni--W eutectic exhibits extensive ductility by slip. Furthermore, its properties may be greatly varied by proper heat treatments. Results of studies of deformation in both monotonic and fatigue loading are reported. During monotonic deformation the fiber/matrix interface acts as a source of dislocations at low strains and an obstacle to matrix slip at higher strains. Deforming the quenched-plus-aged eutectic causes planar matrix slip, with the result that matrix slip bands create stress concentrations in the fibers at low strains. The aged eutectic reaches generally higher stress levels for comparable strains than does the as-quenched eutectic, and the failure strains decrease with increasing aging times. For the composites tested in fatigue, the aged eutectic has better high-stress fatigue resistance than the as-quenched material, but for low-stress, high-cycle fatigue their cycles to failure are nearly the same. However, both crack initiation and crack propagation are different in the two conditions, so the coincidence in high-cycle fatigue is probably fortuitous. The effect of matrix strength on composite performance is not simple, since changes in strength may be accompanied by alterations in slip modes and failure processes. (17 fig) (auth)
Shaila, Mulki; Pai, G Prakash; Shetty, Pushparaj
2013-01-01
To evaluate the salivary protein concentration in gingivitis and periodontitis patients and compare parameters such as salivary total protein, salivary albumin, flow rate, pH and buffering capacity in both young and elderly patients with simple methods. One hundred and twenty subjects were grouped based on their age as young and elderly. Each group was subgrouped (20 subjects) as controls, gingivitis and periodontitis. Unstimulated whole saliva was collected from patients and the flow rate was noted during collection of the sample. Salivary protein estimation was done using the Biuret method and salivary albumin was assessed using the Bromocresol green method. pH was estimated with a pH meter and buffering capacity was analyzed with the titration method. Student's t-test, Fisher's test (ANOVA) and Tukey HSD (ANOVA) tests were used for statistical analysis. A highly significant rise in the salivary total protein and albumin concentration was noted in both young and elderly gingivitis and periodontitis subjects. An overall decrease in salivary flow rate was observed among the elderly, and the salivary flow rate of women was significantly lower than that of men. Significant associations between salivary total protein and albumin in gingivitis and periodontitis were found with simple biochemical tests. A decrease in salivary flow rate among the elderly and among women was noted.
Energy Technology Data Exchange (ETDEWEB)
Maillie, H D; Baxter, R C; Lisman, H [Rochester Univ., N.Y. (USA). School of Medicine and Dentistry
1977-11-01
The ⁵⁹Fe uptake system was used to estimate the dose distribution throughout sham-operated and splenectomized rats exposed to whole-body 1000 kVp X irradiation. The marrow in the splenectomized animals was found to be slightly more radiosensitive; however, the difference between the two groups of rats was not significant. The dose distributions throughout the marrow of both groups were the same. It was concluded that splenectomy had no effect on the usefulness of this method. The method of measuring ⁵⁹Fe uptake in this study used the uptake of radioiron at 6 hr post-injection and the subtraction of the activity due to circulating iron. This resulted in a technique capable of measuring changes in iron uptake with midline-air-exposures of 25 R.
DEFF Research Database (Denmark)
Rønjom, Marianne Feen; Brink, Carsten; Laugaard Lorenzen, Ebbe
2015-01-01
volume, Dmean and estimated risk of HT. Bland-Altman plots were used for assessment of the systematic (mean) and random [standard deviation (SD)] variability of the three parameters, and a method for displaying the spatial variation in delineation differences was developed. Results. Intra-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm(3) (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm(3) (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences of NTCP values for radiation-induced HT due to intra- and inter...
Directory of Open Access Journals (Sweden)
Yoshinobu Hayashi
Full Text Available In termites, division of labor among castes, categories of individuals that perform specialized tasks, increases colony-level productivity and is the key to their ecological success. Although molecular studies on caste polymorphism have been performed in termites, we are far from a comprehensive understanding of the molecular basis of this phenomenon. To facilitate future molecular studies, we aimed to construct expressed sequence tag (EST) libraries covering wide ranges of gene repertoires in three representative termite species, Hodotermopsis sjostedti, Reticulitermes speratus and Nasutitermes takasagoensis. We generated normalized cDNA libraries from whole bodies, except for guts containing microbes, of almost all castes, sexes and developmental stages and sequenced them with the 454 GS FLX titanium system. We obtained >1.2 million quality-filtered reads yielding >400 million bases for each of the three species. Isotigs, which are analogous to individual transcripts, and singletons were produced by assembling the reads and annotated using public databases. Genes related to juvenile hormone, which plays crucial roles in caste differentiation of termites, were identified from the EST libraries by BLAST search. To explore the potential for DNA methylation, which plays an important role in caste differentiation of honeybees, tBLASTn searches for DNA methyltransferases (dnmt1, dnmt2 and dnmt3) and methyl-CpG binding domain (mbd) were performed against the EST libraries. All four of these genes were found in the H. sjostedti library, while all except dnmt3 were found in R. speratus and N. takasagoensis. The ratio of the observed to the expected CpG content (CpG O/E), which is a proxy for DNA methylation level, was calculated for the coding sequences predicted from the isotigs and singletons. In all of the three species, the majority of coding sequences showed depletion of CpG O/E (less than 1), and the distributions of CpG O/E were bimodal, suggesting
Hayashi, Yoshinobu; Shigenobu, Shuji; Watanabe, Dai; Toga, Kouhei; Saiki, Ryota; Shimada, Keisuke; Bourguignon, Thomas; Lo, Nathan; Hojo, Masaru; Maekawa, Kiyoto; Miura, Toru
2013-01-01
In termites, division of labor among castes, categories of individuals that perform specialized tasks, increases colony-level productivity and is the key to their ecological success. Although molecular studies on caste polymorphism have been performed in termites, we are far from a comprehensive understanding of the molecular basis of this phenomenon. To facilitate future molecular studies, we aimed to construct expressed sequence tag (EST) libraries covering wide ranges of gene repertoires in three representative termite species, Hodotermopsis sjostedti, Reticulitermes speratus and Nasutitermes takasagoensis. We generated normalized cDNA libraries from whole bodies, except for guts containing microbes, of almost all castes, sexes and developmental stages and sequenced them with the 454 GS FLX titanium system. We obtained >1.2 million quality-filtered reads yielding >400 million bases for each of the three species. Isotigs, which are analogous to individual transcripts, and singletons were produced by assembling the reads and annotated using public databases. Genes related to juvenile hormone, which plays crucial roles in caste differentiation of termites, were identified from the EST libraries by BLAST search. To explore the potential for DNA methylation, which plays an important role in caste differentiation of honeybees, tBLASTn searches for DNA methyltransferases (dnmt1, dnmt2 and dnmt3) and methyl-CpG binding domain (mbd) were performed against the EST libraries. All four of these genes were found in the H. sjostedti library, while all except dnmt3 were found in R. speratus and N. takasagoensis. The ratio of the observed to the expected CpG content (CpG O/E), which is a proxy for DNA methylation level, was calculated for the coding sequences predicted from the isotigs and singletons. In all of the three species, the majority of coding sequences showed depletion of CpG O/E (less than 1), and the distributions of CpG O/E were bimodal, suggesting the presence of
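The CpG O/E statistic used above has a simple closed form: the observed count of CG dinucleotides divided by the count expected from the C and G base frequencies, CpG O/E = (N_CpG × L) / (N_C × N_G) for a sequence of length L. A minimal sketch (the exact normalization conventions of the study are not given in the record, so this is the common textbook form):

```python
def cpg_oe(seq):
    """Observed/expected CpG ratio, a standard proxy for germline DNA
    methylation: methylated CpGs deaminate to TpG over evolutionary time,
    depleting the ratio below 1 in methylated regions."""
    seq = seq.upper()
    L = len(seq)
    n_c, n_g = seq.count("C"), seq.count("G")
    n_cpg = sum(1 for i in range(L - 1) if seq[i:i + 2] == "CG")
    if n_c == 0 or n_g == 0:
        return float("nan")
    return n_cpg * L / (n_c * n_g)
```

A bimodal distribution of this value across coding sequences, as reported above, is the usual signature of a genome in which some genes are methylated and others are not.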
Energy Technology Data Exchange (ETDEWEB)
Lee, C [University of Michigan, Ann Arbor, MI (United States); Jung, J; Pelletier, C [East Carolina University, Greenville, NC (United States); Kim, J [University of Pittsburgh Medical Center, Pittsburgh, PA (United States); Lee, C [National Cancer Institute, Bethesda, MD (United States)
2014-06-01
Purpose: Patient cohorts in second cancer studies often involve radiotherapy patients for whom no radiological images are available. We developed methods to construct a realistic surrogate anatomy using computational human phantoms, and tested these phantom images in both a commercial treatment planning system (Eclipse) and a custom Monte Carlo (MC) transport code. Methods: We used the reference adult male phantom defined by the International Commission on Radiological Protection (ICRP). The hybrid phantom, originally developed in Non-Uniform Rational B-Spline (NURBS) and polygon mesh format, was converted into a more common medical imaging format. Electron density was calculated from the material composition of the organs and tissues and then converted into DICOM format. The DICOM images were imported into the Eclipse system for treatment planning, and the resulting DICOM-RT files were imported into the MC code for MC-based dose calculation. Normal tissue doses were calculated in Eclipse and in the MC code for an illustrative prostate treatment case and compared to each other. Results: DICOM images were generated from the adult male reference phantom. Densities and volumes of selected organs in the original phantom and in its representation within Eclipse agreed to within 0.6%. Mean doses from Eclipse and the MC code matched to within 7%, whereas maximum and minimum doses differed by up to 45%. Conclusion: The methods established in this study will be useful for reconstructing organ doses to support epidemiological studies of second cancer in cancer survivors treated with radiotherapy. We are also working on implementing body-size-dependent computational phantoms to better represent patient anatomy when the height and weight of patients are available.
Directory of Open Access Journals (Sweden)
Dalei Jing
2017-07-01
Full Text Available In the present study, a modified Reynolds equation including the electrical double layer (EDL)-induced electroviscous effect of the lubricant is established to investigate the effect of the EDL on the hydrodynamic lubrication of a 1D slider bearing. The theoretical model is based on the nonlinear Poisson–Boltzmann equation without the use of the Debye–Hückel approximation. Furthermore, the variation in the bulk electrical conductivity of the lubricant under the influence of the EDL is also considered during the theoretical analysis of hydrodynamic lubrication. The results show that the EDL can increase the hydrodynamic load capacity of the lubricant in a 1D slider bearing. More importantly, the hydrodynamic load capacity of the lubricant under the influence of the EDL shows a non-monotonic trend, changing from enhancement to attenuation with a gradual increase in the absolute value of the zeta potential. This non-monotonic hydrodynamic lubrication is dependent on the non-monotonic electroviscous effect of the lubricant generated by the EDL, which is dominated by the non-monotonic electrical field strength and non-monotonic electrical body force on the lubricant. The subject of the paper is the theoretical modeling and the corresponding analysis.
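For context, the baseline problem being modified here is the classical 1D Reynolds equation, d/dx(h³ dp/dx) = 6 μ U dh/dx; the EDL correction of the paper effectively enters through an apparent viscosity. Below is a finite-difference sketch of the *classical* (uncorrected) slider bearing with a linear converging film and ambient pressure at both ends. The profile, grid size, and nondimensional parameter values are assumptions of this illustration.

```python
import numpy as np

def slider_pressure(h1, h2, L=1.0, U=1.0, mu=1.0, n=201):
    """Finite-difference solution of the classical 1-D Reynolds equation
        d/dx( h^3 dp/dx ) = 6*mu*U * dh/dx
    for a linear film profile h(x) = h1 + (h2 - h1)*x/L with zero gauge
    pressure at both ends.  Returns (x, p)."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    h = h1 + (h2 - h1) * x / L
    k = (0.5 * (h[:-1] + h[1:])) ** 3          # h^3 evaluated at cell faces
    m = n - 2                                  # number of interior unknowns
    A = np.zeros((m, m))
    rhs = 3.0 * mu * U * (h[2:] - h[:-2]) * dx # central-diff RHS times dx^2
    for idx in range(m):
        i = idx + 1                            # global node index
        A[idx, idx] = -(k[i - 1] + k[i])
        if idx > 0:
            A[idx, idx - 1] = k[i - 1]
        if idx < m - 1:
            A[idx, idx + 1] = k[i]
    p = np.zeros(n)
    p[1:-1] = np.linalg.solve(A, rhs)
    return x, p

x, p = slider_pressure(h1=2.0, h2=1.0)         # converging wedge builds pressure
load = float(np.sum(0.5 * (p[:-1] + p[1:]) * (x[1] - x[0])))  # load per unit width
```

In the paper's framework, the EDL-induced electroviscous effect would modify μ (non-monotonically in the zeta potential), which in this sketch would simply rescale and reshape the pressure profile.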
Christodorescu, Mihai; Kinder, Johannes; Jha, Somesh; Katzenbeisser, Stefan; Veith, Helmut
2005-01-01
Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. In order to evade detection by malware detectors, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware normalizer ...
Madheswaran, C. K.; Ambily, P. S.; Dattatreya, J. K.; Ramesh, G.
2015-06-01
This work describes an experimental investigation of the behaviour of reinforced GPC beams subjected to monotonic static loading. The overall dimensions of the GPC beams are 250 mm × 300 mm × 2200 mm, with an effective span of 1600 mm. The beams were designed to be critical in shear as per IS:456 provisions. The specimens were produced from a mix incorporating fly ash and ground granulated blast furnace slag, designed for a compressive strength of 40 MPa at 28 days. The reinforced concrete specimens were cured at ambient temperature under wet burlap. The parameters investigated include the shear span to depth ratio (a/d = 1.5 and 2.0). Experiments were conducted on 12 GPC beams and four OPCC control beams, all tested using a 2000 kN servo-controlled hydraulic actuator. This paper presents the results of these experimental studies.
Non-Interior Continuation Method for Solving the Monotone Semidefinite Complementarity Problem
International Nuclear Information System (INIS)
Huang, Z.H.; Han, J.
2003-01-01
Recently, Chen and Tseng extended non-interior continuation smoothing methods for solving linear/nonlinear complementarity problems to semidefinite complementarity problems (SDCP). In this paper we propose a non-interior continuation method for solving the monotone SDCP based on the smoothed Fischer-Burmeister function, which is shown to be globally linearly and locally quadratically convergent under suitable assumptions. Our algorithm needs to solve at most one linear system of equations at each iteration. In addition, in our analysis of the global linear convergence of the algorithm, we need not assume that the Frechet derivative of the function involved in the SDCP is Lipschitz continuous. For non-interior continuation/smoothing methods for solving the nonlinear complementarity problem, such an assumption has been used widely in the literature in order to achieve global linear convergence results.
Asymptotic Poisson distribution for the number of system failures of a monotone system
International Nuclear Information System (INIS)
Aven, Terje; Haukis, Harald
1997-01-01
It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution in general gives very good approximations for highly available components. The asymptotic failure rate of the system gives the best results when the process is in steady state, whereas other normalising factors seem preferable when the process is not in steady state. From a computational point of view, the asymptotic system failure rate is the most attractive.
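The asymptotic Poisson property is easy to probe by Monte Carlo on a toy system (an illustration only, not the systems or normalising factors studied in the paper): a 1-out-of-2 parallel system with exponential failure and repair times is highly available when repairs are fast, so the count of system failures over a long horizon should have dispersion (variance/mean) near one, as a Poisson count would. All rates below are arbitrary illustrative choices.

```python
import random

def system_failures(lam, mu, T, rng):
    """Continuous-time Markov simulation of a parallel system of two
    components (failure rate lam, repair rate mu each).  The system fails
    whenever both components are down; returns the number of system
    failures in [0, T]."""
    t, down, failures = 0.0, 0, 0      # `down` = number of failed components
    while True:
        rate = {0: 2 * lam, 1: lam + mu, 2: 2 * mu}[down]
        t += rng.expovariate(rate)
        if t > T:
            return failures
        if down == 0:
            down = 1
        elif down == 2:
            down = 1
        elif rng.random() < lam / (lam + mu):
            down = 2                   # second component fails: system failure
            failures += 1
        else:
            down = 0                   # repair completes first

rng = random.Random(42)
counts = [system_failures(0.1, 5.0, 1000.0, rng) for _ in range(600)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
```

With these rates the stationary system failure rate is roughly π₁·λ ≈ 0.0038 per unit time, so about 3.8 failures are expected per run, and the empirical dispersion should sit close to 1.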
Simple bounds for counting processes with monotone rate of occurrence of failures
International Nuclear Information System (INIS)
Kaminskiy, Mark P.
2007-01-01
The article discusses some aspects of the analogy between certain classes of distributions used as models for the time to failure of nonrepairable objects, and the counting processes used as models for the failure process of repairable objects. The notion of quantiles for counting processes with strictly increasing cumulative intensity function is introduced. The classes of counting processes with increasing (decreasing) rate of occurrence of failures are considered. For these classes, useful nonparametric bounds for the cumulative intensity function based on one known quantile are obtained. These bounds, which can be used for repairable objects, are similar to the bounds introduced by Barlow and Marshall [Barlow R, Marshall A. Bounds for distributions with monotone hazard rate, I and II. Ann Math Stat 1964;35:1234-74] for IFRA (DFRA) time to failure distributions applicable to nonrepairable objects.
Monotonicity of the ratio of modified Bessel functions of the first kind with applications.
Yang, Zhen-Hang; Zheng, Shen-Zhou
2018-01-01
Let [Formula: see text] with [Formula: see text] be the modified Bessel functions of the first kind of order v. In this paper, we prove the monotonicity of the function [Formula: see text] on [Formula: see text] for different values of the parameter p with [Formula: see text]. As applications, we deduce some new Simpson-Spector-type inequalities for [Formula: see text] and derive a new type of bounds [Formula: see text] ([Formula: see text]) for [Formula: see text]. In particular, we show that the upper bound [Formula: see text] for [Formula: see text] is the minimum over all upper bounds [Formula: see text], where [Formula: see text], and is not comparable with other sharpest upper bounds. We also find such upper bounds for [Formula: see text] with [Formula: see text] and for [Formula: see text] with [Formula: see text].
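The formulas in this record survive only as placeholders, but the flavor of such monotonicity results can be checked numerically. As one concrete instance (an assumption of this sketch, not necessarily the paper's exact ratio), the ratio I_{v+1}(x)/I_v(x) of modified Bessel functions of the first kind is increasing in x and stays below 1 for v ≥ 0; I_v is computed here directly from its power series.

```python
import math

def bessel_i(v, x, terms=80):
    """Modified Bessel function of the first kind, I_v(x), summed from its
    power series sum_k (x/2)^(v+2k) / (k! * Gamma(v+k+1)); adequate for
    moderate x."""
    return sum((x / 2.0) ** (v + 2 * k) / (math.factorial(k) * math.gamma(v + k + 1))
               for k in range(terms))

def ratio(v, x):
    return bessel_i(v + 1, x) / bessel_i(v, x)

xs = [0.5, 1.0, 2.0, 4.0, 8.0]
vals = [ratio(0.5, x) for x in xs]   # v = 1/2: ratio equals coth(x) - 1/x
```

The half-integer case gives a closed form against which the series can be checked: I_{3/2}(x)/I_{1/2}(x) = coth(x) - 1/x, which increases from 0 toward 1 as x grows.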
Using an inductive approach for definition making: Monotonicity and boundedness of sequences
Directory of Open Access Journals (Sweden)
Deonarain Brijlall
2009-09-01
Full Text Available The study investigated fourth-year students' construction of the definitions of monotonicity and boundedness of sequences, at the Edgewood Campus of the University of KwaZulu-Natal in South Africa. Structured worksheets based on a guided problem-solving teaching model were used to help students construct the two definitions. A group of twenty-three undergraduate teacher trainees participated in the project. These students specialised in the teaching of mathematics in the Further Education and Training (FET) (Grades 10 to 12) school curriculum. This paper specifically reports on the investigation of students' definition constructions based on a learning theory within the context of advanced mathematical thinking, and makes a contribution to an understanding of how these students constructed the two definitions. It was found that, despite the intervention of a structured design, these definitions were partially or inadequately conceptualised by some students.
Non-monotonic resonance in a spatially forced Lengyel-Epstein model
Energy Technology Data Exchange (ETDEWEB)
Haim, Lev [Physics Department, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Department of Oncology, Soroka University Medical Center, Beer-Sheva 84101 (Israel); Hagberg, Aric [Center for Nonlinear Studies, Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Meron, Ehud [Physics Department, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Department of Solar Energy and Environmental Physics, BIDR, Ben-Gurion University of the Negev, Sede Boqer Campus, Midreshet Ben-Gurion 84990 (Israel)
2015-06-15
We study resonant spatially periodic solutions of the Lengyel-Epstein model modified to describe the chlorine dioxide-iodine-malonic acid reaction under spatially periodic illumination. Using multiple-scale analysis and numerical simulations, we obtain the stability ranges of 2:1 resonant solutions, i.e., solutions with wavenumbers that are exactly half of the forcing wavenumber. We show that the width of resonant wavenumber response is a non-monotonic function of the forcing strength, and diminishes to zero at sufficiently strong forcing. We further show that strong forcing may result in a π/2 phase shift of the resonant solutions, and argue that the nonequilibrium Ising-Bloch front bifurcation can be reversed. We attribute these behaviors to an inherent property of forcing by periodic illumination, namely, the increase of the mean spatial illumination as the forcing amplitude is increased.
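The reaction kinetics underlying this study are the standard (unforced, space-independent) Lengyel-Epstein equations; the sketch below integrates them with a plain RK4 stepper. The parameter values and initial condition are illustrative assumptions only — the paper's spatially forced, spatially extended model is not reproduced here.

```python
# Minimal sketch: time integration of the unforced Lengyel-Epstein kinetics
#   du/dt = a - u - 4uv/(1+u^2),
#   dv/dt = sigma*b*(u - uv/(1+u^2)).
# Parameter values below are illustrative, not taken from the paper.

def lengyel_epstein(state, a=10.0, b=2.0, sigma=8.0):
    u, v = state
    f = u * v / (1.0 + u * u)
    return (a - u - 4.0 * f, sigma * b * (u - f))

def rk4_step(state, h):
    k1 = lengyel_epstein(state)
    k2 = lengyel_epstein([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = lengyel_epstein([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = lengyel_epstein([s + h * k for s, k in zip(state, k3)])
    return tuple(s + h / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for s, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0)           # illustrative initial condition
trajectory = [state]
for _ in range(5000):        # integrate to t = 5 with h = 0.001
    state = rk4_step(state, 0.001)
    trajectory.append(state)
```

The uniform steady state of these kinetics is u* = a/5, v* = 1 + u*²; the 2:1 resonant patterns in the paper arise when spatially periodic illumination forcing is added on top of this reaction system.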
An Optimal Augmented Monotonic Tracking Controller for Aircraft Engines with Output Constraints
Directory of Open Access Journals (Sweden)
Jiakun Qin
2017-01-01
Full Text Available This paper proposes a novel min-max control scheme for aircraft engines, with the aim of transferring a set of regulated outputs between two set-points while ensuring that a set of auxiliary outputs remain within prescribed constraints. To this end, an optimal augmented monotonic tracking controller (OAMTC) is proposed, by considering a linear plant with input integration, to enhance the ability of the control system to reject uncertainty in system parameters and to ensure that limits are not crossed. The key idea is to use the eigenvalue and eigenvector placement method and genetic algorithms to shape the output responses. The approach is validated by numerical simulation. The results show that the designed OAMTC controller can achieve satisfactory dynamic and steady-state performance and keep the auxiliary outputs within constraints in the transient regime.
Directory of Open Access Journals (Sweden)
Wiwik Budiawan
2016-02-01
Full Text Available Humans, as working subjects, have limitations that can lead to errors. Human error reduces the level of alertness of train drivers and assistant train drivers in the line of duty. The level of alertness is influenced by five factors: monotony, sleep quality, psychophysiological state, distraction, and work fatigue. These factors were measured with a monotony questionnaire, the Pittsburgh Sleep Quality Index (PSQI) questionnaire, the General Job Stress questionnaire, and the FAS questionnaire, while the level of alertness was tested with Psychomotor Vigilance Test (PVT) software. Train drivers and assistant train drivers were chosen as respondents because this type of work demands a high level of alertness. The measurements were then analysed using multiple linear regression. The study found that monotony, sleep quality, psychophysiological state, distraction, and work fatigue jointly influence the level of alertness. Before the duty shift, the computed F statistic for monotony, sleep quality, and psychophysiological state was 0.876, while distraction and work fatigue (FAS) yielded a value of 2.371 with respect to alertness; after work, distraction and work fatigue (FAS) yielded an F statistic of 2.953, and monotony, sleep quality, and psychophysiological state a value of 0.544. The factor with the greatest influence on alertness before the duty shift was sleep quality; after the duty shift it was work fatigue.
Energy Technology Data Exchange (ETDEWEB)
Zhao Xuejing [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); School of mathematics and statistics, Lanzhou University, Lanzhou 730000 (China); Fouladirad, Mitra, E-mail: mitra.fouladirad@utt.f [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Berenguer, Christophe [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Bordes, Laurent [Universite de Pau et des Pays de l' Adour, LMA UMR CNRS 5142, 64013 PAU Cedex (France)
2010-08-15
The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariates process is assumed to be a time-homogenous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different conditions of covariates and different maintenance policies is analysed through simulation experiments to compare the policies performances.
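A toy, discretised version of such a condition-based maintenance policy can be simulated directly. Everything below — the deterioration law, the two-state covariate chain, the cost values, and the thresholds — is an illustrative assumption, not the paper's model:

```python
import random

# Toy condition-based maintenance simulation (illustrative assumptions only):
# deterioration increments depend on a two-state covariate Markov chain; the
# system is inspected every `tau` steps and preventively replaced when the
# observed level exceeds `threshold`; failures (level >= 2*threshold) are
# only discovered at inspection.  We report the long-run average cost.

def simulate_policy(tau, threshold, horizon=100_000, seed=1):
    rng = random.Random(seed)
    P = {0: 0.95, 1: 0.10}          # prob. of moving to covariate state 0
    rate = {0: 0.05, 1: 0.20}       # mean deterioration increment per state
    c_inspect, c_replace, c_fail = 1.0, 50.0, 500.0
    failure_level = 2.0 * threshold
    level, state, cost = 0.0, 0, 0.0
    for t in range(1, horizon + 1):
        state = 0 if rng.random() < P[state] else 1
        level += rng.gauss(rate[state], 0.05)   # noisy, non-monotone path
        if t % tau == 0:                        # periodic inspection
            cost += c_inspect
            if level >= failure_level:          # failure found at inspection
                cost += c_fail
                level, state = 0.0, 0
            elif level >= threshold:            # preventive replacement
                cost += c_replace
                level, state = 0.0, 0
    return cost / horizon

# Compare two inspection periods under the same replacement threshold.
costs = {tau: simulate_policy(tau, threshold=3.0) for tau in (5, 20)}
```

Comparing `costs[5]` and `costs[20]` illustrates the trade-off the paper optimises: frequent inspection costs more per unit time but catches deterioration before failure.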
Katushkina, O. A.; Alexashov, D. B.; Izmodenov, V. V.; Gvaramadze, V. V.
2017-02-01
High-resolution mid-infrared observations of astrospheres show that many of them have a filamentary (cirrus-like) structure. Using numerical models of dust dynamics in astrospheres, we suggest that their filamentary structure might be related to a specific spatial distribution of the interstellar dust around the stars, caused by the gyrorotation of charged dust grains in the interstellar magnetic field. Our numerical model describes the dust dynamics in astrospheres under the influence of the Lorentz force, assuming a constant dust charge. Calculations are performed separately for dust grains of different sizes. It is shown that a non-monotonic spatial dust distribution (viewed as filaments) appears for dust grains whose period of gyromotion is comparable with the characteristic time-scale of the dust motion in the astrosphere. Numerical modelling demonstrates that the number of filaments depends on the charge-to-mass ratio of the dust.
Non-monotonicity and divergent time scale in Axelrod model dynamics
Vazquez, F.; Redner, S.
2007-04-01
We study the evolution of the Axelrod model for cultural diversity, a prototypical non-equilibrium process that exhibits rich dynamics and a dynamic phase transition between diversity and an inactive state. We consider a simple version of the model in which each individual possesses two features that can assume q possibilities. Within a mean-field description in which each individual has just a few interaction partners, we find a phase transition at a critical value q_c between an active, diverse state for q < q_c and a frozen state. For q ≲ q_c, the density of active links is non-monotonic in time and the asymptotic approach to the steady state is controlled by a time scale that diverges as (q − q_c)^{-1/2}.
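The two-feature model described above is easy to simulate. The sketch below runs it on a ring (an illustrative topology chosen for simplicity; the paper uses a mean-field description with few interaction partners) and measures the density of active links, the quantity whose non-monotonic decay the paper analyses:

```python
import random

# Minimal sketch of the two-feature Axelrod model on a ring.  Each agent
# holds 2 cultural features, each taking one of q values.  A link is
# "active" if its endpoints share exactly one feature, so interaction
# (copying a differing feature) is possible.

def axelrod_active_density(n=200, q=3, steps=20_000, seed=0):
    rng = random.Random(seed)
    agents = [[rng.randrange(q), rng.randrange(q)] for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + 1) % n                    # right neighbour on the ring
        shared = sum(a == b for a, b in zip(agents[i], agents[j]))
        if shared == 1:                    # active link: copy the differing feature
            k = 0 if agents[i][0] != agents[j][0] else 1
            agents[i][k] = agents[j][k]
    active = sum(
        sum(a == b for a, b in zip(agents[i], agents[(i + 1) % n])) == 1
        for i in range(n)
    )
    return active / n

density = axelrod_active_density()
```

Tracking this density over time (rather than only at the end) reproduces the kind of curve the paper studies: it can first rise as shared features spread, then decay as the system orders or freezes.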
The monotonicity and convexity of a function involving the digamma function and their applications
Yang, Zhen-Hang
2014-01-01
Let $\\mathcal{L}(x,a)$ be defined on $\\left( -1,\\infty \\right) \\times \\left( 4/15,\\infty \\right) $ or $\\left( 0,\\infty \\right) \\times \\left( 1/15,\\infty \\right) $ by the formula% \\begin{equation*} \\mathcal{L}(x,a)=\\tfrac{1}{90a^{2}+2}\\ln \\left( x^{2}+x+\\tfrac{3a+1}{3}% \\right) +\\tfrac{45a^{2}}{90a^{2}+2}\\ln \\left( x^{2}+x+\\allowbreak \\tfrac{% 15a-1}{45a}\\right) . \\end{equation*} We investigate the monotonicity and convexity of the function $x\\rightarrow F_{a}\\left( x\\right) =\\psi \\left( x+1\\r...
A new efficient algorithm for computing the imprecise reliability of monotone systems
International Nuclear Information System (INIS)
Utkin, Lev V.
2004-01-01
Reliability analysis of complex systems under partial information about the reliability of components, and under different conditions of component independence, may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis runs into the complexity of the optimization problems that have to be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in the paper. This algorithm makes it practical to carry out reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, and under conditions of component independence or a lack of information about independence. A numerical example illustrates the algorithm.
Simplest bifurcation diagrams for monotone families of vector fields on a torus
Baesens, C.; MacKay, R. S.
2018-06-01
In part 1, we prove that the bifurcation diagram for a monotone two-parameter family of vector fields on a torus has to be at least as complicated as the conjectured simplest one proposed in Baesens et al (1991 Physica D 49 387–475). To achieve this, we define ‘simplest’ by sequentially minimising the numbers of equilibria, Bogdanov–Takens points, closed curves of centre and of neutral saddle, intersections of curves of centre and neutral saddle, Reeb components, other invariant annuli, arcs of rotational homoclinic bifurcation of horizontal homotopy type, necklace points, contractible periodic orbits, points of neutral horizontal homoclinic bifurcation and half-plane fan points. We obtain two types of simplest case, including that initially proposed. In part 2, we analyse the bifurcation diagram for an explicit monotone family of vector fields on a torus and prove that it has at most two equilibria, precisely four Bogdanov–Takens points, no closed curves of centre nor closed curves of neutral saddle, at most two Reeb components, precisely four arcs of rotational homoclinic connection of ‘horizontal’ homotopy type, eight horizontal saddle-node loop points, two necklace points, four points of neutral horizontal homoclinic connection, and two half-plane fan points, and there is no simultaneous existence of centre and neutral saddle, nor contractible homoclinic connection to a neutral saddle. Furthermore, we prove that all saddle-nodes, Bogdanov–Takens points, non-neutral and neutral horizontal homoclinic bifurcations are non-degenerate and the Hopf condition is satisfied for all centres. We also find it has four points of degenerate Hopf bifurcation. It thus provides an example of a family satisfying all the assumptions of part 1 except the one of at most one contractible periodic orbit.
Zoeller, R Thomas; Vandenberg, Laura N
2015-05-15
The fundamental principle in regulatory toxicology is that all chemicals are toxic and that the severity of effect is proportional to the exposure level. An ancillary assumption is that there are no effects at exposures below the lowest observed adverse effect level (LOAEL), either because no effects exist or because they are not statistically resolvable, implying that they would not be adverse. Chemicals that interfere with hormones violate these principles in two important ways: dose-response relationships can be non-monotonic, which have been reported in hundreds of studies of endocrine disrupting chemicals (EDCs); and effects are often observed below the LOAEL, including all environmental epidemiological studies examining EDCs. In recognition of the importance of this issue, Lagarde et al. have published the first proposal to qualitatively assess non-monotonic dose response (NMDR) relationships for use in risk assessments. Their proposal represents a significant step forward in the evaluation of complex datasets for use in risk assessments. Here, we comment on three elements of the Lagarde proposal that we feel need to be assessed more critically and present our arguments: 1) the use of Klimisch scores to evaluate study quality, 2) the concept of evaluating study quality without topical experts' knowledge and opinions, and 3) the requirement of establishing the biological plausibility of an NMDR before consideration for use in risk assessment. We present evidence-based logical arguments that 1) the use of the Klimisch score should be abandoned for assessing study quality; 2) evaluating study quality requires experts in the specific field; and 3) an understanding of mechanisms should not be required to accept observable, statistically valid phenomena. It is our hope to contribute to the important and ongoing debate about the impact of NMDRs on risk assessment with positive suggestions.
MONOTONIC DERIVATIVE CORRECTION FOR CALCULATION OF SUPERSONIC FLOWS WITH SHOCK WAVES
Directory of Open Access Journals (Sweden)
P. V. Bulat
2015-07-01
Full Text Available Subject of Research. Numerical solution methods for gas dynamics problems based on exact and approximate solutions of the Riemann problem are considered. We have developed an approach to the solution of the Euler equations describing flows of inviscid compressible gas, based on the finite volume method and finite difference schemes of various orders of accuracy. The Godunov, Kolgan, Roe, Harten and Chakravarthy-Osher schemes are used in the calculations (the order of accuracy of the finite difference schemes varies from 1st to 3rd). The accuracy and efficiency of the various finite difference schemes are compared on the example of inviscid compressible gas flow in a Laval nozzle, both for continuous acceleration of the flow in the nozzle and in the presence of a nozzle shock wave. Conclusions about the accuracy of the various finite difference schemes and the time required for the calculations are drawn. Main Results. A comparative analysis of difference schemes for integrating the Euler equations has been carried out. These schemes are based on exact and approximate solutions of the problem of an arbitrary discontinuity breakdown. The calculation results show that monotonic derivative correction provides uniformity of the numerical solution in the neighbourhood of the breakdown. On the one hand, it prevents the formation of new extremum points, providing the monotonicity property; on the other hand, it smooths existing minima and maxima and causes a loss of accuracy. Practical Relevance. The developed numerical calculation method makes it possible to perform high-accuracy calculations of flows with strong non-stationary shock and detonation waves, with no non-physical solution oscillations on the shock wave front.
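In Kolgan/TVD-type schemes, the monotonic derivative correction described above is implemented with a slope limiter. A minimal sketch using the classic minmod limiter follows; the specific limiter choice here is an assumption for illustration, since the paper compares several schemes rather than prescribing one:

```python
# Minmod slope limiter: a classic monotonic derivative correction used in
# Kolgan/TVD-type schemes.  Given the left and right one-sided slopes of a
# cell, it returns the smaller-magnitude slope when they agree in sign and
# zero otherwise, so the reconstruction cannot create new extrema.

def minmod(a, b):
    if a * b <= 0.0:          # opposite signs or a local extremum: flatten
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell-wise limited slopes for a 1-D list of cell averages."""
    n = len(u)
    slopes = [0.0] * n        # boundary cells keep zero slope
    for i in range(1, n - 1):
        slopes[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return slopes

# Near a discontinuity the limiter flattens the reconstruction entirely,
# which is exactly the accuracy-vs-monotonicity trade-off noted above:
u = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
s = limited_slopes(u)
```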
International Nuclear Information System (INIS)
Chawla, N.; Liaw, P.K.; Lara-Curzio, E.; Ferber, M.K.; Lowden, R.A.
2012-01-01
The effect of fiber fabric orientation, i.e., parallel to loading and perpendicular to the loading axis, on the monotonic and fatigue behavior of plain-weave fiber reinforced SiC matrix laminated composites was investigated. Two composite systems were studied: Nextel 312 (3M Corp.) reinforced SiC and Nicalon (Nippon Carbon Corp.) reinforced SiC, both fabricated by Forced Chemical Vapor Infiltration (FCVI). The behavior of both materials was investigated under monotonic and fatigue loading. Interlaminar and in-plane shear tests were conducted to further correlate shear properties with the effect of fabric orientation, with respect to the loading axis, on the orientation effects in bending. The underlying mechanisms, in monotonic and fatigue loading, were investigated through post-fracture examination using scanning electron microscopy (SEM).
International Nuclear Information System (INIS)
Perrow, C.
1989-01-01
The author has chosen numerous concrete examples to illustrate the hazardousness inherent in high-risk technologies. Starting with the TMI reactor accident in 1979, he shows that it is not only the nuclear energy sector that bears the risk of 'normal accidents', but also quite a number of other technologies and industrial sectors, or research fields. The author refers to the petrochemical industry, shipping, air traffic, large dams, mining activities, and genetic engineering, showing that due to the complexity of the systems and their manifold, rapidly interacting processes, accidents happen that cannot be thoroughly calculated, and hence are unavoidable. (orig./HP) [de
Directory of Open Access Journals (Sweden)
Kriengsak Wattanawitoon
2011-01-01
Full Text Available We prove strong and weak convergence theorems for modified hybrid proximal-point algorithms for finding a common element of the zero point set of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solutions of the variational inequality for an inverse strongly monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced ones by Li and Song (2008) and many other authors.
Energy Technology Data Exchange (ETDEWEB)
Duan Shukai [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); School of Electronic and Information Engineering, Southwest University, Chongqing 400715 (China)], E-mail: duansk@swu.edu.cn; Liao Xiaofeng [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)], E-mail: xfliao@cqu.edu.cn
2007-09-10
A new chaotic delayed neuron model with a non-monotonically increasing transfer function, called the chaotic Liao delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods in circuit design, especially for circuits with a time-delay unit and a non-monotonically increasing activation unit, are also considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.
DEFF Research Database (Denmark)
Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov
2012-01-01
Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal … The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient …
DEFF Research Database (Denmark)
Madsen, Louise Sofia; Handberg, Charlotte
2018-01-01
BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study was to understand the reflections on cancer survivorship care of lymphoma survivors to aid the future planning of cancer survivorship care and overcome barriers to participation. METHODS: Data were generated in a hematological ward during 4 months of ethnographic fieldwork, including participant observation and 46 … implying an influence on whether to participate in cancer survivorship care programs. Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease" …
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
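The bootstrap shortest-confidence-interval step can be sketched generically. Below, the data are synthetic and the "gain" is taken as a simple variance ratio for illustration — the paper's fixed-collision correlated-sampling dose scores are not reproduced:

```python
import random

# Sketch: bootstrap the shortest 95% confidence interval for an
# efficiency-gain-like ratio statistic (here var(x)/var(y), illustrative).

def shortest_interval(sorted_vals, coverage=0.95):
    """Shortest contiguous interval containing `coverage` of the values."""
    n = len(sorted_vals)
    k = max(1, int(coverage * n))
    best = min(range(n - k + 1),
               key=lambda i: sorted_vals[i + k - 1] - sorted_vals[i])
    return sorted_vals[best], sorted_vals[best + k - 1]

def bootstrap_gain_ci(x, y, n_boot=2000, seed=42):
    """Resample pairs of score lists and bootstrap the variance ratio."""
    rng = random.Random(seed)
    n = len(x)
    def var(v):
        m = sum(v) / len(v)
        return sum((t - m) ** 2 for t in v) / (len(v) - 1)
    gains = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        gains.append(var([x[i] for i in idx]) / var([y[i] for i in idx]))
    return shortest_interval(sorted(gains))

rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(200)]   # "conventional" scores
y = [rng.gauss(0.0, 0.5) for _ in range(200)]   # "correlated sampling" scores
lo, hi = bootstrap_gain_ci(x, y)
```

The shortest-interval search (rather than the usual equal-tail percentiles) matters precisely when the bootstrap distribution is skewed, which is the non-normal situation the abstract highlights.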
Directory of Open Access Journals (Sweden)
Evgeni V Nikolaev
2016-04-01
Full Text Available Synthetic constructs in biotechnology, biocomputing, and modern gene therapy interventions are often based on plasmids or transfected circuits which implement some form of "on-off" switch. For example, the expression of a protein used for therapeutic purposes might be triggered by the recognition of a specific combination of inducers (e.g., antigens), and memory of this event should be maintained across a cell population until a specific stimulus commands a coordinated shut-off. The robustness of such a design is hampered by molecular ("intrinsic") or environmental ("extrinsic") noise, which may lead to spontaneous changes of state in a subset of the population and is reflected in the bimodality of protein expression, as measured for example using flow cytometry. In this context, a "majority-vote" correction circuit, which brings deviant cells back into the required state, is highly desirable, and quorum-sensing has been suggested as a way for cells to broadcast their states to the population as a whole so as to facilitate consensus. In this paper, we propose what we believe is the first such design that has mathematically guaranteed properties of stability and auto-correction under certain conditions. Our approach is guided by concepts and theory from the field of "monotone" dynamical systems developed by M. Hirsch, H. Smith, and others. We benchmark our design by comparing it to an existing design which has been the subject of experimental and theoretical studies, illustrating its superiority in stability and self-correction of synchronization errors. Our stability analysis, based on dynamical systems theory, guarantees global convergence to steady states, ruling out unpredictable ("chaotic") behaviors and even sustained oscillations in the limit of convergence. These results are valid no matter what the values of the parameters are, and are based only on the wiring diagram. The theory is complemented by extensive computational bifurcation analysis.
International Nuclear Information System (INIS)
Nguyen Buong.
1992-11-01
The purpose of this paper is to investigate convergence rates for an operator version of Tikhonov regularization, constructed by a dual mapping, for nonlinear ill-posed problems involving monotone operators in real reflexive Banach spaces. The obtained results are considered in combination with finite-dimensional approximations of the space. An example is considered for illustration. (author). 15 refs
DEFF Research Database (Denmark)
Beausoleil, Claire; Ormsby, Jean-Nicolas; Gies, Andreas
2013-01-01
A workshop was held in Berlin September 12–14th 2012 to assess the state of the science of the data supporting low dose effects and non-monotonic dose responses (“low dose hypothesis”) for chemicals with endocrine activity (endocrine disrupting chemicals or EDCs). This workshop consisted of lectu...
DEFF Research Database (Denmark)
Qi, Feng; Berg, Christian
2013-01-01
In the paper, the authors find necessary and sufficient conditions for a difference between the exponential function αe^{β/t}, α, β > 0, and the trigamma function ψ′(t) to be completely monotonic on (0,∞). While proving the complete monotonicity, the authors discover some properties related to the fi...
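The difference in question is easy to probe numerically. The sketch below evaluates f(t) = αe^{β/t} − ψ′(t) for one illustrative parameter pair (α = β = 1) and checks that it stays positive on a grid; the trigamma function is computed from its series Σ_{k≥0} 1/(t+k)² with an Euler-Maclaurin tail. This is a spot check only, not the paper's necessary-and-sufficient condition:

```python
import math

# Spot check positivity of f(t) = alpha*exp(beta/t) - psi'(t) on (0, 20].

def trigamma(t, terms=1000):
    """psi'(t) = sum_{k>=0} 1/(t+k)^2, truncated with an Euler-Maclaurin tail."""
    s = sum(1.0 / (t + k) ** 2 for k in range(terms))
    n = t + terms
    return s + 1.0 / n + 1.0 / (2.0 * n * n)   # tail: integral + half-term

def f(t, alpha=1.0, beta=1.0):
    return alpha * math.exp(beta / t) - trigamma(t)

grid = [0.1 * k for k in range(1, 201)]        # t in (0, 20]
all_positive = all(f(t) > 0.0 for t in grid)
```

Positivity on a grid is of course much weaker than complete monotonicity (which requires alternating signs of all derivatives), but it gives a quick sanity check of the objects in the abstract.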
Czech Academy of Sciences Publication Activity Database
Šremr, Jiří
2007-01-01
Roč. 132, č. 3 (2007), s. 263-295 ISSN 0862-7959 R&D Projects: GA ČR GP201/04/P183 Institutional research plan: CEZ:AV0Z10190503 Keywords : system of functional differential equations with monotone operators * initial value problem * unique solvability Subject RIV: BA - General Mathematics
International Nuclear Information System (INIS)
Chidume, C.E.
1989-06-01
The fixed points of set-valued operators satisfying a condition of monotonicity type in real Banach spaces with uniformly convex dual spaces are approximated by recursive averaging processes. Applications to important classes of linear and nonlinear operator equations are also presented. (author). 33 refs
Bruns, M.; Keyson, D.V.; Jabon, M.E.; Hummels, C.C.M.; Hekkert, P.P.M.; Bailenson, J.N.
2013-01-01
Control errors often occur in repetitive and monotonous tasks, such as manual assembly tasks. Much research has been done in the area of human error identification; however, most existing systems focus solely on the prediction of errors, not on increasing worker accuracy. The current study examines
Zhao, Shu-Xia
2018-03-01
In this work, the behavior of electron temperature versus power in an argon inductively coupled plasma is investigated with a fluid model. The model properly reproduces the non-monotonic variation of temperature with power observed in experiments. By means of a novel electron mean energy equation, proposed for the first time in this article, this electron temperature behavior is interpreted. Over the whole considered power range, the skin effect of the radio frequency electric field results in a localized deposited power density, responsible for an increase of electron temperature with power through one parameter, defined as the power density divided by the electron density. At low powers, the rate fraction of multistep and Penning ionizations of metastables, which consume electron energy twice, increases significantly with power; this dominates over the skin effect and consequently leads to the decrease of temperature with power. In the middle power regime, a transition region of temperature results from the competition between the ionizing effect of metastables and the skin effect of the electric field. The power at which the temperature changes its trend moves to the low-power end as the pressure increases, due to the lack of metastables. The non-monotonic curve of temperature is asymmetric for a short chamber, due to the weak role of the skin effect in increasing the temperature, and tends to become symmetric when the chamber is axially prolonged. The validity of the fluid model in this prediction is assessed, and a role for neutral gas heating is conjectured. This finding is helpful for understanding the different trends of temperature with power reported in the literature.
Mechanisms of plastic deformation (cyclic and monotonous) of Inconel X750
International Nuclear Information System (INIS)
Randrianarivony, H.
1992-01-01
Plastic deformation mechanisms under cyclic or monotonic loading are analysed as a function of the initial microstructure of Inconel X750. Two heat-treated Inconels (the first treated at 1366 K for one hour, air cooled, aged at 977 K for 20 hours, and air cooled; the second aged at 1158 K for 24 hours, air cooled, aged at 977 K for 20 hours, and air cooled) are characterized respectively by a fine and uniform precipitation of the γ' phase (approximate formula: Ni₃(Al,Ti)) and by a bimodal distribution of γ' precipitates. In both alloys, dislocation pairs (characteristic of shearing by antiphase-boundary creation) are observed, and the mechanism of crossing the γ' precipitates by creation of superlattice stacking faults is the same. However, glissile dislocation loops are less numerous than dislocation pairs in the first alloy, giving a denser band structure for this alloy (dislocation loops are always observed around γ' precipitates). Some explanations of the behaviour of Inconel X750 in a PWR environment are given. (A.B.). refs., figs., tabs
Inelastic behavior of cold-formed braced walls under monotonic and cyclic loading
Gerami, Mohsen; Lotfi, Mohsen; Nejat, Roya
2015-06-01
The ever-increasing need for housing generated the search for new and innovative building methods to increase speed and efficiency and enhance quality. One method is the use of light thin steel profiles as load-bearing elements, with different solutions for interior and exterior cladding. Due to the increase in CFS construction in low-rise residential structures in the modern construction industry, there is an increased demand for inelastic performance analysis of CFS walls. In this study, the nonlinear behavior of cold-formed steel frames with various bracing arrangements, including cross, chevron and K-shaped straps, was evaluated under cyclic and monotonic loading using nonlinear finite element analysis. In total, 68 frames with different bracing arrangements and different dimension ratios were studied. Seismic parameters, including the resistance reduction factor, ductility and the force reduction factor due to ductility, were evaluated for all samples, and the seismic response modification factor was calculated for these systems. It was concluded that the highest response modification factor, with a value of 3.14, is obtained for walls with bilateral cross bracing systems. In all samples, shear strength increased as the distance between straps increased, and the shear strength of the wall with a bilateral bracing system was 60 % greater than that with a lateral bracing system.
Explosive percolation on directed networks due to monotonic flow of activity
Waagen, Alex; D'Souza, Raissa M.; Lu, Tsai-Ching
2017-07-01
An important class of real-world networks has directed edges, and in addition, some rank ordering on the nodes, for instance the popularity of users in online social networks. Yet, nearly all research related to explosive percolation has been restricted to undirected networks. Furthermore, information on such rank-ordered networks typically flows from higher-ranked to lower-ranked individuals, such as follower relations, replies, and retweets on Twitter. Here we introduce a simple percolation process on an ordered, directed network where edges are added monotonically with respect to the rank ordering. We show with a numerical approach that the emergence of a dominant strongly connected component appears to be discontinuous. Large-scale connectivity occurs at very high density compared with most percolation processes, and this holds not just for the strongly connected component structure but for the weakly connected component structure as well. We present analysis with branching processes, which explains this unusual behavior and gives basic intuition for the underlying mechanisms. We also show that before the emergence of a dominant strongly connected component, multiple giant strongly connected components may exist simultaneously. By adding a competitive percolation rule with a small bias to link users of similar rank, we show this leads to the formation of two distinct components, one of high-ranked users, and one of low-ranked users, with little flow between the two components.
A cascadic monotonic time-discretized algorithm for finite-level quantum control computation
Ditz, P.; Borzì, A.
2008-03-01
A computer package (CNMS) is presented aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in its most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS Catalogue identifier: ADEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 770 No. of bytes in distributed program, including test data, etc.: 7098 Distribution format: tar.gz Programming language: MATLAB 6 Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM Operating system: Microsoft Windows XP Word size: 32 Classification: 4.9 Nature of problem: Quantum control Solution method: Iterative Running time: 60-600 sec
Matsubara, Eri; Tsunetsugu, Yuko; Ohira, Tatsuro; Sugiyama, Masaki
2017-01-21
Employee problems arising from mental illnesses have steadily increased and become a serious social problem in recent years. Wood is a widely available plant material, and knowledge of the psychophysiological effects of inhaling woody volatile compounds has grown considerably. In this study, we established an experimental method to evaluate the effects of Japanese cedar wood essential oil on subjects performing monotonous work. Two experimental conditions, one with and one without diffusion of the essential oil, were prepared. Salivary stress markers were determined during and after a calculation task, followed by questionnaires for subjective odor assessment. We found that inhalation of air containing the volatile compounds of Japanese cedar wood essential oil increased the secretion of dehydroepiandrosterone sulfate (DHEA-s). Slight differences in the subjective assessment of the odor of the experiment rooms were observed. The results of the present study indicate that the volatile compounds of Japanese cedar wood essential oil affect the endocrine regulatory mechanism to facilitate stress responses. Thus, we suggest that this essential oil can improve employees' mental health.
International Nuclear Information System (INIS)
Ellis, J.R.; Robinson, D.N.; Pugh, C.E.
1978-01-01
This paper addresses the elastic-plastic behavior of type 316 stainless steel, one of the major structural alloys used in liquid-metal fast breeder reactor components. The study was part of a continuing program to develop a structural design technology applicable to advanced reactor systems. Here, the behavior of solution-annealed material was examined through biaxial stress experiments conducted at room temperature under radial loadings (√3·τ = σ) in tension-torsion stress space. The effects of both stress-limited monotonic loading and strain-limited cyclic loading on the size, shape and position of yield loci were determined, using a small-offset-strain (10 microstrain) definition of yield. In the present work, the aim was to determine the extent to which the constitutive laws previously recommended for type 304 stainless steel are applicable to type 316 stainless steel. It was concluded that, for the conditions investigated, the inelastic behavior of the two materials is qualitatively similar. Specifically, the von Mises yield criterion provides a reasonable approximation of initial yield behavior, and the subsequent hardening behavior, at least under small-offset definitions of yield, is to first order kinematic in nature. (Auth.)
Response of skirted suction caissons to monotonic lateral loading in saturated medium sand
Li, Da-yong; Zhang, Yu-kun; Feng, Ling-yun; Guo, Yan-xue
2014-08-01
Monotonic lateral load model tests were carried out on steel skirted suction caissons embedded in saturated medium sand to study their bearing capacity. A three-dimensional continuum finite element model was developed with Z_SOIL software and calibrated against the experimental results. Soil deformation and earth pressures on skirted caissons were investigated using the finite element model to extend the model tests. The results show that the skirt significantly increases the lateral capacity and limits the deflection compared with regular suction caissons without skirts at the same load level, making skirted caissons especially suitable for offshore wind turbines. In addition, appropriate determination of the rotation center plays a crucial role in calculating the lateral capacity with the analytical method. It was also found that the rotation center depends on the dimensions of the skirted suction caisson and on the loading process: the rotation center moves upwards with increasing skirt width and length, moves downwards with increasing load, and remains constant once all the sand along the caisson wall yields. This behavior is complex enough that the rotation center's position cannot simply be fixed at a specified fraction of the caisson length, as is commonly done for regular suction caissons.
Han, Hye Joo; Schweickert, Richard; Xi, Zhuangzhuang; Viau-Quesnel, Charles
2016-04-01
For five individuals, a social network was constructed from a series of his or her dreams. Three important network measures were calculated for each network: transitivity, assortativity, and giant component proportion. These were monotonically related; over the five networks as transitivity increased, assortativity increased and giant component proportion decreased. The relations indicate that characters appear in dreams systematically. Systematicity likely arises from the dreamer's memory of people and their relations, which is from the dreamer's cognitive social network. But the dream social network is not a copy of the cognitive social network. Waking life social networks tend to have positive assortativity; that is, people tend to be connected to others with similar connectivity. Instead, in our sample of dream social networks assortativity is more often negative or near 0, as in online social networks. We show that if characters appear via a random walk, negative assortativity can result, particularly if the random walk is biased as suggested by remote associations. Copyright © 2015 Cognitive Science Society, Inc.
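The three measures named above can be computed directly. Below is a minimal, stdlib-only sketch on a toy undirected graph (illustrative only, not the dream-report data; `network_measures` is a name introduced here):

```python
from collections import defaultdict, deque
from itertools import combinations

def network_measures(edges, n_nodes):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Transitivity: fraction of connected triples that close into triangles.
    closed = sum(1 for u in adj for v, w in combinations(adj[u], 2) if w in adj[v])
    triples = sum(len(adj[u]) * (len(adj[u]) - 1) // 2 for u in adj)
    transitivity = closed / triples if triples else 0.0
    # Degree assortativity: Pearson correlation of end-point degrees,
    # counting each edge in both directions.
    xs, ys = [], []
    for u, v in edges:
        du, dv = len(adj[u]), len(adj[v])
        xs += [du, dv]
        ys += [dv, du]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    denom = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    assortativity = cov / denom if denom else 0.0
    # Giant component proportion: largest connected component over all nodes.
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in comp:
                    comp.add(y)
                    queue.append(y)
        seen |= comp
        best = max(best, len(comp))
    return transitivity, assortativity, best / n_nodes

# Toy graph: a triangle a-b-c with pendant d, plus a separate pair e-f.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("a", "d"), ("e", "f")]
t, r, g = network_measures(edges, 6)
print(t, r, g)  # transitivity 0.6, giant-component proportion 4/6
```

On real dream networks these three values would then be compared across dreamers, as in the study.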
Monotonic and cyclic bond behavior of confined concrete using NiTiNb SMA wires
International Nuclear Information System (INIS)
Choi, Eunsoo; Chung, Young-Soo; Kim, Yeon-Wook; Kim, Joo-Woo
2011-01-01
This study conducts bond tests of reinforced concrete confined by shape memory alloy (SMA) wires, which provide active and passive confinement of the concrete. The study uses NiTiNb SMA, which usually shows wide temperature hysteresis, a useful property for exploiting the shape memory effect. The aims of this study are to investigate the behavior of SMA wire under residual stress and the performance of SMA wire jackets in improving bond behavior through monotonic-loading tests. This study also conducts cyclic bond tests and analyzes cyclic bond behavior. The use of SMA wire jackets transfers the bond failure from splitting to pull-out mode and satisfactorily increases bond strength and ductile behavior. The active confinement provided by the SMA plays a major role in providing external pressure on the concrete because the developed passive confinement is much smaller than the active confinement. For cyclic behavior, slip and circumferential strain are recovered more with larger bond stress. This recovery of slip and circumferential strain is mainly due to the external pressure of the SMA wires, since cracked concrete cannot provide any elastic recovery.
Creep crack growth by grain boundary cavitation under monotonic and cyclic loading
Wen, Jian-Feng; Srivastava, Ankit; Benzerga, Amine; Tu, Shan-Tung; Needleman, Alan
2017-11-01
Plane strain finite deformation finite element calculations of mode I crack growth under small scale creep conditions are carried out. Attention is confined to isothermal conditions and two time histories of the applied stress intensity factor: (i) a monotonic increase to a plateau value subsequently held fixed; and (ii) a cyclic time variation. The crack growth calculations are based on a micromechanics constitutive relation that couples creep deformation and damage due to grain boundary cavitation. Grain boundary cavitation, with cavity growth due to both creep and diffusion, is taken as the sole failure mechanism contributing to crack growth. The influence on the crack growth rate of loading history parameters, such as the magnitude of the applied stress intensity factor, the ratio of the applied minimum to maximum stress intensity factors, the loading rate, the hold time and the cyclic loading frequency, is explored. The crack growth rate under cyclic loading conditions is found to be greater than under monotonic creep loading with the plateau applied stress intensity factor equal to its maximum value under cyclic loading conditions. Several features of the crack growth behavior observed in creep-fatigue tests naturally emerge; for example, a Paris law type relation is obtained for cyclic loading.
International Nuclear Information System (INIS)
Dirras, G.; Bouvier, S.; Gubicza, J.; Hasni, B.; Szilagyi, T.
2009-01-01
The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε_VM = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.
Non-monotonic dose dependence of the Ge- and Ti-centres in quartz
International Nuclear Information System (INIS)
Woda, C.; Wagner, G.A.
2007-01-01
The dose response of the Ge- and Ti-centres in quartz is studied over a large dose range. After an initial signal increase in the low dose range, both defects show a pronounced decrease in signal intensities for high doses. The model by Euler and Kahan [1987. Radiation effects and anelastic loss in germanium-doped quartz. Phys. Rev. B 35 (9), 4351-4359], in which the signal drop is explained by an enhanced trapping of holes at the electron trapping site, is critically discussed. A generalization of the model is then developed, following similar considerations by Lawless et al. [2005. A model for non-monotonic dose dependence of thermoluminescence (TL). J. Phys. Condens. Matter 17, 737-753], who explained a signal drop in TL by an enhanced recombination rate with electrons at the recombination centre. Finally, an alternative model for the signal decay is given, based on the competition between single and double electron capture at the electron trapping site. From the critical discussion of the different models it is concluded that the double electron capture mechanism is the most probable effect for the dose response
Non-monotonic reorganization of brain networks with Alzheimer’s disease progression
Directory of Open Access Journals (Sweden)
Hyoungkyu eKim
2015-06-01
Full Text Available Background: Identification of stage-specific changes in the brain network of patients with Alzheimer's disease (AD) is critical for rationally designed therapeutics that delay the progression of the disease. However, pathological neural processes and their resulting changes in brain network topology with disease progression are not clearly known. Methods: The current study was designed to investigate the alterations in network topology of resting state fMRI among patients in three different clinical dementia rating (CDR) groups (i.e., CDR = 0.5, 1, 2), an amnestic mild cognitive impairment (aMCI) group, and an age-matched healthy subject group. We constructed cost networks from these 5 groups and analyzed their network properties using graph theoretical measures. Results: The topological properties of AD brain networks differed in a non-monotonic, stage-specific manner. Interestingly, local and global efficiency and betweenness of the network were higher in the aMCI and AD (CDR 1) groups than in the prior-stage groups. The number, location, and structure of rich clubs changed dynamically as the disease progressed. Conclusions: The alterations in network topology of the brain are quite dynamic with AD progression, and these dynamic changes in network patterns should be considered meticulously for efficient therapeutic interventions of AD.
Directory of Open Access Journals (Sweden)
Vedenyapin Aleksandr Dmitrievich
2015-11-01
Full Text Available This paper constructs the distribution function using the Bernoulli scheme and also corrects some mistakes made in the article [2]. Namely, a function built in [2] need not be monotonous, and some formulas need to be adjusted. The idea of the construction, as in [2], is based on the Cox-Ross-Rubinstein "binary market" model. The essence of the model is to divide time into N steps, assuming that the price of an asset at each step can move either up by a certain value with probability p, or down by some certain value with probability q = 1 - p. Prices at step N can take only a finite number of values. In the Cox-Ross-Rubinstein model, "success" or "failure" was a price change by some fixed value. Here, as a "success" or "failure" at every step, we consider whether the change of the index value falls in the section [r, S] or in the interval [I, r). Further, a function P(r) is introduced, which at any step gives the probability of "success". The maximum index increase over the period of time [T, 2T] equals nS, and the maximum possible reduction equals nI. Then let x ∈ [nI, nS]. This segment reflects every possible total variation obtainable at the end of the period [T, 2T]. The inequality k ≥ (x - nI)/(S - I) gives the minimum number of successes needed for the total change to lie in the section [x, nS] if there were n - k reductions of the index value to I. The function r(x, k_min), defined on the interval (nI, nS], then guarantees that the total index change lies in the section [x, nS] if the success interval is [r(x, k_min), S] and the number of successes satisfies the inequality. The probability of k "successes" and n - k "failures" is calculated according to the Bernoulli formula, where the probability of "success" is determined by the function P(r), and r is determined
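The tail probability built from the inequality above can be sketched with the Bernoulli (binomial) formula. A minimal stdlib-only sketch with hypothetical parameter values, taking the per-step "success" probability P(r) as a fixed p for illustration:

```python
from math import ceil, comb

def prob_total_change_at_least(x, n, S, I, p):
    """Probability of at least k_min successes in n Bernoulli steps,
    where k_min = ceil((x - n*I) / (S - I)) is the minimum number of
    successes needed for the total change to lie in [x, n*S]."""
    k_min = max(0, ceil((x - n * I) / (S - I)))
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Example: n = 4 steps, per-step range [I, S] = [-1, 1], target total >= 2
# requires k_min = ceil((2 + 4) / 2) = 3 successes.
print(prob_total_change_at_least(2, 4, 1, -1, 0.5))  # 0.3125
```

With x = nI the bound is vacuous (k_min = 0) and the probability is 1, as expected.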
Wang, Raorao; Lu, Chenglin; Arola, Dwayne; Zhang, Dongsheng
2013-08-01
The aim of this study was to compare failure modes and fracture strength of ceramic structures using a combination of experimental and numerical methods. Twelve specimens with flat layer structures were fabricated from two types of ceramic systems (IPS e.max ceram/e.max press-CP and Vita VM9/Lava zirconia-VZ) and subjected to monotonic load to fracture with a tungsten carbide sphere. Digital image correlation (DIC) and fractography technology were used to analyze fracture behaviors of specimens. Numerical simulation was also applied to analyze the stress distribution in these two types of dental ceramics. Quasi-plastic damage occurred beneath the indenter in porcelain in all cases. In general, the fracture strength of VZ specimens was greater than that of CP specimens. The crack initiation loads of VZ and CP were determined as 958 ± 50 N and 724 ± 36 N, respectively. Cracks were induced by plastic damage and were subsequently driven by tensile stress at the elastic/plastic boundary and extended downward toward to the veneer/core interface from the observation of DIC at the specimen surface. Cracks penetrated into e.max press core, which led to a serious bulk fracture in CP crowns, while in VZ specimens, cracks were deflected and extended along the porcelain/zirconia core interface without penetration into the zirconia core. The rupture loads for VZ and CP ceramics were determined as 1150 ± 170 N and 857 ± 66 N, respectively. Quasi-plastic deformation (damage) is responsible for crack initiation within porcelain in both types of crowns. Due to the intrinsic mechanical properties, the fracture behaviors of these two types of ceramics are different. The zirconia core with high strength and high elastic modulus has better resistance to fracture than the e.max core. © 2013 by the American College of Prosthodontists.
Directory of Open Access Journals (Sweden)
Lara Li Hesse
2016-08-01
Full Text Available The occurrence of tinnitus can be linked to hearing loss in the majority of cases, but there is nevertheless a large degree of unexplained heterogeneity in the relation between hearing loss and tinnitus. Part of the problem might be that hearing loss is usually quantified in terms of increased hearing thresholds, which provides only limited information about the underlying cochlear damage. Moreover, noise exposure that does not cause hearing threshold loss can still lead to hidden hearing loss (HHL), i.e. functional deafferentation of auditory nerve fibres (ANFs) through loss of synaptic ribbons in inner hair cells. Whilst it is known that increased hearing thresholds can trigger increases in spontaneous neural activity in the central auditory system, i.e. a putative neural correlate of tinnitus, the central effects of HHL have not yet been investigated. Here, we exposed mice to octave-band noise at 100 and 105 dB SPL, to generate HHL and permanent increases of hearing thresholds, respectively. Deafferentation of ANFs was confirmed through measurement of auditory brainstem responses and cochlear immunohistochemistry. Acute extracellular recordings from the auditory midbrain (inferior colliculus) demonstrated increases in spontaneous neuronal activity (a putative neural correlate of tinnitus) in both groups. Surprisingly, the increase in spontaneous activity was most pronounced in the mice with HHL, suggesting that the relation between hearing loss and neuronal hyperactivity might be more complex than currently understood. Our computational model indicated that these differences in neuronal hyperactivity could arise from different degrees of deafferentation of low-threshold ANFs in the two exposure groups. Our results demonstrate that HHL is sufficient to induce changes in central auditory processing, and they also indicate a non-monotonic relationship between cochlear damage and neuronal hyperactivity, suggesting an explanation for why tinnitus might
Directory of Open Access Journals (Sweden)
Hugo A. Rondón-Quintana
2012-12-01
Full Text Available The influence of compaction temperature on resistance under monotonic loading (Marshall) of Crumb-Rubber Modified (CRM) Hot-Mix Asphalt (HMA) was evaluated, with emphasis on application in Bogotá D.C. (Colombia). In this city the compaction temperature of HMA mixtures decreases, compared to the optimum, by about 30°C. Two asphalt cements (AC 60-70 and AC 80-100) were modified, and two particle size distribution curves were used. The compaction temperatures used were 120, 130, 140 and 150°C. The decrease of the compaction temperature produces a small decrease in resistance under monotonic loading of the modified mixtures tested. Mixtures without CRM undergo a linear decrease in resistance of up to 34%.
Directory of Open Access Journals (Sweden)
Elizabeth L. Sandvik
2015-11-01
Full Text Available Staphylococcus aureus is a notorious pathogen with a propensity to cause chronic, non-healing wounds. Bacterial persisters have been implicated in the recalcitrance of S. aureus infections, and this motivated us to examine the persistence of S. aureus to ciprofloxacin, a quinolone antibiotic. Upon treatment of exponential phase S. aureus with ciprofloxacin, we observed that survival was a non-monotonic function of ciprofloxacin concentration. Maximal killing occurred at 1 µg/mL ciprofloxacin, which corresponded to survival that was up to ~40-fold lower than that obtained with concentrations ≥ 5 µg/mL. Investigation of this phenomenon revealed that the non-monotonic response was associated with prophage induction, which facilitated killing of S. aureus persisters. Elimination of prophage induction with tetracycline was found to prevent cell lysis and persister killing. We anticipate that these findings may be useful for the design of quinolone treatments.
Cvrčková, Fatima; Luštinec, Jiří; Žárský, Viktor
2015-01-01
We usually expect the dose-response curves of biological responses to quantifiable stimuli to be simple, either monotonic or exhibiting a single maximum or minimum. Deviations are often viewed as experimental noise. However, detailed measurements in plant primary tissue cultures (stem pith explants of kale and tobacco) exposed to varying doses of sucrose, cytokinins (BA or kinetin) or auxins (IAA or NAA) revealed that growth and several biochemical parameters exhibit multiple reproducible, statistically significant maxima over a wide range of exogenous substance concentrations. This results in complex, non-monotonic dose-response curves, reminiscent of previous reports of analogous observations in both metazoan and plant systems responding to diverse pharmacological treatments. These findings suggest the existence of a hitherto neglected class of biological phenomena resulting in dose-response curves exhibiting periodic patterns of maxima and minima, whose causes remain so far uncharacterized, partly due to insufficient sampling frequency used in many studies.
Directory of Open Access Journals (Sweden)
Jieming Zhang
2013-01-01
Full Text Available We establish some sufficient conditions for the existence and uniqueness of positive solutions to a class of initial value problem for impulsive fractional differential equations involving the Caputo fractional derivative. Our analysis relies on a fixed point theorem for mixed monotone operators. Our result can not only guarantee the existence of a unique positive solution but also be applied to construct an iterative scheme for approximating it. An example is given to illustrate our main result.
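The iterative scheme such fixed point theorems support can be illustrated with a toy mixed monotone operator (the operator `T` below is a hypothetical example chosen for the sketch, not the paper's impulsive fractional operator):

```python
# Hypothetical mixed monotone operator on [0, inf):
# nondecreasing in its first argument, nonincreasing in its second.
def T(x, y):
    return 1 + x / 4 + 1 / (2 * (1 + y))

# Monotone iterative scheme from an ordered starting pair u0 <= v0:
# u_n increases, v_n decreases, and both squeeze the unique fixed
# point x* satisfying x* = T(x*, x*).
u, v = 0.0, 10.0
for _ in range(200):
    u, v = T(u, v), T(v, u)

print(u, v)  # both sequences have met at the fixed point
```

The two coupled sequences bracket the solution at every step, which is exactly what makes such schemes usable for constructing approximations, as the abstract notes.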
Kerimov, M. K.
2016-07-01
This work continues the study of real zeros of first- and second-kind Bessel functions and Bessel general functions with real variables and orders begun in the first part of this paper (see M.K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014)). Some new results concerning such zeros are described and analyzed. Special attention is given to the monotonicity, convexity, and concavity of zeros with respect to their ranks and other parameters.
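The monotonic behaviour of zero spacings is easy to probe numerically. A stdlib-only sketch (not the paper's method) that locates the first four positive zeros of J₀ by bisection on its power series, using known rough bracketing intervals; for J₀ the consecutive-zero spacing increases monotonically toward π:

```python
def J0(x):
    # Power series of the Bessel function J0; adequate for |x| <= 12.
    term, s = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x * x) / (4 * m * m)
        s += term
    return s

def zero_in(a, b, f=J0, tol=1e-12):
    # Bisection; f must change sign on [a, b].
    fa = f(a)
    while b - a > tol:
        mid = (a + b) / 2
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return (a + b) / 2

# Rough brackets around the first four positive zeros of J0.
zeros = [zero_in(a, b) for a, b in [(2, 3), (5, 6), (8, 9), (11, 12)]]
gaps = [z2 - z1 for z1, z2 in zip(zeros, zeros[1:])]
print(zeros)  # first zero near 2.40483
print(gaps)   # spacings increase toward pi
```

Checks of this kind are a useful complement to the analytic monotonicity and convexity results the paper describes.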
International Nuclear Information System (INIS)
Zhavrin, Yu.I.; Kosov, V.N.; Kul'zhanov, D.U.; Karataev, K.K.
2000-01-01
The presence of two types of instability of the mechanical equilibrium of a mixture is shown experimentally for isothermal diffusion in a multicomponent system with a zero density gradient. It is proved theoretically that when the partial Rayleigh numbers R₁ and R₂ have different signs, there are two regions of monotonic instability. The experimental data confirm the presence of these regions and are satisfactorily described by the presented theory. (author)
Directory of Open Access Journals (Sweden)
N. Kani
2017-05-01
Full Text Available The goal of this paper is to investigate the short time-scale, thermally-induced probability of magnetization reversal for a nanomagnet characterized by a biaxial magnetic anisotropy. For the first time, we clearly show that for a given energy barrier of the nanomagnet, the magnetization reversal probability of a biaxial nanomagnet exhibits a non-monotonic dependence on its saturation magnetization. Specifically, there are two reasons for this non-monotonic behavior in rectangular thin-film nanomagnets that have a large perpendicular magnetic anisotropy. First, a large perpendicular anisotropy lowers the precessional period of the magnetization, making it more likely to precess across the x̂ = 0 plane if the magnetization energy exceeds the energy barrier. Second, the thermal-field torque at a particular energy increases as the magnitude of the perpendicular anisotropy increases during the magnetization precession. This non-monotonic behavior is most noticeable when analyzing magnetization reversals on time-scales up to several tens of ns. In light of the several proposals of spintronic devices that require data retention on time-scales up to tens of ns, understanding the probability of magnetization reversal on short time-scales is important. As such, the results presented in this paper will be helpful in quantifying the reliability and noise sensitivity of spintronic devices in which thermal noise is inevitably present.
Zhang, Meng; Sun, Chen-Nan; Zhang, Xiang; Goh, Phoi Chin; Wei, Jun; Li, Hua; Hardacre, David
2018-03-01
The laser powder bed fusion (L-PBF) technique builds parts with higher static strength than the conventional manufacturing processes through the formation of ultrafine grains. However, its fatigue endurance strength σ_f does not match the increased monotonic tensile strength σ_b. This work examines the monotonic and fatigue properties of as-built and heat-treated L-PBF stainless steel 316L. It was found that the general linear relation σ_f = mσ_b for describing conventional ferrous materials is not applicable to L-PBF parts because of the influence of porosity. Instead, the ductility parameter correlated linearly with fatigue strength and was proposed as the new fatigue assessment criterion for porous L-PBF parts. Annealed parts conformed to the strength-ductility trade-off. Fatigue resistance was reduced at short lives, but the effect was partially offset by the higher ductility, such that compared with an as-built part of equivalent monotonic strength, the heat-treated parts were more fatigue resistant.
Directory of Open Access Journals (Sweden)
Taylor Mac Intyer Fonseca Junior
2013-12-01
Full Text Available This work evaluates seven estimation methods of fatigue properties applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimations obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior during monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.
Bias in regression coefficient estimates upon different treatments of ...
African Journals Online (AJOL)
MS and PW consistently overestimated the population parameter; EM and RI, on the other hand, tended to consistently underestimate it under the non-monotonic pattern. Keywords: missing data, bias, regression, percent missing, non-normality, missing pattern. East African Journal of Statistics Vol.
The electronic structure of normal metal-superconductor bilayers
Energy Technology Data Exchange (ETDEWEB)
Halterman, Klaus; Elson, J Merle [Sensor and Signal Sciences Division, Naval Air Warfare Center, China Lake, CA 93355 (United States)
2003-09-03
We study the electronic properties of ballistic thin normal metal-bulk superconductor heterojunctions by solving the Bogoliubov-de Gennes equations in the quasiclassical and microscopic 'exact' regimes. In particular, the significance of the proximity effect is examined through a series of self-consistent calculations of the space-dependent pair potential Δ(r). It is found that self-consistency cannot be neglected for normal metal layer widths smaller than the superconducting coherence length ξ_0, revealing its importance through discernible features in the subgap density of states. Furthermore, the exact self-consistent treatment yields a proximity-induced gap in the normal metal spectrum, which vanishes monotonically when the normal metal length exceeds ξ_0. Through a careful analysis of the excitation spectra, we find that quasiparticle trajectories with wavevectors oriented mainly along the interface play a critical role in the destruction of the energy gap.
Semiparametric approach for non-monotone missing covariates in a parametric regression model
Sinha, Samiran; Saha, Krishna K.; Wang, Suojin
2014-01-01
mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation
Normal Pressure Hydrocephalus (NPH)
Normal pressure hydrocephalus is a brain disorder ... Normal pressure hydrocephalus occurs when excess cerebrospinal fluid ...
Analysis of monotonic greening and browning trends from global NDVI time-series
Jong, de R.; Bruin, de S.; Wit, de A.J.W.; Schaepman, M.E.; Dent, D.L.
2011-01-01
Remotely sensed vegetation indices are widely used to detect greening and browning trends; especially the global coverage of time-series normalized difference vegetation index (NDVI) data which are available from 1981. Seasonality and serial auto-correlation in the data have previously been dealt
Smith, D. P.; Kvitek, R. G.; Ross, E.; Iampietro, P.; Paull, C. K.; Sandersfeld, M.
2010-12-01
The head of Monterey submarine canyon has been surveyed with high-precision multibeam sonar at least once each year since September 2002. This poster provides a summary of changes between September 2002 and September 2008. Data were collected with a variety of Reson multibeam sonar heads and logged with an ISIS data acquisition system. Vessel attitude was corrected using an Applanix POS MV equipped with an auxiliary C-Nav 2050 GPS receiver. Data were processed, filtered, and cleaned in Caris HIPS. Depth changes for various time spans were determined through raster subtraction of pairs of 3-m resolution bathymetric grids in ArcMap. The depth change analyses focused on the canyon floor, except where a landslide occurred on a wall and where obvious gullying near the headwall had occurred during the time of our study. Canyon walls were generally excluded from analysis. The analysis area was 1,414,240 square meters. The gross changes between 2002 and 2008 include net erosion of 2,300,000 m^3 +/- 800,000 m^3 of material from the canyon. The annualized rate of net sediment loss for this time frame agrees within an order of magnitude with our previously published estimates from earlier (shorter) time frames, so the erosion events seem to be moderate in magnitude and frequent, rather than infrequent and catastrophic. The greatest sediment loss appears to be from lateral erosion of channel-bounding terraces rather than deepening or scouring of the existing channel axis. A single landslide event that occurred in summer 2003 had an initial slide scar (void) volume of 71,000 m^3. The scar was observed to increase annually, and had grown to approximately 96,000 m^3 by 2008. The initial slide was too small to be tsunamigenic. In contrast to the monotonic canyon axis widening, the shoreward terminus of the canyon (canyon lip) appears to be in steady-state equilibrium with sediment supply entering the canyon from the littoral zone. The lip position, indicated by the clearly defined
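The raster-subtraction step described in this record can be sketched with synthetic grids (3-m cells as in the survey; the depth values below are illustrative, not the survey data):

```python
import numpy as np

# Hedged sketch: given two co-registered bathymetric grids (depths in m,
# 3 m cells), the depth-change raster is a cell-by-cell difference and the
# net volume change is its sum times the cell area.
cell = 3.0                                  # grid resolution, m
z_2002 = np.array([[10.0, 12.0], [11.0, 13.0]])
z_2008 = np.array([[10.5, 13.0], [11.0, 12.5]])

dz = z_2008 - z_2002                        # positive = deepening (erosion)
net_volume = dz.sum() * cell**2             # net volume change, m^3
print(net_volume)
```

The same cell-by-cell difference underlies the ArcMap raster subtraction described above; masking (e.g. excluding canyon walls) amounts to summing `dz` over a boolean index.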
Directory of Open Access Journals (Sweden)
Watcharaporn Cholamjiak
2009-01-01
Full Text Available We prove a weak convergence theorem of the modified Mann iteration process for a uniformly Lipschitzian and generalized asymptotically quasi-nonexpansive mapping in a uniformly convex Banach space. We also introduce two kinds of new monotone hybrid methods and obtain strong convergence theorems for a countably infinite family of uniformly Lipschitzian and generalized asymptotically quasi-nonexpansive mappings in a Hilbert space. The results improve and extend the corresponding ones announced by Kim and Xu (2006) and Nakajo and Takahashi (2003).
Normalization: A Preprocessing Stage
Patro, S. Gopal Krishna; Sahu, Kishore Kumar
2015-01-01
Normalization is a preprocessing stage for nearly any type of problem statement. It plays an especially important role in fields such as soft computing and cloud computing, where data are scaled down or scaled up to a common range before being used in further stages. There are many normalization techniques, notably Min-Max normalization, Z-score normalization, and Decimal scaling normalization. By referring to these normalization techniques we are ...
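The three techniques named in this abstract can be sketched in a few lines (a minimal illustration; the function names and the assumption of non-constant, nonzero input are ours, not the paper's):

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    # Rescale linearly to [lo, hi]; assumes x is not constant.
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def z_score(x):
    # Center to mean 0 and scale to unit standard deviation.
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    # Divide by 10^j, with j = ceil(log10(max|x|)), shifting the
    # decimal point so typical data falls in (-1, 1).
    x = np.asarray(x, dtype=float)
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / 10**j

data = np.array([120.0, 55.0, 980.0, 310.0])
print(min_max(data))          # values spanning [0, 1]
print(z_score(data))          # mean ~0, std ~1
print(decimal_scaling(data))  # decimal point shifted by 10^3
```

Min-Max bounds the output range exactly, Z-score preserves the shape of the distribution around zero, and decimal scaling only shifts the decimal point.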
Directory of Open Access Journals (Sweden)
S. Makireddi
2017-07-01
Full Text Available Graphene-polymer nanocomposite films show good piezoresistive behaviour, and it is reported that the sensitivity increases either with increased sheet resistance or with decreased number density of the graphene fillers. Little is known about this behaviour near the percolation region. In this study, graphene nanoplatelet (GNP)/poly(methyl methacrylate) (PMMA) flexible films are fabricated via a solution casting process at varying weight percent of GNP. The electrical and piezoresistive behaviour of these films is studied as a function of GNP concentration. The piezoresistive strain sensitivity of the films is measured by affixing the film to an aluminium specimen which is subjected to monotonic uniaxial tensile load. The change in resistance of the film with strain is monitored using a four-probe method. An electrical percolation threshold at 3 weight percent of GNP is observed. We report non-monotonic piezoresistive behaviour of these films as a function of GNP concentration. We observe an increase in gauge factor (GF) with the unstrained resistance of the films up to a critical resistance corresponding to the percolation threshold. Beyond this limit the GF decreases with unstrained resistance.
Yudhanto, Arief
2016-03-08
Impact copolymer polypropylene (IPP), a blend of isotactic polypropylene and ethylene-propylene rubber, and its continuous glass fiber composite form (glass fiber-reinforced impact polypropylene, GFIPP) are promising materials for impact-prone automotive structures. However, basic mechanical properties and corresponding damage of IPP and GFIPP at different rates, which are of keen interest in the material development stage and for numerical tool validation, have not been reported. Here, we applied monotonic and cyclic tensile loads to IPP and GFIPP at different strain rates (0.001/s, 0.01/s and 0.1/s) to study the mechanical properties, failure modes and damage parameters. We used monotonic and cyclic tests to obtain mechanical properties and define damage parameters, respectively. We also used scanning electron microscopy (SEM) images to visualize the failure modes. We found that IPP generally exhibits brittle fracture (with a relatively low failure strain of 2.69-3.74%) and viscoelastic-viscoplastic behavior. GFIPP [90]_8 is generally insensitive to strain rate due to localized damage initiation, mostly in the matrix phase, leading to catastrophic transverse failure. In contrast, GFIPP [±45]_s is sensitive to strain rate, as indicated by the change in shear modulus, shear strength and failure mode.
Nie, Xiaobing; Zheng, Wei Xing
2015-05-01
This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions, but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dunn, Naomi; Williamson, Ann
2012-01-01
Although monotony is widely recognised as being detrimental to performance, its occurrence and effects are not yet well understood. This is despite the fact that task-related characteristics, such as monotony and low task demand, have been shown to contribute to performance decrements over time. Participants completed one of two simulated train-driving scenarios. Both were highly monotonous and differed only in terms of the level of cognitive demand required (i.e. low demand or high demand). These results highlight the seriously detrimental effects of the combination of monotony and low task demands and clearly show that even a relatively minor increase in cognitive demand can mitigate adverse monotony-related effects on performance for extended periods of time. Monotony is an inherent characteristic of transport industries, including rail, aviation and road transport, and can have an adverse impact on safety, reliability and efficiency. This study highlights possible strategies for mitigating these adverse effects. Practitioner Summary: This study provides evidence for the importance of cognitive demand in mitigating monotony-related effects on performance. The results have clear implications for the rapid onset of performance deterioration in low-demand monotonous tasks and demonstrate that these detrimental performance effects can be overcome with simple solutions, such as making the task more cognitively engaging.
Iterative approximation of the solution of a monotone operator equation in certain Banach spaces
International Nuclear Information System (INIS)
Chidume, C.E.
1988-01-01
Let X = L_p (or l_p), p ≥ 2. The solution of the equation Ax = f, f ∈ X, is approximated in X by an iteration process in each of the following two cases: (i) A is a bounded linear mapping of X into itself which is also bounded below; and (ii) A is a nonlinear Lipschitz mapping of X into itself that satisfies ⟨Ax − Ay, j(x − y)⟩ ≥ m‖x − y‖^2 for some constant m > 0 and for all x, y in X, where j is the single-valued normalized duality mapping of X into X* (the dual space of X). A related result deals with the iterative approximation of the fixed point of a Lipschitz strictly pseudocontractive mapping in X. (author). 12 refs
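As an illustrative sketch only (not the paper's L_p scheme): in the Hilbert-space case p = 2, where the duality mapping j is the identity, a Lipschitz and strongly monotone A admits the classical damped Picard iteration, which converges for a sufficiently small step:

```python
import numpy as np

# Sketch of the Hilbert-space analogue: for A Lipschitz with constant L
# and strongly monotone with constant m, the scheme
#     x_{k+1} = x_k - t (A x_k - f),   0 < t < 2m / L^2,
# contracts to the unique solution of A x = f.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite => monotone
f = np.array([1.0, 2.0])

eigs = np.linalg.eigvalsh(A)
m, L = eigs.min(), eigs.max()            # strong-monotonicity / Lipschitz constants
t = m / L**2                             # safely inside (0, 2m/L^2)

x = np.zeros(2)
for _ in range(500):
    x = x - t * (A @ x - f)              # damped Picard step

print(np.allclose(A @ x, f, atol=1e-8))
```

For a symmetric positive definite matrix the monotonicity constant is the smallest eigenvalue and the Lipschitz constant the largest, which makes the admissible step range explicit.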
International Nuclear Information System (INIS)
Kanezu, Tsutomu; Nakano, Takehiro; Endo, Tatsumi
1986-01-01
The estimation methods for free deformations of reinforced concrete (RC) beams at elevated temperatures are investigated based on the concepts of the ACI and CEB/FIP formulas, which are widely used to estimate the flexural deformations of RC beams at normal temperature. Conclusions derived from the study are as follows. 1. Features of free deformations of RC beams. (i) The ratios of the average compressive strains on the top fiber of RC beams to the calculated ones at the cracked section show a tendency to drop once after cracking and then remain constant as temperature rises. (ii) Average compressive strains might be estimated by the average of the calculated strains at the perfect-bond section and the cracked section of the RC beam. (iii) The ratios of the average tensile strains at the level of the reinforcement to the calculated ones at the cracked section tend to approach 1.0 monotonically as temperature rises. The changes in the average tensile strains are caused by the deterioration of bond strength and by cracking due to the increasing difference in expansive strain between reinforcement and concrete. 2. Estimation methods for free deformations of RC beams. (i) To estimate the free deformations of RC beams at elevated temperatures, the basic concepts of the ACI and CEB/FIP formulas are adopted, which are widely used to estimate the M-φ relations of RC beams at normal temperature. (ii) It was confirmed that the suggested formulas are able to estimate the free deformations of RC beams, that is, the longitudinal deformation and the curvature, at elevated temperatures. (author)
le Graverend, J.-B.
2018-05-01
A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced somewhere in the creep life. Furthermore, a phenomenological model aimed at describing the evolution of the constrained lattice misfit during monotonic creep loading is also formulated. The response of the lattice-misfit-dependent plasticity-coupled damage model is compared with the experimental results obtained at 140 and 160 MPa on the first-generation Ni-based single-crystal superalloy MC2. The comparison reveals that the damage model is well suited at 160 MPa but less so at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.
Laser induced non-monotonic degradation in short-circuit current of triple-junction solar cells
Dou, Peng-Cheng; Feng, Guo-Bin; Zhang, Jian-Min; Song, Ming-Ying; Zhang, Zhen; Li, Yun-Peng; Shi, Yu-Bin
2018-06-01
In order to study the continuous wave (CW) laser radiation effects and mechanisms in GaInP/GaAs/Ge triple-junction solar cells (TJSCs), 1-on-1 mode irradiation experiments were carried out. It was found that the post-irradiation short-circuit current (ISC) of the TJSCs initially decreased and then increased with increasing irradiation laser power intensity. To explain this phenomenon, a theoretical model was established and then verified by post-damage tests and equivalent circuit simulations. The conclusion was drawn that laser-induced alterations in the surface reflection and shunt resistance were the main causes of the observed non-monotonic degradation of the ISC of the TJSCs.
Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices
Directory of Open Access Journals (Sweden)
Chandan Sharma
2017-08-01
Full Text Available This article reports an experimental approach to analyze the kink effect phenomenon which is usually observed during the GaN high electron mobility transistor (HEMT operation. De-trapping of charge carriers is one of the prominent reasons behind the kink effect. The commonly observed non-monotonic behavior of kink pattern is analyzed under two different device operating conditions and it is found that two different de-trapping mechanisms are responsible for a particular kink behavior. These different de-trapping mechanisms are investigated through a time delay analysis which shows the presence of traps with different time constants. Further voltage sweep and temperature analysis corroborates the finding that different de-trapping mechanisms play a role in kink behavior under different device operating conditions.
Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices
Sharma, Chandan; Laishram, Robert; Amit, Rawal, Dipendra Singh; Vinayak, Seema; Singh, Rajendra
2017-08-01
Directory of Open Access Journals (Sweden)
S. S. Chang
2014-05-01
Full Text Available Modulated high-frequency (HF) heating of the ionosphere provides a feasible means of artificially generating extremely low-frequency (ELF)/very low-frequency (VLF) whistler waves, which can leak into the inner magnetosphere and contribute to resonant interactions with high-energy electrons in the plasmasphere. By ray tracing the magnetospheric propagation of ELF/VLF emissions artificially generated at low invariant latitudes, we evaluate the relativistic electron resonant energies along the ray paths and show that propagating artificial ELF/VLF waves can resonate with electrons from ~100 keV to ~10 MeV. We further implement test particle simulations to investigate the effects of resonant scattering of energetic electrons due to triggered monotonic/single-frequency ELF/VLF waves. The results indicate that within the period of a resonance timescale, changes in electron pitch angle and kinetic energy are stochastic, and the overall effect is cumulative; that is, the changes averaged over all test electrons increase monotonically with time. The localized rates of wave-induced pitch-angle scattering and momentum diffusion in the plasmasphere are analyzed in detail for artificially generated ELF/VLF whistlers with an observable in situ amplitude of ~10 pT. While the local momentum diffusion of relativistic electrons is small, with a rate of ~10^−7 s^−1, the local pitch-angle scattering can be intense near the loss cone, with a rate of ~10^−4 s^−1. Our investigation further supports the feasibility of artificial triggering of ELF/VLF whistler waves for removal of high-energy electrons at lower L shells within the plasmasphere. Moreover, our test particle simulation results show quantitatively good agreement with quasi-linear diffusion coefficients, confirming the applicability of both methods to evaluate the resonant diffusion effect of artificially generated ELF/VLF whistlers.
Macherey, Olivier; Carlyon, Robert P; Chatron, Jacques; Roman, Stéphane
2017-06-01
Most cochlear implants (CIs) activate their electrodes non-simultaneously in order to eliminate electrical field interactions. However, the membrane of auditory nerve fibers needs time to return to its resting state, causing the probability of firing to a pulse to be affected by previous pulses. Here, we provide new evidence on the effect of pulse polarity and current level on these interactions. In experiment 1, detection thresholds and most comfortable levels (MCLs) were measured in CI users for 100-Hz pulse trains consisting of two consecutive biphasic pulses of the same or of opposite polarity. All combinations of polarities were studied: anodic-cathodic-anodic-cathodic (ACAC), CACA, ACCA, and CAAC. Thresholds were lower when the adjacent phases of the two pulses had the same polarity (ACCA and CAAC) than when they were different (ACAC and CACA). Some subjects showed a lower threshold for ACCA than for CAAC while others showed the opposite trend demonstrating that polarity sensitivity at threshold is genuine and subject- or electrode-dependent. In contrast, anodic (CAAC) pulses always showed a lower MCL than cathodic (ACCA) pulses, confirming previous reports. In experiments 2 and 3, the subjects compared the loudness of several pulse trains differing in current level separately for ACCA and CAAC. For 40 % of the electrodes tested, loudness grew non-monotonically as a function of current level for ACCA but never for CAAC. This finding may relate to a conduction block of the action potentials along the fibers induced by a strong hyperpolarization of their central processes. Further analysis showed that the electrodes showing a lower threshold for ACCA than for CAAC were more likely to yield a non-monotonic loudness growth. It is proposed that polarity sensitivity at threshold reflects the local neural health and that anodic asymmetric pulses should preferably be used to convey sound information while avoiding abnormal loudness percepts.
International Nuclear Information System (INIS)
Shariati, Mahdi; Ramli Sulong, N.H.; Suhatril, Meldi; Shariati, Ali; Arabnejad Khanouki, M.M.; Sinaei, Hamid
2012-01-01
Highlights: ► C-shaped angle connectors show 8.8–33.1% strength degradation under cyclic loading. ► Connector fracture type of failure was experienced in C-shaped angle shear connectors. ► In push-out samples, more cracking was observed in those slabs with longer angles. ► C-shaped angle connectors show good behaviour in terms of the ultimate shear capacity. ► C-shaped angle connectors did not fulfil the requirements for ductility criteria. -- Abstract: This paper presents an evaluation of the structural behaviour of C-shaped angle shear connectors in composite beams, suitable for transferring shear force in composite structures. The results of the experimental programme, including eight push-out tests, are presented and discussed. The results include resistance, strength degradation, ductility, and failure modes of C-shaped angle shear connectors, under monotonic and fully reversed cyclic loading. The results show that connector fracture type of failure was experienced in C-shaped angle connectors and after the failure, more cracking was observed in those slabs with longer angles. On top of that, by comparing the shear resistance of C-shaped angle shear connectors under monotonic and cyclic loading, these connectors showed 8.8–33.1% strength degradation, under fully reversed cyclic loading. Furthermore, it was concluded that the mentioned shear connector shows a proper behaviour, in terms of the ultimate shear capacity, but it does not satisfy the ductility criteria, imposed by the Eurocode 4, to perform a plastic distribution of the shear force between different connectors along the beam length.
Directory of Open Access Journals (Sweden)
Chi-Chang Wang
2013-09-01
Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; finally, based on the residual correction concept, the complex constrained solution problem is transformed into a simpler equational iteration. As verified by the four examples given in this paper, the proposed method can be used to quickly obtain upper and lower solutions of problems of this kind, and to easily identify the error range between the mean approximate solution and the exact solution.
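A rough sketch of the monotone iterative idea behind such upper/lower solution methods, using plain finite differences in place of the paper's cubic-spline discretization and a model problem of our own choosing, -u'' = 1 - u with u(0) = u(1) = 0:

```python
import numpy as np

# alpha ≡ 0 is a lower solution and beta ≡ 1 an upper solution of
# -u'' = f(u) = 1 - u. Iterating the linearized sweep
#     -u_{k+1}'' + lam*u_{k+1} = f(u_k) + lam*u_k,   lam >= sup|f'(u)| = 1,
# from alpha and beta produces monotone sequences that squeeze the
# exact solution between them.
n, lam = 99, 2.0
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# Discrete operator for -u'' + lam*u with Dirichlet boundaries.
A = (np.diag(np.full(n, 2.0 / h**2 + lam))
     + np.diag(np.full(n - 1, -1.0 / h**2), 1)
     + np.diag(np.full(n - 1, -1.0 / h**2), -1))

def sweep(u):
    # One monotone iteration step: solve the linearized problem.
    return np.linalg.solve(A, (1.0 - u) + lam * u)

lower, upper = np.zeros(n), np.ones(n)
for _ in range(60):
    lower, upper = sweep(lower), sweep(upper)

exact = 1.0 - np.cosh(x - 0.5) / np.cosh(0.5)
print(np.all(lower <= upper + 1e-12))        # ordering preserved
print(np.max(np.abs(lower - exact)) < 1e-3)  # both sequences converge
```

The inverse of the discrete operator is entrywise non-negative (an M-matrix), which is what preserves the ordering between the lower and upper iterates at every sweep.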
Normalized modes at selected points without normalization
Kausel, Eduardo
2018-04-01
As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem |K − λM| = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕ^T Mϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K, M; that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but it has in turn interesting theoretical implications.
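The conventional normalization described above (scaling each mode so that μ = ϕ^T Mϕ = 1) can be verified numerically; this is a generic sketch of that standard calculation, not of the paper's residue-theorem construction:

```python
import numpy as np

# Generalized eigenproblem K*phi = lambda*M*phi with K, M symmetric
# positive definite, solved via the symmetric reduction
#     (L^{-1} K L^{-T}) y = lambda y,   phi = L^{-T} y,   M = L L^T,
# which yields modes that are automatically mass-normalized.
K = np.array([[6.0, -2.0], [-2.0, 4.0]])   # stiffness matrix (example values)
M = np.array([[2.0, 0.0], [0.0, 1.0]])     # mass matrix (example values)

Linv = np.linalg.inv(np.linalg.cholesky(M))
lam, Y = np.linalg.eigh(Linv @ K @ Linv.T)
Phi = Linv.T @ Y                           # columns are the modes

print(np.allclose(Phi.T @ M @ Phi, np.eye(2)))     # modal mass = identity
print(np.allclose(Phi.T @ K @ Phi, np.diag(lam)))  # stiffness diagonalized
```

Because Y is orthonormal, ϕ^T Mϕ = Y^T L^{-1}(LL^T)L^{-T} Y = Y^T Y = I holds by construction, with no explicit division by the square root of the modal mass.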
International Nuclear Information System (INIS)
Weissman, S.D.
1989-01-01
The foot may be thought of as a bag of bones tied tightly together and functioning as a unit. The bones are expected to maintain their alignment without causing symptomatology to the patient. The author discusses a normal radiograph. The bones must have normal shape and normal alignment. The density of the soft tissues should be normal, and there should be no fractures, tumors, or foreign bodies.
Non-monotonic swelling of surface grafted hydrogels induced by pH and/or salt concentration
Longo, Gabriel S.; Olvera de la Cruz, Monica; Szleifer, I.
2014-09-01
We use a molecular theory to study the thermodynamics of a weak-polyacid hydrogel film that is chemically grafted to a solid surface. We investigate the response of the material to changes in the pH and salt concentration of the buffer solution. Our results show that the pH-triggered swelling of the hydrogel film has a non-monotonic dependence on the acidity of the bath solution. At most salt concentrations, the thickness of the hydrogel film presents a maximum when the pH of the solution is increased from acidic values. The quantitative details of such swelling behavior, which is not observed when the film is physically deposited on the surface, depend on the molecular architecture of the polymer network. This swelling-deswelling transition is the consequence of the complex interplay between the chemical free energy (acid-base equilibrium), the electrostatic repulsions between charged monomers, which are both modulated by the absorption of ions, and the ability of the polymer network to regulate charge and control its volume (molecular organization). In the absence of such competition, for example, for high salt concentrations, the film swells monotonically with increasing pH. A deswelling-swelling transition is similarly predicted as a function of the salt concentration at intermediate pH values. This reentrant behavior, which is due to the coupling between charge regulation and the two opposing effects triggered by salt concentration (screening electrostatic interactions and charging/discharging the acid groups), is similar to that found in end-grafted weak polyelectrolyte layers. Understanding how to control the response of the material to different stimuli, in terms of its molecular structure and local chemical composition, can help the targeted design of applications with extended functionality. We describe the response of the material to an applied pressure and an electric potential. We present profiles that outline the local chemical composition of the
International Nuclear Information System (INIS)
Henes, D.; Straub, S.; Blum, W.; Moehlig, H.; Granacher, J.; Berger, C.
1999-01-01
The current state of development of the composite model of deformation of the martensitic steel X 20(22) CrMoV 12 1 under conditions of creep is briefly described. The model is able to reproduce differences in monotonic creep strength of different melts with slightly different initial microstructures and to simulate cyclic creep with alternating phases of tension and compression. (orig.)
Kerimov, M. K.
2016-12-01
This paper continues the study of real zeros of Bessel functions begun in the previous parts of this work (see M. K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014); 56 (7), 1175-1208 (2016)). Some new results regarding the monotonicity, convexity, concavity, and other properties of zeros are described. Additionally, the zeros of q-Bessel functions are investigated.
Energy Technology Data Exchange (ETDEWEB)
Koyama, Motomichi, E-mail: koyama@mech.kyushu-u.ac.jp [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan); Yu, Yachen; Zhou, Jia-Xi [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan); Yoshimura, Nobuyuki [Nippon Steel & Sumitomo Metal Corporation, 20-1 Shintomi, Futtsu, Chiba 293-8511 (Japan); Sakurada, Eisaku [Nippon Steel & Sumitomo Metal Corporation, 5-3 Tokai, Aichi 476-8686 (Japan); Ushioda, Kohsaku [Nippon Steel & Sumitomo Metal Corporation, 20-1 Shintomi, Futtsu, Chiba 293-8511 (Japan); Noguchi, Hiroshi [Faculty of Engineering, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka-shi, Fukuoka 819-0395 (Japan)
2016-06-14
The effects of the morphology and distribution of cementite on damage formation were studied using in situ scanning electron microscopy under monotonic and cyclic tension. To investigate the effects of the morphology/distribution of cementite, intergranular cementite precipitation (ICP) and transgranular cementite precipitation (TCP) steels were prepared from an ingot of Fe-0.017 wt% C binary alloy using different heat treatments. In all cases, the damage incidents were observed primarily at the grain boundaries. The damage morphology was dependent on the cementite morphology and loading condition. Monotonic tension in the ICP steel caused cracks across the cementite plates, located at the grain boundaries. In contrast, fatigue loading in the ICP steel induced cracking at the ferrite/cementite interface. Moreover, in the TCP steel, monotonic tension- and cyclic tension-induced intergranular cracking was distinctly observed, due to the slip localization associated with a limited availability of free slip paths. When a notch is introduced to the ICP steel specimen, the morphology of the cyclic tension-induced damage at the notch tip changed to resemble that across the intergranular cementite, and was rather similar to the monotonic tension-induced damage. The damage at the notch tip coalesced with the main crack, accelerating the growth of the fatigue crack.
Directory of Open Access Journals (Sweden)
Constantin E. Chalioris
2013-01-01
Full Text Available This paper presents the findings of an experimental study on the application of a reinforced self-compacting concrete jacketing technique to damaged reinforced concrete beams. Test results of 12 specimens subjected to monotonic loading up to failure, or to repeated loading steps prior to total failure, are included. First, 6 beams designed to be shear dominated and constructed of commonly used concrete were tested to initial damage; all failed in a brittle manner. Afterwards, the shear-damaged beams were retrofitted using a self-compacting concrete U-formed jacket consisting of small-diameter steel bars and U-formed stirrups, in order to increase their shear resistance and potentially alter their initially observed shear response to a more ductile one. The jacketed beams were then retested under the same loading. Test results indicated that the application of reinforced self-compacting concrete jacketing to damaged reinforced concrete beams is a promising rehabilitation technique. All the jacketed beams showed enhanced overall structural response and 35% to 50% higher load-bearing capacities. The ultimate shear load of the jacketed beams varied from 39.7 to 42.0 kN, whereas the capacity of the original beams was approximately 30% lower. Further, all the retrofitted specimens exhibited typical flexural response with high values of deflection ductility.
Directory of Open Access Journals (Sweden)
Ngoc-Trung Nguyen
2014-02-01
Full Text Available Large-strain monotonic and cyclic loading tests of AZ31B magnesium alloy sheets were performed with a newly developed testing system, at different temperatures, ranging from room temperature to 250 °C. Behaviors showing significant twinning during initial in-plane compression and untwinning in subsequent tension at and slightly above room temperature were recorded. Strong yielding asymmetry and nonlinear hardening behavior were also revealed. Considerable Bauschinger effects, transient behavior, and variable permanent softening responses were observed near room temperature, but these were reduced and almost disappeared as the temperature increased. Different stress–strain responses were inherent to the activation of twinning at lower temperatures and non-basal slip systems at elevated temperatures. A critical temperature was identified to account for the transition between the twinning-dominant and slip-dominant deformation mechanisms. Accordingly, below the transition point, stress–strain curves of cyclic loading tests exhibited concave-up shapes for compression or compression following tension, and an unusual S-shape for tension following compression. This unusual shape disappeared when the temperature was above the transition point. Shrinkage of the elastic range and variation in Young’s modulus due to plastic strain deformation during stress reversals were also observed. The texture-induced anisotropy of both the elastic and plastic behaviors was characterized experimentally.
International Nuclear Information System (INIS)
Shrier, O; Khachan, J; Bosi, S
2006-01-01
A Markov chain method is presented as an alternative to Monte Carlo simulation of charge exchange collisions of an energetic hydrogen ion beam with a cold background hydrogen gas. This method was used to determine the average energy of the resulting energetic neutrals along the path of the beam. A comparison with Monte Carlo modelling showed good agreement, with the advantage that the Markov chain method required much less computing time and produced no numerical noise. In particular, the Markov chain method works well for monotonically increasing or decreasing electrostatic potentials. Finally, good agreement is obtained with experimental results from Doppler shift spectroscopy on energetic beams from a hollow cathode discharge. In particular, the average energy of ions that undergo charge exchange reaches a plateau that can be well below the full energy that might be expected from the applied voltage bias, depending on the background gas pressure. For example, pressures of ∼20 mTorr limit the ion energy to ∼20% of the applied voltage.
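The abstract's closing example (neutrals plateauing well below the applied voltage) can be reproduced with a minimal absorbing-Markov-chain sketch. The linear potential drop, step count, and constant per-step charge-exchange probability below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Absorbing Markov chain: at each spatial step the ion either charge-exchanges
# (absorbing state; the neutral keeps the ion's current energy) or survives.
n_steps = 1000
V_applied = 1.0                                  # normalized applied voltage
E = np.linspace(0.0, V_applied, n_steps)         # assumed linear energy gain

p_cx = 0.005                                     # exchange probability per step
survival = (1.0 - p_cx) ** np.arange(n_steps)    # P(still an ion at step k)
first_cx = survival * p_cx                       # P(first exchange at step k)
first_cx /= first_cx.sum()                       # condition on neutralization

frac = np.sum(first_cx * E) / V_applied          # mean neutral energy fraction
print(frac)  # well below 1: neutrals carry only part of the applied voltage
```

For this (invented) pressure-like parameter the plateau sits near 20% of the applied voltage, qualitatively matching the abstract's example.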
Directory of Open Access Journals (Sweden)
Mohaiman Jaffar Sharba
2016-02-01
Full Text Available Natural–synthetic fiber hybrid composites offer a combination of the high mechanical properties of synthetic fibers and the advantages of renewable fibers, producing a material with tailored properties. In this study, plain-woven kenaf/glass reinforced unsaturated polyester (UP) hybrid composites were fabricated using the hand lay-up method with a cold hydraulic press in a sandwich-configuration laminate. The glass was used as a shell with kenaf as a core, with an approximate total fiber content of 40%. Three glass/kenaf weight ratios of 70/30 (H1), 55/45 (H2), and 30/70 (H3) were used to fabricate hybrid composites. Pure glass/UP and kenaf/UP laminates were also fabricated for comparison purposes. Monotonic tests, namely tensile, compression, and flexural strength tests of the composites, were performed. The morphological properties of tensile and compression failure of kenaf and hybrid composites were studied. In addition, uniaxial tensile fatigue tests of the hybrid composites were conducted and their fatigue life evaluated. The results revealed that hybrid composite H1 offered a good balance and the best static properties, but in tensile fatigue loading H3 displayed low fatigue sensitivity when compared with the other hybrid composites.
Zhou, Chunlüe; Wang, Kaicun
2016-05-13
Most studies on global warming rely on global mean surface temperature, whose change is jointly determined by anthropogenic greenhouse gases (GHGs) and natural variability. This has fueled a heated debate on whether there is a recent warming hiatus and what caused it. Here, we present a novel method and apply it to a 5° × 5° grid of Northern Hemisphere land for the period 1900 to 2013. Our results show that the coldest 5% of minimum temperature anomalies (the coldest deviation) have increased monotonically by 0.22 °C/decade, which reflects well the elevated anthropogenic GHG effect. The warmest 5% of maximum temperature anomalies (the warmest deviation), however, display a significant oscillation following the Atlantic Multidecadal Oscillation (AMO), with a warming rate of 0.07 °C/decade from 1900 to 2013. The warmest (0.34 °C/decade) and coldest deviations (0.25 °C/decade) increased at much higher rates over the most recent decade than their last-century mean rates, indicating that the hiatus should not be interpreted as a general slowing of climate change. The significant oscillation of the warmest deviation extends a previous study reporting no pause in the hottest temperature extremes since 1979, and first uncovers its increase from 1900 to 1939 and its decrease from 1940 to 1969.
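The "coldest deviation" is a percentile-then-trend computation. A sketch on synthetic anomalies (the series, the imposed 0.005 °C/yr warming of the cold tail, and the noise level are all invented for illustration) looks like:

```python
import numpy as np

# Per-year 5th percentile of daily minimum-temperature anomalies, then a
# least-squares linear trend expressed in degC/decade.
rng = np.random.default_rng(0)
years = np.arange(1900, 2014)
anomalies = [rng.normal(loc=0.005 * (y - 1900), scale=2.0, size=365)
             for y in years]                      # synthetic daily anomalies

coldest = np.array([np.percentile(a, 5) for a in anomalies])  # coldest 5%
slope_decade = np.polyfit(years, coldest, 1)[0] * 10.0        # degC/decade
print(slope_decade)
```

The recovered trend matches the imposed one (about 0.05 °C/decade here) despite the large daily noise, which is the point of working with yearly tail percentiles.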
Bas-Relief Modeling from Normal Images with Intuitive Styles.
Ji, Zhongping; Ma, Weiyin; Sun, Xianfang
2014-05-01
Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
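The core operation (turning a normal image into a compressed depth field) can be sketched without the paper's sparse system: an FFT (Frankot-Chellappa-style) least-squares integration with naive gradient attenuation stands in for the paper's solver and style controls, and the bump-shaped normal image is invented for illustration:

```python
import numpy as np

def bas_relief_from_normals(normals, compress=0.3):
    # Gradients from the normal field, attenuated to flatten the relief.
    nz = np.clip(normals[..., 2], 1e-3, None)
    p = -normals[..., 0] / nz * compress          # attenuated dz/dx
    q = -normals[..., 1] / nz * compress          # attenuated dz/dy
    h, w = p.shape
    WX, WY = np.meshgrid(2 * np.pi * np.fft.fftfreq(w),
                         2 * np.pi * np.fft.fftfreq(h))
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                             # avoid 0/0 at the DC term
    # Least-squares integration of (p, q) in the Fourier domain.
    Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                 # pin mean height to zero
    return np.real(np.fft.ifft2(Z))

# Toy normal image: a smooth bump on a flat background.
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
inside = xx**2 + yy**2 < 0.5
n = np.dstack([np.where(inside, xx, 0.0), np.where(inside, yy, 0.0),
               np.ones_like(xx)])
n /= np.linalg.norm(n, axis=2, keepdims=True)
depth = bas_relief_from_normals(n)
print(depth[64, 64] > depth[5, 5])  # the bump rises above the background
```

Working in normal-image space, as the paper advocates, means any image editor that can paint or composite normal maps can drive this pipeline.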
Visual Memories Bypass Normalization.
Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam
2018-05-01
How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
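The normalization signature the study probes can be shown in a minimal divisive-normalization sketch (a Heeger-style canonical computation; the semisaturation constant and exponent below are illustrative choices, not the study's fitted values):

```python
import numpy as np

def divisive_normalization(drive, sigma=0.1, n=2.0):
    """Each unit's exponentiated drive is divided by the pooled drive."""
    d = np.asarray(drive, dtype=float) ** n
    return d / (sigma ** n + d.sum())

# Pitting two stimuli against each other, as in the experimental logic:
alone = divisive_normalization([0.8, 0.0])[0]    # stimulus presented alone
paired = divisive_normalization([0.8, 0.8])[0]   # with a competing stimulus
print(alone, paired)  # the competitor suppresses the response to the first
```

The study's finding is that this mutual-suppression signature appears between perceptual representations but not between items held in working memory.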
Wakefield, Jerome C; Schmitz, Mark F
2017-04-01
"Complicated" subthreshold depression (CsD) includes at least one of six pathosuggestive "complicated" symptoms: >6 months duration, marked role impairment, sense of worthlessness, suicidal ideation, psychotic ideation, and psychomotor retardation. "Uncomplicated" subthreshold depression (UsD) has no complicated features. Whereas studies show that complicated (CMDD) versus uncomplicated (UMDD) major depression differ substantially in severity and prognosis, UsD and CsD severity has not been previously compared. This study evaluates UsD and CsD pathology validator levels and examines whether the complicated/uncomplicated distinction offers incremental concurrent validity over the standard number-of-symptoms dimension as a depression severity measure. Using nationally representative community data from the National Comorbidity Survey, seven depression lifetime history subgroups were identified: one MDD screener symptom (n=1432); UsD (n=430); CsD (n=611); UMDD (n=182); and CMDD with 5-6 symptoms (n=518), 7 symptoms (n=217), and 8-9 symptoms (n=291). Severity was evaluated using five concurrent pathology validators: suicide attempt, interference with life, help seeking, hospitalization, and generalized anxiety disorder. CsD validator levels are substantially higher than both UsD and UMDD levels, and similar to mild CMDD, disconfirming the "monotonicity thesis" that severity increases with symptom number. Complicated/uncomplicated status predicts severity, and when complicatedness is controlled, number of symptoms no longer predicts validator levels. Diagnoses were based on respondents' fallible retrospective symptom reports during a lay-administered structured interview, which may not yield diagnoses comparable to clinicians' assessments. CsD is more severe than UsD and comparable to mild MDD. Complicated status more validly indicates depression severity than the standard number-of-symptoms measure. Copyright © 2017 Elsevier B.V. All rights reserved.
Time-dependent, non-monotonic response of warm convective cloud fields to changes in aerosol loading
Directory of Open Access Journals (Sweden)
G. Dagan
2017-06-01
Full Text Available Large eddy simulations (LESs) with bin microphysics are used here to study cloud fields' sensitivity to changes in aerosol loading and the time evolution of this response. Similarly to the known response of a single cloud, we show that the mean field properties change in a non-monotonic trend, with an optimum aerosol concentration for which the field reaches its maximal water mass or rain yield. This trend is a result of competition between processes that encourage cloud development versus those that suppress it. However, another layer of complexity is added when considering clouds' impact on the field's thermodynamic properties and how this is dependent on aerosol loading. Under polluted conditions, rain is suppressed and the non-precipitating clouds act to increase atmospheric instability. This results in warming of the lower part of the cloudy layer (in which there is net condensation) and cooling of the upper part (net evaporation). Evaporation at the upper part of the cloudy layer in the polluted simulations raises humidity at these levels and thus amplifies the development of the next generation of clouds (preconditioning effect). On the other hand, under clean conditions, the precipitating clouds drive net warming of the cloudy layer and net cooling of the sub-cloud layer due to rain evaporation. These two effects act to stabilize the atmospheric boundary layer with time (consumption of the instability). The evolution of the field's thermodynamic properties affects the cloud properties in return, as shown by the migration of the optimal aerosol concentration toward higher values.
Canonical single field slow-roll inflation with a non-monotonic tensor-to-scalar ratio
Energy Technology Data Exchange (ETDEWEB)
Germán, Gabriel [Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford, OX1 3NP (United Kingdom); Herrera-Aguilar, Alfredo [Instituto de Física, Benemérita Universidad Autónoma de Puebla, Apdo. postal J-48, CP 72570, Puebla, Pue., México (Mexico); Hidalgo, Juan Carlos [Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Apdo. postal 48-3, 62251 Cuernavaca, Morelos, México (Mexico); Sussman, Roberto A., E-mail: gabriel@fis.unam.mx, E-mail: aherrera@ifuap.buap.mx, E-mail: hidalgo@fis.unam.mx, E-mail: sussman@nucleares.unam.mx [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Apdo. postal 70-543, 04510 México D. F., México (Mexico)
2016-05-01
We take a pragmatic, model independent approach to single field slow-roll canonical inflation by imposing conditions, not on the potential, but on the slow-roll parameter ε(φ) and its derivatives ε'(φ) and ε''(φ), thereby extracting general conditions on the tensor-to-scalar ratio r and the running n {sub sk} at φ {sub H} where the perturbations are produced, some 50–60 e -folds before the end of inflation. We find quite generally that for models where ε(φ) develops a maximum, a relatively large r is most likely accompanied by a positive running, while a negligible tensor-to-scalar ratio implies negative running. The definitive answer, however, is given in terms of the slow-roll parameter ξ{sub 2}(φ). To accommodate a large tensor-to-scalar ratio that meets the limiting values allowed by the Planck data, we study a non-monotonic ε(φ) decreasing during most of inflation. Since at φ {sub H} the slow-roll parameter ε(φ) is increasing, we thus require that ε(φ) develops a maximum for φ > φ {sub H}, after which ε(φ) decreases to small values where most e -folds are produced. The end of inflation might occur through a hybrid mechanism, and a small field excursion Δφ {sub e} ≡ |φ {sub H} −φ {sub e} | is obtained with a sufficiently thin profile for ε(φ) which, however, should not conflict with the second slow-roll parameter η(φ). As a consequence of this analysis we find bounds for Δφ {sub e} , r {sub H} and for the scalar spectral index n {sub sH} . Finally we provide examples where these considerations are explicitly realised.
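For context, the standard single-field slow-roll relations connect these parameters to the observables named above (conventions vary between references; the paper works with ε(φ) directly rather than through the potential, so this is background rather than its derivation):

```latex
\epsilon = \frac{M_{\mathrm{Pl}}^2}{2}\left(\frac{V'}{V}\right)^{2}, \qquad
\eta = M_{\mathrm{Pl}}^2\,\frac{V''}{V}, \qquad
\xi_2 = M_{\mathrm{Pl}}^4\,\frac{V'\,V'''}{V^{2}},
\\[6pt]
r \simeq 16\,\epsilon, \qquad
n_s - 1 \simeq 2\eta - 6\epsilon, \qquad
\frac{\mathrm{d}n_s}{\mathrm{d}\ln k} \simeq 16\,\epsilon\,\eta - 24\,\epsilon^{2} - 2\,\xi_2 .
```

In this notation a non-monotonic ε(φ) with a maximum beyond φ_H is what lets r be sizable at horizon exit while the running changes sign through ξ_2.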
International Nuclear Information System (INIS)
Haehlen, Peter; Elmiger, Bruno
2000-01-01
The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', the whole country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people was stressed by several components of the multi-pronged campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has been well received by the media. In the controversy over antinuclear arguments brought forward by environmental organisations, journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be
International Nuclear Information System (INIS)
Codorniu Pujals, Daniel
2013-01-01
Raman spectroscopy is one of the most used experimental techniques in studying irradiated carbon nanostructures, in particular graphene, due to its high sensitivity to the presence of defects in the crystalline lattice. Special attention has been given to the variation of the intensity of the Raman D-band of graphene with the concentration of defects produced by irradiation. Nowadays, there is sufficient experimental evidence of the non-monotonous character of that dependence, but the explanation of this behavior is still controversial. In the present work we developed a simplified mathematical model to obtain a functional relationship between these two magnitudes and showed that the non-monotonous dependence is intrinsic to the nature of the D-band and that it is not necessarily linked to amorphization processes. The obtained functional dependence was used to fit experimental data taken from other authors. The determination coefficient of the fitting was 0.96.
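A simple "activated area" model (a Lucchese/Cançado-style form used here as an assumption, not the paper's exact derivation) already produces the non-monotonic dependence: each defect contributes an activated ring of radius r_A around a structurally disordered core of radius r_S, and all parameter values below are illustrative:

```python
import numpy as np

def d_band_intensity(L_D, r_A=3.1, r_S=1.0, C_A=4.2, C_S=0.87):
    """L_D: mean inter-defect distance (nm); returns a relative I_D."""
    f_core = 1.0 - np.exp(-np.pi * r_S**2 / L_D**2)      # disordered fraction
    f_act = (np.exp(-np.pi * r_S**2 / L_D**2)
             - np.exp(-np.pi * r_A**2 / L_D**2))         # activated fraction
    return C_A * f_act + C_S * f_core

L_D = np.linspace(0.5, 10.0, 200)     # from heavy damage to sparse defects
I_D = d_band_intensity(L_D)
peak_L = L_D[np.argmax(I_D)]          # interior maximum: non-monotonic curve
print(peak_L)
```

The maximum appears at an intermediate defect spacing because at high defect density the activated rings overlap and the core (amorphized-like) term saturates, without invoking a separate amorphization mechanism.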
Directory of Open Access Journals (Sweden)
Cristina Câmpian
2006-01-01
Full Text Available For more than one hundred years, the construction system based on steel or composite steel-concrete frames has been one of the most widely used building types in civil engineering. For an optimal dimensioning of the structure, engineers must find a compromise between the structural requirements of resistance, stiffness and ductility on one side, and architectural requirements on the other. Three monotonic tests and nine cyclic tests according to the ECCS loading procedure were carried out in the Cluj Laboratory of Concrete. The tested composite columns, of the fully encased type, were subjected to a variable transverse load at one end while a constant axial compression force was maintained. An analytical interpretation is given for the calculus of column stiffness for the monotonic tests, comparing it with the latest versions of the Eurocode 4 stiffness formula.
Yao, Bo; Belin, Pascal; Scheepers, Christoph
2012-04-15
In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech. Copyright © 2012 Elsevier Inc. All rights reserved.
Normality in Analytical Psychology
Myers, Steve
2013-01-01
Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity. PMID:25379262
L∞-error estimate for a system of elliptic quasivariational inequalities
Directory of Open Access Journals (Sweden)
M. Boulbrachene
2003-01-01
Full Text Available We deal with the numerical analysis of a system of elliptic quasivariational inequalities (QVIs). Under W2,p(Ω)-regularity of the continuous solution, a quasi-optimal L∞-convergence of a piecewise linear finite element method is established, involving a monotone algorithm of Bensoussan-Lions type and standard uniform error estimates known for elliptic variational inequalities (VIs).
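The monotone-iteration idea behind such algorithms can be illustrated on a much simpler cousin: a scalar obstacle-type variational inequality solved by projected Gauss-Seidel, which generates a monotonically decreasing sequence from a supersolution. Everything below (grid size, load, obstacle) is invented for the sketch:

```python
import numpy as np

# 1-D obstacle problem on (0,1):  -u'' >= f,  u >= psi,
# (-u'' - f)(u - psi) = 0,  u(0) = u(1) = 0, discretized by finite differences.
n = 99
h = 1.0 / (n + 1)
f = -8.0 * np.ones(n)            # load pulling the membrane down
psi = -0.1 * np.ones(n)          # obstacle from below
u = np.zeros(n)                  # start from a supersolution (u = 0)

for _ in range(20000):           # projected Gauss-Seidel sweeps
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u_new = 0.5 * (left + right + h * h * f[i])  # unconstrained update
        u[i] = max(u_new, psi[i])                    # project onto u >= psi

# The membrane touches the obstacle in the middle and lifts off near the ends.
print(u[n // 2], u[0])
```

Bensoussan-Lions-type schemes for QVI systems iterate coupled solves of this kind, with the obstacle itself updated from the previous iterate; the monotonicity of the sequence is what the paper's error analysis exploits.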
Energy Technology Data Exchange (ETDEWEB)
Wang, Long, E-mail: longwang_calt@163.com [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France); Limodin, Nathalie; El Bartali, Ahmed; Witz, Jean-François; Seghir, Rian [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France); Buffiere, Jean-Yves [Laboratoire Matériaux, Ingénierie et Sciences (MATEIS), CNRS UMR5510, INSA-Lyon, 20 Av. Albert Einstein, 69621 Villeurbanne (France); Charkaluk, Eric [Univ. Lille, CNRS, Centrale Lille, Arts et Metiers Paris tech, FRE 3723 – LML – Laboratoire de Mecanique de Lille, F-59000 Lille (France)
2016-09-15
The Lost Foam Casting (LFC) process is replacing the conventional gravity Die Casting (DC) process in the automotive industry for the purposes of geometry optimization, cost reduction and consumption control. However, due to its lower cooling rate, LFC produces a coarser microstructure, which reduces fatigue life. In order to study the influence of the casting microstructure of LFC Al-Si alloy on damage micromechanisms under monotonic tensile loading and Low Cycle Fatigue (LCF) at room temperature, an experimental protocol based on three-dimensional (3D) in-situ analysis has been set up and validated. This paper focuses on the influence of pores on crack initiation in monotonic and cyclic tensile loadings. X-ray Computed Tomography (CT) allowed the material microstructure to be characterized in 3D and the damage evolution to be followed in situ, also in 3D. Experimental and numerical mechanical fields were obtained using the Digital Volume Correlation (DVC) technique and Finite Element Method (FEM) simulation, respectively. Pores were shown to have an important influence on strain localization, as large pores generate enough strain localization zones for crack initiation in both monotonic tensile and cyclic loadings.
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2015-11-01
The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by ones with Mexican-hat-type activation functions. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the obtained 3^n locally stable equilibrium points are located in both saturated and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
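The per-neuron counting argument can be seen in one dimension. The piecewise-linear activation below is invented for the sketch (not the paper's function): it rises, dips, rises again and saturates, so dx/dt = -x + w*f(x) can cross zero five times, three of them downward (stable):

```python
import numpy as np

# Non-monotonic piecewise-linear activation via interpolation breakpoints;
# np.interp clamps to -1 / 1 outside [-1, 3] (saturation).
xp = [-1.0, 1.0, 1.5, 3.0]
fp = [-1.0, 1.0, 0.2, 1.0]

w = 4.0                              # self-coupling weight
x = np.linspace(-6.0, 6.0, 600000)   # fine grid (avoids landing on zeros)
g = -x + w * np.interp(x, xp, fp)    # right-hand side of dx/dt = g(x)

equilibria = int(np.count_nonzero(g[:-1] * g[1:] < 0))
stable = int(np.count_nonzero((g[:-1] > 0) & (g[1:] < 0)))  # down-crossings
print(equilibria, stable)  # 5 equilibria, 3 stable for one neuron
```

With a monotonic sigmoid-like f the same construction yields at most 3 crossings (2 stable) per neuron, which is where the paper's 5^n versus 3^n capacity gain over monotonic (and Mexican-hat) activations comes from.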
Jiang, Rengui; Xie, Jiancang; He, Hailong; Kuo, Chun-Chao; Zhu, Jiwei; Yang, Mingxiang
2016-09-01
As one of the most popular vegetation indices for monitoring terrestrial vegetation productivity, the Normalized Difference Vegetation Index (NDVI) has been widely used to study plant growth and vegetation productivity around the world, especially the dynamic response of vegetation to climate change in terms of precipitation and temperature. Alberta is the most important agricultural and forestry province in Canada and has one of the country's best climatic observation systems. However, few studies pertaining to climate change and vegetation productivity have been conducted there. The objectives of this paper therefore were to better understand the impacts of climate change on vegetation productivity in Alberta using the NDVI and to provide a reference for policy makers and stakeholders. We investigated the following: (1) the variations of Alberta's smoothed NDVI (sNDVI, with noise eliminated compared to the raw NDVI) and two climatic variables (precipitation and temperature) using the non-parametric Mann-Kendall monotonic test and Theil-Sen's slope; (2) the relationships between sNDVI and climatic variables, and the potential predictability of sNDVI using climatic variables as predictors based on two predictive models; and (3) the use of a linear regression model and an artificial neural network calibrated by a genetic algorithm (ANN-GA) to estimate Alberta's sNDVI using precipitation and temperature as predictors. The results showed that (1) the monthly sNDVI increased during the past 30 years and a lengthened growing season was detected; (2) vegetation productivity in northern Alberta was mainly temperature driven and vegetation in southern Alberta was predominantly precipitation driven for the period 1982-2011; and (3) better performance of the sNDVI-climate relationships was obtained with the nonlinear model (ANN-GA) than with the linear regression model. Similar results in both monthly and summer sNDVI prediction using climatic variables as predictors revealed the applicability of the two models for sNDVI prediction.
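The two non-parametric trend tools named above are compact enough to sketch directly. This is a pure-numpy version without tie correction, and the synthetic "sNDVI-like" series (trend and noise level) is invented for illustration:

```python
import numpy as np

def mann_kendall_theil_sen(y):
    """Mann-Kendall Z statistic and Theil-Sen slope for a monotonic trend."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    i, j = np.triu_indices(n, k=1)                # all pairs i < j
    diffs = y[j] - y[i]
    s = np.sum(np.sign(diffs))                    # Mann-Kendall S statistic
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S (no ties)
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    slope = np.median(diffs / (j - i))            # Theil-Sen slope per step
    return z, slope

rng = np.random.default_rng(1)
t = np.arange(360)                                # e.g. 30 years, monthly
y = 0.01 * t + rng.normal(0.0, 0.5, t.size)       # trend buried in noise
z, slope = mann_kendall_theil_sen(y)
print(z > 1.96, slope)                            # significant rising trend
```

Both statistics are rank/median based, which is why they suit noisy, non-Gaussian series like sNDVI better than an ordinary least-squares slope.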
Ma, X.; Elbanna, A. E.; Kothari, K.
2017-12-01
Fault zone dynamics hold the key to resolving many outstanding geophysical problems, including the heat flow paradox, the discrepancy between fault static and dynamic strength, and energy partitioning. Most fault zones that generate tectonic events are gouge filled and fluid saturated, posing the need for formulating gouge-specific constitutive models that capture spatially heterogeneous compaction and dilation, non-monotonic rate dependence, and the transition between localized and distributed deformation. In this presentation, we focus primarily on elucidating the microscopic underpinnings of shear banding and stick-slip instabilities in sheared saturated granular materials and explore their implications for earthquake dynamics. We use a non-equilibrium thermodynamics model, the Shear Transformation Zone theory, to investigate the dynamics of strain localization and its connection to the stability of sliding in the presence and absence of pore fluids. We also consider the possible influence of self-induced mechanical vibrations as well as the role of external acoustic vibrations as an analogue for triggering by a distant event. For the dry case, our results suggest that at low and intermediate strain rates, persistent shear bands develop only in the absence of vibrations. Vibrations tend to fluidize the granular network and de-localize slip at these rates. Stick-slip is only observed for rough grains and is confined to the shear band. At high strain rates, stick-slip disappears and the different systems exhibit similar stress-slip responses. Changing the vibration intensity, duration or time of application alters the system response and may cause long-lasting rheological changes. The presence of pore fluids modifies the stick-slip pattern and may lead to both loss and development of slip instability, depending on the value of the confining pressure, imposed strain rate and hydraulic parameters. We analyze these observations in terms of possible transitions between rate
Smooth quantile normalization.
Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada
2018-04-01
Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
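The contrast between classic quantile normalization and the group-aware idea can be sketched compactly. The real qsmooth estimates a quantile-specific weight from between- and within-group variability; here a fixed weight w = 0.5 stands in for it (an assumption for illustration), and the two-group dataset is synthetic:

```python
import numpy as np

def quantile_normalize(X):
    """Force every column (sample) onto the same reference distribution."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    ref = np.sort(X, axis=0).mean(axis=1)        # shared reference quantiles
    return ref[ranks]

def qsmooth_like(X, groups, w=0.5):
    """Blend the global reference with each group's own reference."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    global_ref = np.sort(X, axis=0).mean(axis=1)
    out = np.empty_like(X, dtype=float)
    for g in np.unique(groups):
        cols = groups == g
        group_ref = np.sort(X[:, cols], axis=0).mean(axis=1)
        ref = w * global_ref + (1.0 - w) * group_ref
        out[:, cols] = ref[ranks[:, cols]]
    return out

# Two biological groups whose true distributions differ globally (means 0 vs 2).
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(0.0, 1.0, (500, 3)), rng.normal(2.0, 1.0, (500, 3))])
groups = np.array([0, 0, 0, 1, 1, 1])
Xq, Xs = quantile_normalize(X), qsmooth_like(X, groups)
gap_q = Xq[:, :3].mean() - Xq[:, 3:].mean()   # plain QN erases the group gap
gap_s = Xs[:, :3].mean() - Xs[:, 3:].mean()   # group-aware QN keeps part of it
print(gap_q, gap_s)
```

This is exactly the failure mode the abstract describes: when the global distributions genuinely differ between conditions, plain quantile normalization removes the biology along with the technical variation, while the group-aware blend preserves it.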