The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts
Directory of Open Access Journals (Sweden)
Lorenzo Zaninetti
2016-11-01
Full Text Available The determination of the luminosity function (LF) of gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is firstly modeled by the lognormal distribution and a four-parameter broken power law and, secondly, by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of the GRBs as a function of redshift.
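As a concrete illustration of the truncated-lognormal fitting step, the sketch below fits a lognormal restricted to a luminosity window [a, b] by maximum likelihood on synthetic data; the window limits, sample and parameter values are hypothetical, not those of the SWIFT-BAT analysis.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Hypothetical luminosity sample (arbitrary units): lognormal truncated to [a, b].
a, b = 0.5, 50.0
raw = rng.lognormal(mean=1.0, sigma=0.8, size=20000)
data = raw[(raw >= a) & (raw <= b)]

def trunc_lognorm_pdf(x, mu, sigma):
    """Lognormal pdf renormalized to the observable window [a, b]."""
    d = stats.lognorm(s=sigma, scale=np.exp(mu))
    return d.pdf(x) / (d.cdf(b) - d.cdf(a))

def nll(params):
    """Negative log-likelihood of the truncated model."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(np.log(trunc_lognorm_pdf(data, mu, sigma)))

res = optimize.minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

Because the likelihood is renormalized by the probability mass inside [a, b], the estimates remain consistent even though the sample is censored at both ends.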
Directory of Open Access Journals (Sweden)
J. C. Ferrari
This work evaluates the usage of the multimodal lognormal function to describe Particle Size Distributions (PSDs) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely Particle Swarm Optimization (PSO), was used for parameter estimation of the proposed model, minimizing the objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function could describe distinctive features of different types of PSDs with accuracy and consistency.
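A minimal sketch of the same idea: fit a two-mode lognormal mixture to a synthetic PSD by minimizing the mean squared error with a global optimizer. SciPy's differential evolution stands in for the paper's PSO, and all particle sizes, weights and noise levels are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

def lognorm_pdf(x, mu, sig):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sig**2)) / (x * sig * np.sqrt(2 * np.pi))

# Synthetic bimodal PSD: two lognormal modes plus measurement noise (all invented).
x = np.geomspace(0.05, 10.0, 300)        # particle diameter, arbitrary units
true_psd = 0.6 * lognorm_pdf(x, np.log(0.3), 0.25) + 0.4 * lognorm_pdf(x, np.log(2.0), 0.3)
obs = true_psd + rng.normal(0.0, 0.002, size=x.size)

def model(p):
    w, mu1, s1, mu2, s2 = p
    return w * lognorm_pdf(x, mu1, s1) + (1 - w) * lognorm_pdf(x, mu2, s2)

def mse(p):
    return np.mean((model(p) - obs) ** 2)

# Bounds keep the two modes ordered, so their labels cannot swap during the search.
bounds = [(0.1, 0.9), (np.log(0.05), np.log(1.0)), (0.05, 1.0),
          (np.log(1.0), np.log(8.0)), (0.05, 1.0)]
fit = differential_evolution(mse, bounds, seed=2)
w_hat, d1_hat = fit.x[0], np.exp(fit.x[1])
```

Any population-based global optimizer (PSO included) can be dropped in here; only the objective and the bounds matter.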
Mohammadi, Mohammad Hossein; Vanclooster, Marnik
2012-05-01
Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and the pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil, and indirectly in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer (CLT) model from knowledge of soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that in partially saturated conditions the air-filled pore volume acts as an inert solid phase, allowing the use of the Arya et al. (1999) pragmatic approach to estimate solute travel time statistics from the saturation degree and SMC parameters. The approach is evaluated using a set of partially saturated transport experiments presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested BTCs, predicted solute transport with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at larger depths. Given that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales.
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex and complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moment matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computationally efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
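The paper's modified moment matching is not reproduced here, but the classic Fenton-Wilkinson variant below conveys the idea: match the first two moments of a sum of lognormals with a single lognormal and compare its median against Monte Carlo. The ten summand parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten non-identical independent lognormal summands (hypothetical parameters).
mus = np.linspace(0.0, 0.5, 10)
sigs = np.linspace(0.2, 0.4, 10)

# Exact first two moments of the sum S = X1 + ... + X10.
m1 = np.sum(np.exp(mus + sigs**2 / 2))
var_s = np.sum(np.exp(2 * mus + sigs**2) * (np.exp(sigs**2) - 1))
m2 = var_s + m1**2                        # E[S^2]

# Fenton-Wilkinson: a single lognormal with the same first two moments.
sig2_fw = np.log(m2 / m1**2)
mu_fw = np.log(m1) - sig2_fw / 2

# Monte Carlo check: compare the medians of the true sum and the matched lognormal.
S = rng.lognormal(mus, sigs, size=(200000, 10)).sum(axis=1)
median_mc = np.median(S)
median_fw = np.exp(mu_fw)
```

For mild log-volatilities such as these, the two-moment match is already close; the paper's series correction targets exactly the regimes where it is not.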
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2015-01-01
High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles. Highlights: • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain. • No a priori shape of the distribution is assumed by the MINORIM inversion method. • Well crystallized particles have log-normal TEM and magnetic size distributions. • Defective particles can combine a monomodal size and a bimodal dipole moment.
Monomode microwave-assisted atom transfer radical polymerization
Zhang, H.; Schubert, U.S.
2004-01-01
The first monomode microwave-assisted atom transfer radical polymerization (ATRP) is reported. The ATRP of methyl methacrylate was successfully performed with microwave heating, which was well controlled and provided almost the same results as experiments with conventional heating, demonstrating the
Preconditioned stochastic gradient descent optimisation for monomodal image registration
Klein, S.; Staring, M.; Andersson, J.P.; Pluim, J.P.W.; Fichtinger, G.; Martel, A.; Peters, T.
2011-01-01
We present a stochastic optimisation method for intensity-based monomodal image registration. The method is based on a Robbins-Monro stochastic gradient descent method with adaptive step size estimation, and adds a preconditioning matrix. The derivation of the pre-conditioner is based on the
Arshad, Muhammad; Seadawy, Aly R.; Lu, Dianchen
2018-01-01
In mono-mode optical fibers, the higher order nonlinear Schrödinger equation (NLSE) describes the propagation of extremely short light pulses. We constructed optical solitons and solitary wave solutions of the higher order NLSE for mono-mode optical fibers by employing the modified extended mapping method, which has important applications in mathematics and physics. Furthermore, the formation conditions on the parameters under which optical bright and dark solitons can exist for this medium are also given. The obtained solutions are also presented graphically, which helps in understanding the physical phenomena of this model. The modulation instability analysis is utilized to discuss the stability of the model, which verifies that all obtained solutions are exact and stable. Many other such models arising in the applied sciences can also be solved by this reliable, powerful and effective method. The method can also be applied to other sorts of higher order nonlinear problems in contemporary areas of research.
Exponential Family Techniques for the Lognormal Left Tail
DEFF Research Database (Denmark)
Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo
…E[Xe−θX]/L(θ) = x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of the lognormal sum Sn = X1 + ⋯ + Xn: a saddlepoint approximation and an exponential-twisting importance sampling estimator. For the latter we…
Evolution and mass extinctions as lognormal stochastic processes
Maccone, Claudio
2014-10-01
In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History in a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes models). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b>0 rather than at the origin. These b-lognormals were then forced to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the 'degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller 'chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the 'trend of progress' in human history. (4) All these results also match with SETI in that the statistical Drake equation (a generalization of the ordinary Drake equation to encompass statistics) leads to the lognormal distribution as the probability distribution for the number of extra
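The GBM backbone of the Evo-SETI model can be sketched numerically: simulate paths of geometric Brownian motion and check that the terminal value is lognormal, with its mean on the exponential mean-value curve. The drift, volatility and horizon below are illustrative, not fitted to evolutionary data.

```python
import numpy as np

rng = np.random.default_rng(4)

# GBM: dN = mu*N dt + sigma*N dW, so log N(T) ~ Normal(log N0 + (mu - sigma^2/2) T, sigma^2 T)
# and N(T) is lognormal; E[N(T)] = N0 * exp(mu*T) is the exponential mean-value curve.
N0, mu, sigma, T = 1.0, 0.05, 0.2, 10.0
paths, steps = 20000, 200
dt = T / steps

# Euler-exact simulation in log space.
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
logN = np.log(N0) + np.cumsum((mu - sigma**2 / 2) * dt + sigma * dW, axis=1)
NT = np.exp(logN[:, -1])

mean_theory = N0 * np.exp(mu * T)
median_theory = N0 * np.exp((mu - sigma**2 / 2) * T)
mean_mc, median_mc = NT.mean(), np.median(NT)
```

The gap between mean and median (mean above median) is the lognormal skew that the b-lognormal machinery builds on.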
Estimation of expected value for lognormal and gamma distributions
International Nuclear Information System (INIS)
White, G.C.
1978-01-01
Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and the lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small-sample statistical properties of each of these estimators were compared for its own distribution, as well as for the other distribution, to check the robustness of the estimator. Results indicated that the arithmetic mean provides an unbiased estimator when the underlying density function of the sample is either lognormal or gamma, and that the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of the expected value, and the coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two.
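A small Monte Carlo sketch of the central comparison: the arithmetic mean is unbiased for a lognormal expected value, while naively back-transforming the mean of the logs is badly biased low. Sample sizes and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Lognormal with known mean exp(mu + sigma^2/2); illustrative parameters.
mu, sigma, n, reps = 1.0, 1.0, 20, 20000
true_mean = np.exp(mu + sigma**2 / 2)

x = rng.lognormal(mu, sigma, size=(reps, n))
arith = x.mean(axis=1)                      # arithmetic mean of each sample
naive = np.exp(np.log(x).mean(axis=1))      # back-transformed mean of the logs

bias_arith = arith.mean() / true_mean - 1   # near zero: unbiased
bias_naive = naive.mean() / true_mean - 1   # strongly negative: estimates the median side
```

The back-transformed estimator converges to exp(mu), the median, which for sigma = 1 sits roughly 40 percent below the mean; this is why MVUE corrections (or the plain arithmetic mean) are needed for skewed pollutant data.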
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
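The maximum entropy test itself is not reproduced here; as a simpler illustration of the lognormal-versus-Pareto tail question, the sketch below applies the standard Hill estimator, which recovers a stable tail index for Pareto data but produces only a threshold-dependent pseudo-index for lognormal data.

```python
import numpy as np

rng = np.random.default_rng(6)

def hill(data, k):
    """Hill estimator of the Pareto tail index from the k largest observations."""
    xs = np.sort(data)[::-1]
    return k / np.sum(np.log(xs[:k] / xs[k]))

alpha = 2.0
pareto_data = rng.pareto(alpha, 100000) + 1.0      # Pareto(alpha) with x_min = 1
lognorm_data = rng.lognormal(0.0, 1.0, 100000)

alpha_pareto = hill(pareto_data, 2000)             # stable, close to the true alpha
alpha_lognorm_near = hill(lognorm_data, 2000)      # pseudo-index at a moderate threshold
alpha_lognorm_far = hill(lognorm_data, 200)        # drifts upward deeper in the tail
```

The drifting pseudo-index is the classic symptom that a lognormal is masquerading as a power law, which is exactly the ambiguity the paper's entropy-based test is designed to resolve.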
Neuronal variability during handwriting: lognormal distribution.
Directory of Open Access Journals (Sweden)
Valery I Rupasov
We examined time-dependent statistical properties of electromyographic (EMG) signals recorded from intrinsic hand muscles during handwriting. Our analysis showed that trial-to-trial neuronal variability of EMG signals is well described by the lognormal distribution, clearly distinguished from the Gaussian (normal) distribution. This finding indicates that EMG formation cannot be described by a conventional model in which the signal is normally distributed because it is composed of the summation of many random sources. We found that the variability of temporal parameters of handwriting (handwriting duration and response time) is also well described by a lognormal distribution. Although the exact mechanism behind the lognormal statistics remains an open question, the results obtained should significantly impact experimental research, theoretical modeling and bioengineering applications of motor networks. In particular, our results suggest that accounting for the lognormal distribution of EMGs can improve biomimetic systems that strive to reproduce EMG signals in artificial actuators.
Beyond lognormal inequality: The Lorenz Flow Structure
Eliazar, Iddo
2016-11-01
Observed from a socioeconomic perspective, the intrinsic inequality of the lognormal law happens to manifest a flow generated by an underlying ordinary differential equation. In this paper we extend this feature of the lognormal law to a general "Lorenz Flow Structure" of Lorenz curves, objects that quantify socioeconomic inequality. The Lorenz Flow Structure establishes a general framework of size distributions that span continuous spectra of socioeconomic states ranging from the pure-communism extreme to the absolute-monarchy extreme. This study introduces and explores the Lorenz Flow Structure, analyzes its statistical properties and its inequality properties, unveils the unique role of the lognormal law within this general structure, and presents various examples of this general structure. Beyond the lognormal law, the examples include the inverse-Pareto and Pareto laws, which often govern the tails of composite size distributions.
Optimal approximations for risk measures of sums of lognormals based on conditional expectations
Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.
2008-11-01
In this paper we investigate approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ]…
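A sketch of the conditioning idea under simplifying assumptions (independent summands, with the conditioning variable Λ taken as the sum of the underlying normals, a common choice in this literature): the lower bound S_l = E[S|Λ] reproduces the mean of S exactly while never exceeding its variance, which is what makes it a convex-order lower bound.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sum of lognormals S = sum_i exp(Y_i), Y_i ~ N(mu_i, sig_i^2), independent here.
n = 5
mus = np.full(n, 0.1)
sigs = np.full(n, 0.3)

Y = rng.normal(mus, sigs, size=(200000, n))
S = np.exp(Y).sum(axis=1)

# Conditioning variable Lambda = sum of the underlying normals.
Lam = Y.sum(axis=1)
var_lam = np.sum(sigs**2)
r2 = sigs**2 / var_lam                     # regression coefficients of Y_i on Lambda
cond_mean = mus + r2 * (Lam[:, None] - mus.sum())
cond_var = sigs**2 * (1 - r2)

# Lower bound S_l = E[S | Lambda]: each term is a conditional lognormal mean.
S_l = np.exp(cond_mean + cond_var / 2).sum(axis=1)

same_mean = abs(S_l.mean() / S.mean() - 1)
var_ratio = S_l.var() / S.var()
```

S_l is a smooth function of the single variable Λ, so its distribution function is easy to evaluate; the quality of the bound depends on how much of the variability of S the chosen Λ captures.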
Simulation of gas hydrogen diffusion through partially water saturated mono-modal materials
International Nuclear Information System (INIS)
Boher, C.; Lorente, S.; Frizon, F.; Bart, F.
2012-01-01
Concerning the disposal of nuclear wastes, it is important to design concrete envelopes with pore networks that allow the diffusion of hydrogen towards the outside. This work documents the relationship between geopolymers, which are materials with a quasi mono-modal pore network, and their gaseous diffusivity. Using a mono-modal material allows studying the contribution of a specific pore size to gaseous diffusion. The pore network is characterized by mercury porosimetry. These experimental results are used as input data in a model named MOHYCAN. The modeling work consists of creating a virtual pore network. Then, water layers are deposited in this network to simulate variable water saturation levels. Finally, hydrogen is transported through the virtual network using a combination of ordinary diffusion and Knudsen diffusion. MOHYCAN calculates the hydrogen diffusion coefficient for water saturation degrees from 0% to 100%. The impacts of the pore network arrangement and of the pore network discretization have been studied. The results for a quasi mono-modal material are: (1) the diffusion coefficient is not sensitive to different virtual pore network arrangements; (2) the diffusion coefficient values drop sharply at a specific water saturation (due to the saturation of the main and unique pore family); (3) a model based on two pore families is sufficient to represent the pore network. These observations would not be valid for a material with a broad pore size distribution, such as cementitious materials.
The lognormal handwriter: learning, performing and declining.
Directory of Open Access Journals (Sweden)
Réjean ePlamondon
2013-12-01
The generation of handwriting is a complex neuromotor skill requiring the interaction of many cognitive processes. It aims at producing a message to be imprinted as an ink trace left on a writing medium. The generated trajectory of the pen tip is made up of strokes superimposed over time. The Kinematic Theory of rapid human movements and its family of lognormal models provide analytical representations of these strokes, often considered the basic unit of handwriting. This paradigm has not only been experimentally confirmed in numerous predictive and physiologically significant tests, but it has also been shown to be the ideal mathematical description for the impulse response of a neuromuscular system. This latter demonstration suggests that the lognormality of the velocity patterns can be interpreted as reflecting the behaviour of subjects who are in perfect control of their movements. To illustrate this interpretation, we present a short overview of the main concepts behind the Kinematic Theory and briefly describe how its models can be exploited, using various software tools, to investigate these ideal lognormal behaviours. We emphasize that the parameters extracted during various tasks can be used to analyze some of the underlying processes associated with their realization. To investigate the operational convergence hypothesis, we report on two original studies. First, we focus on the early steps of the motor learning process, seen as convergence toward the production of more precise lognormal patterns as young children practising handwriting become more fluent writers. Second, we illustrate how aging affects handwriting by pointing out the increasing departure from ideal lognormal behaviour as control of fine motricity begins to decline. Overall, the paper highlights this developmental process of converging toward a lognormal behaviour with learning, mastering this behaviour to succeed in performing a given task
On Riemann zeroes, lognormal multiplicative chaos, and Selberg integral
International Nuclear Information System (INIS)
Ostrovsky, Dmitry
2016-01-01
Rescaled Mellin-type transforms of the exponential functional of the Bourgade–Kuan–Rodgers statistic of Riemann zeroes are conjecturally related to the distribution of the total mass of the limit lognormal stochastic measure of Mandelbrot–Bacry–Muzy. The conjecture implies that a non-trivial, log-infinitely divisible probability distribution is associated with Riemann zeroes. As an application, integral moments, covariance structure, multiscaling spectrum, and asymptotics associated with the exponential functional are computed in closed form using the known meromorphic extension of the Selberg integral.
International Nuclear Information System (INIS)
Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.
2016-01-01
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg². We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3–10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ²/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Lastly, our methods are validated against maps from the MICE Grand Challenge N-body simulation.
Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels
Ansari, Imran Shafique
2015-05-01
Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.
Multilevel quadrature of elliptic PDEs with log-normal diffusion
Harbrecht, Helmut
2015-01-07
We apply multilevel quadrature methods for the moment computation of the solution of elliptic PDEs with lognormally distributed diffusion coefficients. The computation of the moments is a difficult task since they appear as high dimensional Bochner integrals over an unbounded domain. Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number of quadrature points times the complexity for a single elliptic PDE solve. The multilevel idea is to reduce this complexity by combining quadrature methods with different accuracies with several spatial discretization levels in a sparse grid like fashion.
On the Laplace transform of the Lognormal distribution
DEFF Research Database (Denmark)
Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo
Integral transforms of the lognormal distribution are of great importance in statistics and probability, yet closed-form expressions do not exist. A wide variety of methods have been employed to provide approximations, both analytical and numerical. In this paper, we analyze a closed-form approximation L˜(θ) of the Laplace transform L(θ) which is obtained via a modified version of Laplace's method. This approximation, given in terms of the Lambert W(⋅) function, is tractable enough for applications. We prove that L˜(θ) is asymptotically equivalent to L(θ) as θ→∞. We apply this result…
Log-Normality and Multifractal Analysis of Flame Surface Statistics
Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.
2013-11-01
The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. Both pdfs are found to be near-lognormally distributed and show self-similar behavior with increasing radius. The near-lognormality and rather intermittent behavior of the flame length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis.
Increased Statistical Efficiency in a Lognormal Mean Model
Directory of Open Access Journals (Sweden)
Grant H. Skrepnek
2014-01-01
Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
Confidence intervals for the lognormal probability distribution
International Nuclear Information System (INIS)
Smith, D.L.; Naberejnev, D.G.
2004-01-01
The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a contrived numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as the best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large, a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation in this way. Reliance on the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
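The contrast described above is easy to reproduce: for a lognormal with a large spread, "mean minus one standard deviation" is negative, which is impossible for the variable itself, whereas a central 95% probability interval stays strictly positive. The value of σ below is chosen arbitrarily for illustration.

```python
import numpy as np
from scipy import stats

# Lognormal with a large spread (sigma chosen arbitrarily for illustration).
mu, sigma = 0.0, 1.5
d = stats.lognorm(s=sigma, scale=np.exp(mu))

mean, sd = d.mean(), d.std()
lo, hi = d.ppf(0.025), d.ppf(0.975)    # central 95% probability interval

naive_low = mean - sd                   # "best value minus error" goes negative
```

The interval [lo, hi] carries exactly 95% of the probability mass and respects the positivity of the variable, which is the communication's point about preferring intervals to a mean-plus-or-minus-error summary.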
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
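For the special case of an AND gate with independent lognormal basic events, the top event is a product of lognormals and hence exactly lognormal, so its percentiles have a closed form; this is the simplest case of why lognormal fault-tree approximations are attractive (OR gates require a sum approximation instead). The basic-event medians and log-spreads below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# AND gate with three independent lognormal basic-event probabilities (invented values).
mus = np.log(np.array([1e-3, 2e-3, 5e-4]))   # log of median probabilities
sigs = np.array([0.5, 0.7, 0.6])             # log-space spreads

# A product of lognormals is lognormal: log-medians add, log-variances add.
mu_top = mus.sum()
sig_top = np.sqrt(np.sum(sigs**2))
p95_closed = np.exp(mu_top + 1.645 * sig_top)   # closed-form 95th percentile

# Monte Carlo check of the same percentile.
samples = np.prod(rng.lognormal(mus, sigs, size=(200000, 3)), axis=1)
p95_mc = np.quantile(samples, 0.95)
```

The closed form is exact here up to sampling noise; the article's contribution is extending this kind of tractability to whole trees mixing AND and OR gates.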
Inc, Mustafa; Aliyu, Aliyu Isa; Yusuf, Abdullahi; Baleanu, Dumitru
2018-01-01
This paper addresses the coupled nonlinear Schrödinger equation (CNLSE) in monomode step-index optical fibers, which describes the nonlinear modulation of two monochromatic waves whose group velocities are almost equal. A class of dark, bright, dark-bright and dark-singular optical solitary wave solutions of the model are constructed using the complex envelope function ansatz. Singular solitary waves are also retrieved as by-products of the integration scheme. This naturally leads to some constraint conditions placed on the solitary wave parameters, which must hold for the solitary waves to exist. The modulation instability (MI) analysis of the model is studied based on the standard linear-stability analysis. Numerical simulations and physical interpretations of the obtained results are demonstrated. It is hoped that the results reported in this paper can enrich the nonlinear dynamical behaviors of the CNLSE.
Life prediction for white OLED based on LSM under lognormal distribution
Zhang, Jianping; Liu, Fang; Liu, Yu; Wu, Helen; Zhu, Wenqing; Wu, Wenli; Wu, Liang
2012-09-01
In order to acquire reliability information for white Organic Light Emitting Display (OLED) devices, three groups of OLED constant stress accelerated life tests (CSALTs) were carried out to obtain failure data for the samples. The lognormal distribution function was applied to describe the OLED life distribution, and the accelerated life equation was determined by the least squares method (LSM). The Kolmogorov-Smirnov test was performed to verify whether the white OLED life follows a lognormal distribution. Author-developed software was employed to predict the average life and the median life. The numerical results indicate that the white OLED life follows a lognormal distribution, and that the accelerated life equation follows the inverse power law. The estimated life information of the white OLED provides manufacturers and customers with important guidelines.
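The lognormal fit and Kolmogorov-Smirnov check described above can be sketched as follows. The lifetimes here are synthetic stand-ins for the paper's CSALT failure data, and the critical value is the usual large-sample approximation; this is an illustration of the procedure, not the authors' software.

```python
import math
import random
import statistics

random.seed(0)

# Synthetic stand-in for measured lifetimes (hours), generated lognormal
# here purely for illustration
lifetimes = [math.exp(random.gauss(8.0, 0.4)) for _ in range(50)]

# Fit the lognormal by taking logs and estimating (mu, sigma)
logs = sorted(math.log(t) for t in lifetimes)
mu = statistics.fmean(logs)
sigma = statistics.stdev(logs)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Kolmogorov-Smirnov statistic of the log-lifetimes vs. the fitted normal
n = len(logs)
d = max(max((i + 1) / n - norm_cdf(x), norm_cdf(x) - i / n)
        for i, x in enumerate(logs))

# Large-sample 5% critical value is roughly 1.36 / sqrt(n)
print(d, d < 1.36 / math.sqrt(n))

# Median and mean life under the fitted lognormal
print(math.exp(mu), math.exp(mu + 0.5 * sigma**2))
```

Note that the fitted lognormal median is exp(mu) while the mean is exp(mu + sigma²/2), which is why the two life estimates in the abstract differ.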
van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben
2015-01-01
High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal
International Nuclear Information System (INIS)
Keall, P J; Webb, S
2007-01-01
The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP calculated using the predicted parameters to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets.
International Nuclear Information System (INIS)
Agterberg, Frits
2017-01-01
Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that
Certain bright soliton interactions of the Sasa-Satsuma equation in a monomode optical fiber
Liu, Lei; Tian, Bo; Chai, Han-Peng; Yuan, Yu-Qiang
2017-03-01
Under investigation in this paper is the Sasa-Satsuma equation, which describes the propagation of ultrashort pulses in a monomode fiber with the third-order dispersion, self-steepening, and stimulated Raman scattering effects. Based on the known bilinear forms, through the modified expanded formulas and symbolic computation, we construct the bright two-soliton solutions. Through classifying the interactions under different parameter conditions, we reveal six cases of interactions between the two solitons via an asymptotic analysis. With the help of the analytic and graphic analysis, we find that such interactions are different from those of the nonlinear Schrödinger equation and Hirota equation. When those solitons interact with each other, the singular-I soliton is shape-preserving, while the singular-II and nonsingular solitons may be shape preserving or shape changing. Such elastic and inelastic interaction phenomena in a scalar equation might enrich the knowledge of soliton behavior, which could be expected to be experimentally observed.
Dobinski-type relations and the log-normal distribution
International Nuclear Information System (INIS)
Blasiak, P; Penson, K A; Solomon, A I
2003-01-01
We consider sequences of generalized Bell numbers B(n), n = 1, 2, ..., which can be represented by Dobinski-type summation formulae, i.e. B(n) = (1/C) Σ_{k=0}^∞ [P(k)]^n / D(k), with P(k) a polynomial, D(k) a function of k and C = const. They include the standard Bell numbers (P(k) = k, D(k) = k!, C = e), their generalizations B_{r,r}(n), r = 2, 3, ..., appearing in the normal ordering of powers of boson monomials (P(k) = (k+r)!/k!, D(k) = k!, C = e), variants of 'ordered' Bell numbers B_o^(p)(n) (P(k) = k, D(k) = ((p+1)/p)^k, C = 1 + p, p = 1, 2, ...), etc. We demonstrate that for α, β, γ, t positive integers (α, t ≠ 0), [B(αn² + βn + γ)]^t is the nth moment of a positive function on (0, ∞) which is a weighted infinite sum of log-normal distributions. (letter to the editor)
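The standard-Bell special case (P(k) = k, D(k) = k!, C = e) can be checked directly by truncating the Dobinski sum:

```python
import math

def bell_dobinski(n, terms=60):
    # Dobinski's formula: B(n) = (1/e) * sum_{k>=0} k**n / k!
    # The series converges fast, so a modest truncation suffices
    return round(sum(k**n / math.factorial(k) for k in range(terms)) / math.e)

print([bell_dobinski(n) for n in range(1, 8)])  # [1, 2, 5, 15, 52, 203, 877]
```

The same truncated-sum pattern applies to the generalized sequences by swapping in the corresponding P(k), D(k), and C.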
Neutron dosimetry and spectrometry with Bonner spheres. Working out a log-normal reference matrix
International Nuclear Information System (INIS)
Zaborowski, Henrick.
1981-11-01
From the experimental and theoretical studies made on the Bonner sphere system with a ⁶LiI(Eu) crystal and with a miniaturized ³He counter we obtain the normalized energy response functions R*_i(E). This normalization is obtained by the mathematization of the resolution function R*(i,E) under the log-normal distribution hypothesis for monoenergetic neutrons, presented in April 1976 at the International Symposium on Californium-252. The fit of the log-normal hypothesis to the experimental and theoretical data is very satisfactory. The tabulated parameter values allow a precise interpolation, at all energies between 0.4 eV and 15 MeV and for all sphere diameters between 2 and 12 inches, of the discretized reference matrix R*_ij for applications to neutron dosimetry and spectrometry [fr]
Neti, Prasad V.S.V.; Howell, Roger W.
2010-01-01
Recently, the distribution of radioactivity among a population of cells labeled with ²¹⁰Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of ²¹⁰Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
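The Poisson-lognormal mixture discussed above can be sketched by sampling: lognormal activities (with hypothetical parameters, not the paper's fitted values) and then Poisson "track counts" on top. The overdispersion of the counts is what distinguishes the mixture from a pure Poisson model.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical cellular activities (arbitrary units): lognormal across cells
activities = [math.exp(random.gauss(1.5, 0.8)) for _ in range(5000)]

def poisson(lam):
    # Knuth's multiplication algorithm; adequate for small means
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Observed track counts: Poisson detection on top of lognormal activity,
# i.e. a Poisson-lognormal mixture
tracks = [poisson(a) for a in activities]

mean_t = statistics.fmean(tracks)
var_t = statistics.pvariance(tracks)
# A pure Poisson would give variance ~ mean; the mixture is overdispersed
print(mean_t, var_t, var_t > mean_t)
```

Comparing the count variance with the count mean is a quick diagnostic for whether Poisson counting noise alone can explain the spread, mirroring the paper's model comparison.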
Log-Normal Turbulence Dissipation in Global Ocean Models
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
Efficient simulation of tail probabilities of sums of correlated lognormals
DEFF Research Database (Denmark)
Asmussen, Søren; Blanchet, José; Juneja, Sandeep
We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient; the first ... optimizes the scaling parameter of the covariance. The second estimator decomposes the probability of interest in two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling...
Lognormal Behavior of the Size Distributions of Animation Characters
Yamamoto, Ken
This study investigates the statistical properties of character sizes in animation, superhero series, and video games. Using online databases of Pokémon (a video game) and Power Rangers (a superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) both follow the lognormal distribution. As the theoretical mechanism of this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.
Aerosol Extinction Profile Mapping with Lognormal Distribution Based on MPL Data
Lin, T. H.; Lee, T. T.; Chang, K. E.; Lien, W. H.; Liu, G. R.; Liu, C. Y.
2017-12-01
This study addresses the mapping of aerosol vertical distribution profiles by a mathematical function. Given the similarity in distribution pattern, the lognormal distribution is examined for mapping the aerosol extinction profile based on MPL (Micro Pulse LiDAR) in situ measurements. The variables of the lognormal distribution are the log mean (μ) and log standard deviation (σ), which are correlated with the aerosol optical depth (AOD) and the planetary boundary layer height (PBLH) associated with the altitude of the extinction peak (Mode) defined in this study. On the basis of 10 years of MPL data with a single peak, the mapping results showed that the mean errors of the Mode and σ retrievals are 16.1% and 25.3%, respectively. The mean error of the σ retrieval can be reduced to 16.5% for cases with a larger distance between the PBLH and the Mode. The proposed method is further applied to the MODIS AOD product in mapping extinction profiles for the retrieval of PM2.5 from satellite observations. The results indicated good agreement between retrievals and ground measurements when aerosols under 525 meters are well mixed. The feasibility of the proposed method for satellite remote sensing is also suggested by the case study. Keywords: Aerosol extinction profile, Lognormal distribution, MPL, Planetary boundary layer height (PBLH), Aerosol optical depth (AOD), Mode
Baar, Marsha R.; Gammerdinger, William; Leap, Jennifer; Morales, Erin; Shikora, Jonathan; Weber, Michael H.
2014-01-01
Five reactions were rate-accelerated relative to the standard reflux workup in both multi-mode and mono-mode microwave ovens, and the results were compared to determine whether the sequential processing of a mono-mode unit could provide for better lab logistics and pedagogy. Conditions were optimized so that yields matched in both types of…
Asymptotics of sums of lognormal random variables with Gaussian copula
DEFF Research Database (Denmark)
Asmussen, Søren; Rojas-Nandayapa, Leonardo
2008-01-01
Let (Y1, ..., Yn) have a joint n-dimensional Gaussian distribution with a general mean vector and a general covariance matrix, and let Xi = eYi, Sn = X1 + ⋯ + Xn. The asymptotics of P (Sn > x) as n → ∞ are shown to be the same as for the independent case with the same lognormal marginals. In part...
A physical explanation of the lognormality of pollutant concentrations
International Nuclear Information System (INIS)
Ott, W.R.
1990-01-01
Investigators in different environmental fields have reported that the concentrations of various measured substances have frequency distributions that are lognormal, or nearly so. That is, when the logarithms of the observed concentrations are plotted as a frequency distribution, the resulting distribution is approximately normal, or Gaussian, over much of the observed range. Examples include radionuclides in soil, pollutants in ambient air, indoor air quality, trace metals in streams, metals in biological tissue, and calcium in human remains. The ubiquity of the lognormal distribution in environmental processes is surprising and has not been adequately explained, since common processes in nature (for example, computation of the mean and the analysis of error) usually give rise to distributions that are normal rather than lognormal. This paper takes the first step toward explaining why lognormal distributions can arise naturally from certain physical processes that are analogous to those found in the environment. Here, these processes are treated mathematically, and the results are illustrated in a laboratory beaker experiment that is simulated on the computer.
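The multiplicative mechanism alluded to above (e.g. successive random dilutions of a concentration) is easy to simulate; the retention-factor range and step count below are arbitrary illustrative choices, not values from the paper.

```python
import math
import random
import statistics

random.seed(7)

# Successive random dilutions: an initial concentration is repeatedly
# multiplied by independent random retention factors
def final_concentration(c0=100.0, steps=30):
    for _ in range(steps):
        c0 *= random.uniform(0.5, 1.0)
    return c0

sample = [final_concentration() for _ in range(20_000)]
logs = [math.log(c) for c in sample]

# The log of a product is a sum, so by the central limit theorem the
# log-concentrations are approximately normal, i.e. the concentrations
# themselves are approximately lognormal
mu = statistics.fmean(logs)
sd = statistics.pstdev(logs)
skew = statistics.fmean(((x - mu) / sd) ** 3 for x in logs)
print(round(skew, 3))  # near zero for a normal distribution
```

The near-zero skewness of the logs is the signature of lognormality that the beaker experiment in the paper demonstrates physically.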
Percentile estimation using the normal and lognormal probability distribution
International Nuclear Information System (INIS)
Bement, T.R.
1980-01-01
Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.
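A minimal Monte Carlo sketch of consequence (1), assuming standard-lognormal data and the 97.72nd percentile (two standard deviations); the sample size and parameters are illustrative, not those of the survey data.

```python
import math
import random
import statistics

random.seed(3)

# Data that are actually lognormal (standard lognormal for illustration)
data = [math.exp(random.gauss(0.0, 1.0)) for _ in range(100_000)]

# True 97.72nd percentile: exp(mu + 2*sigma) = exp(2)
true_p = math.exp(2.0)

# Normal-theory estimate: mean + 2 standard deviations of the raw data
normal_est = statistics.fmean(data) + 2.0 * statistics.pstdev(data)

# Lognormal-theory estimate: exp(mean + 2 standard deviations of the logs)
logs = [math.log(x) for x in data]
lognormal_est = math.exp(statistics.fmean(logs) + 2.0 * statistics.pstdev(logs))

print(true_p, normal_est, lognormal_est)
```

Under this setup the normal-theory estimate falls short of the true percentile while the lognormal-theory estimate recovers it, illustrating why the distributional assumption matters for the standard deviation maps.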
Generating log-normal mock catalog of galaxies in redshift space
Energy Technology Data Exchange (ETDEWEB)
Agrawal, Aniket; Makiya, Ryu; Saito, Shun; Komatsu, Eiichiro [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Chiang, Chi-Ting [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Jeong, Donghui, E-mail: aniket@mpa-garching.mpg.de, E-mail: makiya@mpa-garching.mpg.de, E-mail: chi-ting.chiang@stonybrook.edu, E-mail: djeong@psu.edu, E-mail: ssaito@mpa-garching.mpg.de, E-mail: komatsu@mpa-garching.mpg.de [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)
2017-10-01
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
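A heavily simplified 1D sketch of the sampling step (Gaussian field, exponentiation, Poisson draw), with an illustrative grid size and mean density. The actual code also builds spatially correlated fields and velocity fields, which are omitted here.

```python
import math
import random

random.seed(11)

# Toy 1D version: an (uncorrelated) Gaussian field, exponentiated to a
# log-normal density, then Poisson-sampled into galaxy counts.
# n_cells, nbar (mean galaxies per cell) and sigma_g are illustrative.
n_cells, nbar, sigma_g = 512, 5.0, 0.6

gauss_field = [random.gauss(0.0, sigma_g) for _ in range(n_cells)]
# exp(g - sigma^2/2) has mean 1, so the density field has mean nbar
density = [nbar * math.exp(g - 0.5 * sigma_g**2) for g in gauss_field]

def poisson(lam):
    # Knuth's multiplication algorithm; adequate for small means
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

counts = [poisson(rho) for rho in density]
mean_count = sum(counts) / n_cells
print(round(mean_count, 2))  # close to nbar
```

The key property used here is that exponentiating a Gaussian with a mean offset of -sigma²/2 yields a positive, log-normally distributed field with the desired mean, which the Poisson draw then converts into a discrete catalog.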
Grunspan, Cyril
2011-01-01
First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of implied normal volatility in terms of the time-value of a European call option, and formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique
Rached, Nadhir B.
2015-06-08
The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
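For contrast with the importance sampling estimator, the crude Monte Carlo baseline that the paper improves upon looks like the following; the lognormal parameters and threshold are arbitrary illustrations, not the paper's test cases.

```python
import math
import random

random.seed(5)

# Crude MC estimate of the CCDF P(S > gamma) for a sum of independent,
# not identically distributed lognormals (parameters are illustrative)
mus = [0.0, 0.5, 1.0]
sigmas = [1.0, 0.8, 0.5]
gamma = 20.0
n = 200_000

hits = 0
for _ in range(n):
    s = sum(math.exp(random.gauss(m, sg)) for m, sg in zip(mus, sigmas))
    hits += s > gamma
p_hat = hits / n

# As gamma grows, hits become rare and the relative error of this naive
# estimator explodes; that is the regime where importance sampling via
# hazard-rate twisting pays off
print(p_hat)
```

The relative error of crude MC scales like 1/sqrt(n * p), so for probabilities of, say, 1e-9 the sample size becomes prohibitive, which motivates the hazard-rate twisting construction.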
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements of fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that when using the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.
On the capacity of FSO links under lognormal and Rician-lognormal turbulences
Ansari, Imran Shafique
2014-09-01
A unified capacity analysis under weak and composite turbulences of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, a unified exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, unified approximate and simple closed-form results are offered for the ergodic capacity at high SNR regime as well as at low SNR regime. All the presented results are verified via computer-based Monte-Carlo simulations.
Weibull and lognormal Taguchi analysis using multiple linear regression
International Nuclear Information System (INIS)
Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F.
2015-01-01
The paper provides reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation and (3) to perform the ANOVA analysis for both the robust and the normal operational Weibull family. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper, the issues of estimating a Weibull family by using a design of experiments (DOE) are first addressed by using an L9(3^4) orthogonal array (OA) in both the TM and in the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method's efficiency is shown through its application to the OA used and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi Method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family for both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform its ANOVA analysis in TM and ALT analysis. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level.
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared to the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.
Correlated random sampling for multivariate normal and log-normal distributions
International Nuclear Information System (INIS)
Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.
2012-01-01
A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
International Nuclear Information System (INIS)
Lund, D.
1992-01-01
The report analyses the possibility that the lognormal diffusion process should be an equilibrium spot price process for an exhaustible resource. A partial equilibrium model is used under the assumption that the resource deposits have different extraction costs. Two separate problems have been pointed out. Under full certainty, when the process reduces to an exponentially growing price, the equilibrium places a very strong restriction on a relationship between the demand function and the cost density function. Under uncertainty there is an additional problem that during periods in which the price is lower than its previously recorded high, no new deposits will start extraction. 30 refs., 1 fig
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The effect of mis-specification on mean and selection between the Weibull and lognormal models
Jia, Xiang; Nadarajah, Saralees; Guo, Bo
2018-02-01
The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered if the lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present a published data set to illustrate the study.
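A minimal simulation of the lognormal-sample case illustrates the comparison of the correctly specified MLE with the mis-specified Weibull QMLE of the mean; the sample size, parameter values, and use of `weibull_min.fit` with the location fixed at zero are illustrative stand-ins, not the paper's exact setup.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 0.5, 2000
true_mean = np.exp(mu + sigma**2 / 2)            # lognormal mean

sample = rng.lognormal(mu, sigma, n)

# MLE of the mean under the correct (lognormal) model
m_hat, s_hat = np.log(sample).mean(), np.log(sample).std()
mle_mean = np.exp(m_hat + s_hat**2 / 2)

# QMLE of the mean under a mis-specified Weibull fit (location fixed at 0)
c_hat, _, scale_hat = stats.weibull_min.fit(sample, floc=0)
qmle_mean = scale_hat * gamma(1 + 1 / c_hat)     # Weibull mean formula

print(true_mean, mle_mean, qmle_mean)
```

Repeating the simulation over many replicates and averaging gives the bias and MSE ratios the abstract refers to.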
Handbook of tables for order statistics from lognormal distributions with applications
Balakrishnan, N
1999-01-01
Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
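A small simulation illustrates the claim: a sum of positive (here log-normal) summands remains strongly right-skewed, while the logarithm of the sum is nearly symmetric, i.e. the sum is well described by a log-normal long before the Gaussian limit is reached. The summand distribution and sample sizes below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# sums of N positive (here log-normal) summands: an additive process
N, trials = 10, 100_000
sums = rng.lognormal(0.0, 1.0, size=(trials, N)).sum(axis=1)

skew_sum = stats.skew(sums)          # the sum itself is still strongly skewed
skew_log = stats.skew(np.log(sums))  # ...while its logarithm is nearly symmetric
print(skew_sum, skew_log)
```

Increasing N shrinks both skewnesses, but the log of the sum stays far closer to symmetric at every N, which is the sense in which the sum "tends faster" to a log-normal.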
Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence
Song, Xuegui; Yang, Fan; Cheng, Julian; Alouini, Mohamed-Slim
2015-01-01
Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over the lognormal turbulence channels. Both exact BER and approximate BER expressions are presented. We demonstrate that the approximate BER, which is obtained by dividing the symbol error rate by the number of bits per symbol, can be used to estimate the BER performance with acceptable accuracy. Through our asymptotic analysis, we derive a closed-form asymptotic BER performance loss expression for MDPSK with respect to MPSK in the lognormal turbulence channels.
Song, Xuegui; Cheng, Julian; Alouini, Mohamed-Slim
2014-01-01
Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes.
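The quoted 2.32 dB gap coincides numerically with the familiar high-SNR differential-detection penalty of DQPSK relative to QPSK, 10·log10(1/(4·sin²(π/8))); whether the authors derive it this way is not shown here, but the figure itself is a one-line check:

```python
import math

# high-SNR penalty of DQPSK vs QPSK: effective-distance factor
# 4*sin^2(pi/8) = 2 - sqrt(2), so the SNR loss in dB is
loss_db = 10 * math.log10(1 / (4 * math.sin(math.pi / 8) ** 2))
print(round(loss_db, 2))  # 2.32
```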
Log-normality of indoor radon data in the Walloon region of Belgium
International Nuclear Information System (INIS)
Cinelli, Giorgia; Tondeur, François
2015-01-01
The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but this low-C trend can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail, which is of little interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers, and rarely to a "fat tail". With very few exceptions, the log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups. - Highlights: • Deviations of the distribution of Belgian indoor Rn data from the log-normal trend. • 3-component structure of indoor Rn: subsoil, outdoor air and building materials. • Simulated data generated to provide a theoretical frame for understanding deviations. • Data organised in homogeneous geological groups; better agreement with the log-normal model.
powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks
Murray, Steven G.
2018-05-01
powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
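The underlying recipe (colour white noise with √P(k) in Fourier space, transform back, exponentiate, renormalize) can be sketched independently of powerbox's actual API; the function below is an illustrative toy, not the package's interface or normalization convention.

```python
import numpy as np

def lognormal_field(n, power=lambda k: k ** -2.0, seed=0):
    """Toy 2-D log-normal overdensity field: colour white noise with
    sqrt(P(k)) in Fourier space, invert, exponentiate, renormalize.
    A sketch of the idea only -- not powerbox's API."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n)
    k = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
    amp = np.where(k > 0, np.sqrt(power(np.where(k > 0, k, 1.0))), 0.0)
    g = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * amp).real
    g = (g - g.mean()) / g.std()        # unit-variance Gaussian field
    delta = np.exp(0.5 * g)             # log-normal, strictly positive
    return delta / delta.mean() - 1.0   # zero-mean overdensity, > -1 everywhere

field = lognormal_field(64)
print(field.shape, field.min() > -1.0)
```

The log-normal transform is what makes the field a plausible galaxy-density mock: unlike a Gaussian field, the overdensity can never fall below -1.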
Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.
2018-05-01
Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: 1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); 2) the conditional likelihood, p(Mstar, Mgas| Mhalo, z) is accurately described by a multivariate, log-normal distribution, and; 3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient
Nobile, Fabio
2016-03-18
In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performances similar to those of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to also tackle problems with rough coefficients efficiently, significantly improving the performance of a standard Monte Carlo scheme.
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Bayesian models, or statistical smoothing based on the log-normal model, have therefore been introduced to address this problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using the WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method, and can overcome the SMR problem when no bladder cancer is observed in an area.
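The SMR itself is a one-liner, and its failure mode for small areas is easy to see; the counts below are hypothetical, and the shrinkage step is only a crude stand-in for the fitted log-normal model, not the study's actual estimator.

```python
import numpy as np

# hypothetical area-level data: observed and expected case counts
observed = np.array([0.0, 2.0, 5.0, 12.0, 30.0])
expected = np.array([1.5, 2.0, 4.0, 10.0, 25.0])

smr = observed / expected   # classical relative-risk estimate
# SMR is exactly 0 where nothing was observed and noisy where expected is
# small; a log-normal model instead shrinks log-risks toward a common mean:
log_rr = np.log((observed + 0.5) / expected)            # continuity-corrected
smoothed = np.exp(0.5 * log_rr + 0.5 * log_rr.mean())   # crude shrinkage sketch

print(smr)
print(smoothed.round(3))
```

Note how the smoothed estimate for the zero-count area is pulled to a small positive risk instead of an implausible exact zero.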
Species Abundance in a Forest Community in South China: A Case of Poisson Lognormal Distribution
Institute of Scientific and Technical Information of China (English)
Zuo-Yun YIN; Hai REN; Qian-Mei ZHANG; Shao-Lin PENG; Qin-Feng GUO; Guo-Yi ZHOU
2005-01-01
Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m×20 m, 5 m×5 m, and 1 m×1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed the zero-truncated Poisson lognormal well; (ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (σ and μ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.
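The Poisson log-normal pmf that any such fit must evaluate has no closed form; it is the Poisson pmf integrated over a log-normal rate, which can be done numerically. The sketch below (parameter values illustrative) places an adaptive-quadrature breakpoint at the integrand's peak for robustness.

```python
import numpy as np
from scipy import integrate, stats

def poisson_lognormal_pmf(n, mu, sigma):
    """P(N = n) when N | lam ~ Poisson(lam) and log(lam) ~ N(mu, sigma^2),
    integrated over z = (log(lam) - mu) / sigma ~ N(0, 1)."""
    f = lambda z: stats.poisson.pmf(n, np.exp(mu + sigma * z)) * stats.norm.pdf(z)
    peak = (np.log(n) - mu) / sigma if n > 0 else 0.0   # integrand peak in z
    val, _ = integrate.quad(f, -8.0, 8.0,
                            points=[float(np.clip(peak, -8.0, 8.0))], limit=200)
    return val

probs = [poisson_lognormal_pmf(n, mu=1.0, sigma=1.0) for n in range(150)]
print(sum(probs))  # ≈ 1: almost all mass lies below n = 150 here
```

The zero-truncated version used for abundance data just renormalizes by 1 − P(N = 0).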
Directory of Open Access Journals (Sweden)
Skraparlis D
2009-01-01
Full Text Available The study of relaying systems has found renewed interest in the context of cooperative diversity for communication channels suffering from fading. This paper provides analytical expressions for the end-to-end SNR and outage probability of cooperative diversity in correlated lognormal channels, typically found in indoor and specific outdoor environments. The system under consideration utilizes decode-and-forward relaying and Selection Combining or Maximum Ratio Combining at the destination node. The provided expressions are used to evaluate the gains of cooperative diversity compared to noncooperation in correlated lognormal channels, taking into account the spectral and energy efficiency of the protocols and the half-duplex or full-duplex capability of the relay. Our analysis demonstrates that correlation and lognormal variances play a significant role in the performance gain of cooperative diversity over noncooperation.
Directory of Open Access Journals (Sweden)
García, Victoriano J.
2014-12-01
Full Text Available In this paper, a new heavy-tailed distribution is used to model data with a strong right tail, as often occurs in practical situations. The distribution proposed is derived from the lognormal distribution, by using the Marshall and Olkin procedure. Some basic properties of this new distribution are obtained and we present situations where this new distribution correctly reflects the sample behaviour for the right tail probability. An application of the model to dental insurance data is presented and analysed in depth. We conclude that the generalized lognormal distribution proposed is a distribution that should be taken into account among other possible distributions for insurance data in which the properties of a heavy-tailed distribution are present. || Presentamos una nueva distribución lognormal con colas pesadas que se adapta bien a muchas situaciones prácticas en el campo de los seguros. Utilizamos el procedimiento de Marshall y Olkin para generar tal distribución y estudiamos sus propiedades básicas. Se presenta una aplicación de la misma para datos de seguros dentales que es analizada en profundidad, concluyendo que tal distribución deberá formar parte del catálogo de distribuciones a tener cuenta para la modernización de datos en seguros cuando hay presencia de colas pesadas.
Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence
Song, Xuegui
2015-03-01
Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over the lognormal turbulence channels. Both exact BER and approximate BER expressions are presented. We demonstrate that the approximate BER, which is obtained by dividing the symbol error rate by the number of bits per symbol, can be used to estimate the BER performance with acceptable accuracy. Through our asymptotic analysis, we derive a closed-form asymptotic BER performance loss expression for MDPSK with respect to MPSK in the lognormal turbulence channels. © 2015 IEEE.
Competition and fragmentation: a simple model generating lognormal-like distributions
International Nuclear Information System (INIS)
Schwaemmle, V; Queiros, S M D; Brigatti, E; Tchumatchenko, T
2009-01-01
The current distribution of language size in terms of speaker population is generally described using a lognormal distribution. Analyzing the original real data we show how the double-Pareto lognormal distribution can give an alternative fit that indicates the existence of a power law tail. A simple Monte Carlo model is constructed based on the processes of competition and fragmentation. The results reproduce the power law tails of the real distribution well and give better results for a poorly connected topology of interactions.
Hall, Eric
2016-01-09
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.
On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications
Al-Quwaiee, Hessa; Alouini, Mohamed-Slim
2015-01-01
Closed-form expressions for the ergodic capacity of independent or correlated diversity branches over log-normal fading channels are not available in the literature. Thus, it becomes of interest to investigate the behavior of such a metric at high
Hall, Eric; Haakon, Hoel; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul
2016-01-01
lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error.
Asymptotic Expansions of the Lognormal Implied Volatility : A Model Free Approach
Cyril Grunspan
2011-01-01
We invert the Black-Scholes formula. We consider the cases of low strike, large strike, short maturity, and large maturity. We give explicitly the first 5 terms of the expansions. A method to compute all the terms by induction is also given. At the money, we have a closed-form formula for the implied lognormal volatility in terms of a power series in the call price.
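At the money (strike equal to forward, zero rates) the Black-Scholes price depends on volatility only through Φ(σ√T/2), so the inversion that the power series expands is available in closed form; a sketch under exactly those assumptions:

```python
from math import sqrt
from scipy.stats import norm

def atm_implied_vol(call_price, spot, maturity):
    """Invert C = S*(2*Phi(sigma*sqrt(T)/2) - 1), the at-the-money
    (K = forward, zero-rate) Black-Scholes price, for sigma."""
    return 2.0 * norm.ppf((1.0 + call_price / spot) / 2.0) / sqrt(maturity)

# round-trip check against the forward formula
sigma, S, T = 0.2, 100.0, 1.0
C = S * (2.0 * norm.cdf(sigma * sqrt(T) / 2.0) - 1.0)
print(atm_implied_vol(C, S, T))  # ≈ 0.2
```

Expanding Φ⁻¹ around 1/2 in powers of C/S recovers a series of the kind the abstract describes.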
STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION
Directory of Open Access Journals (Sweden)
Oleg V. Rusakov
2015-01-01
Full Text Available We construct a stochastic model of real estate pricing. The method of price construction is based on a sequential comparison of the supply prices. We prove that, under standard assumptions imposed on the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit has the lognormal law of distribution. We verify the agreement of the empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint Petersburg (Russia). To establish this agreement we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of the log-normal distribution exceeds the mode (the most probable value), it follows that prices valued by the mathematical expectation are systematically overstated.
Multilevel quadrature of elliptic PDEs with log-normal diffusion
Harbrecht, Helmut; Peters, Michael; Siebenmorgen, Markus
2015-01-01
Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number
Directory of Open Access Journals (Sweden)
Savary Etienne
2013-11-01
Full Text Available The main purpose of this study is to investigate the direct microwave sintering, in a single-mode cavity, of two ceramic biomaterials: hydroxyapatite and tri-calcium phosphate. The study shows that this process yields dense samples with fine microstructures in very short times, under 20 minutes of irradiation. Mechanical characterization of the microwave-sintered samples reveals higher Young's modulus and hardness values than those usually obtained on conventionally sintered samples. These results are discussed in relation to the observed microstructure and the experimental parameters: powder granulometry, sintering temperature, and microwave irradiation time.
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity is more resistant to the channel fading caused by spatial correlation.
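Wilkinson's method matches the first two moments of a sum of correlated log-normal variables to a single log-normal; a compact sketch (the channel parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def wilkinson(mu, cov):
    """Wilkinson's approximation: match the first two moments of
    S = sum_i exp(X_i), X ~ N(mu, cov), with a single log-normal
    exp(N(m, s^2)).  Returns (m, s)."""
    mu, cov = np.asarray(mu), np.asarray(cov)
    var = np.diag(cov)
    d = len(mu)
    u1 = np.sum(np.exp(mu + var / 2))                          # E[S]
    u2 = sum(np.exp(mu[i] + mu[j] + (var[i] + var[j]) / 2 + cov[i, j])
             for i in range(d) for j in range(d))              # E[S^2]
    s2 = np.log(u2 / u1**2)
    m = np.log(u1) - s2 / 2
    return m, np.sqrt(s2)

m, s = wilkinson([0.0, 0.0], [[0.25, 0.1], [0.1, 0.25]])
print(m, s)
```

By construction exp(m + s²/2) reproduces E[S], so the approximating log-normal has the exact mean and variance of the sum.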
International Nuclear Information System (INIS)
Raabe, O.G.
1986-01-01
The three-dimensional lognormal cumulative probability power function was used to provide a unifying dose-response description of the lifetime cancer risk for chronic exposure of experimental animals and people, for risk evaluation, and for scaling between species. Bone tumor fatalities, primarily from alpha irradiation of the skeleton in lifetime studies of beagles injected with ²²⁶Ra, were shown to be well described by this function. This function described cancer risk in lifetime studies as a curved smooth surface depending on radiation exposure rate and elapsed time, such that the principal risk at low dose rates occurred near the end of the normal life span without significant life shortening. Essentially identical functions, with the median value of the power function displaced with respect to appropriate RBE values, were shown to describe bone-cancer induction primarily from alpha irradiation of the skeleton in lifetime beagle studies with injected ²²⁶Ra, ²²⁸Th, ²³⁹Pu and ²⁴¹Am, and with inhaled ²³⁸Pu. Application of this model to human exposures to ²²⁶Ra yielded a response ratio of 3.6; that is, the time required for development of bone cancer in people was 3.6 times longer than for beagles at the same average skeletal dose rate. It was suggested that similar techniques are appropriate to other carcinogens and other critical organs. 20 refs., 8 figs., 3 tabs.
International Nuclear Information System (INIS)
Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro
2011-01-01
We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution, which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanism, and our analysis is complementary to previous studies with power-law spectra. We calculate the power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate the CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data place the strongest constraint at k ≅ 10^-2.5 Mpc^-1, with the upper limit B ≲ 3 nG.
Interpretation of the log-normal behaviour of uranium grades in intrusive rocks
International Nuclear Information System (INIS)
Valencia, Jacinto; Palacios, Andres; Maguina, Jose
2015-01-01
The analysis and processing of uranium grades obtained from an intrusive rock by gamma spectrometry are discussed; a better correlation between uranium and thorium results when the logarithm of these analyses is used and represented in a thorium/uranium diagram, giving a better response. This is because the lognormal distribution more closely reflects the spatial distribution of uranium in a mineral deposit. The representation of a normal distribution and a log-normal distribution is shown. The interpretative part explains, by means of diagrams, the thorium/uranium relationship and its relation to potassium, based on direct field measurements of grades at sampling points in a section of the San Ramon granite (SR) and the volcanic Mitu Group (GM), where the granite of this unit has been identified as a source of uranium. (author)
International Nuclear Information System (INIS)
Miller, G.; Martz, H.; Bertelli, L.; Melo, D.
2008-01-01
A simplified biokinetic model for ¹³⁷Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with quite a lot of bioassay data. The distribution is found to be a multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin Hypercube sampling with a uniform distribution of parameters, and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling method-dependent differences are perceptible for the uniform distribution methods. (authors)
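A toy one-dimensional stand-in (not the paper's six-parameter model) shows why a log-normal proposal that approximates the posterior beats uniform sampling: the importance weights become nearly constant, so the self-normalized estimator is far less noisy. All densities and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def post(x):
    """Unnormalized toy posterior: a log-normal(0, 0.3) shape."""
    return np.exp(-np.log(x) ** 2 / (2 * 0.3**2)) / x

def prop_pdf(x):
    """Normalized log-normal(0, 0.3) proposal density."""
    return post(x) / (0.3 * np.sqrt(2 * np.pi))

n = 50_000
# (a) self-normalized estimate of the posterior mean, uniform sampling on (0, 10]
xu = rng.uniform(1e-9, 10.0, n)
wu = post(xu)
est_uniform = np.sum(wu * xu) / np.sum(wu)

# (b) importance sampling from the log-normal proposal: near-constant weights
xi = rng.lognormal(0.0, 0.3, n)
wi = post(xi) / prop_pdf(xi)
est_is = np.sum(wi * xi) / np.sum(wi)

print(est_uniform, est_is)  # both ≈ exp(0.3**2 / 2) ≈ 1.046
```

Repeating both estimators over many seeds shows the importance-sampling variance is several times smaller, mirroring the paper's observation.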
LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS
Energy Technology Data Exchange (ETDEWEB)
Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu [Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi (United Arab Emirates)
2017-01-20
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
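scipy's `lognorm` is already parameterized as the three-parameter (shape, location, scale) log-normal discussed here, so fitting a void-size sample can be sketched as follows; the synthetic "radii" and parameter values are placeholders, not the Cosmic Void Catalog data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# synthetic "void radii": shifted lognormal with assumed parameters
true_s, true_loc, true_scale = 0.5, 2.0, 5.0
radii = true_loc + true_scale * np.exp(true_s * rng.standard_normal(20000))

# scipy's lognorm.fit estimates all three parameters: shape s, loc (shift), scale
s_hat, loc_hat, scale_hat = stats.lognorm.fit(radii)
print(s_hat, loc_hat, scale_hat)
```

Note that maximum-likelihood estimation of the threshold (location) parameter of a three-parameter lognormal can be numerically delicate; for real catalog data a profile-likelihood or moment-based starting point may be needed.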
On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment
Rached, Nadhir B.
2017-02-15
The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric that is used to evaluate the performance of wireless systems. One difficulty toward assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case of the Log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated Log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, the accuracies of which are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of a mean shifting and a covariance matrix scaling, to evaluate the OP of the SINR in a Log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.
Use of the lognormal distribution for the coefficients of friction and wear
International Nuclear Information System (INIS)
Steele, Clint
2008-01-01
To predict the reliability of a system, an engineer might allocate a distribution to each input. This raises a question: how should the correct distribution be selected? Siddall put forward an evolutionary approach that was intended to utilise both the understanding of the engineer and available data. However, this method requires a subjective initial distribution based on the engineer's understanding of the variable or parameter. If the engineer's understanding is limited, the initial distribution will misrepresent the actual distribution, and application of the method will likely fail. To provide some assistance, the coefficients of friction and wear are considered here. Basic tribology theory, dimensional issues and the central limit theorem are used to argue that the distribution for each of these coefficients will typically be like a lognormal distribution. Empirical evidence from other sources is cited to lend support to this argument. It is concluded that the distributions for the coefficients of friction and wear would typically be lognormal in nature. It is therefore recommended that the engineer, without data or evidence to suggest differently, should allocate a lognormal distribution to the coefficients of friction and wear.
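The central-limit argument sketched in the abstract — many independent multiplicative influences acting on a coefficient produce an approximately lognormal distribution — can be checked numerically. The number of factors, their ranges, and the base value below are arbitrary assumptions, not tribological data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# a coefficient modeled (hypothetically) as a base value times many
# independent positive multiplicative factors -> lognormal by the CLT on logs
n_factors, n_samples = 30, 5000
factors = rng.uniform(0.8, 1.25, size=(n_samples, n_factors))
mu_f = 0.3 * factors.prod(axis=1)          # 0.3 is an arbitrary base value

# the logs of a lognormal variable should look normally distributed
stat, p = stats.normaltest(np.log(mu_f))
print(p)   # a non-tiny p-value: no strong evidence against normality of the logs
```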
On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment
Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul
2017-01-01
The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric that is used to evaluate the performance of wireless systems. One difficulty toward assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case of the Log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated Log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, the accuracies of which are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of a mean shifting and a covariance matrix scaling, to evaluate the OP of the SINR in a Log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.
Energy Technology Data Exchange (ETDEWEB)
Žerovnik, Gašper, E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, Andrej, E-mail: andrej.trkov@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Smith, Donald L., E-mail: donald.l.smith@anl.gov [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States); Capote, Roberto, E-mail: roberto.capotenoy@iaea.org [NAPC–Nuclear Data Section, International Atomic Energy Agency, PO Box 100, Vienna-A-1400 (Austria)
2013-11-01
Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue.
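The transformation between normal-space and lognormal-space correlation coefficients is a standard closed-form result: for X_i = exp(Y_i) with corr(Y1, Y2) = ρ and log-space standard deviations s1, s2, corr(X1, X2) = (exp(ρ s1 s2) − 1) / √((exp(s1²) − 1)(exp(s2²) − 1)). Evaluating it at ρ = −1 shows why strong anti-correlations are mathematically forbidden at large relative uncertainties:

```python
import numpy as np

def lognormal_corr(rho, s1, s2):
    """Correlation of X_i = exp(Y_i), given corr(Y1, Y2) = rho and
    log-space standard deviations s1, s2 (exact closed-form result)."""
    return (np.exp(rho*s1*s2) - 1.0) / np.sqrt(
        (np.exp(s1**2) - 1.0) * (np.exp(s2**2) - 1.0))

# relative uncertainty (CV) of the lognormal variate: cv = sqrt(exp(s^2) - 1)
for cv in (0.1, 0.3, 1.0, 3.0):
    s = np.sqrt(np.log(1.0 + cv**2))
    print(cv, lognormal_corr(-1.0, s, s))   # most negative achievable correlation
```

At 10% relative uncertainty the minimum correlation is close to −1, but at 100% it is only −0.5 and at 300% only −0.1, illustrating the anti-correlation restriction the abstract describes.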
International Nuclear Information System (INIS)
Žerovnik, Gašper; Trkov, Andrej; Smith, Donald L.; Capote, Roberto
2013-01-01
Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue
On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain
Meneghini, Robert; Rincon, Rafael; Liao, Liang
2003-01-01
Although most parameterizations of the drop size distribution (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large-scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; and the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site, where as many as 4 disdrometers have been deployed.
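The first property — the logarithm of any moment is linear in the DSD parameters — follows from the lognormal moment formula ln M_n = ln N_T + nμ + n²σ²/2, which a quadrature check confirms. The parameter values below are arbitrary illustrations, not fitted DSD values:

```python
import numpy as np
from scipy.integrate import quad

NT, mu, sigma = 800.0, np.log(1.2), 0.35   # arbitrary illustrative DSD parameters

def dsd(D):
    # log-normal drop size distribution with total number concentration NT
    return NT/(np.sqrt(2.0*np.pi)*sigma*D) * np.exp(-(np.log(D)-mu)**2/(2.0*sigma**2))

def log_moment_analytic(n):
    # ln M_n is a linear combination of (ln NT, mu, sigma^2)
    return np.log(NT) + n*mu + 0.5*n**2*sigma**2

log_moments = {}
for n in (3, 6):   # water content ~ M3, reflectivity factor ~ M6
    Mn, _ = quad(lambda D: D**n * dsd(D), 0.0, np.inf)
    log_moments[n] = (np.log(Mn), log_moment_analytic(n))
print(log_moments)   # numeric and analytic log-moments agree
```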
On the efficient simulation of the left-tail of the sum of correlated log-normal variates
Alouini, Mohamed-Slim; Rached, Nadhir B.; Kammoun, Abla; Tempone, Raul
2018-01-01
The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions.
Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities
International Nuclear Information System (INIS)
Waite, D.A.; Denham, D.H.
1975-01-01
The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given to these factors.
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
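One step in the derivation — the distribution of the ratio of peak heights — has a convenient closed form in the log-normal case: the log of the ratio of two iid lognormal heights is normal with variance 2σ², so the ratio is itself lognormal. A quick Monte Carlo check (σ is an arbitrary illustrative value, not a fitted scale parameter from the cited mixtures):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.8                      # assumed scale parameter of the peak-height lognormal
h1 = rng.lognormal(0.0, sigma, 100000)
h2 = rng.lognormal(0.0, sigma, 100000)

# if both heights are iid lognormal, ln(h1/h2) ~ N(0, 2*sigma^2)
log_ratio = np.log(h1/h2)
print(log_ratio.mean(), log_ratio.std(), np.sqrt(2)*sigma)
```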
On the low SNR capacity of log-normal turbulence channels with full CSI
Benkhelifa, Fatma; Tall, Abdoulaye; Rezki, Zouheir; Alouini, Mohamed-Slim
2014-01-01
In this paper, we characterize the low signal-to-noise ratio (SNR) capacity of wireless links undergoing log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed-form asymptotic expression of the capacity and we show that it scales essentially as λ SNR, where λ is the water-filling level satisfying the power constraint. An asymptotically closed-form expression of λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.
Lognormal switching times for titanium dioxide bipolar memristors: origin and resolution
International Nuclear Information System (INIS)
Medeiros-Ribeiro, Gilberto; Perner, Frederick; Carter, Richard; Abdalla, Hisham; Pickett, Matthew D; Williams, R Stanley
2011-01-01
We measured the switching time statistics for a TiO2 memristor and found that they followed a lognormal distribution, which is a potentially serious problem for computer memory and data storage applications. We examined the underlying physical phenomena that determine the switching statistics and proposed a simple analytical model for the distribution based on the drift/diffusion equation and previously measured nonlinear drift behavior. We designed a closed-loop switching protocol that dramatically narrows the time distribution, which can significantly improve memory circuit performance and reliability.
On the low SNR capacity of log-normal turbulence channels with full CSI
Benkhelifa, Fatma
2014-09-01
In this paper, we characterize the low signal-to-noise ratio (SNR) capacity of wireless links undergoing log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed-form asymptotic expression of the capacity and we show that it scales essentially as λ SNR, where λ is the water-filling level satisfying the power constraint. An asymptotically closed-form expression of λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.
DEFF Research Database (Denmark)
Petersen, Peter C.; Berg, Rune W.
2016-01-01
… fraction that operates within either a ‘mean-driven’ or a ‘fluctuation-driven’ regime. Fluctuation-driven neurons have a ‘supralinear’ input-output curve, which enhances sensitivity, whereas the mean-driven regime reduces sensitivity. We find a rich diversity of firing rates across the neuronal population … as reflected in a lognormal distribution, and demonstrate that half of the neurons spend at least 50% of the time in the ‘fluctuation-driven’ regime regardless of behavior. Because of the disparity in input-output properties for these two regimes, this fraction may reflect a fine trade-off between stability …
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
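The Poisson log-normal community model at the heart of the method can be sketched as follows: each species' latent mean abundance is lognormal and its observed count is Poisson. The species number, parameters, and the 10% subsample fraction below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical community following a Poisson log-normal abundance distribution
S, mu, sigma = 300, 1.0, 1.5          # assumed community parameters
lam = rng.lognormal(mu, sigma, S)     # latent mean abundance per species

full_sample = rng.poisson(lam)            # counts in the full sample
sub_sample  = rng.poisson(0.1 * lam)      # counts in a 10% subsample

obs_full = np.sum(full_sample > 0)
obs_sub  = np.sum(sub_sample > 0)
print(obs_full, obs_sub)   # fewer species are detected at lower sampling intensity
```

This illustrates why rare species in the left tail of the abundance distribution are systematically missed in small subsamples, which is the gap the mixture (incidence-plus-count) sampling approach is designed to close.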
Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio
Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.
2017-12-01
Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian, and more specifically a lognormal, distribution. Previously, statistical testing procedures, which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test which incorporated the results of the former tests, were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real-time. The analyzed 1-degree resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate the necessity for a Data Assimilation (DA) system to be able to properly use the lognormally-distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
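A minimal sketch of the sliding-window testing idea, using the Jarque-Bera test on a synthetic lognormal series. The window and step sizes are assumptions, and simulated values stand in for the GFS fields:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# synthetic "water vapor mixing ratio" series: lognormal, hence non-Gaussian
x = rng.lognormal(mean=0.0, sigma=0.9, size=2000)

def sliding_gaussianity(series, win=500, step=250):
    """Jarque-Bera p-values over sliding windows (window/step sizes are
    arbitrary choices for this sketch, not the study's settings)."""
    out = []
    for start in range(0, len(series) - win + 1, step):
        out.append(stats.jarque_bera(series[start:start + win]).pvalue)
    return np.array(out)

p_raw = sliding_gaussianity(x)          # every window rejects Gaussianity
p_log = sliding_gaussianity(np.log(x))  # the logs look Gaussian in every window
print(p_raw.max(), p_log.min())
```

A composite rule, as in the study, would combine several such tests per window before flagging a region as lognormal.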
Directory of Open Access Journals (Sweden)
Hea-Jung Kim
2017-06-01
Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
Yang, Shiliang; Sun, Yuhao; Zhao, Ya; Chew, Jia Wei
2018-05-01
Granular materials are mostly polydisperse, which gives rise to phenomena such as segregation that has no monodisperse counterpart. The discrete element method is applied to simulate lognormal particle size distributions (PSDs) with the same arithmetic mean particle diameter but different PSD widths in a three-dimensional rotating drum operating in the rolling regime. Despite having the same mean particle diameter, as the PSD width of the lognormal PSDs increases, (i) the steady-state mixing index, the total kinetic energy, the ratio of the active region depth to the total bed depth, the mass fraction in the active region, the steady-state active-passive mass-based exchanging rate, and the mean solid residence time (SRT) of the particles in the active region increase, while (ii) the steady-state gyration radius, the streamwise velocity, and the SRT in the passive region decrease. Collectively, these highlight the need for more understanding of the effect of PSD width on the granular flow behavior in the rotating drum operating in the rolling flow regime.
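Holding the arithmetic mean diameter fixed while widening a lognormal PSD pins down the log-mean: since mean = exp(μ + σ²/2), one must take μ = ln(mean) − σ²/2. A short check, with the 2 mm mean and the σ values chosen arbitrarily rather than taken from the simulations:

```python
import numpy as np

rng = np.random.default_rng(6)
d_mean = 2.0e-3                    # target arithmetic mean diameter (2 mm, assumed)

stats_by_width = {}
for sigma in (0.1, 0.3, 0.6):      # increasing PSD width
    mu = np.log(d_mean) - 0.5*sigma**2   # keeps the arithmetic mean fixed
    d = rng.lognormal(mu, sigma, 200000)
    stats_by_width[sigma] = (d.mean(), d.std())
print(stats_by_width)   # means agree; standard deviations grow with sigma
```

This is presumably how PSDs with the same mean particle diameter but different widths, as studied in the drum simulations, can be constructed.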
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2017-11-01
Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
Testing the Beta-Lognormal Model in Amazonian Rainfall Fields Using the Generalized Space q-Entropy
Directory of Open Access Journals (Sweden)
Hernán D. Salas
2017-12-01
Full Text Available We study spatial scaling and complexity properties of Amazonian radar rainfall fields using the Beta-Lognormal Model (BL-Model) with the aim to characterize and model the process at a broad range of spatial scales. The Generalized Space q-Entropy Function (GSEF), an entropic measure defined as a continuous set of power laws covering a broad range of spatial scales, S_q(λ) ∼ λ^Ω(q), is used as a tool to check the ability of the BL-Model to represent observed 2-D radar rainfall fields. In addition, we evaluate the effect of the amount of zeros, the variability of rainfall intensity, the number of bins used to estimate the probability mass function, and the record length on the GSEF estimation. Our results show that: (i) the BL-Model adequately represents the scaling properties of the q-entropy, S_q, for Amazonian rainfall fields across a range of spatial scales λ from 2 km to 64 km; (ii) the q-entropy in rainfall fields can be characterized by a non-additivity value, q_sat, at which rainfall reaches a maximum scaling exponent, Ω_sat; (iii) the maximum scaling exponent Ω_sat is directly related to the amount of zeros in rainfall fields and is not sensitive to either the number of bins used to estimate the probability mass function or the variability of rainfall intensity; and (iv) for small samples, the GSEF of rainfall fields may incur considerable bias. Finally, for synthetic 2-D rainfall fields from the BL-Model, we look for a connection between intermittency, using a metric based on generalized Hurst exponents, M(q1, q2), and the non-extensive order (q-order) of a system, Θ_q, which relates to the GSEF. Our results do not exhibit evidence of such a relationship.
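The q-entropy underlying the GSEF is the Tsallis form S_q = (1 − Σ p_i^q)/(q − 1). A minimal sketch of evaluating it for a coarse-grained 2-D field, with a synthetic lognormal field standing in for radar rainfall and q = 2 chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
field = rng.lognormal(0.0, 1.0, size=(n, n))   # synthetic 2-D "rainfall" field

def tsallis_entropy(field, block, q=2.0):
    """S_q of the block-mass distribution at aggregation scale `block`
    (a sketch of the space q-entropy idea, not the paper's estimator)."""
    m = field.reshape(n//block, block, n//block, block).sum(axis=(1, 3))
    p = (m/m.sum()).ravel()
    return (1.0 - np.sum(p**q)) / (q - 1.0)

scales = [1, 2, 4, 8, 16]
S = [tsallis_entropy(field, b) for b in scales]
print(S)   # entropy decreases as the field is aggregated to coarser scales
```

The GSEF analysis would go further, fitting the power law S_q(λ) ∼ λ^Ω(q) over a continuum of q values; this sketch only shows the basic entropy-versus-scale computation.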
Generating log-normally distributed random numbers by using the Ziggurat algorithm
International Nuclear Information System (INIS)
Choi, Jong Soo
2016-01-01
Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generate log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can be helpful to improve the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used method, the Marsaglia polar method.
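NumPy's modern `Generator.standard_normal` is itself ziggurat-based, so a ziggurat log-normal generator reduces to exponentiating its output. The μ and σ below are arbitrary stand-ins for a PSA input parameter, not values from the paper:

```python
import numpy as np

# NumPy's Generator.standard_normal uses the ziggurat algorithm internally,
# so exponentiating its output yields ziggurat-based log-normal variates
rng = np.random.default_rng(8)
mu, sigma = -1.0, 0.7             # arbitrary stand-ins for a PSA input
z = rng.standard_normal(100000)   # ziggurat-generated standard normals
x = np.exp(mu + sigma*z)          # log-normally distributed variates

# sanity checks against the analytic median and mean of the lognormal
print(np.median(x), np.exp(mu))
print(x.mean(), np.exp(mu + 0.5*sigma**2))
```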
On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications
Al-Quwaiee, Hessa
2015-05-01
Closed-form expressions of the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is of interest to investigate the behavior of such a metric at high signal-to-noise ratio (SNR). In this work, we propose simple closed-form asymptotic expressions of the ergodic capacity of dual-branch correlated Log-Normal channels corresponding to selection combining, and switch-and-stay combining. Furthermore, we capitalize on these new results to find new asymptotic ergodic capacity results for a correlated dual-branch free-space optical communication system under the impact of pointing error with both heterodyne and intensity modulation/direct detection. © 2015 IEEE.
On the efficient simulation of the left-tail of the sum of correlated log-normal variates
Alouini, Mohamed-Slim
2018-04-04
The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of primordial interest as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right-tail of the sum of log-normal random variables (RVs). Here, we instead consider the left-tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of the left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performance of the proposed estimator in comparison with existing ones.
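A stripped-down version of the mean-shifting ingredient — without the control variate, for iid variates rather than a general Gaussian copula, and with a heuristic rather than optimized shift — illustrates the variance reduction for the left tail:

```python
import numpy as np

rng = np.random.default_rng(9)
sigma, ndim, gamma, N = 1.0, 4, 1.0, 200000   # toy iid case, assumed settings

def naive_mc():
    # crude Monte Carlo indicator of the rare event {sum of lognormals <= gamma}
    y = rng.normal(0.0, sigma, size=(N, ndim))
    return (np.exp(y).sum(axis=1) <= gamma).astype(float)

def mean_shifted_is():
    shift = np.log(gamma/ndim)      # heuristic shift towards the left-tail region
    y = rng.normal(shift, sigma, size=(N, ndim))
    # per-sample log likelihood ratio N(0, sigma^2)/N(shift, sigma^2), all dims
    logw = np.sum((shift**2 - 2.0*shift*y) / (2.0*sigma**2), axis=1)
    return np.exp(logw) * (np.exp(y).sum(axis=1) <= gamma)

naive = naive_mc()
shifted = mean_shifted_is()
print(naive.mean(), shifted.mean())   # both estimate P(sum <= gamma)
print(naive.std(), shifted.std())     # the IS per-sample spread is far smaller
```

The paper's estimator adds a control variate on top of the mean shift to achieve an asymptotically vanishing relative error; the sketch above only demonstrates the basic shift-and-reweight mechanism.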
International Nuclear Information System (INIS)
Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow
2013-01-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption.
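The mean-standard deviation relationship that favors the lognormal for skewed data is SD = mean·√(exp(σ_L²) − 1), i.e. a constant coefficient of variation across dose groups. A quick check, with all numbers illustrative rather than taken from the evaluated datasets:

```python
import numpy as np

sigma_L = 0.4                                  # assumed log-scale SD
rng = np.random.default_rng(10)
cvs = []
for target_mean in (10.0, 50.0, 250.0):        # e.g. mean responses per dose group
    mu = np.log(target_mean) - 0.5*sigma_L**2  # fixes the arithmetic mean
    x = rng.lognormal(mu, sigma_L, 200000)
    cvs.append(x.std()/x.mean())
print(cvs)   # the coefficient of variation is the same at every mean
```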
Energy Technology Data Exchange (ETDEWEB)
Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)
2013-11-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
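The abstract above notes that when only summarized data (mean ± SD) are available, a lognormal characterization must be approximated from those two moments. A minimal sketch of that moment-matching step, with hypothetical dose-group numbers (250 ± 50 g is an invented example, not from the study):

```python
import math

def lognormal_params_from_summary(mean, sd):
    """Convert a reported arithmetic mean +/- SD (the usual summarized
    form in the literature) into the (mu, sigma) parameters of a
    lognormal distribution having exactly those two moments."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

# Hypothetical dose-group summary: mean body weight 250 g, SD 50 g
mu, sigma = lognormal_params_from_summary(250.0, 50.0)

# Round trip: recover the arithmetic mean and SD from (mu, sigma)
mean_back = math.exp(mu + 0.5 * sigma ** 2)
sd_back = math.sqrt(math.exp(sigma ** 2) - 1.0) * mean_back
```

The round trip is exact because a lognormal is fully determined by its first two moments, which is why summarized data lose little information for this purpose.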
Geomagnetic storms, the Dst ring-current myth and lognormal distributions
Campbell, W.H.
1996-01-01
The definition of geomagnetic storms dates back to the turn of the century, when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low-latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago, at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation of geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric ring-current depiction. Recent in situ measurements by satellites passing through the ring-current region, and computations with disturbed-magnetosphere models, show that the Dst storm is not solely a main-phase to decay-phase growth and disintegration of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle- and low-latitude observatories that are summed to produce the characteristic 'storm' behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is the result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst each have their own characteristic distribution in time. By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with
Moreira, Joao; Zeng, Xiaohan; Amaral, Luis
2013-03-01
Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators, like the h-index, are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most-used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily ``gamed''. The authors acknowledge support from FCT Portugal, and NSF grants
Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah
2014-01-01
This paper presents modelled raindrop size parameters for the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the drop size distribution (DSD) in Malaysia, and this has an underpinning implication for wet-weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research challenges the notion that storms in tropical regions are exclusively convective and presents new insight into their concurrent appearance.
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
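The Poisson log-normal distribution at the heart of this framework has no closed-form pmf; the paper uses Laplace's method, but the marginal probability can also be sketched by direct numerical integration over the latent log-rate. The sketch below uses simple trapezoidal quadrature as a stand-in (parameters are illustrative):

```python
import math

def poisson_lognormal_pmf(k, mu, sigma, grid=400, span=8.0):
    """P(K = k) when K | x ~ Poisson(exp(x)) and x ~ Normal(mu, sigma^2),
    computed by trapezoidal integration over x -- a simple numerical
    stand-in for the Laplace approximation used in the paper."""
    lo, hi = mu - span * sigma, mu + span * sigma
    h = (hi - lo) / grid
    total = 0.0
    for i in range(grid + 1):
        x = lo + i * h
        lam = math.exp(x)
        log_pois = k * x - lam - math.lgamma(k + 1)        # log Poisson pmf
        log_norm = (-(x - mu) ** 2 / (2.0 * sigma ** 2)
                    - math.log(sigma * math.sqrt(2.0 * math.pi)))
        weight = 0.5 if i in (0, grid) else 1.0
        total += weight * math.exp(log_pois + log_norm)
    return total * h

p = [poisson_lognormal_pmf(k, mu=1.0, sigma=0.5) for k in range(50)]
```

For graphical-model estimation this marginal (and its gradients) must be evaluated many times, which is why the paper resorts to Laplace's method rather than quadrature.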
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading
Rabie, Khaled M.
2017-02-07
Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: (a) time switching relaying (TSR), (b) power splitting relaying (PSR) and (c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy-harvester efficiency can further improve performance.
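The TSR outage behavior described above can be illustrated by Monte Carlo simulation. The model below is a simplified reconstruction, not the paper's exact system: unit-distance hops, a chosen noise power, a 4 dB log-normal spread, and the standard TSR harvested-power and rate-loss expressions; all numeric values are assumptions for illustration.

```python
import math
import random

random.seed(1)

def tsr_outage(alpha, p_s=1.0, eta=0.7, noise=0.01, sigma_db=4.0,
               r_target=1.0, trials=20000):
    """Monte Carlo ergodic outage of a two-hop DF link with TSR energy
    harvesting over log-normal fading (illustrative parameters)."""
    sigma = sigma_db * math.log(10.0) / 20.0        # dB spread on amplitude
    thr = 2.0 ** (2.0 * r_target / (1.0 - alpha)) - 1.0  # rate loss from EH phase
    out = 0
    for _ in range(trials):
        g1 = math.exp(random.gauss(0.0, sigma)) ** 2  # |h1|^2, log-normal
        g2 = math.exp(random.gauss(0.0, sigma)) ** 2  # |h2|^2, log-normal
        p_r = 2.0 * eta * p_s * g1 * alpha / (1.0 - alpha)  # harvested relay power
        snr1 = p_s * g1 / noise
        snr2 = p_r * g2 / noise
        if min(snr1, snr2) < thr:                     # DF: weaker hop decides
            out += 1
    return out / trials

outage = tsr_outage(alpha=0.3)
```

Sweeping `alpha` in such a simulation reproduces the trade-off the abstract describes: a small EH time starves the relay of power, while a large one inflates the SNR threshold through the rate loss.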
Energy-harvesting in cooperative AF relaying networks over log-normal fading channels
Rabie, Khaled M.; Salem, Abdelhamid; Alsusa, Emad; Alouini, Mohamed-Slim
2016-01-01
Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and the power splitting factor in the PSR protocol, is the key to achieving the best system performance. © 2016 IEEE.
Duarte Queirós, Sílvio M.
2012-07-01
We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-normal distribution is yielded. Namely, the distribution increases the tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
Berg, Wesley; Chase, Robert
1992-01-01
Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution describing the positive rainfall values and a spike at zero describing the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution using a maximum-likelihood method. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
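The mean of the mixed distribution described above follows directly from its two parts: a point mass at zero with probability (1 − p) and a lognormal for positive rain with probability p. A one-line sketch (the parameter values are invented for illustration):

```python
import math

def mixed_lognormal_mean(p_rain, mu, sigma):
    """Mean of a mixed distribution: point mass (1 - p_rain) at zero
    (no rain observed) plus a lognormal(mu, sigma) for positive rain
    rates, so E[X] = p_rain * exp(mu + sigma^2 / 2)."""
    return p_rain * math.exp(mu + 0.5 * sigma ** 2)

# Hypothetical bin: rain in 10% of observations, conditional mu=0.5, sigma=1.0
mean_rain = mixed_lognormal_mean(0.10, 0.5, 1.0)
```

Keeping the zero spike separate is what lets the lognormal part be fitted cleanly by maximum likelihood on the positive observations alone.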
Energy Technology Data Exchange (ETDEWEB)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. A known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. The criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling error, and a stochastic cross-section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross-section sampling was performed with both the normal and lognormal distributions. The uncertainties caused by covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative-sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
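The lognormal sampling idea above can be sketched in a few lines: match the lognormal's (mu, sigma) to the cross section's mean and relative standard deviation, then draw samples that are positive by construction. The mean value and 60% relative uncertainty below are invented for illustration:

```python
import math
import random

random.seed(7)

def sample_cross_sections(mean_xs, rel_sd, n):
    """Sample a cross section with a given mean and relative standard
    deviation from a moment-matched lognormal; unlike normal sampling,
    every draw is strictly positive, so no negative-sampling error occurs."""
    sigma2 = math.log(1.0 + rel_sd ** 2)
    mu = math.log(mean_xs) - 0.5 * sigma2
    return [math.exp(random.gauss(mu, math.sqrt(sigma2))) for _ in range(n)]

# Hypothetical cross section: mean 2.5 barns with 60% relative uncertainty
xs = sample_cross_sections(mean_xs=2.5, rel_sd=0.6, n=5000)
all_positive = all(x > 0.0 for x in xs)
```

With the same relative uncertainty, a normal sampler would return a negative value in a non-negligible fraction of draws, which is exactly the failure mode the record describes.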
Multivariate Poisson lognormal modeling of crashes by type and severity on rural two-lane highways.
Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric
2017-02-01
In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. Univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduced model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and to identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them, and significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and
El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul
2014-12-01
Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to the Deviance Information Criterion. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found to be mostly insignificant, with a few exceptions that were positively related to crash type. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Chiodo, Elio; Lauria, Davide; Mottola, Fabio; Pisani, Cosimo
2016-01-01
The authors verified that the transformer's lifetime can be modeled as a lognormal stochastic process. Hence, a novel closed-form relationship was derived between the transformer's lifetime and the distributional properties of the stochastic load. The usefulness of the closed-form expression is discussed for the sake of design, although some considerations are also made with respect to operating conditions. The aim of the numerical application was to demonstrate the feasibility and easy applicability of the analytical methodology.
Energy Technology Data Exchange (ETDEWEB)
Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)
1998-04-01
This work illustrates the possibility of interpolating the concentrations of CO, NO, NO₂, O₃ and SO₂ measured during one year (1995) at the 13 stations of the air quality monitoring network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given to choosing the method for determining the log-normal distribution parameters among four possible methods: (I) natural, (II) percentiles, (III) moments, (IV) maximum likelihood. In order to evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, the Kolmogorov-Smirnov index and the Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally, an example of application is given: the effect of an emission-reduction strategy in the Lombardy Region (the so-called 'bollino blu') is evaluated using a log-normal distribution.
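Two of the four parameter-estimation methods compared above are easy to sketch on synthetic data: the method of moments (match sample mean and variance to the lognormal's moments) and maximum likelihood (for a lognormal, simply the mean and SD of the log data). The synthetic "concentrations" below are generated with known parameters so the estimates can be checked:

```python
import math
import random
import statistics

random.seed(3)
# Synthetic pollutant concentrations: lognormal with mu=1.2, sigma=0.5
data = [math.exp(random.gauss(1.2, 0.5)) for _ in range(4000)]

# Method of moments: invert E[X] = exp(mu + s2/2), Var[X] = (exp(s2)-1) E[X]^2
m = statistics.fmean(data)
v = statistics.pvariance(data)
sigma2_mom = math.log(1.0 + v / m ** 2)
mu_mom = math.log(m) - 0.5 * sigma2_mom

# Maximum likelihood: mean and SD of the log-transformed data
logs = [math.log(x) for x in data]
mu_mle = statistics.fmean(logs)
sigma_mle = statistics.pstdev(logs)
```

On heavy-tailed data the moment method is more sensitive to a few large values than the likelihood method, which is one reason such comparisons are worth ranking per pollutant and per station.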
Directory of Open Access Journals (Sweden)
LIU Jun
2015-10-01
Full Text Available For new-generation wireless communication networks, this paper studies the optimization of the capacity and end-to-end throughput of MIMO-OFDM based multi-hop relay systems. A water-filling power allocation method is proposed to improve the channel capacity and throughput of the MIMO-OFDM based multi-hop relay system in Lognormal-Rayleigh shadowing compound channels. Simulations of the capacity and throughput show that the water-filling algorithm can effectively improve the system throughput in the MIMO-OFDM multi-hop relay system.
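The water-filling allocation used above is the classic solution for parallel sub-channels: pour a total power budget so that each channel gets max(0, level − noise/gain), with the water level set by the budget. A minimal sketch (the gain values are illustrative, not from the paper):

```python
def water_filling(gains, p_total, noise=1.0):
    """Classic water-filling power allocation across parallel sub-channels:
    p_i = max(0, level - noise / g_i), with the water level chosen so that
    the active powers sum to p_total (weak channels may get zero power)."""
    inv = sorted(noise / g for g in gains)     # "floor heights" noise/g_i
    active = len(inv)
    level = 0.0
    while active > 0:
        level = (p_total + sum(inv[:active])) / active
        if level > inv[active - 1]:            # weakest kept channel stays wet
            break
        active -= 1                            # drop the weakest channel
    return [max(0.0, level - noise / g) for g in gains]

# Three sub-channels with gains 2.0, 1.0 and 0.1 and a power budget of 2
p = water_filling([2.0, 1.0, 0.1], p_total=2.0)
```

Here the very weak third channel receives no power at all; the full budget is split across the two stronger channels, which is what maximizes the sum capacity Σ log2(1 + g_i p_i / noise).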
International Nuclear Information System (INIS)
Vardavas, I.M.
1992-01-01
A simple procedure is presented for the statistical analysis of measurement data where the primary concern is the determination of the value corresponding to a specified average exceedance probability. The analysis employs the normal and log-normal frequency distributions together with a χ²-test and an error analysis. The error analysis introduces the concept of a counting error criterion, or ζ-test, to test whether the data are sufficient to make the χ²-test reliable. The procedure is applied to the analysis of annual rainfall data recorded at stations in the tropical Top End of Australia where the Ranger uranium deposit is situated. 9 refs., 12 tabs., 9 figs
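Once a log-normal has been fitted, the value with a specified average exceedance probability is just an exponentiated normal quantile. A sketch with invented rainfall parameters (log-mean 7.2, log-SD 0.25, i.e. rainfall in mm; these are not the paper's fitted values):

```python
import math
from statistics import NormalDist

def exceedance_value(mu, sigma, p_exceed):
    """Value x with P(X > x) = p_exceed when ln X ~ Normal(mu, sigma):
    exponentiate the (1 - p_exceed) quantile of the underlying normal."""
    z = NormalDist().inv_cdf(1.0 - p_exceed)
    return math.exp(mu + sigma * z)

# Hypothetical annual-rainfall fit; 1-in-100-year value (exceedance prob 0.01)
x_100yr = exceedance_value(7.2, 0.25, 0.01)
# Sanity check: exceedance probability of the returned value
check = 1.0 - NormalDist(7.2, 0.25).cdf(math.log(x_100yr))
```

The counting-error (ζ-test) concern in the abstract is about whether enough observations exist for such a tail quantile to be trustworthy; the quantile arithmetic itself is exact.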
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by unrealistic parameter assumptions were thereby avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results of the application show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
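A chance constraint with a log-normal random right-hand side has a simple deterministic equivalent: requiring P(x ≤ B) ≥ α caps x at the (1 − α) quantile of B. The sketch below illustrates this mechanism with invented parameters; it is not the study's full interval-programming model:

```python
import math
from statistics import NormalDist

def chance_constrained_cap(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint P(x <= B) >= alpha
    when the random capacity B (e.g. allowable runoff load) is
    log-normal(mu, sigma): the largest feasible x is B's (1-alpha) quantile."""
    z = NormalDist().inv_cdf(1.0 - alpha)
    return math.exp(mu + sigma * z)

# Hypothetical capacity distribution: log-mean 3.0, log-SD 0.4
x90 = chance_constrained_cap(mu=3.0, sigma=0.4, alpha=0.90)
x99 = chance_constrained_cap(mu=3.0, sigma=0.4, alpha=0.99)
```

Raising the satisfaction level α tightens the cap, which is exactly the economy-versus-reliability trade-off the interval solutions quantify; assuming a normal instead of a log-normal here would shift these quantiles and bias the design.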
Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier
2011-03-01
Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition, with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated by the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power law, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.
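A crude precursor to the UMPU test above is a straight maximum-likelihood comparison of the two candidate families on the same data. The sketch below fits a continuous power law (Pareto above the sample minimum, MLE α̂ = 1 + n/Σln(x/xmin)) and a lognormal to synthetic "city sizes" drawn from a lognormal; it illustrates the likelihood machinery only, not the UMPU construction itself:

```python
import math
import random

random.seed(11)
# Synthetic "city sizes" drawn from a lognormal(10, 1)
data = [math.exp(random.gauss(10.0, 1.0)) for _ in range(3000)]
n = len(data)
xmin = min(data)

# Power-law (Pareto) MLE above xmin: f(x) = ((a-1)/xmin) (x/xmin)^(-a)
alpha_hat = 1.0 + n / sum(math.log(x / xmin) for x in data)
ll_pareto = sum(math.log(alpha_hat - 1.0) - math.log(xmin)
                - alpha_hat * math.log(x / xmin) for x in data)

# Lognormal MLE: mean and SD of the log data
logs = [math.log(x) for x in data]
mu = sum(logs) / n
sig = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
ll_lognorm = sum(-math.log(x * sig * math.sqrt(2.0 * math.pi))
                 - (math.log(x) - mu) ** 2 / (2.0 * sig ** 2) for x in data)
```

On lognormal data the lognormal log-likelihood wins by a wide margin; the hard cases motivating the UMPU test are real data sets whose tails make this naive comparison inconclusive or biased.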
International Nuclear Information System (INIS)
Stanisic, D.; Hancock, A.A.; Kyncl, J.J.; Lin, C.T.; Bush, E.N.
1986-01-01
(-)-Norepinephrine (NE) is used as an internal standard in their in vitro adrenergic assays, and the concentration of NE which produces a half-maximal inhibition of specific radioligand binding (affinity; K_I), or a half-maximal contractile response (potency; ED₅₀), has been measured numerous times. The goodness-of-fit test for normality was performed on both normal (Gaussian) and log₁₀-normal frequency histograms of these data using the SAS Univariate procedure. Specific binding of ³H-prazosin to rat liver (α₁-), ³H-rauwolscine to rat cortex (α₂-) and ³H-dihydroalprenolol to rat ventricle (β₁-) or rat lung (β₂-receptors) was inhibited by NE; the distributions of NE K_I's at all these sites were skewed to the right, with highly significant (p …) departures from normality, as were the ED₅₀'s of NE in isolated rabbit aorta (α₁), phenoxybenzamine-treated dog saphenous vein (α₂) and guinea pig atrium (β₁). The vasorelaxant potency of atrial natriuretic hormone in histamine-contracted rabbit aorta was also better described by a log-normal distribution, indicating that log-normalcy is probably a general phenomenon of drug-receptor interactions. Because data of this type appear to be log-normally distributed, geometric means should be used in parametric statistical analyses.
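The closing recommendation, using geometric rather than arithmetic means for log-normally distributed potency data, amounts to averaging on the log scale. A small sketch with hypothetical K_I values (not the paper's data):

```python
import math
from statistics import fmean

def geometric_mean(xs):
    """exp of the mean log: the natural location estimate for log-normal data."""
    return math.exp(fmean([math.log(x) for x in xs]))

# Hypothetical K_I values (nM), skewed to the right as in the abstract;
# the arithmetic mean is pulled upward by the long tail.
ki = [12.0, 15.0, 20.0, 30.0, 45.0, 80.0, 250.0]
print("arithmetic mean:", round(fmean(ki), 1))
print("geometric mean: ", round(geometric_mean(ki), 1))
```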
Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading
Rabie, Khaled M.
2017-08-15
Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyzing the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time switching relaying, power splitting relaying and the ideal relaying receiver, which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying, which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel is shown to deteriorate the performance in all the relaying and EH protocols considered.
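The ergodic outage evaluation described above can be mimicked with a stripped-down Monte Carlo sketch. This is an assumed, simplified dual-hop DF link over log-normal shadowing (Gaussian in dB), without the harvesting details or the paper's analytical expressions; all parameter values are illustrative:

```python
import math, random

def df_outage(mu_db=0.0, sigma_db=4.0, snr_tx_db=20.0, rate=2.0,
              trials=50_000, seed=7):
    """Monte Carlo outage probability of a two-hop decode-and-forward link.
    Each hop's power gain is log-normal (shadowing modelled as Gaussian in dB);
    outage occurs when the bottleneck hop cannot support `rate` (bit/s/Hz)."""
    rng = random.Random(seed)
    snr_lin = 10.0 ** (snr_tx_db / 10.0)
    db_to_ln = math.log(10.0) / 10.0          # convert dB to natural-log units
    outages = 0
    for _ in range(trials):
        g1 = math.exp(rng.gauss(mu_db, sigma_db) * db_to_ln)
        g2 = math.exp(rng.gauss(mu_db, sigma_db) * db_to_ln)
        capacity = math.log2(1.0 + snr_lin * min(g1, g2))  # DF: weaker hop limits
        outages += capacity < rate
    return outages / trials

print(df_outage(rate=2.0))
```

Consistent with the reported trend, the simulated outage worsens both with the target rate and with the variance of the log-normal channel.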
Noise Challenges in Monomodal Gaze Interaction
DEFF Research Database (Denmark)
Skovsgaard, Henrik
of life via dedicated interface tools especially tailored to the users’ needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards robustness, accuracy and precision of modern eye-tracking systems and there are many available on the market. Even...
International Nuclear Information System (INIS)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters of a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order-statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined, and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
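A weighted-order-statistic estimator of the kind described, maximizing a Shapiro–Francia-type correlation statistic over the threshold (location) parameter, can be sketched as follows. The grid search, quantile formula and constants are illustrative assumptions, not the paper's exact procedure:

```python
import math, random
from statistics import fmean, NormalDist

def sf_statistic(logged, m):
    """Shapiro-Francia-type W': squared correlation between the ordered values
    and the expected normal quantiles m (m has exactly zero mean by symmetry)."""
    xs = sorted(logged)
    mx = fmean(xs)
    sxy = sum((a - mx) * b for a, b in zip(xs, m))
    sxx = sum((a - mx) ** 2 for a in xs)
    smm = sum(b * b for b in m)
    return sxy * sxy / (sxx * smm)

def fit_threshold(data, n_grid=400):
    """Grid search for the lognormal location parameter: pick the shift
    gamma (< min(data)) that makes log(x - gamma) look most normal."""
    n = len(data)
    m = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    span = max(data) - min(data)
    lo, hi = min(data) - span, min(data) - 1e-6 * span
    best_g, best_w = lo, -1.0
    for k in range(n_grid):
        g = lo + (hi - lo) * k / (n_grid - 1)
        w = sf_statistic([math.log(x - g) for x in data], m)
        if w > best_w:
            best_g, best_w = g, w
    return best_g, best_w

# Synthetic three-parameter lognormal sample with true threshold 5.0:
random.seed(3)
data = [5.0 + random.lognormvariate(0.0, 1.0) for _ in range(400)]
gamma_hat, w_hat = fit_threshold(data)
print(round(gamma_hat, 2), round(w_hat, 4))
```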
Directory of Open Access Journals (Sweden)
José Raúl Machado Fernández
2018-01-01
Full Text Available We present the new LN-MoM-CA-CFAR detector, which exhibits a reduced deviation of the operational false-alarm probability from its design value. The solution corrects a fundamental problem of CFAR processors that has been ignored in many developments: most previously proposed schemes deal with abrupt changes in the clutter level, whereas the present solution corrects slow statistical changes in the background signal. These have been shown to have a marked influence on the selection of the CFAR multiplicative adjustment factor, and consequently on maintaining the false-alarm probability. The authors exploited the high precision attainable in estimating the log-normal shape parameter with the method of moments (MoM), and the wide use of this distribution in clutter modeling, to create an architecture that offers accurate results at low computational cost. After intensive processing of 100 million log-normal samples, a scheme was created that improves the performance of the classical CA-CFAR through continuous correction of its adjustment factor, operating with excellent stability and reaching a deviation of only 0.2884% for a false-alarm probability of 0.01.
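The method-of-moments estimation of log-normal parameters that such a detector relies on can be written directly from the linear-scale moments. A generic sketch (the CFAR adjustment-factor correction itself is not reproduced here):

```python
import math

def lognormal_mom(sample_mean, sample_var):
    """Method-of-moments (MoM) estimates of the log-normal parameters (mu, sigma)
    from the linear-scale sample mean and variance:
        sigma^2 = ln(1 + var / mean^2),   mu = ln(mean) - sigma^2 / 2."""
    s2 = math.log(1.0 + sample_var / sample_mean ** 2)
    return math.log(sample_mean) - 0.5 * s2, math.sqrt(s2)

# Round trip with known parameters (mu = 0.3, sigma = 0.8):
mu0, sg0 = 0.3, 0.8
m = math.exp(mu0 + sg0 ** 2 / 2)
v = (math.exp(sg0 ** 2) - 1.0) * math.exp(2 * mu0 + sg0 ** 2)
mu_hat, sg_hat = lognormal_mom(m, v)
print(mu_hat, sg_hat)
```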
Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
Time to donating blood plays a major role in a first-time donor becoming a regular one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 samples of first-time donors in Shahrekord Blood Transfusion Center, capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these samples, a subset of 424 donors who had at least two successful blood donations were chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed based on a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criteria using the BOA program in R. Age, job and education had significant effects on the chance to donate blood (p …); the chance of donation was higher for older donors, clerical and manual workers, the self-employed, students and educated donors, and correspondingly the time intervals between their blood donations were shorter. Due to the significant effect of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.
Wang, Yiyi; Kockelman, Kara M
2013-11-01
This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert
2005-09-01
Classical aerosol schemes use either a sectional (bin) or a lognormal approach. Both approaches have particular capabilities and interests: the sectional approach is able to describe every kind of distribution, whereas the lognormal one makes an assumption about the form of the distribution with a smaller number of explicit variables. For this last reason we developed a three-moment lognormal aerosol scheme named ORILAM to be coupled in three-dimensional mesoscale or CTM models. This paper presents the concepts and hypotheses for a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular interest of ORILAM is that it keeps the aerosol composition and distribution explicit (the mass of each constituent, the mean radius, and the standard deviation of the distribution are explicit) using the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of the lognormal mode is lower than that of two bins of the sectional one.
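Three predicted moments determine a lognormal mode uniquely. A sketch of the inversion, assuming radial moments of the form M_k = m0 · rg^k · exp(k²·ln²σ/2) (the notation is assumed here, not taken from the paper):

```python
import math

def lognormal_from_moments(m0, m3, m6):
    """Invert the radial moments M_k = m0 * rg**k * exp(k*k * L / 2), with
    L = ln(sigma)**2, of a log-normal size distribution. Returns the particle
    number m0, the median radius rg, and the geometric standard deviation."""
    L = math.log(m6 * m0 / m3 ** 2) / 9.0        # m6*m0/m3**2 = exp(9 L)
    sigma = math.exp(math.sqrt(L))
    rg = (m3 / (m0 * math.exp(4.5 * L))) ** (1.0 / 3.0)
    return m0, rg, sigma

# Round trip: build the moments from known parameters and recover them.
n0, rg0, sg0 = 1000.0, 0.1, 2.0
L0 = math.log(sg0) ** 2
m3 = n0 * rg0 ** 3 * math.exp(4.5 * L0)
m6 = n0 * rg0 ** 6 * math.exp(18.0 * L0)
print(lognormal_from_moments(n0, m3, m6))
```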
Log-Normal Distribution in a Growing System with Weighted and Multiplicatively Interacting Particles
Fujihara, Akihiro; Tanimoto, Satoshi; Yamamoto, Hiroshi; Ohtsuki, Toshiya
2018-03-01
A growing system with weighted and multiplicatively interacting particles is investigated. Each particle has a quantity that changes multiplicatively after a binary interaction, with its growth rate controlled by a weight parameter in a homogeneous symmetric kernel. We consider the system using moment inequalities and analytically derive the log-normal-type tail in the probability distribution function of quantities when the parameter is negative, which is different from the result for single-body multiplicative processes. We also find that the system approaches a winner-take-all state when the parameter is positive.
International Nuclear Information System (INIS)
Gigase, Yves
2007-01-01
Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in those decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
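Part of the elegance of the log-normal choice is that independent multiplicative uncertainty factors combine in closed form. A minimal sketch with hypothetical scaling-factor numbers (not values from the paper):

```python
import math

def combine_lognormal_factors(factors):
    """The product of independent log-normal factors (mu_i, sigma_i) is again
    log-normal, with mu = sum(mu_i) and sigma = sqrt(sum(sigma_i**2))."""
    mu = sum(m for m, _ in factors)
    sigma = math.sqrt(sum(s * s for _, s in factors))
    return mu, sigma

# Hypothetical example: package activity = scaling factor x key-nuclide
# measurement, each uncertain by a log-normal factor.
mu, sigma = combine_lognormal_factors([(math.log(5.0), 0.4), (math.log(2.0), 0.3)])
median = math.exp(mu)     # median of the product = product of the medians
gsd = math.exp(sigma)     # combined geometric standard deviation
print(median, gsd)
```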
EVIDENCE FOR TWO LOGNORMAL STATES IN MULTI-WAVELENGTH FLUX VARIATION OF FSRQ PKS 1510-089
Energy Technology Data Exchange (ETDEWEB)
Kushwaha, Pankaj; Misra, Ranjeev [Inter University Center for Astronomy and Astrophysics, Pune 411007 (India); Chandra, Sunil; Singh, K. P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Sahayanathan, S. [Astrophysical Sciences Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Baliyan, K. S., E-mail: pankajk@iucaa.in [Physical Research Laboratory, Ahmedabad 380009 (India)
2016-05-01
We present a systematic characterization of multi-wavelength emission from blazar PKS 1510-089 using well-sampled data at near-infrared (NIR), optical, X-ray, and γ-ray energies. The resulting flux distributions, except at X-rays, show two distinct lognormal profiles corresponding to a high and a low flux level. The dispersions exhibit energy-dependent behavior except in the LAT γ-ray and optical B-band. During the low flux states, the dispersion is higher toward the peak of the spectral energy distribution, with γ-rays being intrinsically more variable, followed by IR and then optical, consistent with mainly being a result of a varying bulk Lorentz factor. On the other hand, the dispersions during the high state are similar in all bands except the optical B-band, where thermal emission still dominates. The centers of the distributions are a factor of ∼4 apart, consistent with expectations from studies of the extragalactic γ-ray background, with the high state showing a relatively harder mean spectral index compared to the low state.
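Two log-normal flux states can be recovered from a flux sample by fitting a two-component Gaussian mixture in log-flux. A self-contained EM sketch on synthetic data (not the PKS 1510-089 light curves; initialization and iteration count are assumptions):

```python
import math, random
from statistics import pstdev

def fit_two_lognormal_states(fluxes, iters=300):
    """EM for a two-component Gaussian mixture on log-flux, i.e. two
    log-normal flux states; returns (means, stds, weights) in log space."""
    ys = sorted(math.log(x) for x in fluxes)
    n = len(ys)
    mu = [ys[n // 4], ys[3 * n // 4]]     # crude init: lower/upper quartiles
    sg = [pstdev(ys)] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of state 0 for each point
        r0 = []
        for y in ys:
            p = [w[k] / sg[k] * math.exp(-0.5 * ((y - mu[k]) / sg[k]) ** 2)
                 for k in (0, 1)]
            r0.append(p[0] / (p[0] + p[1]))
        # M-step: re-estimate each state's weight, mean and spread
        for k, rk in ((0, r0), (1, [1.0 - r for r in r0])):
            s = sum(rk)
            mu[k] = sum(r * y for r, y in zip(rk, ys)) / s
            var = sum(r * (y - mu[k]) ** 2 for r, y in zip(rk, ys)) / s
            sg[k] = max(math.sqrt(var), 1e-6)
            w[k] = s / n
    return mu, sg, w

# Synthetic flux sample with a low and a high log-normal state:
random.seed(5)
flux = ([math.exp(random.gauss(0.0, 0.3)) for _ in range(300)]
        + [math.exp(random.gauss(1.4, 0.3)) for _ in range(300)])
mu, sg, w = fit_two_lognormal_states(flux)
print(mu, w)
```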
Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM
Directory of Open Access Journals (Sweden)
Y. Peng
2012-08-01
Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr^{−1}, and the dust mass load is 19.3 Tg for year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes; biases in long-range transport also contribute. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m^{−2}, which show good consistency with model estimates from other studies.
Energy Technology Data Exchange (ETDEWEB)
Mould, Richard F [41 Ewhurst Avenue, South Croydon, Surrey CR2 0DH (United Kingdom); Lahanas, Michael [Klinikum Offenbach, Strahlenklinik, 66 Starkenburgring, 63069 Offenbach am Main (Germany); Asselain, Bernard [Institut Curie, Biostatistiques, 26 rue d'Ulm, 75231 Paris Cedex 05 (France); Brewster, David [Director, Scottish Cancer Registry, Information Services (NHS National Services Scotland) Area 155, Gyle Square, 1 South Gyle Crescent, Edinburgh EH12 9EB (United Kingdom); Burgers, Sjaak A [Department of Thoracic Oncology, Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (The Netherlands); Damhuis, Ronald A M [Rotterdam Cancer Registry, Rochussenstraat 125, PO Box 289, 3000 AG Rotterdam (The Netherlands); Rycke, Yann De [Institut Curie, Biostatistiques, 26 rue d'Ulm, 75231 Paris Cedex 05 (France); Gennaro, Valerio [Liguria Mesothelioma Cancer Registry, Etiology and Epidemiology Department, National Cancer Research Institute, Pad. Maragliano, Largo R Benzi, 10-16132 Genoa (Italy); Szeszenia-Dabrowska, Neonila [Department of Occupational and Environmental Epidemiology, National Institute of Occupational Medicine, PO Box 199, Swietej Teresy od Dzieciatka Jezus 8, 91-348 Lodz (Poland)
2004-09-07
A truncated left-censored and right-censored lognormal model has been validated for representing pleural mesothelioma survival times in the range 5-200 weeks for data subsets grouped by age for males, 40-49, 50-59, 60-69, 70-79 and 80+ years and for all ages combined for females. The cases available for study were from Europe and USA and totalled 5580. This is larger than any other pleural mesothelioma cohort accrued for study. The methodology describes the computation of reference baseline probabilities, 5-200 weeks, which can be used in clinical trials to assess results of future promising treatment methods. This study is an extension of previous lognormal modelling by Mould et al (2002 Phys. Med. Biol. 47 3893-924) to predict long-term cancer survival from short-term data where the proportion cured is denoted by C and the uncured proportion, which can be represented by a lognormal, by (1 - C). Pleural mesothelioma is a special case when C = 0.
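The lognormal cure model of Mould et al, S(t) = C + (1 − C)(1 − Φ((ln t − μ)/σ)), is easy to evaluate directly. A sketch with hypothetical parameters (the 40-week median and σ = 0.8 are assumptions, not fitted values from the paper):

```python
import math
from statistics import NormalDist

def survival(t, mu, sigma, cured=0.0):
    """Mixture-cure survival with a log-normal uncured fraction:
        S(t) = C + (1 - C) * (1 - Phi((ln t - mu) / sigma)),
    where C is the cured proportion; pleural mesothelioma is the case C = 0."""
    z = (math.log(t) - mu) / sigma
    return cured + (1.0 - cured) * (1.0 - NormalDist().cdf(z))

# Hypothetical parameters: median survival 40 weeks, log-scale sigma 0.8, C = 0,
# evaluated over the 5-200 week range used in the study.
mu, sigma = math.log(40.0), 0.8
for t in (5.0, 40.0, 200.0):
    print(t, round(survival(t, mu, sigma), 3))
```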
Directory of Open Access Journals (Sweden)
Rehez Ahlip
2015-01-01
model for the exchange rate with log-normal jump amplitudes and the volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.
Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.
2017-04-01
Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Every one of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.
Hussein, Ahmed Abdulqader; Rahman, Tharek A; Leow, Chee Yen
2015-12-04
Localization is an important aspect of wireless sensor networks, and the focus of much interesting research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like received signal strength indication (RSSI), are strongly affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map in the network, which refers to the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSS localization performance in the face of log-normal shadow fading effects, besides the presence of physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks.
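The log-normal shadow fading that such simulations contend with is conventionally written as a log-distance path-loss model with Gaussian noise in dB. A generic sketch (the path-loss exponent, reference power and shadowing spread are assumed values, not the paper's):

```python
import math, random

def rssi_dbm(d, p0=-40.0, n=2.7, d0=1.0, sigma=3.0, rng=None):
    """Log-distance path loss with log-normal shadowing:
        RSSI(d) = P0 - 10 n log10(d / d0) + X,  X ~ N(0, sigma) in dB.
    With rng=None the shadowing term is omitted (mean model)."""
    x = rng.gauss(0.0, sigma) if rng else 0.0
    return p0 - 10.0 * n * math.log10(d / d0) + x

def range_estimate(rssi, p0=-40.0, n=2.7, d0=1.0):
    """Invert the mean model to a distance estimate; shadowing (or a physical
    barrier biasing the RSSI) turns directly into ranging error."""
    return d0 * 10.0 ** ((p0 - rssi) / (10.0 * n))

rng = random.Random(1)
print("noise-free:", range_estimate(rssi_dbm(25.0)))
print("shadowed:  ", range_estimate(rssi_dbm(25.0, rng=rng)))
```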
Rationalisation of distribution functions for models of nanoparticle magnetism
International Nuclear Information System (INIS)
El-Hilo, M.; Chantrell, R.W.
2012-01-01
A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from a misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.
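The median-diameter modification described above is, in standard aerosol notation, a Hatch-Choate-type conversion: weighting a lognormal number distribution by volume keeps the geometric standard deviation but shifts the median. A sketch of that conversion, checked by brute-force integration (the notation is assumed, not the paper's):

```python
import math

def volume_median(d_count, gsd):
    """Hatch-Choate relation: the volume-weighted lognormal has the same
    geometric std but median d_count * exp(3 * ln(gsd)**2)."""
    return d_count * math.exp(3.0 * math.log(gsd) ** 2)

def volume_median_numeric(d_count, gsd, n=20000):
    """Check by direct integration of D**3 * f(D) on a log grid;
    y = ln(D / d_count), number density ~ exp(-y**2 / (2 ln(gsd)**2))."""
    ls = math.log(gsd)
    ys = [-6.0 * ls + 12.0 * ls * k / (n - 1) for k in range(n)]
    wts = [math.exp(3.0 * y - y * y / (2.0 * ls * ls)) for y in ys]
    total = sum(wts)
    acc = 0.0
    for y, wgt in zip(ys, wts):
        acc += wgt
        if acc >= total / 2.0:        # median of the volume-weighted density
            return d_count * math.exp(y)

print(volume_median(0.1, 2.0), volume_median_numeric(0.1, 2.0))
```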
International Nuclear Information System (INIS)
Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.; Fitzgerald, R.
2010-01-01
Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
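The tabulated lognormal parameters follow directly from the three quantiles, since for a lognormal the 0.16, 0.50 and 0.84 quantiles are approximately exp(μ − σ), exp(μ) and exp(μ + σ). A small sketch of the inversion, including a check of how log-symmetric a tabulated (low, median, high) triple actually is (the check statistic is an illustrative addition, not the paper's Anderson-Darling measure):

```python
import math

def lognormal_from_quantiles(low, med, high):
    """Invert the 0.16/0.50/0.84 quantiles of a lognormal rate:
        low ~ exp(mu - sigma), med = exp(mu), high ~ exp(mu + sigma).
    `asymmetry` is 0 for an exactly log-symmetric triple."""
    mu = math.log(med)
    sigma = 0.5 * (math.log(high) - math.log(low))
    asymmetry = math.log(high * low / med ** 2)
    return mu, sigma, asymmetry

# Triple constructed from mu = 2.0, sigma = 0.3:
mu, sigma, asym = lognormal_from_quantiles(math.exp(1.7), math.exp(2.0),
                                           math.exp(2.3))
print(mu, sigma, asym)
```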
International Nuclear Information System (INIS)
Taleb, M.; McKay, E.
1999-01-01
Full text: Any strategy for image registration requires some method (a cost function) by which two images may be compared. The mutual information (MI) between images is one such cost function. MI measures the structural similarity between pairs of gray-scale images and performs cross-modality image registration with minimal image pre-processing. This project compares the performance of MI vs the sum of absolute differences (SAD) 'gold standard' in monomodality image registration problems. It also examines the precision of cross-modality registration based on MI, using a human observer to decide whether registration is accurate. Thirteen paired brain SPET scans were registered using SAD as a cost function. Registration was repeated using MI and differences from the SAD results were recorded. Ten paired MRI and SPET brain scans were registered using the MI cost function. Registration was repeated three times for each pair, varying the SPET position or orientation each time. Comparing MI to SAD, the median values of translation error were 2.85, 4.63 and 2.56 mm along the x, y and z axes and 0.5°, 1.1° and 1.0° around the x, y and z axes respectively. For the cross-modality problems, the mean standard deviation (MSD) observed in x, y and z positioning was 0.18, 0.28 and 0.16 mm respectively. The MSD of orientation was 5.35°, 1.95° and 2.48° around the x, y and z axes respectively. MI performed as well as SAD for monomodality registration. Unlike SAD, MI is also useful for cross-modality image registration tasks, producing visually acceptable results with minimal pre-processing.
Directory of Open Access Journals (Sweden)
Alberto Cargnelutti Filho
2004-12-01
Full Text Available The objective of this work was to verify the fit of data series of ten-day mean global solar radiation, from 22 municipalities of Rio Grande do Sul State, Brazil, to the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions. The Kolmogorov-Smirnov goodness-of-fit test was applied to the 792 data series (22 municipalities x 36 ten-day periods) of mean global solar radiation to verify the fit of the data to the normal, log-normal, gamma, Gumbel and Weibull distributions, totaling 3,960 tests. The ten-day mean global solar radiation data fit the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions, with the best fit to the normal probability distribution function.
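The Kolmogorov-Smirnov comparison used here can be reproduced in a few lines. A sketch on a deliberately skewed synthetic sample, where the log transform visibly improves a normal fit (illustrative only; per the abstract, the actual solar-radiation series fit the normal distribution best):

```python
import math, random
from statistics import fmean, pstdev, NormalDist

def ks_statistic(xs, cdf):
    """Kolmogorov-Smirnov distance between an empirical sample and a model CDF."""
    xs = sorted(xs)
    n = len(xs)
    return max(max(cdf(x) - i / n, (i + 1) / n - cdf(x))
               for i, x in enumerate(xs))

def ks_normal_fit(xs):
    """KS distance of a sample to a normal fitted by its own mean and std."""
    nd = NormalDist(fmean(xs), pstdev(xs))
    return ks_statistic(xs, nd.cdf)

# Synthetic log-normal sample: poor normal fit on the raw scale,
# good normal fit after the log transform.
random.seed(8)
sample = [math.exp(random.gauss(2.0, 0.6)) for _ in range(500)]
print("raw scale D =", round(ks_normal_fit(sample), 3))
print("log scale D =", round(ks_normal_fit([math.log(x) for x in sample]), 3))
```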
Distribution functions for the linear region of the S-N curve
Energy Technology Data Exchange (ETDEWEB)
Mueller, Christian; Waechter, Michael; Masendorf, Rainer; Esderts, Alfons [TU Clausthal, Clausthal-Zellerfeld (Germany). Inst. for Plant Engineering and Fatigue Analysis
2017-08-01
This study establishes a database containing results of fatigue tests from the linear region of the S-N curve, using sources from the literature. Each set of test results originates from testing metallic components at a single load level. Eighty-nine test series with sample sizes of 14 ≤ n ≤ 500 are included in the database, for a total of 6,086 individual test results. The test series are examined with respect to the type of distribution function (log-normal or 2-parameter Weibull) using the Shapiro-Wilk test, the Anderson-Darling test and probability plots. The majority of the tested individual test results follow a log-normal distribution.
Patil, N.G.; Benaskar, F.; Rebrov, E.; Meuldijk, J.; Hulshof, L.A.; Hessel, V.; Schouten, J.C.
2014-01-01
A new scale-up concept for microwave assisted flow processing is presented where modular scale-up is achieved by implementing microwave cavities in series. The scale-up concept is demonstrated for case studies of a packed-bed reactor and a wall-coated tubular reactor. With known kinetics and
Continuously tunable monomode mid-infrared vertical external cavity surface emitting laser on Si
Khiar, A.; Rahim, M.; Fill, M.; Felder, F.; Hobrecker, F.; Zogg, H.
2010-10-01
A tunable PbTe-based mid-infrared vertical external cavity surface emitting laser is described. The active part is a ~1 μm thick PbTe layer grown epitaxially on a Bragg mirror on the Si substrate. The cavity is terminated with a curved Si/SiO Bragg top mirror and pumped optically with a 1.55 μm laser. The cavity length is <100 μm in order that only one longitudinal mode is supported. By changing the cavity length, up to 5% continuous and mode-hop-free wavelength tuning is achieved at fixed temperature. The total tuning extends from 5.6 to 4.7 μm at 100-170 K operation temperature.
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
A Dual Power Law Distribution for the Stellar Initial Mass Function
Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett
2018-05-01
We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.
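The high-mass mechanism described above (exponential mass growth halted at an exponentially distributed random stopping time yields a power-law tail) can be illustrated with a small simulation. The growth rate, stopping rate and seed mass below are arbitrary illustrative choices, not fitted values, and this sketch does not reproduce the DPL's low-mass power-law decay, which requires the time-varying stopping probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper):
# exponential mass growth at rate GAMMA, accretion halted at an
# exponentially distributed random time with rate DELTA.
GAMMA, DELTA = 1.0, 2.0
M0 = 0.01  # assumed seed mass

t_stop = rng.exponential(scale=1.0 / DELTA, size=100_000)
masses = M0 * np.exp(GAMMA * t_stop)

# Exponential growth + exponential stopping gives a power-law tail
# P(M > m) ~ m^(-DELTA/GAMMA); estimate the index with the Hill estimator
# over the top 5000 order statistics.
tail = np.sort(masses)[-5000:]
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))

print(f"theoretical tail index: {DELTA / GAMMA:.2f}")
print(f"estimated tail index:   {alpha_hat:.2f}")
```

The estimated index should land close to DELTA/GAMMA = 2, which is the power-law slope this mechanism predicts for the high-mass end.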
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters must be estimated from small samples. The small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.
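The small-sample instability of HOS descriptors that motivates the functional approach can be illustrated with a quick Monte Carlo sketch. The log-normal samples below stand in for sEMG-like data; this shows only the problem (estimator spread at small n), not the CSM statistics themselves.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Monte Carlo: sampling variability of skewness and kurtosis for a
# log-normal "sEMG-like" PDF at a small sample size.
n, reps = 100, 2000
samples = rng.lognormal(mean=0.0, sigma=0.5, size=(reps, n))

skews = stats.skew(samples, axis=1)
kurts = stats.kurtosis(samples, axis=1)  # excess kurtosis

print(f"skewness: mean={skews.mean():.2f}, sd={skews.std():.2f}")
print(f"kurtosis: mean={kurts.mean():.2f}, sd={kurts.std():.2f}")
```

The large spread (and downward bias relative to the theoretical log-normal moments) across replicates is exactly the error the abstract attributes to small sample sizes.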
Using subjective percentiles and test data for estimating fragility functions
International Nuclear Information System (INIS)
George, L.L.; Mensing, R.W.
1981-01-01
Fragility functions are cumulative distribution functions (cdfs) of strengths at failure. They are needed for reliability analyses of systems such as power generation and transmission systems. Subjective opinions supplement sparse test data for estimating fragility functions. Often the opinions are opinions on the percentiles of the fragility function. Subjective percentiles are likely to be less biased than opinions on parameters of cdfs. Solutions to several problems in the estimation of fragility functions are found for subjective percentiles and test data. How subjective percentiles should be used to estimate subjective fragility functions, how subjective percentiles should be combined with test data, how fragility functions for several failure modes should be combined into a composite fragility function, and how inherent randomness and uncertainty due to lack of knowledge should be represented are considered. Subjective percentiles are treated as independent estimates of percentiles. The following are derived: least-squares parameter estimators for normal and lognormal cdfs, based on subjective percentiles (the method is applicable to any invertible cdf); a composite fragility function for combining several failure modes; estimators of variation within and between groups of experts for nonidentically distributed subjective percentiles; weighted least-squares estimators when subjective percentiles have higher variation at higher percents; and weighted least-squares and Bayes parameter estimators based on combining subjective percentiles and test data. 4 figures, 2 tables
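The percentile-based least-squares idea can be sketched for a lognormal cdf: since ln(x_p) = μ + σ·Φ⁻¹(p), regressing the log of elicited percentile values on normal scores recovers the parameters, and the same trick works for any invertible cdf, as the abstract notes. The percentile opinions below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical expert opinions: percentile levels and the corresponding
# strengths-at-failure (illustrative units) assigned to them.
p = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
x = np.array([310.0, 355.0, 390.0, 430.0, 495.0])

# For a lognormal cdf, ln(x_p) = mu + sigma * Phi^{-1}(p), so ordinary
# least squares on (Phi^{-1}(p), ln x_p) gives the parameter estimates.
z = norm.ppf(p)
sigma, mu = np.polyfit(z, np.log(x), 1)

print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"implied median strength = {np.exp(mu):.1f}")
```

Weighted least squares (e.g. down-weighting high percentiles with larger elicitation variance, as the abstract describes) would replace `polyfit` with a weighted regression.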
Chowdhury, Snehaunshu
2017-01-23
In this study, we demonstrate the use of a scanning mobility particle sizer (SMPS) as an effective tool to measure the probability density functions (PDFs) of soot nanoparticles in turbulent flames. Time-averaged soot PDFs necessary for validating existing soot models are reported at intervals of Δx/D = 5 along the centerline of turbulent, non-premixed, C2H4/N2 flames. The jet exit Reynolds numbers of the flames investigated were 10,000 and 20,000. A simplified burner geometry based on a published design was chosen to aid modelers. Soot was sampled directly from the flame using a sampling probe with a 0.5-mm diameter orifice and diluted with N2 by a two-stage dilution process. The overall dilution ratio was not evaluated. An SMPS system was used to analyze soot particle concentrations in the diluted samples. Sampling conditions were optimized over a wide range of dilution ratios to eliminate the effect of agglomeration in the sampling probe. Two differential mobility analyzers (DMAs) with different size ranges were used separately in the SMPS measurements to characterize the entire size range of particles. In both flames, the PDFs were found to be mono-modal in nature near the jet exit. Further downstream, the profiles were flatter with a fall-off at larger particle diameters. The geometric mean of the soot size distributions was less than 10 nm for all cases and increased monotonically with axial distance in both flames.
Analyzing coastal environments by means of functional data analysis
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension-reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with Cluster Analysis (CA). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each density function by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transform to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
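A minimal sketch of the second vector approach (clr transform followed by PCA), using invented compositional data in place of the laser-spectroscopy PSDs; the sample and bin counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy grain-size data: 30 hypothetical samples x 12 size bins, each row a
# composition (bin frequencies summing to 1), standing in for measured PSDs.
raw = rng.lognormal(mean=0.0, sigma=0.4, size=(30, 12))
psd = raw / raw.sum(axis=1, keepdims=True)

# Centered log-ratio (clr) transform: log of each part divided by the
# geometric mean of its row, as in the second vector approach described.
log_psd = np.log(psd)
clr = log_psd - log_psd.mean(axis=1, keepdims=True)

# PCA on the clr-transformed data via SVD of the column-centered matrix.
centered = clr - clr.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T
explained = s**2 / np.sum(s**2)

print(f"first two components explain {100 * explained[:2].sum():.1f}% of variance")
```

Cluster analysis would then be run on the first few columns of `scores`, mirroring the PCA-then-CA pipeline the study compares against FCA.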
International Nuclear Information System (INIS)
Kleidon, Axel; Pavlick, Ryan; Reu, Bjoern; Adams, Jonathan
2009-01-01
Among the most pronounced large-scale geographic patterns of plant biodiversity are the increase in plant species richness towards the tropics, a more even distribution of the relative abundances of plant species in the tropics, and a nearly log-normal relative abundance distribution. Here we use an individual-based plant diversity model that relates climatic constraints to feasible plant growth strategies to show that all three basic diversity patterns can be predicted merely from the climatic constraints acting upon plant ecophysiological trade-offs. Our model predicts that towards objectively 'harsher' environments, the range of feasible growth strategies resulting in reproductive plants is reduced, thus resulting in lower functional plant species richness. The reduction of evenness is attributed to a more rapid decline in productivity from the most productive to less productive plant growth strategies since the particular setup of the strategy becomes more important in maintaining high productivity in harsher environments. This approach is also able to reproduce the increase in the deviation from a log-normal distribution towards more evenly distributed communities of the tropics. Our results imply that these general biodiversity relationships can be understood primarily by considering the climatic constraints on plant ecophysiological trade-offs.
Probability density functions of photochemicals over a coastal area of Northern Italy
International Nuclear Information System (INIS)
Georgiadis, T.; Fortezza, F.; Alberti, L.; Strocchi, V.; Marani, A.; Dal Bo', G.
1998-01-01
The present paper surveys the findings of experimental studies and analyses of statistical probability density functions (PDFs) applied to air pollutant concentrations to provide an interpretation of the ground-level distributions of photochemical oxidants in the coastal area of Ravenna (Italy). The atmospheric-pollution data set was collected from the local environmental monitoring network for the period 1978-1989. Results suggest that the statistical distribution of surface ozone, once normalised over the solar radiation PDF for the whole measurement period, follows a log-normal law as found for other pollutants. Although the Weibull distribution also offers a good fit of the experimental data, the area's meteorological features seem to favour the former distribution once the statistical index estimates have been analysed. Local transport phenomena are discussed to explain the data tail trends
Fontaine, Mathieu; Latarjet, Jacques; Payre, Jacqueline; Poupelin, Jean-Charles; Ravat, François
2017-03-01
The severe pain related to repeated burn dressing changes at the bedside is often difficult to manage. However, these dressings can be performed at the bedside on spontaneously breathing, non-intubated patients using powerful intravenous opioids with a quick onset and a short duration of action, such as alfentanil. The purpose of this study is to demonstrate the efficacy and safety of the protocol used in our burn unit for pain control during burn dressing changes. The cohort study began after a favorable opinion from the local ethics committee had been obtained. Patients' informed consent was collected. No fasting was required. Vital signs (non-invasive blood pressure, ECG monitoring, cutaneous oxygen saturation, respiratory rate) were continuously monitored throughout the procedure. Boluses of 500 (±250) μg IV alfentanil were administered. A continuous infusion was added in case of insufficient analgesia. Adverse reactions were recorded and pain intensity was measured throughout the dressing change using a ten-step verbal rating scale (VRS) ranging from 0 (no pain) to 10 (worst pain conceivable). 100 dressings (35 patients) were analyzed. Median age was 45 years and median burned area 10%. We observed 3 blood pressure drops, 5 oxygen desaturations (treated with stimulation, without the need for ventilatory support) and one episode of nausea. Most of the patients (87%) were fully conscious during the dressing change and 13% were awakened by verbal stimulation. The median total dose of alfentanil was 2000 μg for a median duration of 35 min. Pain scores during the procedure were low or moderate (mean VRS = 2.0; maximal VRS = 5). Median satisfaction, collected 2 h after the dressing, was 10 on a ten-step scale. Pain control with intravenous alfentanil alone is efficient and appears safe for most repeated bedside burn dressings in hospitalized patients. It achieves satisfactory analgesia during and after the procedure.
It is now our standard analgesic method for repeated bedside dressing changes in burned patients. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
DEFF Research Database (Denmark)
Tran, Phuong Hoang; Hansen, Poul Erik; Nguyen, Hai Truong
2015-01-01
Erbium trifluoromethanesulfonate is found to be a good catalyst for the Friedel–Crafts acylation of arenes containing electron-donating substituents using aromatic carboxylic acids as the acylating agents under microwave irradiation. An effective, rapid and waste-free method allows the preparation of a wide range of aryl ketones in good yields and in short reaction times with minimum amounts of waste.
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible hazard function shapes, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Hazard function analysis for flood planning under nonstationarity
Read, Laura K.; Vogel, Richard M.
2016-05-01
The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. While for a stationary process the probability distribution function (pdf) of the return period always follows an exponential distribution, the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
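The Monte Carlo step can be sketched as follows, assuming an illustrative upward trend in the log-mean of lognormal annual maxima; the trend value, threshold choice, and parameter values are mine, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Nonstationary two-parameter lognormal annual maxima: the log-mean
# drifts upward over time (illustrative trend, not fitted values).
mu0, sigma, trend = 3.0, 0.4, 0.01
years = np.arange(1, 1001)

# Threshold: the initial 100-year event (99th percentile at t = 0).
x0 = stats.lognorm.ppf(0.99, s=sigma, scale=np.exp(mu0))

# Monte Carlo the return period T = first year the threshold is exceeded.
n_sim = 5000
mu_t = mu0 + trend * years
exceed = rng.lognormal(mean=mu_t, sigma=sigma, size=(n_sim, years.size)) > x0
T = exceed.argmax(axis=1) + 1  # index of first exceedance, in years

# Under the upward trend, T falls well short of the stationary value of
# 100 years; fit a two-parameter Weibull (location fixed at 0) to its pdf.
c, loc, scale_w = stats.weibull_min.fit(T, floc=0)
print(f"mean return period: {T.mean():.1f} years (stationary: 100)")
print(f"fitted Weibull shape: {c:.2f}")
```

The shortened mean return period and the adequacy of a two-parameter Weibull fit are the qualitative behaviors the abstract reports for the nonstationary lognormal case.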
Plume particle collection and sizing from static firing of solid rocket motors
Sambamurthi, Jay K.
1995-01-01
A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large-scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass-averaged diameters, d43, measured from the samples for the different motors ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry-standard Hermsen's correlation, within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of the particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13-0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.
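The quoted monomodal log-normal form can be turned into a small numerical check. Both inputs below are assumptions the abstract does not pin down: that the 0.13-0.15 standard deviation is in log10-diameter units, and a median diameter placed inside the reported 8-11 μm range.

```python
import numpy as np
from scipy import stats

# Cumulative mass fraction modeled as a log-normal in particle diameter,
# with an assumed median of 9 um (within the reported 8-11 um range) and
# the reported sigma of 0.14, assumed to be in log10-diameter units.
d50 = 9.0
sigma = 0.14 * np.log(10)   # convert log10 sigma to natural-log units

cmf = stats.lognorm(s=sigma, scale=d50)

for d in (5.0, 9.0, 15.0):
    print(f"mass fraction below {d:4.1f} um: {cmf.cdf(d):.3f}")
```

By construction, half the oxide mass lies below the median diameter; the narrow sigma concentrates nearly all mass between roughly 5 and 15 μm, consistent with a tight monomodal distribution.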
Gelfand, I M; Graev, M I; Vilenkin, N Y; Pyatetskii-Shapiro, I I
Volume 1 is devoted to basics of the theory of generalized functions. The first chapter contains main definitions and most important properties of generalized functions as functional on the space of smooth functions with compact support. The second chapter talks about the Fourier transform of generalized functions. In Chapter 3, definitions and properties of some important classes of generalized functions are discussed; in particular, generalized functions supported on submanifolds of lower dimension, generalized functions associated with quadratic forms, and homogeneous generalized functions are studied in detail. Many simple basic examples make this book an excellent place for a novice to get acquainted with the theory of generalized functions. A long appendix presents basics of generalized functions of complex variables.
Sun, Ying
2011-01-01
This article proposes an informative exploratory tool, the functional boxplot, for visualizing functional data, as well as its generalization, the enhanced functional boxplot. Based on the center outward ordering induced by band depth for functional data, the descriptive statistics of a functional boxplot are: the envelope of the 50% central region, the median curve, and the maximum non-outlying envelope. In addition, outliers can be detected in a functional boxplot by the 1.5 times the 50% central region empirical rule, analogous to the rule for classical boxplots. The construction of a functional boxplot is illustrated on a series of sea surface temperatures related to the El Niño phenomenon and its outlier detection performance is explored by simulations. As applications, the functional boxplot and enhanced functional boxplot are demonstrated on children growth data and spatio-temporal U.S. precipitation data for nine climatic regions, respectively. This article has supplementary material online. © 2011 American Statistical Association.
Directory of Open Access Journals (Sweden)
Anatoliy Klimyk
2006-01-01
Full Text Available In the paper, properties of orbit functions are reviewed and further developed. Orbit functions on the Euclidean space E_n are symmetrized exponential functions. The symmetrization is fulfilled by a Weyl group corresponding to a Coxeter-Dynkin diagram. Properties of such functions are described. An orbit function is the contribution to an irreducible character of a compact semisimple Lie group G of rank n from one of its Weyl group orbits. It is shown that values of orbit functions are repeated on copies of the fundamental domain F of the affine Weyl group (determined by the initial Weyl group) in the entire Euclidean space E_n. Orbit functions are solutions of the corresponding Laplace equation in E_n, satisfying the Neumann condition on the boundary of F. Orbit functions determine a symmetrized Fourier transform and a transform on a finite set of points.
International Nuclear Information System (INIS)
Angelis De, F.; Haentjens, J.
1995-01-01
The Functional Displays are directly derived from the Man-Machine Design key document: Function-Based Task Analysis. The presentation defines and describes the goals-means structure of the plant function along with applicable control volumes and parameters of interest. The purpose of the subject is to show, through an example of a preliminary design, what the main parts of a function are. (3 figs.)
Chitil, Olaf
2009-01-01
Functional programming is a programming paradigm like object-oriented programming and logic programming. Functional programming comprises both a specific programming style and a class of programming languages that encourage and support this programming style. Functional programming enables the programmer to describe an algorithm on a high-level, in terms of the problem domain, without having to deal with machine-related details. A program is constructed from functions that only map inputs to ...
DEFF Research Database (Denmark)
Della Pia, Eduardo Antonio; Hansen, Randi Westh; Zoonens, Manuela
2014-01-01
Amphipols are amphipathic polymers that stabilize membrane proteins isolated from their native membrane. They have been functionalized with various chemical groups in the past years for protein labeling and protein immobilization. This large toolbox of functionalized amphipols combined with their...... surfaces for various applications in synthetic biology. This review summarizes the properties of functionalized amphipols suitable for synthetic biology approaches....
Serum thyrotropin (TSH) levels in patients with suppressed pituitary function
International Nuclear Information System (INIS)
Vasavada, P.; Chen, I.; Maxon, H.; Barnes, E.; Sperling, M.
1984-01-01
The diagnosis of borderline hyperthyroidism is difficult. A sensitive radioimmunoassay capable of detecting subnormal levels of serum TSH may be of value in confirming this diagnosis because of the suppressed pituitary function in this disease state. This sensitive assay may also be useful in monitoring the suppression of pituitary function in thyroid cancer patients receiving thyroid hormone therapy. A sensitive radioimmunoassay capable of detecting serum TSH levels as low as 0.25 μU/ml with coefficients of variation less than 17.2% was used to measure serum TSH levels in 80 healthy subjects, 44 hyperthyroid patients, and 25 athyrotic thyroid cancer patients on daily suppressive doses of thyroxine. All healthy subjects had detectable TSH levels, with a mean value of 1.17 and a two-standard-deviation range of 0.41 - 2.70 μU/ml (lognormal distribution). Although the mean ±1 SEM values of 0.63 ± 0.003 μU/ml for hyperthyroid patients and 0.76 ± 0.08 μU/ml for thyroid cancer patients were significantly lower than that of healthy subjects (t-test, p<0.05), subnormal levels of serum TSH were found in only 28.6% (12/42) and 24% (6/25) of hyperthyroid and thyroid cancer patients, respectively. TSH stimulation tests performed in 6 of the cancer patients all gave suppressed responses. Because of considerable overlap, serum TSH levels alone cannot distinguish hyperthyroidism from euthyroidism. However, a sensitive TSH radioimmunoassay such as the one described here may be of value in evaluating the extent of pituitary suppression in thyroid cancer therapy.
DEFF Research Database (Denmark)
Campi, Stefano; Gardner, Richard; Gronchi, Paolo
2012-01-01
Variants of the brightness function of a convex body K in n-dimensional Euclidean space are investigated. The Lambertian lightness function L(K; v, w) gives the total reflected light resulting from illumination by a light source at infinity in the direction w that is visible when looking in the direction v. The partial brightness function R(K; v, w) gives the area of the projection orthogonal to v of the portion of the surface of K that is both illuminated by a light source from the direction w and visible when looking in the direction v. A class of functions called lightness functions is introduced that includes L(K;.) and R(K;.) as special cases. Much of the theory of the brightness function, such as uniqueness, stability, and the existence and properties of convex bodies of maximal and minimal volume with finitely many function values equal to those of a given convex body, is extended...
Fitting Statistical Distributions Functions on Ozone Concentration Data at Coastal Areas
International Nuclear Information System (INIS)
Muhammad Yazid Nasir; Nurul Adyani Ghazali; Muhammad Izwan Zariq Mokhtar; Norhazlina Suhaimi
2016-01-01
Ozone is known as one of the pollutants that contribute to the air pollution problem. Therefore, it is important to carry out studies on ozone. The objective of this study is to find the best statistical distribution for ozone concentration. Three distributions, namely Inverse Gaussian, Weibull and Lognormal, were chosen to fit one year of hourly average ozone concentration data from 2010 at Port Dickson and Port Klang. The maximum likelihood estimation (MLE) method was used to estimate the parameters to develop the probability density function (PDF) graph and cumulative density function (CDF) graph. Three performance indicators (PI), namely normalized absolute error (NAE), prediction accuracy (PA), and coefficient of determination (R²), were used to determine the goodness-of-fit criteria of the distributions. Results show that the Weibull distribution is the best distribution, with the smallest error measure value (NAE) at Port Klang and Port Dickson of 0.08 and 0.31, respectively. The best score for the highest adequacy measure (PA: 0.99) comes with an R² value of 0.98 (Port Klang) and 0.99 (Port Dickson). These results provide useful information to local authorities for prediction purposes. (author)
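A sketch of the fitting-and-scoring procedure with SciPy, using synthetic data in place of the monitoring records. The NAE formula used here (sum of absolute quantile errors over the sum of observations) is one common definition and may differ in detail from the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic hourly "ozone concentrations" standing in for the Port
# Dickson / Port Klang monitoring data, which are not reproduced here.
obs = rng.weibull(a=2.2, size=8760) * 0.04

candidates = {
    "Inverse Gaussian": stats.invgauss,
    "Weibull": stats.weibull_min,
    "Lognormal": stats.lognorm,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(obs, floc=0)          # MLE with location fixed at 0
    # Compare fitted and empirical quantiles, then score the fit with a
    # normalized absolute error (NAE).
    q = np.linspace(0.01, 0.99, 99)
    pred = dist.ppf(q, *params)
    emp = np.quantile(obs, q)
    results[name] = np.sum(np.abs(pred - emp)) / np.sum(emp)
    print(f"{name:16s} NAE = {results[name]:.3f}")
```

Since the synthetic data are Weibull-generated, the Weibull fit should score the lowest NAE here, mirroring the study's conclusion for the real data.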
Kantorovich, L V
1982-01-01
Functional Analysis examines trends in functional analysis as a mathematical discipline and the ever-increasing role played by its techniques in applications. The theory of topological vector spaces is emphasized, along with the applications of functional analysis to applied analysis. Some topics of functional analysis connected with applications to mathematical economics and control theory are also discussed. Comprised of 18 chapters, this book begins with an introduction to the elements of the theory of topological spaces, the theory of metric spaces, and the theory of abstract measure space
Ludwig, L; McWhirter, L; Williams, S; Derry, C; Stone, J
2016-01-01
Functional coma - here defined as a prolonged motionless dissociative attack with absent or reduced response to external stimuli - is a relatively rare presentation. In this chapter we examine a wide range of terms used to describe states of unresponsiveness in which psychologic factors are relevant to etiology, such as depressive stupor, catatonia, nonepileptic "pseudostatus," and factitious disorders, and discuss the place of functional or psychogenic coma among these. Historically, diagnosis of functional coma has sometimes been reached after prolonged investigation and exclusion of other diagnoses. However, as is the case with other functional disorders, diagnosis should preferably be made on the basis of positive findings that provide evidence of inconsistency between an apparent comatose state and normal waking nervous system functioning. In our review of physical signs, we find some evidence for the presence of firm resistance to eye opening as reasonably sensitive and specific for functional coma, as well as the eye gaze sign, in which patients tend to look to the ground when turned on to one side. Noxious stimuli such as Harvey's sign (application of high-frequency vibrating tuning fork to the nasal mucosa) can also be helpful, although patients with this disorder are often remarkably unresponsive to usually painful stimuli, particularly as more commonly applied using sternal or nail bed pressure. The use of repeated painful stimuli is therefore not recommended. We also discuss the role of general anesthesia and other physiologic triggers to functional coma. © 2016 Elsevier B.V. All rights reserved.
Rhinoplasty (Functional) ... performed to restore breathing, it typically necessitates some type of change to the appearance of the nose. ...
Schwingenschuh, P; Deuschl, G
2016-01-01
Functional tremor is the commonest reported functional movement disorder. A confident clinical diagnosis of functional tremor is often possible based on the following "positive" criteria: a sudden tremor onset, unusual disease course, often with fluctuations or remissions, distractibility of the tremor if attention is removed from the affected body part, tremor entrainment, tremor variability, and a coactivation sign. Many patients show excessive exhaustion during examination. Other somatizations may be revealed in the medical history and patients may show additional functional neurologic symptoms and signs. In cases where the clinical diagnosis remains challenging, providing a "laboratory-supported" level of certainty aids an early positive diagnosis. In rare cases, in which the distinction from Parkinson's disease is difficult, dopamine transporter single-photon emission computed tomography (DAT-SPECT) can be indicated. © 2016 Elsevier B.V. All rights reserved.
Because chemicals can adversely affect cognitive function in humans, considerable effort has been made to characterize their effects using animal models. Information from such models will be necessary to: evaluate whether chemicals identified as potentially neurotoxic by screenin...
DEFF Research Database (Denmark)
Danvy, Olivier
2000-01-01
A string-formatting function such as printf in C seemingly requires dependent types, because its control string determines the rest of its arguments. We show how changing the representation of the control string makes it possible to program printf in ML (which does not allow dependent types). The result is well typed and perceptibly more efficient than the corresponding library functions in Standard ML of New Jersey and in Caml.
DEFF Research Database (Denmark)
Danvy, Olivier
1998-01-01
A string-formatting function such as printf in C seemingly requires dependent types, because its control string determines the rest of its arguments. We show how changing the representation of the control string makes it possible to program printf in ML (which does not allow dependent types). The result is well typed and perceptibly more efficient than the corresponding library functions in Standard ML of New Jersey and in Caml.
Czech Academy of Sciences Publication Activity Database
Bustince, H.; Fernández, J.; Mesiar, Radko; Montero, J.; Orduna, R.
2010-01-01
Roč. 72, 3-4 (2010), s. 1488-1499 ISSN 0362-546X R&D Projects: GA ČR GA402/08/0618 Institutional research plan: CEZ:AV0Z10750506 Keywords : t-norm * Migrative property * Homogeneity property * Overlap function Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://library.utia.cas.cz/separaty/2009/E/mesiar-overlap functions.pdf
International Nuclear Information System (INIS)
Pfeifle, T.W.; Mellegard, K.D.; Munson, D.E.
1992-10-01
The modified Munson-Dawson (M-D) constitutive model that describes the creep behavior of salt will be used in performance assessment calculations to assess compliance of the Waste Isolation Pilot Plant (WIPP) facility with requirements governing the disposal of nuclear waste. One of these standards requires that the uncertainty of future states of the system, material model parameters, and data be addressed in the performance assessment models. This paper presents a method in which measurement uncertainty and the inherent variability of the material are characterized by treating the M-D model parameters as random variables. The random variables can be described by appropriate probability distribution functions which then can be used in Monte Carlo or structural reliability analyses. Estimates of three random variables in the M-D model were obtained by fitting a scalar form of the model to triaxial compression creep data generated from tests of WIPP salt. Candidate probability distribution functions for each of the variables were then fitted to the estimates and their relative goodness-of-fit tested using the Kolmogorov-Smirnov statistic. A sophisticated statistical software package obtained from BMDP Statistical Software, Inc. was used in the M-D model fitting. A separate software package, STATGRAPHICS, was used in fitting the candidate probability distribution functions to estimates of the variables. Skewed distributions, i.e., lognormal and Weibull, were found to be appropriate for the random variables analyzed.
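The fit-then-test pipeline described above (fit a candidate distribution to the parameter estimates, then check goodness-of-fit with the Kolmogorov-Smirnov statistic) can be sketched in a few lines of Python rather than BMDP/STATGRAPHICS. The data below are synthetic placeholders, not WIPP creep-parameter estimates:

```python
import math
import random
import statistics

random.seed(42)
# synthetic stand-ins for parameter estimates (placeholder data)
data = sorted(random.lognormvariate(1.0, 0.4) for _ in range(200))

# MLE fit of a lognormal: fit a normal distribution to the log-data
logs = [math.log(x) for x in data]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)
fitted = statistics.NormalDist(mu, sigma)

# one-sample Kolmogorov-Smirnov statistic against the fitted lognormal CDF
n = len(data)
ks = max(max(abs((i + 1) / n - fitted.cdf(math.log(x))),
             abs(i / n - fitted.cdf(math.log(x))))
         for i, x in enumerate(data))
print(round(ks, 3))  # small KS statistic -> lognormal is a plausible fit
```

A Weibull candidate would be handled the same way, swapping in its fitted CDF; the distribution with the smaller KS statistic is the better candidate.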
Nambudiripad, K B M
2014-01-01
After presenting the theory in engineers' language without the unfriendly abstraction of pure mathematics, several illustrative examples are discussed in great detail to see how the various functions of the Bessel family enter into the solution of technically important problems. Axisymmetric vibrations of a circular membrane, oscillations of a uniform chain, heat transfer in circular fins, buckling of columns of varying cross-section, vibrations of a circular plate and current density in a conductor of circular cross-section are considered. The problems are formulated purely from physical considerations (using, for example, Newton's law of motion, Fourier's law of heat conduction, electromagnetic field equations, etc.). Infinite series expansions, recurrence relations, manipulation of expressions involving Bessel functions, orthogonality and expansion in Fourier-Bessel series are also covered in some detail. Some important topics such as asymptotic expansions, generating function and Sturm-Liouville theory are r...
Ghoussoub, Nassif
2013-01-01
The book describes how functional inequalities are often manifestations of natural mathematical structures and physical phenomena, and how a few general principles validate large classes of analytic/geometric inequalities, old and new. This point of view leads to "systematic" approaches for proving the most basic inequalities, but also for improving them, and for devising new ones--sometimes at will and often on demand. These general principles also offer novel ways for estimating best constants and for deciding whether these are attained in appropriate function spaces. As such, improvements of Hardy and Hardy-Rellich type inequalities involving radially symmetric weights are variational manifestations of Sturm's theory on the oscillatory behavior of certain ordinary differential equations. On the other hand, most geometric inequalities, including those of Sobolev and Log-Sobolev type, are simply expressions of the convexity of certain free energy functionals along the geodesics on the Wasserstein manifold of...
Bliss, Gilbert Ames
1933-01-01
This book, immediately striking for its conciseness, is one of the most remarkable works ever produced on the subject of algebraic functions and their integrals. The distinguishing feature of the book is its third chapter, on rational functions, which gives an extremely brief and clear account of the theory of divisors.... A very readable account is given of the topology of Riemann surfaces and of the general properties of abelian integrals. Abel's theorem is presented, with some simple applications. The inversion problem is studied for the cases of genus zero and genus unity. The chapter on t
Kleibeuker, JH; Thijs, JC
2004-01-01
Purpose of review Functional dyspepsia is a common disorder, most of the time of unknown etiology and with variable pathophysiology. Therapy has been and still is largely empirical. Data from recent studies provide new clues for targeted therapy based on knowledge of etiology and pathophysiologic
Distribution functions of magnetic nanoparticles determined by a numerical inversion method
International Nuclear Information System (INIS)
Bender, P; Balceris, C; Ludwig, F; Posth, O; Bogart, L K; Szczerba, W; Castro, A; Nilsson, L; Costo, R; Gavilán, H; González-Alonso, D; Pedro, I de; Barquín, L Fernández; Johansson, C
2017-01-01
In the present study, we applied a regularized inversion method to extract the particle size, magnetic moment and relaxation-time distribution of magnetic nanoparticles from small-angle x-ray scattering (SAXS), DC magnetization (DCM) and AC susceptibility (ACS) measurements. For the measurements the particles were colloidally dispersed in water. At first approximation the particles could be assumed to be spherically shaped and homogeneously magnetized single-domain particles. As model functions for the inversion, we used the particle form factor of a sphere (SAXS), the Langevin function (DCM) and the Debye model (ACS). The extracted distributions exhibited features/peaks that could be distinctly attributed to the individually dispersed and non-interacting nanoparticles. Further analysis of these peaks enabled, in combination with a prior characterization of the particle ensemble by electron microscopy and dynamic light scattering, a detailed structural and magnetic characterization of the particles. Additionally, all three extracted distributions featured peaks, which indicated deviations of the scattering (SAXS), magnetization (DCM) or relaxation (ACS) behavior from the one expected for individually dispersed, homogeneously magnetized nanoparticles. These deviations could be mainly attributed to partial agglomeration (SAXS, DCM, ACS), uncorrelated surface spins (DCM) and/or intra-well relaxation processes (ACS). The main advantage of the numerical inversion method is that no ad hoc assumptions regarding the line shape of the extracted distribution functions are required, which enabled the detection of these contributions. We highlighted this by comparing the results with the results obtained by standard model fits, where the functional form of the distributions was a priori assumed to be log-normal shaped. (paper)
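The core of such a regularized inversion can be illustrated with a toy DCM-style example: build a Langevin kernel over a grid of candidate moments, generate noisy synthetic magnetization data from a known lognormal moment distribution, and recover the distribution by Tikhonov-regularized least squares. The grids, units, noise level, and identity regularizer below are all illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def langevin(x):
    # Langevin function L(x) = coth(x) - 1/x, valid for x > 0
    return 1.0 / np.tanh(x) - 1.0 / x

# grids of candidate magnetic moments and applied fields (arbitrary units)
m = np.linspace(0.05, 5.0, 60)
H = np.linspace(0.01, 10.0, 80)
A = langevin(np.outer(H, m))        # kernel matrix: row i = response at field H[i]

# synthetic "true" lognormal moment distribution and noisy DCM-like data
true_w = np.exp(-0.5 * (np.log(m) / 0.3) ** 2)   # mode at m = 1.0
true_w /= true_w.sum()
data = A @ true_w + rng.normal(0.0, 1e-3, H.size)

# Tikhonov-regularized inversion: minimize ||A w - data||^2 + lam ||w||^2
lam = 1e-2
w = np.linalg.solve(A.T @ A + lam * np.eye(m.size), A.T @ data)

peak = float(m[np.argmax(w)])
print(round(peak, 2))   # recovered distribution peaks near the true mode
```

No lognormal line shape is assumed anywhere in the inversion itself, which is the point the authors emphasize: the extracted distribution is free to develop extra peaks that flag agglomeration or other deviations.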
Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.
2012-04-01
Data from a field survey of the 2011 tsunami in the Sanriku area of Japan is presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated using a theoretical log-normal curve [Choi et al, 2002]. The characteristics of the distribution functions derived from the runup-heights data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates the sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
Directory of Open Access Journals (Sweden)
B. H. Choi
2012-05-01
Full Text Available Data from a field survey of the 2011 Tohoku-oki tsunami in the Sanriku area of Japan is used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated by a theoretical log-normal curve. The characteristics of the distribution functions of the 2011 event are compared with data from two previous catastrophic tsunamis (1896 and 1933 that occurred in almost the same region. The number of observations during the last tsunami is very large, which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and the number of observed runup heights suggested by Kajiura (1983 based on a small amount of data on previous tsunamis. The distribution function of the 2011 event demonstrates the sensitivity to the number of measurements (many of them cannot be considered independent measurements and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
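The log-normal description of runup heights lends itself to a compact numerical sketch: take logarithms of the heights, fit a normal distribution, and read off medians and exceedance probabilities. The sample below is synthetic (random lognormal heights standing in for the survey data), with parameter values chosen purely for illustration:

```python
import math
import random
import statistics

random.seed(7)
# hypothetical runup heights (m): a lognormal sample standing in for survey data
heights = [random.lognormvariate(math.log(8.0), 0.5) for _ in range(5247)]

logs = [math.log(h) for h in heights]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)

# fitted lognormal: median and exceedance probability P(H > 20 m)
dist = statistics.NormalDist(mu, sigma)
median = math.exp(mu)
p_exceed_20m = 1.0 - dist.cdf(math.log(20.0))
print(round(median, 1), round(p_exceed_20m, 3))
```

The same two fitted parameters summarize an entire coastline's runup statistics, which is what makes the log-normal curve convenient for comparing the 1896, 1933, and 2011 events.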
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law or a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
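A minimal version of this kind of model comparison ranks candidate distributions by maximum log-likelihood. The sketch below uses synthetic areas in place of the 120 SAD measurements and a Clauset-style MLE for the power-law exponent; with lognormal input, the lognormal model should win:

```python
import math
import random
import statistics

random.seed(3)
# synthetic stand-ins for SAD cross-sectional areas (not the actual measurements)
areas = [random.lognormvariate(2.0, 0.6) for _ in range(120)]

def loglik_lognormal(xs):
    ls = [math.log(x) for x in xs]
    mu, s = statistics.fmean(ls), statistics.pstdev(ls)
    return sum(-math.log(x * s * math.sqrt(2 * math.pi))
               - (math.log(x) - mu) ** 2 / (2 * s * s) for x in xs)

def loglik_normal(xs):
    mu, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(-math.log(s * math.sqrt(2 * math.pi))
               - (x - mu) ** 2 / (2 * s * s) for x in xs)

def loglik_powerlaw(xs):
    # continuous power law above xmin, MLE exponent (Clauset-style)
    xmin = min(xs)
    alpha = 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)
    return sum(math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin) for x in xs)

lls = {"lognormal": loglik_lognormal(areas),
       "normal": loglik_normal(areas),
       "powerlaw": loglik_powerlaw(areas)}
best = max(lls, key=lls.get)
print(best)  # → lognormal
```

All three candidates have comparable parameter counts, so the raw log-likelihood ranking is close to an AIC ranking here.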
Directory of Open Access Journals (Sweden)
Fani Nolimal
2000-12-01
Full Text Available The author first defines literacy as the ability to cooperate in all fields of life and points out the features of illiterate or semi-literate individuals. The main stress is laid upon the assessment of literacy and illiteracy. In her opinion, the main weakness of this kind of evaluation is its vague psychometric characteristics, which leads to results valid in a single geographical or cultural environment only. She also determines the factors causing illiteracy, and she states that the level of functional literacy is increasingly becoming a national indicator of successfulness.
International Nuclear Information System (INIS)
Sorichter, S.
2009-01-01
The term lung function is often restricted to the assessment of volume-time curves measured at the mouth. Spirometry includes the assessment of lung volumes which can be mobilised, with the corresponding flow-volume curves. In addition, lung volumes that cannot be mobilised, such as the residual volume, or only partially, such as FRC and TLC, can be measured by body plethysmography combined with the determination of the airway resistance. Body plethysmography allows the correct positioning of forced breathing manoeuvres on the volume axis, e.g. before and after pharmacotherapy. Adding the CO single-breath transfer factor (TLCO), which includes the measurement of the ventilated lung volume using He, enables a clear diagnosis of different obstructive, restrictive or mixed ventilatory defects with and without trapped air. Tests of reversibility and provocation, as well as the assessment of inspiratory mouth pressures (PImax, P0.1), help to classify the underlying disorder and to clarify treatment strategies. For further information and to complete the diagnosis of disturbances of ventilation, diffusion and/or perfusion, (capillary-)arterial blood gases at rest and under physical strain, sometimes amended by ergospirometry, are recommended. Ideally, lung function measurements are amended by radiological and nuclear medicine techniques. (orig.)
International Nuclear Information System (INIS)
Weber, J.; May, R.; Biland, L.; Endert, G.; Gottlob, R.; Justich, E.; Luebcke, P.; Mignon, G.; Moltz, L.; Partsch, H.; Petter, A.; Ritter, H.; Soerensen, R.; Widmer, L.K.; Widmer, M.T.; Zemp, E.
1990-01-01
The book presents a complete survey of the problems occurring in the venous system of the legs, pelvis, and abdomen. The material is arranged in the following main chapters: (1) Introduction to the phlebology of the low-pressure system in the lower part of the body; (2) Phlebographic methods; (3) Instrumented function studies and methods; (4) Pathologic findings; (5) Diagnostic methods and vein therapy; (6) Interventional radiology; (7) Expert opinions on venous lesions including insurance aspects. The first chapter encompasses a section briefly discussing the available instrumented diagnostic imaging methods. In view of the novel imaging methods, namely digital subtraction phlebography, sonography, CT and MRI, classical phlebography remains the gold standard, so to speak: all currently available phlebographic methods for imaging the veins in the legs, pelvis and abdomen are explained and comparatively evaluated. Instrumented function tests such as Doppler ultrasound testing, plethysmography, and peripheral and central phlebodynamometry (venous pressure measurement) are analysed for their diagnostic value and as alternative or supplementing techniques in comparison to phlebography. (orig./MG)
Directory of Open Access Journals (Sweden)
Deuber Dominic
2018-04-01
Full Text Available A functional credential allows a user to anonymously prove possession of a set of attributes that fulfills a certain policy. The policies are arbitrary polynomially computable predicates that are evaluated over arbitrary attributes. The key feature of this primitive is the delegation of verification to third parties, called designated verifiers. The delegation protects the privacy of the policy: a designated verifier can verify that a user satisfies a certain policy without learning anything about the policy itself. We illustrate the usefulness of this property in different applications, including outsourced databases with access control. We present a new framework to construct functional credentials that does not require (non-interactive) zero-knowledge proofs. This is important in settings where the statements are complex and thus the resulting zero-knowledge proofs are not efficient. Our construction is based on any predicate encryption scheme and the security relies on standard assumptions. A complexity analysis and an experimental evaluation confirm the practicality of our approach.
Directory of Open Access Journals (Sweden)
Rohit Tewari
2013-01-01
Full Text Available Coronary angiography underestimates or overestimates lesion severity, but still remains the cornerstone in the decision making for revascularization for an overwhelming majority of interventional cardiologists. Guidelines recommend that non-invasive functional evaluation ought to precede revascularization. In real-world practice, this is adopted in less than 50% of patients who go on to have some form of revascularization. Fractional flow reserve (FFR) is the ratio of maximal blood flow in a stenotic coronary artery relative to maximal flow in the same vessel, were it normal. It is independent of changes in heart rate, blood pressure or prior infarction, and takes into account the contribution of collateral blood flow. It is a highly specific index with a reasonably high sensitivity (88%), specificity (100%), positive predictive value (100%), and overall accuracy (93%). While FFR provides an objective determination of ischemia and helps select appropriate candidates for revascularization (for both CABG and PCI) in the cath lab itself before intervention, intravascular ultrasound/optical coherence tomography guidance during PCI can secure the procedure by optimizing stent expansion. Functional angioplasty simply means incorporating both intravascular ultrasound and FFR into our daily interventional practice.
International Nuclear Information System (INIS)
Park, J. Y.; Hong, G. W.; Lee, H. J.
2002-05-01
Development of fabrication processes for functional ceramic materials, evaluation of characteristics and experiments for understanding of irradiation behavior of ceramics were carried out for application of ceramics to the nuclear industry. The developed processes were the large-area SiC surface coating technology for improvement of wear resistance and corrosion resistance, the fabrication technology of SiC composites for excellent irradiation resistance, performance-improvement technology for SiC fiber, and nano-sized powder processing by combustion ignition and spray. Typical results were CVD SiC coating with diameter of 25 cm and thickness of 100 μm, highly dense SiC composite by F-CVI, heat-treating technology of SiC fiber using B4C powder, and nano-sized powders of ODS-Cu, Li-based breeding materials and Ni-based metal powders with primary particle diameter of 20∼50 nm. Furthermore, test equipment, data production and damage evaluations were performed to understand corrosion resistance and wear resistance of alumina, silicon carbide and silicon nitride under PWR or PHWR operation conditions. Experimental procedures and basic technologies for evaluation of irradiation behavior were also established. Additionally, highly reactive precursor powders were developed by various technologies and applied to the fabrication of 100 m long Ag/Bi-2223 multi-filamentary wires. High-Tc magnets and a flywheel for energy storage were developed as well.
Energy Technology Data Exchange (ETDEWEB)
Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)
2014-07-01
In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (Kd) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The Kd values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of Kd ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The Kd best estimates were defined as the calculated geometric mean (GM) values, assuming that Kd values were always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the Kd parameter, a suitable continuous function which contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best explain the distribution among the Kd values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not been proposed for radionuclide Kd in soils yet. Therefore, the aim of this work is to create CDFs for
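The best-estimate-plus-CDF recipe reduces to fitting a normal distribution to ln(Kd): the geometric mean is the best estimate, and the fitted normal supplies the lognormal CDF. The Kd values below are made-up illustrative numbers, not entries from the compiled database:

```python
import math
import statistics

# hypothetical Kd values (L/kg) for one soil group -- illustrative only
kd = [120.0, 85.0, 310.0, 45.0, 150.0, 95.0, 220.0, 60.0]

logs = [math.log(x) for x in kd]
gm = math.exp(statistics.fmean(logs))    # geometric mean = best estimate
gsd = math.exp(statistics.stdev(logs))   # geometric standard deviation

# lognormal CDF: P(Kd <= x) under the fitted distribution
dist = statistics.NormalDist(statistics.fmean(logs), statistics.stdev(logs))
p_below_gm = dist.cdf(math.log(gm))
print(round(p_below_gm, 2))  # → 0.5 (the GM is the lognormal median)
```

The same fitted CDF yields any desired confidence range, e.g. the 5th-95th percentile interval exp(μ ± 1.645σ) in Kd units.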
Special functions & their applications
Lebedev, N N
1972-01-01
Famous Russian work discusses the application of cylinder functions and spherical harmonics; gamma function; probability integral and related functions; Airy functions; hypergeometric functions; more. Translated by Richard Silverman.
DEFF Research Database (Denmark)
Mailund, Thomas
Master functions and discover how to write functional programs in R. In this book, you'll make your functions pure by avoiding side-effects; you'll write functions that manipulate other functions; and you'll construct complex functions using simpler functions as building blocks. In Functional Programming in R, you'll see how we can replace loops, which can have side-effects, with recursive functions that can more easily avoid them. In addition, the book covers why you shouldn't use recursion when loops are more efficient, and how you can get the best of both worlds. Functional programming means composing complex functions by combining simpler functions. You will: write functions in R, including infix operators and replacement functions; create higher-order functions; pass functions to other functions and start using functions as data you can manipulate; and use the Filter, Map and Reduce functions to express the intent behind ...
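The Filter/Map/Reduce style the book teaches in R has a direct analogue in Python's built-ins; the sketch below uses Python only to illustrate the same ideas (the book's own examples are in R):

```python
from functools import reduce

# building a complex function from simpler functions
def compose(f, g):
    return lambda x: f(g(x))

inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)
print(inc_then_double(3))  # → 8

# Filter / Map / Reduce expressing intent without an explicit loop
nums = range(1, 11)
evens = filter(lambda n: n % 2 == 0, nums)
squares = map(lambda n: n * n, evens)
total = reduce(lambda a, b: a + b, squares, 0)
print(total)  # → 220 (sum of squares of the even numbers up to 10)
```

Each stage names what is being done rather than how, which is the "express the intent" point the abstract makes.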
Resummed coefficient function for the shape function
Aglietti, U.
2001-01-01
We present a leading evaluation of the resummed coefficient function for the shape function. It is also shown that the coefficient function is short-distance-dominated. Our results allow relating the shape function computed on the lattice to the physical QCD distributions.
Time functions function best as functions of multiple times
Desain, P.; Honing, H.
1992-01-01
This article presents an elegant way of representing control functions at an abstract level. It introduces time functions that have multiple times as arguments. In this way the generalized concept of a time function can support absolute and relative kinds of time behavior. Furthermore the
Lognormal Kalman filter for assimilating phase space density data in the radiation belts
Kondrashov, D.; Ghil, M.; Shprits, Y.
2011-11-01
Data assimilation combines a physical model with sparse observations and has become an increasingly important tool for scientists and engineers in the design, operation, and use of satellites and other high-technology systems in the near-Earth space environment. Of particular importance is predicting fluxes of high-energy particles in the Van Allen radiation belts, since these fluxes can damage spaceborne platforms and instruments during strong geomagnetic storms. In transiting from a research setting to operational prediction of these fluxes, improved data assimilation is of the essence. The present study is motivated by the fact that phase space densities (PSDs) of high-energy electrons in the outer radiation belt—both simulated and observed—are subject to spatiotemporal variations that span several orders of magnitude. Standard data assimilation methods that are based on least squares minimization of normally distributed errors may not be adequate for handling the range of these variations. We propose herein a modification of Kalman filtering that uses a log-transformed, one-dimensional radial diffusion model for the PSDs and includes parameterized losses. The proposed methodology is first verified on model-simulated, synthetic data and then applied to actual satellite measurements. When the model errors are sufficiently smaller than the observational errors, our methodology can significantly improve analysis and prediction skill for the PSDs compared to the standard Kalman filter formulation. This improvement is documented by monitoring the variance of the innovation sequence.
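The essential trick, working in y = ln(PSD) so that order-of-magnitude (lognormal) errors become additive Gaussian errors, can be sketched with a scalar filter. The persistence forecast, noise levels, and synthetic observations below are illustrative assumptions, not the paper's radial diffusion model:

```python
import math
import random

random.seed(5)

def kalman_step(y_est, var_est, y_obs, q, r):
    # forecast with a persistence model (process noise variance q)
    y_f, var_f = y_est, var_est + q
    # analysis: standard Kalman update, performed in log space
    k = var_f / (var_f + r)
    return y_f + k * (y_obs - y_f), (1.0 - k) * var_f

true_psd = 1e4                        # "true" phase space density (arbitrary units)
y_est, var_est = math.log(1.0), 10.0  # poor first guess, large initial uncertainty
for _ in range(50):
    # observation with multiplicative (lognormal) error of roughly 30%
    obs = true_psd * math.exp(random.gauss(0.0, 0.3))
    y_est, var_est = kalman_step(y_est, var_est, math.log(obs), q=0.01, r=0.09)

print(round(math.exp(y_est)))   # estimate transformed back to PSD space
```

Running the same filter directly on the PSDs would let a single order-of-magnitude excursion dominate the least-squares update; in log space each observation carries comparable weight.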
An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient
Nobile, Fabio; Tamellini, Lorenzo; Tesei, Francesco; Tempone, Raul
2016-01-01
In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature
Modelling the Skinner Thesis : Consequences of a Lognormal or a Bimodal Resource Base Distribution
Auping, W.L.
2014-01-01
The copper case is often used as an example in resource depletion studies. Despite these studies, several profound uncertainties remain in the system. One of these uncertainties is the distribution of copper grades in the lithosphere. The Skinner thesis promotes the idea that copper grades may be
On a direct algorithm for the generation of log-normal pseudo-random numbers
Chamayou, J M F
1976-01-01
The random variable (Π_{i=1}^{n} X_i / X_{i+n})^{1/√(2n)} is used to generate standard lognormal variables Λ(0, 1), where the X_i are independent uniform variables on (0, 1). (8 refs.)
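The algorithm is easy to reproduce: the logarithm of the quoted random variable is a sum of 2n iid terms (differences of logs of uniforms) scaled by 1/√(2n), so by the central limit theorem the log is approximately standard normal. A quick empirical check, with n = 24 chosen arbitrarily:

```python
import math
import random
import statistics

random.seed(11)

def lognormal_direct(n=24):
    # ( prod_{i=1}^{n} X_i / X_{i+n} )^(1/sqrt(2n)), X_i ~ U(0,1) iid:
    # each log-term has mean 0 and variance 2, so the normalized log ~ N(0, 1)
    xs = [random.random() for _ in range(2 * n)]
    s = sum(math.log(xs[i]) - math.log(xs[n + i]) for i in range(n))
    return math.exp(s / math.sqrt(2 * n))

samples = [lognormal_direct() for _ in range(5000)]
logs = [math.log(v) for v in samples]
# the log-sample should have mean ~0 and standard deviation ~1
print(round(statistics.fmean(logs), 2), round(statistics.stdev(logs), 2))
```

Note the variance is exact for any n (each term contributes variance 2 before scaling); only the normality of the log is asymptotic in n.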
The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random
Czech Academy of Sciences Publication Activity Database
Beres, Michal; Domesová, Simona
2017-01-01
Roč. 15, č. 2 (2017), s. 267-279 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2280
An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis
Diwakar, Rekha
2017-01-01
Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which does not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…
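The point is easy to demonstrate numerically: strictly positive, skewed data violates the normality assumption, while its logarithm does not. The data below are synthetic lognormal draws used purely as an illustration:

```python
import math
import random
import statistics

random.seed(2)
# skewed, strictly positive data (e.g. incomes): lognormal stand-in
data = [random.lognormvariate(0.0, 1.0) for _ in range(4000)]

def skewness(xs):
    # standardized third moment; ~0 for symmetric (normal-like) data
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

raw_skew = skewness(data)                          # strongly positive
log_skew = skewness([math.log(x) for x in data])   # near zero
print(round(raw_skew, 1), round(log_skew, 1))
```

Inference that assumes normality (t-tests, confidence intervals based on the sample mean) is therefore better applied to the log-transformed values for such data.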
Log-normal spray drop distribution...analyzed by two new computer programs
Gerald S. Walton
1968-01-01
Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...
Lognormal distribution of natural radionuclides in freshwater ecosystems and coal-ash repositories
International Nuclear Information System (INIS)
Drndarski, N.; Lavi, N.
1997-01-01
This study summarizes and analyses data for the natural radionuclides 40K, 226Ra and 232Th, measured by gamma spectrometry in water samples, sediments and coal-ash samples collected from regional freshwater ecosystems and nearby coal-ash repositories during the last decade, 1986-1996. The frequency plots of the natural radionuclide data, for which the hypothesis of regional-scale lognormality was accepted, exhibited single population groups, with the exception of the 226Ra and 232Th data for waters. The presence of break points in the frequency distribution plots indicated that the 226Ra and 232Th data for waters do not come from a single statistical population. Thereafter the hypothesis of lognormality was accepted for the separate population groups of 226Ra and 232Th in waters. (authors)
Possible Lognormal Distribution of Fermi-LAT Data of OJ 287 G. G. ...
Indian Academy of Sciences (India)
random noise is helpful in the search for periodicity and provides implications for the physical processes in the jet or the accretion disk. OJ 287 was also monitored in the .... understanding the central engine of a blazar (Figures 1, 2 and 3).
Wave-function functionals for the density
International Nuclear Information System (INIS)
Slamet, Marlina; Pan Xiaoyin; Sahni, Viraht
2011-01-01
We extend the idea of the constrained-search variational method for the construction of wave-function functionals ψ[χ] of functions χ. The search is constrained to those functions χ such that ψ[χ] reproduces the density ρ(r) while simultaneously leading to an upper bound to the energy. The functionals are thereby normalized and automatically satisfy the electron-nucleus coalescence condition. The functionals ψ[χ] are also constructed to satisfy the electron-electron coalescence condition. The method is applied to the ground state of the helium atom to construct functionals ψ[χ] that reproduce the density as given by the Kinoshita correlated wave function. The expectations of the single-particle operators W = Σ_i r_i^n, n = -2, -1, 1, 2, and W = Σ_i δ(r_i) are exact, as must be the case. The expectations of the kinetic energy operator W = -(1/2) Σ_i ∇_i², the two-particle operators W = Σ u^n, n = -2, -1, 1, 2, where u = |r_i - r_j|, and the energy are accurate. We note that the construction of such functionals ψ[χ] is an application of the Levy-Lieb constrained-search definition of density functional theory. It is thereby possible to rigorously determine which functional ψ[χ] is closer to the true wave function.
Nonlocal kinetic energy functionals by functional integration
Mi, Wenhui; Genova, Alessandro; Pavanello, Michele
2018-05-01
Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community are searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, Ts[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δTs[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for Ts[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energy, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.
Generalized Probability Functions
Directory of Open Access Journals (Sweden)
Alexandre Souto Martinez
2009-01-01
Full Text Available From the integration of nonsymmetrical hyperbolas, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable to generalize some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn our attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, and the generalized Gaussian and Laplace pdfs. Their cumulative functions and moments are also obtained analytically.
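The one-parameter generalized logarithm described here, in one common parameterization (an assumption on my part; the paper's exact convention may differ), is the Tsallis-style q-logarithm, with the generalized exponential as its inverse. A minimal sketch:

```python
import math

def gen_log(x, q):
    """One-parameter generalized logarithm (Tsallis-style assumption):
    ln_q(x) = (x**q - 1)/q for q != 0, reducing to ln(x) as q -> 0."""
    if abs(q) < 1e-12:
        return math.log(x)
    return (x**q - 1.0) / q

def gen_exp(y, q):
    """Inverse of gen_log: exp_q(y) = (1 + q*y)**(1/q), -> exp(y) as q -> 0."""
    if abs(q) < 1e-12:
        return math.exp(y)
    return (1.0 + q * y) ** (1.0 / q)

# The pair should invert each other and recover log/exp in the q -> 0 limit.
for q in (0.0, 0.5, -0.3):
    for x in (0.5, 1.0, 2.0, 10.0):
        assert abs(gen_exp(gen_log(x, q), q) - x) < 1e-9
assert abs(gen_log(2.0, 1e-13) - math.log(2.0)) < 1e-6
```

Stretched exponentials and the Zipf-Mandelbrot form then follow by composing `gen_exp` with powers of its argument.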
Functionality and homogeneity.
2011-01-01
Functionality and homogeneity are two of the five Sustainable Safety principles. The functionality principle aims for roads to have but one exclusive function and distinguishes between traffic function (flow) and access function (residence). The homogeneity principle aims at differences in mass,
Extraocular muscle function testing
Extraocular muscle function testing examines the function of the eye muscles. ...
Congenital platelet function defects
Congenital platelet function defects are bleeding disorders that cause reduced platelet function. Most of the time, people with these disorders have ...
Hepatic (Liver) Function Panel
What Is a Hepatic (Liver) Function Panel? A liver function panel is a blood ...
Platelet Function Tests. Also known as: Platelet Aggregation Studies; PFT; Platelet Function Assay; PFA. Formal name: Platelet Function Tests. ...
Terashima, Yuji
2008-01-01
In this paper, we define Poisson functions on supermanifolds, show that the graphs of Poisson functions are Dirac structures, and find Poisson functions which include as special cases both quasi-Poisson structures and twisted Poisson structures.
International Nuclear Information System (INIS)
Monks, R.; Riley, A.L.M.
1981-01-01
This invention relates to the investigation of body function, especially small bowel function but also liver function, using bile acids and bile salts or their metabolic precursors labelled with radioisotopes of selenium or tellurium. (author)
Functional bowel disorders and functional abdominal pain
Thompson, W; Longstreth, G; Drossman, D; Heaton, K; Irvine, E; Muller-Lissner, S
1999-01-01
The Rome diagnostic criteria for the functional bowel disorders and functional abdominal pain are used widely in research and practice. A committee consensus approach, including criticism from multinational expert reviewers, was used to revise the diagnostic criteria and update diagnosis and treatment recommendations, based on research results. The terminology was clarified and the diagnostic criteria and management recommendations were revised. A functional bowel disorder (FBD) is diagnosed ...
Functional microorganisms for functional food quality.
Gobbetti, M; Cagno, R Di; De Angelis, M
2010-09-01
Functional microorganisms and health benefits represent a binomial with great potential for fermented functional foods. The health benefits of fermented functional foods are expressed either directly through the interactions of ingested live microorganisms with the host (probiotic effect) or indirectly as the result of the ingestion of microbial metabolites synthesized during fermentation (biogenic effect). Given the importance of high viability for the probiotic effect, two major options are currently pursued for improving it--to enhance bacterial stress response and to use alternative products for incorporating probiotics (e.g., ice cream, cheeses, cereals, fruit juices, vegetables, and soy beans). Further, it seems that quorum sensing signal molecules released by probiotics may interact with human epithelial cells from the intestine, thus modulating several physiological functions. Under optimal processing conditions, functional microorganisms contribute to food functionality through their enzyme portfolio and the release of metabolites. Overproduction of free amino acids and vitamins are two classical examples. Besides, bioactive compounds (e.g., peptides, γ-amino butyric acid, and conjugated linoleic acid) may be released during food processing above the physiological threshold and may exert various in vivo health benefits. Functional microorganisms are increasingly used in novel strategies for decreasing the phenomena of food intolerance (e.g., gluten intolerance) and allergy. Taking a critical approach, this review aims to show the potential of functional microorganisms for the quality of functional foods.
DEFF Research Database (Denmark)
Agerbo, Heidi
2017-01-01
Approximately a decade ago, it was suggested that a new function should be added to the lexicographical function theory: the interpretive function(1). However, hardly any research has been conducted into this function, and though it was only suggested that this new function was relevant to incorporate into lexicographical theory, some scholars have since then assumed that this function exists(2), including the author of this contribution. In Agerbo (2016), I present arguments supporting the incorporation of the interpretive function into the function theory and suggest how non-linguistic signs can be treated in specific dictionary articles. However, in the current article, due to the results of recent research, I argue that the interpretive function should not be considered an individual main function. The interpretive function, contrary to some of its definitions, is not connected...
Every storage function is a state function
Trentelman, H.L.; Willems, J.C.
1997-01-01
It is shown that for linear dynamical systems with quadratic supply rates, a storage function can always be written as a quadratic function of the state of an associated linear dynamical system. This dynamical system is obtained by combining the dynamics of the original system with the dynamics of
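The result can be illustrated numerically on the simplest passive example (a toy system of my own choosing, not taken from the paper): for the linear system x' = −x + u, y = x with quadratic supply rate s(u, y) = u·y, the quadratic state function V(x) = x²/2 is a storage function, i.e., the stored energy never exceeds the energy supplied.

```python
import numpy as np

# Dissipation inequality check: V(x(T)) - V(x(0)) <= integral of u*y over [0, T]
# for the passive system x' = -x + u, y = x, with V(x) = x**2 / 2.
def simulate(u, x0=0.0, dt=1e-3, T=5.0):
    n = int(T / dt)
    x, supplied = x0, 0.0
    for k in range(n):
        uk = u(k * dt)
        y = x
        supplied += uk * y * dt        # accumulate the supply rate u*y
        x += (-x + uk) * dt            # forward-Euler step of the dynamics
    return x, supplied

V = lambda x: 0.5 * x**2
xT, supplied = simulate(lambda t: np.sin(3 * t), x0=0.0)
assert V(xT) - V(0.0) <= supplied + 1e-6   # dissipation inequality holds
```

The slack in the inequality is the dissipated energy ∫x² dt, which is strictly positive here.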
Persistent Functional Languages: Toward Functional Relational Databases
Wevers, L.
2014-01-01
Functional languages provide new approaches to concurrency control, based on techniques such as lazy evaluation and memoization. We have designed and implemented a persistent functional language based on these ideas, which we plan to use for the implementation of a relational database system. With
Osborne, Harold
1985-01-01
Historical background concerning the nature and function of museums is provided, and the aesthetic functions of museums are discussed. The first major aesthetic function of museums is to preserve the artistic heritage of mankind and to make it widely available. The second major function is patronage. (RM)
Hierarchical wave functions revisited
International Nuclear Information System (INIS)
Li Dingping.
1997-11-01
We study the hierarchical wave functions on a sphere and on a torus. We simplify some wave functions on a sphere or a torus using the analytic properties of wave functions. The open question, the construction of the wave function for quasi electron excitation on a torus, is also solved in this paper. (author)
DEFF Research Database (Denmark)
Mailund, Thomas
2017-01-01
Master functions and discover how to write functional programs in R. In this book, you'll make your functions pure by avoiding side-effects; you'll write functions that manipulate other functions, and you'll construct complex functions using simpler functions as building blocks. In Functional Programming in R, you'll see how we can replace loops, which can have side-effects, with recursive functions that can more easily avoid them. In addition, the book covers why you shouldn't use recursion when loops are more efficient and how you can get the best of both worlds. Functional programming is a style of programming, like object-oriented programming, but one that focuses on data transformations and calculations rather than objects and state. Where in object-oriented programming you model your programs by describing which states an object can be in and how methods will reveal or modify...
DEFF Research Database (Denmark)
Bysted, Tommy Kristensen; Hamila, R.; Gabbouj, M.
1998-01-01
A new correlation function called the Teager correlation function is introduced in this paper. The connection between this function, the Teager energy operator, and the conventional correlation function is established. Two applications are presented. The first is the minimization of the Teager error norm and the second one is the use of the instantaneous Teager correlation function for simultaneous estimation of TDOA and FDOA (Time and Frequency Difference of Arrivals).
Properties of Ambiguity Functions
Mulcahy-Stanislawczyk, John
2014-01-01
The use of ambiguity functions in radar signal design and analysis is very common. Understanding the various properties and meanings of ambiguity functions allows a signal designer to understand the time-delay and Doppler-shift properties of a given signal. Through the years, several different versions of the ambiguity function have been used. Each of these functions has essentially the same physical meaning; however, the use of different functions makes it difficult to be sure that certain...
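As a concrete illustration (a standard textbook example, not drawn from the thesis itself): the narrowband ambiguity function χ(τ, ν) = ∫ s(t) s*(t − τ) e^{j2πνt} dt of a unit rectangular pulse has the triangle function 1 − |τ|/T as its zero-Doppler cut, which a direct discrete sum reproduces.

```python
import numpy as np

N = 1000
dt = 1.0 / N
s = np.ones(N)                        # rectangular pulse of duration T = 1

def ambiguity(s, shift, nu):
    """Discrete narrowband ambiguity function at delay shift*dt and Doppler nu."""
    s_shifted = np.roll(s, shift)
    if shift > 0:
        s_shifted[:shift] = 0         # zero-fill instead of wrapping around
    elif shift < 0:
        s_shifted[shift:] = 0
    t = np.arange(len(s)) * dt
    return np.sum(s * np.conj(s_shifted) * np.exp(2j * np.pi * nu * t)) * dt

# The zero-Doppler cut matches the triangle function 1 - tau for 0 <= tau <= 1.
for shift in (0, 100, 500):
    tau = shift * dt
    assert abs(abs(ambiguity(s, shift, 0.0)) - (1 - tau)) < 1e-6
```

Sweeping `nu` as well traces out the familiar sinc-like ridge in the Doppler direction.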
Ergotic / epistemic / semiotic functions
Luciani , Annie
2007-01-01
Claude Cadoz has introduced a typology of the human-environment relation, identifying three functions. This typology characterizes univocally, i.e., in a non-redundant manner, the computer devices and interfaces that allow humans to interact with the environment through and by computers. These three functions are: the epistemic function, the semiotic function, and the ergotic function. Conversely to the terms epistemic and semiotic, which are usual, the term ergotic has been s...
Variational functionals which admit discontinuous trial functions
International Nuclear Information System (INIS)
Nelson, P. Jr.
1975-01-01
It is argued that variational synthesis with discontinuous trial functions requires variational principles applicable to equations involving operators acting between distinct Hilbert spaces. A description is given of a Roussopoulos-type variational principle generalized to cover this situation. This principle is suggested as the basis for a unified approach to the derivation of variational functionals. In addition to esthetics, this approach has the advantage that the mathematical details increase the understanding of the derived functional, particularly the sense in which a synthesized solution should be regarded as an approximation to the true solution. By way of illustration, the generalized Roussopoulos principle is applied to derive a class of first-order diffusion functionals which admit trial functions containing approximations at an interface. These ''asymptotic'' interface quantities are independent of the limiting approximations from either side and permit use of different trial spectra at and on either side of an interface. The class of functionals derived contains as special cases both the Lagrange multiplier method of Buslik and two functionals of Lambropoulos and Luco. Some numerical results for a simple two-group model confirm that the ''multipliers'' can closely approximate the appropriate quantity in the region near an interface. (U.S.)
Gagné, Jonathan; Faherty, Jacqueline K.; Mamajek, Eric E.; Malo, Lison; Doyon, René; Filippazzo, Joseph C.; Weinberger, Alycia J.; Donaldson, Jessica K.; Lépine, Sébastien; Lafrenière, David; Artigau, Étienne; Burgasser, Adam J.; Looper, Dagny; Boucher, Anne; Beletsky, Yuri; Camnasio, Sara; Brunette, Charles; Arboit, Geneviève
2017-02-01
A determination of the initial mass function (IMF) of the current, incomplete census of the 10 Myr-old TW Hya association (TWA) is presented. This census is built from a literature compilation supplemented with new spectra and 17 new radial velocities from ongoing membership surveys, as well as a reanalysis of Hipparcos data that confirmed HR 4334 (A2 Vn) as a member. Although the dominant uncertainty in the IMF remains census incompleteness, a detailed statistical treatment is carried out to make the IMF determination independent of binning while accounting for small number statistics. The currently known high-likelihood members are fitted by a log-normal distribution with a central mass of 0.21 (+0.11/−0.06) M⊙ and a characteristic width of 0.8 (+0.2/−0.1) dex in the 12 M_Jup-2 M⊙ range, whereas a Salpeter power law with α = 2.2 (+1.1/−0.5) best describes the IMF slope in the 0.1-2 M⊙ range. This characteristic width is higher than that of other young associations, which may be due to incompleteness in the current census of low-mass TWA stars. A tentative overpopulation of isolated planetary-mass members similar to 2MASS J11472421-2040204 and 2MASS J11193254-1137466 is identified: this indicates that there might be as many as 10 (+13/−5) similar members of TWA with hot-start model-dependent masses estimated at ˜5-7 M_Jup, most of which would be too faint to be detected in 2MASS. Our new radial velocity measurements corroborate the membership of 2MASS J11472421-2040204, and secure TWA 28 (M8.5 γ), TWA 29 (M9.5 γ), and TWA 33 (M4.5 e) as members. The discovery of 2MASS J09553336-0208403, a young L7-type interloper unrelated to TWA, is also presented.
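A quick numerical sketch of the quoted log-normal IMF (illustrative only; the central mass and width are the central estimates from the abstract, with the quoted uncertainties ignored). The distribution is log-normal in log10(mass), so the sample median mass should sit at the central mass:

```python
import numpy as np

# Draw masses from a log-normal IMF with central mass 0.21 M_sun and
# characteristic width 0.8 dex, as quoted in the abstract.
rng = np.random.default_rng(0)
m_c, sigma_dex = 0.21, 0.8
log_m = rng.normal(np.log10(m_c), sigma_dex, size=200_000)
masses = 10.0 ** log_m

# For a log-normal, the median equals the central mass.
assert abs(np.median(masses) - m_c) < 0.02
```

Fitting a Salpeter slope over 0.1-2 M⊙ would then proceed on the high-mass tail of such samples.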
DEFF Research Database (Denmark)
Markvorsen, Steen
2007-01-01
We consider a specific function of two variables whose graph surface resembles a blue lagoon. The function has a saddle point $p$, but when the function is restricted to any given straight line through $p$ it has a {\em{strict local minimum}} along that line at $p$.
International Nuclear Information System (INIS)
1981-10-01
This book covers quality function deployment: quality and the deployment of quality functions; the process and prospects of quality function deployment and development; product processes and the concept of the quality table; deployment of quality demand; design of the quality table and application of concurrent multi-design; progress design and quality development; main safe parts and management of important function parts; quality development and deployment of construction methods; quality deployment and economics; the total system of quality function deployment; and the tasks of quality function deployment in the present and future.
DEFF Research Database (Denmark)
Raket, Lars Lau
We propose a direction in the field of statistics which we will call functional object analysis. This subfield considers the analysis of functional objects defined on continuous domains. In this setting we will focus on model-based statistics, with a particular emphasis on mixed-effect formulations, where the observed functional signal is assumed to consist of both fixed and random functional effects. This thesis takes the initial steps toward the development of likelihood-based methodology for functional objects. We first consider analysis of functional data defined on high...
International Nuclear Information System (INIS)
Khan, H.
1990-01-01
This thesis explores deep inelastic scattering of a lepton beam from a polarized nuclear target with spin J=1. After reviewing the formalism for spin-1/2, the structure functions for a spin-1 target are defined in terms of the helicity amplitudes for forward Compton scattering. A version of the convolution model, which incorporates relativistic and binding energy corrections, is used to calculate the structure functions of a neutron target. A simple parameterization of these structure functions is given in terms of a few neutron wave function parameters and the free nucleon structure functions. This allows for an easy comparison of structure functions calculated using different neutron models. (author)
From functional architecture to functional connectomics.
Reid, R Clay
2012-07-26
"Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex" by Hubel and Wiesel (1962) reported several important discoveries: orientation columns, the distinct structures of simple and complex receptive fields, and binocular integration. But perhaps the paper's greatest influence came from the concept of functional architecture (the complex relationship between in vivo physiology and the spatial arrangement of neurons) and several models of functionally specific connectivity. They thus identified two distinct concepts, topographic specificity and functional specificity, which together with cell-type specificity constitute the major determinants of nonrandom cortical connectivity. Orientation columns are iconic examples of topographic specificity, whereby axons within a column connect with cells of a single orientation preference. Hubel and Wiesel also saw the need for functional specificity at a finer scale in their model of thalamic inputs to simple cells, verified in the 1990s. The difficult but potentially more important question of functional specificity between cortical neurons is only now becoming tractable with new experimental techniques. Copyright © 2012 Elsevier Inc. All rights reserved.
Sun, Ying
2012-08-03
This article proposes functional median polish, an extension of univariate median polish, for one-way and two-way functional analysis of variance (ANOVA). The functional median polish estimates the functional grand effect and functional main factor effects based on functional medians in an additive functional ANOVA model assuming no interaction among factors. A functional rank test is used to assess whether the functional main factor effects are significant. The robustness of the functional median polish is demonstrated by comparing its performance with the traditional functional ANOVA fitted by means under different outlier models in simulation studies. The functional median polish is illustrated on various applications in climate science, including one-way and two-way ANOVA when functional data are either curves or images. Specifically, Canadian temperature data, U. S. precipitation observations and outputs of global and regional climate models are considered, which can facilitate the research on the close link between local climate and the occurrence or severity of some diseases and other threats to human health. © 2012 International Biometric Society.
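The univariate procedure that functional median polish extends is Tukey's median polish, which sweeps row and column medians out of a two-way table; the functional version applies the same sweeps pointwise to curves or images. A compact sketch of the univariate case (my own minimal implementation, not the authors' code):

```python
import numpy as np

def median_polish(table, n_iter=10):
    """Tukey's median polish: decompose a two-way table into
    grand + row + column effects plus residuals, using medians."""
    r = table.astype(float).copy()
    grand, row, col = 0.0, np.zeros(r.shape[0]), np.zeros(r.shape[1])
    for _ in range(n_iter):
        rm = np.median(r, axis=1); row += rm; r -= rm[:, None]   # sweep rows
        cm = np.median(row);       grand += cm; row -= cm
        cm = np.median(r, axis=0); col += cm; r -= cm[None, :]   # sweep columns
        rm = np.median(col);       grand += rm; col -= rm
    return grand, row, col, r

# On a purely additive table the residuals vanish and the decomposition
# reconstructs the table exactly.
rows = np.array([0.0, 1.0, -1.0])
cols = np.array([0.0, 2.0])
table = 5.0 + rows[:, None] + cols[None, :]
grand, row_eff, col_eff, resid = median_polish(table)
assert np.allclose(resid, 0.0)
assert np.allclose(grand + row_eff[:, None] + col_eff[None, :] + resid, table)
```

In the functional setting, each table cell is a whole curve and the medians above become pointwise functional medians.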
Positive random fields for modeling material stiffness and compliance
DEFF Research Database (Denmark)
Hasofer, Abraham Michael; Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob
1998-01-01
Positive random fields with known marginal properties and known correlation function are not numerous in the literature. The most prominent example is the lognormal field, for which the complete distribution is known and for which the reciprocal field is also lognormal. It is of interest to supp...
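The closure property mentioned for the lognormal field — the reciprocal of a lognormal variable is again lognormal, with negated log-mean and the same log-standard deviation — is easy to confirm by simulation:

```python
import numpy as np

# If X ~ Lognormal(mu, sigma), then log(1/X) = -log(X) ~ Normal(-mu, sigma),
# so 1/X is Lognormal(-mu, sigma).  Monte Carlo sanity check:
rng = np.random.default_rng(1)
mu, sigma = 0.7, 0.4
x = rng.lognormal(mu, sigma, size=500_000)
log_recip = np.log(1.0 / x)
assert abs(np.mean(log_recip) + mu) < 0.01    # log-mean is -mu
assert abs(np.std(log_recip) - sigma) < 0.01  # log-sd is unchanged
```

The same argument applies pointwise to a lognormal random field, which is why the reciprocal field inherits the lognormal form.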
DEFF Research Database (Denmark)
Gardner, Richard J.; Kiderlen, Markus
A structural theory of operations between real-valued (or extended-real-valued) functions on a nonempty subset A of Rn is initiated. It is shown, for example, that any operation ∗ on a cone of functions containing the constant functions, which is pointwise, positively homogeneous, monotonic, and associative, must be one of 40 explicitly given types. In particular, this is the case for operations between pairs of arbitrary, or continuous, or differentiable functions. The term pointwise means that (f ∗ g)(x) = F(f(x), g(x)), for all x ∈ A and some function F of two variables. Several results in the same spirit are obtained for operations between convex functions or between support functions. For example, it is shown that ordinary addition is the unique pointwise operation between convex functions satisfying the identity property, i.e., f ∗ 0 = 0 ∗ f = f, for all convex f, while other results classify Lp...
Kidney function tests are common lab tests used to evaluate how well the kidneys are working. ...
... digest food, store energy, and remove poisons. Liver function tests are blood tests that check to see ... as hepatitis and cirrhosis. You may have liver function tests as part of a regular checkup. Or ...
Functionalized diamond nanoparticles
Beaujuge, Pierre M.; El Tall, Omar; Raja, Inam U.
2014-01-01
A diamond nanoparticle can be functionalized with a substituted dienophile under ambient conditions, and in the absence of catalysts or additional reagents. The functionalization is thought to proceed through an addition reaction.
Smart hydrogel functional materials
Chu, Liang-Yin; Ju, Xiao-Jie
2014-01-01
This book systematically introduces smart hydrogel functional materials with the configurations ranging from hydrogels to microgels. It serves as an excellent reference for designing and fabricating artificial smart hydrogel functional materials.
Babusci, D.; Dattoli, G.; Germano, B.; Martinelli, M. R.; Ricci, P. E.
2011-01-01
We use the operator method to evaluate a class of integrals involving Bessel or Bessel-type functions. The technique we propose is based on the formal reduction of this family of functions to Gaussians.
Lott, Steven
2015-01-01
This book is for developers who want to use Python to write programs that lean heavily on functional programming design patterns. You should be comfortable with Python programming, but no knowledge of functional programming paradigms is needed.
Functionalized diamond nanoparticles
Beaujuge, Pierre M.
2014-10-21
A diamond nanoparticle can be functionalized with a substituted dienophile under ambient conditions, and in the absence of catalysts or additional reagents. The functionalization is thought to proceed through an addition reaction.
Ecological Functions of Landscapes
Kiryushin, V. I.
2018-01-01
Ecological functions of landscapes are considered a system of processes ensuring the development, preservation, and evolution of ecosystems and the biosphere as a whole. The concept of biogeocenosis can be considered a model that integrates biotic and environmental functions. The most general biogeocenotic functions specify the biodiversity, biotic links, self-organization, and evolution of ecosystems. Close interaction between biocenosis and the biotope (ecotope) is ensured by the continuous exchange of matter, energy, and information. Ecotope determines the biocenosis. The group of ecotopic functions includes atmospheric (gas exchange, heat exchange, hydroatmospheric, climate-forming), lithospheric (geodynamic, geophysical, and geochemical), hydrologic and hydrogeologic functions of landscape and ecotopic functions of soils. Bioecological functions emerge as a result of the biotope and ecotope interaction; these are the bioproductive, destructive, organoaccumulative, biochemical (gas, concentration, redox, biochemical, biopedological), pedogenetic, and energy functions
Hybrid functional pseudopotentials
Yang, Jing; Tan, Liang Z.; Rappe, Andrew M.
2018-02-01
The consistency between the exchange-correlation functional used in pseudopotential construction and in the actual density functional theory calculation is essential for the accurate prediction of fundamental properties of materials. However, routine hybrid density functional calculations at present still rely on generalized gradient approximation pseudopotentials due to the lack of hybrid functional pseudopotentials. Here, we present a scheme for generating hybrid functional pseudopotentials, and we analyze the importance of pseudopotential density functional consistency for hybrid functionals. For the PBE0 hybrid functional, we benchmark our pseudopotentials for structural parameters and fundamental electronic gaps of the Gaussian-2 (G2) molecular dataset and some simple solids. Our results show that using our PBE0 pseudopotentials in PBE0 calculations improves agreement with respect to all-electron calculations.
Photon structure function - theory
International Nuclear Information System (INIS)
Bardeen, W.A.
1984-12-01
The theoretical status of the photon structure function is reviewed. Particular attention is paid to the hadronic mixing problem and the ability of perturbative QCD to make definitive predictions for the photon structure function. 11 references
International Nuclear Information System (INIS)
Korshunov, A D
2003-01-01
Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years
Directory of Open Access Journals (Sweden)
Mohsen Razzaghi
2000-01-01
Full Text Available A direct method for finding the solution of variational problems using a hybrid function is discussed. The hybrid functions which consist of block-pulse functions plus Chebyshev polynomials are introduced. An operational matrix of integration and the integration of the cross product of two hybrid function vectors are presented and are utilized to reduce a variational problem to the solution of an algebraic equation. Illustrative examples are included to demonstrate the validity and applicability of the technique.
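A sketch of the block-pulse half of such a hybrid basis, using the standard block-pulse operational matrix of integration (an assumption here; the paper's hybrid matrix, which also mixes in Chebyshev polynomials, will differ in detail):

```python
import numpy as np

# Block-pulse operational matrix of integration P on [0, 1): if a function f
# is represented by coefficients c over m block-pulse functions, then c @ P
# represents the running integral of f in the same basis.
# Standard form: P = (h/2) * (I + 2*U), h = 1/m, U strictly upper triangular.
m = 8
h = 1.0 / m
P = (h / 2) * (np.eye(m) + 2 * np.triu(np.ones((m, m)), k=1))

# Integrating f(t) = 1 should give t; the block-pulse coefficients of t are
# its interval averages (j + 0.5)*h, and this P reproduces them exactly.
c = np.ones(m)
d = c @ P
assert np.allclose(d, (np.arange(m) + 0.5) * h)
```

Reducing a variational problem then amounts to replacing every integration by multiplication with P and solving the resulting algebraic system.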
Bloomberg, Jacob J.; Mulavara, Ajitkumar; Peters, Brian T.; Rescheke, Millard F.; Wood, Scott; Lawrence, Emily; Koffman, Igor; Ploutz-Snyder, Lori; Spiering, Barry A.; Feeback, Daniel L.;
2009-01-01
This slide presentation reviews the Functional Task Test (FTT), an interdisciplinary testing regimen that has been developed to evaluate astronaut postflight functional performance and related physiological changes. The objectives of the project are: (1) to develop a set of functional tasks that represent critical mission tasks for the Constellation Program, (2) to determine the ability to perform these tasks after space flight, (3) to identify the key physiological factors that contribute to functional decrements, and (4) to use this information to develop targeted countermeasures.
Pseudolinear functions and optimization
Mishra, Shashi Kant
2015-01-01
Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems.The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally
Bialynicki-Birula, Iwo
2005-01-01
Photon wave function is a controversial concept. Controversies stem from the fact that photon wave functions cannot have all the properties of the Schroedinger wave functions of nonrelativistic wave mechanics. Insistence on those properties that, owing to peculiarities of photon dynamics, cannot be rendered, led some physicists to the extreme opinion that the photon wave function does not exist. I reject such a fundamentalist point of view in favor of a more pragmatic approach. In my view, t...
On Functional Calculus Estimates
Schwenninger, F.L.
2015-01-01
This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm
Phylogenetic molecular function annotation
International Nuclear Information System (INIS)
Engelhardt, Barbara E; Jordan, Michael I; Repo, Susanna T; Brenner, Steven E
2009-01-01
It is now easier to discover thousands of protein sequences in a new microbial genome than it is to biochemically characterize the specific activity of a single protein of unknown function. The molecular functions of protein sequences have typically been predicted using homology-based computational methods, which rely on the principle that homologous proteins share a similar function. However, some protein families include groups of proteins with different molecular functions. A phylogenetic approach for predicting molecular function (sometimes called 'phylogenomics') is an effective means to predict protein molecular function. These methods incorporate functional evidence from all members of a family that have functional characterizations using the evolutionary history of the protein family to make robust predictions for the uncharacterized proteins. However, they are often difficult to apply on a genome-wide scale because of the time-consuming step of reconstructing the phylogenies of each protein to be annotated. Our automated approach for function annotation using phylogeny, the SIFTER (Statistical Inference of Function Through Evolutionary Relationships) methodology, uses a statistical graphical model to compute the probabilities of molecular functions for unannotated proteins. Our benchmark tests showed that SIFTER provides accurate functional predictions on various protein families, outperforming other available methods.
DEFF Research Database (Denmark)
Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina
2016-01-01
spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly as WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with other similarity functions on several data sets. Our results show that the proposed functions work better than other methods proposed in the literature.
Feldman, Carol Fleisher
1977-01-01
Author advocates the view that meaning is necessarily dependent upon the communicative function of language and examines the objections, particularly those of Noam Chomsky, to this view. Argues that while Chomsky disagrees with the idea that communication is the essential function of language, he implicitly agrees that it has a function.…
Automatic differentiation of functions
International Nuclear Information System (INIS)
Douglas, S.R.
1990-06-01
Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are found automatically. The method is illustrated by simple examples. Source code in FORTRAN is provided.
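The report's FORTRAN source is not reproduced in this record, but the core idea of forward-mode automatic differentiation can be sketched with dual numbers (a minimal illustrative Python analogue, not the program the report describes): propagate a value together with its derivative through each elementary operation, so the result carries an exact derivative with no truncation error.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative sketch only, not the FORTRAN code from the report.
class Dual:
    """Number a + b*eps with eps**2 == 0; the 'der' slot carries f'(x)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # The product rule falls out of (a + b*eps) * (c + d*eps).
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) exactly (up to rounding) by seeding der = 1."""
    return f(Dual(x, 1.0)).der

# f(x) = x*x + 3x  =>  f'(x) = 2x + 3, so f'(2) = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # prints 7.0
```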
Nonparametric Transfer Function Models
Liu, Jun M.; Chen, Rong; Yao, Qiwei
2009-01-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
International Nuclear Information System (INIS)
Son, Mi Jung; Park, Jin Han; Lim, Ki Moon
2007-01-01
We introduce a new class of functions called weakly clopen function which includes the class of almost clopen functions due to Ekici [Ekici E. Generalization of perfectly continuous, regular set-connected and clopen functions. Acta Math Hungar 2005;107:193-206] and is included in the class of weakly continuous functions due to Levine [Levine N. A decomposition of continuity in topological spaces. Am Math Mon 1961;68:44-6]. Some characterizations and several properties concerning weakly clopenness are obtained. Furthermore, relationships among weak clopenness, almost clopenness, clopenness and weak continuity are investigated
Implementing function spreadsheets
DEFF Research Database (Denmark)
Sestoft, Peter
2008-01-01
: that of turning an expression into a named function. Hence they proposed a way to define a function in terms of a worksheet with designated input and output cells; we shall call it a function sheet. The goal of our work is to develop implementations of function sheets and study their application to realistic examples. Therefore, we are also developing a simple yet comprehensive spreadsheet core implementation for experimentation with this technology. Here we report briefly on our experiments with function sheets as well as other uses of our spreadsheet core implementation.
Transfer function combinations
Zhou, Liang; Schott, Mathias; Hansen, Charles
2012-01-01
Direct volume rendering has been an active area of research for over two decades. Transfer function design remains a difficult task since current methods, such as traditional 1D and 2D transfer functions, are not always effective for all data sets. Various 1D or 2D transfer function spaces have been proposed to improve classification exploiting different aspects, such as using the gradient magnitude for boundary location and statistical, occlusion, or size metrics. In this paper, we present a novel transfer function method which can provide more specificity for data classification by combining different transfer function spaces. In this work, a 2D transfer function can be combined with 1D transfer functions which improve the classification. Specifically, we use the traditional 2D scalar/gradient magnitude, 2D statistical, and 2D occlusion spectrum transfer functions and combine these with occlusion and/or size-based transfer functions to provide better specificity. We demonstrate the usefulness of the new method by comparing to the following previous techniques: 2D gradient magnitude, 2D occlusion spectrum, 2D statistical transfer functions and 2D size based transfer functions. © 2012 Elsevier Ltd.
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Results. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...
Directory of Open Access Journals (Sweden)
Jiri Patera
2008-01-01
Full Text Available We review and further develop the theory of $E$-orbit functions. They are functions on the Euclidean space $E_n$ obtained from the multivariate exponential function by symmetrization by means of an even part $W_{e}$ of a Weyl group $W$, corresponding to a Coxeter-Dynkin diagram. Properties of such functions are described. They are closely related to symmetric and antisymmetric orbit functions, which are obtained from exponential functions by symmetrization and antisymmetrization procedures by means of a Weyl group $W$. The $E$-orbit functions, determined by integral parameters, are invariant with respect to the even part $W^{aff}_{e}$ of the affine Weyl group corresponding to $W$. The $E$-orbit functions determine a symmetrized Fourier transform, where these functions serve as a kernel of the transform. They also determine a transform on a finite set of points of the fundamental domain $F^{e}$ of the group $W^{aff}_{e}$ (the discrete $E$-orbit function transform).
Directory of Open Access Journals (Sweden)
Anatoliy Klimyk
2007-02-01
Full Text Available In the paper, properties of antisymmetric orbit functions are reviewed and further developed. Antisymmetric orbit functions on the Euclidean space $E_n$ are antisymmetrized exponential functions. Antisymmetrization is fulfilled by a Weyl group, corresponding to a Coxeter-Dynkin diagram. Properties of such functions are described. These functions are closely related to irreducible characters of a compact semisimple Lie group $G$ of rank $n$. Up to a sign, values of antisymmetric orbit functions are repeated on copies of the fundamental domain $F$ of the affine Weyl group (determined by the initial Weyl group) in the entire Euclidean space $E_n$. Antisymmetric orbit functions are solutions of the corresponding Laplace equation in $E_n$, vanishing on the boundary of the fundamental domain $F$. Antisymmetric orbit functions determine a so-called antisymmetrized Fourier transform which is closely related to expansions of central functions in characters of irreducible representations of the group $G$. They also determine a transform on a finite set of points of $F$ (the discrete antisymmetric orbit function transform). Symmetric and antisymmetric multivariate exponential, sine and cosine discrete transforms are given.
B Plant function analysis report
International Nuclear Information System (INIS)
Lund, D.P.
1995-09-01
The document contains the functions, function definitions, function interfaces, function interface definitions, Input Computer Automated Manufacturing Definition (IDEF0) diagrams, and a function hierarchy chart that describe what needs to be performed to deactivate B Plant.
Directory of Open Access Journals (Sweden)
Kohli J. K.
2014-06-01
Full Text Available A new class of functions called 'Rδ-supercontinuous functions' is introduced. Their basic properties are studied and their place in the hierarchy of strong variants of continuity which already exist in the literature is elaborated. The class of Rδ-supercontinuous functions (Math. Bohem., to appear) properly contains the class of Rz-supercontinuous functions, which in its turn properly contains the class of Rcl-supercontinuous functions (Demonstratio Math. 46(1) (2013), 229-244) and so includes all cl-supercontinuous (≡ clopen continuous) functions (Applied Gen. Topol. 8(2) (2007), 293-300; Indian J. Pure Appl. Math. 14(6) (1983), 767-772) and is properly contained in the class of R-supercontinuous functions (Demonstratio Math. 43(3) (2010), 703-723).
International Nuclear Information System (INIS)
Engel, J.
2007-01-01
The Hohenberg-Kohn theorem and Kohn-Sham procedure are extended to functionals of the localized intrinsic density of a self-bound system such as a nucleus. After defining the intrinsic-density functional, we modify the usual Kohn-Sham procedure slightly to evaluate the mean-field approximation to the functional, and carefully describe the construction of the leading corrections for a system of fermions in one dimension with a spin-degeneracy equal to the number of particles N. Despite the fact that the corrections are complicated and nonlocal, we are able to construct a local Skyrme-like intrinsic-density functional that, while different from the exact functional, shares with it a minimum value equal to the exact ground-state energy at the exact ground-state intrinsic density, to next-to-leading order in 1/N. We briefly discuss implications for real Skyrme functionals
Functional analysis and applications
Siddiqi, Abul Hasan
2018-01-01
This self-contained textbook discusses all major topics in functional analysis. Combining classical materials with new methods, it supplies numerous relevant solved examples and problems and discusses the applications of functional analysis in diverse fields. The book is unique in its scope, and a variety of applications of functional analysis and operator-theoretic methods are devoted to each area of application. Each chapter includes a set of problems, some of which are routine and elementary, and some of which are more advanced. The book is primarily intended as a textbook for graduate and advanced undergraduate students in applied mathematics and engineering. It offers several attractive features making it ideally suited for courses on functional analysis intended to provide a basic introduction to the subject and the impact of functional analysis on applied and computational mathematics, nonlinear functional analysis and optimization. It introduces emerging topics like wavelets, Gabor system, inverse pro...
Counting with symmetric functions
Mendes, Anthony
2015-01-01
This monograph provides a self-contained introduction to symmetric functions and their use in enumerative combinatorics. It is the first book to explore many of the methods and results that the authors present. Numerous exercises are included throughout, along with full solutions, to illustrate concepts and also highlight many interesting mathematical ideas. The text begins by introducing fundamental combinatorial objects such as permutations and integer partitions, as well as generating functions. Symmetric functions are considered in the next chapter, with a unique emphasis on the combinatorics of the transition matrices between bases of symmetric functions. Chapter 3 uses this introductory material to describe how to find an assortment of generating functions for permutation statistics, and then these techniques are extended to find generating functions for a variety of objects in Chapter 4. The next two chapters present the Robinson-Schensted-Knuth algorithm and a method for proving Pólya’s enu...
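As a small illustration of the generating-function techniques the book builds on (an illustrative sketch, not an excerpt from the text), the number of integer partitions of $n$ can be read off the coefficient of $x^n$ in $\prod_{k\ge 1} 1/(1-x^k)$, computed by adding one part size at a time:

```python
def partitions(n):
    """Count integer partitions of n via the generating function
    prod_{k>=1} 1/(1-x^k): p[m] accumulates partitions of m using
    parts of size at most k as k grows."""
    p = [1] + [0] * n
    for k in range(1, n + 1):          # allow parts of size k
        for m in range(k, n + 1):
            p[m] += p[m - k]           # add one more part of size k
    return p[n]

print(partitions(10))  # prints 42
```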
Relativistic plasma dispersion functions
International Nuclear Information System (INIS)
Robinson, P.A.
1986-01-01
The known properties of plasma dispersion functions (PDF's) for waves in weakly relativistic, magnetized, thermal plasmas are reviewed and a large number of new results are presented. The PDF's required for the description of waves with small wave number perpendicular to the magnetic field (Dnestrovskii and Shkarofsky functions) are considered in detail; these functions also arise in certain quantum electrodynamical calculations involving strongly magnetized plasmas. Series, asymptotic series, recursion relations, integral forms, derivatives, differential equations, and approximations for these functions are discussed as are their analytic properties and connections with standard transcendental functions. In addition a more general class of PDF's relevant to waves of arbitrary perpendicular wave number is introduced and a range of properties of these functions are derived
dftools: Distribution function fitting
Obreschkow, Danail
2018-05-01
dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or a mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
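dftools itself is an R package whose interface is not shown in this record. As a rough illustration of the underlying task, fitting a D=1 distribution function by maximum likelihood, one might write the following toy Python sketch (it deliberately omits the measurement-uncertainty and selection-function machinery that distinguishes dftools):

```python
import math
import random

def fit_normal_mle(samples):
    """Maximum-likelihood estimates (mu, sigma) of a 1-D normal
    distribution function. Toy analogue only, not the dftools API."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    return mu, sigma

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(5000)]  # synthetic "observables"
mu, sigma = fit_normal_mle(data)
```

With 5000 samples the recovered parameters land close to the true (10, 2); real DF fitting must additionally convolve the model with each object's error distribution.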
Manfredi; Feix
2000-10-01
The properties of an alternative definition of quantum entropy, based on Wigner functions, are discussed. Such a definition emerges naturally from the Wigner representation of quantum mechanics, and can easily quantify the amount of entanglement of a quantum state. It is shown that smoothing of the Wigner function induces an increase in entropy. This fact is used to derive some simple rules to construct positive-definite probability distributions which are also admissible Wigner functions.
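The claim that smoothing increases entropy can be checked on a toy discrete distribution (an illustrative sketch with a made-up distribution, not the authors' Wigner-function calculation): circular convolution with a symmetric normalized kernel is a doubly stochastic map, so the Shannon entropy cannot decrease.

```python
import math

def shannon_entropy(p):
    """Shannon entropy -sum p ln p of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def smooth(p, kernel=(0.25, 0.5, 0.25)):
    """Circular convolution with a symmetric normalized kernel;
    doubly stochastic, hence entropy non-decreasing."""
    n = len(p)
    return [sum(kernel[j] * p[(i + j - 1) % n] for j in range(3))
            for i in range(n)]

p = [0.7, 0.1, 0.1, 0.05, 0.05]   # made-up, sharply peaked distribution
q = smooth(p)                      # still normalized, but more spread out
```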
Manfredi, G.; Feix, M. R.
2002-01-01
The properties of an alternative definition of quantum entropy, based on Wigner functions, are discussed. Such a definition emerges naturally from the Wigner representation of quantum mechanics, and can easily quantify the amount of entanglement of a quantum state. It is shown that smoothing of the Wigner function induces an increase in entropy. This fact is used to derive some simple rules to construct positive-definite probability distributions which are also admissible Wigner functions.
Fathi, Albert
2015-07-01
In this paper we revisit our joint work with Antonio Siconolfi on time functions. We will give a brief introduction to the subject. We will then show how to construct a Lipschitz time function in a simplified setting. We will end with a new result showing that the Aubry set is not an artifact of our proof of existence of time functions for stably causal manifolds.
SPLINE, Spline Interpolation Function
International Nuclear Information System (INIS)
Allouard, Y.
1977-01-01
1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
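The FORTRAN subprograms themselves are not listed in this record. As a sketch of what such an interpolation routine does for the cubic case, a natural cubic spline can be built by solving a tridiagonal system for the second derivatives at the knots (a plain-Python illustration, not a port of ASPLERQ/SPLQ):

```python
def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys).
    Illustrative sketch; natural boundary conditions M_0 = M_n = 0."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system a*M[i-1] + b*M[i] + c*M[i+1] = d for the
    # second derivatives M at the knots.
    a = [0.0] * (n + 1); b = [1.0] * (n + 1)
    c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i]
                      - (ys[i] - ys[i - 1]) / h[i - 1])
    # Forward elimination, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def s(x):
        # Locate the interval containing x (clamped at the ends).
        i = min(n - 1, max(0, next((j for j in range(n) if x <= xs[j + 1]),
                                   n - 1)))
        t = x - xs[i]
        return (ys[i]
                + t * ((ys[i + 1] - ys[i]) / h[i]
                       - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0)
                + t * t * m[i] / 2.0
                + t ** 3 * (m[i + 1] - m[i]) / (6.0 * h[i]))
    return s
```

The spline passes through every given point and has continuous first and second derivatives; for data lying on a straight line it reproduces the line exactly.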
International Nuclear Information System (INIS)
Martin, F.
1981-03-01
The x dependence of hadron structure functions is investigated. If quarks can exist in very low mass states (10 MeV for d and u quarks), the pion structure function is predicted to behave like (1-x) and not (1-x)² in an x-region around 1. Relativistic and non-relativistic quark bound-state pictures of hadrons are considered, together with their relation to the Q² evolution of structure functions. Good agreement with data is in general obtained.
Calculus of bivariant function
PTÁČNÍK, Jan
2011-01-01
This thesis introduces functions of two variables and the differential calculus of such functions. The work is intended as a textbook for student teachers at elementary schools. Each chapter contains a summary of basic concepts and explanations of relationships, then worked model exercises on the topic and finally exercises which the student should solve himself. The thesis aims to give students a basic knowledge of the differential calculus of functions of two variables, inc...
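As a numeric illustration of the thesis topic (an illustration only, not material from the thesis itself), the partial derivatives of a function of two variables can be estimated by central differences and checked against the exact formulas:

```python
def partial_derivatives(f, x, y, h=1e-6):
    """Central-difference estimates of df/dx and df/dy at (x, y)."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

# f(x, y) = x**2 * y has df/dx = 2xy and df/dy = x**2,
# so at (2, 3) the exact values are 12 and 4.
dfdx, dfdy = partial_derivatives(lambda x, y: x * x * y, 2.0, 3.0)
```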
Functional esophageal disorders
Clouse, R; Richter, J; Heading, R; Janssens, J; Wilson, J
1999-01-01
The functional esophageal disorders include globus, rumination syndrome, and symptoms that typify esophageal diseases (chest pain, heartburn, and dysphagia). Factors responsible for symptom production are poorly understood. The criteria for diagnosis rest not only on compatible symptoms but also on exclusion of structural and metabolic disorders that might mimic the functional disorders. Additionally, a functional diagnosis is precluded by the presence of a pathology-based motor disorder or p...
Functional Programming With Relations
Hutton, Graham
1991-01-01
While programming in a relational framework has much to offer over the functional style in terms of expressiveness, computing with relations is less efficient, and more semantically troublesome. In this paper we propose a novel blend of the functional and relational styles. We identify a class of "causal relations", which inherit some of the bi-directionality properties of relations, but retain the efficiency and semantic foundations of the functional style.
International Nuclear Information System (INIS)
Bardeen, W.A.
1980-11-01
Theoretical understanding of the photon structure function is reviewed. As an illustration of the pointlike component, the parton model is briefly discussed. However, the systematic study of the photon structure function is presented through the framework of the operator product expansion. Perturbative QCD is used as the theoretical basis for the calculation of leading contributions to the operator product expansion. The influence of higher order QCD effects on these results is discussed. Recent results for the polarized structure functions are discussed
International Nuclear Information System (INIS)
Isawa, Toyoharu
1994-01-01
The function of the lungs is primarily the function as a gas exchanger: the venous blood returning to the lungs is arterialized with oxygen in the lungs and the arterialized blood is sent back again to the peripheral tissues of the whole body to be utilized for metabolic oxygenation. Besides the gas exchanging function, which we call ''respiratory lung function'', the lungs have functions that have little to do with gas exchange itself. We categorically call the latter function of the lungs ''nonrespiratory lung function''. The lungs consist of the conductive airways, the gas exchanging units like the alveoli, and the interstitial space that surrounds the former two compartments. The interstitial space contains the blood and lymphatic capillaries, collagen and elastic fibers and cement substances. The conductive airways and the gas exchanging units are directly exposed to the atmosphere, which contains various toxic and nontoxic gases, fume and biological or nonbiological particles. Because the conductive airways are equipped with defense mechanisms like mucociliary clearance or coughs to get rid of these toxic gases, particles or locally produced biological debris, we are usually spared the ill effects of inhaled materials. By use of nuclear medicine techniques, we can now evaluate mucociliary clearance function, and other nonrespiratory lung functions as well, in vivo.
Subordination by convex functions
Directory of Open Access Journals (Sweden)
Rosihan M. Ali
2006-01-01
Full Text Available For a fixed analytic function $g(z)=z+\sum_{n=2}^{\infty}g_n z^n$ defined on the open unit disk and $\gamma<1$, let $T_g(\gamma)$ denote the class of all analytic functions $f(z)=z+\sum_{n=2}^{\infty}a_n z^n$ satisfying $\sum_{n=2}^{\infty}|a_n g_n|\le 1-\gamma$. For functions in $T_g(\gamma)$, a subordination result is derived involving the convolution with a normalized convex function. Our result includes as special cases several earlier works.
Renormalization Group Functional Equations
Curtright, Thomas L
2011-01-01
Functional conjugation methods are used to analyze the global structure of various renormalization group trajectories. With minimal assumptions, the methods produce continuous flows from step-scaling σ functions, and lead to exact functional relations for the local flow β functions, whose solutions may have novel, exotic features, including multiple branches. As a result, fixed points of σ are sometimes not true fixed points under continuous changes in scale, and zeroes of β do not necessarily signal fixed points of the flow, but instead may only indicate turning points of the trajectories.
Perceptual Audio Hashing Functions
Directory of Open Access Journals (Sweden)
Emin Anarım
2005-07-01
Full Text Available Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
DEFF Research Database (Denmark)
Lind, Morten
2011-01-01
Multilevel Flow Modeling (MFM) has been proposed as a tool for representing goals and functions of complex industrial plants and suggested as a basis for reasoning about control situations. Lind presents an introduction to MFM but does not describe how control functions are used in the modeling. The purpose of the present paper is to serve as a companion paper to this introduction by explaining the basic principles used in MFM for representation of control functions. A theoretical foundation for modeling control functions is presented and modeling examples are given for illustration.
Regulated functions and integrability
Directory of Open Access Journals (Sweden)
Ján Gunčaga
2009-04-01
Full Text Available Properties of functions defined on a bounded closed interval, weaker than continuity, have been considered by many mathematicians. Functions having limits from both sides at each point are called regulated and were considered by J. Dieudonné [2], D. Fraňková [3] and others (see for example S. Banach [1], S. Saks [8]). The main class of functions we deal with consists of piece-wise constant ones. These functions play a fundamental role in the integration theory which had been developed by Igor Kluvanek (see Š. Tkacik [9]). We present an outline of this theory.
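The role of piece-wise constant functions in such an integration theory can be sketched numerically (an illustration of the general idea, not Kluvanek's construction): a regulated function is approximated by a step function, and the integral of the step function is an exact finite sum.

```python
def step_integral(breaks, values):
    """Exact integral of a piece-wise constant function taking
    values[i] on the interval [breaks[i], breaks[i+1])."""
    return sum(v * (b1 - b0)
               for v, b0, b1 in zip(values, breaks, breaks[1:]))

def integrate_regulated(f, a, b, n):
    """Approximate the integral of a regulated function f on [a, b]
    by a step function sampling f at the midpoints of n pieces."""
    h = (b - a) / n
    breaks = [a + i * h for i in range(n + 1)]
    values = [f(a + (i + 0.5) * h) for i in range(n)]
    return step_integral(breaks, values)
```

Refining the partition makes the step approximation converge to the integral for any regulated integrand.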
Introduction to functional methods
International Nuclear Information System (INIS)
Faddeev, L.D.
1976-01-01
The functional integral is considered in relation to Feynman diagrams and phase space. The holomorphic form of the functional integral is then discussed. The main problem of the lectures, viz. the construction of the S-matrix by means of the functional integral, is considered. The functional methods described explicitly take into account the Bose statistics of the fields involved. The different procedure used to treat fermions is discussed. An introduction to the problem of quantization of gauge fields is given. (B.R.H.)
Artin, Emil
2015-01-01
This brief monograph on the gamma function was designed by the author to fill what he perceived as a gap in the literature of mathematics, which often treated the gamma function in a manner he described as both sketchy and overly complicated. Author Emil Artin, one of the twentieth century's leading mathematicians, wrote in his Preface to this book, "I feel that this monograph will help to show that the gamma function can be thought of as one of the elementary functions, and that all of its basic properties can be established using elementary methods of the calculus." Generations of teachers
Normal Functions As A New Way Of Defining Computable Functions
Directory of Open Access Journals (Sweden)
Leszek Dubiel
2004-01-01
Full Text Available The report sets out a new method of defining computable functions. It is a formalization of traditional function descriptions, so it allows functions to be defined in a very intuitive way. The discovery of the Ackermann function proved that not all functions that can be easily computed can be so easily described with Hilbert's system of recursive functions. Normal functions lack this disadvantage.
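The Ackermann-Péter function mentioned above is easy to compute recursively even though it is not primitive recursive; a short sketch:

```python
def ackermann(m, n):
    """The Ackermann-Peter function: total and computable, but growing
    too fast to be primitive recursive -- the classic example showing
    that Hilbert's recursive schema does not capture all computable
    functions."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # prints 9 61
```

Even modest arguments explode: ackermann(4, 2) already has 19729 decimal digits, which is why descriptions of such functions need a richer formalism.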
Pick, Luboš; John, Oldrich; Fucík, Svatopluk
2012-01-01
This is the first part of the second revised and extended edition of a well established monograph. It is an introduction to function spaces defined in terms of differentiability and integrability classes. It provides a catalogue of various spaces and benefits as a handbook for those who use function spaces to study other topics such as partial differential equations. Volum
Directory of Open Access Journals (Sweden)
J.K. Kohli
2009-04-01
Full Text Available A strong variant of continuity called 'F-supercontinuity' is introduced. The class of F-supercontinuous functions strictly contains the class of z-supercontinuous functions (Indian J. Pure Appl. Math. 33(7) (2002), 1097–1108), which in turn properly contains the class of cl-supercontinuous functions (≡ clopen maps) (Appl. Gen. Topology 8(2) (2007), 293–300; Indian J. Pure Appl. Math. 14(6) (1983), 762–772). Further, the class of F-supercontinuous functions is properly contained in the class of R-supercontinuous functions, which in turn is strictly contained in the class of continuous functions. Basic properties of F-supercontinuous functions are studied and their place in the hierarchy of strong variants of continuity, which already exist in the mathematical literature, is elaborated. If either domain or range is a functionally regular space (Indagationes Math. 15 (1951), 359–368; 38 (1976), 281–288), then the notions of continuity, F-supercontinuity and R-supercontinuity coincide.
International Nuclear Information System (INIS)
Bergqvist, I.
1976-01-01
Methods for extracting photon strength functions are briefly discussed. We follow the Brink-Axel approach to relate the strength functions to the giant resonances observed in photonuclear work and summarize the available data on the E1, E2 and M1 resonances. Some experimental and theoretical problems are outlined. (author)
A Functional HAZOP Methodology
DEFF Research Database (Denmark)
Liin, Netta; Lind, Morten; Jensen, Niels
2010-01-01
A HAZOP methodology is presented where a functional plant model assists in a goal oriented decomposition of the plant purpose into the means of achieving the purpose. This approach leads to nodes with simple functions from which the selection of process and deviation variables follow directly...
Functional Magnetic Resonance Imaging
Voos, Avery; Pelphrey, Kevin
2013-01-01
Functional magnetic resonance imaging (fMRI), with its excellent spatial resolution and ability to visualize networks of neuroanatomical structures involved in complex information processing, has become the dominant technique for the study of brain function and its development. The accessibility of in-vivo pediatric brain-imaging techniques…
DEFF Research Database (Denmark)
Gauravaram, Praveen; Knudsen, Lars Ramkilde
2010-01-01
functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret-key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic or from scratch. The design principles of the popular Merkle...
Functional Assessment Inventory Manual.
Crewe, Nancy M.; Athelstan, Gary T.
This manual, which provides extensive new instructions for administering the Functional Assessment Inventory (FAI), is intended to enable counselors to begin using the inventory without undergoing any special training. The first two sections deal with the need for functional assessment and issues in the development and use of the inventory. The…
Functional and cognitive grammars
Institute of Scientific and Technical Information of China (English)
Anna Siewierska
2011-01-01
This paper presents a comprehensive review of the functional approach and cognitive approach to the nature of language and its relation to other aspects of human cognition. The paper starts with a brief discussion of the origins and the core tenets of the two approaches in Section 1. Section 2 discusses the similarities and differences between the three full-fledged structural functional grammars subsumed in the functional approach: Halliday's Systemic Functional Grammar (SFG), Dik's Functional Grammar (FG), and Van Valin's Role and Reference Grammar (RRG). Section 3 deals with the major features of the three cognitive frameworks: Langacker's Cognitive Grammar (CG), Goldberg's Cognitive Construction Grammar (CCG), and Croft's Radical Construction Grammar (RCG). Section 4 compares the two approaches and attempts to provide a unified functional-cognitive grammar. In the last section, the author concludes the paper with remarks on the unidirectional shift from functional grammar to cognitive grammar that may indicate a reinterpretation of the traditional relationship between functional and cognitive models of grammar.
Heterogeneity in kinesin function
Reddy, Babu J N; Tripathy, Suvranta; Vershinin, Michael; Tanenbaum, Marvin E; Xu, Jing; Mattson-Hoss, Michelle; Arabi, Karim; Chapman, Dail; Doolin, Tory; Hyeon, Changbong; Gross, Steven P
2017-01-01
The kinesin family proteins are often studied as prototypical molecular motors; a deeper understanding of them can illuminate regulation of intracellular transport. It is typically assumed that they function identically. Here we find that this assumption of homogeneous function appears incorrect:
International Nuclear Information System (INIS)
Moneta, M.
1999-01-01
Thermal dielectric functions ε(k,ω) for a homogeneous electron gas were determined and discussed. The ground state of the gas is described by the Fermi-Dirac momentum distribution. The low and high temperature limits of ε(k,ω) were related to the Lindhard dielectric function and to ε(k,ω) derived for Boltzmann and for classical momentum distributions, respectively. (author)
Monadic Functional Reactive Programming
A.J. van der Ploeg (Atze); C Shan
2013-01-01
Functional Reactive Programming (FRP) is a way to program reactive systems in functional style, eliminating many of the problems that arise from imperative techniques. In this paper, we present an alternative FRP formulation that is based on the notion of a reactive computation: a
DEFF Research Database (Denmark)
Knudsen, Lars Ramkilde; Rechberger, Christian; Thomsen, Søren Steffen
2007-01-01
to the state. We propose two concrete hash functions, Grindahl-256 and Grindahl-512 with claimed security levels with respect to collision, preimage and second preimage attacks of 2^128 and 2^256, respectively. Both proposals have lower memory requirements than other hash functions at comparable speeds...
Neurophysiology of functional imaging
van Eijsden, Pieter; Hyder, Fahmeed; Rothman, Douglas L.; Shulman, Robert G.
2009-01-01
The successes of PET and fMRI in non-invasively localizing sensory functions had encouraged efforts to transform the subjective concepts of cognitive psychology into objective physical measures. The assumption was that mental functions could be decomposed into non-overlapping, context-independent
properties and luminosity functions
Directory of Open Access Journals (Sweden)
Hektor Monteiro
2007-01-01
Full Text Available In this article, we present an investigation of a sample of 1072 stars extracted from the Villanova Catalog of Spectroscopically Identified White Dwarfs (2005 on-line version), studying their distribution in the Galaxy, their physical properties and their luminosity functions. The distances and physical properties of the white dwarfs are determined through interpolation of their (B-V) or (b-y) colors in model grids. The solar position relative to the Galactic plane, luminosity function, as well as separate functions for each white dwarf spectral type are derived and discussed. We show that the binary fraction does not vary significantly as a function of distance from the Galactic disk out to 100 pc. We propose that the formation rates of DA and non-DAs have changed over time and/or that DAs evolve into non-DA types. The luminosity functions for DAs and DBs have peaks possibly related to a star burst event.
DEFF Research Database (Denmark)
Vedel, Mette
2016-01-01
the triad value function. Next, the applicability and validity of the concept is examined in a case study of four closed vertical supply chain triads. Findings - The case study demonstrates that the triad value function facilitates the analysis and understanding of an apparent paradox; that distributors are not dis-intermediated in spite of their limited contribution to activities in the triads. The results indicate practical adequacy of the triad value function. Research limitations/implications - The triad value function is difficult to apply in the study of expanded networks as the number of connections expands exponentially with the number of ties in the network. Moreover, it must be applied in the study of service triads and open vertical supply chain triads to further verify the practical adequacy of the concept. Practical implications - The triad value function cannot be used normatively...
Pair Correlation Function Integrals
DEFF Research Database (Denmark)
Wedberg, Nils Hejle Rasmus Ingemar; O'Connell, John P.; Peters, Günther H.J.
2011-01-01
We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O’Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010); Fluid Phase Equilib. 302, 32 (2011)], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report...
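The quantity being stabilized above is the total correlation function integral. A naive, truncated version of it, computed directly from a tabulated g(r), can be sketched as follows; the toy analytic g(r) is an assumption for illustration and this does not implement the paper's direct-correlation matching.

```python
import numpy as np

def tcfi(r, g):
    """Truncated total correlation function (Kirkwood-Buff-type) integral
    G = 4*pi * integral of (g(r) - 1) * r^2 dr, by trapezoidal quadrature."""
    integrand = (g - 1.0) * r ** 2
    return 4.0 * np.pi * float(
        np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

# Toy fluid with an analytic tail h(r) = exp(-r); the exact integral is
# 4*pi * int_0^inf r^2 exp(-r) dr = 8*pi.
r = np.linspace(0.0, 50.0, 20001)
g = 1.0 + np.exp(-r)
G = tcfi(r, g)
```

In real simulations g(r) is only known out to half the box length and is noisy at large r, which is exactly why the paper extends it with direct-correlation-function asymptotics instead of truncating as done here.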
DEFF Research Database (Denmark)
Rosenstand, Claus Andreas Foss; Laursen, Per Kyed
2013-01-01
How does one manage functional power relations between leading functions in vision driven digital media creation, and this from idea to master during the creation cycle? Functional power is informal, and it is understood as roles, e.g. project manager, that provide opportunities to contribute to the product quality. The area of interest is the vision driven digital media industry in general; however, the point of departure is the game industry due to its aesthetic complexity. The article's contribution to the area is a power graph, which shows the functional power of the leading functions according to a general digital media creation cycle. This is used to point out potential power conflicts and their consequences. It is concluded that there is normally more conflict potential in vision driven digital media creation than in digital media creation in general or in software development. The method...
Ramsay, J O
1997-01-01
Scientists today collect samples of curves and other functional observations. This monograph presents many ideas and techniques for such data. Included are expressions in the functional domain of such classics as linear regression, principal components analysis, linear modelling, and canonical correlation analysis, as well as specifically functional techniques such as curve registration and principal differential analysis. Data arising in real applications are used throughout for both motivation and illustration, showing how functional approaches allow us to see new things, especially by exploiting the smoothness of the processes generating the data. The data sets exemplify the wide scope of functional data analysis; they are drawn from growth analysis, meteorology, biomechanics, equine science, economics, and medicine. The book presents novel statistical technology while keeping the mathematical level widely accessible. It is designed to appeal to students, to applied data analysts, and to experienced researc...
Directory of Open Access Journals (Sweden)
Liran eCarmel
2012-04-01
Full Text Available The intron-exon architecture of many eukaryotic genes raises the intriguing question of whether this unique organization serves any function, or is it simply a result of the spread of functionless introns in eukaryotic genomes. In this review, we show that introns in contemporary species fulfill a broad spectrum of functions, and are involved in virtually every step of mRNA processing. We propose that this great diversity of intronic functions supports the notion that introns were indeed selfish elements in early eukaryotes, but then independently gained numerous functions in different eukaryotic lineages. We suggest a novel criterion of evolutionary conservation, dubbed intron positional conservation, which can identify functional introns.
International Nuclear Information System (INIS)
Read, R.J.; Schierbeek, A.J.
1988-01-01
A phased translation function, which takes advantage of prior phase information to determine the position of an oriented molecular replacement model, is examined. The function is the coefficient of correlation between the electron density computed with the prior phases and the electron density of the translated model, evaluated in reciprocal space as a Fourier transform. The correlation coefficient used in this work is closely related to an overlap function devised by Colman, Fehlhammer and Bartels. Tests with two protein structures, one of which was solved with the help of the phased translation function, show that little phase information is required to resolve the translation problem, and that the function is relatively insensitive to misorientation of the model. (orig.)
Submodular functions and optimization
Fujishige, Satoru
2005-01-01
It has widely been recognized that submodular functions play essential roles in efficiently solvable combinatorial optimization problems. Since the publication of the 1st edition of this book fifteen years ago, submodular functions have been showing further increasing importance in optimization, combinatorics, discrete mathematics, algorithmic computer science, and algorithmic economics, and remarkable developments in the theory and algorithms of submodular functions have been made. The 2nd edition of the book supplements the 1st edition with many remarks and with two new chapters: "Submodular Function Minimization" and "Discrete Convex Analysis." The present 2nd edition is still a unique book on submodular functions, which is essential to students and researchers interested in combinatorial optimization, discrete mathematics, and discrete algorithms in the fields of mathematics, operations research, computer science, and economics. Key features: - Self-contained exposition of the theory of submodular ...
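The defining property behind this record is the submodular inequality f(A) + f(B) >= f(A ∪ B) + f(A ∩ B), which can be brute-force checked on small ground sets. A minimal sketch (illustrative, not taken from the book):

```python
from itertools import chain, combinations

def subsets(ground):
    """All subsets of a small ground set, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(ground), k) for k in range(len(ground) + 1))]

def is_submodular(f, ground):
    """Brute-force check of f(A) + f(B) >= f(A | B) + f(A & B) over all pairs."""
    ss = subsets(ground)
    return all(f(a) + f(b) + 1e-12 >= f(a | b) + f(a & b)
               for a in ss for b in ss)

# Coverage functions are a classic submodular family.
covered = {1: {'x'}, 2: {'x', 'y'}, 3: {'z'}}
coverage = lambda A: len(set().union(*(covered[i] for i in A))) if A else 0
```

The check is exponential in the ground-set size, which is precisely why the polynomial-time minimization algorithms the book covers are remarkable.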
Minguzzi, E.
2010-09-01
Every time function on spacetime gives a (continuous) total preordering of the spacetime events which respects the notion of causal precedence. The problem of the existence of a (semi-)time function on spacetime and the problem of recovering the causal structure starting from the set of time functions are studied. It is pointed out that these problems have an analog in the field of microeconomics known as utility theory. In a chronological spacetime the semi-time functions correspond to the utilities for the chronological relation, while in a K-causal (stably causal) spacetime the time functions correspond to the utilities for the K^+ relation (Seifert’s relation). By exploiting this analogy, we are able to import some mathematical results, most notably Peleg’s and Levin’s theorems, to the spacetime framework. As a consequence, we prove that a K-causal (i.e. stably causal) spacetime admits a time function and that the time or temporal functions can be used to recover the K^+ (or Seifert) relation which indeed turns out to be the intersection of the time or temporal orderings. This result tells us in which circumstances it is possible to recover the chronological or causal relation starting from the set of time or temporal functions allowed by the spacetime. Moreover, it is proved that a chronological spacetime in which the closure of the causal relation is transitive (for instance a reflective spacetime) admits a semi-time function. Along the way a new proof avoiding smoothing techniques is given that the existence of a time function implies stable causality, and a new short proof of the equivalence between K-causality and stable causality is given which takes advantage of Levin’s theorem and smoothing techniques.
Multidisciplinary team functioning.
Kovitz, K E; Dougan, P; Riese, R; Brummitt, J R
1984-01-01
This paper advocates the need to move beyond interdisciplinary team composition as a minimum criterion for multidisciplinary functioning in child abuse treatment. Recent developments within the field reflect the practice of shared professional responsibility for detection, case management and treatment. Adherence to this particular model for intervention requires cooperative service planning and implementation as task related functions. Implicitly, this model also carries the potential to incorporate the supportive functioning essential to effective group process. However, explicit attention to the dynamics and process of small groups has been neglected in prescriptive accounts of multidisciplinary child abuse team organization. The present paper therefore focuses upon the maintenance and enhancement aspects of multidisciplinary group functioning. First, the development and philosophy of service for the Alberta Children's Hospital Child Abuse Program are reviewed. Second, composition of the team, its mandate for service, and the population it serves are briefly described. Third, the conceptual framework within which the program functions is outlined. Strategies for effective group functioning are presented and the difficulties encountered with this model are highlighted. Finally, recommendations are offered for planning and implementing a multidisciplinary child abuse team and for maintaining its effective group functioning.
The Enzyme Function Initiative†
Gerlt, John A.; Allen, Karen N.; Almo, Steven C.; Armstrong, Richard N.; Babbitt, Patricia C.; Cronan, John E.; Dunaway-Mariano, Debra; Imker, Heidi J.; Jacobson, Matthew P.; Minor, Wladek; Poulter, C. Dale; Raushel, Frank M.; Sali, Andrej; Shoichet, Brian K.; Sweedler, Jonathan V.
2011-01-01
The Enzyme Function Initiative (EFI) was recently established to address the challenge of assigning reliable functions to enzymes discovered in bacterial genome projects; in this Current Topic we review the structure and operations of the EFI. The EFI includes the Superfamily/Genome, Protein, Structure, Computation, and Data/Dissemination Cores that provide the infrastructure for reliably predicting the in vitro functions of unknown enzymes. The initial targets for functional assignment are selected from five functionally diverse superfamilies (amidohydrolase, enolase, glutathione transferase, haloalkanoic acid dehalogenase, and isoprenoid synthase), with five superfamily-specific Bridging Projects experimentally testing the predicted in vitro enzymatic activities. The EFI also includes the Microbiology Core that evaluates the in vivo context of in vitro enzymatic functions and confirms the functional predictions of the EFI. The deliverables of the EFI to the scientific community include: 1) development of a large-scale, multidisciplinary sequence/structure-based strategy for functional assignment of unknown enzymes discovered in genome projects (target selection, protein production, structure determination, computation, experimental enzymology, microbiology, and structure-based annotation); 2) dissemination of the strategy to the community via publications, collaborations, workshops, and symposia; 3) computational and bioinformatic tools for using the strategy; 4) provision of experimental protocols and/or reagents for enzyme production and characterization; and 5) dissemination of data via the EFI’s website, enzymefunction.org. The realization of multidisciplinary strategies for functional assignment will begin to define the full metabolic diversity that exists in nature and will impact basic biochemical and evolutionary understanding, as well as a wide range of applications of central importance to industrial, medicinal and pharmaceutical efforts.
Polyspheroidal periodic functions
International Nuclear Information System (INIS)
Truskova, N.F.
1985-01-01
Separation of variables in the Helmholtz N-dimensional (N≥4) equation in polyspheroidal coordinate systems leads to the necessity of solving equations going over into equations for polyspheroidal periodic functions used for solving the two-centre problem in quantum mechanics, the three-body problem with Coulomb interaction, etc. For these functions the expansions are derived in terms of the Jacobi polynomials and Bessel functions. Their basic properties, asymptotics are considered. The algorithm of their computer calculations is developed. The results of numerical calculations are given
Directory of Open Access Journals (Sweden)
D. Singh
2007-10-01
Full Text Available Basic properties of cl-supercontinuity, a strong variant of continuity, due to Reilly and Vamanamurthy [Indian J. Pure Appl. Math., 14 (1983), 767–772], who call such maps clopen continuous, are studied. Sufficient conditions on the domain or range for a continuous function to be cl-supercontinuous are observed. Direct and inverse transfer of certain topological properties under cl-supercontinuous functions are studied, and the existence or nonexistence of certain cl-supercontinuous functions with specified domain or range is outlined.
Directory of Open Access Journals (Sweden)
Carlos A. Berenstein
1980-01-01
Full Text Available We show that any mean-periodic function f can be represented in terms of exponential-polynomial solutions of the same convolution equation f satisfies, i.e., μ∗f=0 (μ∈E′(ℝn)). This extends to n variables the work of L. Schwartz on mean-periodicity and also extends L. Ehrenpreis' work on partial differential equations with constant coefficients to arbitrary convolutors. We also answer a number of open questions about mean-periodic functions of one variable. The basic ingredient is our work on interpolation by entire functions in one and several complex variables.
Functional Amyloids in Reproduction.
Hewetson, Aveline; Do, Hoa Quynh; Myers, Caitlyn; Muthusubramanian, Archana; Sutton, Roger Bryan; Wylie, Benjamin J; Cornwall, Gail A
2017-06-29
Amyloids are traditionally considered pathological protein aggregates that play causative roles in neurodegenerative disease, diabetes and prionopathies. However, increasing evidence indicates that in many biological systems nonpathological amyloids are formed for functional purposes. In this review, we will specifically describe amyloids that carry out biological roles in sexual reproduction including the processes of gametogenesis, germline specification, sperm maturation and fertilization. Several of these functional amyloids are evolutionarily conserved across several taxa, including human, emphasizing the critical role amyloids perform in reproduction. Evidence will also be presented suggesting that, if altered, some functional amyloids may become pathological.
Characterisation of a uranium fire aerosol
International Nuclear Information System (INIS)
Leuscher, A.H.
1976-01-01
Uranium swarf, which can burn spontaneously in air, creates an aerosol which is chemically toxic and radiotoxic. The uptake of uranium oxide in the respiratory system is determined to a large extent by the characteristics of the aerosol. A study has been made of the methods by which aerosols can be characterised. The different measured and defined characteristics of particles are given. The normal and lognormal particle size distributions are discussed. Shape factors interrelating characteristics are explained. Experimental techniques for the characterisation of an aerosol are discussed, as well as the instruments that have been used in this study; namely the Andersen impactor, point-to-plane electrostatic precipitator and the Pollak counter. Uranium swarf was made to burn with a heated filament, and the resulting aerosol was measured. Optical and electron microscopy have been used for the determination of the projected area diameters, and the aerodynamic diameters have been determined with the impactor. The uranium fire aerosol can be represented by a bimodal, or monomodal, lognormal particle size distribution depending on the way in which the swarf burns. The determined activity median aerodynamic diameters of the two peaks were 0.49 μm and 6.0 μm, respectively.
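A bimodal lognormal particle size distribution of the kind used here can be sketched as a two-mode mixture. Only the two median diameters (0.49 and 6.0 μm) come from the study; the mode weight and geometric standard deviations below are illustrative assumptions.

```python
import math

def lognormal_pdf(d, median, gsd):
    """Lognormal pdf over diameter d, parameterized by the median diameter
    and the geometric standard deviation gsd (> 1)."""
    s = math.log(gsd)
    z = math.log(d / median) / s
    return math.exp(-0.5 * z * z) / (d * s * math.sqrt(2.0 * math.pi))

def bimodal_psd(d, w, median1, gsd1, median2, gsd2):
    """Two-mode lognormal mixture with weight w on the first mode."""
    return (w * lognormal_pdf(d, median1, gsd1)
            + (1.0 - w) * lognormal_pdf(d, median2, gsd2))

# Mode medians at the reported AMADs of 0.49 um and 6.0 um; equal weights
# and GSDs of 1.8 and 2.0 are assumed for illustration.
p = bimodal_psd(1.0, 0.5, 0.49, 1.8, 6.0, 2.0)
```

Fitting such a mixture to impactor stage data would give the monomodal case automatically when the fitted weight collapses to 0 or 1.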
THE PSEUDO-SMARANDACHE FUNCTION
David Gorski
2007-01-01
The Pseudo-Smarandache function belongs to number theory and derives from the Smarandache function. It is denoted Z(n), where n is any natural number.
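Taking the common definition of Z(n) as the smallest m for which n divides the triangular number m(m+1)/2 (an assumption here; the record itself does not state the definition), a minimal sketch:

```python
def Z(n):
    """Pseudo-Smarandache function: smallest m >= 1 such that n divides
    the triangular number 1 + 2 + ... + m = m*(m+1)//2."""
    m, tri = 1, 1
    while tri % n:
        m += 1
        tri += m
    return m

# First values: Z(1)=1, Z(2)=3, Z(3)=2, Z(4)=7, Z(5)=4, Z(6)=3
```

For comparison, the Smarandache function S(n) it derives from is the smallest m with n dividing m! rather than a triangular number.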
Coded Network Function Virtualization
DEFF Research Database (Denmark)
Al-Shuwaili, A.; Simone, O.; Kliewer, J.
2016-01-01
Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...
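A single XOR parity frame is the simplest instance of coding data across servers instead of duplicating it. This sketch is purely illustrative of the coding-versus-duplication idea and is not the letter's actual uplink channel-decoding construction.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(frames):
    """k data frames -> k + 1 shares: the frames themselves plus one XOR
    parity frame (a single-parity erasure code)."""
    parity = frames[0]
    for f in frames[1:]:
        parity = xor(parity, f)
    return frames + [parity]

def recover(surviving_shares):
    """XOR of the k surviving shares reconstructs the single lost frame."""
    out = surviving_shares[0]
    for s in surviving_shares[1:]:
        out = xor(out, s)
    return out

frames = [os.urandom(8) for _ in range(3)]  # frames headed to 3 servers
shares = encode(frames)                     # a 4th server holds the parity
survivors = [s for i, s in enumerate(shares) if i != 1]  # server 1 fails
assert recover(survivors) == frames[1]
```

Compared with full duplication (2k servers for k frames), this tolerates one failure with only k + 1 servers, which is the cost argument for coded redundancy.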
Functional Use Database (FUse)
U.S. Environmental Protection Agency — There are five different files for this dataset: 1. A dataset listing the reported functional uses of chemicals (FUse) 2. All 729 ToxPrint descriptors obtained from...
DEFF Research Database (Denmark)
Törpel, Bettina
2006-01-01
The objective of this paper is the design of computer supported joint action spaces. It is argued against a view of functionality as residing in computer applications. In such a view the creation of functionality is equivalent to the creation of computer applications. Functionality, in the view advocated in this paper, emerges in the specific dynamic interplay of actors, objectives, structures, practices and means. In this view, functionality is the result of creating, harnessing and inhabiting computer supported joint action spaces. The successful creation and further development of a computer supported joint action space comprises a whole range of appropriate design contributions. The approach is illustrated by the example of the creation of the computer supported joint action space "exchange network of voluntary union educators". As part of the effort a group of participants created...
Introduction to structure functions
International Nuclear Information System (INIS)
Kwiecinski, J.
1996-07-01
The theory of deep inelastic scattering structure functions is reviewed with an emphasis put on the QCD expectations of their behaviour in the region of small values of Bjorken parameter x. (author). 56 refs
Bioprinting: Functional droplet networks
Durmus, Naside Gozde; Tasoglu, Savas; Demirci, Utkan
2013-06-01
Tissue-mimicking printed networks of droplets separated by lipid bilayers that can be functionalized with membrane proteins are able to spontaneously fold and transmit electrical currents along predefined paths.
Center for Functional Nanomaterials
Federal Laboratory Consortium — The Center for Functional Nanomaterials (CFN) explores the unique properties of materials and processes at the nanoscale. The CFN is a user-oriented research center...
density functional theory approach
Indian Academy of Sciences (India)
YOGESH ERANDE
2017-07-27
Jul 27, 2017 ... a key role in all optical switching devices, since their optical properties can be ... optimized in the gas phase using Density Functional Theory (DFT). ... The Mediation of Electrostatic Effects by Solvents, J. Am. Chem. ...
Reasoning about Function Objects
Nordio, Martin; Calcagno, Cristiano; Meyer, Bertrand; Müller, Peter; Tschannen, Julian
Modern object-oriented languages support higher-order implementations through function objects such as delegates in C#, agents in Eiffel, or closures in Scala. Function objects bring a new level of abstraction to the object-oriented programming model, and require a comparable extension to specification and verification techniques. We introduce a verification methodology that extends function objects with auxiliary side-effect free (pure) methods to model logical artifacts: preconditions, postconditions and modifies clauses. These pure methods can be used to specify client code abstractly, that is, independently from specific instantiations of the function objects. To demonstrate the feasibility of our approach, we have implemented an automatic prover, which verifies several non-trivial examples.
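The idea of attaching pure precondition/postcondition methods to a function object can be sketched in Python with runtime checks (the names here are hypothetical; the paper targets Eiffel-style agents verified statically by an automatic prover, not checked at run time):

```python
import math

class FunctionObject:
    """A callable bundling a body with side-effect-free precondition and
    postcondition methods, so client code can be specified against the
    contract alone, independently of the concrete body."""

    def __init__(self, pre, body, post):
        self.precondition = pre     # pure: *args -> bool
        self.postcondition = post   # pure: (args, result) -> bool
        self._body = body

    def __call__(self, *args):
        if not self.precondition(*args):
            raise AssertionError("precondition violated")
        result = self._body(*args)
        if not self.postcondition(args, result):
            raise AssertionError("postcondition violated")
        return result

# A square-root agent whose contract is checked at each call.
safe_sqrt = FunctionObject(
    pre=lambda x: x >= 0.0,
    body=math.sqrt,
    post=lambda args, r: abs(r * r - args[0]) < 1e-9)
```

A client that only holds a `FunctionObject` can reason about any call through `precondition` and `postcondition`, which is the abstraction the verification methodology exploits.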
Fundamentals of functional analysis
Farenick, Douglas
2016-01-01
This book provides a unique path for graduate or advanced undergraduate students to begin studying the rich subject of functional analysis with fewer prerequisites than is normally required. The text begins with a self-contained and highly efficient introduction to topology and measure theory, which focuses on the essential notions required for the study of functional analysis, and which are often buried within full-length overviews of the subjects. This is particularly useful for those in applied mathematics, engineering, or physics who need to have a firm grasp of functional analysis, but not necessarily some of the more abstruse aspects of topology and measure theory normally encountered. The reader is assumed to only have knowledge of basic real analysis, complex analysis, and algebra. The latter part of the text provides an outstanding treatment of Banach space theory and operator theory, covering topics not usually found together in other books on functional analysis. Written in a clear, concise manner,...
International Nuclear Information System (INIS)
Arnold, V.I.
2006-03-01
To describe the topological structure of a real smooth function one associates to it the graph, formed by the topological variety, whose points are the connected components of the level hypersurface of the function. For a Morse function, such a graph is a tree. Generically, it has T triple vertices, T + 2 endpoints, 2T + 2 vertices and 2T + 1 arrows. The main goal of the present paper is to study the statistics of the graphs corresponding to T triple points: what is the growth rate of the number φ(T) of different graphs? Which part of these graphs is representable by polynomial functions of corresponding degree? A generic polynomial of degree n has at most (n-1)^2 critical points on R^2, corresponding to 2T + 2 = (n-1)^2 + 1, that is to T = 2k(k-1) saddle points for degree n = 2k.
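The vertex counts quoted in the abstract, and the saddle count T = 2k(k-1) for degree n = 2k, follow from simple arithmetic; a sketch:

```python
def morse_tree_counts(T):
    """Generic counts for the graph of a Morse function with T triple
    vertices, as stated in the abstract."""
    return {"triple": T, "endpoints": T + 2,
            "vertices": 2 * T + 2, "edges": 2 * T + 1}

def saddles_for_degree(n):
    """For even degree n = 2k, the (n-1)^2 critical points give
    2T + 2 = (n-1)^2 + 1 graph vertices, i.e. T = 2k(k-1) saddles."""
    k = n // 2
    T = 2 * k * (k - 1)
    assert n % 2 == 0 and 2 * T + 2 == (n - 1) ** 2 + 1
    return T
```

Note that the edge count 2T + 1 is one less than the vertex count 2T + 2, as it must be for a tree.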
McGraw, John T [Placitas, NM; Zimmer, Peter C [Albuquerque, NM; Ackermann, Mark R [Albuquerque, NM
2012-01-24
Methods and apparatus for a structure function monitor provide for generation of parameters characterizing a refractive medium. In an embodiment, a structure function monitor acquires images of a pupil plane and an image plane and, from these images, retrieves the phase over an aperture, unwraps the retrieved phase, and analyzes the unwrapped retrieved phase. In an embodiment, analysis yields atmospheric parameters measured at spatial scales from zero to the diameter of a telescope used to collect light from a source.
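The phase-unwrapping step mentioned above can be illustrated on a one-dimensional wrapped phase profile. This toy uses numpy's standard unwrap routine and is not the monitor's actual two-dimensional retrieval pipeline.

```python
import numpy as np

# A smooth phase ramp, wrapped into (-pi, pi] the way retrieval leaves it,
# is recovered by removing the 2*pi jumps between consecutive samples.
true_phase = np.linspace(0.0, 12.0, 200)          # several wraps of 2*pi
wrapped = np.angle(np.exp(1j * true_phase))        # wrapped measurement
recovered = np.unwrap(wrapped)                     # jump removal
assert np.allclose(recovered, true_phase, atol=1e-9)
```

One-dimensional unwrapping only works when adjacent samples differ by less than pi; real aperture data requires a two-dimensional, noise-robust unwrapper, which is part of what a structure function monitor must implement.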
Clayton, Anita H; Harsh, Veronica
2016-03-01
Women experience multiple changes in social and reproductive statuses across the life span which can affect sexual functioning. Various phases of the sexual response cycle may be impacted and can lead to sexual dysfunction. Screening for sexual problems and consideration of contributing factors such as neurobiology, reproductive life events, medical problems, medication use, and depression can help guide appropriate treatment and thereby improve the sexual functioning and quality of life of affected women. Treatment options include psychotropic medications, hormone therapy, and psychotherapy.
Inequalities for Humbert functions
Directory of Open Access Journals (Sweden)
Ayman Shehata
2014-04-01
Full Text Available This paper is motivated by an open problem of Luke’s theorem. We develop a unified point of view on the theory of inequalities for Humbert functions, and inequalities for their general ratios are obtained. Some particular cases and refinements are given. Finally, we obtain some important results involving inequalities of Bessel and Whittaker’s functions as applications.
Mahamood, Rasheedat Modupe
2017-01-01
This book presents the concept of functionally graded materials as well as their use and different fabrication processes. The authors describe the use of additive manufacturing technology for the production of very complex parts directly from the three-dimensional computer-aided design of the part by adding material layer after layer. A case study is also presented in the book on the experimental analysis of functionally graded material using the laser metal deposition process.
[Functional (psychogenic) vertigo].
Diukova, G M; Zamergrad, M V; Golubev, V L; Adilova, S M; Makarov, S A
Psychogenic (functional) vertigo is the second most frequent form of vertigo after benign paroxysmal positional vertigo. The diagnosis is often difficult to make, the diagnostic work-up is expensive and traditional treatment is often ineffective. This literature review covers current concepts of the terminology, clinical signs, pathogenesis and treatment approaches with regard to functional vertigo. Special attention is given to the cerebral mechanisms of its pathogenesis, including cognitive aspects.
NEUROFEEDBACK USING FUNCTIONAL SPECTROSCOPY
Hinds, Oliver; Wighton, Paul; Tisdall, M. Dylan; Hess, Aaron; Breiter, Hans; van der Kouwe, André
2014-01-01
Neurofeedback based on real-time measurement of the blood oxygenation level-dependent (BOLD) signal has potential for treatment of neurological disorders and behavioral enhancement. Commonly employed methods are based on functional magnetic resonance imaging (fMRI) sequences that sacrifice speed and accuracy for whole-brain coverage, which is unnecessary in most applications. We present multi-voxel functional spectroscopy (MVFS): a system for computing the BOLD signal from multiple volumes of...
Griffel, DH
2002-01-01
A stimulating introductory text, this volume examines many important applications of functional analysis to mechanics, fluid mechanics, diffusive growth, and approximation. Detailed enough to impart a thorough understanding, the text is also sufficiently straightforward for those unfamiliar with abstract analysis. Its four-part treatment begins with distribution theory and discussions of Green's functions. Essentially independent of the preceding material, the second and third parts deal with Banach spaces, Hilbert space, spectral theory, and variational techniques. The final part outlines the
Pancreatic Exocrine Function Testing
Berk, J. Edward
1982-01-01
It is important to understand which pancreatic function tests are available and how to interpret them when evaluating patients with malabsorption. Available direct tests are the secretin stimulation test, the Lundh test meal, and measurement of serum or fecal enzymes. Indirect tests assess pancreatic exocrine function by measuring the effect of pancreatic secretion on various nutrients. These include triglycerides labeled with carbon 14, cobalamin labeled with cobalt 57 and cobalt 58, and par...
Purely Functional Structured Programming
Obua, Steven
2010-01-01
The idea of functional programming has played a big role in shaping today's landscape of mainstream programming languages. Another concept that dominates the current programming style is Dijkstra's structured programming. Both concepts have been successfully married, for example in the programming language Scala. This paper proposes how the same can be achieved for structured programming and PURELY functional programming via the notion of LINEAR SCOPE. One advantage of this proposal is that m...
Exponential and Logarithmic Functions
Todorova, Tamara
2010-01-01
Exponential functions find applications in economics in relation to growth and economic dynamics. In these fields, quite often the choice variable is time and economists are trying to determine the best timing for certain economic activities to take place. An exponential function is one in which the independent variable appears in the exponent. Very often that exponent is time. In highly mathematical courses, it is a truism that students learn by doing, not by reading. Tamara Todorova’s Pr...
Handbook of functional equations functional inequalities
2014-01-01
As Richard Bellman has so elegantly stated at the Second International Conference on General Inequalities (Oberwolfach, 1978), “There are three reasons for the study of inequalities: practical, theoretical, and aesthetic.” On the aesthetic aspects, he said, “As has been pointed out, beauty is in the eye of the beholder. However, it is generally agreed that certain pieces of music, art, or mathematics are beautiful. There is an elegance to inequalities that makes them very attractive.” The content of the Handbook focuses mainly on both old and recent developments on approximate homomorphisms, on a relation between the Hardy–Hilbert and the Gabriel inequality, generalized Hardy–Hilbert type inequalities on multiple weighted Orlicz spaces, half-discrete Hilbert-type inequalities, on affine mappings, on contractive operators, on multiplicative Ostrowski and trapezoid inequalities, Ostrowski type inequalities for the Riemann–Stieltjes integral, means and related functional inequalities, Weighted G...
Functional Neuroimaging in Psychopathy.
Del Casale, Antonio; Kotzalidis, Georgios D; Rapinesi, Chiara; Di Pietro, Simone; Alessi, Maria Chiara; Di Cesare, Gianluigi; Criscuolo, Silvia; De Rossi, Pietro; Tatarelli, Roberto; Girardi, Paolo; Ferracuti, Stefano
2015-01-01
Psychopathy is associated with cognitive and affective deficits causing disruptive, harmful and selfish behaviour. These have considerable societal costs due to recurrent crime and property damage. A better understanding of the neurobiological bases of psychopathy could improve therapeutic interventions, reducing the related social costs. To analyse the major functional neural correlates of psychopathy, we reviewed functional neuroimaging studies conducted on persons with this condition. We searched the PubMed database for papers dealing with functional neuroimaging and psychopathy, with a specific focus on how neural functional changes may correlate with task performances and human behaviour. Psychopathy-related behavioural disorders consistently correlated with dysfunctions in brain areas of the orbitofrontal-limbic (emotional processing and somatic reaction to emotions; behavioural planning and responsibility taking), anterior cingulate-orbitofrontal (correct assignment of emotional valence to social stimuli; violent/aggressive behaviour and challenging attitude) and prefrontal-temporal-limbic (emotional stimuli processing/response) networks. Dysfunctional areas more consistently included the inferior frontal, orbitofrontal, dorsolateral prefrontal, ventromedial prefrontal, temporal (mainly the superior temporal sulcus) and cingulate cortices, the insula, amygdala, ventral striatum and other basal ganglia. Emotional processing and learning, and several social and affective decision-making functions are impaired in psychopathy, which correlates with specific changes in neural functions. © 2015 S. Karger AG, Basel.
Functional integration over geometries
International Nuclear Information System (INIS)
Mottola, E.
1995-01-01
The geometric construction of the functional integral over coset spaces M/G is reviewed. The inner product on the cotangent space of infinitesimal deformations of M defines an invariant distance and volume form, or functional integration measure on the full configuration space. Then, by a simple change of coordinates parameterizing the gauge fiber G, the functional measure on the coset space M/G is deduced. This change of integration variables leads to a Jacobian which is entirely equivalent to the Faddeev--Popov determinant of the more traditional gauge fixed approach in non-abelian gauge theory. If the general construction is applied to the case where G is the group of coordinate reparameterizations of spacetime, the continuum functional integral over geometries, i.e. metrics modulo coordinate reparameterizations may be defined. The invariant functional integration measure is used to derive the trace anomaly and effective action for the conformal part of the metric in two and four dimensional spacetime. In two dimensions this approach generates the Polyakov--Liouville action of closed bosonic non-critical string theory. In four dimensions the corresponding effective action leads to novel conclusions on the importance of quantum effects in gravity in the far infrared, and in particular, a dramatic modification of the classical Einstein theory at cosmological distance scales, signaled first by the quantum instability of classical de Sitter spacetime. Finite volume scaling relations for the functional integral of quantum gravity in two and four dimensions are derived, and comparison with the discretized dynamical triangulation approach to the integration over geometries are discussed. Outstanding unsolved problems in both the continuum definition and the simplicial approach to the functional integral over geometries are highlighted
Renal Function in Hypothyroidism
International Nuclear Information System (INIS)
Khalid, S.; Khalid, M; Elfaki, M.; Hassan, N.; Suliman, S.M.
2007-01-01
Background: Hypothyroidism induces significant changes in the function of organ systems such as the heart, muscles and brain. Renal function is also influenced by thyroid status. Physiological effects include changes in water and electrolyte metabolism, notably hyponatremia, and reliable alterations of renal hemodynamics, including decrements in renal blood flow, renal plasma flow and glomerular filtration rate (GFR). Objective: Renal function is profoundly influenced by thyroid status; the purpose of the present study was to determine the relationship between renal function and thyroid status of patients with hypothyroidism. Design and Patients: In 5 patients with primary hypothyroidism and a control group, renal function was measured by serum creatinine and glomerular filtration rate (GFR) estimated using the Modification of Diet in Renal Disease (MDRD) formula. Result: In hypothyroidism, mean serum creatinine increased and mean estimated GFR decreased, whereas in the control group mean serum creatinine decreased and mean estimated GFR increased. The hypothyroid patients showed elevated serum creatinine levels (>1.1 mg/dl) compared to the control group (p value = .000). Mean estimated GFR decreased in patients, compared to an increase in the control group (p value = .002).
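The MDRD estimate used in the study can be sketched as follows. The coefficients are the widely published 4-variable (IDMS-traceable) form of the MDRD study equation, not values taken from this abstract, so treat the sketch as illustrative.

```python
def mdrd_egfr(scr_mg_dl: float, age_years: float, female: bool, black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 4-variable MDRD study equation.

    Coefficients are the commonly published IDMS-traceable ones; this is an
    illustrative sketch, not the exact implementation used by the authors.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# The abstract's creatinine cut-off of 1.1 mg/dl: for a hypothetical
# 50-year-old male, Scr = 1.1 already corresponds to a reduced eGFR.
print(round(mdrd_egfr(1.1, 50, female=False), 1))
```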
Directory of Open Access Journals (Sweden)
Samson Z Assefa
2015-08-01
Full Text Available Sleep is a ubiquitous component of animal life including birds and mammals. The exact function of sleep has been one of the mysteries of biology. A considerable number of theories have been put forward to explain the reason(s for the necessity of sleep. To date, while a great deal is known about what happens when animals sleep, there is no definitive comprehensive explanation as to the reason that sleep is an inevitable part of animal functioning. It is well known that sleep is a homeostatically regulated body process, and that prolonged sleep deprivation is fatal in animals. In this paper, we present some of the theories as to the functions of sleep and provide a review of some hypotheses as to the overall physiologic function of sleep. To better understand the purpose for sleeping, we review the effects of sleep deprivation on physical, neurocognitive and psychic function. A better understanding of the purpose for sleeping will be a great advance in our understanding of the nature of the animal kingdom, including our own.
Renal Function in Hypothyroidism
International Nuclear Information System (INIS)
Khalid, A. S; Ahmed, M.I; Elfaki, H.M; Hassan, N.; Suliman, S. M.
2006-12-01
Background: Hypothyroidism induces significant changes in the function of organ systems such as the heart, muscles and brain. Renal function is also influenced by thyroid status. Physiological effects include changes in water and electrolyte metabolism, notably hyponatraemia, and reliable alterations of renal hemodynamics, including decrements in renal blood flow, renal plasma flow and glomerular filtration rate (GFR). Objective: Renal function is profoundly influenced by thyroid status; the purpose of the present study was to determine the relationship between renal function and thyroid status of patients with hypothyroidism. Design and Patients: In 5 patients with primary hypothyroidism and a control group, renal function was measured by serum creatinine and glomerular filtration rate (GFR) estimated using the Modification of Diet in Renal Disease (MDRD) formula. Result: In hypothyroidism, mean serum creatinine increased and mean estimated GFR decreased, whereas in the control group mean serum creatinine decreased and mean estimated GFR increased. The hypothyroid patients showed elevated serum creatinine levels (>1.1 mg/dl) compared to the control group (p value = .000). Mean estimated GFR decreased in patients, compared to an increase in the control group (p value = .002). Conclusion: Thus the kidney, in addition to the brain, heart and muscle, is an important target of the action of thyroid hormones. (Author)
The tensor distribution function.
Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M
2009-01-01
Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
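The TDF idea of modeling fiber crossing as a weighted ensemble of Gaussian diffusion processes can be illustrated with a toy mixture. A full TDF is a probability function over all symmetric positive-definite matrices solved variationally; the two-compartment example and helper names below are ours, for illustration only.

```python
import math

def signal(q, tensors, weights, b=1.0):
    """Mixture signal S(q) = sum_i w_i * exp(-b * q^T D_i q) for gradient direction q."""
    s = 0.0
    for D, w in zip(tensors, weights):
        qDq = sum(q[i] * D[i][j] * q[j] for i in range(3) for j in range(3))
        s += w * math.exp(-b * qDq)
    return s

# Two fiber populations crossing at 90 degrees (diagonal tensors, x- and y-aligned),
# each carrying half the TDF weight.
Dx = [[1.7, 0, 0], [0, 0.2, 0], [0, 0, 0.2]]
Dy = [[0.2, 0, 0], [0, 1.7, 0], [0, 0, 0.2]]
tensors, weights = [Dx, Dy], [0.5, 0.5]

# Attenuation is strongest along each fiber axis and symmetric between the two.
s_x = signal((1, 0, 0), tensors, weights)
s_y = signal((0, 1, 0), tensors, weights)
assert abs(s_x - s_y) < 1e-12
```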
International Nuclear Information System (INIS)
Kornblum, J.J.
1974-01-01
The search for a quantitative neutron source function for the lunar surface region is justified because it contributes to our understanding of the history of the lunar surface and of nuclear processes occurring on the moon since its formation. A knowledge of the neutron source function and neutron flux distribution is important for the interpretation of many experimental measurements. This dissertation uses the available pertinent experimental measurements together with theoretical calculations to obtain an estimate of the lunar neutron source function below 15 MeV. Based upon reasonable assumptions, a lunar neutron source function with adjustable parameters is assumed for neutrons below 15 MeV. The lunar neutron source function is composed of several components resulting from the action of cosmic rays on lunar material. A comparison with previous neutron calculations is made and significant differences are discussed. Application of the results to the problem of lunar soil histories is examined using the statistical model for soil development proposed by Fireman. The conclusion is drawn that the moon is losing mass
Functional (psychogenic) stereotypies.
Baizabal-Carvallo, José Fidel; Jankovic, Joseph
2017-07-01
Functional (psychogenic) movement disorders (FMDs) may present with a broad spectrum of phenomenology including stereotypic movements. We aimed to characterize the phenomenology of functional stereotypies and compare these features with those observed in 65 patients with tardive dyskinesia (TD). From a cohort of 184 patients with FMDs, we identified 19 (10.3%) with functional stereotypies (FS). There were 15 women and 4 men, with a mean age at onset of 38.6 ± 17.4 years. Among the patients with FS, there were 9 (47%) with orolingual dyskinesia/stereotypy, 9 (47%) with limb stereotypies, 6 (32%) with trunk stereotypies, and 2 (11%) with respiratory dyskinesia as part of orofacial-laryngeal-trunk stereotypy. These patients showed signs commonly seen in FMDs such as sudden onset (84%), prominent distractibility (58%), and periods of unexplained improvement (84%) that were not reported in patients with TD. Besides a much lower frequency of exposure to potential offending drugs, patients with FS differed from those with classic TD by a younger age at onset, lack of self-biting, uncommon chewing movements, more frequent lingual movements without mouth dyskinesia, and associated functional tremor and abnormal speech. Lack of self-biting showed the highest sensitivity (1.0) and abnormal speech showed the highest specificity (0.9) for the diagnosis of functional orolingual dyskinesia. FS represent part of the clinical spectrum of FMDs. Clinical and demographic features are helpful in distinguishing patients with FS from those with TD.
Functional Anorectal Disorders.
Rao, Satish Sc; Bharucha, Adil E; Chiarioni, Giuseppe; Felt-Bersma, Richelle; Knowles, Charles; Malcolm, Allison; Wald, Arnold
2016-03-25
This report defines criteria and reviews the epidemiology, pathophysiology, and management of common anorectal disorders: fecal incontinence (FI), functional anorectal pain and functional defecation disorders. FI is defined as the recurrent uncontrolled passage of fecal material for at least 3 months. The clinical features of FI are useful for guiding diagnostic testing and therapy. Anorectal manometry and imaging are useful for evaluating anal and pelvic floor structure and function. Education, antidiarrheals and biofeedback therapy are the mainstay of management; surgery may be useful in refractory cases. Functional anorectal pain syndromes are defined by clinical features and categorized into three subtypes. In proctalgia fugax, the pain is typically fleeting and lasts for seconds to minutes. In levator ani syndrome (LAS) and unspecified anorectal pain the pain lasts more than 30 minutes, but in LAS there is puborectalis tenderness. Functional defecation disorders are defined by >2 symptoms of chronic constipation or irritable bowel syndrome with constipation, and with >2 features of impaired evacuation i.e., abnormal evacuation pattern on manometry, abnormal balloon expulsion test or impaired rectal evacuation by imaging. It includes two subtypes; dyssynergic defecation and inadequate defecatory propulsion. Pelvic floor biofeedback therapy is effective for treating LAS and defecatory disorders. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
Blair, Clancy
2016-01-01
Executive functions are thinking skills that assist with reasoning, planning, problem solving, and managing one’s life. The brain areas that underlie these skills are interconnected with and influenced by activity in many different brain areas, some of which are associated with emotion and stress. One consequence of the stress-specific connections is that executive functions, which help us to organize our thinking, tend to be disrupted when stimulation is too high and we are stressed out, or too low when we are bored and lethargic. Given their central role in reasoning and also in managing stress and emotion, scientists have conducted studies, primarily with adults, to determine whether executive functions can be improved by training. By and large, results have shown that they can be, in part through computer-based videogame-like activities. Evidence of wider, more general benefits from such computer-based training, however, is mixed. Accordingly, scientists have reasoned that training will have wider benefits if it is implemented early, with very young children as the neural circuitry of executive functions is developing, and that it will be most effective if embedded in children’s everyday activities. Evidence produced by this research, however, is also mixed. In sum, much remains to be learned about executive function training. Without question, however, continued research on this important topic will yield valuable information about cognitive development. PMID:27906522
Staunton, H
2001-01-01
Theories on the function of REM sleep and dreaming, with which it has a contingent relationship, remain diverse. They include facilitation of memory storage, reverse learning, anatomical and functional brain maturation, catecholamine restoration, and psychoanalytic accounts (wish fulfilment or otherwise). It is possible that one function is grafted onto another as the personality develops. Given a close relationship between REM sleep and dreaming, and given that the neonate spends 18 hours asleep per day, of which 12 hours are spent in REM sleep, it is logical to look in the neonate for a primary function of dreaming. The two constants in the dreaming process are: 1) the dreamer is always present as first person observer; 2) there is always a topographical setting. Based on the foregoing, it is proposed that a major function of REM sleep is the development and maintenance of a sense of personal identity, through creating a 'being there' environment at regular intervals during prolonged periods of absence from a waking state in topographical surrounds. The infant cannot forget who he/she is. Thus, he/she develops a clear sense of his/her own identity, or the 'I'ness of me', and a sense of his/her separateness from the topographical world. At the same time, by largely forgetting the dreams, he/she is not burdened by the need for an elaborate method of storage of the vicarious and bizarre experiences.
Effect of functional overreaching on executive functions.
Dupuy, O; Renaud, M; Bherer, L; Bosquet, L
2010-09-01
The aim of this study was to investigate whether cognitive performance is a valid marker of overreaching. 10 well-trained male endurance athletes increased their training load by 100% for 2 weeks. They performed a maximal graded test, a constant speed test, a reaction time task and a computerized version of the Stroop color-word test before and after this overload period. Regarding performance results, five participants were considered overreached and the five remaining were considered well-trained. We found no significant differences between groups in performing the Stroop test. Noteworthy, we found a small increase in response time in the more complex condition in overreached athletes (1188±261 to 1297±231 ms, effect size = 0.44), while it decreased moderately in the well-trained athletes (1066±175 to 963±171 ms, effect size = −0.59). Furthermore, we found an interaction between time and group on initiation time in the reaction time task, since it increased in overreached athletes after the overload period (246±24 to 264±26 ms, p<0.05), while it remained unchanged in well-trained participants. Participants made very few anticipation errors, whatever the group or the period (error rate <2%). We concluded that an unaccustomed increase in training volume which is accompanied by a decrement in physical performance induces a deterioration of some executive functions. Georg Thieme Verlag KG Stuttgart · New York.
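The effect sizes quoted above are consistent with Cohen's d computed from the reported means and standard deviations (pooled-SD form). The authors' exact formula is not stated in the abstract, so this is a hedged reconstruction, not their code.

```python
import math

def cohens_d(mean_pre, sd_pre, mean_post, sd_post):
    """Cohen's d with a pooled standard deviation (assumed form, for illustration)."""
    pooled_sd = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / pooled_sd

# Overreached athletes: 1188 +/- 261 ms -> 1297 +/- 231 ms (quoted d = 0.44).
d_overreached = cohens_d(1188, 261, 1297, 231)
# Well-trained athletes: 1066 +/- 175 ms -> 963 +/- 171 ms (close to quoted d = -0.59).
d_well_trained = cohens_d(1066, 175, 963, 171)
print(d_overreached, d_well_trained)
```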
Functional illiteracy in Slovenia
Directory of Open Access Journals (Sweden)
Ester Možina
1999-12-01
Full Text Available The author draws attention to the fact that, in determining functional illiteracy, there remain many terminological disagreements and diverse opinions regarding illiteracy. Furthermore, there are also different methods for measuring writing abilities, thus leading to disparate results. The introductory section presents the dilemmas relating to the term functional illiteracy, while the second part is concerned with the various methods for measuring literacy. The author also critically assesses the research studies aimed at evaluating the scope of literacy amongst adults in Slovenia during the past decade. In this paper, she has adopted a methodology which would not determine what is functional and what is not in our society, in order to avoid limiting the richness of individual writing praxis.
International Nuclear Information System (INIS)
McCready, V.R.; Leach, M.O.; Sutton; Ell, P.
1986-01-01
The object of this book is to discuss and evaluate an area of Nuclear Magnetic Resonance which to date has been less emphasized than it might be, namely the use of NMR for functional studies. The book commences with a discussion of the areas in which the NMR techniques might be needed due to deficiencies in other techniques. The physics of NMR especially relating to functional measurement are then explained. Technical factors in producing functional images are discussed and the use of paramagnetic substances for carrying out flow studies are detailed. Particular attention is paid to specific studies in the various organs. The book ends with a survey of imaging in each organ and the relation of NMR images to other techniques such as ultrasound, nuclear medicine and X-rays
Quantal density functional theory
Sahni, Viraht
2016-01-01
This book deals with quantal density functional theory (QDFT) which is a time-dependent local effective potential theory of the electronic structure of matter. The treated time-independent QDFT constitutes a special case. In the 2nd edition, the theory is extended to include the presence of external magnetostatic fields. The theory is a description of matter based on the ‘quantal Newtonian’ first and second laws which is in terms of “classical” fields that pervade all space, and their quantal sources. The fields, which are explicitly defined, are separately representative of electron correlations due to the Pauli exclusion principle, Coulomb repulsion, correlation-kinetic, correlation-current-density, and correlation-magnetic effects. The book further describes Schrödinger theory from the new physical perspective of fields and quantal sources. It also describes traditional Hohenberg-Kohn-Sham DFT, and explains via QDFT the physics underlying the various energy functionals and functional derivatives o...
Protein Functionalized Nanodiamond Arrays
Directory of Open Access Journals (Sweden)
Liu YL
2010-01-01
Full Text Available Abstract Various nanoscale elements are currently being explored for bio-applications, such as in bio-images, bio-detection, and bio-sensors. Among them, nanodiamonds possess remarkable features such as low bio-cytotoxicity, good optical property in fluorescent and Raman spectra, and good photostability for bio-applications. In this work, we devise techniques to position functionalized nanodiamonds on self-assembled monolayer (SAMs arrays adsorbed on silicon and ITO substrates surface using electron beam lithography techniques. The nanodiamond arrays were functionalized with lysozyme to target a certain biomolecule or protein specifically. The optical properties of the nanodiamond-protein complex arrays were characterized by a high throughput confocal microscope. The synthesized nanodiamond-lysozyme complex arrays were found to still retain their functionality in interacting with E. coli.
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2010-01-01
This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice...... probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications....... The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models....
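For the simplest RUM, the multinomial logit, the CPGF construction can be made concrete: the generating function is the log-sum-exp of the utilities, and its gradient yields the choice probabilities (the softmax). The names and setup below are ours, not the paper's notation.

```python
import math

def cpgf(V):
    """Log-sum-exp choice-probability generating function of a multinomial logit
    (numerically stabilized by subtracting the maximum)."""
    m = max(V)
    return m + math.log(sum(math.exp(v - m) for v in V))

def choice_probs(V):
    """Gradient of the CPGF with respect to V: P_j = exp(V_j - G(V))."""
    g = cpgf(V)
    return [math.exp(v - g) for v in V]

P = choice_probs([1.0, 2.0, 0.5])
assert abs(sum(P) - 1.0) < 1e-12   # probabilities sum to one
assert P[1] == max(P)              # highest utility -> highest choice probability
```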
Functional Programming Using F#
DEFF Research Database (Denmark)
Hansen, Michael Reichhardt; Rischel, Hans
This comprehensive introduction to the principles of functional programming using F# shows how to apply basic theoretical concepts to produce succinct and elegant programs. It demonstrates the role of functional programming in a wide spectrum of applications including databases and systems....... Coverage also includes advanced features in the .NET library, the imperative features of F# and topics such as text processing, sequences, computation expressions and asynchronous computation. With a broad spectrum of examples and exercises, the book is perfect for courses in functional programming...... and for self-study. Enhancing its use as a text is an accompanying website with downloadable programs, lecture slides, a mini-projects and links to further F# sources....
Spaces of continuous functions
Groenewegen, G L M
2016-01-01
The space C(X) of all continuous functions on a compact space X carries the structure of a normed vector space, an algebra and a lattice. On the one hand we study the relations between these structures and the topology of X, on the other hand we discuss a number of classical results according to which an algebra or a vector lattice can be represented as a C(X). Various applications of these theorems are given. Some attention is devoted to related theorems, e.g. the Stone Theorem for Boolean algebras and the Riesz Representation Theorem. The book is functional analytic in character. It does not presuppose much knowledge of functional analysis; it contains introductions into subjects such as the weak topology, vector lattices and (some) integration theory.
Expanding Pseudorandom Functions
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Nielsen, Jesper Buus
2002-01-01
Given any weak pseudorandom function, we present a general and efficient technique transforming such a function to a new weak pseudorandom function with an arbitrary length output. This implies, among other things, an encryption mode for block ciphers. The mode is as efficient as known (and widely...... used) encryption modes as CBC mode and counter (CTR) mode, but is provably secure against chosen-plaintext attack (CPA) already if the underlying symmetric cipher is secure against known-plaintext attack (KPA). We prove that CBC, CTR and Jutla’s integrity aware modes do not have this property....... In particular, we prove that when using a KPA secure block cipher, then: CBC mode is KPA secure, but need not be CPA secure, Jutla’s modes need not be CPA secure, and CTR mode need not be even KPA secure. The analysis is done in a concrete security framework....
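The counter-style expansion described above can be sketched as follows: given a PRF F with fixed-length output, define the expanded function as the concatenation of F(k, 0), F(k, 1), … truncated to the requested length. HMAC-SHA256 stands in for the underlying PRF purely for illustration; the paper's construction and security claims concern weak PRFs in general, not HMAC specifically.

```python
import hmac
import hashlib

def expand_prf(key: bytes, n_bytes: int) -> bytes:
    """Expand a fixed-output-length PRF (here HMAC-SHA256, as a stand-in)
    to an arbitrary-length output by evaluating it on successive counters."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        block = counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:n_bytes]

stream = expand_prf(b"demo-key", 100)
assert len(stream) == 100
# Deterministic, like CTR-mode keystream generation: same key, same output.
assert stream == expand_prf(b"demo-key", 100)
```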
International Nuclear Information System (INIS)
Levine, R.D.
1988-01-01
Statistical considerations are applied to quantum mechanical amplitudes. The physical motivation is the progress in the spectroscopy of highly excited states. The corresponding wave functions are strongly mixed. In terms of a basis set of eigenfunctions of a zeroth-order Hamiltonian with good quantum numbers, such wave functions have contributions from many basis states. The vector x is considered whose components are the expansion coefficients in that basis. Any amplitude can be written as a†x. It is argued that the components of x, and hence other amplitudes, can be regarded as random variables. The maximum entropy formalism is applied to determine the corresponding distribution function. Two amplitudes a†x and b†x are independently distributed if b†a = 0. It is suggested that the theory of quantal measurements implies that, in general, one can only determine the distribution of amplitudes and not the amplitudes themselves
DEFF Research Database (Denmark)
Nielsen, Line A.; Zois, Nora Elisabeth; Pedersen, Henrik D.
2007-01-01
Background: Clinical studies investigating platelet function in dogs have had conflicting results that may be caused by normal physiologic variation in platelet response to agonists. Objectives: The objective of this study was to investigate platelet function in clinically healthy dogs of 4 different breeds by whole-blood aggregometry and with a point-of-care platelet function analyzer (PFA-100), and to evaluate the effect of acetylsalicylic acid (ASA) administration on the results from both methods. Methods: Forty-five clinically healthy dogs (12 Cavalier King Charles Spaniels [CKCS], 12 …) … applied. However, the importance of these breed differences remains to be investigated. The PFA-100 method with Col + Epi as agonists, and ADP-induced platelet aggregation, appear to be sensitive to ASA in dogs.
International Nuclear Information System (INIS)
Khrapko, R.I.
1985-01-01
A uniform description of various path-dependent functions is presented with the help of an expansion of the type of the Taylor series. So-called ''path-integrals'' and ''path-tensors'' are introduced, which are systems of many-component quantities whose values are defined for arbitrary paths in a coordinated region of space in such a way that they contain complete information on the path. These constructions are considered as elementary path-dependent functions and are used instead of power monomials in the usual Taylor series. Coefficients of such an expansion are interpreted as partial derivatives dependent on the order of the differentiations, or else as nonstandard covariant derivatives called two-point derivatives. Some examples of path-dependent functions are presented. A space curvature tensor is considered whose geometrical properties are determined by the (non-transitive) translator of parallel transport of a general type. A covariant operation leading to the ''extension'' of tensor fields is pointed out.
DEFF Research Database (Denmark)
Rumessen, J J; Gudmand-Høyer, E
1988-01-01
Twenty-five patients with functional bowel disease were given fructose, sorbitol, fructose-sorbitol mixtures, and sucrose. The occurrence of malabsorption was evaluated by means of hydrogen breath tests, and the gastrointestinal symptoms, if any, were recorded. One patient could not be evaluated because of lack of H2 production. Based on a cut-off level of a 10 ppm rise in H2 concentration, malabsorption was apparent in 13 patients, in 7 of whom the calculated absorption capacities were below 15 g. In contrast, in patients given 50 g of sucrose, malabsorption could not be detected. Ingestion … with functional bowel disease. The findings may have direct influence on the dietary guidance given to a major group of patients with functional bowel disease and may make it possible to define separate entities in this disease complex.
Functional digital subtraction sialography
International Nuclear Information System (INIS)
Yudin, L.; Saidkarimova, I.
1994-01-01
The proposed method of functional digital subtraction sialography makes it possible to reduce the amount of contrast substance used, in comparison with conventional sialography. Processing of the obtained image during different phases of the contrast visualisation of the gland allows detecting and recording eventual functional disorders. A clearer visualization of subtracted contrast images against the background of the eliminated bone structures is attained. A total of 37 patients presenting various diseases of the salivary glands are covered in this study. The procedure applied contributes greatly to improving the quality of the diagnostic information afforded by the contrast image of the salivary glands, at reduced radiation exposure and shorter examination time. An essential advantage is the possibility of disclosing functional and morphological changes, especially in the early phases of the disease. 11 refs. (orig.)
DEFF Research Database (Denmark)
Glaveanu, Vlad Petre
2014-01-01
… of their manifold functions, integrating aesthetic and utilitarian, individual and social roles. Ornaments help us to identify and locate, tell or communicate, remind and organise our action; they guide our attention, express and individualise, can generate an experience, beautify as well as re-present. These functions are illustrated with examples from a study of Easter egg decoration practices in northern Romania. In the end, the 'meta-function' of emergence is discussed and consideration is given to the spatial and temporal contexts of ornaments. Future opportunities for theorising ornamentation …
Sarason, Donald
2007-01-01
Complex Function Theory is a concise and rigorous introduction to the theory of functions of a complex variable. Written in a classical style, it is in the spirit of the books by Ahlfors and by Saks and Zygmund. Being designed for a one-semester course, it is much shorter than many of the standard texts. Sarason covers the basic material through Cauchy's theorem and applications, plus the Riemann mapping theorem. It is suitable for either an introductory graduate course or an undergraduate course for students with adequate preparation. The first edition was published with the title Notes on Co
International Nuclear Information System (INIS)
Virchaux, M.
1992-11-01
The present status of experimental measurements of the nucleon structure functions is reviewed. The results from nearly all deep inelastic experiments are in good mutual agreement. Principles of the analysis of these structure function data in the framework of QCD are described. The specific features of the perturbative QCD predictions are observed in the data. This provides quantitative tests of the validity of QCD as well as determinations of the various parton distributions in the nucleon and some of the most precise measurements of the strong coupling constant αs. The future of this field of experimental physics is sketched
Harmonic supergraphs. Green functions
International Nuclear Information System (INIS)
Galperin, A.; Ivanov, E.; Ogievetsky, V.; Sokatchev, E.
1985-01-01
The quantization procedure in the harmonic superspace approach is worked out. Harmonic distributions are introduced and are used to construct the analytic superspace delta-functions and the Green functions for the hypermultiplet and the N=2 Yang-Mills superfields. The gauge fixing is described and the relevant Faddeev-Popov ghosts are defined. The corresponding BRST transformations are found. The harmonic superspace quantization of the N=2 gauge theory turns out to be rather simple and has many parallels with that for the standard (N=0) Yang-Mills theory. In particular, no ghosts-for-ghosts are needed.
Chromatin Structure and Function
Wolffe, Alan P
1999-01-01
The Third Edition of Chromatin: Structure and Function brings the reader up-to-date with the remarkable progress in chromatin research over the past three years. It has been extensively rewritten to cover new material on chromatin remodeling, histone modification, nuclear compartmentalization, DNA methylation, and transcriptional co-activators and co-repressors. The book is written in a clear and concise fashion, with 60 new illustrations. Chromatin: Structure and Function provides the reader with a concise and coherent account of the nature, structure, and assembly of chromatin and its active
Unpolarized Structure Functions
International Nuclear Information System (INIS)
Christy, M.E.; Melnitchouk, W.
2011-01-01
Over the past decade measurements of unpolarized structure functions with unprecedented precision have significantly advanced our knowledge of nucleon structure. These have for the first time allowed quantitative tests of the phenomenon of quark-hadron duality, and provided a deeper understanding of the transition from hadron to quark degrees of freedom in inclusive scattering. Dedicated Rosenbluth-separation experiments have yielded high-precision transverse and longitudinal structure functions in regions previously unexplored, and new techniques have enabled the first glimpses of the structure of the free neutron, without contamination from nuclear effects.
International Nuclear Information System (INIS)
Kim, Jinsoo; Lee, Soojoon; Chi, Dong Pyo
2002-01-01
The limitation on the size of quantum computers makes it important to reuse qubits for auxiliary registers even though they are entangled with others and are occupied by other computational processes. We construct a quantum algorithm that performs the functional phase rotation, which is the generalized form of the conventional conditional phase transforms, using the functional evaluation oracle. The constructed algorithm works without any a priori knowledge of the state of an auxiliary register at the beginning and it recovers the initial state of an auxiliary register at the end. This provides ample scope to choose qubits for auxiliary registers at will. (author)
DEFF Research Database (Denmark)
Lorentzen, Jakob; Poulsen, Ingrid
2005-01-01
Early Functional Abilities (EFA): a scale for evaluating the course of treatment of severely brain-injured patients in connection with early rehabilitation. Purpose: To monitor and document the rehabilitation course for severely brain-injured patients whose level of function cannot yet be registered …
Wine and endothelial function.
Caimi, G; Carollo, C; Lo Presti, R
2003-01-01
In recent years many studies have focused on the well-known relationship between wine consumption and cardiovascular risk. Wine exerts its protective effects through various changes in lipoprotein profile, coagulation and fibrinolytic cascades, platelet aggregation, oxidative mechanisms and endothelial function. The last has earned more attention for its implications in atherogenesis. Endothelium regulates vascular tone by a delicate balance between vasorelaxing (nitric oxide [NO]) and vasoconstricting (endothelins) factors produced by endothelium in response to various stimuli. In rat models, wine and other grape derivatives exerted an endothelium-dependent vasorelaxing capacity especially associated with the NO-stimulating activity of their polyphenol components. In experimental conditions, resveratrol (a stilbene polyphenol) protected hearts and kidneys from ischemia-reperfusion injury through antioxidant activity and upregulation of NO production. Wine polyphenols are also able to induce the expression of genes involved in the NO pathway within the arterial wall. The effects of wine on endothelial function in humans are not yet clearly understood. A favorable action of red wine or dealcoholized wine extract or purple grape juice on endothelial function has been observed by several authors, but discrimination between ethanol and polyphenol effects is controversial. It is, however, likely that regular and prolonged moderate wine drinking positively affects endothelial function. The beneficial effects of wine on cardiovascular health are greater if wine is associated with a healthy diet. The most recent nutritional and epidemiologic studies show that the ideal diet closely resembles the Mediterranean diet.
DEFF Research Database (Denmark)
Bang, Anne Louise
2007-01-01
… sensing of fabrics in function. It is proposed that tactile and visual sensing of fabrics is a way to investigate and express emotional utility values. The further purpose is to use experiments with repertory grid models as part of the mapping of the entire research project and also as a basis …
DEFF Research Database (Denmark)
Sinden, Richard R.; E. Pearson, Christopher; N. Potaman, Vladimir
1998-01-01
This chapter discusses the structure and function of DNA. DNA occupies a critical role in cells, because it is the source of all intrinsic genetic information. Chemically, DNA is a very stable molecule, a characteristic important for a macromolecule that may have to persist in an intact form...
DEFF Research Database (Denmark)
Winther, Sally
… The first part of this thesis explores this by identifying and investigating two novel kinase regulators of brown adipocyte function. Study 1 demonstrates that spleen tyrosine kinase is a hitherto undescribed regulator of brown adipocyte differentiation and activation. Study 2 identifies glycogen synthase …
Deimling, Klaus
1985-01-01
topics. However, only a modest preliminary knowledge is needed. In the first chapter, where we introduce an important topological concept, the so-called topological degree for continuous maps from subsets of Rn into Rn, you need not know anything about functional analysis. Starting with Chapter 2, where infinite dimensions first appear, one should be familiar with the essential step of considering a sequence or a function of some sort as a point in the corresponding vector space of all such sequences or functions, whenever this abstraction is worthwhile. One should also work out the things which are proved in § 7 and accept certain basic principles of linear functional analysis quoted there for easier reference, until they are applied in later chapters. In other words, even the 'completely linear' sections which we have included for your convenience serve only as a vehicle for progress in nonlinearity. Another point that makes the text introductory is the use of an essentially uniform mathematical language …
Smith, Walter T., Jr.; Patterson, John M.
1984-01-01
Literature on analytical methods related to the functional groups of 17 chemical compounds is reviewed. These compounds include acids, acid azides, alcohols, aldehydes, ketones, amino acids, aromatic hydrocarbons, carbodiimides, carbohydrates, ethers, nitro compounds, nitrosamines, organometallic compounds, peroxides, phenols, silicon compounds,…
Empirical microeconomics action functionals
Baaquie, Belal E.; Du, Xin; Tanputraman, Winson
2015-06-01
A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal-time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).
Ligterink, N.E.
2007-01-01
Functional system dynamics is the analysis, modelling, and simulation of continuous systems usually described by partial differential equations. From the infinite degrees of freedom of such systems only a finite number of relevant variables have to be chosen for a practical model description. The proper input and output of the system are an important part of the relevant variables.
Objectification and Semiotic Function
Santi, George
2011-01-01
The objective of this paper is to study students' difficulties when they have to ascribe the same meaning to different representations of the same mathematical object. We address two theoretical tools that are at the core of Radford's cultural semiotic and Godino's onto-semiotic approaches: objectification and the semiotic function. The analysis…
Cobham recursive set functions
Czech Academy of Sciences Publication Activity Database
Beckmann, A.; Buss, S.; Friedman, S.-D.; Müller, M.; Thapen, Neil
2016-01-01
Roč. 167, č. 3 (2016), s. 335-369 ISSN 0168-0072 R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : set function * polynomial time * Cobham recursion Subject RIV: BA - General Mathematics Impact factor: 0.647, year: 2016 http://www.sciencedirect.com/science/article/pii/S0168007215001293
Directory of Open Access Journals (Sweden)
Bialynicki-Birula Iwo
2014-01-01
Full Text Available The original definition of the Wigner function can be extended in a natural manner to the relativistic domain in the framework of quantum field theory. Three such generalizations are described. They cover the cases of Dirac particles, the photon, and the full electromagnetic field.
Production Functions Behaving Badly
DEFF Research Database (Denmark)
Fredholm, Thomas
This paper reconsiders Anwar Shaikh's critique of the neoclassical theory of growth and distribution based on its use of aggregate production functions. This is done by reconstructing and extending Franklin M. Fisher's 1971 computer simulations, which Shaikh used to support his critique. Together …
Modifiable Combining Functions
Cohen, Paul; Shafer, Glenn; Shenoy, Prakash P.
2013-01-01
Modifiable combining functions are a synthesis of two common approaches to combining evidence. They offer many of the advantages of these approaches and avoid some disadvantages. Because they facilitate the acquisition, representation, explanation, and modification of knowledge about combinations of evidence, they are proposed as a tool for knowledge engineers who build systems that reason under uncertainty, not as a normative theory of evidence.
Grover, Madhusudan; Drossman, Douglas A
2010-10-01
Functional abdominal pain syndrome (FAPS) is a relatively less common functional gastrointestinal (GI) disorder defined by the presence of constant or frequently recurring abdominal pain that is not associated with eating, change in bowel habits, or menstrual periods (Drossman Gastroenterology 130:1377-1390, 2006), which points to a more centrally targeted (spinal and supraspinal) basis for the symptoms. However, FAPS is frequently confused with irritable bowel syndrome and other functional GI disorders in which abdominal pain is associated with eating and bowel movements. FAPS also differs from chronic abdominal pain associated with entities such as chronic pancreatitis or chronic inflammatory bowel disease, in which the pain is associated with peripherally acting factors (eg, gut inflammation or injury). Given the central contribution to the pain experience, concomitant psychosocial disturbances are common and strongly influence the clinical expression of FAPS, which also by definition is associated with loss of daily functioning. These factors make it critical to use a biopsychosocial construct to understand and manage FAPS, because gut-directed treatments are usually not successful in managing this condition.
Micrononcausal Euclidean wave functions
International Nuclear Information System (INIS)
Enatsu, H.; Takenaka, A.; Okazaki, M.
1978-01-01
A theory which describes the internal attributes of hadrons in terms of space-time wave functions is presented. In order to develop the theory on the basis of a rather realistic model, covariant wave equations are first derived for the deuteron, in which the coordinates of the centre of mass of the two nucleons can be defined unambiguously. Then the micro-noncausal behaviour of virtual mesons mediating between the two nucleons is expressed by means of wave functions depending only on the relative Euclidean coordinates with respect to the centre of mass of the two nucleons; the wave functions are assumed to obey the O(4) and SU(2) × SU(2) groups. The properties of the wave functions under space inversion, time reversal and particle-antiparticle conjugation are investigated. It is found that the internal attributes of the mesons, such as spin, isospin, strangeness, intrinsic parity, charge parity and G-parity, are explained consistently. The theory is also applicable to the case of baryons.
Functional ingredients from microalgae
Buono, S.; Langellotti, A.L.; Martello, A.; Rinna, F.; Fogliano, V.
2014-01-01
A wide variety of natural sources are under investigation to evaluate their possible use for new functional ingredient formulation. Some records attest to the traditional and ancient use of wild-harvested microalgae as human food, but their cultivation for different purposes started about 40 years ago.
Update on Functional Heartburn
Yamasaki, Takahisa; O’Neil, Jessica
2017-01-01
The definition of functional heartburn has been refined over the years. It is currently described, based upon Rome IV criteria, as typical heartburn symptoms in the presence of normal upper endoscopy findings (including normal biopsies), normal esophageal pH testing, and a negative association between symptoms and reflux events. Functional heartburn is very common, affecting women more than men, and with reflux hypersensitivity makes up the majority of heartburn patients who fail twice-daily proton pump inhibitor therapy. These disorders overlap with other functional gastrointestinal disorders and are often accompanied by psychological comorbidities. Diagnosis is made by using endoscopy with esophageal biopsies, wireless pH capsule, pH-impedance monitoring, and high-resolution esophageal manometry. Additional diagnostic tools that may be of value include magnification endoscopy, chromoendoscopy, narrow-band imaging, autofluorescence imaging, mucosal impedance, impedance baseline values, and histopathology scores. Functional heartburn is primarily treated with neuromodulators. Psychological intervention and complementary and alternative medicine may also play important roles in the treatment of these patients. PMID:29339948
Efficient Generic Functional Programming
Alimarine, A.; Smetsers, J.E.W.
2005-01-01
Generic functions are defined by induction on the structural representation of types. As a consequence, by defining just a single generic operation, one acquires this operation over any particular data type. An instance on a specific type is generated by interpretation of the type's structure. A
Alimarine, A.; Smetsers, J.E.W.
2004-01-01
Generic functions are defined by induction on the structural representation of types. As a consequence, by defining just a single generic operation, one acquires this operation over any particular type. An instance on a specific type is generated by interpretation of the type's structure. A direct
Logarithmic-function generator
Caron, P. R.
1975-01-01
The solid-state logarithmic-function generator is compact and provides improved accuracy. The generator includes a stable multivibrator feeding into an RC circuit. The resulting exponentially decaying voltage is compared with the input signal. The generator output is proportional to the time required for the exponential voltage to decay from a preset reference level to the level of the input signal.
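The timing principle behind such a generator can be checked numerically: if a voltage decays as V(t) = V_ref·exp(-t/RC), the time taken to reach the input level V_in is RC·ln(V_ref/V_in), i.e. proportional to the logarithm of the input. A minimal sketch (the function name is illustrative, not from the report):

```python
import math

def decay_time(v_ref: float, v_in: float, rc: float) -> float:
    # Time for v_ref * exp(-t / rc) to decay down to v_in:
    # solving v_in = v_ref * exp(-t / rc) gives t = rc * ln(v_ref / v_in)
    return rc * math.log(v_ref / v_in)
```

Measuring this interval with a counter therefore yields an output linear in the logarithm of the input voltage.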
Functional Communication Training
Durand, V. Mark; Moskowitz, Lauren
2015-01-01
Thirty years ago, the first experimental demonstration was published showing that educators could improve significant challenging behavior in children with disabilities by replacing these behaviors with forms of communication that served the same purpose, a procedure called functional communication training (FCT). Since the publication of that…
Generalized elementary functions
Czech Academy of Sciences Publication Activity Database
Monteiro, Giselle Antunes; Slavík, A.
2014-01-01
Roč. 411, č. 2 (2014), s. 838-852 ISSN 0022-247X Institutional support: RVO:67985840 Keywords : elementary functions * Kurzweil-Stieltjes integral * generalized linear ordinary differential equations * time scale calculus Subject RIV: BA - General Mathematics Impact factor: 1.120, year: 2014 http://www.sciencedirect.com/science/article/pii/S0022247X13009141
Executive functions in synesthesia
Rouw, Romke; van Driel, Joram; Knip, Koen; Richard Ridderinkhof, K.
2013-01-01
In grapheme-color synesthesia, a number or letter can evoke two different and possibly conflicting (real and synesthetic) color sensations at the same time. In this study, we investigate the relationship between synesthesia and executive control functions. First, no general skill differences were found.
Mars Sample Handling Functionality
Meyer, M. A.; Mattingly, R. L.
2018-04-01
The final leg of a Mars Sample Return campaign would be an entity that we have referred to as Mars Returned Sample Handling (MRSH.) This talk will address our current view of the functional requirements on MRSH, focused on the Sample Receiving Facility (SRF).
Laboratory Density Functionals
Giraud, B. G.
2007-01-01
We compare several definitions of the density of a self-bound system, such as a nucleus, in relation with its center-of-mass zero-point motion. A trivial deconvolution relates the internal density to the density defined in the laboratory frame. This result is useful for the practical definition of density functionals.
Directory of Open Access Journals (Sweden)
Ramesh Kalindri
2014-06-01
Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they do not always succeed in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.
The functionality of creativity
Sligte, D.J.
2013-01-01
This dissertation is about the functionality of creativity. Why do people invest time and effort in being creative? Being creative is inherently risky, as you need to come up with something new that departs from what is already known, and there is a risk of ridicule, being singled out, or simply
Automatic Functional Harmonic Analysis
de Haas, W.B.; Magalhães, J.P.; Wiering, F.; Veltkamp, R.C.
2013-01-01
Music scholars have been studying tonal harmony intensively for centuries, yielding numerous theories and models. Unfortunately, a large number of these theories are formulated in a rather informal fashion and lack mathematical precision. In this article we present HarmTrace, a functional model of
Baranov G. V.
2016-01-01
the article reveals the importance of communication with the public in the implementation of human rights and the ideals of mankind; it characterizes the specificity of public relations in the information culture of belief; PR functions are explained according to the criterion of optimizing the activity of social interactions on the basis of cultural ideals.
Semiclassical multicomponent wave function
Mostovoy, M.V.
A consistent method for obtaining the semiclassical multicomponent wave function for any value of adiabatic parameter is discussed and illustrated by examining the motion of a neutral particle in a nonuniform magnetic field. The method generalizes the Bohr-Sommerfeld quantization rule to
Mapping functional connectivity
Peter Vogt; Joseph R. Ferrari; Todd R. Lookingbill; Robert H. Gardner; Kurt H. Riitters; Katarzyna Ostapowicz
2009-01-01
An objective and reliable assessment of wildlife movement is important in theoretical and applied ecology. The identification and mapping of landscape elements that may enhance functional connectivity is usually a subjective process based on visual interpretations of species movement patterns. New methods based on mathematical morphology provide a generic, flexible,...
Comparisons of power transfer functions and flow transfer functions
International Nuclear Information System (INIS)
Grimm, K.N.; Meneghetti, D.
1987-01-01
Transfer functions may be used to calculate component feedbacks or temperature increments by convolving the transfer function with the appropriate fractional change in a system quantity. Power-change transfer functions have been reported previously. The corresponding flow transfer functions for this case, and comparisons with the power transfer functions, are reported here. Results of feedback simulation of ramped flow transients using flow transfer functions are also described.
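The convolution step described above can be sketched numerically. The impulse-response samples and the fractional-change sequence below are hypothetical placeholders, not values from the report:

```python
import numpy as np

# Hypothetical impulse-response samples of a component transfer function
# (placeholder values, not data from the report)
h = np.array([0.5, 0.3, 0.15, 0.05])
# Fractional change in the system quantity (e.g. power or flow) per time step
delta = np.array([0.0, 0.1, 0.1, 0.1, 0.1])

# Component feedback / temperature increment as the discrete convolution
# of the transfer function with the fractional-change sequence
increment = np.convolve(h, delta)[:len(delta)]
```

Each output sample is the weighted sum of past fractional changes, which is exactly how a transfer function propagates a ramped transient into a feedback history.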
Completely monotonic functions related to logarithmic derivatives of entire functions
DEFF Research Database (Denmark)
Pedersen, Henrik Laurberg
2011-01-01
The logarithmic derivative l(x) of an entire function of genus p having only non-positive zeros is represented in terms of a Stieltjes function. As a consequence, (-1)^p (x^m l(x))^(m+p) is a completely monotonic function for all m ≥ 0. This generalizes earlier results on complete monotonicity of functions related to Euler's psi-function. Applications to Barnes' multiple gamma functions are given.
EXECUTIVE FUNCTIONING IN SCHIZOPHRENIA
Directory of Open Access Journals (Sweden)
Gricel eOrellana
2013-06-01
Full Text Available The executive function (EF) is a set of abilities which allows us to exert voluntary control over our behavioral responses. These functions enable human beings to develop and carry out plans, draw analogies, obey social rules, solve problems, adapt to unexpected circumstances, perform many tasks simultaneously, and locate episodes in time and place. EF includes divided and sustained attention, working memory, set-shifting, flexibility, planning, and the regulation of goal-directed behavior, and can be defined as a brain function underlying the human faculty to act or think not only in reaction to external events but also in relation to internal goals and states. EF is mostly associated with the dorsolateral prefrontal cortex (PFC). Besides EF, the PFC is involved in self-regulation of behavior, i.e. the ability to regulate behavior according to internal goals and constraints, particularly in less structured situations. Self-regulation of behavior is subtended by the ventromedial/orbital PFC. Impairment of EF is one of the most commonly observed deficits in schizophrenia across the various disease stages. Impairments in tasks measuring conceptualization, planning, cognitive flexibility, verbal fluency, the ability to solve complex problems, and working memory occur in schizophrenia. Disorders detected by executive tests are consistent with evidence from functional neuroimaging, which has shown PFC dysfunction in patients while performing these kinds of tasks. Patients with schizophrenia also exhibit deficits in odor identification, decision-making, and self-regulation of behavior, suggesting dysfunction of the orbital PFC. However, impairment in executive tests is explained mainly by dysfunction of prefronto-striato-thalamic, prefronto-parietal, and prefronto-temporal neural networks. Disorders in executive functions may be considered central to schizophrenia, and it has been suggested that negative symptoms may be explained by this executive dysfunction.
West, J. B.; Elliott, A. R.; Guy, H. J.; Prisk, G. K.
1997-01-01
The lung is exquisitely sensitive to gravity, and so it is of interest to know how its function is altered in the weightlessness of space. Studies on National Aeronautics and Space Administration (NASA) Spacelabs during the last 4 years have provided the first comprehensive data on the extensive changes in pulmonary function that occur in sustained microgravity. Measurements of pulmonary function were made on astronauts during space shuttle flights lasting 9 and 14 days and were compared with extensive ground-based measurements before and after the flights. Compared with preflight measurements, cardiac output increased by 18% during space flight, and stroke volume increased by 46%. Paradoxically, the increase in stroke volume occurred in the face of reductions in central venous pressure and circulating blood volume. Diffusing capacity increased by 28%, and the increase in the diffusing capacity of the alveolar membrane was unexpectedly large based on findings in normal gravity. The change in the alveolar membrane may reflect the effects of uniform filling of the pulmonary capillary bed. Distributions of blood flow and ventilation throughout the lung were more uniform in space, but some unevenness remained, indicating the importance of nongravitational factors. A surprising finding was that airway closing volume was approximately the same in microgravity and in normal gravity, emphasizing the importance of mechanical properties of the airways in determining whether they close. Residual volume was unexpectedly reduced by 18% in microgravity, possibly because of uniform alveolar expansion. The findings indicate that pulmonary function is greatly altered in microgravity, but none of the changes observed so far will apparently limit long-term space flight. In addition, the data help to clarify how gravity affects pulmonary function in the normal gravity environment on Earth.
Blair, Clancy
2017-01-01
Executive functions are thinking skills that assist with reasoning, planning, problem solving, and managing one's life. The brain areas that underlie these skills are interconnected with and influenced by activity in many different brain areas, some of which are associated with emotion and stress. One consequence of the stress-specific connections is that executive functions, which help us to organize our thinking, tend to be disrupted when stimulation is too high and we are stressed out, or too low when we are bored and lethargic. Given their central role in reasoning and also in managing stress and emotion, scientists have conducted studies, primarily with adults, to determine whether executive functions can be improved by training. By and large, results have shown that they can be, in part through computer-based videogame-like activities. Evidence of wider, more general benefits from such computer-based training, however, is mixed. Accordingly, scientists have reasoned that training will have wider benefits if it is implemented early, with very young children as the neural circuitry of executive functions is developing, and that it will be most effective if embedded in children's everyday activities. Evidence produced by this research, however, is also mixed. In sum, much remains to be learned about executive function training. Without question, however, continued research on this important topic will yield valuable information about cognitive development. WIREs Cogn Sci 2017, 8:e1403. doi: 10.1002/wcs.1403 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
International Nuclear Information System (INIS)
Scarpelli, M; Eickhoff, J; Perlman, S; Jeraj, R
2016-01-01
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions, and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
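The normalizing effect of the log transform described in this abstract can be illustrated on synthetic lognormal "SUV-like" data (a minimal sketch with made-up parameters, not the study's patient data): raw lognormal samples are strongly right-skewed, while their logarithms are symmetric with skewness near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "SUV-like" values: lognormal by construction (illustrative parameters)
suv = rng.lognormal(mean=0.5, sigma=0.6, size=500)

def skewness(x):
    # Sample skewness: third standardized central moment
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

skew_raw = skewness(suv)          # strongly positive: right-skewed raw values
skew_log = skewness(np.log(suv))  # near zero: lognormal -> normal after log
print(skew_raw, skew_log)
```

In the study itself the normality check used the Shapiro-Wilk statistic and a one-parameter Box-Cox family, of which the log transform is the zero-parameter limit; the skewness check above is only a lightweight stand-in for that procedure.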
Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading
Rabie, Khaled M.; Adebisi, Bamidele; Alouini, Mohamed-Slim
2017-01-01
, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can
Quasi-extended asymptotic functions
International Nuclear Information System (INIS)
Todorov, T.D.
1979-01-01
The class F of ''quasi-extended asymptotic functions'' is introduced. It contains all extended asymptotic functions as well as some new asymptotic functions very similar to the Schwartz distributions. On the other hand, any two quasi-extended asymptotic functions can be multiplied, in contrast to the Schwartz distributions; in particular, the square δ² of an asymptotic function δ, similar to Dirac's delta function, is constructed as an example.
Energy Technology Data Exchange (ETDEWEB)
Yuan, Rong [Univ. of California, Berkeley, CA (United States)
2007-01-01
Linear elastic fracture mechanics is widely used in industry because it establishes simple and explicit relationships between the permissible loading conditions and the critical crack size that is allowed in a structure. Stress intensity factors are the above-mentioned functional expressions that relate load to crack size through geometric functions or weight functions. Compliance functions are used to determine the crack/flaw size in a structure when optical inspection is inconvenient. As a result, geometric functions, weight functions and compliance functions have been intensively studied to determine stress intensity factor expressions for different geometries. However, the relations between these functions have received less attention. This work therefore investigates the intrinsic relationships between these functions. A theoretical derivation was carried out and the results were verified on a single-edge cracked plate under tension and bending. It is found that the geometric function is essentially the non-dimensional weight function at the loading point. The compliance function is composed of two parts: a varying part due to crack extension and a constant part from the intact structure if no crack exists. The derivative of the compliance function at any location is the product of the geometric function and the weight function at the evaluation point. Inversely, the compliance function can be acquired by integrating the product of the geometric function and the weight function with respect to the crack size. The integration constant is just the unchanging compliance of the intact structure. Consequently, a special application of these relations is to obtain the compliance functions along a crack once the geometric function and weight functions are known. Any of the three special functions can be derived once the other two are known. These relations may greatly simplify the numerical process of obtaining either geometric functions, weight
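The stated relation, that the derivative of the compliance function equals the product of the geometric and weight functions and that the integration constant is the intact-structure compliance, can be sketched numerically. The forms of g(a) and w(a) below are hypothetical placeholders, not the actual single-edge-cracked-plate expressions:

```python
import numpy as np

# Hypothetical geometric function g(a) and weight function w(a); illustrative
# shapes only, not the real single-edge-cracked-plate expressions.
def g(a):
    return 1.12 + 0.2 * a

def w(a):
    return 1.0 / np.sqrt(a)

a = np.linspace(0.01, 0.5, 2000)  # crack sizes (arbitrary units)
C0 = 1.0  # assumed compliance of the intact (uncracked) structure

# Compliance by cumulative trapezoidal integration: C(a) = C0 + ∫ g(a') w(a') da'
integrand = g(a) * w(a)
C = C0 + np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a)))
)

# Differentiating the compliance recovers the product g(a) * w(a)
dCda = np.gradient(C, a)
```

The check at the end mirrors the abstract's claim in both directions: integrating g·w (plus the constant C0) yields C, and differentiating C returns g·w.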
DEFF Research Database (Denmark)
2016-01-01
Renowned experts present the latest knowledge Although a very fragile structure, the skin barrier is probably one of the most important organs of the body. Inward/out it is responsible for body integrity and outward/in for keeping microbes, chemicals, and allergens from penetrating the skin. Since...... the role of barrier integrity in atopic dermatitis and the relationship to filaggrin mutations was discovered a decade ago, research focus has been on the skin barrier, and numerous new publications have become available. This book is an interdisciplinary update offering a wide range of information...... on the subject. It covers new basic research on skin markers, including results on filaggrin and on methods for the assessment of the barrier function. Biological variation and aspects of skin barrier function restoration are discussed as well. Further sections are dedicated to clinical implications of skin...
Cooper, Shaun
2017-01-01
Theta functions were studied extensively by Ramanujan. This book provides a systematic development of Ramanujan's results and extends them to a general theory. The author's treatment of the subject is comprehensive, providing a detailed study of theta functions and modular forms for levels up to 12. Aimed at advanced undergraduates, graduate students, and researchers, the organization, user-friendly presentation, and rich source of examples lend this book to serve as a useful reference, a pedagogical tool, and a stimulus for further research. Topics, especially those discussed in the second half of the book, have been the subject of much recent research, many of them appearing in book form for the first time. Further results are summarized in the numerous exercises at the end of each chapter.
Functional Esophageal Disorders.
Aziz, Qasim; Fass, Ronnie; Gyawali, C Prakash; Miwa, Hiroto; Pandolfino, John E; Zerbib, Frank
2016-02-15
Functional esophageal disorders consist of a disease category that presents with esophageal symptoms (heartburn, chest pain, dysphagia, globus) not explained by mechanical obstruction (stricture, tumor, eosinophilic esophagitis), major motor disorders (achalasia, EGJ outflow obstruction, absent contractility, distal esophageal spasm, jackhammer esophagus), or gastroesophageal reflux disease (GERD). While the responsible mechanisms are unclear, it is theorized that visceral hypersensitivity and hypervigilance play an important role in symptom generation, in the context of normal or borderline function. Treatments directed at improving borderline motor dysfunction or reducing reflux burden to sub-normal levels have limited success in symptom improvement. In contrast, strategies focused on modulating peripheral triggering and central perception are mechanistically viable and clinically meaningful. However, outcome data from these treatment options are limited. Future research needs to focus on understanding the mechanisms underlying visceral hypersensitivity and hypervigilance so that appropriate targets and therapies can be developed. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Das, M.P.
1984-07-01
The state of the art of the density functional formalism (DFT) is reviewed. The theory is quantum statistical in nature; its simplest version is the well-known Thomas-Fermi theory. The DFT is a powerful formalism in which one can treat the effect of interactions in inhomogeneous systems. After some introductory material, the DFT is outlined from the two basic theorems, and various generalizations of the theorems appropriate to several physical situations are pointed out. Next, various approximations to the density functionals are presented and some practical schemes, discussed; the approximations include an electron gas of almost constant density and an electron gas of slowly varying density. Then applications of DFT in various diverse areas of physics (atomic systems, plasmas, liquids, nuclear matter) are mentioned, and its strengths and weaknesses are pointed out. In conclusion, more recent developments of DFT are indicated
Differentiation of real functions
Bruckner, Andrew
1994-01-01
Topics related to the differentiation of real functions have received considerable attention during the last few decades. This book provides an efficient account of the present state of the subject. Bruckner addresses in detail the problems that arise when dealing with the class \\Delta ' of derivatives, a class that is difficult to handle for a number of reasons. Several generalized forms of differentiation have assumed importance in the solution of various problems. Some generalized derivatives are excellent substitutes for the ordinary derivative when the latter is not known to exist; others are not. Bruckner studies generalized derivatives and indicates "geometric" conditions that determine whether or not a generalized derivative will be a good substitute for the ordinary derivative. There are a number of classes of functions closely linked to differentiation theory, and these are examined in some detail. The book unifies many important results from the literature as well as some results not previously pub...
Functionalization of Carbon Nanotubes
Khare, Bishun N. (Inventor); Meyyappan, Meyya (Inventor)
2009-01-01
Method and system for functionalizing a collection of carbon nanotubes (CNTs). A selected precursor gas (e.g., H2 or F2 or CnHm) is irradiated to provide a cold plasma of selected target species particles, such as atomic H or F, in a first chamber. The target species particles are directed toward an array of CNTs located in a second chamber while suppressing transport of ultraviolet radiation to the second chamber. A CNT array is functionalized with the target species particles, at or below room temperature, to a point of saturation, in an exposure time interval no longer than about 30 sec. Discrimination against non-target species is provided by (1) use of a target species having a lifetime that is much greater than a lifetime of a non-target species and/or (2) use of an applied magnetic field to discriminate between charged particle trajectories for target species and for non-target species.
Functional abdominal pain syndrome.
Clouse, Ray E; Mayer, Emeran A; Aziz, Qasim; Drossman, Douglas A; Dumitrascu, Dan L; Mönnikes, Hubert; Naliboff, Bruce D
2006-04-01
Functional abdominal pain syndrome (FAPS) differs from the other functional bowel disorders; it is less common, symptoms largely are unrelated to food intake and defecation, and it has higher comorbidity with psychiatric disorders. The etiology and pathophysiology are incompletely understood. Because FAPS likely represents a heterogeneous group of disorders, peripheral neuropathic pain mechanisms, alterations in endogenous pain modulation systems, or both may be involved in any one patient. The diagnosis of FAPS is made on the basis of positive symptom criteria and a longstanding history of symptoms; in the absence of alarm symptoms, an extensive diagnostic evaluation is not required. Management is based on a therapeutic physician-patient relationship and empirical treatment algorithms using various classes of centrally acting drugs, including antidepressants and anticonvulsants. The choice, dose, and combination of drugs are influenced by psychiatric comorbidities. Psychological treatment options include psychotherapy, relaxation techniques, and hypnosis. Refractory FAPS patients may benefit from a multidisciplinary pain clinic approach.
DEFF Research Database (Denmark)
Sørensen, A S; Hansen, H; Høgenhaven, H
1988-01-01
Two groups of epilepsy patients (28 patients with temporal lobe epilepsy and 15 patients with primary generalized epilepsy) entered a study of personality traits related to epilepsy, based on a modification of Bellak's semistructured interview for assessment of ego strength. Two groups of subjects...... than 15 years when the disease began. The number of anticonvulsants administered did not influence the results. No difference on adaptive level of ego functioning was found between the group with primary generalized epilepsy and the group with temporal lobe epilepsy. Similarly, the temporal lobe...... served as controls: 15 patients with a non-neurological but relapsing disorder, psoriasis, and 15 healthy volunteers. Compared with the group of healthy volunteers, a decreased adaptive level of ego functioning was found in the epilepsy groups, regardless of seizure types and EEG findings, and...
DEFF Research Database (Denmark)
Bech-Larsen, Tino; Scholderer, Joachim
2007-01-01
reading of the main principles of the harmonized regulation COM/2003/0424, this situation is about to change. This article reviews the regulatory aspects, the results of consumer research and the marketing strategies regarding the use of health claims for functional foods in Europe, and it comments......The fact that the European markets for functional foods generally are less developed, compared to the US and the Japanese markets, has often been attributed to a restrictive and inconsistent health claim legislation in and between the European countries. With the European Parliament's second...... on the lack of correspondence between the new regulation and the marketing experiences and research as regard consumer reactions to health claims....
International Nuclear Information System (INIS)
Szasz, L.; Berrios-Pagan, I.; McGinn, G.
1975-01-01
A new density-functional formula is constructed for atoms. The kinetic energy of the electrons is divided into two parts: the kinetic self-energy and the orthogonalization energy. Calculations were made for the total energies of neutral atoms, positive ions, and the He isoelectronic series. For neutral atoms with N < 36 the results match the Hartree-Fock energies within 1%; for atoms with N > 36 the results generally match the HF energies within 0.1%. For positive ions the results are fair; for molecular applications a simplified model is developed in which the kinetic energy consists of the Weizsaecker term plus the Fermi energy reduced by a continuous function. (orig.)
Pancreatic exocrine function testing
International Nuclear Information System (INIS)
Goff, J.S.
1981-01-01
It is important to understand which pancreatic function tests are available and how to interpret them when evaluating patients with malabsorption. Available direct tests are the secretin stimulation test, the Lundh test meal, and measurement of serum or fecal enzymes. Indirect tests assess pancreatic exocrine function by measuring the effect of pancreatic secretion on various nutrients. These include triglycerides labeled with carbon 14, cobalamin labeled with cobalt 57 and cobalt 58, and para-aminobenzoic acid bound to a dipeptide. Of all these tests the secretin stimulation test is the most accurate and reliable if done by experienced personnel. However, the indirect tests are simpler to do and appear to be comparable to the secretin test at detecting pancreatic exocrine insufficiency. These indirect tests are becoming clinically available and clinicians should familiarize themselves with the strengths and weaknesses of each
International Nuclear Information System (INIS)
McCready, V.R.; Leach, M.; Ell, P.J.
1987-01-01
This volume is based on a series of lectures delivered at a one-day teaching symposium on functional and metabolic aspects of NMR measurements held at the Middlesex Hospital Medical School on 1st September 1985 as a part of the European Nuclear Medicine Society Congress. Currently the major emphasis in medical NMR in vivo is on its potential to image and display abnormalities in conventional radiological images, providing increased contrast between normal and abnormal tissue, improved definition of vasculature, and possibly an increased potential for differential diagnosis. Although these areas are undeniably of major importance, it is probable that NMR will continue to complement conventional measurement methods. The major potential benefits to be derived from in vivo NMR measurements are likely to arise from its use as an instrument for functional and metabolic studies in both clinical research and in the everyday management of patients. It is to this area that this volume is directed
Towards the Innovation Function
Directory of Open Access Journals (Sweden)
Paulo Antônio Zawislak
2008-12-01
Full Text Available This paper explores the main elements that influence innovation and the relationships among them. It is pointed out that innovation results from an entrepreneurial action inside an established institutional context, sustained by resources, abilities and competences and supported by the necessary financial capital. Therefore, it is proposed that innovation is a function (just as the microeconomic production function) composed of entrepreneurship, institutions, capabilities and capital. Each of these elements is explored individually, so that the relationships among them can then be analyzed. It is further suggested that the size of the firm moderates the relationship between these elements and innovation. The study's contribution is the development of a conceptual model.
Directory of Open Access Journals (Sweden)
Daniel A. Barone
2015-06-01
Full Text Available The importance of sleep can be ascertained by noting the effects of its loss, which tends to be chronic and partial, on cognition, mood, alertness, and overall health. Many theories have been put forth to explain the function of sleep in humans, including proposals based on energy conservation, ecological adaptations, neurocognitive function, neural plasticity, nervous system and physical health, and performance. Most account for only a portion of sleep behavior and few are based on strong experimental support. In this review, we present theories proposing why sleep is necessary and supporting data demonstrating the effects of inadequate sleep, with the intention of gleaning further information as to its necessity, which remains one of the most perplexing mysteries in biology.
Auroral electron distribution function
International Nuclear Information System (INIS)
Kaufmann, R.L.; Dusenbery, P.B.; Thomas, B.J.; Arnoldy, R.L.
1978-01-01
The electron velocity distribution function is presented in the energy range 25 eV 8 cm/s (E=300 eV) are nearly isotropic in pitch angle throughout the flight. Upgoing electrons show almost no pitch angle dependence beyond 120°, and their fluxes decline smoothly as energy increases, with little or no evidence of a plateau. Preliminary results of numerical integrations to study bulk properties and stability of the plasma are presented
LENUS (Irish Health Repository)
Lang, E E
2010-06-01
Vestibular symptoms of vertigo, dizziness and dysequilibrium are common complaints which can be disabling both physically and psychologically. Routine examination of the ear nose and throat and neurological system are often normal in these patients. An accurate history and thorough clinical examination can provide a diagnosis in the majority of patients. However, in a subgroup of patients, vestibular function testing may be invaluable in arriving at a correct diagnosis and ultimately in the optimal treatment of these patients.
Analyzing Pseudophosphatase Function.
Hinton, Shantá D
2016-01-01
Pseudophosphatases regulate signal transduction cascades, but their mechanisms of action remain enigmatic. Reflecting this mystery, the prototypical pseudophosphatase STYX (phospho-serine-threonine/tyrosine-binding protein) was named with allusion to the river of the dead in Greek mythology to emphasize that these molecules are "dead" phosphatases. Although proteins with STYX domains do not catalyze dephosphorylation, this in no way precludes their having other functions as integral elements of signaling networks. Thus, understanding their roles in signaling pathways may mark them as potential novel drug targets. This chapter outlines common strategies used to characterize the functions of pseudophosphatases, using as an example MK-STYX [mitogen-activated protein kinase (MAPK) phospho-serine-threonine/tyrosine binding], which has been linked to tumorigenesis, apoptosis, and neuronal differentiation. We start with the importance of "restoring" (when possible) phosphatase activity in a pseudophosphatase so that the active mutant may be used as a comparison control throughout immunoprecipitation and mass spectrometry analyses. To this end, we provide protocols for site-directed mutagenesis, mammalian cell transfection, co-immunoprecipitation, phosphatase activity assays, and immunoblotting that we have used to investigate MK-STYX and the active mutant MK-STYXactive. We also highlight the importance of utilizing RNA interference (RNAi) "knockdown" technology to determine a cellular phenotype in various cell lines. Therefore, we outline our protocols for introducing short hairpin RNA (shRNA) expression plasmids into mammalian cells and quantifying knockdown of gene expression with real-time quantitative PCR (qPCR). A combination of cellular, molecular, biochemical, and proteomic techniques has served as powerful tools in identifying novel functions of the pseudophosphatase MK-STYX. Likewise, the information provided here should be a helpful guide to elucidating the
Cybernetic functioning in stuttering
Directory of Open Access Journals (Sweden)
Ursula Zsilavecz
1981-11-01
Full Text Available The aim of this study was to evaluate different kinds of masking noise and DAF, in order to identify the condition which would elicit the highest incidence of fluency in a group of stutterers. The study demonstrates that masking noise and DAF can be effectively applied as an aid in a therapy programme, viz. noise can effectively be put to use so as to encourage and reinforce somesthesia. Stuttering is viewed as defective functioning in the cybernetic system.
Function integrated track system
Hohnecker, Eberhard
2010-01-01
The paper discusses a function integrated track system that focuses on the reduction of acoustic emissions from railway lines. It is shown that the combination of an embedded rail system (ERS), a sound absorbing track surface, and an integrated mini sound barrier has significant acoustic advantages compared to a standard ballast superstructure. The acoustic advantages of an embedded rail system are particularly pronounced in the case of railway bridges. Finally, it is shown that a...
On hereditarily rational functions
Nowak, Krzysztof Jan
2012-01-01
In this paper, we give a short proof of a theorem by Kollár on hereditarily rational functions. This is an answer to his appeal to find an elementary proof which does not rely so much on resolution of singularities. Our approach does not make use of desingularization techniques. Instead, we apply a stronger version of the Łojasiewicz inequality. Moreover, this allows us to sharpen Kollár's theorem.
Generalised twisted partition functions
Petkova, V B
2001-01-01
We consider the set of partition functions that result from the insertion of twist operators compatible with conformal invariance in a given 2D Conformal Field Theory (CFT). A consistency equation, which gives a classification of twists, is written and solved in particular cases. This generalises old results on twisted torus boundary conditions, gives a physical interpretation of Ocneanu's algebraic construction, and might offer a new route to the study of properties of CFT.
Network Function Virtualisation
Aakarshan Singh; Kamal Grover; Palak Bansal; Taranveer Singh Seekhon
2017-01-01
This paper gives basic knowledge of Network Function Virtualisation (NFV) in network systems and collates the work done on NFV to date. It describes how the challenges faced by industry led to NFV, what NFV means, and the NFV architecture model. It also explains how the NFV infrastructure is managed and the forwarding path along which packets traverse in NFV. The relationship of NFV with SDN and current ongoing research on NFV policies are discussed.
Dynamics of cholinergic function
International Nuclear Information System (INIS)
Hanin, I.
1986-01-01
This book presents information on the following topics: cholinergic pathways - anatomy of the central nervous system; aging, DSAT and other clinical conditions; cholinergic pre- and post-synaptic receptors; acetylcholine release; cholinesterases, anticholinesterases and reactivators; acetylcholine synthesis, metabolism and precursors; second messenger mechanisms; interaction of acetylcholine with other neurotransmitter systems; cholinergic mechanisms in physiological function, including cardiovascular events; and neurotoxic agents and false transmitters
Oden, J Tinsley
2010-01-01
The textbook is designed to drive a crash course for beginning graduate students majoring in something besides mathematics, introducing mathematical foundations that lead to classical results in functional analysis. More specifically, Oden and Demkowicz want to prepare students to learn the variational theory of partial differential equations, distributions, and Sobolev spaces and numerical analysis with an emphasis on finite element methods. The 1996 first edition has been used in a rather intensive two-semester course. -Book News, June 2010
Choice Probability Generating Functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications....
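For the multinomial logit special case (an assumed illustrative instance, not the paper's general MEV construction), the CPGF is the log-sum-exp of the systematic utilities, and its gradient reproduces the familiar logit choice probabilities, as the abstract states:

```python
import numpy as np

def cpgf(u):
    # Log-sum-exp: the choice-probability generating function of the
    # multinomial logit model, stabilized against overflow.
    m = u.max()
    return m + np.log(np.exp(u - m).sum())

def grad_cpgf(u, eps=1e-6):
    # Central-difference gradient of the CPGF; each component is the
    # probability of choosing the corresponding alternative.
    grad = np.zeros_like(u)
    for i in range(u.size):
        d = np.zeros_like(u)
        d[i] = eps
        grad[i] = (cpgf(u + d) - cpgf(u - d)) / (2 * eps)
    return grad

u = np.array([1.0, 2.0, 0.5])  # systematic utilities of three alternatives
p = grad_cpgf(u)               # gradient -> choice probabilities
softmax = np.exp(u) / np.exp(u).sum()  # closed-form logit probabilities
```

The gradient components are nonnegative and sum to one, matching the closed-form softmax, which is the defining CPGF property specialized to the logit ARUM.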
[Endocrine function in obesity].
Álvarez-Castro, Paula; Sangiao-Alvarellos, Susana; Brandón-Sandá, Iria; Cordido, Fernando
2011-10-01
Obesity is associated with significant disturbances in endocrine function. Hyperinsulinemia and insulin resistance are the best known changes in obesity, but their mechanisms and clinical significance are not clearly established. Adipose tissue is considered to be a hormone-secreting endocrine organ, and increased leptin secretion from the adipocyte, a satiety signal, is a well-established endocrine change in obesity. In obesity there is decreased GH secretion. Impairment of somatotropic function in obesity is functional and may be reversed in certain circumstances. The pathophysiological mechanism responsible for low GH secretion in obesity is probably multifactorial. There are many data suggesting that a chronic state of somatostatin hypersecretion results in inhibition of GH release. Increased FFA levels, as well as deficient ghrelin secretion, probably contribute to the impaired GH secretion. In women, abdominal obesity is associated with hyperandrogenism and low sex hormone-binding globulin levels. Obese men, particularly those with morbid obesity, have decreased testosterone and gonadotropin levels. Obesity is associated with an increased cortisol production rate, which is compensated for by a higher cortisol clearance, resulting in plasma free cortisol levels that do not change when body weight increases. Ghrelin is the only known circulating orexigenic factor, and has been found to be decreased in obese people. In obesity there is also a trend toward increased TSH and free T3 levels. Copyright © 2011 SEEN. Published by Elsevier España. All rights reserved.
Orexins and gastrointestinal functions.
Baccari, M C
2010-03-01
Orexin A (OXA) and orexin B (OXB) are recently discovered neuropeptides that appear to play a role in various distinct functions, such as arousal and the sleep-wake cycle, as well as in appetite and the regulation of feeding and energy homeostasis. Orexins were first described as neuropeptides expressed by a specific population of neurons in the lateral hypothalamic area, a region classically implicated in feeding behaviour. Orexin neurons project to numerous brain regions, where orexin receptors have been shown to be widely distributed: both OXA and OXB act through two receptor subtypes (OX1R and OX2R) that belong to the G protein-coupled receptor superfamily. Growing evidence indicates that orexins also act in the central nervous system to regulate gastrointestinal functions: animal studies have demonstrated that centrally injected orexins, or orexins endogenously released in the brain, stimulate gastric secretion and influence gastrointestinal motility. The subsequent identification of orexins and their receptors in the enteric nervous system (including the myenteric and submucosal plexuses) as well as in mucosa and smooth muscle has suggested that these neuropeptides may also act locally. Consistent with this view, emerging studies indicate that orexins exert region-specific contractile or relaxant effects on isolated gut preparations. The aim of this review is to summarize both centrally and peripherally mediated actions of orexins on gastrointestinal functions and to discuss their physiological role on the basis of the most recent findings.
Directory of Open Access Journals (Sweden)
John W. Erdman
2015-10-01
Full Text Available Lutein is one of the most prevalent carotenoids in nature and in the human diet. Together with zeaxanthin, it is highly concentrated as macular pigment in the foveal retina of primates, attenuating blue light exposure, providing protection from photo-oxidation and enhancing visual performance. Recently, interest in lutein has expanded beyond the retina to its possible contributions to brain development and function. Only primates accumulate lutein within the brain, but little is known about its distribution or physiological role. Our team has begun to utilize the rhesus macaque (Macaca mulatta) model to study the uptake and bio-localization of lutein in the brain. Our overall goal has been to assess the association of lutein localization with brain function. In this review, we will first cover the evolution of the non-human primate model for lutein and brain studies, discuss prior association studies of lutein with retina and brain function, and review approaches that can be used to localize brain lutein. We also describe our approach to the biosynthesis of 13C-lutein, which will allow investigation of lutein flux, localization, metabolism and pharmacokinetics. Lastly, we describe potential future research opportunities.
Directory of Open Access Journals (Sweden)
G.R. Finnie
1997-05-01
Full Text Available Accurate estimation of the size and development effort of software projects requires estimation models that can be used early enough in the development life cycle to be of practical value. Function Point Analysis (FPA) has become possibly the most widely used estimation technique in practice. However, the technique was developed in the data processing environment of the 1970s and, despite undergoing considerable reassessment and formalisation, still attracts criticism for the weighting scheme it employs and for the way in which the function point score is adjusted for specific system characteristics. This paper reviews the validity of the weighting scheme and the value of adjusting for system characteristics by studying their effect in a sample of 299 software developments. In general, the value adjustment scheme does not appear to account for differences in productivity. The weighting scheme used to score system components as simple, average or complex also appears suspect and should be redesigned to provide a more realistic estimate of system functionality.
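The weighting and adjustment machinery under review can be sketched as follows. The component weights and the 0.65 + 0.01·ΣGSC formula below are the classic IFPUG conventions (average-complexity weights, 14 general system characteristics rated 0–5), used purely for illustration; they are not claimed to match the paper's sample:

```python
# Illustrative sketch of a Function Point count (IFPUG-style conventions).
# "Average" complexity weights for the five component types.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """counts: dict mapping component type -> number of average-complexity items."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

def value_adjustment_factor(gsc_scores):
    """gsc_scores: 14 general system characteristic ratings, each in 0..5."""
    assert len(gsc_scores) == 14 and all(0 <= s <= 5 for s in gsc_scores)
    return 0.65 + 0.01 * sum(gsc_scores)

def adjusted_fp(counts, gsc_scores):
    """Adjusted function points: UFP scaled by the value adjustment factor."""
    return unadjusted_fp(counts) * value_adjustment_factor(gsc_scores)

counts = {"EI": 10, "EO": 4, "EQ": 3, "ILF": 5, "EIF": 2}
ufp = unadjusted_fp(counts)                       # 136
afp = adjusted_fp(counts, [3] * 14)               # 136 * 1.07 = 145.52
```

The paper's criticism targets exactly these two knobs: the fixed component weights and the narrow ±35% range of the value adjustment factor.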
Hexagonalization of correlation functions
Energy Technology Data Exchange (ETDEWEB)
Fleury, Thiago [Instituto de Física Teórica, UNESP - Universidade Estadual Paulista, ICTP South American Institute for Fundamental Research, Rua Dr. Bento Teobaldo Ferraz 271, 01140-070, São Paulo, SP (Brazil)]; Komatsu, Shota [Perimeter Institute for Theoretical Physics, 31 Caroline St N, Waterloo, Ontario N2L 2Y5 (Canada)]
2017-01-30
We propose a nonperturbative framework to study general correlation functions of single-trace operators in N=4 supersymmetric Yang-Mills theory at large N. The basic strategy is to decompose them into fundamental building blocks called the hexagon form factors, which were introduced earlier to study structure constants using integrability. The decomposition is akin to a triangulation of a Riemann surface, and we thus call it hexagonalization. We propose a set of rules to glue the hexagons together based on symmetry, which naturally incorporate the dependence on the conformal and the R-symmetry cross ratios. Our method is conceptually different from the conventional operator product expansion and automatically takes into account multi-trace operators exchanged in OPE channels. To illustrate the idea in simple set-ups, we compute four-point functions of BPS operators of arbitrary lengths and correlation functions of one Konishi operator and three short BPS operators, all at one loop. In all cases, the results are in perfect agreement with the perturbative data. We also suggest that our method can be a useful tool to study conformal integrals, and show it explicitly for the case of ladder integrals.
Carbasugars: Synthesis and Functions
Kobayashi, Yoshiyuki
It is well recognized that glycosidase inhibitors are not only tools to elucidate the mechanisms of living systems manipulated by glycoconjugates but also potential clinical drugs and insecticides, acting by inducing the failure of glycoconjugates to perform their function. In this chapter, the syntheses and functions of natural glycosidase inhibitors (cyclophelitol, allosamidine, and trehazoilin), which possess highly oxygenated and functionalized cyclohexanes or cyclopentanes in their structures and are defined as carbasugars, and the structure-activity relationships (SAR) of their derivatives are described. Much attention has recently been focused on neuraminidase inhibitors as anti-influenza drugs since Relenza, which was derived from sialic acid, and Tamiflu, an artificial carbasugar designed as a transition-state analogue of substrate hydrolysis by neuraminidase, were launched on the market. Herein, the medicinal chemistry efforts to discover Tamiflu and some efficient syntheses applicable to process chemistry are described. Finally, useful synthetic methodologies for carbasugar formation from sugars are also introduced in this chapter.
[Functional hypothalamic amenorrhea].
Stárka, Luboslav; Dušková, Michaela
2015-10-01
Functional hypothalamic amenorrhea (FHA), besides pregnancy and polycystic ovary syndrome, is one of the most common causes of secondary amenorrhea. FHA results from aberrations in pulsatile gonadotropin-releasing hormone (GnRH) secretion, which in turn impairs secretion of the gonadotropins (follicle-stimulating hormone and luteinizing hormone). FHA is a form of defence of the organism in situations where vital functions are more important than reproductive function. FHA is reversible; it can normalize after the stressful situation ceases. There are three types of FHA: weight loss-related, stress-related, and exercise-related amenorrhea. The final consequences are complex hormonal changes manifested by profound hypoestrogenism. Additionally, these patients present mild hypercortisolemia, low serum insulin levels, low insulin-like growth factor 1 (IGF-1) and low total triiodothyronine. Women's health in this disorder is disturbed in several respects, including the skeletal system, the cardiovascular system, and mental health. Patients manifest a decrease in bone mineral density, which is related to an increased fracture risk; osteopenia and osteoporosis are therefore the main long-term complications of FHA. Cardiovascular complications include endothelial dysfunction and abnormal changes in the lipid profile. FHA patients present significantly more depression, anxiety, and sexual problems than healthy subjects.
Laurberg, Peter; Knudsen, Nils; Andersen, Stig; Carlé, Allan; Pedersen, Inge Bülow; Karmisholt, Jesper
2012-10-01
An important interaction exists between thyroid function, weight control, and obesity. Several mechanisms seem to be involved, and in studies of groups of people the pattern of thyroid function tests depends on the balance of obesity and underlying thyroid disease in the cohort studied. Obese people with a normal thyroid gland tend to have activation of the hypothalamic-pituitary-thyroid axis, with higher serum TSH and thyroid hormone levels. On the other hand, small differences in thyroid function are associated with up to 5 kg difference in body weight. The weight loss after therapy of overt hypothyroidism is caused by excretion of water bound in tissues (myxoedema). Many patients treated for hyperthyroidism gain more weight than they lost during the active phase of the disease. The mechanism for this excessive weight gain has not been fully elucidated. New studies on the relation between L-T3 therapy and weight control are discussed. The interaction between weight control and therapy of thyroid disease is important to many patients and should be studied in more detail.
[Chewing and cognitive function].
Hirano, Yoshiyuki; Onozuka, Minoru
2014-01-01
Chewing not only crushes food to aid swallowing and digestion; it also helps to relieve stress and regulate cognitive functions, including alertness and executive function. It is well known that chewing gum is used to prevent sleepiness during work, learning, and driving. In addition, it has been shown in the elderly that a decrease in the number of residual teeth is related to dementia onset. These findings suggest a link between chewing and the maintenance of memory and attention. Recently, many studies of the effects of chewing on memory and attention have been conducted using functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). When a working memory task was used, the middle frontal gyrus in the dorsolateral prefrontal cortex showed greater activation, in addition to higher alertness, after chewing. Furthermore, in an attentional network test, reaction time shortened, and the anterior cingulate cortex and left frontal gyrus were both activated for the executive network. These results suggest that chewing elevates alertness, consequently leading to improvements in cognitive performance. In this review, we introduce findings concerning the effects of chewing on cognitive performance and discuss the neuronal mechanisms underlying these effects.
Mitochondria and Endothelial Function
Kluge, Matthew A.; Fetterman, Jessica L.; Vita, Joseph A.
2013-01-01
In contrast to their role in other cell types with higher energy demands, mitochondria in endothelial cells primarily function in signaling cellular responses to environmental cues. This article provides an overview of key aspects of mitochondrial biology in endothelial cells, including subcellular location, biogenesis, dynamics, autophagy, ROS production and signaling, calcium homeostasis, regulated cell death, and heme biosynthesis. In each section, we introduce key concepts and then review studies showing the importance of that mechanism to endothelial control of vasomotor tone, angiogenesis, and inflammatory activation. We particularly highlight the small number of clinical and translational studies that have investigated each mechanism in human subjects. Finally, we review interventions that target different aspects of mitochondrial function and their effects on endothelial function. The ultimate goal of such research is the identification of new approaches for therapy. The reviewed studies make it clear that mitochondria are important in endothelial physiology and pathophysiology. A great deal of work will be needed, however, before mitochondria-directed therapies are available for the prevention and treatment of cardiovascular disease. PMID:23580773
A functional RG equation for the c-function
DEFF Research Database (Denmark)
Codello, A.; D'Odorico, G.; Pagani, C.
2014-01-01
, local potential approximation and loop expansion. In each case we construct the relative approximate c-function and find it to be consistent with Zamolodchikov's c-theorem. Finally, we present a relation between the c-function and the (matter induced) beta function of Newton's constant, allowing us...... to use heat kernel techniques to compute the RG running of the c-function....
Logarithmically completely monotonic functions involving the Generalized Gamma Function
Directory of Open Access Journals (Sweden)
Faton Merovci
2010-12-01
Full Text Available By a simple approach, two classes of functions involving a generalization of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.
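For reference, the two monotonicity notions in this abstract can be stated precisely (standard definitions, not taken from the paper itself). A function $f:(0,\infty)\to(0,\infty)$ is:

```latex
\text{completely monotonic:}\qquad (-1)^{n} f^{(n)}(x) \ge 0,
\qquad x > 0,\; n = 0, 1, 2, \dots

\text{logarithmically completely monotonic:}\qquad
(-1)^{n} \bigl[\ln f(x)\bigr]^{(n)} \ge 0,
\qquad x > 0,\; n = 1, 2, 3, \dots
```

Every logarithmically completely monotonic function is completely monotonic, so the logarithmic property is the stronger of the two, which is why it is the one proved for the gamma-function classes.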
Strange functions in real analysis
Kharazishvili, AB
2005-01-01
Weierstrass and Blancmange nowhere differentiable functions, Lebesgue integrable functions with everywhere divergent Fourier series, and various nonintegrable Lebesgue measurable functions. While dubbed strange or "pathological," these functions are ubiquitous throughout mathematics and play an important role in analysis, not only as counterexamples to seemingly true and natural statements, but also as stimuli for the further development of real analysis. Strange Functions in Real Analysis explores a number of important examples and constructions of pathological functions. After introducing the basic concepts, the author begins with Cantor- and Peano-type functions, then moves to functions whose constructions require essentially noneffective methods. These include functions without the Baire property, functions associated with a Hamel basis of the real line, and Sierpinski-Zygmund functions that are discontinuous on each subset of the real line having the cardinality of the continuum. Finally, he considers e...
[Functional pathophysiology of consciousness].
Jellinger, Kurt A
2009-01-01
Consciousness (Latin conscientia, "moral conscience"), according to the English philosopher John Locke (1632-1704) [103], is the awareness of all that occurs in the mind of a person, whereas the American philosopher John Searle (2000) defined it as "inner qualitative, subjective states and processes of awareness". In modern science it is defined as a continuous state of full awareness of the Self and one's relationship to the external and internal environment, describing the degree of wakefulness in which an organism recognizes stimuli. This widely discussed biological term for the complex neuronal processes that allow an individual to recognize itself and its environment and to act accordingly has been and still is the subject of much research in philosophy and the natural sciences/neuroscience; the term is often also used for awareness and recognition. While the Egyptians, in the Edwin Smith papyrus, already recognized the brain as the seat of consciousness, René Descartes (1644 [36]) believed its seat to be "a small gland in the middle"; the anatomical structures and physiological processes involved in consciousness were elucidated only in the middle of the 20th century. Neuronal substrates include several functional networks that are hierarchically organized and cooperate functionally. The lowest level is the mesencephalic formatio reticularis and its projections to the thalamus, identified as the ascending reticular activating system (ARAS) by the classical experiments of Moruzzi and Magoun; later analyses of patients with impaired consciousness provided further insights. The mesencephalic ARAS, as the driver of higher structures, projects 1. via the reticular thalamus diffusely to the cortex, 2. via the hypothalamus to the basal forebrain and limbic system, and 3. to the medial raphe of the brainstem and locus coeruleus and their diffuse cortical projections. The reticular system is stimulated directly and indirectly via numerous collaterals...
Microbial Functional Gene Diversity Predicts Groundwater Contamination and Ecosystem Functioning.
He, Zhili; Zhang, Ping; Wu, Linwei; Rocha, Andrea M; Tu, Qichao; Shi, Zhou; Wu, Bo; Qin, Yujia; Wang, Jianjun; Yan, Qingyun; Curtis, Daniel; Ning, Daliang; Van Nostrand, Joy D; Wu, Liyou; Yang, Yunfeng; Elias, Dwayne A; Watson, David B; Adams, Michael W W; Fields, Matthew W; Alm, Eric J; Hazen, Terry C; Adams, Paul D; Arkin, Adam P; Zhou, Jizhong
2018-02-20
Contamination from anthropogenic activities has significantly impacted Earth's biosphere. However, knowledge about how environmental contamination affects the biodiversity of groundwater microbiomes and ecosystem functioning remains very limited. Here, we used a comprehensive functional gene array to analyze groundwater microbiomes from 69 wells at the Oak Ridge Field Research Center (Oak Ridge, TN), representing a wide pH range and uranium, nitrate, and other contaminants. We hypothesized that the functional diversity of groundwater microbiomes would decrease as environmental contamination (e.g., uranium or nitrate) increased or at low or high pH, while some specific populations capable of utilizing or resistant to those contaminants would increase, and thus, such key microbial functional genes and/or populations could be used to predict groundwater contamination and ecosystem functioning. Our results indicated that functional richness/diversity decreased as uranium (but not nitrate) increased in groundwater. In addition, about 5.9% of the specific key functional populations targeted by a comprehensive functional gene array (GeoChip 5) increased significantly, supporting their use to predict groundwater contamination and ecosystem functioning. This study indicates great potential for using microbial functional genes to predict environmental contamination and ecosystem functioning. IMPORTANCE Disentangling the relationships between biodiversity and ecosystem functioning is an important but poorly understood topic in ecology. Predicting ecosystem functioning on the basis of biodiversity is even more difficult, particularly with microbial biomarkers. As an exploratory effort, this study used key microbial functional genes as biomarkers to provide predictive understanding of environmental contamination and ecosystem functioning. The results indicated that the overall functional gene richness/diversity decreased as uranium increased in groundwater, while specific key microbial guilds increased significantly as...
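The richness/diversity metrics reported in studies like this reduce to simple computations on per-well signal profiles. A minimal sketch (the signal values and zero-detection threshold below are hypothetical, not GeoChip 5 specifics):

```python
import math

def functional_richness(signals, threshold=0.0):
    """Number of functional genes detected above a signal threshold."""
    return sum(1 for s in signals if s > threshold)

def shannon_diversity(signals):
    """Shannon index H' computed from relative signal intensities."""
    detected = [s for s in signals if s > 0]
    total = sum(detected)
    return -sum((s / total) * math.log(s / total) for s in detected)

# Hypothetical signal intensities for the probes in one well (not real data)
well = [0.0, 1.2, 3.4, 0.0, 2.2, 1.2]
richness = functional_richness(well)   # 4 genes detected
h_prime = shannon_diversity(well)
```

Regressing such per-well metrics against contaminant concentrations (e.g., uranium) is the kind of analysis behind the reported richness decline.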
ANALYSIS OF NIGERIAN HYDROMETEOROLOGICAL DATA
African Journals Online (AJOL)
Dr Obe
Tests on time homogeneity showed that the annual rainfall ... lognormal distribution should be adopted for predictions of annual rainfall at Port Harcourt and .... factors for normal probability distribution, ... functions of adjusted skewness.
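The lognormal recommendation amounts to fitting a normal distribution to log-transformed annual totals, since if rainfall is lognormal its logarithm is normal. A minimal sketch with the Python standard library; the rainfall figures are illustrative placeholders, not the Nigerian station data:

```python
import math
import statistics

def fit_lognormal(annual_rainfall):
    """Maximum-likelihood lognormal fit: normal parameters of the log data."""
    logs = [math.log(x) for x in annual_rainfall]
    mu = statistics.fmean(logs)
    sigma = statistics.pstdev(logs)   # MLE uses the population standard deviation
    return mu, sigma

def rainfall_quantile(mu, sigma, p):
    """p-quantile of the fitted lognormal, e.g. for return-period estimates."""
    return math.exp(mu + sigma * statistics.NormalDist().inv_cdf(p))

# Illustrative annual totals in mm
totals = [2100.0, 2450.0, 1980.0, 2300.0, 2700.0, 2250.0]
mu, sigma = fit_lognormal(totals)
median_estimate = rainfall_quantile(mu, sigma, 0.5)   # equals exp(mu)
```

The frequency factors mentioned in the snippet play the role of `inv_cdf(p)` here: a standard-normal quantile applied on the log scale.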
Neurophysiology of functional imaging.
van Eijsden, Pieter; Hyder, Fahmeed; Rothman, Douglas L; Shulman, Robert G
2009-05-01
The successes of PET and fMRI in non-invasively localizing sensory functions encouraged efforts to transform the subjective concepts of cognitive psychology into objective physical measures. The assumption was that mental functions could be decomposed into non-overlapping, context-independent modules operated on by separable areas of a computer-like brain. The failures of cognitive modularity and of a very localized phrenology are generally, but not universally, accepted; but in their place, and usually not distinguished from the original revolutionary hopes of clarification, experimental results are being interpreted in terms of rather flexible definitions of both cognitive concepts and the degree of localization. In an alternative approach, we have used fMRI, (13)C MRS, and electrophysiology measurements of brain energy to connect with observable properties of mental life (i.e., awareness). We illustrate this approach with a sensory stimulation experiment; the degree of localization found in BOLD signals was related to the global energy of the brain which, when manipulated by anesthetics, affected the degree of awareness. The influence of brain energy upon functional imaging maps is changing the interpretation of neuroimaging experiments: from psychological concepts generating computer-like responses to empirical responses dominated by the high brain energy and signaling at rest. In our view, "baseline" is an operational term, an adjective that defines a property of a state of the system before it is perturbed by a stimulus. Given the dependence of observable psychological properties upon the "baseline" energy, we believe that it is unnecessarily limiting to define a particular state as the baseline.
Expansions for Coulomb wave functions
Boersma, J.
1969-01-01
In this paper we derive a number of expansions for Whittaker functions and for regular and irregular Coulomb wave functions. The main result is a new expansion for the irregular Coulomb wave functions of orders zero and one in terms of regular Coulomb wave functions. The latter expansions are ...
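For context (standard definitions, not taken from the paper): the regular and irregular Coulomb wave functions $F_L(\eta,\rho)$ and $G_L(\eta,\rho)$ are the two standard solutions of the Coulomb wave equation

```latex
\frac{d^{2}u}{d\rho^{2}}
+ \left[\, 1 - \frac{2\eta}{\rho} - \frac{L(L+1)}{\rho^{2}} \,\right] u = 0 ,
```

with $F_L(\eta,\rho) \sim C_L(\eta)\,\rho^{L+1}$ regular at the origin, while $G_L$ is singular there; expansions of the kind derived in the paper express one family in terms of the other.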
Functional Foods for Women's Health.
Lindeman, Alice K.
2002-01-01
Describes functional foods for women's health (foods or food ingredients that provide health benefits beyond basic nutrition), explaining that both whole and modified foods can be included as functional foods. The paper discusses the history, regulation, and promotion of functional foods; consumer interest in functional foods; how to incorporate…
A Primer on Functional Analysis
Yoman, Jerome
2008-01-01
This article presents principles and basic steps for practitioners to complete a functional analysis of client behavior. The emphasis is on application of functional analysis to adult mental health clients. The article includes a detailed flow chart containing all major functional diagnoses and behavioral interventions, with functional assessment…
Semigroups of data normalization functions
Warrens, Matthijs J.
2016-01-01
Variable centering and scaling are functions that are typically used in data normalization. Various properties of centering and scaling functions are presented. It is shown that if we use two centering functions (or scaling functions) successively, the result depends on the order in which they are applied.
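The order dependence is easy to see with two different centering functions, say mean-centering and median-centering; a minimal sketch (the example data are illustrative):

```python
import statistics

def center_mean(xs):
    """Centering by the arithmetic mean."""
    m = statistics.fmean(xs)
    return [x - m for x in xs]

def center_median(xs):
    """Centering by the median."""
    m = statistics.median(xs)
    return [x - m for x in xs]

data = [1.0, 2.0, 3.0, 10.0]
a = center_mean(center_median(data))   # ends up mean-centered: [-3, -2, -1, 6]
b = center_median(center_mean(data))   # ends up median-centered: [-1.5, -0.5, 0.5, 7.5]
```

Whichever centering is applied last "wins", so the composition of two centering functions is not commutative, which is the semigroup-style property the abstract refers to.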
International Nuclear Information System (INIS)
Smith, R.H.
1984-01-01
The mechanisms of some clinical disorders that are characteristic only of ruminants and related to the effects of abnormal mineral intake on rumen function are discussed. With particular attention to tropical conditions, consideration is given to: (a) the possible effects of phosphorus deficiency on rumen microbial activity; (b) the depression of rumen microbial synthesis in sulphur deficiency; (c) the inhibition of magnesium absorption from the forestomachs; and (d) the involvement of the rumen microorganisms in leading to copper and vitamin B12 deficiencies as a result of low intakes of cobalt. (author)
[Metabolic functions and sport].
Riviere, Daniel
2004-01-01
Current epidemiological studies emphasize the increase in metabolic diseases in adults, such as obesity, type-2 diabetes and metabolic syndrome. Even more worrying is the rising prevalence of obesity in children, which is due more to sedentariness (television, video games, etc.) than to overeating. Many studies have shown that regular physical activity benefits various bodily functions, including metabolism. After dealing with the major benefits of physical exercise on some adult metabolic disorders, we focus on the prime role played by physical activity in combating the public health problem of childhood obesity.
Aumasson, Jean-Philippe; Phan, Raphael; Henzen, Luca
2014-01-01
This is a comprehensive description of the cryptographic hash function BLAKE, one of the five final contenders in the NIST SHA-3 competition, and of BLAKE2, an improved version popular among developers. It describes how BLAKE was designed and why BLAKE2 was developed, and it offers guidelines on implementing and using BLAKE, with a focus on software implementation. In the first two chapters, the authors offer a short introduction to cryptographic hashing, the SHA-3 competition, and BLAKE. They review applications of cryptographic hashing, and they describe basic notions such as security definitions...
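BLAKE2 is directly available in Python's standard `hashlib` (since Python 3.6), which makes the implementation guidelines easy to try out. A short sketch of one-shot, keyed, and incremental hashing; the messages and key are placeholders:

```python
import hashlib

# One-shot hashing with a chosen digest size (BLAKE2b allows 1..64 bytes)
digest = hashlib.blake2b(b"BLAKE was a SHA-3 finalist", digest_size=32).hexdigest()

# Keyed mode: BLAKE2 doubles as a MAC without needing the HMAC construction
tag = hashlib.blake2b(b"message", key=b"shared secret", digest_size=16).hexdigest()

# Incremental interface; the result is identical to one-shot hashing
h = hashlib.blake2s()
h.update(b"hello ")
h.update(b"world")
incremental = h.hexdigest()
```

The built-in keyed mode is one of the practical advantages the book highlights over older hash designs, which require HMAC for message authentication.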
Gelfand, I M; Shnol, E E
1969-01-01
The second in a series of systematic studies by the celebrated mathematician I. M. Gelfand and colleagues, this volume presents students with a well-illustrated sequence of problems and exercises designed to illuminate the properties of functions and graphs. Since readers do not have the benefit of a blackboard on which a teacher constructs a graph, the authors abandoned the customary use of diagrams in which only the final form of the graph appears; instead, the book's margins feature step-by-step diagrams for the complete construction of each graph. The first part of the book employs simple functions...
[Biotechnological functional systems].
Bokser, O Ia
1999-01-01
Based on the theory of functional systems and a concept of the quantum system of behavior, studies of these quantum systems were conducted. Their structure and the interaction of their biological and technical sections were analyzed. Mathematical, biophysical, and experimental models were designed. The paper shows that biotechnical quantum systems are involved in the formation of biological feedback. A system with imperative feedback from the programmed and the introduced current results of efforts has been developed and put into practice for the self-regulation of muscle tension. Training with this biological feedback system causes a stable increase in the perception rate of proprioceptive stimuli in examinees (operators, sportsmen, neurological patients).
Bhatia, Rajendra
2009-01-01
These notes are a record of a one semester course on Functional Analysis given by the author to second year Master of Statistics students at the Indian Statistical Institute, New Delhi. Students taking this course have a strong background in real analysis, linear algebra, measure theory and probability, and the course proceeds rapidly from the definition of a normed linear space to the spectral theorem for bounded selfadjoint operators in a Hilbert space. The book is organised as twenty six lectures, each corresponding to a ninety minute class session. This may be helpful to teachers planning a course on this topic. Well prepared students can read it on their own.
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2013-01-01
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...
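For the simplest ARUM, the multinomial logit, the CPGF is the familiar log-sum, and its gradient does return the choice probabilities. A numerical sketch of that property (not the paper's general construction; the utility values are illustrative):

```python
import math

def logsum(v):
    """CPGF of the multinomial logit ARUM: G(v) = log(sum_j exp(v_j))."""
    m = max(v)  # shift for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in v))

def grad_logsum(v, eps=1e-6):
    """Central-difference gradient of the CPGF; component i is P(choose i)."""
    g = []
    for i in range(len(v)):
        up = v[:i] + [v[i] + eps] + v[i + 1:]
        dn = v[:i] + [v[i] - eps] + v[i + 1:]
        g.append((logsum(up) - logsum(dn)) / (2 * eps))
    return g

v = [1.0, 2.0, 0.5]            # systematic utilities (illustrative)
probs = grad_logsum(v)         # numerically matches softmax(v)
softmax = [math.exp(x - logsum(v)) for x in v]
```

Replacing `logsum` with a nested or cross-nested generating function and differentiating in the same way yields the corresponding generalized extreme value choice probabilities, which is the approximation result the abstract mentions.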
Matrix string partition function
Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre
1998-01-01
We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.
Ramanujan's mock theta functions.
Griffin, Michael; Ono, Ken; Rolen, Larry
2013-04-09
In his famous deathbed letter, Ramanujan introduced the notion of a mock theta function, and he offered some alleged examples. Recent work by Zwegers [Zwegers S (2001) Contemp Math 291:268-277 and Zwegers S (2002) PhD thesis (Univ of Utrecht, Utrecht, The Netherlands)] has elucidated the theory encompassing these examples. They are holomorphic parts of special harmonic weak Maass forms. Despite this understanding, little attention has been given to Ramanujan's original definition. Here, we prove that Ramanujan's examples do indeed satisfy his original definition.
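Ramanujan's first third-order example, f(q) = Σ_{n≥0} q^(n²) / ((1+q)(1+q²)⋯(1+q^n))², can be expanded with exact integer series arithmetic. A short sketch (a generic truncated-power-series computation, not taken from the paper):

```python
def series_inverse(p, N):
    """Inverse of a power series p with p[0] == 1, truncated at order N."""
    inv = [0] * N
    inv[0] = 1
    for k in range(1, N):
        inv[k] = -sum(p[j] * inv[k - j] for j in range(1, k + 1))
    return inv

def mock_theta_f(N):
    """Coefficients up to q^(N-1) of Ramanujan's third-order mock theta f(q)."""
    coeffs = [0] * N
    coeffs[0] = 1                    # n = 0 term
    denom = [1] + [0] * (N - 1)      # running product ((1+q)...(1+q^n))^2
    n = 1
    while n * n < N:
        # multiply denom by (1 + q^n)^2 = 1 + 2 q^n + q^(2n)
        new = denom[:]
        for i in range(N):
            if denom[i]:
                if i + n < N:
                    new[i + n] += 2 * denom[i]
                if i + 2 * n < N:
                    new[i + 2 * n] += denom[i]
        denom = new
        inv = series_inverse(denom, N)
        for i in range(N - n * n):   # add q^(n^2) * denom^(-1)
            coeffs[n * n + i] += inv[i]
        n += 1
    return coeffs

# f(q) = 1 + q - 2q^2 + 3q^3 - 3q^4 + 3q^5 - ...
first_coeffs = mock_theta_f(6)
```

The signed, slowly growing coefficients are the data whose asymptotics distinguish mock theta functions from ordinary theta quotients in Ramanujan's original definition.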
Plasmodesmata: Structure and Function
Directory of Open Access Journals (Sweden)
Thomas David Geydan
2006-07-01
Full Text Available Plasmodesmata are channels that traverse the cell wall and membrane. These specialized, non-passive channels act like gates that facilitate and regulate both communication and the transport of molecules such as water, nutrients, metabolites and macromolecules between plant cells. In the last decade a new view of plasmodesmata has emerged, and studies have demonstrated that these channels are more complex than previously thought. In this brief note, we aim to present the current knowledge of plasmodesmata, focusing on their structure and function.
Takami, A.; Hashimoto, T.; Horibe, M.; Hayashi, A.
2000-01-01
The Wigner functions on the one-dimensional lattice are studied. Contrary to a previous claim in the literature, Wigner functions exist on a lattice with any number of sites, whether even or odd. There are infinitely many solutions satisfying the conditions that reasonable Wigner functions should respect. After presenting a heuristic method to obtain Wigner functions, we give the general form of the solutions. Quantum mechanical expectation values in terms of Wigner functions are also ...
Functionalism and consciousness.
Shoemaker, S
1993-01-01
It is widely held that a mental state and the subject's introspective belief about it are always 'distinct existences' and only contingently connected. This suggests that for each sort of mental state there could be a creature that is introspectively 'blind' with respect to states of that sort, meaning that while it is capable of having such states, and of conceiving of itself as having them, it is totally without introspective access to its states of that sort. It is argued here that introspective blindness with respect to many sorts of mental states, in particular beliefs and sensory states, is not a possibility, because it is incompatible with requirements of rationality that are internal to the functional roles that are constitutive of these states. Introspective accessibility is essential to the functional roles of such mental states when the conceptual and cognitive resources of the subject of those states are sufficiently rich to make beliefs and thoughts about them a possibility. This is a version of the view that such states are necessarily self-intimating and is incompatible with the perceptual model of introspection favoured by some functionalists as well as by many non-functionalists.
Pleural function and lymphatics.
Negrini, D; Moriondo, A
2013-02-01
The pleural space plays an important role in respiratory function, as the negative intrapleural pressure regimen ensures lung expansion while at the same time maintaining tight mechanical coupling between the lung and the chest wall. The efficiency of the lung-chest wall coupling depends upon pleural liquid volume, which in turn reflects the balance between the filtration of fluid into the cavity and its egress out of it. While filtration occurs through a single mechanism passively driving fluid from the interstitium of the parietal pleura into the cavity, several mechanisms may co-operate to remove pleural fluid. Among these, the pleural lymphatic system emerges as the most important one in quantitative terms and the only one able to cope with variable pleural fluid volume and drainage requirements. In this review, we present a detailed account of the current knowledge on: (a) the complex morphology of the pleural lymphatic system, (b) the mechanisms supporting pleural lymph formation and propulsion, (c) the dependence of pleural lymphatic function upon local tissue mechanics and (d) the effect of lymphatic inefficiency on the development of clinically severe pleural and, more generally, respiratory pathologies. © 2012 The Authors Acta Physiologica © 2012 Scandinavian Physiological Society.
da Silva, Elaine Zayas Marcelino; Jamur, Maria Célia
2014-01-01
Since they were first described by Paul Ehrlich in 1878, mast cells have been viewed mostly as effectors of allergy. It has been only in the past two decades that mast cells have gained recognition for their involvement in other physiological and pathological processes. Mast cells have a widespread distribution and are found predominantly at the interface between the host and the external environment. Mast cell maturation, phenotype and function are a direct consequence of the local microenvironment and have a marked influence on their ability to specifically recognize and respond to various stimuli through the release of an array of biologically active mediators. These features enable mast cells to act both as first responders in harmful situations and to respond to changes in their environment by communicating with a variety of other cells implicated in physiological and immunological responses. Therefore, the critical role of mast cells in both innate and adaptive immunity, including immune tolerance, has gained increased prominence. Conversely, mast cell dysfunction has pointed to these cells as the main offenders in several chronic allergic/inflammatory disorders, cancer and autoimmune diseases. This review summarizes the current knowledge of mast cell function in both normal and pathological conditions with regard to their regulation, phenotype and role. PMID:25062998
Amusia and musical functioning.
Alossa, Nicoletta; Castelli, Lorys
2009-01-01
Music, like language, is a universal trait specific to humans; it is a complex ability with characteristics that are unique compared to other cognitive abilities. Nevertheless, several issues remain open to debate: for example, whether music is a faculty independent of the rest of the cognitive system, and whether musical skills are mediated by a single mechanism or by a combination of processes that are independent of one another. Moreover, the anatomical correlates of music have yet to be clarified. The goal of this review is to illustrate the current state of the neuropsychology of music and to describe different approaches to the study of musical functions. We describe neuropsychological findings suggesting that music is a special function carried out by different and dedicated processes that are probably subserved by different anatomical regions of the brain. Moreover, we review the evidence obtained from work with brain-damaged patients suffering from music agnosia, a selective impairment in music recognition. Copyright 2009 S. Karger AG, Basel.