WorldWideScience

Sample records for monomodal lognormal function

  1. The truncated lognormal distribution as a luminosity function for SWIFT-BAT gamma-ray bursts

    CERN Document Server

    Zaninetti, L

    2016-01-01

    The determination of the luminosity function (LF) in gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here we analyse three cosmologies: the standard cosmology, the plasma cosmology, and the pseudo-Euclidean universe. The LF of the GRBs is first modeled by the lognormal distribution and the four broken power law, and then by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of redshift.
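    As a sketch of the idea, a truncated lognormal can be sampled by rejection in log space; the luminosity window and parameters below are hypothetical placeholders for illustration, not the paper's fitted values.

```python
import math
import random

def sample_truncated_lognormal(mu, sigma, lo, hi, n, seed=0):
    """Draw n samples from a lognormal(mu, sigma) restricted to [lo, hi],
    by rejection sampling of the underlying Gaussian in log space."""
    rng = random.Random(seed)
    a, b = math.log(lo), math.log(hi)
    samples = []
    while len(samples) < n:
        x = rng.gauss(mu, sigma)
        if a <= x <= b:            # keep only draws inside the truncation window
            samples.append(math.exp(x))
    return samples

# Hypothetical luminosity window (erg/s); illustrative parameters only
lum = sample_truncated_lognormal(mu=math.log(1e51), sigma=1.5,
                                 lo=1e49, hi=1e53, n=10_000)
```

    Rejection sampling is adequate here because the truncation window spans roughly three sigma on each side, so the acceptance rate stays close to one.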

  2. The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts

    Directory of Open Access Journals (Sweden)

    Lorenzo Zaninetti

    2016-11-01

    Full Text Available The determination of the luminosity function (LF) in gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is first modeled by the lognormal distribution and the four broken power law, and then by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of redshift.

  3. MODELING PARTICLE SIZE DISTRIBUTION IN HETEROGENEOUS POLYMERIZATION SYSTEMS USING MULTIMODAL LOGNORMAL FUNCTION

    Directory of Open Access Journals (Sweden)

    J. C. Ferrari

    Full Text Available This work evaluates the use of the multimodal lognormal function to describe Particle Size Distributions (PSD) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely Particle Swarm Optimization (PSO), was used for parameter estimation of the proposed model, minimizing an objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function can describe distinctive features of different types of PSDs with accuracy and consistency.
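    The multimodal lognormal function is simply a weighted sum of lognormal densities. The sketch below (hypothetical mode positions and widths, plain evaluation rather than the paper's PSO parameter estimation) checks that a bimodal mixture whose weights sum to one integrates to unity.

```python
import numpy as np

def multimodal_lognormal(d, weights, mus, sigmas):
    """Weighted sum of lognormal densities over particle diameter d.
    If the weights sum to 1, the mixture integrates to 1."""
    d = np.asarray(d, dtype=float)
    pdf = np.zeros_like(d)
    for w, mu, s in zip(weights, mus, sigmas):
        pdf += w * np.exp(-(np.log(d) - mu) ** 2 / (2 * s ** 2)) \
               / (d * s * np.sqrt(2 * np.pi))
    return pdf

# Bimodal example with hypothetical modes near 0.1 and 1.0 (e.g. micrometres)
d = np.linspace(1e-3, 10.0, 200_000)
pdf = multimodal_lognormal(d, weights=[0.4, 0.6],
                           mus=[np.log(0.1), np.log(1.0)], sigmas=[0.3, 0.25])
# Trapezoidal integral of the mixture; should be close to 1
area = float(np.sum((pdf[1:] + pdf[:-1]) / 2 * np.diff(d)))
```

    In a fitting setting, the weights, mus and sigmas would be the decision variables of the optimizer and the objective the mean squared error against the measured PSD.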

  4. Matching the Evolution of the Stellar Mass Function Using Log-normal Star Formation Histories

    CERN Document Server

    Abramson, Louis E; Dressler, Alan; Oemler, Augustus; Poggianti, Bianca; Vulcani, Benedetta

    2014-01-01

    We show that a model consisting of individual, log-normal star formation histories for a volume-limited sample of $z\approx0$ galaxies reproduces the evolution of the total and quiescent stellar mass functions at $z\lesssim2.5$ and stellar masses $M_*\geq10^{10}\,{\rm M_\odot}$. This model has previously been shown to reproduce the star formation rate/stellar mass relation (${\rm SFR}$--$M_*$) over the same interval, is fully consistent with the observed evolution of the cosmic ${\rm SFR}$ density at $z\leq8$, and entails no explicit "quenching" prescription. We interpret these results/features in the context of other models demonstrating a similar ability to reproduce the evolution of (1) the cosmic ${\rm SFR}$ density, (2) the total/quiescent stellar mass functions, and (3) the ${\rm SFR}$--$M_*$ relation, proposing that the key difference between modeling approaches is the extent to which they stress/address diversity in the (star-forming) galaxy population. Finally, we suggest that observations revealing t...
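    A log-normal star formation history is just a lognormal density in cosmic time. The minimal sketch below (hypothetical T0 and tau, arbitrary normalization; not the paper's fitted values) evaluates the SFH, the total mass formed, and the peak epoch.

```python
import numpy as np

def lognormal_sfh(t, T0, tau):
    """Log-normal star formation history:
    SFR(t) ∝ exp(-(ln t - T0)^2 / (2 tau^2)) / t, with t in Gyr."""
    t = np.asarray(t, dtype=float)
    return np.exp(-(np.log(t) - T0) ** 2 / (2 * tau ** 2)) / t

t = np.linspace(0.01, 13.8, 5000)                 # cosmic time grid in Gyr
sfr = lognormal_sfh(t, T0=np.log(3.0), tau=0.5)   # hypothetical parameters
# Total stellar mass formed (arbitrary units, ignoring the return fraction);
# analytically this integral is sqrt(2*pi)*tau
mass = float(np.sum((sfr[1:] + sfr[:-1]) / 2 * np.diff(t)))
t_peak = float(t[np.argmax(sfr)])                 # analytic peak: exp(T0 - tau^2)
```

    Varying T0 and tau per galaxy is what lets a population of such histories span both early-peaking (now quiescent) and late-peaking (still star-forming) galaxies.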

  5. The Lognormal Probability Distribution Function of the Perseus Molecular Cloud: A Comparison of HI and Dust

    CERN Document Server

    Burkhart, Blakesley; Murray, Claire; Stanimirovic, Snezana

    2015-01-01

    The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e., A_V < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density probability distribution function, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution, and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow and at column densities larger than...

  6. The Lognormal Probability Distribution Function of the Perseus Molecular Cloud: A Comparison of HI and Dust

    Science.gov (United States)

    Burkhart, Blakesley; Lee, Min-Young; Murray, Claire E.; Stanimirović, Snezana

    2015-10-01

    The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e., A_V < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density PDF, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow, and at column densities larger than the HI-H2 transition, the HI rapidly depletes, suggesting that the HI PDF may be used to find the HI-H2 transition column density. We also calculate the sonic Mach number of the atomic gas by using HI absorption line data, which yield a median value of M_s = 4.0 for the CNM, while the HI emission PDF, which traces both the WNM and CNM, has a width more consistent with transonic turbulence.
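    Because a lognormal column-density PDF is Gaussian in ln N, fitting it reduces to taking log moments. The sketch below uses synthetic data with hypothetical Perseus-like numbers, not the GALFA-HI measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic HI column densities (cm^-2); hypothetical Perseus-like parameters
mu_true, sigma_true = np.log(8e20), 0.3
N_HI = rng.lognormal(mean=mu_true, sigma=sigma_true, size=200_000)

# A lognormal PDF is Gaussian in ln N, so fitting reduces to the
# sample mean and standard deviation of the log column density
log_N = np.log(N_HI)
mu_fit, sigma_fit = float(log_N.mean()), float(log_N.std())
peak_column = float(np.exp(mu_fit - sigma_fit ** 2))  # mode of the lognormal PDF
```

    With real maps one would histogram ln N per pixel and check the lognormal against power-law or bimodal alternatives before reading off the peak column density.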

  7. Application of continuous normal-lognormal bivariate density functions in a sensitivity analysis of municipal solid waste landfill.

    Science.gov (United States)

    Petrovic, Igor; Hip, Ivan; Fredlund, Murray D

    2016-09-01

    The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, is of primary concern for waste stability problems due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from the published literature and analyzed. Basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong but still significant. Through use of a distribution fitting method it was found that the shear friction angle can be adjusted to a normal probability density function while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimated Method (APEM) stability analysis of a real case scenario, the Jakusevec landfill, the disposal site of Zagreb, the capital of Croatia. The analysis shows that for a dry landfill the most significant factor influencing the safety factor is the shear friction angle of old, decomposed waste material, while for a landfill with a significant leachate level the most significant factor is the cohesion of old, decomposed waste material.

  8. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    Science.gov (United States)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated for implementation. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
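    The paper's own method adds a polynomial series correction, but the classic moment-matching baseline (Fenton-Wilkinson) already conveys the core idea: approximate the weighted sum by a single lognormal with the same mean and variance. Portfolio weights and parameters below are hypothetical.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas, weights):
    """Classic Fenton-Wilkinson moment matching: approximate a weighted sum
    of independent lognormals by one lognormal with the same mean/variance."""
    mus, sigmas, weights = map(np.asarray, (mus, sigmas, weights))
    m = weights * np.exp(mus + sigmas ** 2 / 2)      # component means
    v = m ** 2 * (np.exp(sigmas ** 2) - 1)           # component variances
    M, V = m.sum(), v.sum()                          # independence assumed
    sigma2 = np.log(1 + V / M ** 2)
    return np.log(M) - sigma2 / 2, np.sqrt(sigma2)

# Hypothetical 10-asset portfolio, equal weights
mus = np.log(np.array([1.0, 2.0, 0.5, 1.5, 3.0, 0.8, 1.2, 2.5, 0.9, 1.8]))
sigmas = np.full(10, 0.25)
w = np.full(10, 0.1)
mu_a, sig_a = fenton_wilkinson(mus, sigmas, w)

# Monte Carlo check of the approximation against the true sum
rng = np.random.default_rng(0)
S = (w * rng.lognormal(mus, sigmas, size=(100_000, 10))).sum(axis=1)
```

    By construction the approximation matches the mean exactly; the median exp(mu_a) is only approximate, and the quality degrades as the component sigmas grow.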

  9. Automated alignment of single-mode pigtail optical fibers

    Science.gov (United States)

    St-Amant, Yves

    This thesis lays the groundwork for the development of model-based algorithms for automating the alignment of single-mode pigtail fibers. Starting from the overlap-integral method and two existing approximate solutions, an analytical model of optical coupling efficiency is first formulated to estimate the power transmitted between a component and a single-mode pigtail fiber. Using this model, seven properties that can be useful for the development of model-based algorithms are then identified and validated. Finally, with the help of these properties, a model-based alignment strategy is developed and validated experimentally. The results obtained clearly demonstrate the repeatability, robustness, accuracy and speed of the proposed strategy. They also show that a complete alignment can be achieved without the use of auxiliary systems such as vision systems, infrared cameras, contact sensors or highly precise fixturing systems.

  10. Noise Challenges in Monomodal Gaze Interaction

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik

    Modern graphical user interfaces (GUIs) are designed with able-bodied users in mind. Operating these interfaces can be impossible for some users who are unable to control the conventional mouse and keyboard. An eye tracking system offers possibilities for independent use and improved quality of life via dedicated interface tools especially tailored to the users' needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards robustness, accuracy and precision of modern eye-tracking systems and there are many available on the market. Even ...... stream are most wanted. The work in this thesis presents three contributions that may advance the use of low-cost monomodal gaze tracking and research in the field: - An assessment of a low-cost open-source gaze tracker and two eye tracking systems through an accuracy and precision test and a performance...

  11. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    Energy Technology Data Exchange (ETDEWEB)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H., E-mail: B.H.Erne@uu.nl

    2015-04-15

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles.

    Highlights:
    • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain.
    • No a priori shape of the distribution is assumed by the MINORIM inversion method.
    • Well crystallized particles have log-normal TEM and magnetic size distributions.
    • Defective particles can combine a monomodal size and a bimodal dipole moment.
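    The magnetization curve behind such fits is a Langevin function weighted by the moment distribution. A minimal numpy sketch (hypothetical lognormal moment distribution; this is the forward model only, not the MINORIM inversion) looks like:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def trap(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def ensemble_magnetization(B, mu_grid, number_pdf, T=295.0):
    """Normalized magnetization of a dilute superparamagnetic ensemble:
    each moment mu contributes mu * L(mu*B/kT), weighted by the number pdf."""
    w = number_pdf * mu_grid                     # magnetization weight ∝ mu
    return trap(w * langevin(mu_grid * B / (KB * T)), mu_grid) / trap(w, mu_grid)

# Hypothetical lognormal number distribution of dipole moments (A·m^2)
mu_grid = np.linspace(1e-21, 4e-19, 4000)
pdf = np.exp(-(np.log(mu_grid) - np.log(5e-20)) ** 2 / (2 * 0.4 ** 2)) / mu_grid

B = np.array([0.001, 0.01, 0.1, 1.0, 5.0])       # applied field, tesla
M = np.array([ensemble_magnetization(b, mu_grid, pdf) for b in B])
```

    Inverting measured M(B) for the moment distribution, as MINORIM does, is the ill-posed counterpart of this forward calculation.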

  12. Improving lognormal models for cosmological fields

    CERN Document Server

    Xavier, Henrique S; Joachimi, Benjamin

    2016-01-01

    It is common practice in cosmology to model large-scale structure observables as lognormal random fields, and this approach has been successfully applied in the past to the matter density and weak lensing convergence fields separately. We argue that this approach has fundamental limitations which prevent its use for jointly modelling these two fields, since the shape of the lognormal distribution can prevent certain correlations from being attainable. Given the need of ongoing and future large-scale structure surveys for fast joint simulations of clustering and weak lensing, we propose two ways of overcoming these limitations. The first approach slightly distorts the power spectra of the fields using one of two algorithms that minimises either the absolute or the fractional distortions. The second obtains more accurate convergence marginal distributions, for which we provide a fitting function, by integrating the lognormal density along the line of sight. The latter approach also provides a way to determine ...
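    The core limitation can be seen in the transform itself: a zero-mean lognormal field built from a Gaussian field has covariance exp(xi_g) - 1, which is bounded below, so sufficiently negative cross-correlations are unattainable. A small Monte Carlo sketch (hypothetical sigmas) verifies the relation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two correlated Gaussian fields with covariance xi_g (hypothetical values)
sigma1, sigma2, xi_g = 0.8, 0.5, 0.2
cov = np.array([[sigma1 ** 2, xi_g], [xi_g, sigma2 ** 2]])
g = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

# Zero-mean lognormal transform: delta = exp(g - sigma^2/2) - 1
d1 = np.exp(g[:, 0] - sigma1 ** 2 / 2) - 1.0
d2 = np.exp(g[:, 1] - sigma2 ** 2 / 2) - 1.0

# The lognormal covariance is exp(xi_g) - 1, so it can never drop below
# exp(-sigma1*sigma2) - 1: the obstacle for joint density/convergence modelling
xi_delta = float(np.mean(d1 * d2))
```

    Since xi_g cannot go below -sigma1*sigma2 for jointly Gaussian fields, the lognormal covariance is floored at exp(-sigma1*sigma2) - 1, whatever the target correlation demands.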

  13. Are human interactivity times lognormal?

    CERN Document Server

    Blenn, Norbert

    2016-01-01

    In this paper, we are analyzing the interactivity time, defined as the duration between two consecutive tasks such as sending emails, collecting friends and followers and writing comments in online social networks (OSNs). The distributions of these times are heavy tailed and often described by a power-law distribution. However, power-law distributions usually only fit the heavy tail of empirical data and ignore the information in the smaller value range. Here, we argue that the durations between writing emails or comments, adding friends and receiving followers are likely to follow a lognormal distribution. We discuss the similarities between power-law and lognormal distributions, show that binning of data can deform a lognormal to a power-law distribution and propose an explanation for the appearance of lognormal interactivity times. The historical debate of similarities between lognormal and power-law distributions is reviewed by illustrating the resemblance of measurements in this paper with the historical...
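    The binning effect mentioned in the abstract is easy to reproduce: log-binned lognormal samples look nearly straight in log-log over a limited tail range, mimicking a power law. A sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

# Logarithmic binning of the upper tail, x from 10 to 1000
edges = np.logspace(1, 3, 15)
counts, _ = np.histogram(x, bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
density = counts / np.diff(edges)

mask = counts > 0
log_c, log_d = np.log(centers[mask]), np.log(density[mask])
slope, intercept = np.polyfit(log_c, log_d, 1)   # apparent power-law exponent
r = np.corrcoef(log_c, log_d)[0, 1]              # near -1: "looks like" a power law
```

    The underlying log-log curve of a lognormal is a parabola; over one or two decades its curvature is easily hidden by binning and noise, which is exactly the confusion the paper discusses.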

  14. On the attenuation coefficient of monomode periodic waveguides

    CERN Document Server

    Baron, Alexandre; Smigaj, Wojciech; Lalanne, Philippe

    2011-01-01

    It is widely accepted that, on ensemble average, the transmission T of guided modes decays exponentially with the waveguide length L due to small imperfections, leading to the important figure of merit defined as the attenuation-rate coefficient alpha = -<ln(T)>/L. In this letter, we show that the exponential-damping law is not valid in general for periodic monomode waveguides, especially as the group velocity decreases. This result, which contradicts common beliefs and experimental practices aiming at measuring alpha, is supported by a theoretical study of light transport in the limit of very small imperfections, and by numerical results obtained for two waveguide geometries that offer contrasting damping behaviours.

  15. Photon, electron, magnon, phonon and plasmon mono-mode circuits [review article

    Science.gov (United States)

    Vasseur, J. O.; Akjouj, A.; Dobrzynski, L.; Djafari-Rouhani, B.; El Boudouti, E. H.

    2004-06-01

    Photon circuits are light conducting networks formed by joining several dielectric wave-guide channels for the transmission of light. Their production utilizes the most advanced surface technologies and represents one of the most important challenges for the next decade. These circuits are usually mono-mode when the lateral dimensions of the conducting wires are small as compared to the photon wavelength. Plasmon circuits are plasmon conducting networks, a plasmon being a collective excitation of an electron gas in a metal. Such circuits made out of nanometric metallic clusters and wires can also be tuned to work at light wavelength. Similarly, electron circuits can be designed with modern surface technologies in which the propagation of electrons is non-diffusive. Similar investigations also started recently for circuits in which the propagating excitations are phonons and magnons (spin waves). In this review paper, we deal with mono-mode circuits for propagating photons, non-diffusive ballistic electrons, magnons, phonons and plasmons. In all these circuits, the interfaces between the different wires out of which the circuits are made of, play a fundamental role. All such circuits exhibit a variety of interference effects in their transport properties. Emphasis in this review paper is placed on the network creations, which include barriers, stubs or resonators, closed loops, interconnecting branched networks and multiplexers. Results for the transmission and reflection properties of such circuits are discussed as a function of the wavelength of the excitations and the physical properties of the circuits.

  16. Showing or Telling a Story: A Comparative Study of Public Education Texts in Multimodality and Monomodality

    Science.gov (United States)

    Wang, Kelu

    2013-01-01

    Multimodal texts that combine words and images produce meaning in a different way from monomodal texts that rely on words. They differ not only in representing the subject matter, but also constructing relationships between text producers and text receivers. This article uses two multimodal texts and one monomodal written text as samples, which…

  17. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou...

  18. Lognormal Infection Times of Online Information Spread

    CERN Document Server

    Doerr, Christian; Van Mieghem, Piet

    2013-01-01

    The infection times of individuals in online information spread such as the inter-arrival time of Twitter messages or the propagation time of news stories on a social media site can be explained through a convolution of lognormally distributed observation and reaction times of the individual participants. Experimental measurements support the lognormal shape of the individual contributing processes, and have resemblance to previously reported lognormal distributions of human behavior and contagious processes.
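    A quick simulation illustrates the convolution argument: summing lognormal observation and reaction times yields a right-skewed total whose logarithm stays close to Gaussian. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Hypothetical lognormal observation and reaction times (e.g. minutes)
t_obs = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)
t_react = rng.lognormal(mean=np.log(2.0), sigma=1.0, size=n)

# The infection (inter-arrival) time is the sum of the two stages,
# i.e. its density is the convolution of two lognormal densities
t_infect = t_obs + t_react

# Right-skewed total whose log is close to Gaussian:
# skewness of ln(t) near zero indicates an approximately lognormal total
log_t = np.log(t_infect)
skew_log = float(np.mean(((log_t - log_t.mean()) / log_t.std()) ** 3))
```

    A sum of lognormals is not exactly lognormal, but for moderate sigmas it is close, consistent with the lognormally shaped infection-time measurements the abstract reports.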

  19. Lognormal Approximation of Complex Path-dependent Pension Scheme Payoffs

    DEFF Research Database (Denmark)

    Jørgensen, Peter Løchte

    This paper analyzes an explicit return smoothing mechanism which has recently been introduced as part of a new type of pension savings contract that has been offered by Danish life insurers. We establish the payoff function implied by the return smoothing mechanism and show that its probabilistic properties are accurately approximated by a suitably adapted lognormal distribution. The quality of the lognormal approximation is explored via a range of simulation based numerical experiments, and we point to several other potential practical applications of the paper's theoretical results.

  20. From Monomodal to Multimodal Metaphors in the Portuguese sports newspaper A Bola

    Directory of Open Access Journals (Sweden)

    Maria Clotilde Almeida

    Full Text Available Following our comprehensive study of conceptual metaphor occurrences in the Portuguese sports newspaper A Bola (ALMEIDA, 2013) and the research on multimodal metaphor (Forceville, 2008, 2009, 2012), the present paper aims to analyse multimodal metaphors depicting Cristiano Ronaldo on the covers of this very same newspaper. We wish to uncover conceptual affinities or differences between monomodal and multimodal metaphors as far as their source domains are concerned, namely those of WAR and RELIGION (ALMEIDA, 2013). Upon confronting monomodal and multimodal metaphors, we found that the source domains of multimodal metaphors appear to be more restrictive in comparison to the vast panoply of source domains in monomodal metaphors (ALMEIDA, 2013).

  21. Nonrigid Registration of Monomodal MRI Using Linear Viscoelastic Model

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2014-01-01

    Full Text Available This paper describes a method for nonrigid registration of monomodal MRI based on physical laws. The proposed method assumes that the properties of image deformations are like those of viscoelastic matter, which exhibits the properties of both an elastic solid and a viscous fluid. Therefore, the deformation fields of the deformed image are constrained by both sets of properties. After global registration, the local shape variations are assumed to have the properties of the Maxwell model of linear viscoelasticity, and the deformation fields are constrained by the corresponding partial differential equations. To speed up the registration, an adaptive force is introduced according to the maximum displacement of each iteration. Both synthetic datasets and real datasets are used to evaluate the proposed method. We compare the results of the linear viscoelastic model with those of the fluid model on the basis of both the standard and adaptive forces. The results demonstrate that the adaptive force increases in both models and that the linear viscoelastic model improves the registration accuracy.

  22. Exact Solutions to Extended Nonlinear Schrödinger Equation in Monomode Optical Fiber

    Institute of Scientific and Technical Information of China (English)

    BAI Cheng-Lin; ZHAO Hong; Wang Wei-Tao

    2006-01-01

    By using the generally projective Riccati equation method, more new exact travelling wave solutions to the extended nonlinear Schrödinger equation (NLSE), which describes femtosecond pulse propagation in monomode optical fiber, are found. These include bright soliton solutions, dark soliton solutions, new solitary waves, periodic solutions, and rational solutions. The finding of abundant solution structures for the extended NLSE helps in studying the propagation behaviour of femtosecond pulses in monomode optical fiber.

  23. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.

  24. Neuronal variability during handwriting: lognormal distribution.

    Directory of Open Access Journals (Sweden)

    Valery I Rupasov

    Full Text Available We examined time-dependent statistical properties of electromyographic (EMG) signals recorded from intrinsic hand muscles during handwriting. Our analysis showed that trial-to-trial neuronal variability of EMG signals is well described by the lognormal distribution, clearly distinguished from the Gaussian (normal) distribution. This finding indicates that EMG formation cannot be described by a conventional model in which the signal is normally distributed because it is composed of a summation of many random sources. We found that the variability of temporal parameters of handwriting (handwriting duration and response time) is also well described by a lognormal distribution. Although the exact mechanism behind the lognormal statistics remains an open question, the results obtained should significantly impact experimental research, theoretical modeling and bioengineering applications of motor networks. In particular, our results suggest that accounting for the lognormal distribution of EMGs can improve biomimetic systems that strive to reproduce EMG signals in artificial actuators.

  25. Beyond lognormal inequality: The Lorenz Flow Structure

    Science.gov (United States)

    Eliazar, Iddo

    2016-11-01

    Observed from a socioeconomic perspective, the intrinsic inequality of the lognormal law happens to manifest a flow generated by an underlying ordinary differential equation. In this paper we extend this feature of the lognormal law to a general "Lorenz Flow Structure" of Lorenz curves, objects that quantify socioeconomic inequality. The Lorenz Flow Structure establishes a general framework of size distributions that span continuous spectra of socioeconomic states ranging from the pure-communism extreme to the absolute-monarchy extreme. This study introduces and explores the Lorenz Flow Structure, analyzes its statistical and inequality properties, unveils the unique role of the lognormal law within this general structure, and presents various examples of the structure. Beyond the lognormal law, the examples include the inverse-Pareto and Pareto laws, which often govern the tails of composite size distributions.
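    For the lognormal law the Lorenz curve has the closed form L(p) = Phi(Phi^{-1}(p) - sigma), depending only on sigma, with Gini coefficient 2*Phi(sigma/sqrt(2)) - 1. A stdlib-plus-numpy sketch checking the closed form against Monte Carlo (sigma chosen arbitrarily):

```python
import math
import numpy as np

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection (adequate for a sketch)."""
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def lorenz_lognormal(p, sigma):
    """Lorenz curve of the lognormal law: L(p) = Phi(Phi^-1(p) - sigma).
    Note it depends only on sigma, not on the scale parameter mu."""
    return Phi(Phi_inv(p) - sigma)

sigma = 0.8
gini = 2.0 * Phi(sigma / math.sqrt(2.0)) - 1.0   # closed-form Gini coefficient

# Monte Carlo check: wealth share of the poorest 50%
rng = np.random.default_rng(5)
x = np.sort(rng.lognormal(0.0, sigma, 1_000_000))
share_bottom_half = float(x[:500_000].sum() / x.sum())
```

    The scale-invariance of L(p) is the "flow" seed the paper generalizes: changing mu moves everyone's wealth proportionally and leaves inequality untouched.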

  26. Mono-modal feature extraction for bonding quality detection of explosive clad structure with optimized dual-tree complex wavelet transform

    Science.gov (United States)

    Si, Yue; Zhang, Zhousuo; Wang, Hongfang; Yuan, Feichen

    2017-03-01

    Bonding quality detection of explosive clad structure is significant to prevent catastrophic accidents. Multi-modal features related to bonding quality are contained in structural vibration response signal. Different modal feature has different sensitivity to the bonding quality. Extracting the desired mono-modal feature from the vibration response signal is necessary. Due to the mode aliasing easily appeared in the process of extracting the desired mono-modal feature, there is no effective method for this task. Dual-tree complex wavelet with attractive properties such as shift invariance and reduced spectral aliasing may provide a better way to extract the mono-modal feature. However, the fixed basis functions independent of the analyzed signal may weak the advantage of the method and even reduce the accuracy of detection result. To overcome this shortcoming, a technique called optimized dual-tree complex wavelet transform (ODTCWT) is proposed in this paper. Based on the analyzed signal, the optimized dual-tree complex wavelet basis function is constructed by searching for the proper parameters of vanishing moment K and the order of filter L. The optimized dual-tree complex wavelet with improved wavelet filters can best matched the modal frequencies of the analyzed signal. The ODTCWT can extract the mono-modal feature from vibration response signal with lower mode aliasing. The feasibility and effectiveness of the method of constructing ODTCWT is illustrated by the simulated signal. The proposed ODTCWT is combined with time entropy to detecting bonding quality of explosive clad pipes. For comparison, un-optimized dual-tree complex wavelet transform (UODTCWT), second-generation wavelet transform (SGWT) and band-pass filter (BPF) are also used for this task to demonstrate the validity of ODTCWT.

  27. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    Science.gov (United States)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-11-01

    In this paper we investigate the approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ] of S with respect to a conditioning random variable Λ.

  28. A Lognormal Distribution of Metal Resources

    Institute of Scientific and Technical Information of China (English)

    Donald A.Singer

    2011-01-01

    For national or global resource estimation of frequencies of metals, a lognormal distribution has commonly been recommended but not adequately tested. Tests of frequencies of Cu, Zn, Pb, Ag, and Au contents of 1984 well-explored mineral deposits display a poor fit to the lognormal distribution. When the same metals plus Mo, Co, Nb2O3, and REE2O3 are grouped into 19 geologically defined deposit types, only eight of the 73 tests fail to be fit by a lognormal distribution, and most of those failures are in two deposit types, suggesting a problem with those types. Estimates of the mean and standard deviation of each of the metals in each of the deposit types are provided for modeling.

  29. The lognormal handwriter: learning, performing and declining.

    Directory of Open Access Journals (Sweden)

    Réjean Plamondon

    2013-12-01

    Full Text Available The generation of handwriting is a complex neuromotor skill requiring the interaction of many cognitive processes. It aims at producing a message to be imprinted as an ink trace left on a writing medium. The generated trajectory of the pen tip is made up of strokes superimposed over time. The Kinematic Theory of rapid human movements and its family of lognormal models provide analytical representations of these strokes, often considered as the basic unit of handwriting. This paradigm has not only been experimentally confirmed in numerous predictive and physiologically significant tests but it has also been shown to be the ideal mathematical description for the impulse response of a neuromuscular system. This latter demonstration suggests that the lognormality of the velocity patterns can be interpreted as reflecting the behaviour of subjects who are in perfect control of their movements. To illustrate this interpretation, we present a short overview of the main concepts behind the Kinematic Theory and briefly describe how its models can be exploited, using various software tools, to investigate these ideal lognormal behaviors. We emphasize that the parameters extracted during various tasks can be used to analyze some underlying processes associated with their realization. To investigate the operational convergence hypothesis, we report on two original studies. First, we focus on the early steps of the motor learning process as seen as a converging behaviour toward the production of more precise lognormal patterns as young children practicing handwriting start to become more fluent writers. Second, we illustrate how aging affects handwriting by pointing out the increasing departure from the ideal lognormal behaviour as the control of the fine motricity begins to decline. Overall, the paper highlights this developmental process of merging toward a lognormal behaviour with learning, mastering this behaviour to succeed in performing a given task

  10. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    Energy Technology Data Exchange (ETDEWEB)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D' Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2016-08-30

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.

  12. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    Science.gov (United States)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2017-04-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
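
The lognormal-versus-Gaussian comparison above rests on a chi-square-per-degree-of-freedom fit of each model to a binned distribution. A minimal Python sketch of that comparison, using synthetic skewed data and moment-matched fits rather than the DES maps (all parameters are illustrative):

```python
import math
import random

random.seed(1)

# Synthetic "density contrast" sample: 1 + delta is lognormal with mean 1,
# mimicking the skewed counts-in-cells distributions discussed above.
sigma = 0.5
mu = -0.5 * sigma**2  # ensures E[1 + delta] = 1
data = [math.exp(random.gauss(mu, sigma)) - 1.0 for _ in range(20000)]

def norm_cdf(x, m, s):
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))

def chi2_per_dof(data, cdf, n_params, bins=20):
    lo, hi = min(data), max(data)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    n = len(data)
    chi2, used = 0.0, 0
    for a, b in zip(edges, edges[1:]):
        obs = sum(1 for x in data if a <= x < b)
        exp = n * (cdf(b) - cdf(a))
        if exp > 5:  # standard validity cut for Pearson chi-square
            chi2 += (obs - exp) ** 2 / exp
            used += 1
    return chi2 / max(used - 1 - n_params, 1)

# Gaussian model: moment-matched to the sample.
m = sum(data) / len(data)
s = math.sqrt(sum((x - m) ** 2 for x in data) / (len(data) - 1))
chi2_gauss = chi2_per_dof(data, lambda x: norm_cdf(x, m, s), 2)

# Lognormal model for 1 + delta: moment-matched in log space.
logs = [math.log(1.0 + x) for x in data]
ml = sum(logs) / len(logs)
sl = math.sqrt(sum((y - ml) ** 2 for y in logs) / (len(logs) - 1))
chi2_logn = chi2_per_dof(
    data, lambda x: norm_cdf(math.log(1.0 + x), ml, sl) if x > -1.0 else 0.0, 2)

print(chi2_gauss, chi2_logn)
```

With data that are genuinely lognormal in 1 + delta, the Gaussian model's chi^2/dof comes out far larger, qualitatively mirroring the 1.84-versus-1.11 comparison reported above.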

  13. Variational Bayes for Regime-Switching Log-Normal Models

    Directory of Open Access Journals (Sweden)

    Hui Zhao

    2014-07-01

    Full Text Available The power of projection using divergence functions is a major theme in information geometry. One version of this is the variational Bayes (VB) method. This paper looks at VB in the context of other projection-based methods in information geometry. It also describes how to apply VB to the regime-switching log-normal model and how it provides a computationally fast solution to quantify the uncertainty in the model specification. The results show that the method can recover the model structure exactly, gives reasonable point estimates, and is computationally very efficient. The potential problems of the method in quantifying the parameter uncertainty are discussed.

  14. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    [Xe−θX]/L(θ)=x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of lognormal sum Sn=X1+⋯+Xn: a saddlepoint approximation and an exponential twisting importance sampling estimator. For the latter we...... demonstrate logarithmic efficiency. Numerical examples for the cdf Fn(x) and the pdf fn(x) of Sn are given in a range of values of σ2,n,x motivated from portfolio Value-at-Risk calculations....
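
For context, the quantity being approximated is the left-tail probability of a lognormal sum. A crude Monte Carlo baseline (illustrative parameters; this is not the paper's saddlepoint or importance-sampling estimator) shows why more efficient methods are needed as x moves into the tail:

```python
import math
import random

random.seed(2)

# Crude Monte Carlo for the left-tail probability P(S_n <= x) of a
# lognormal sum S_n = X_1 + ... + X_n with X_i ~ LN(0, sigma^2).
n, sigma, x = 4, 1.0, 2.0
trials = 200_000
hits = 0
for _ in range(trials):
    s = sum(math.exp(random.gauss(0.0, sigma)) for _ in range(n))
    if s <= x:
        hits += 1
p_hat = hits / trials
# The relative error of crude MC grows without bound as x decreases,
# which is why a tilted (importance-sampling) estimator is needed deep
# in the left tail.
rel_err = math.sqrt(p_hat * (1.0 - p_hat) / trials) / p_hat
print(p_hat, rel_err)
```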

  15. Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels

    KAUST Repository

    Ansari, Imran Shafique

    2015-05-01

    Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.

  16. Multilevel quadrature of elliptic PDEs with log-normal diffusion

    KAUST Repository

    Harbrecht, Helmut

    2015-01-07

    We apply multilevel quadrature methods for the moment computation of the solution of elliptic PDEs with lognormally distributed diffusion coefficients. The computation of the moments is a difficult task since they appear as high dimensional Bochner integrals over an unbounded domain. Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number of quadrature points times the complexity for a single elliptic PDE solve. The multilevel idea is to reduce this complexity by combining quadrature methods with different accuracies with several spatial discretization levels in a sparse-grid-like fashion.
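
The multilevel idea can be sketched with a toy surrogate: replace the finite-element PDE solve at level l by any level-dependent approximation f_l, and combine levels through the telescoping sum E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], spending fewer samples on the finer, more expensive levels. Everything below (the midpoint-rule surrogate, the sample counts) is an illustrative assumption, not the papers' finite-element setup:

```python
import math
import random

random.seed(3)

# Level-l surrogate for an expensive solve: midpoint-rule quadrature of
# exp(y*t) on [0,1] with 2**l cells; the exact limit is (e^y - 1)/y.
def f_level(y, level):
    cells = 2 ** level
    h = 1.0 / cells
    return h * sum(math.exp(y * (i + 0.5) * h) for i in range(cells))

L = 5
samples = [4000 // 2 ** l + 1 for l in range(L + 1)]  # N_l shrinks with level
estimate = 0.0
for l, n_l in enumerate(samples):
    acc = 0.0
    for _ in range(n_l):
        y = random.gauss(0.0, 1.0)  # log of the random diffusion coefficient
        acc += f_level(y, l) - (f_level(y, l - 1) if l > 0 else 0.0)
    estimate += acc / n_l  # telescoping sum over level corrections

# Reference value by dense single-level sampling at a fine level.
ref = sum(f_level(random.gauss(0.0, 1.0), 8) for _ in range(20000)) / 20000
print(estimate, ref)
```

The correction terms f_l - f_{l-1} shrink rapidly with l, so most samples can be spent on the coarse (cheap) levels without losing accuracy.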

  17. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample’s mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample’s coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
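
The baseline estimators mentioned above can be compared in a small simulation. The sketch below contrasts the sample mean with the usual log-scale maximum likelihood estimator exp(mu_hat + sigma_hat^2/2); it does not reproduce the paper's coefficient-of-variation estimator, and all parameters are illustrative:

```python
import math
import random
import statistics

random.seed(4)

mu, sigma, n, reps = 0.0, 1.0, 25, 4000
true_mean = math.exp(mu + sigma**2 / 2)  # lognormal mean

mle_est, naive_est = [], []
for _ in range(reps):
    xs = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
    logs = [math.log(x) for x in xs]
    m_hat = statistics.fmean(logs)
    s2_hat = statistics.pvariance(logs)  # MLE uses the biased variance
    mle_est.append(math.exp(m_hat + s2_hat / 2))  # log-scale MLE of the mean
    naive_est.append(statistics.fmean(xs))        # plain sample mean

# Compare mean-squared errors of the two estimators.
mse_mle = statistics.fmean((e - true_mean) ** 2 for e in mle_est)
mse_naive = statistics.fmean((e - true_mean) ** 2 for e in naive_est)
print(mse_mle, mse_naive)
```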

  18. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    CERN Document Server

    Clerkin, L; Manera, M; Lahav, O; Abdalla, F; Amara, A; Bacon, D; Chang, C; Gaztañaga, E; Hawken, A; Jain, B; Joachimi, B; Vikram, V; Abbott, T; Allam, S; Armstrong, R; Benoit-Lévy, A; Bernstein, G M; Bertin, E; Brooks, D; Burk, D L; Rosell, A Carnero; Kind, M Carrasco; Crocce, M; Cunha, C E; D'Andrea, C B; da Costa, L N; Desai, S; Diehl, H T; Dietrich, J P; Eifler, T F; Evrard, A E; Flaugher, B; Fosalba, P; Frieman, J; Gerdes, D W; Gruen, D; Gruendl, R A; Gutierrez, G; Honscheid, K; James, D J; Kent, S; Kuehn, K; Kuropatkin, N; Lima, M; Melchior, P; Miquel, R; Nord, B; Plazas, A A; Romer, A K; Sanchez, E; Schubnell, M; Sevilla-Noarbe, I; Smith, R C; Santos, M Soares; Sobreira, F; Suchyta, E; Swanson, M E C; Tarle, G; Walker, A R

    2016-01-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL d...

  19. Pareto tails and lognormal body of US cities size distribution

    Science.gov (United States)

    Luckstead, Jeff; Devadoss, Stephen

    2017-01-01

    We consider a distribution consisting of a lower-tail Pareto, a lognormal body, and an upper-tail Pareto to estimate the size distribution of all US cities. This distribution fits the data more accurately than one comprising only a lognormal body and an upper-tail Pareto.

  20. Lognormal Approximation of Complex Path-Dependent Pension Scheme Payoffs

    DEFF Research Database (Denmark)

    Jørgensen, Peter Løchte

    2007-01-01

    properties are accurately approximated by a suitably adapted lognormal distribution. The quality of the lognormal approximation is explored via a range of simulation-based numerical experiments, and we point to several other potential practical applications of the paper's theoretical results....

  1. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    NARCIS (Netherlands)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-01-01

    In this paper we investigate the approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[SΛ] of S with respect to a conditioning random variable Λ. The choice of Λ is crucial in order to

  2. A Lognormal Recurrent Network Model for Burst Generation during Hippocampal Sharp Waves.

    Science.gov (United States)

    Omura, Yoshiyuki; Carvalho, Milena M; Inokuchi, Kaoru; Fukai, Tomoki

    2015-10-28

    The strength of cortical synapses is distributed lognormally, with a long tail of strong synapses. Various properties of neuronal activity, such as the average firing rates of neurons, the rate and magnitude of spike bursts, the magnitude of population synchrony, and the correlations between presynaptic and postsynaptic spikes, also obey lognormal-like distributions, as reported in the rodent hippocampal CA1 and CA3 areas. Theoretical models have demonstrated how such a firing rate distribution emerges from neural network dynamics. However, how the other properties also display lognormal patterns remains unknown. Because these features are likely to originate from neural dynamics in CA3, we model a recurrent neural network with the weights of recurrent excitatory connections distributed lognormally to explore the underlying mechanisms and their functional implications. Using multi-timescale adaptive threshold neurons, we construct a low-frequency spontaneous firing state of bursty neurons. This state closely replicates the observed statistical properties of population synchrony in hippocampal pyramidal cells. Our results show that the lognormal distribution of synaptic weights consistently accounts for the observed long-tailed features of hippocampal activity. Furthermore, our model demonstrates that bursts spread over the lognormal network much more effectively than single spikes, implying an advantage of spike bursts in information transfer. This efficiency in burst propagation is not found in neural network models with Gaussian-weighted recurrent excitatory synapses. Our model proposes a potential network mechanism to generate sharp waves in CA3 and associated ripples in CA1 because bursts occur in CA3 pyramidal neurons most frequently during sharp waves.

  3. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    Energy Technology Data Exchange (ETDEWEB)

    Agterberg, Frits, E-mail: agterber@nrcan.gc.ca [Geological Survey of Canada (Canada)

    2017-07-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that

  4. Recovering the nonlinear density field from the galaxy distribution with a Poisson-Lognormal filter

    CERN Document Server

    Kitaura, Francisco S; Metcalf, R Benton

    2009-01-01

    We present a general expression for a lognormal filter given an arbitrary nonlinear galaxy bias. We derive this filter as the maximum a posteriori solution assuming a lognormal prior distribution for the matter field with a given mean field and modeling the observed galaxy distribution by a Poissonian process. We have performed a three-dimensional implementation of this filter with a very efficient Newton-Krylov inversion scheme. Furthermore, we have tested it with a dark matter N-body simulation assuming a unit galaxy bias relation and compared the results with previous density field estimators like the inverse weighting scheme and Wiener filtering. Our results show good agreement with the underlying dark matter field for overdensities even above delta~1000 which exceeds by one order of magnitude the regime in which the lognormal is expected to be valid. The reason is that for our filter the lognormal assumption enters as a prior distribution function, but the maximum a posteriori solution is also conditione...

  5. Estimation of expected value and coefficient of variation for lognormal and gamma distributions

    Energy Technology Data Exchange (ETDEWEB)

    White, G.C.

    1978-07-01

    Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small sample statistical properties of each of these estimators were compared for their own distributions, as well as for the other distribution, to check the robustness of the estimator. The arithmetic mean is known to provide an unbiased estimator of expected value when the underlying density of the sample is either lognormal or gamma, and results indicated the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of expected value, and the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two.

  6. Enzyme inactivation analyses for industrial blanching applications employing 2450 Mhz monomode microwave cavities.

    Science.gov (United States)

    Sánchez-Hernández, D; Devece, C; Catalá, J M; Rodríguez-López, J N; Tudela, J; García-Cánovas, F; de los Reyes, E

    1999-01-01

    Browning reactions in fruits and vegetables are recognized as a serious problem for the European food industry, particularly for the mushroom sector. The major enzyme responsible for the browning reaction is polyphenoloxidase (PPO). In this paper considerable reduction has been achieved in both the time and temperature required for complete microwave enzyme inactivation compared to conventional hot-water treatments, which can be translated into both increased benefits and enhanced quality products for the food industry. Furthermore, the short exposure time required for complete inactivation of aqueous solutions of PPO irradiated with microwaves within monomode cavities is very important to reduce the browning rate of mushroom extracts, and could lead to a much greater product profitability when treating whole processed mushrooms.

  7. The Sum and Difference of Two Lognormal Random Variables

    Directory of Open Access Journals (Sweden)

    C. F. Lo

    2012-01-01

    Full Text Available We have presented a new unified approach to model the dynamics of both the sum and difference of two correlated lognormal stochastic variables. By the Lie-Trotter operator splitting method, both the sum and difference are shown to follow a shifted lognormal stochastic process, and approximate probability distributions are determined in closed form. Illustrative numerical examples are presented to demonstrate the validity and accuracy of these approximate distributions. In terms of the approximate probability distributions, we have also obtained an analytical series expansion of the exact solutions, which can allow us to improve the approximation in a systematic manner. Moreover, we believe that this new approach can be extended to study both (1) the algebraic sum of N lognormals, and (2) the sum and difference of other correlated stochastic processes, for example, two correlated CEV processes, two correlated CIR processes, and two correlated lognormal processes with mean-reversion.
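
A quick numerical check of the idea that a sum of two correlated lognormals is itself close to lognormal. The sketch uses the classical moment-matching (Fenton-Wilkinson) approximation rather than the paper's Lie-Trotter construction, with illustrative parameters:

```python
import math
import random

random.seed(5)

mu1, mu2, s1, s2, rho = 0.0, 0.2, 0.4, 0.5, 0.6

def sample_pair():
    # Two correlated lognormals built from correlated standard normals.
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    return math.exp(mu1 + s1 * z1), math.exp(mu2 + s2 * z2)

sums = [a + b for a, b in (sample_pair() for _ in range(100_000))]

# Fenton-Wilkinson: match the first two moments of S = X1 + X2
# with a lognormal LN(m, v).
e1 = sum(sums) / len(sums)
e2 = sum(x * x for x in sums) / len(sums)
v = math.log(e2 / e1**2)
m = math.log(e1) - v / 2

# Compare an upper-tail probability against the fitted lognormal.
t = 5.0
p_mc = sum(1 for x in sums if x > t) / len(sums)
p_ln = 0.5 * (1.0 - math.erf((math.log(t) - m) / math.sqrt(2.0 * v)))
print(p_mc, p_ln)
```

For moderate log-scale volatilities the fitted lognormal tracks the empirical tail closely, consistent with the shifted-lognormal result above.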

  8. Gaussian and Lognormal Models of Hurricane Gust Factors

    Science.gov (United States)

    Merceret, Frank

    2009-01-01

    A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak speeds, given the mean wind speed at various heights of up to 500 feet (150 meters) above ground level. Empirical models to calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel based on data from previous hurricanes. Separate models were developed for Gaussian and offset lognormal distributions of the gust factor. Rather than forecasting a single, specific peak wind speed, this tool provides the probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold. The tool produces the probability from each model that the given threshold will be exceeded. This application does have its limits: the models were tested only in tropical storm conditions associated with the periphery of hurricanes. Winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change those winds' statistical characteristics. These models were developed along the Central Florida seacoast, and their results may not accurately extrapolate to inland areas, or even to coastal sites that differ from those used to build the models. Although this tool cannot be generalized for use in different environments, its methodology could be applied to those locations to develop a similar tool tuned to local conditions.
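
The tool's Gaussian branch reduces to a normal survival-function evaluation once the gust-factor mean and standard deviation are modeled. In the sketch below the linear mean/sigma formulas are placeholders, not the empirical hurricane fits from the document:

```python
import math

def exceedance_probability(mean_wind, height_ft, threshold):
    # Hypothetical gust-factor statistics as functions of height and speed
    # (stand-ins for the Excel-fitted empirical models).
    gf_mean = 1.5 - 0.001 * height_ft + 0.002 * mean_wind
    gf_sigma = 0.10 + 0.0002 * height_ft
    peak_mean = gf_mean * mean_wind    # expected peak speed
    peak_sigma = gf_sigma * mean_wind  # spread of the peak speed
    z = (threshold - peak_mean) / peak_sigma
    # Gaussian survival function: P(peak > threshold).
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

p = exceedance_probability(mean_wind=40.0, height_ft=100.0, threshold=65.0)
print(p)
```

An offset-lognormal branch would replace the survival function with one evaluated on a shifted log scale, keeping the same interface.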

  9. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    NARCIS (Netherlands)

    van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal d

  10. Approximation to Distribution of Product of Random Variables Using Orthogonal Polynomials for Lognormal Density

    CERN Document Server

    Zheng, Zhong; Hämäläinen, Jyri; Tirkkonen, Olav

    2012-01-01

    We derive a closed-form expression for the orthogonal polynomials associated with the general lognormal density. The result can be utilized to construct easily computable approximations for the probability density function of a product of random variables. As an example, we have calculated the approximate distribution for the product of correlated Nakagami-m variables. Simulations indicate that the accuracy of the proposed approximation is good.

  11. Parameter estimation and forecasting for multiplicative log-normal cascades.

    Science.gov (United States)

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.

  12. On the Bivariate Nakagami-Lognormal Distribution and Its Correlation Properties

    Directory of Open Access Journals (Sweden)

    Juan Reig

    2014-01-01

    Full Text Available The bivariate Nakagami-lognormal distribution used to model the composite fast fading and shadowing has been examined exhaustively. In particular, we have derived the joint probability density function, the cross-moments, and the correlation coefficient in power terms. Also, two procedures to generate two correlated Nakagami-lognormal random variables are described. These procedures can be used to evaluate the robustness of the sample correlation coefficient distribution in both macro- and microdiversity scenarios. It is shown that the bias and the standard deviation of this sample correlation coefficient are substantially high for large shadowing standard deviations found in wireless communication measurements, even if the number of observations is considerable.

  13. Packing fraction of particles with lognormal size distribution.

    Science.gov (United States)

    Brouwers, H J H

    2014-05-01

    This paper addresses the packing and void fraction of polydisperse particles with a lognormal size distribution. It is demonstrated that a binomial particle size distribution can be transformed into a continuous particle-size distribution of the lognormal type. Furthermore, an original and exact expression is derived that predicts the packing fraction of mixtures of particles with a lognormal distribution, which is governed by the standard deviation, mode of packing, and particle shape only. For a number of particle shapes and their packing modes (close, loose) the applicable values are given. This closed-form analytical expression governing the packing fraction is thoroughly compared with empirical and computational data reported in the literature, and good agreement is found.

  14. Packing fraction of particles with lognormal size distribution

    Science.gov (United States)

    Brouwers, H. J. H.

    2014-05-01

    This paper addresses the packing and void fraction of polydisperse particles with a lognormal size distribution. It is demonstrated that a binomial particle size distribution can be transformed into a continuous particle-size distribution of the lognormal type. Furthermore, an original and exact expression is derived that predicts the packing fraction of mixtures of particles with a lognormal distribution, which is governed by the standard deviation, mode of packing, and particle shape only. For a number of particle shapes and their packing modes (close, loose) the applicable values are given. This closed-form analytical expression governing the packing fraction is thoroughly compared with empirical and computational data reported in the literature, and good agreement is found.

  15. Lognormal Behavior of the Size Distributions of Animation Characters

    Science.gov (United States)

    Yamamoto, Ken

    This study investigates the statistical properties of character sizes in animation, superhero series, and video games. By using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) both follow the lognormal distribution. As the theoretical mechanism of this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.
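
The proposed mechanism (normal distribution plus the Weber-Fechner law) can be checked in a few lines: if the perceived magnitude p = k ln(size) is normally distributed across characters, then size itself is lognormal, so log-sizes should be symmetric while raw sizes are right-skewed. Parameters here are illustrative:

```python
import math
import random
import statistics

random.seed(6)

# Weber-Fechner sketch: normally distributed "perceived sizes" mapped
# back through the exponential give lognormal physical sizes.
k = 2.0
perceived = [random.gauss(5.0, 1.0) for _ in range(50_000)]
sizes = [math.exp(p / k) for p in perceived]

def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 3 for x in xs)

skew_size = skewness(sizes)                          # clearly positive
skew_log = skewness([math.log(x) for x in sizes])    # near zero
print(skew_size, skew_log)
```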

  16. The Razor's Edge of Collapse: The Transition Point from Lognormal to Powerlaw in Molecular Cloud PDFs

    CERN Document Server

    Burkhart, Blakesley; Collins, David

    2016-01-01

    We derive an analytic expression for the transitional column density value ($s_t$) between the lognormal and power-law form of the probability distribution function (PDF) in star-forming molecular clouds. Our expression for $s_t$ depends on the mean column density, the variance of the lognormal portion of the PDF, and the slope of the power-law portion of the PDF. We show that $s_t$ can be related to physical quantities such as the sonic Mach number of the flow and the power-law index for a self-gravitating isothermal sphere. This implies that the transition point between the lognormal and power-law density/column density PDF represents the critical density where turbulent and thermal pressure balance, the so-called "post-shock density." We test our analytic prediction for the transition column density using dust PDF observations reported in the literature as well as numerical MHD simulations of self-gravitating supersonic turbulence with the Enzo code. We find excellent agreement between the analytic $s_t$ a...

  17. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep;

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be eff...
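
A crude Monte Carlo baseline for this tail-probability problem can be sketched by sampling correlated Gaussians and exponentiating, here for two lognormals with correlation `rho` (illustrative parameters). This is the naive estimator that efficient schemes such as the paper's improve upon; its relative error deteriorates as `x` grows:

```python
import math
import random

random.seed(7)

def corr_lognormal_pair(mu, sigma, rho):
    """One draw of (X1, X2) = (e^Y1, e^Y2) with corr(Y1, Y2) = rho."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    return math.exp(mu + sigma * z1), math.exp(mu + sigma * z2)

def tail_prob(x, mu=0.0, sigma=1.0, rho=0.5, n=200_000):
    """Crude Monte Carlo estimate of P(X1 + X2 > x)."""
    hits = 0
    for _ in range(n):
        x1, x2 = corr_lognormal_pair(mu, sigma, rho)
        if x1 + x2 > x:
            hits += 1
    return hits / n

p5, p20 = tail_prob(5.0), tail_prob(20.0)
print(p5, p20)
```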

  18. A lognormal model for response times on test items

    NARCIS (Netherlands)

    van der Linden, Willem J.

    2006-01-01

    A lognormal model for the response times of a person on a set of test items is investigated. The model has a parameter structure analogous to the two-parameter logistic response models in item response theory, with a parameter for the speed of each person as well as parameters for the time intensity
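
A common parameterization of such a model (an assumption on our part; van der Linden's paper should be consulted for the exact form) writes the log response time of a person on an item as the item's time intensity minus the person's speed, plus normal noise. A minimal simulation sketch with illustrative parameter values:

```python
import random
import statistics

random.seed(1)

def sim_log_time(beta_i, tau_j, alpha_i):
    """One simulated log response time: ln T = beta_i - tau_j + N(0, 1/alpha_i)."""
    return beta_i - tau_j + random.gauss(0.0, 1.0 / alpha_i)

# Hypothetical item/person parameters (illustrative values only):
# time intensity, person speed, and a precision parameter.
beta, tau, alpha = 4.0, 0.5, 2.0
log_times = [sim_log_time(beta, tau, alpha) for _ in range(50_000)]

mean_log_time = statistics.fmean(log_times)  # should sit near beta - tau
print(mean_log_time)
```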

  19. On the Laplace transform of the Lognormal distribution

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    Integral transforms of the lognormal distribution are of great importance in statistics and probability, yet closed-form expressions do not exist. A wide variety of methods have been employed to provide approximations, both analytical and numerical. In this paper, we analyze a closed-form approxi...... to construct a reliable Monte Carlo estimator of L(θ) and prove it to be logarithmically efficient in the rare event sense as θ→∞....
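
For moderate θ, the Laplace transform L(θ) = E[e^(−θX)] can at least be estimated by crude Monte Carlo, which is the baseline that a logarithmically efficient estimator like the paper's improves on as θ grows. A minimal sketch:

```python
import math
import random
import statistics

random.seed(3)

def laplace_mc(theta, mu=0.0, sigma=1.0, n=100_000):
    """Crude Monte Carlo estimate of L(theta) = E[exp(-theta * X)], X lognormal."""
    return statistics.fmean(
        math.exp(-theta * random.lognormvariate(mu, sigma)) for _ in range(n)
    )

estimates = {theta: laplace_mc(theta) for theta in (0.0, 0.5, 1.0, 2.0)}
print(estimates)
```

L(0) = 1 by construction, and the estimates decrease monotonically in θ; for large θ the integrand concentrates in the rare left tail of X and crude MC becomes inefficient, which is the regime the paper targets.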

  20. SAMPLING INSPECTION OF RELIABILITY IN (LOG)NORMAL CASE WITH TYPE I CENSORING

    Institute of Scientific and Technical Information of China (English)

    Wu Qiguang; Lu Jianhua

    2006-01-01

    This article proposes a statistical method for working out reliability sampling plans under a Type I censored sample for items whose failure times have either normal or lognormal distributions. The quality statistic is a method-of-moments estimator of a monotonous function of the unreliability. An approach to choosing a truncation time is recommended. The sample size and acceptability constant are approximately determined by using the Cornish-Fisher expansion for quantiles of the distribution. Simulation results show that the method given in this article is feasible.

  1. Comparative evaluation of dental resin composites based on micron- and submicron-sized monomodal glass filler particles.

    Science.gov (United States)

    Valente, Lisia L; Peralta, Sonia L; Ogliari, Fabrício A; Cavalcante, Larissa M; Moraes, Rafael R

    2013-11-01

    A model resin composite containing a novel monomodal inorganic filler system based on submicron-sized Ba-Si-Al glass particles (NanoFine NF180; Schott) was formulated and compared with an experimental composite containing micron-sized particles (UltraFine UF1.0; Schott). The filler particles were characterized using X-ray microanalysis and granulometry, while the composites were characterized in terms of filler-resin morphology, radiopacity, degree of C=C conversion, hardness, flexural strength/modulus, work-of-fracture, surface roughness and gloss (before and after simulated toothbrushing abrasion), and bulk compressive creep. The composites were formulated from the same photoactivated dimethacrylate co-monomer, incorporating mass fractions of 75% micron- and 78% submicron-sized particles. Quantitative data were analyzed at a significance level of p<0.05. The composites were similar in radiopacity, flexural strength, work-of-fracture, and creep. The submicron composite was harder but had lower flexural modulus and C=C conversion. No significant differences in roughness were observed before brushing, although the submicron composite had higher gloss. Brushing increased roughness and decreased gloss on both materials, but the submicron composite retained higher gloss after brushing. The monomodal submicron glass filler system demonstrated potential for use in restorative dental composites, particularly due to improved esthetic properties. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  2. Pedagogical Comparison of Five Reactions Performed under Microwave Heating in Multi-Mode versus Mono-Mode Ovens: Diels-Alder Cycloaddition, Wittig Salt Formation, E2 Dehydrohalogenation to Form an Alkyne, Williamson Ether Synthesis, and Fischer Esterification

    Science.gov (United States)

    Baar, Marsha R.; Gammerdinger, William; Leap, Jennifer; Morales, Erin; Shikora, Jonathan; Weber, Michael H.

    2014-01-01

    Five reactions were rate-accelerated relative to the standard reflux workup in both multi-mode and mono-mode microwave ovens, and the results were compared to determine whether the sequential processing of a mono-mode unit could provide for better lab logistics and pedagogy. Conditions were optimized so that yields matched in both types of…

  4. Asymptotics of sums of lognormal random variables with Gaussian copula

    DEFF Research Database (Denmark)

    Asmussen, Søren; Rojas-Nandayapa, Leonardo

    2008-01-01

    Let (Y1, ..., Yn) have a joint n-dimensional Gaussian distribution with a general mean vector and a general covariance matrix, and let Xi = e^Yi, Sn = X1 + ⋯ + Xn. The asymptotics of P(Sn > x) as x → ∞ are shown to be the same as for the independent case with the same lognormal marginals. In particular, for identical marginals it holds that P(Sn > x) ∼ n P(X1 > x) no matter what the correlation structure is. © 2008 Elsevier B.V. All rights reserved.

  5. On the log-normal distribution of network traffic

    Science.gov (United States)

    Antoniou, I.; Ivanov, V. V.; Ivanov, Valery V.; Zrelov, P. V.

    2002-07-01

    A detailed analysis of traffic measurements shows that the aggregation of these measurements forms a statistical distribution, which is approximated with high accuracy by the log-normal distribution. The inter-arrival times and packet sizes, contributing to the formation of network traffic, can be considered as independent. Applying the wavelet transform to traffic measurements, we demonstrate the multiplicative character of traffic series. This result confirms that the scheme, developed by Kolmogorov [Dokl. Akad. Nauk SSSR 31 (1941) 99] for the homogeneous fragmentation of grains, applies also to network traffic.

  6. Time Truncated Testing Strategy using Multiple Testers: Lognormal Distributed Lifetime

    Directory of Open Access Journals (Sweden)

    Itrat Batool Naqvi

    2014-06-01

    In this study, the group acceptance sampling plan proposed by Aslam et al. (2011) is reconsidered for the case where the lifetime variate of the test item follows the lognormal distribution. The optimal plan parameters are obtained by considering various pre-specified parameters, using a non-linear optimization solution with a two-point approach. The advantage of the proposed plan over the existing plan based on the single-point approach is discussed, and the proposed plan is shown to be more efficient than the existing plan.

  7. The IMACS Cluster Building Survey: IV. The Log-normal Star Formation History of Galaxies

    CERN Document Server

    Gladders, Michael D; Dressler, Alan; Poggianti, Bianca; Vulcani, Benedetta; Abramson, Louis

    2013-01-01

    We present here a simple model for the star formation history of galaxies that is successful in describing both the star formation rate density over cosmic time, as well as the distribution of specific star formation rates of galaxies at the current epoch, and the evolution of this quantity in galaxy populations to a redshift of z=1. We show first that the cosmic star formation rate density is remarkably well described by a simple log-normal in time. We next postulate that this functional form for the ensemble is also a reasonable description for the star formation histories of individual galaxies. Using the measured specific star formation rates for galaxies at z~0 from Paper III in this series, we then construct a realisation of a universe populated by such galaxies in which the parameters of the log-normal star formation history of each galaxy are adjusted to match the specific star formation rates at z~0 as well as fitting, in ensemble, the cosmic star formation rate density from z=0 to z=8. This model pr...
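
The log-normal star formation history referred to above is, up to normalization, a lognormal density in time: SFR(t) ∝ (1/t) exp(−(ln t − T0)² / (2τ²)). A small sketch of this functional form with purely illustrative parameters `T0` and `TAU` (not fitted values from the paper); the peak sits at the lognormal mode exp(T0 − TAU²):

```python
import math

def lognormal_sfr(t, t0, tau):
    """Log-normal star formation history, up to an overall normalization:
    SFR(t) proportional to (1/t) * exp(-(ln t - t0)^2 / (2 tau^2))."""
    return math.exp(-((math.log(t) - t0) ** 2) / (2.0 * tau * tau)) / t

# Illustrative parameters (not the fitted values from the paper).
T0, TAU = 1.5, 0.8

# Locate the peak numerically on a grid of times t in (0, 20].
grid = [i / 100.0 for i in range(1, 2001)]
t_peak = max(grid, key=lambda t: lognormal_sfr(t, T0, TAU))
print(t_peak)  # analytically, the mode is exp(T0 - TAU**2)
```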

  8. A method to dynamic stochastic multicriteria decision making with log-normally distributed random variables.

    Science.gov (United States)

    Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue

    2013-01-01

    We investigate the dynamic stochastic multicriteria decision making (SMCDM) problems, in which the criterion values take the form of log-normally distributed random variables, and the argument information is collected from different periods. We propose two new geometric aggregation operators, such as the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG operator and the LNDWG operator to aggregate the log-normally distributed criterion values, utilizes the entropy model of Shannon to generate the time weight vector, and utilizes the expectation values and variances of log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of this developed method.

  9. A Method to Dynamic Stochastic Multicriteria Decision Making with Log-Normally Distributed Random Variables

    Directory of Open Access Journals (Sweden)

    Xin-Fan Wang

    2013-01-01

    We investigate dynamic stochastic multicriteria decision making (SMCDM) problems, in which the criterion values take the form of log-normally distributed random variables, and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG operator and the LNDWG operator to aggregate the log-normally distributed criterion values, utilizes the entropy model of Shannon to generate the time weight vector, and utilizes the expectation values and variances of log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of this developed method.

  10. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  11. On the capacity of FSO links under lognormal and Rician-lognormal turbulences

    KAUST Repository

    Ansari, Imran Shafique

    2014-09-01

    A unified capacity analysis under weak and composite turbulences of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, a unified exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, unified approximate and simple closed-form results are offered for the ergodic capacity at the high SNR regime as well as at the low SNR regime. All the presented results are verified via computer-based Monte-Carlo simulations.

  12. Galaxy rotation curves with log-normal density distribution

    CERN Document Server

    Marr, John H

    2015-01-01

    The log-normal distribution represents the probability of finding randomly distributed particles in a micro canonical ensemble with high entropy. To a first approximation, a modified form of this distribution with a truncated termination may represent an isolated galactic disk, and this disk density distribution model was therefore run to give the best fit to the observational rotation curves for 37 representative galaxies. The resultant curves closely matched the observational data for a wide range of velocity profiles and galaxy types with rising, flat or descending curves in agreement with Verheijen's classification of 'R', 'F' and 'D' type curves, and the corresponding theoretical total disk masses could be fitted to a baryonic Tully Fisher relation (bTFR). Nine of the galaxies were matched to galaxies with previously published masses, suggesting a mean excess dynamic disk mass of dex0.61+/-0.26 over the baryonic masses. Although questionable with regard to other measurements of the shape of disk galaxy g...

  13. Analysis of random laser scattering pulse signals with lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    Yan Zhen-Gang; Bian Bao-Min; Wang Shou-Yu; Lin Ying-Lu; Wang Chun-Yong; Li Zhen-Hua

    2013-01-01

    The statistical distribution of natural phenomena is of great significance in studying the laws of nature. In order to study the statistical characteristics of a random pulse signal, a random process model is proposed theoretically for better studying of the random law of measured results. Moreover, a simple random pulse signal generation and testing system is designed for studying the counting distributions of three typical objects, including particles suspended in the air, standard particles, and background noise. Both normal and lognormal distribution fittings are used for analyzing the experimental results, tested by a chi-square goodness-of-fit test and correlation coefficient for comparison. In addition, the statistical laws of the three typical objects and the relations between them are discussed in detail. The relation is also the non-integral dimension fractal relation of statistical distributions of different random laser scattering pulse signal groups.

  14. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    Let X be lognormal(μ,σ²) with density f(x), let θ>0 and define L(θ) = E[e^(−θX)]. We study properties of the exponentially tilted density (Esscher transform) f_θ(x) = e^(−θx) f(x)/L(θ), in particular its moments, its asymptotic form as θ→∞ and asymptotics for the saddlepoint θ(x) determined by E[X e^(−θX)]/L(θ) = x.... demonstrate logarithmic efficiency. Numerical examples for the cdf F_n(x) and the pdf f_n(x) of S_n are given in a range of values of σ², n, x motivated from portfolio Value-at-Risk calculations.
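
The tilted density f_θ(x) = e^(−θx) f(x)/L(θ) and its moments can be explored numerically for moderate θ with simple quadrature. A sketch using the midpoint rule on a truncated range, an approximation that is adequate here because the integrand decays quickly:

```python
import math

def lognormal_pdf(x, mu=0.0, sigma=1.0):
    """Density of lognormal(mu, sigma) at x > 0."""
    return math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma**2)) / (
        x * sigma * math.sqrt(2 * math.pi)
    )

def tilted_mean(theta, upper=60.0, n=60_000):
    """Mean of the Esscher transform f_theta(x) = exp(-theta*x) f(x) / L(theta),
    computed by midpoint-rule integration on (0, upper]."""
    h = upper / n
    num = den = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        w = math.exp(-theta * x) * lognormal_pdf(x)
        den += w * h          # accumulates L(theta)
        num += x * w * h      # accumulates E[X exp(-theta X)]
    return num / den

means = [tilted_mean(th) for th in (0.0, 1.0, 4.0)]
print(means)
```

At θ = 0 this recovers the untilted lognormal mean exp(μ + σ²/2); increasing θ shifts the tilted mass toward the origin, consistent with the saddlepoint equation above.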

  15. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements on fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies conducted, we have shown that to use the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of the specific hyperparameter, which if it is not given extra attention, may affect the final estimates. The last issue is regarding the tuning parameters where they depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.

  16. The Razor’s Edge of Collapse: The Transition Point from Lognormal to Power-Law Distributions in Molecular Clouds

    Science.gov (United States)

    Burkhart, Blakesley; Stalpes, Kye; Collins, David C.

    2017-01-01

    We derive an analytic expression for the transitional column density value (η_t) between the lognormal and power-law form of the probability distribution function (PDF) in star-forming molecular clouds. Our expression for η_t depends on the mean column density, the variance of the lognormal portion of the PDF, and the slope of the power-law portion of the PDF. We show that η_t can be related to physical quantities such as the sonic Mach number of the flow and the power-law index for a self-gravitating isothermal sphere. This implies that the transition point between the lognormal and power-law density/column density PDF represents the critical density where turbulent and thermal pressure balance, the so-called “post-shock density.” We test our analytic prediction for the transition column density using dust PDF observations reported in the literature, as well as numerical MHD simulations of self-gravitating supersonic turbulence with the Enzo code. We find excellent agreement between the analytic η_t and the measured values from the numerical simulations and observations (to within 1.2 A_V). We discuss the utility of our expression for determining the properties of the PDF from unresolved low-density material in dust observations, for estimating the post-shock density, and for determining the H I–H2 transition in clouds.

  17. Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights

    Science.gov (United States)

    Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato

    2016-07-01

    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.

  18. Threshold crossing in a single-mode laser with a homogeneous active medium: a self-consistent spectral study

    Science.gov (United States)

    Boucher, Y. G.

    2006-10-01

    We present a self-consistent study of threshold crossing in a single-mode laser with a homogeneous active medium. For the output power, the wavelength, and the linewidth, we obtain universal expressions in normalized coordinates that remain valid continuously on both sides of the threshold.

  19. Angular momentum of disc galaxies with a lognormal density distribution

    CERN Document Server

    Marr, John Herbert

    2015-01-01

    Whilst most galaxy properties scale with galaxy mass, similar scaling relations for angular momentum are harder to demonstrate. A lognormal (LN) density distribution for disc mass provides a good overall fit to the observational data for disc rotation curves for a wide variety of galaxy types and luminosities. In this paper, the total angular momentum J and energy |E| were computed for 38 disc galaxies from the published rotation curves and plotted against the derived disc masses, with best fit slopes of 1.683±0.018 and 1.643±0.038 respectively, using a theoretical model with a LN density profile. The derived mean disc spin parameter was λ = 0.423±0.014. Using the rotation curve parameters V_max and R_max as surrogates for the virial velocity and radius, the virial mass estimator M_disc ∝ R_max V_max^2 was also generated, with a log-log slope of 1.024±0.014 for the 38 galaxies, and a proportionality constant λ* = 1.47±0.20...

  20. Analysis of a stochastic model for bacterial growth and the lognormality in the cell-size distribution

    CERN Document Server

    Yamamoto, Ken

    2016-01-01

    This paper theoretically analyzes a phenomenological stochastic model for bacterial growth. This model comprises cell divisions and linear growth of cells, where growth rates and cell cycles are drawn from lognormal distributions. We derive that the cell size is expressed as a sum of independent lognormal variables. We show numerically that the quality of the lognormal approximation greatly depends on the distributions of the growth rate and cell cycle. Furthermore, we show that actual parameters of the growth rate and cell cycle take values which give good lognormal approximation, so the experimental cell-size distribution is in good agreement with a lognormal distribution.

  1. Handbook of tables for order statistics from lognormal distributions with applications

    CERN Document Server

    Balakrishnan, N

    1999-01-01

    Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...

  2. Can Self Organized Critical Accretion Disks Generate a Log-normal Emission Variability in AGN?

    CERN Document Server

    Kunjaya, Chatief; Vierdayanti, Kiki; Herlie, Stefani

    2011-01-01

    Active Galactic Nuclei (AGN), such as Seyfert galaxies, quasars, etc., show light variations in all wavelength bands, with various amplitudes and on many time scales. The variations usually look erratic, neither periodic nor purely random. Many of these objects also show a lognormal flux distribution, an RMS-flux relation, and a power-law frequency distribution. So far, the lognormal flux distribution of black hole objects has remained an observational fact without a satisfactory explanation of the physical mechanism producing such a distribution in the accretion disk. One of the most promising models, based on a cellular automaton mechanism, has been successful in reproducing the PSD (Power Spectral Density) of the observed objects but could not reproduce the lognormal flux distribution. Such a distribution requires the existence of an underlying multiplicative process, while the existing SOC models are based on additive processes. A modified SOC model based on a cellular automaton mechanism for producing a lognormal flux distribution is present...

  3. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
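
The claim that a sum of positive random variables can stay close to a log-normal can be illustrated numerically: for sums of a modest number of lognormal summands, the log of the sum is far less skewed than the sum itself. A sketch with illustrative parameters:

```python
import math
import random
import statistics

random.seed(11)

def skewness(xs):
    """Sample skewness (standardized third moment)."""
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return statistics.fmean(((x - m) / s) ** 3 for x in xs)

# Sums of a modest number of positive (here lognormal) summands.
sums = [
    sum(random.lognormvariate(0.0, 1.0) for _ in range(10))
    for _ in range(20_000)
]
log_sums = [math.log(s) for s in sums]

skew_sum = skewness(sums)        # strongly right-skewed
skew_log = skewness(log_sums)    # much closer to symmetric, i.e. near-normal log
print(skew_sum, skew_log)
```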

  4. High SNR BER comparison of coherent and differentially coherent modulation schemes in lognormal fading channels

    KAUST Repository

    Song, Xuegui

    2014-09-01

    Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes.
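
The 2.32 dB figure matches the constant produced by the classical high-SNR minimum-distance argument for DQPSK versus QPSK, whose penalty factor is 4 sin²(π/8); that the paper's auxiliary-random-variable derivation reduces to this constant is our assumption here, stated only as a numerical cross-check:

```python
import math

# High-SNR penalty of DQPSK relative to QPSK from the classical
# minimum-distance argument: effective power ratio 4*sin^2(pi/8).
penalty_db = -10.0 * math.log10(4.0 * math.sin(math.pi / 8.0) ** 2)
print(round(penalty_db, 2))  # 2.32
```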

  5. Log-normality of indoor radon data in the Walloon region of Belgium.

    Science.gov (United States)

    Cinelli, Giorgia; Tondeur, François

    2015-05-01

    The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but this low-C trend can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data, when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail for which there is no interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers, and rarely to a "fat tail". With very few exceptions, the log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups.

  6. Inference of bioequivalence for log-normal distributed data with unspecified variances.

    Science.gov (United States)

    Xu, Siyan; Hua, Steven Y; Menton, Ronald; Barker, Kerry; Menon, Sandeep; D'Agostino, Ralph B

    2014-07-30

    Two drugs are bioequivalent if the ratio of a pharmacokinetic (PK) parameter of two products falls within equivalence margins. The distribution of PK parameters is often assumed to be log-normal, therefore bioequivalence (BE) is usually assessed on the difference of logarithmically transformed PK parameters (δ). In the presence of unspecified variances, test procedures such as two one-sided tests (TOST) use sample estimates for those variances; Bayesian models integrate them out in the posterior distribution. These methods limit our knowledge on the extent that inference about BE is affected by the variability of PK parameters. In this paper, we propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: F-statistic function for variances and t-statistic function for δ. Demonstrated with published real-life data, the proposed method not only produces results that are same as TOST and comparable with Bayesian method but also helps identify ranges of variances, which could make the determination of BE more achievable. Our findings manifest the advantages of the proposed method in making inference about the extent that BE is affected by the unspecified variances, which cannot be accomplished either by TOST or Bayesian method.
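
For comparison, the TOST procedure mentioned above can be sketched via its standard 90% confidence-interval shortcut on the log scale. This sketch uses a normal approximation in place of the t distribution (a simplification that is reasonable at the sample sizes used here), entirely synthetic data, and the conventional 0.80-1.25 equivalence margins:

```python
import math
import random
import statistics
from statistics import NormalDist

random.seed(5)

def tost_log_scale(log_test, log_ref, margin=math.log(1.25), alpha=0.05):
    """TOST via the 90% CI shortcut: conclude bioequivalence iff the
    (1 - 2*alpha) CI for the mean log-difference lies inside +/- margin.
    Uses a normal approximation to the t distribution."""
    n1, n2 = len(log_test), len(log_ref)
    diff = statistics.fmean(log_test) - statistics.fmean(log_ref)
    se = math.sqrt(statistics.variance(log_test) / n1
                   + statistics.variance(log_ref) / n2)
    z = NormalDist().inv_cdf(1.0 - alpha)      # one-sided critical value
    lo, hi = diff - z * se, diff + z * se
    return -margin < lo and hi < margin

# Synthetic log-PK values: two formulations with nearly equal true means,
# plus a shifted copy whose ratio clearly exceeds the 1.25 margin.
test = [random.gauss(4.00, 0.25) for _ in range(200)]
ref = [random.gauss(4.02, 0.25) for _ in range(200)]
shifted = [x + 0.40 for x in test]

print(tost_log_scale(test, ref), tost_log_scale(shifted, ref))
```

The first call should conclude equivalence and the second should not; a likelihood approach like the paper's additionally exposes how the unspecified variances drive this conclusion.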

  7. Scaling Relations of Lognormal Type Growth Process with an Extremal Principle of Entropy

    Directory of Open Access Journals (Sweden)

    Zi-Niu Wu

    2017-01-01

    Full Text Available The scale, inflexion point and maximum point are important scaling parameters for studying growth phenomena whose size follows the lognormal function. The width of the size function and its entropy depend on the scale parameter (or the standard deviation) and measure the relative importance of production and dissipation involved in the growth process. The Shannon entropy increases monotonically with the scale parameter, but the slope has a minimum at √6/6. This value has been used previously to study the spreading of sprays and epidemics. In this paper, this approach of minimizing the entropy slope is discussed in a broader sense and applied to obtain the relationship between the inflexion point and the maximum point. It is shown that this relationship is determined by the base of the natural logarithm, e ≈ 2.718, and exhibits some geometrical similarity to the minimal surface energy principle. Known data from a number of problems, including the swirling rate of the bathtub vortex, further data on droplet splashing, population growth, the distribution of strokes in Chinese characters and the velocity profile of a turbulent jet, are used to assess to what extent the approach of minimizing the entropy slope can be regarded as useful.

  8. The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random Field Coefficients

    Directory of Open Access Journals (Sweden)

    Michal Beres

    2017-01-01

    Full Text Available This article presents a study of the Stochastic Galerkin Method (SGM applied to the Darcy flow problem with a log-normally distributed random material field given by a mean value and an autocovariance function. We divide the solution of the problem into two parts. The first one is the decomposition of a random field into a sum of products of a random vector and a function of spatial coordinates; this can be achieved using the Karhunen-Loeve expansion. The second part is the solution of the problem using SGM. SGM is a simple extension of the Galerkin method in which the random variables represent additional problem dimensions. For the discretization of the problem, we use a finite element basis for spatial variables and a polynomial chaos discretization for random variables. The results of SGM can be utilised for the analysis of the problem, such as the examination of the average flow, or as a tool for the Bayesian approach to inverse problems.
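The first step described above, the Karhunen-Loève decomposition of the random field, can be sketched discretely: eigendecompose the covariance matrix on a grid, truncate at a chosen energy fraction, and synthesize a lognormal field sample. The exponential covariance and its parameters here are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of a 1-D Gaussian log-permeability
# field with exponential autocovariance C(x, y) = s^2 * exp(-|x - y| / L).
n, s, L = 200, 1.0, 0.3
x = np.linspace(0.0, 1.0, n)
C = s**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)

vals, vecs = np.linalg.eigh(C)            # eigh returns ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]    # sort descending

m = np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95) + 1   # 95% of energy
rng = np.random.default_rng(6)
xi = rng.standard_normal(m)               # independent N(0,1) KL coordinates
log_k = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # truncated KL field sample
k_field = np.exp(log_k)                   # log-normal permeability sample
```

In the stochastic Galerkin setting, the truncated coordinates `xi` become the additional problem dimensions discretized by polynomial chaos.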

  9. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    Science.gov (United States)

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT.
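The maximum-likelihood step described above can be sketched as follows, using synthetic storm maxima rather than the actual −Dst record, and omitting the bootstrap confidence limits:

```python
import numpy as np

def fit_lognormal_mle(x):
    """Closed-form maximum-likelihood fit of a lognormal to positive data:
    mu, sigma are the mean and standard deviation of log(x)."""
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=0)

def exceedance_prob(threshold, mu, sigma):
    """P(X >= threshold) for X lognormal(mu, sigma)."""
    from math import erf, sqrt, log
    z = (log(threshold) - mu) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

# Synthetic storm-time maxima (nT); illustrative only, not the -Dst data set.
rng = np.random.default_rng(0)
maxima = rng.lognormal(mean=4.6, sigma=0.55, size=56)
mu, sigma = fit_lognormal_mle(maxima)
p_carrington = exceedance_prob(850.0, mu, sigma)   # per-storm exceedance prob.
```

Multiplying the per-storm exceedance probability by the storm occurrence rate gives the kind of per-century event rate quoted in the abstract.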

  10. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    Science.gov (United States)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.

  11. Threshold crossing in a single-mode laser: a phenomenological approach based on the rate equations

    Science.gov (United States)

    Boucher, Y.

    2004-11-01

    We present a phenomenological study of threshold crossing in a single-mode laser, starting from the rate equations for the medium (treated as a collection of open two-level systems) and for the optical field. We obtain universal analytical expressions in reduced coordinates, characterized by two independent parameters.

  12. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performances similar to those of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to tackle problems with rough coefficients efficiently as well, significantly improving the performance of a standard Monte Carlo scheme.

  13. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional measure of the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Bayesian models, or statistical smoothing based on a log-normal model, have therefore been introduced to address this problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method, and can overcome the SMR problem when no bladder cancer is observed in an area.
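The SMR itself is a simple ratio. A toy example with hypothetical area counts shows the weakness that the log-normal smoothing is designed to address: an area with zero observed cases gets a relative risk of exactly zero, however small its population:

```python
# Hypothetical observed and expected case counts per area (not the Libyan data).
observed = {"A": 12, "B": 0, "C": 5}
expected = {"A": 8.0, "B": 1.5, "C": 6.2}

# SMR_i = O_i / E_i: classical relative-risk estimate for each area.
smr = {area: observed[area] / expected[area] for area in observed}
# Area B gets SMR = 0 despite a nonzero expected count -- the degenerate
# estimate that model-based (e.g. log-normal) smoothing avoids by
# borrowing strength across areas.
```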

  14. Species Abundance in a Forest Community in South China: A Case of Poisson Lognormal Distribution

    Institute of Scientific and Technical Information of China (English)

    Zuo-Yun YIN; Hai REN; Qian-Mei ZHANG; Shao-Lin PENG; Qin-Feng GUO; Guo-Yi ZHOU

    2005-01-01

    Case studies on Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m×20 m, 5 m×5 m, and 1 m×1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal;(ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (σ andμ) for shrub layer were closer to those for the herb layer than those for the tree layer; and (iii) from the tree to the shrub to the herb layer, the σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflects the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.

  15. Can self-organized critical accretion disks generate a log-normal emission variability in AGN?

    Science.gov (United States)

    Kunjaya, C.; Mahasena, P.; Vierdayanti, K.; Herlie, S.

    2011-12-01

    Active Galactic Nuclei (AGN), such as Seyfert galaxies, quasars, etc., show light variations in all wavelength bands, with various amplitudes and on many time scales. The variations usually look erratic: neither periodic nor purely random. Many of these objects also show a lognormal flux distribution, an RMS-flux relation, and a power-law frequency distribution. So far, the lognormal flux distribution of black hole objects has been an observational fact without a satisfactory explanation of the physical mechanism producing such a distribution in the accretion disk. One of the most promising models, based on a cellular automaton mechanism, has been successful in reproducing the PSD (Power Spectral Density) of the observed objects but could not reproduce the lognormal flux distribution. Such a distribution requires an underlying multiplicative process, while the existing SOC models are based on additive processes. A modified SOC model based on a cellular automaton mechanism that produces a lognormal flux distribution is presented in this paper. The idea is that the energy released in the avalanche and diffusion in the accretion disk is not entirely emitted instantaneously, as in the original cellular automaton model. Part of the energy is kept in the disk, increasing its energy content, so that the next avalanche occurs in a higher-energy state and releases more energy. The later an avalanche occurs, the more energy is emitted towards the observer. This provides a multiplicative effect on the flux and produces a lognormal flux distribution.

  16. The lognormal perfusion model for disruption replenishment measurements of blood flow: in vivo validation.

    Science.gov (United States)

    Hudson, John M; Leung, Kogee; Burns, Peter N

    2011-10-01

    Dynamic contrast-enhanced ultrasound (DCE-US) is evolving as a promising tool to noninvasively quantify relative tissue perfusion in organs and solid tumours. Quantification using the method of disruption replenishment is best performed with a model that accurately describes the replenishment of microbubble contrast agents through the ultrasound imaging plane. In this study, the lognormal perfusion model was validated using an exposed in vivo rabbit kidney model. Compared against an implanted transit-time flow meter, longitudinal relative flow measurement was three times less variable and correlated better when quantification was performed with the lognormal perfusion model (Spearman r = 0.90, 95% confidence interval [CI] = 0.05) than with the prevailing mono-exponential model (Spearman r = 0.54, 95% CI = 0.18). Disruption-replenishment measurements using the lognormal perfusion model were reproducible in vivo to within 12%.

  17. Models for Unsaturated Hydraulic Conductivity Based on Truncated Lognormal Pore-size Distributions

    CERN Document Server

    Malama, Bwalya

    2013-01-01

    We develop a closed-form three-parameter model for unsaturated hydraulic conductivity associated with a three-parameter lognormal model of moisture retention, which is based on a lognormal grain-size distribution. The derivation of the model is made possible by a slight modification to the theory of Mualem. We extend the three-parameter lognormal distribution to a four-parameter model that also truncates the pore-size distribution at a minimum pore radius. We then develop the corresponding four-parameter model for moisture retention and the associated closed-form expression for unsaturated hydraulic conductivity. The four-parameter model is fitted to experimental data, in the manner of the models of Kosugi and van Genuchten. The proposed four-parameter model retains the physical basis of Kosugi's model while improving the fit to observed data, especially when simultaneously fitting pressure-saturation and pressure-conductivity data.
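For reference, the untruncated lognormal (Kosugi-type) model that the paper builds on has closed forms for both retention and Mualem conductivity; a sketch of those baseline formulas is below (the paper's truncated four-parameter variant is not reproduced here):

```python
from math import erfc, log, sqrt

def kosugi_Se(h, h_m, sigma):
    """Effective saturation for the lognormal retention model:
    Se(h) = (1/2) * erfc( ln(h/h_m) / (sqrt(2) * sigma) ),
    where h > 0 is suction head, h_m the median head, sigma the
    log-standard deviation of the pore-size distribution."""
    return 0.5 * erfc(log(h / h_m) / (sqrt(2.0) * sigma))

def kosugi_Kr(h, h_m, sigma):
    """Relative hydraulic conductivity from Mualem's theory applied to
    the lognormal pore-size model (closed form as given by Kosugi)."""
    Se = kosugi_Se(h, h_m, sigma)
    arg = log(h / h_m) / (sqrt(2.0) * sigma) + sigma / sqrt(2.0)
    return sqrt(Se) * (0.5 * erfc(arg)) ** 2
```

At the median head h = h_m the model gives Se = 0.5 by construction, and both Se and Kr decrease monotonically with increasing suction.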

  18. Outage Performance Analysis of Cooperative Diversity with MRC and SC in Correlated Lognormal Channels

    Directory of Open Access Journals (Sweden)

    Skraparlis D

    2009-01-01

    Full Text Available The study of relaying systems has found renewed interest in the context of cooperative diversity for communication channels suffering from fading. This paper provides analytical expressions for the end-to-end SNR and outage probability of cooperative diversity in correlated lognormal channels, typically found in indoor and specific outdoor environments. The system under consideration utilizes decode-and-forward relaying and Selection Combining or Maximum Ratio Combining at the destination node. The provided expressions are used to evaluate the gains of cooperative diversity compared to noncooperation in correlated lognormal channels, taking into account the spectral and energy efficiency of the protocols and the half-duplex or full-duplex capability of the relay. Our analysis demonstrates that correlation and the lognormal variances play a significant role in the performance gain of cooperative diversity over noncooperation.

  19. On Modelling Insurance Data by Using a Generalized Lognormal Distribution || Sobre la modelización de datos de seguros usando una distribución lognormal generalizada

    Directory of Open Access Journals (Sweden)

    García, Victoriano J.

    2014-12-01

    Full Text Available In this paper, a new heavy-tailed distribution is used to model data with a strong right tail, as often occurs in practical situations. The distribution proposed is derived from the lognormal distribution by using the Marshall and Olkin procedure. Some basic properties of this new distribution are obtained, and we present situations where it correctly reflects the sample behaviour of the right tail probability. An application of the model to dental insurance data is presented and analysed in depth. We conclude that the generalized lognormal distribution proposed should be taken into account, among other possible distributions, for insurance data in which the properties of a heavy-tailed distribution are present. || We present a new heavy-tailed lognormal distribution that adapts well to many practical situations in the insurance field. We use the Marshall and Olkin procedure to generate this distribution and study its basic properties. An application to dental insurance data is presented and analysed in depth, concluding that this distribution should form part of the catalogue of distributions to be considered for modelling insurance data in the presence of heavy tails.

  20. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multilevel Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. when modeling groundwater flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high-frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low-frequency error. We address how the total error can be estimated by the computable error.

  1. Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence

    KAUST Repository

    Song, Xuegui

    2015-03-01

    Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over lognormal turbulence channels. Both exact and approximate BER expressions are presented. We demonstrate that the approximate BER, obtained by dividing the symbol error rate by the number of bits per symbol, can be used to estimate the BER performance with acceptable accuracy. Through asymptotic analysis, we derive a closed-form expression for the asymptotic BER performance loss of MDPSK with respect to MPSK in lognormal turbulence channels. © 2015 IEEE.
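The approximate-BER rule mentioned above (BER ≈ SER / log2 M) can be sketched for coherent MPSK using the common SER approximation SER(γ) ≈ 2Q(√(2γ) sin(π/M)), averaged over a lognormal irradiance by Gauss-Hermite quadrature. The normalization E[I²] = 1 and the parameter values are assumptions of this sketch, not taken from the paper:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from math import erfc, sqrt, pi, log2, sin

def qfunc(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def approx_ber_mpsk(M, snr_avg, sigma, n_nodes=40):
    """Approximate average BER of subcarrier MPSK over lognormal turbulence:
    SER(g) ~ 2 Q(sqrt(2g) sin(pi/M)), BER ~ SER / log2(M), with
    instantaneous SNR g = snr_avg * I^2 and ln I ~ N(-sigma^2, sigma^2)
    (so that E[I^2] = 1); the fading average uses Gauss-Hermite quadrature."""
    t, w = hermgauss(n_nodes)
    I = np.exp(-sigma**2 + sqrt(2.0) * sigma * t)
    g = snr_avg * I**2
    ser = 2.0 * np.vectorize(qfunc)(np.sqrt(2.0 * g) * sin(pi / M))
    return float(np.sum(w * ser) / np.sqrt(pi) / log2(M))

ber_low = approx_ber_mpsk(4, 2.0, 0.3)    # low average SNR
ber_high = approx_ber_mpsk(4, 20.0, 0.3)  # higher average SNR
```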

  2. Lognormal distribution of firing time and rate from a single neuron?

    CERN Document Server

    Kish, Eszter A; Der, Andras; Kish, Laszlo B

    2014-01-01

    Even a single neuron may be able to produce significant lognormal features in its firing statistics due to noise in the charging ion current. A mathematical scheme introduced in advanced nanotechnology is relevant for the analysis of this mechanism in the simplest case, the integrate-and-fire model with white noise in the charging ion current.

  3. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    Full Text Available We construct a stochastic model of real estate pricing. The method of price construction is based on a sequential comparison of supply prices. We prove that, under standard assumptions imposed upon the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit has the lognormal law of distribution. The accordance of empirical price distributions with the theoretically obtained log-normal distribution is verified using extensive statistical data on real estate prices from Saint-Petersburg (Russia). For establishing this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Basing on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode, its most probable value, it follows that prices valued by the mathematical expectation are systematically overstated.
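The Kolmogorov-Smirnov check described above amounts to testing the log-prices against a fitted normal. A sketch on synthetic prices (note that with estimated parameters the nominal KS p-value is only approximate; a Lilliefors correction or bootstrap would be more rigorous), which also illustrates the mode-versus-mean point: for a lognormal, the expectation always exceeds the most probable price:

```python
import numpy as np
from scipy import stats

def ks_lognormality(prices):
    """KS distance and (approximate) p-value between log-prices and a
    normal with parameters estimated from the same sample."""
    logp = np.log(prices)
    return stats.kstest(logp, "norm", args=(logp.mean(), logp.std(ddof=1)))

rng = np.random.default_rng(5)
prices = rng.lognormal(mean=15.0, sigma=0.4, size=500)   # synthetic asking prices
d_stat, p_val = ks_lognormality(prices)

mu, s = np.log(prices).mean(), np.log(prices).std(ddof=1)
mode_price = np.exp(mu - s**2)          # most probable price (mode)
mean_price = np.exp(mu + s**2 / 2)      # expectation exceeds the mode
```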

  4. Note---The Mean-Coefficient-of-Variation Rule: The Lognormal Case

    OpenAIRE

    Haim Levy

    1991-01-01

    The mean-variance (M-V) rule may lead to paradoxical results which may be resolved by employing the mean coefficient of variation (M-C) rule. It is shown that the M-C rule constitutes an optimal decision rule for lognormal distributions.

  5. Generation of single-mode single photons by an InAs quantum dot in a microcavity

    Science.gov (United States)

    Gérard, J. M.; Robert, I.; Moreau, E.; Abram, I.

    2002-06-01

    We present the first solid-state single-mode source of single photons, consisting of a semiconductor quantum dot placed in an optical microcavity. This source exploits the strong Coulomb interaction between trapped carriers to control the photon number, and the Purcell effect to collect the emitted photons and prepare them in a given state (same spatial mode, same polarization). We discuss in particular the performance and potential interest of this new source in the context of quantum key distribution.

  6. Log-normal censored regression model detecting prognostic factors in gastric cancer: A study of 3018 cases

    Institute of Scientific and Technical Information of China (English)

    Bin-Bin Wang; Cai-Gang Liu; Ping Lu; A Latengbaolide; Yang Lu

    2011-01-01

    AIM: To investigate the efficiency of the Cox proportional hazards model in detecting prognostic factors for gastric cancer. METHODS: We used the log-normal regression model to evaluate prognostic factors in gastric cancer and compared it with the Cox model. Three thousand and eighteen gastric cancer patients who underwent a gastrectomy between 1980 and 2004 were retrospectively evaluated. Clinicopathological factors were included in a log-normal model as well as in a Cox model. The Akaike information criterion (AIC) was employed to compare the efficiency of both models. Univariate analysis indicated that age at diagnosis, past history, cancer location, distant metastasis status, surgical curative degree, combined other-organ resection, Borrmann type, Lauren's classification, pT stage, total dissected nodes and pN stage were prognostic factors in both the log-normal and Cox models. RESULTS: In the final multivariate model, age at diagnosis, past history, surgical curative degree, Borrmann type, Lauren's classification, pT stage, and pN stage were significant prognostic factors in both the log-normal and Cox models. However, cancer location, distant metastasis status, and histology type were found to be significant prognostic factors in the log-normal results alone. According to the AIC, the log-normal model performed better than the Cox proportional hazards model (AIC value: 2534.72 vs 1693.56). CONCLUSION: It is suggested that the log-normal regression model can be a useful statistical model for evaluating prognostic factors instead of the Cox proportional hazards model.
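The core of a log-normal survival (accelerated-failure-time) analysis is a right-censored lognormal likelihood, from which the AIC follows directly. A minimal sketch on synthetic survival times, with covariates omitted for brevity (this is not the 3018-patient analysis):

```python
import numpy as np
from scipy import stats, optimize

def lognormal_negloglik(params, t, event):
    """Negative right-censored log-likelihood of a lognormal survival model:
    observed events contribute log f(t), censored cases log S(t)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # optimize on log-scale to keep sigma > 0
    z = (np.log(t) - mu) / sigma
    logf = stats.norm.logpdf(z) - np.log(sigma * t)   # lognormal log-density
    logS = stats.norm.logsf(z)                        # lognormal log-survival
    return -np.sum(np.where(event, logf, logS))

rng = np.random.default_rng(2)
t_true = rng.lognormal(mean=3.0, sigma=0.8, size=300)   # latent survival times
censor = rng.uniform(5, 120, size=300)                  # censoring times
t = np.minimum(t_true, censor)
event = t_true <= censor                                # True = event observed

res = optimize.minimize(lognormal_negloglik, x0=[0.0, 0.0],
                        args=(t, event), method="Nelder-Mead")
aic = 2 * 2 + 2 * res.fun              # AIC = 2k - 2 ln L, k = 2 parameters
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Fitting a competing model the same way and comparing AICs reproduces the model-selection step of the abstract.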

  7. Reconstruction of Gaussian and log-normal fields with spectral smoothness

    CERN Document Server

    Oppermann, Niels; Bell, Michael R; Enßlin, Torsten A

    2012-01-01

    We develop a method to infer log-normal random fields from measurement data affected by Gaussian noise. The log-normal model is well suited to describe strictly positive signals with fluctuations whose amplitude varies over several orders of magnitude. We use the formalism of minimum Gibbs free energy to derive an algorithm that uses the signal's correlation structure to regularize the reconstruction. The correlation structure, described by the signal's power spectrum, is thereby reconstructed from the same data set. We further introduce a prior for the power spectrum that enforces spectral smoothness. The appropriateness of this prior in different scenarios is discussed and its effects on the reconstruction's results are demonstrated. We validate the performance of our reconstruction algorithm in a series of one- and two-dimensional test cases with varying degrees of non-linearity and different noise levels.

  8. Discriminating between Weibull distributions and log-normal distributions emerging in branching processes

    Science.gov (United States)

    Goh, Segun; Kwon, H. W.; Choi, M. Y.

    2014-06-01

    We consider the Yule-type multiplicative growth and division process, and describe the ubiquitous emergence of Weibull and log-normal distributions in a single framework. With the help of the integral transform and series expansion, we show that both distributions serve as asymptotic solutions of the time evolution equation for the branching process. In particular, the maximum likelihood method is employed to discriminate between the emergence of the Weibull distribution and that of the log-normal distribution. Further, the detailed conditions for the distinguished emergence of the Weibull distribution are probed. It is observed that the emergence depends on the manner of the division process for the two different types of distribution. Numerical simulations are also carried out, confirming the results obtained analytically.
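The maximum-likelihood discrimination step can be sketched directly with scipy's fitters: fit both candidate distributions to the same data and compare maximized log-likelihoods. This is a minimal version of the selection rule, not the authors' analysis of the branching process itself:

```python
import numpy as np
from scipy import stats

def loglik_ratio(x):
    """T = lnL(Weibull) - lnL(lognormal), both fitted by maximum likelihood
    with location fixed at 0; T > 0 selects the Weibull, T < 0 the lognormal."""
    c, loc, scale = stats.weibull_min.fit(x, floc=0)    # 2-parameter Weibull
    ll_w = np.sum(stats.weibull_min.logpdf(x, c, loc, scale))
    s, loc, scale = stats.lognorm.fit(x, floc=0)        # 2-parameter lognormal
    ll_l = np.sum(stats.lognorm.logpdf(x, s, loc, scale))
    return ll_w - ll_l

rng = np.random.default_rng(3)
t_logn = loglik_ratio(rng.lognormal(0.0, 0.5, size=2000))  # lognormal sample
t_weib = loglik_ratio(rng.weibull(2.0, size=2000))         # Weibull sample
```

With samples this large, the sign of T identifies the generating family essentially without error; near the decision boundary the test's error rates depend on sample size and shape parameters.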

  9. Radical tessellation of the packing of spheres with a log-normal size distribution

    Science.gov (United States)

    Yi, L. Y.; Dong, K. J.; Zou, R. P.; Yu, A. B.

    2015-09-01

    The packing of particles with a log-normal size distribution is studied by means of the discrete element method. The packing structures are analyzed in terms of the topological properties such as the number of faces per radical polyhedron and the number of edges per face, and the metric properties such as the perimeter and area per face and the perimeter, area, and volume per radical polyhedron, obtained from the radical tessellation. The effect of the geometric standard deviation in the log-normal distribution on these properties is quantified. It is shown that when the size distribution gets wider, the packing becomes denser; thus the radical tessellation of a particle has decreased topological and metric properties. The quantitative relationships obtained should be useful in the modeling and analysis of structural properties such as effective thermal conductivity and permeability.

  10. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    CERN Document Server

    Yamazaki, Dai G; Takahashi, Keitaro; 10.1103/PhysRevD.84.123006

    2011-01-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution, which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanism, and our analysis is complementary to previous studies with power-law spectra. We calculate the power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate the CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated by such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data place the strongest constraint at $k\simeq 10^{-2.5}$ Mpc$^{-1}$, with the upper limit $B\lesssim 3$ nG.

  11. Lognormality of gradients of diffusive scalars in homogeneous, two-dimensional mixing systems

    Science.gov (United States)

    Kerstein, A. R.; Ashurst, W. T.

    1984-12-01

    Kolmogorov's third hypothesis, as extended by Gurvich and Yaglom, is found to be obeyed by a diffusive scalar for a class of homogeneous, two-dimensional mixing models. The mixing models all involve the advection of fluid by discrete vortices distributed in a square region with periodic boundary conditions. By computer simulation, it is found that the squared gradient of a diffusive scalar so advected is lognormally distributed, obeys the predicted scaling when a spatial smoothing is applied, and exhibits a power-law range in the spatial autocorrelation. In addition, it is found that the scaling property cuts off at the Batchelor length, as predicted by Gibson. Since the mixing models employed do not incorporate the dynamical features of high-Reynolds-number turbulence, these results suggest that scalar lognormality and associated scaling behavior may be more robust or persistent than the scaling laws of the flow field.

  12. A Study of the Application of the Lognormal Distribution to Corrective Maintenance Repair Time

    Science.gov (United States)

    1979-06-01

    Naval Postgraduate School, Monterey, California 93940. ... from the more usual procedure in which the test statistic is compared to a value such that the area under the distribution to its right is ... Most of the sets of data show that the lognormal distribution cannot be rejected as an adequate descriptor for corrective maintenance repair time.

  13. Lognormal Distribution of Some Physiological Responses in Young Healthy Indian Males

    Directory of Open Access Journals (Sweden)

    S. S. Verma

    1986-01-01

    Full Text Available The evaluation of the statistical distribution of physiological responses is of fundamental importance for better statistical interpretation of physiological phenomena. In this paper, the statistical distributions of three important physiological responses, viz., maximal aerobic power (VO2 max), maximal heart rate (HR max) and maximum voluntary ventilation (MVV), in young healthy Indian males aged 19 to 22 years have been worked out. It is concluded that these three physiological responses follow the lognormal distribution.

  14. Determination of substrate log-normal distribution in the AZ91/SICP composite

    Directory of Open Access Journals (Sweden)

    J. Lelito

    2015-01-01

    Full Text Available The aim of this work is to develop a log-normal distribution of heterogeneous nucleation substrates for a composite based on the AZ91 alloy reinforced with SiC particles. A computational algorithm allowing the nucleation-substrate distribution to be restored was used. The experiment was performed for the AZ91 alloy containing 1 wt.% of SiC particles. The grain density of the primary magnesium phase and the undercooling obtained from the experiment were used as input data for the algorithm.

  15. On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment

    KAUST Repository

    Rached, Nadhir Ben

    2017-02-15

    The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric used to evaluate the performance of wireless systems. One difficulty in assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case in a Log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated Log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, whose accuracies are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of mean shifting and covariance matrix scaling, to evaluate the OP of the SINR in a Log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.
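The mean-shifting half of such a scheme can be illustrated in the simplest setting: i.i.d. (rather than correlated) lognormal summands and a scalar shift of the Gaussian mean, which concentrates samples in the rare-event region and reweights them by the likelihood ratio. This is a toy version under those assumptions, not the paper's generalized hybrid algorithm:

```python
import numpy as np

def op_mean_shift_is(gamma, mu, sigma, shift, d=4, n=200_000, seed=4):
    """Estimate P(sum_{i=1..d} exp(X_i) > gamma) with X_i iid N(mu, sigma^2),
    by sampling X_i from the shifted law N(mu + shift, sigma^2) and
    multiplying each sample by the likelihood ratio f0/f_shift."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu + shift, sigma, size=(n, d))
    # log of prod_i N(x_i; mu, s^2) / N(x_i; mu+shift, s^2)
    log_w = np.sum(-shift * (x - mu) / sigma**2 + shift**2 / (2 * sigma**2),
                   axis=1)
    hit = np.sum(np.exp(x), axis=1) > gamma
    return float(np.mean(hit * np.exp(log_w)))

est_is = op_mean_shift_is(30.0, 0.0, 1.0, shift=1.5)
est_mc = op_mean_shift_is(30.0, 0.0, 1.0, shift=0.0)   # shift = 0 is crude MC
```

With shift = 0 the weights are all one and the estimator reduces to crude Monte Carlo, so the two runs should agree within statistical error; the benefit of the shift grows as gamma moves deeper into the tail.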

  16. Characterization of a conventional optical fiber (monomode and multimode) and its use in the development of a new vibration and sound pressure detection system

    Science.gov (United States)

    Javahiraly, Nicolas; Lebrun, Antoine; Chakari, Ayoub

    2008-06-01

    Accurate vibration measurement of mechanical structures is a relevant problem in the aerospace and automotive industries and in seismic detection. Existing methods include, for instance, non-guided optical devices, contact resistive systems, and contact mechanical systems. We introduce the modelling and experimental validation of an intrinsic vibration sensor using polarization modulation of the light propagating in an optical fiber as a result of modulated constraints applied to this fiber. We demonstrate that this method allows measurement of vibration frequencies with good accuracy and a large dynamic range. We first analyze the intrinsic birefringence in a telecom monomode fiber. Then, we analyze the extrinsic birefringence coming from twisting and bending the fiber. On this basis, we introduce a polarization controller. We then apply a static force on the fiber through a spring. With vibration applied to the fiber, this force becomes dynamic and induces a dynamic modulation of the polarization at the output of the fiber, which we read out. The optimal sensing of sound and vibration is obtained by integrating all the characteristics of the device mentioned above.

  17. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

    Science.gov (United States)

    Ledolter, Johannes; Dexter, Franklin; Epstein, Richard H

    2011-10-01

    Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data including times for anesthesia providers to respond to messages have moderate (> n = 20) sample sizes, large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained both by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can perform calculation of P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.
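
    A minimal sketch of the generalized pivotal construction for the mean of a single log-normal sample (the function name and defaults are illustrative; the paper extends this kind of pivotal inference to 2-way ANOVA comparisons of several group means):

```python
import numpy as np

def lognormal_mean_ci(x, alpha=0.05, n_piv=100_000, seed=0):
    """Generalized pivotal-quantity CI for the mean exp(mu + sigma^2/2)
    of a lognormal sample -- a sketch of the Krishnamoorthy-style
    construction that pivotal latency inference builds on."""
    rng = np.random.default_rng(seed)
    y = np.log(x)
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    W = rng.chisquare(n - 1, n_piv)            # pivot for sigma^2
    Z = rng.standard_normal(n_piv)             # pivot for mu
    sig2_piv = (n - 1) * s2 / W
    mu_piv = ybar - Z * np.sqrt(sig2_piv / n)
    theta_piv = np.exp(mu_piv + sig2_piv / 2)  # pivotal lognormal mean
    return np.quantile(theta_piv, [alpha / 2, 1 - alpha / 2])
```

    Unlike a back-transformed log-scale t-interval, the pivotal interval targets the lognormal mean itself and does not require equal coefficients of variation across groups.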

  19. Possible Lognormal Distribution of Fermi-LAT Data of OJ 287

    Indian Academy of Sciences (India)

    G. G. Deng; Y. Liu; J. H. Fan; H. G. Wang

    2014-09-01

    OJ 287 is a BL Lac object at redshift 0.306 that, according to previous research, has shown double-peaked bursts at regular intervals of 12 yr during the last 40 yr. Some AGN γ-ray power density spectra show a white-noise process, while others show a red-noise process, and some AGN fluxes present a normal or log-normal distribution. The two processes have an intrinsic relationship with the emission mechanism of the central black hole. We present the results of the analysis of the Fermi-LAT data and review some problems concerning the random process.

  20. On the low SNR capacity of log-normal turbulence channels with full CSI

    KAUST Repository

    Benkhelifa, Fatma

    2014-09-01

    In this paper, we characterize the low signal-to-noise ratio (SNR) capacity of wireless links undergoing log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed-form asymptotic expression for the capacity and show that it scales essentially as λ·SNR, where λ is the water-filling level satisfying the power constraint. A closed-form asymptotic expression for λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.

  1. Survey on Log-Normally Distributed Market-Technical Trend Data

    Directory of Open Access Journals (Sweden)

    René Brenner

    2016-07-01

    Full Text Available In this survey, a short introduction to the recent discovery of log-normally distributed market-technical trend data will be given. The results of the statistical evaluation of typical market-technical trend variables will be presented. It will be shown that the log-normal assumption fits empirical trend data better than it fits daily returns of stock prices. This makes it possible to evaluate mathematically trading systems that depend on such variables. In this manner, a basic approach to an anti-cyclic trading system will be given as an example.

  2. Piecewise log-normal approximation of size distributions for aerosol modelling

    Directory of Open Access Journals (Sweden)

    K. von Salzen

    2006-01-01

    Full Text Available An efficient and accurate method for the representation of particle size distributions in atmospheric models is proposed. The method can be applied, but is not necessarily restricted, to aerosol mass and number size distributions. A piecewise log-normal approximation of the number size distribution within sections of the particle size spectrum is used. Two of the free parameters of the log-normal approximation are obtained from the integrated number and mass concentration in each section. The remaining free parameter is prescribed. The method is efficient in the sense that only relatively few calculations are required for applications of the method in atmospheric models. Applications of the method in simulations of particle growth by condensation and simulations with a single column model for nucleation, condensation, gravitational settling, wet deposition, and mixing are described. The results are compared to results from simulations employing single- and double-moment bin methods that are frequently used in aerosol modelling. According to these comparisons, the accuracy of the method is noticeably higher than the accuracy of the other methods.
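
    The closure described above — two parameters recovered from the integrated number and mass, the width prescribed — can be sketched for a single full (untruncated) lognormal mode; the function name and unit-density default are illustrative assumptions, and the paper's sectional version additionally handles truncation at the section edges:

```python
import numpy as np

def fit_section_lognormal(N, M, sigma_g, rho=1000.0):
    """Geometric-mean diameter D_g of a lognormal mode recovered from the
    integrated number concentration N (#/m^3) and mass concentration M
    (kg/m^3), with geometric std sigma_g prescribed and particle density
    rho (kg/m^3).  Uses the lognormal moment E[D^3] = D_g^3 * exp(4.5 ln^2 sigma_g)
    together with M = rho * (pi/6) * N * E[D^3]."""
    mean_cubed = 6.0 * M / (np.pi * rho * N)       # = E[D^3]
    return mean_cubed ** (1.0 / 3.0) * np.exp(-1.5 * np.log(sigma_g) ** 2)
```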

  3. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference for the reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  4. The Lognormal Race: A Cognitive-Process Model of Choice and Latency with Desirable Psychometric Properties.

    Science.gov (United States)

    Rouder, Jeffrey N; Province, Jordan M; Morey, Richard D; Gomez, Pablo; Heathcote, Andrew

    2015-06-01

    We present a cognitive process model of response choice and response time performance data that has excellent psychometric properties and may be used in a wide variety of contexts. In the model there is an accumulator associated with each response option. These accumulators have bounds, and the first accumulator to reach its bound determines the response time and response choice. The time at which each accumulator reaches its bound is assumed to be lognormally distributed; hence, the model is a race, or minima, process among lognormal variables. A key property of the model is that it is relatively straightforward to place a wide variety of models on the logarithm of these finishing times, including linear models, structural equation models, autoregressive models, growth-curve models, etc. Consequently, the model has excellent statistical and psychometric properties and can be used in a wide range of contexts, from laboratory experiments to high-stakes testing, to assess performance. We provide a Bayesian hierarchical analysis of the model, and illustrate its flexibility with an application in testing and one in lexical decision making, a reading skill.
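
    A minimal simulation sketch of the race among lognormal finishing times (no shift parameter, illustrative function name; the paper's hierarchical Bayesian treatment estimates the per-accumulator parameters rather than simulating them):

```python
import numpy as np

def lognormal_race(mu, sigma, n_trials=10_000, seed=2):
    """One accumulator per response option; finishing times are lognormal,
    and the fastest accumulator fixes both the choice and the RT."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    t = rng.lognormal(mu, sigma, size=(n_trials, len(mu)))
    return t.argmin(axis=1), t.min(axis=1)   # choices, response times
```

    Because log finishing times are normal, regression-style models can be placed directly on them, which is what gives the race its convenient psychometric structure.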

  5. Frittage micro-ondes en cavité monomode de biocéramiques Microwaves sintering of bioceramics in a single mode cavity

    Directory of Open Access Journals (Sweden)

    Savary Etienne

    2013-11-01

    Full Text Available The main purpose of this study is to demonstrate the feasibility of direct microwave sintering, in a single-mode cavity, of two bioceramics: hydroxyapatite and tri-calcium phosphate. Dense samples presenting fine microstructures were successfully obtained in less than 20 minutes of irradiation. Mechanical characterization of the microwave-sintered samples revealed higher Young's modulus and hardness values than those usually reported for conventionally sintered samples. These results are discussed with respect to the observed microstructures and the experimental parameters: powder granulometry, sintering temperature and microwave irradiation time.

  6. How log-normal is your country? An analysis of the statistical distribution of the exported volumes of products

    Science.gov (United States)

    Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea

    2016-10-01

    We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but heavily depends on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export and a recently introduced measure for countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries, b) a complete log-normal, with a wider range of volumes, for nations characterized by intermediate economy, and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all the 148 countries through different tests, Kolmogorov-Smirnov and Cramér-Von Mises, confirming that it cannot be rejected only for the countries of intermediate economy.
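
    A sketch of one such check (Kolmogorov-Smirnov on log-volumes against a normal with fitted parameters; note that the plain KS p-value is only approximate when parameters are estimated from the same data — the Lilliefors effect — so treat it as indicative):

```python
import numpy as np
from scipy import stats

def lognormality_pvalue(volumes):
    """KS check of the log-normal hypothesis: fit a normal to the
    log-volumes and test the fit.  Approximate p-value only, since the
    parameters are estimated from the data being tested."""
    y = np.log(volumes)
    return stats.kstest(y, 'norm', args=(y.mean(), y.std(ddof=1))).pvalue
```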

  7. Bayesian CMB foreground separation with a correlated log-normal model

    CERN Document Server

    Oppermann, Niels

    2014-01-01

    The extraction of foreground and CMB maps from multi-frequency observations relies mostly on the different frequency behavior of the different components. Existing Bayesian methods additionally make use of a Gaussian prior for the CMB whose correlation structure is described by an unknown angular power spectrum. We argue for the natural extension of this by using non-trivial priors also for the foreground components. Focusing on diffuse Galactic foregrounds, we propose a log-normal model including unknown spatial correlations within each component and cross-correlations between the different foreground components. We present case studies at low resolution that demonstrate the superior performance of this model when compared to an analysis with flat priors for all components.

  8. Retention for Stoploss reinsurance to minimize VaR in compound Poisson-Lognormal distribution

    Science.gov (United States)

    Soleh, Achmad Zanbar; Noviyanti, Lienda; Nurrahmawati, Irma

    2015-12-01

    Automobile insurance is one of the emerging general insurance products in Indonesia. Fluctuations in total premium revenue and total claim expenses lead to the risk that an insurance company is unable to pay consumers' claims, so reinsurance is needed. Reinsurance is a mechanism that transfers risk from the insurance company to another company, called the reinsurer; one type of reinsurance is stop-loss. Because the reinsurer charges a premium to the insurance company, it is important to determine the retention, i.e., the portion of total claims to be retained solely by the insurance company. Here, the retention is determined using Value at Risk (VaR), which minimizes the total risk of the insurance company in the presence of stop-loss reinsurance. The retention depends only on the distribution of total claims and the reinsurance loading factor. We use the compound Poisson distribution and the log-normal distribution to illustrate the retention value in a collective risk model.
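
    A Monte-Carlo sketch of the collective risk model underneath this (compound Poisson frequency with lognormal claim severities, and the VaR of the aggregate loss; the loading-factor step that converts a VaR into a stop-loss retention is omitted, and the function name and defaults are illustrative):

```python
import numpy as np

def aggregate_var(lam, mu, sigma, level=0.995, n_sims=50_000, seed=3):
    """VaR of aggregate claims S = X_1 + ... + X_N with N ~ Poisson(lam)
    and lognormal claim sizes X_i ~ LN(mu, sigma) (collective risk model).
    Returns the VaR at `level` and the simulated aggregate losses."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, n_sims)
    ends = np.cumsum(counts)
    claims = rng.lognormal(mu, sigma, ends[-1])     # all claims at once
    csum = np.concatenate(([0.0], np.cumsum(claims)))
    S = csum[ends] - csum[ends - counts]            # per-simulation totals
    return np.quantile(S, level), S
```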

  9. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-05-01

    Closed-form expressions for the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is of interest to investigate the behavior of this metric at high signal-to-noise ratio (SNR). In this work, we propose simple closed-form asymptotic expressions for the ergodic capacity of dual-branch correlated Log-Normal channels corresponding to selection combining and switch-and-stay combining. Furthermore, we capitalize on these new results to find the asymptotic ergodic capacity of a correlated dual-branch free-space optical communication system under the impact of pointing error, with both heterodyne and intensity modulation/direct detection. © 2015 IEEE.

  10. STANDARDIZED PRECIPITATION INDEX (SPI) CALCULATED WITH THE USE OF LOG-NORMAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Edward Gąsiorek

    2014-10-01

    Full Text Available The problem analyzed in this paper is a continuation of research conducted on data from the Wrocław-Swojec agro- and hydrometeorological observatory for the 1964–2009 period, published in "Infrastruktura i Ekologia Terenów Wiejskich" No. 3/III/2012, pp. 197–208. The paper concerns two methods of calculating the standardized precipitation index (SPI). The first extracts the SPI directly from the gamma distribution, since the monthly precipitation sums in Wrocław over the 1964–2009 period are gamma distributed. The second method is based on transformations of the data leading to a normal distribution. The authors calculate the SPI using the log-normal distribution and confront it with the values obtained from the gamma and normal distributions. The aim of this paper is to comparatively assess the SPI values obtained with these three methods.
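
    The transformation route can be sketched as follows: fit a log-normal to the monthly precipitation sums and map each value through the fitted CDF and then the standard normal quantile function, SPI = Φ⁻¹(F(x)). The function name is illustrative; for the log-normal fit this reduces to standardizing log x, but the CDF-to-quantile route shown is the one that also applies to gamma fits:

```python
import numpy as np
from scipy import stats

def spi_lognormal(precip, values):
    """SPI via a fitted log-normal: F is the fitted CDF of the monthly
    sums, and SPI = Phi^{-1}(F(x)) for each value x."""
    y = np.log(precip)
    mu, s = y.mean(), y.std(ddof=1)
    F = stats.norm.cdf((np.log(np.asarray(values)) - mu) / s)
    return stats.norm.ppf(F)
```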

  11. Reconstruction of probabilistic S-N curves under fatigue life following lognormal distribution with given confidence

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-xiang; YANG Bing; PENG Jia-chun

    2007-01-01

    When historic probabilistic S-N curves are given only at particular survival-probability and confidence levels and re-testing is not possible, fatigue reliability analysis cannot be carried out at other levels; curves with wider applicability are therefore needed. Monte Carlo methods for reconstructing the test data and the curves are investigated under fatigue life following a lognormal distribution. To overcome the non-conservative assessment caused by artificially enlarging the sample size up to thousands, a simulation policy is employed that reflects actual practice: the sample size is kept below 20 for material specimens and below 10 for structural-component specimens, and the errors in the matched statistical parameters are less than 5 percent. The availability and feasibility of the present methods are demonstrated by reconstructing the test data and curves for 60Si2Mn high-strength spring steel used in the railway industry.

  12. Multivariate poisson-lognormal model for modeling related factors in crash frequency by severity

    Directory of Open Access Journals (Sweden)

    Mehdi Tazhibi

    2013-01-01

    Full Text Available Aims: Traditionally, roadway safety analyses have used univariate distributions to model crash data for each level of severity separately. This paper uses the multivariate Poisson-lognormal (MVPLN) model to estimate the expected crash frequency at two levels of severity and then compares those estimates with the univariate Poisson-lognormal (UVPLN) and the univariate Poisson (UVP) models. Materials and Methods: The parameters are estimated by the Bayesian method for crash data at two levels of severity at intersections of Isfahan city over 6 months. Results: The results showed an over-dispersion issue in the data. The UVP model is not able to overcome this problem, while the MVPLN model can account for over-dispersion. Also, the estimates of the extra Poisson variation parameters in the MVPLN model were smaller than in the UVPLN model, which improves the precision of the MVPLN model. Hence, the MVPLN model fits the data set better. Results also showed that the effect of total average annual daily traffic (AADT) on property-damage-only crashes was significant in all models, but the effect of total left-turn AADT on injury and fatality crashes was significant only in the UVP model. Hence, holding all other factors fixed, more property-damage-only crashes were expected at higher total AADT; for example, under the MVPLN model, an increase of 1000 vehicles in average total AADT was predicted to result in 31% more property-damage-only crashes. Conclusion: Hence, reduction of total AADT was predicted to be highly cost-effective in terms of crash cost reductions over the long run.

  13. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    Science.gov (United States)

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

    The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
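
    Under a lognormal number distribution of molecular weight, the normal-curve parameters follow directly from the measured averages via Mw/Mn = exp(σ_ln²) and Mn = exp(μ_ln + σ_ln²/2); a small sketch in log10 units (illustrative function name):

```python
import numpy as np

def lognormal_mw_params(Mn, Mw):
    """Mean and standard deviation (log10 units) of the normal curve
    underlying a lognormal molecular-weight distribution, recovered from
    the number-average Mn and weight-average Mw."""
    s_ln = np.sqrt(np.log(Mw / Mn))          # Mw/Mn = exp(s_ln^2)
    mu_ln = np.log(Mn) - 0.5 * s_ln ** 2     # Mn = exp(mu_ln + s_ln^2/2)
    return mu_ln / np.log(10.0), s_ln / np.log(10.0)
```

    For instance, Mn = 1000 and Mw = 2000 give a log10 mean near 2.85 and a standard deviation near 0.36, inside the ranges quoted in the abstract.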

  14. Remark about Transition Probabilities Calculation for Single Server Queues with Lognormal Inter-Arrival or Service Time Distributions

    Science.gov (United States)

    Lee, Moon Ho; Dudin, Alexander; Shaban, Alexy; Pokhrel, Subash Shree; Ma, Wen Ping

    Formulae required for accurate approximate calculation of transition probabilities of embedded Markov chain for single-server queues of the GI/M/1, GI/M/1/K, M/G/1, M/G/1/K type with heavy-tail lognormal distribution of inter-arrival or service time are given.
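
    The building block being approximated, a_k = P(k Poisson arrivals during one service time), can also be evaluated by direct numerical integration against the lognormal density; a sketch for the M/G/1 case (function name and defaults are illustrative, and the cited work gives approximate closed-form formulae instead of quadrature):

```python
import numpy as np
from scipy import integrate, stats

def mg1_embedded_probs(lam, mu_ln, sigma_ln, kmax=80):
    """a_k = P(k Poisson(lam) arrivals during a lognormal service time),
    the transition-probability ingredient of the M/G/1 embedded Markov
    chain, computed by numerical integration."""
    service = stats.lognorm(s=sigma_ln, scale=np.exp(mu_ln))
    def a(k):
        f = lambda t: stats.poisson.pmf(k, lam * t) * service.pdf(t)
        return integrate.quad(f, 0.0, np.inf, limit=200)[0]
    return np.array([a(k) for k in range(kmax + 1)])
```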

  15. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Science.gov (United States)

    Shao, Kan; Gift, Jeffrey S; Setzer, R Woodrow

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean±standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates.

  16. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption than those estimated using the relative deviation approach.

  17. Geomagnetic storms, the Dst ring-current myth and lognormal distributions

    Science.gov (United States)

    Campbell, W.H.

    1996-01-01

    The definition of geomagnetic storms dates back to the turn of the century when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation for geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric current-ring depiction. Recent in situ measurements by satellites passing through the ring-current region and computations with disturbed magnetosphere models show that the Dst storm is not solely a main-phase to decay-phase, growth to disintegration, of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle-and low-latitude observatories that are summed to show the 'storm' characteristic behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is a result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst all have their own characteristic distribution in time. By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with

  18. Discrete Lognormal Model as an Unbiased Quantitative Measure of Scientific Performance Based on Empirical Citation Data

    Science.gov (United States)

    Moreira, Joao; Zeng, Xiaohan; Amaral, Luis

    2013-03-01

    Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators like the h-index are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most-used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily "gamed." The authors acknowledge support from FCT Portugal, and NSF grants

  19. Strength and fracture toughness of heterogeneous blocks with joint lognormal modulus and failure strain

    Science.gov (United States)

    Dimas, Leon S.; Veneziano, Daniele; Buehler, Markus J.

    2016-07-01

    We obtain analytical approximations to the probability distribution of the fracture strengths of notched one-dimensional rods and two-dimensional plates in which the stiffness (Young's modulus) and strength (failure strain) of the material vary as jointly lognormal random fields. The fracture strength of the specimen is measured by the elongation, load, and toughness at two critical stages: when fracture initiates at the notch tip and, in the 2D case, when fracture propagates through the entire specimen. This is an extension of a previous study on the elastic and fracture properties of systems with random Young's modulus and deterministic material strength (Dimas et al., 2015a). For 1D rods our approach is analytical and builds upon the ANOVA decomposition technique of (Dimas et al., 2015b). In 2D we use a semi-analytical model to derive the fracture initiation strengths and regressions fitted to simulation data for the effect of crack arrest during fracture propagation. Results are validated through Monte Carlo simulation. Randomness of the material strength affects in various ways the mean and median values of the initial strengths, their log-variances, and log-correlations. Under low spatial correlation, material strength variability can significantly increase the effect of crack arrest, causing ultimate failure to be a more predictable and less brittle failure mode than fracture initiation. These insights could be used to guide design of more fracture resistant composites, and add to the design features that enhance material performance.

  20. Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-02-07

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR) and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance.
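A minimal Monte Carlo sketch of the kind of outage evaluation described above, for the TSR protocol over log-normal fading; all system parameters (powers, EH efficiency, threshold, dB-spread) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of the outage probability of a two-hop DF relay with
# time-switching (TSR) energy harvesting over log-normal fading.
N = 200_000
sigma_dB = 6.0                       # dB-spread of the log-normal fading
P_s, eta, alpha = 1.0, 0.7, 0.3      # source power, EH efficiency, EH time fraction
noise, gamma_th = 1e-2, 5.0          # noise power, SNR outage threshold

# Channel power gains |h|^2 of the two hops (log-normal, 0 dB mean).
g1 = 10 ** (rng.normal(0.0, sigma_dB, N) / 10)
g2 = 10 ** (rng.normal(0.0, sigma_dB, N) / 10)

# TSR: harvest for alpha*T, then split the remaining time between the two
# hops, so the relay power is P_r = 2*eta*P_s*alpha*g1 / (1 - alpha).
P_r = 2 * eta * P_s * alpha * g1 / (1 - alpha)

snr1 = P_s * g1 / noise              # source -> relay link
snr2 = P_r * g2 / noise              # relay -> destination link
outage = np.mean(np.minimum(snr1, snr2) < gamma_th)   # DF: weakest hop decides
print(f"outage probability ≈ {outage:.4f}")
```

Sweeping `alpha` in this sketch reproduces the qualitative trade-off the abstract highlights: too little harvesting starves the relay, too much leaves too little time for transmission.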

  1. Ultrahigh throughput plasma processing of free standing silicon nanocrystals with lognormal size distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dogan, Ilker; Kramer, Nicolaas J.; Westermann, Rene H. J.; Verheijen, Marcel A. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Dohnalova, Katerina; Gregorkiewicz, Tom [Van der Waals-Zeeman Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Smets, Arno H. M. [Photovoltaic Materials and Devices Laboratory, Delft University of Technology, P.O. Box 5031, 2600 GA Delft (Netherlands); Sanden, Mauritius C. M. van de [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Dutch Institute for Fundamental Energy Research (DIFFER), P.O. Box 1207, 3430 BE Nieuwegein (Netherlands)

    2013-04-07

    We demonstrate a method for synthesizing free standing silicon nanocrystals in an argon/silane gas mixture by using a remote expanding thermal plasma. Transmission electron microscopy and Raman spectroscopy measurements reveal that the distribution has a bimodal shape consisting of two distinct groups of small and large silicon nanocrystals with sizes in the range 2-10 nm and 50-120 nm, respectively. We also observe that both size distributions are lognormal which is linked with the growth time and transport of nanocrystals in the plasma. Average size control is achieved by tuning the silane flow injected into the vessel. Analyses on morphological features show that nanocrystals are monocrystalline and spherically shaped. These results imply that formation of silicon nanocrystals is based on nucleation, i.e., these large nanocrystals are not the result of coalescence of small nanocrystals. Photoluminescence measurements show that silicon nanocrystals exhibit a broad emission in the visible region peaked at 725 nm. Nanocrystals are produced with ultrahigh throughput of about 100 mg/min and have state of the art properties, such as controlled size distribution, easy handling, and room temperature visible photoluminescence.
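A bimodal lognormal size distribution of the kind reported above can be illustrated and characterized as follows; the mode positions, widths, sample sizes and split point are assumptions loosely matching the quoted 2-10 nm and 50-120 nm ranges:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative bimodal size distribution: a group of small nanocrystals
# around a few nm and a group of large ones around tens of nm, each
# lognormal.  Parameter values are assumptions for illustration.
small = rng.lognormal(mean=np.log(5.0),  sigma=0.35, size=7000)
large = rng.lognormal(mean=np.log(80.0), sigma=0.30, size=3000)
sizes = np.concatenate([small, large])

# Split at the gap between the modes and fit each lognormal by MLE on
# the log-sizes (geometric mean and geometric standard deviation).
cut = 25.0   # nm, chosen by inspection of the histogram
for name, grp in [("small", sizes[sizes < cut]), ("large", sizes[sizes >= cut])]:
    logd = np.log(grp)
    print(f"{name}: median {np.exp(logd.mean()):.1f} nm, "
          f"geometric sigma {np.exp(logd.std()):.2f}")
```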

  2. Thermal and log-normal distributions of plasma in laser driven Coulomb explosions of deuterium clusters

    Science.gov (United States)

    Barbarino, M.; Warrens, M.; Bonasera, A.; Lattuada, D.; Bang, W.; Quevedo, H. J.; Consoli, F.; de Angelis, R.; Andreoli, P.; Kimura, S.; Dyer, G.; Bernstein, A. C.; Hagel, K.; Barbui, M.; Schmidt, K.; Gaul, E.; Donovan, M. E.; Natowitz, J. B.; Ditmire, T.

    2016-08-01

    In this work, we explore the possibility that the motion of the deuterium ions emitted from Coulomb cluster explosions is disordered enough to resemble thermalization. We analyze the process of nuclear fusion reactions driven by laser-cluster interactions in experiments conducted at the Texas Petawatt laser facility using a mixture of D2+3He and CD4+3He cluster targets. When clusters explode by Coulomb repulsion, the emission of the energetic ions is “nearly” isotropic. In the framework of cluster Coulomb explosions, we analyze the energy distributions of the ions using a Maxwell-Boltzmann (MB) distribution, a shifted MB distribution (sMB), and the energy distribution derived from a log-normal (LN) size distribution of clusters. We show that the first two distributions reproduce the experimentally measured ion energy distributions and the number of fusions from d-d and d-3He reactions well. The LN distribution is a good representation of the ion kinetic energy distribution up to the high momenta where noise becomes dominant, but it overestimates both the neutron and the proton yields. If the parameters of the LN distribution are chosen to reproduce the fusion yields correctly, the experimentally measured high-energy ion spectrum is not well represented. We conclude that the ion kinetic energy distribution is highly disordered and practically indistinguishable from a thermalized one.

  3. The Hum: log-normal distribution and planetary-solar resonance

    Science.gov (United States)

    Tattersall, R.

    2013-12-01

    Observations of solar and planetary orbits, rotations, and diameters show that these attributes are related by simple ratios. The forces of gravity and magnetism and the principles of energy conservation, entropy, power laws, and the log-normal distribution which are evident are discussed in relation to planetary distribution with respect to time in the solar system. This discussion is informed by consideration of the periodicities of interactions, as well as the regularity and periodicity of fluctuations in proxy records which indicate solar variation. It is demonstrated that a simple model based on planetary interaction frequencies can well replicate the timing and general shape of solar variation over the period of the sunspot record. Finally, an explanation is offered for the high degree of stable organisation and correlation with cyclic solar variability observed in the solar system. The interaction of the forces of gravity and magnetism along with the thermodynamic principles acting on planets may be analogous to those generating the internal dynamics of the Sun. This possibility could help account for the existence of strong correlations between orbital dynamics and solar variation for which a sufficiently powerful physical mechanism has yet to be fully demonstrated.

  4. Energy-harvesting in cooperative AF relaying networks over log-normal fading channels

    KAUST Repository

    Rabie, Khaled M.

    2016-07-26

    Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and of the power splitting factor in the PSR protocol, is the key to achieving the best system performance. © 2016 IEEE.

  5. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
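A minimal sketch of the deformed Kapteyn process, assuming the standard form of the Borges q-product; the factor distribution, step count and sample sizes are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

def q_product(x, y, q):
    """Borges q-product x ⊗_q y = [x^(1-q) + y^(1-q) - 1]_+^(1/(1-q));
    it reduces to the ordinary product as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return x * y
    base = np.maximum(x ** (1 - q) + y ** (1 - q) - 1.0, 0.0)
    return base ** (1.0 / (1 - q))

def kapteyn(q, n_steps=50, n_samples=20_000):
    """Kapteyn multiplicative process with the ordinary product replaced
    by the q-product (illustrative step size and sample count)."""
    z = np.ones(n_samples)
    for _ in range(n_steps):
        z = q_product(z, rng.lognormal(0.0, 0.05, n_samples), q)
    return z

for q in (0.9, 1.0, 1.1):
    z = kapteyn(q)
    print(f"q = {q}: median {np.median(z):.3f}, 99th percentile {np.percentile(z, 99):.3f}")
```

For q = 1 the output is an ordinary lognormal sample; moving q away from 1 deforms the tails in the direction the abstract describes.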

  6. Outage Analysis of Ultra-Wideband System in Lognormal Multipath Fading and Square-Shaped Cellular Configurations

    Directory of Open Access Journals (Sweden)

    Pirinen Pekka

    2006-01-01

    Generic ultra-wideband (UWB) spread-spectrum system performance is evaluated in centralized and distributed spatial topologies comprising square-shaped indoor cells. Statistical distributions for link distances in single-cell and multicell configurations are derived. Cochannel-interference-induced outage probability is used as a performance measure. The probability of outage varies depending on the spatial distribution statistics of users (link distances), propagation characteristics, user activities, and receiver settings. Lognormal fading in each channel path is incorporated in the model, where power sums of multiple lognormal signal components are approximated by the Fenton-Wilkinson approach. Outage performance of different spatial configurations is outlined numerically. Numerical results show the strong dependence of outage probability on the link distance distributions, number of rake fingers, and path losses.
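The Fenton-Wilkinson approach used in this abstract matches the first two moments of a sum of independent lognormals to a single lognormal. A minimal sketch, with illustrative multipath parameters:

```python
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Fenton-Wilkinson approximation: moment-match the sum of independent
    lognormals LN(mu_i, sigma_i^2) (natural-log parameters) with a single
    lognormal LN(mu_Z, sigma_Z^2)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m1 = np.sum(np.exp(mu + sigma**2 / 2))                          # mean of the sum
    var = np.sum((np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2))  # variance of the sum
    sigma_z2 = np.log(1 + var / m1**2)
    mu_z = np.log(m1) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# Example: six equal-power multipath components (illustrative values).
mu_z, sigma_z = fenton_wilkinson([0.0] * 6, [1.0] * 6)
print(f"mu_Z = {mu_z:.3f}, sigma_Z = {sigma_z:.3f}")

# Sanity check against a Monte Carlo sum of the same components.
rng = np.random.default_rng(4)
s = rng.lognormal(0.0, 1.0, (100_000, 6)).sum(axis=1)
print(f"MC mean {s.mean():.2f} vs FW mean {np.exp(mu_z + sigma_z**2 / 2):.2f}")
```

By construction the approximating lognormal reproduces the exact mean and variance of the sum; it is known to be least accurate in the far tails, which is usually acceptable for outage estimates near the median.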

  7. Upper Bound of the Generalized p Value for the Population Variances of Lognormal Distributions with Known Coefficients of Variation

    Directory of Open Access Journals (Sweden)

    Rada Somkhuean

    2017-01-01

    This paper presents an upper bound for each of the generalized p values for testing the one population variance, the difference between two population variances, and the ratio of population variances for lognormal distributions when coefficients of variation are known. For each of the proposed generalized p values, we derive a closed-form expression of the upper bound of the generalized p value. Numerical computations illustrate the theoretical results.

  8. The effect of ignoring individual heterogeneity in Weibull log-normal sire frailty models.

    Science.gov (United States)

    Damgaard, L H; Korsgaard, I R; Simonsen, J; Dalsgaard, O; Andersen, A H

    2006-06-01

    The objective of this study was, by means of simulation, to quantify the effect of ignoring individual heterogeneity in Weibull sire frailty models on parameter estimates and to address the consequences for genetic inferences. Three simulation studies were evaluated, which included 3 levels of individual heterogeneity combined with 4 levels of censoring (0, 25, 50, or 75%). Data were simulated according to balanced half-sib designs using Weibull log-normal animal frailty models with a normally distributed residual effect on the log-frailty scale. The 12 data sets were analyzed with 2 models: the sire model, equivalent to the animal model used to generate the data (complete sire model), and a corresponding model in which individual heterogeneity in log-frailty was neglected (incomplete sire model). Parameter estimates were obtained from a Bayesian analysis using Gibbs sampling, and also from the software Survival Kit for the incomplete sire model. For the incomplete sire model, the Monte Carlo and Survival Kit parameter estimates were similar. This study established that when unobserved individual heterogeneity was ignored, the parameter estimates that included sire effects were biased toward zero by an amount that depended in magnitude on the level of censoring and the size of the ignored individual heterogeneity. Despite the biased parameter estimates, the ranking of sires, measured by the rank correlations between true and estimated sire effects, was unaffected. In comparison, parameter estimates obtained using complete sire models were consistent with the true values used to simulate the data. Thus, in this study, several issues of concern were demonstrated for the incomplete sire model.

  9. LogCauchy, log-sech and lognormal distributions of species abundances in forest communities

    Science.gov (United States)

    Yin, Z.-Y.; Peng, S.-L.; Ren, H.; Guo, Q.; Chen, Z.-H.

    2005-01-01

    Species-abundance (SA) pattern is one of the most fundamental aspects of biological community structure, providing important information regarding species richness, species-area relation and succession. To better describe the SA distribution (SAD) in a community, based on the widely used lognormal (LN) distribution model with exp(−x²) roll-off on Preston's octave scale, this study proposed two additional models, logCauchy (LC) and log-sech (LS), respectively with roll-offs of simple x⁻² and e⁻ˣ. The estimation of the theoretical total number of species in the whole community, S*, including very rare species not yet collected in the sample, was derived from the left-truncation of each distribution. We fitted these three models by Levenberg-Marquardt nonlinear regression and measured the model fit to the data using the coefficient of determination of the regression, t-tests of the parameters, and the distribution's Kolmogorov-Smirnov (KS) test. Examining the SA data from six forest communities (five in the lower subtropics and one in the tropics), we found that: (1) on a log scale, all three models, which are bell-shaped and left-truncated, statistically adequately fitted the observed SADs, and the LC and LS did better than the LN; (2) from each model and for each community, the S* values estimated by the integral and summation methods were almost equal, allowing us to estimate S* using a simple integral formula and to estimate its asymptotic confidence intervals by regression of a transformed model containing it; (3) following the order of LC, LS, and LN, the fitted distributions became lower in the peak, less concave in the side, and shorter in the tail; overall the LC tended to overestimate, the LN tended to underestimate, while the LS was intermediate but slightly tended to underestimate, the observed SADs (particularly the number of common species in the right tail); (4) the six communities had some similar structural properties such as following similar distribution models, having a common
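Fitting Preston's lognormal SAD on the octave scale and estimating the theoretical species total S* by the integral method can be sketched as follows; the community data are synthetic and the binning and bound choices are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Synthetic species-abundance data: lognormal abundances binned into
# Preston octaves (log2 abundance classes).  All values are illustrative.
abund = rng.lognormal(mean=3.0, sigma=1.5, size=300)
octaves = np.floor(np.log2(abund)).astype(int)
R, counts = np.unique(octaves, return_counts=True)

# Preston's lognormal SAD with exp(-x^2) roll-off:
# S(R) = S0 * exp(-(R - R0)^2 / (2 s^2)).
def preston(R, S0, R0, s):
    return S0 * np.exp(-(R - R0) ** 2 / (2 * s ** 2))

(S0, R0, s), _ = curve_fit(preston, R, counts,
                           p0=[counts.max(), float(R.mean()), 2.0],
                           bounds=([0, -10, 0.1], [np.inf, 20, 10]))

# Theoretical total species count S*: the integral of the fitted curve
# over all octaves, i.e. including rare species behind the "veil line".
S_star = S0 * s * np.sqrt(2 * np.pi)
print(f"modal octave R0 = {R0:.2f}, width s = {s:.2f}, S* ≈ {S_star:.0f}")
```

Because the sample here is complete, S* lands close to the true 300 species; with a truncated sample the same integral would extrapolate beyond the observed octaves.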

  10. A Study of the Application of the Lognormal and Gamma Distributions to Corrective Maintenance Repair Time Data.

    Science.gov (United States)

    1982-10-01

    [Abstract not machine-readable: the OCR of this scanned report is garbled. Recoverable fragments identify the systems/equipments analyzed, including Set No. 2, French Fessenheim pumps (repair time), with sample size N = 43, and indicate that the lognormal assumption represented the repair-time data well while the gamma family did not represent the data sets.]

  11. Performance Analysis of a DS-CDMA Cellular System with Effects of Soft Handoff in Log-Normal Shadowing Channels

    Institute of Scientific and Technical Information of China (English)

    YANG Feng-rui; LUO Hong; ZHOU Jie; HISAKAZU Kikuchi

    2004-01-01

    Next generation wireless communication is based on a global system of fixed and wireless mobile services that are transportable across different network backbones, network service providers and network geographical boundaries. This paper presents an approach to investigate the effects of soft handover and perfect power control on the forward link in a DS-CDMA cellular system. In particular, the relationships between the size of the handover zone and the capacity gain are evaluated under the log-normal shadowing channel. The maximum forward-link capacity is then optimized with respect to the size of the soft handover zone for various system characteristics.

  12. Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-08-15

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyze the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time switching relaying, power splitting relaying and ideal relaying receiver which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel has shown to deteriorate the performance in all the relaying and EH protocols considered.

  13. Data assimilation in a coupled physical-biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 1-Model formulation and biological data assimilation twin experiments

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    A quadratic formulation for an incremental lognormal 4-dimensional variational assimilation method (incremental L4DVar) is introduced for assimilation of biogeochemical observations into a 3-dimensional ocean circulation model. L4DVar assumes that errors in the model state are lognormally rather than Gaussian distributed, and implicitly ensures that state estimates are positive definite, making this approach attractive for biogeochemical variables. The method is made practical for a realistic implementation having a large state vector through linear assumptions that render the cost function quadratic and allow application of existing minimization techniques. A simple nutrient-phytoplankton-zooplankton-detritus (NPZD) model is coupled to the Regional Ocean Modeling System (ROMS) and configured for the California Current System. Quadratic incremental L4DVar is evaluated in a twin model framework in which biological fields only are in error and compared to G4DVar which assumes Gaussian distributed errors. Five-day assimilation cycles are used and statistics from four years of model integration analyzed. The quadratic incremental L4DVar results in smaller root-mean-squared errors and better statistical agreement with reference states than G4DVar while maintaining a positive state vector. The additional computational cost and implementation effort are trivial compared to the G4DVar system, making quadratic incremental L4DVar a practical and beneficial option for realistic biogeochemical state estimation in the ocean.

  14. Application of the log-normal model to the prediction of Banco Sabadell assets

    OpenAIRE

    Debón Aucejo, Ana María; Cortés López, Juan Carlos; Moreno Navarro, Carla

    2008-01-01

    The aim of this work is to predict the value of a share. To this end, intraday quotations of Banco Sabadell (BS) assets during the first quarter of 2007 were used. First, we fit the log-normal model using the asset's history from 28 December to 20 March 2007. Second, the price that the BS share would reach on 21 March 2007 is computed with the model. Then, for that date, a prediction by int...
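The forecasting procedure described in this abstract, a log-normal (geometric Brownian motion) model fitted to a price history, can be sketched as follows; the price series, drift and volatility are simulated assumptions, not Banco Sabadell quotations:

```python
import numpy as np

rng = np.random.default_rng(6)

# Log-normal (geometric Brownian motion) one-step share-price forecast.
# The price history below is simulated for illustration.
prices = 8.0 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 60)))  # 60 closes

logret = np.diff(np.log(prices))               # daily log-returns
mu_hat, sig_hat = logret.mean(), logret.std(ddof=1)

# Point forecast and 95% log-normal interval for the next close.
last = prices[-1]
forecast = last * np.exp(mu_hat)
lo = last * np.exp(mu_hat - 1.96 * sig_hat)
hi = last * np.exp(mu_hat + 1.96 * sig_hat)
print(f"next-close forecast {forecast:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```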

  15. Simulation of mineral dust aerosol with piecewise log-normal approximation (PLA in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2011-09-01

    A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Module (CanAM4-PAM). The total simulated annual mean dust burden is 37.8 mg m−2 for year 2000, which is consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with several satellite observations and shows good agreement. The model yields a dust AOD of 0.042 and total AOD of 0.126 for the year 2000. The simulated aerosol direct radiative forcings (ADRF) of dust and total aerosol over ocean are −1.24 W m−2 and −4.76 W m−2 respectively, which show good consistency with satellite estimates for the year 2001.

  16. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Biases in long-range transport also contribute. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, which show good consistency with model estimates from other studies.

  17. Analysis of rabbit doe longevity using a semiparametric log-Normal animal frailty model with time-dependent covariates

    Directory of Open Access Journals (Sweden)

    Damgaard Lars

    2006-04-01

    Data on doe longevity in a rabbit population were analysed using a semiparametric log-Normal animal frailty model. Longevity was defined as the time from the first positive pregnancy test to death or culling due to pathological problems. Does culled for other reasons had right censored records of longevity. The model included time dependent covariates associated with year by season, the interaction between physiological state and the number of young born alive, and between order of positive pregnancy test and physiological state. The model also included an additive genetic effect and a residual in log frailty. Properties of marginal posterior distributions of specific parameters were inferred from a full Bayesian analysis using Gibbs sampling. All of the fully conditional posterior distributions defining a Gibbs sampler were easy to sample from, either directly or using adaptive rejection sampling. The marginal posterior mean estimates of the additive genetic variance and of the residual variance in log frailty were 0.247 and 0.690.

  18. Analysis of rabbit doe longevity using a semiparametric log-Normal animal frailty model with time-dependent covariates.

    Science.gov (United States)

    Sánchez, Juan Pablo; Korsgaard, Inge Riis; Damgaard, Lars Holm; Baselga, Manuel

    2006-01-01

    Data on doe longevity in a rabbit population were analysed using a semiparametric log-Normal animal frailty model. Longevity was defined as the time from the first positive pregnancy test to death or culling due to pathological problems. Does culled for other reasons had right censored records of longevity. The model included time dependent covariates associated with year by season, the interaction between physiological state and the number of young born alive, and between order of positive pregnancy test and physiological state. The model also included an additive genetic effect and a residual in log frailty. Properties of marginal posterior distributions of specific parameters were inferred from a full Bayesian analysis using Gibbs sampling. All of the fully conditional posterior distributions defining a Gibbs sampler were easy to sample from, either directly or using adaptive rejection sampling. The marginal posterior mean estimates of the additive genetic variance and of the residual variance in log frailty were 0.247 and 0.690.

  19. Evidence for two Lognormal States in Multi-wavelength Flux Variation of FSRQ PKS 1510-089

    CERN Document Server

    Kushwaha, Pankaj; Misra, Ranjeev; Sahayanathan, S; Singh, K P; Baliyan, K S

    2016-01-01

    We present a systematic characterization of multi-wavelength emission from blazar PKS 1510-089 using well-sampled data at infrared (IR)-optical, X-ray and $\gamma$-ray energies. The resulting flux distributions, except at X-rays, show two distinct lognormal profiles corresponding to a high and a low flux level. The dispersions exhibit energy-dependent behavior except for the LAT $\gamma$-ray and optical B-band. During the low-level flux states, the dispersion is higher towards the peak of the spectral energy distribution, with $\gamma$-ray being intrinsically more variable followed by IR and then optical, consistent with mainly being a result of a varying bulk Lorentz factor. On the other hand, the dispersions during the high state are similar in all bands except optical B-band, where thermal emission still dominates. The centers of the distributions are a factor of $\sim 4$ apart, consistent with expectations from studies of the extragalactic $\gamma$-ray background, with the high state showing a relatively harder mean spectral ind...

  20. Beyond Zipf's Law: The Lavalette Rank Function and its Properties

    CERN Document Server

    Fontanelli, Oscar; Yang, Yaning; Cocho, Germinal; Li, Wentian

    2016-01-01

    Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value, the Lavalette rank function, the probability density function can be derived analytically. We also show both computationally and analytically that Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of Lavalette rank function in several datasets. We also address three analysis issues on the statistical testing of Lavalette fitting function, comparison between Zipf's law and lognormal distribution through Lavalette function, and comparison between lognormal distribution and Lavalette distribution.
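The Lavalette rank function (the Beta rank function with equal parameters) can be sketched as follows, using the common parametrization f(r) = A[Nr/(N + 1 − r)]^(−b); the parameter values are illustrative:

```python
import numpy as np

def lavalette(r, N, A, b):
    """Lavalette rank function f(r) = A * (N*r / (N + 1 - r))**(-b),
    i.e. the Beta rank function with its two exponents set equal."""
    r = np.asarray(r, float)
    return A * (N * r / (N + 1 - r)) ** (-b)

N, A, b = 1000, 100.0, 0.5
r = np.arange(1, N + 1)
f = lavalette(r, N, A, b)

# Unlike a pure power law, the curve bends down at the tail: the local
# log-log slope is about -b at the head but diverges near r = N.
head_slope = np.log(f[1] / f[0]) / np.log(2)
tail_slope = np.log(f[-1] / f[-2]) / np.log(r[-1] / r[-2])
print(f"log-log slope near head {head_slope:.2f}, near tail {tail_slope:.2f}")
```

This head/tail asymmetry is exactly the deviation from Zipf's law the abstract discusses, and the downturned tail is what makes the resulting density resemble (without equalling) a lognormal.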

  1. Methodology for lognormal modelling of malignant pleural mesothelioma survival time distributions: a study of 5580 case histories from Europe and USA

    Science.gov (United States)

    Mould, Richard F.; Lahanas, Michael; Asselain, Bernard; Brewster, David; Burgers, Sjaak A.; Damhuis, Ronald A. M.; DeRycke, Yann; Gennaro, Valerio; Szeszenia-Dabrowska, Neonila

    2004-09-01

    A truncated left-censored and right-censored lognormal model has been validated for representing pleural mesothelioma survival times in the range 5-200 weeks for data subsets grouped by age for males, 40-49, 50-59, 60-69, 70-79 and 80+ years and for all ages combined for females. The cases available for study were from Europe and USA and totalled 5580. This is larger than any other pleural mesothelioma cohort accrued for study. The methodology describes the computation of reference baseline probabilities, 5-200 weeks, which can be used in clinical trials to assess results of future promising treatment methods. This study is an extension of previous lognormal modelling by Mould et al (2002 Phys. Med. Biol. 47 3893-924) to predict long-term cancer survival from short-term data where the proportion cured is denoted by C and the uncured proportion, which can be represented by a lognormal, by (1 - C). Pleural mesothelioma is a special case when C = 0.
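The cure-fraction lognormal survival model underlying this methodology can be sketched as follows, with pleural mesothelioma as the special case C = 0; the mu and sigma values are illustrative, not the paper's fitted baselines:

```python
import numpy as np
from scipy.stats import norm

def cure_lognormal_survival(t, C, mu, sigma):
    """Mixture-cure survival: a cured fraction C never fails, while the
    uncured fraction 1-C has lognormal survival times (parameters on the
    natural-log scale).  S(t) = C + (1-C) * (1 - Phi((ln t - mu)/sigma))."""
    t = np.asarray(t, float)
    return C + (1 - C) * norm.sf((np.log(t) - mu) / sigma)

# Pleural mesothelioma is the special case C = 0 (no cures); the median
# of 40 weeks and sigma below are illustrative assumptions.
weeks = np.array([5, 26, 52, 104, 200])
S = cure_lognormal_survival(weeks, C=0.0, mu=np.log(40.0), sigma=0.9)
for w, s in zip(weeks, S):
    print(f"S({w:>3} wk) = {s:.3f}")
```

With C > 0 the survival curve plateaus at C instead of falling to zero, which is how the earlier Mould et al (2002) work predicts long-term survival from short-term follow-up.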

  2. Pricing FX Options in the Heston/CIR Jump-Diffusion Model with Log-Normal and Log-Uniform Jump Amplitudes

    Directory of Open Access Journals (Sweden)

    Rehez Ahlip

    2015-01-01

    model for the exchange rate with log-normal jump amplitudes and the volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.

  3. Charged-Particle Thermonuclear Reaction Rates: II. Tables and Graphs of Reaction Rates and Probability Density Functions

    CERN Document Server

    Iliadis, Christian; Champagne, Art; Coc, Alain; Fitzgerald, Ryan

    2010-01-01

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this series (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, "lower limit", "nominal value" and "upper limit" of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters {\\mu} and {\\sigma} at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rat...
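The lognormal parameters mu and sigma tabulated in this record can be recovered directly from the three reported quantiles; a short Python sketch with hypothetical Monte Carlo samples standing in for the rate distribution:

```python
import numpy as np

# Recover lognormal summary parameters from Monte Carlo rate quantiles.
# The 0.50 quantile gives exp(mu); the 0.16/0.84 quantiles sit near
# exp(mu -/+ sigma). The sample below is a hypothetical stand-in.
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=-2.0, sigma=0.35, size=100_000)

low, med, high = np.quantile(samples, [0.16, 0.50, 0.84])
mu_hat = np.log(med)
sigma_hat = 0.5 * np.log(high / low)
print(round(float(mu_hat), 2), round(float(sigma_hat), 2))
```

This is why reporting the 0.16, 0.50 and 0.84 quantiles is convenient: under the lognormal approximation they map one-to-one onto the tabulated parameters.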

  4. Sample size required to estimate the arithmetic mean of a lognormal distribution

    OpenAIRE

    2012-01-01

    Closed-form formulas are presented for calculating the sample size required to estimate the arithmetic mean of a lognormal distribution for censored and uncensored data. The formulas result from fitting nonlinear models to the exact sample sizes reported by Pérez (1995) as a function of the geometric standard deviation, the percentage difference from the true arithmetic mean, and confidence levels of 90%, 95% and 99%. The formulas presented...

  5. Initial luminosity functions of starburst galaxies

    Science.gov (United States)

    Parnovsky, S.; Izotova, I.

    2016-12-01

    For a sample of about 800 starburst galaxies, we consider the initial luminosity functions, i.e. the distributions of galaxy luminosities at zero starburst age, based on the luminosities of the galaxies in the recombination Hα emission line from regions of ionised hydrogen and in the ultraviolet continuum. We find that the initial luminosity functions of starburst galaxies with Hα emission and ultraviolet continuum are satisfactorily approximated by a log-normal function.

  6. Constraints on the multi-lognormal magnetic fields from the observations of the cosmic microwave background and the matter power spectrum

    CERN Document Server

    Yamazaki, Dai G; Takahashi, Keitaro

    2013-01-01

    Primordial magnetic fields (PMFs), which were generated in the early universe before recombination, affect the motion of plasma and then the cosmic microwave background (CMB) and the matter power spectrum (MPS). We consider constraints on PMFs with a characteristic correlation length from the observations of the anisotropies of CMB (WMAP, QUAD, ACT, SPT, and ACBAR) and MPS. The spectrum of PMFs is modeled with multi-lognormal distributions (MLND), rather than power-law distribution, and we derive constraints on the strength $|\\mathbf{B}_k|$ at each wavenumber $k$ along with the standard cosmological parameters in the flat Universe and the foreground sources. We obtain upper bounds on the field strengths at $k=10^{-1}, 10^{-2},10^{-4}$ and $10^{-5}$ Mpc$^{-1}$ as 4.7 nG, 2.1 nG, 5.3 nG and 10.9 nG ($2\\sigma$ C.L.) respectively, while the field strength at $k=10^{-3} $Mpc$^{-1}$ turns out to have a finite value as $|\\mathbf{B}_{k = 10^{-3}}| = 6.2 \\pm 1.3 $ nG ($1\\sigma$ C.L.). This finite value is attributed t...

  7. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    Science.gov (United States)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine publishes its list of the world's two thousand leading or strongest publicly traded companies (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms in 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, then the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  8. Zipf's law and log-normal distributions in measures of scientific output across fields and institutions: 40 years of Slovenia's research as an example

    CERN Document Server

    Perc, Matjaz

    2010-01-01

    Slovenia's Current Research Information System (SICRIS) currently hosts 86,443 publications with citation data from 8,359 researchers working across the whole range of social and natural sciences from 1970 to the present. Using these data, we show that the citation distributions derived from individual publications have Zipfian properties in that they can be fitted by a power law $P(x) \sim x^{-\alpha}$, with $\alpha$ between 2.4 and 3.1 depending on the institution and field of research. Distributions of indexes that quantify the success of researchers rather than individual publications, on the other hand, cannot be associated with a power law. We find that for Egghe's g-index and Hirsch's h-index the log-normal form $P(x) \sim \exp[-a\ln x -b(\ln x)^2]$ applies best, with $a$ and $b$ depending moderately on the underlying set of researchers. In special cases, particularly for institutions with a strongly hierarchical constitution and research fields with high self-citation rates, exponential distributions can...
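The quoted log-normal form can be checked against the standard lognormal density; a small numerical verification with illustrative mu and sigma (expanding the squared log in the density gives b = 1/(2*sigma^2) and a = 1 - mu/sigma^2):

```python
import numpy as np

# Verify that P(x) ~ exp[-a*ln(x) - b*(ln x)^2] is exactly a lognormal
# density, with b = 1/(2*sigma^2) and a = 1 - mu/sigma^2.
# mu and sigma below are illustrative, not fitted to any index data.
mu, sigma = 1.2, 0.8
b = 1.0 / (2.0 * sigma**2)
a = 1.0 - mu / sigma**2
C = np.exp(-mu**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(0.1, 20.0, 200)
lognormal_pdf = np.exp(-(np.log(x) - mu) ** 2 / (2.0 * sigma**2)) / (
    x * sigma * np.sqrt(2.0 * np.pi)
)
quoted_form = C * np.exp(-a * np.log(x) - b * np.log(x) ** 2)
print(bool(np.allclose(lognormal_pdf, quoted_form)))
```

The 1/x Jacobian of the lognormal is what shifts the linear coefficient from mu/sigma^2 to 1 - mu/sigma^2.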

  9. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulqader Hussein

    2015-12-01

    Localization is a key aspect of a wireless sensor network and the focus of much interesting research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like the received signal strength indication (RSSI), face the major influence of this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map in the network, which refers to the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSS localization performance in the face of log-normal shadow fading effects and physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks.

  10. Influence functions of trimmed likelihood estimators for lifetime experiments

    OpenAIRE

    2015-01-01

    We provide a general approach for deriving the influence function for trimmed likelihood estimators using the implicit function theorem. The approach is applied to lifetime models with exponential or lognormal distributions possessing a linear or nonlinear link function. A side result is that the functional form of the trimmed estimator for location and linear regression used by Bednarski and Clarke (1993, 2002) and Bednarski et al. (2010) is not generally always the correct fu...

  11. Sequential compliance test method for lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    邓清; 袁宏杰

    2012-01-01

    Drawing on the sequential verification test program for the exponential distribution, this paper develops a sequential verification test program for lognormal lifetime products, with average life as the compliance indicator. The test procedure for the sequential test is given, and the upper limits of the producer's and consumer's risks under censoring are studied. Based on sampling methods used in engineering practice, a computer simulation method is proposed to evaluate the test program. The evaluation shows that, provided the sample size and censoring number requirements are met, the proposed sequential verification test controls the risks of both parties, and the consumer's risk stays below its expected value, giving the consumer additional protection.

  12. Methane emission rates from the Arctic coastal tundra at Barrow are log-normally distributed: Is this a tail that wags climate?

    Science.gov (United States)

    von Fischer, J. C.; Rhew, R.

    2008-12-01

    Over the past two growing seasons, we have conducted >200 point measurements of methane emission and ecosystem respiration rates on the Arctic coastal tundra within the Barrow Environmental Observatory. These measures reveal that methane emission rates are log-normally distributed, but ecosystem respiration rates are normally distributed. The contrast in frequency distributions indicates that methane and carbon dioxide emission rates respond in a qualitatively different way to their environmental drivers: while ecosystem respiration rates rise linearly with increasing temperature and soil moisture, methane emissions increase exponentially. Thus, the long positive tail in methane emission rates generates a positive feedback on climate change that is strongly non-linear. To further evaluate this response, we examined the spatial statistics of our dataset, and conducted additional measures of carbon flux from points on the landscape that typically had the highest rates of methane emission. The spatial analysis showed that neither ecosystem respiration nor methane emission rates have spatial co-correlation beyond that predicted by macroscopic properties of vegetation (e.g., species composition, plant height) and soil (e.g., permafrost depth, temperature, water content), suggesting that our findings can be used to scale up. Our analysis of high-emission points focused on wet and flooded areas where Carex aquatilis growth was greatest. Here, we found variation in methane emission rates to be correlated with Carex aboveground biomass and rates of gross primary production, but not ecosystem respiration. Given the sensitivity of Carex's phenotype to inundation, permafrost depth and soil temperature, we anticipate that the magnitude of the climate-methane feedback in the Arctic coastal plain will depend strongly on how permafrost thaw alters the ecology of Carex aquatilis.
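The contrast between the two frequency distributions can be probed with a standard normality test; a hedged Python sketch on synthetic fluxes (not the Barrow data):

```python
import numpy as np
from scipy import stats

# Hypothetical fluxes mimicking the reported pattern: CH4 emission rates
# lognormal, ecosystem respiration rates normal. Not the Barrow measurements.
rng = np.random.default_rng(3)
ch4 = rng.lognormal(mean=0.5, sigma=0.9, size=200)
resp = rng.normal(loc=5.0, scale=1.0, size=200)

# Shapiro-Wilk normality test on raw and log-transformed fluxes
p_ch4_raw = stats.shapiro(ch4).pvalue
p_ch4_log = stats.shapiro(np.log(ch4)).pvalue
p_resp = stats.shapiro(resp).pvalue
print(p_ch4_raw < 0.05, p_ch4_log > 0.05, p_resp > 0.05)
```

If the raw values fail the test but their logs pass, a lognormal description is consistent with the data; for a normally distributed variable the raw values pass directly.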

  13. Mills' ratio: Reciprocal concavity and functional inequalities

    CERN Document Server

    Baricz, Árpád

    2010-01-01

    This note contains sufficient conditions on the probability density function of an arbitrary continuous univariate distribution for the corresponding Mills ratio to be reciprocally convex (concave). To illustrate the applications of the main results, the Mills ratios of some common continuous univariate distributions, such as the gamma, log-normal and Student's t distributions, are discussed in detail. An application to monopoly theory is also summarized.

  14. Discerning the Form of the Dense Core Mass Function

    CERN Document Server

    Swift, Jonathan J

    2009-01-01

    We investigate the ability to discern between lognormal and powerlaw forms for the observed mass function of dense cores in star forming regions. After testing our fitting, goodness-of-fit, and model selection procedures on simulated data, we apply our analysis to 14 datasets from the literature. Whether the core mass function has a powerlaw tail or whether it follows a pure lognormal form cannot be distinguished from current data. From our simulations it is estimated that datasets from uniform surveys containing more than approximately 500 cores with a completeness limit below the peak of the mass distribution are needed to definitively discern between these two functional forms. We also conclude that the width of the core mass function may be more reliably estimated than the powerlaw index of the high mass tail and that the width may also be a more useful parameter in comparing with the stellar initial mass function to deduce the statistical evolution of dense cores into stars.
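As a toy version of this kind of model selection (not the paper's fitting and goodness-of-fit machinery), one can compare maximum-likelihood lognormal and power-law fits by AIC on a simulated sample:

```python
import numpy as np
from scipy import stats

# AIC comparison of MLE lognormal and pure power-law fits on a simulated
# core-mass sample; a simplified stand-in for the paper's procedure.
rng = np.random.default_rng(4)
masses = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # truth: lognormal

# lognormal MLE has closed form in log space
mu, sig = np.log(masses).mean(), np.log(masses).std()
ll_logn = stats.lognorm(s=sig, scale=np.exp(mu)).logpdf(masses).sum()

# power law p(m) = (alpha-1)/m_min * (m/m_min)**(-alpha), m >= m_min (MLE alpha)
m_min = masses.min()
alpha = 1.0 + len(masses) / np.log(masses / m_min).sum()
ll_pl = (np.log(alpha - 1.0) - np.log(m_min) - alpha * np.log(masses / m_min)).sum()

# AIC with 2 free parameters for the lognormal, 1 for the power law
aic_logn = 2 * 2 - 2 * ll_logn
aic_pl = 2 * 1 - 2 * ll_pl
print(aic_logn < aic_pl)
```

With 500 cores drawn from a clean lognormal the comparison is decisive; the paper's point is that real surveys, with completeness cuts and fewer cores, sit well below this regime.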

  15. Power-transfer effects in monomode optical nonlinear waveguiding structures.

    Science.gov (United States)

    Jakubczyk, Z; Jerominek, H; Patela, S; Tremblay, R; Delisle, C

    1987-09-01

    We describe power-transfer effects, over a certain threshold, among constituents of planar waveguiding structures consisting of an optical linear layer deposited onto a nonlinear substrate (CdS(x)Se(1-x)-doped glass). Proper selection of the thickness of the linear waveguiding film and the refractive index of the linear cladding allows one to obtain optical transistor action and to construct all-optical AND, OR, NOT, and XOR logic gates. The effects appear for the TE(0) guided mode.

  16. Capteur de temperature interferometrique a fibre optique monomode

    Science.gov (United States)

    Lacroix, S.; Bures, J.; Parent, M.; Lapierre, J.

    1984-08-01

    We describe the use of an optical fiber reflection two-wave interferometer as a temperature sensor. As it uses only one fiber this device is easy to set up. We calculate its sensitivity based on the temperature rate of change of the refractive index and length of the fiber, for the case of pure silica. The measured sensitivity, equal to 73 fringes/°C for a 1 m long fiber, is slightly higher than the theoretical value. This result is in agreement with the expected increase in the thermal expansion and thermo-optic coefficients of doped silica.

  17. Non-Spatial Analysis of Relative Risk of Dengue Disease in Bandung Using Poisson-gamma and Log-normal Models: A Case Study of Dengue Data from Santo Borromeus Hospital in 2013

    Science.gov (United States)

    Irawan, R.; Yong, B.; Kristiani, F.

    2017-02-01

    Bandung, one of the cities in Indonesia, is vulnerable to dengue disease in both its early stage (Dengue Fever) and severe stage (Dengue Haemorrhagic Fever and Dengue Shock Syndrome). In 2013, there were 5,749 patients in Bandung and 2,032 of the patients were hospitalized in Santo Borromeus Hospital. In this paper, two models, Poisson-gamma and Log-normal, use Bayesian inference to estimate the relative risk. The calculation is done by the Markov Chain Monte Carlo method, simulated using the Gibbs sampling algorithm in the WinBUGS 1.4.3 software. The analysis of dengue disease in 30 sub-districts of Bandung in 2013, based on Santo Borromeus Hospital's data, shows that Coblong and Bandung Wetan sub-districts had the highest relative risk under both models for the early stage, severe stage, and all stages. Meanwhile, Cinambo sub-district had the lowest relative risk under both models for the severe stage and all stages, and Bojongloa Kaler sub-district had the lowest relative risk under both models for the early stage. For the model comparison using the DIC (Deviance Information Criterion) method, the Log-normal model is a better model for the early stage and severe stage, but for all stages, the Poisson-gamma model is a better fit to the data.
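A minimal single-area sketch of the Log-normal relative-risk model can be written without WinBUGS; here a random-walk Metropolis sampler stands in for the Gibbs scheme, and the counts are hypothetical:

```python
import numpy as np

# One-area sketch of the log-normal relative-risk model:
# y ~ Poisson(E * theta), log(theta) ~ Normal(0, tau^2).
# Random-walk Metropolis stands in for WinBUGS' Gibbs sampler;
# y, E and tau are hypothetical, not the Bandung data.
rng = np.random.default_rng(5)
y, E, tau = 30, 20.0, 1.0          # observed cases, expected cases, prior sd

def log_post(log_theta):
    # Poisson log-likelihood (up to a constant) plus the lognormal prior
    return y * log_theta - E * np.exp(log_theta) - 0.5 * (log_theta / tau) ** 2

samples, cur = [], 0.0
for _ in range(20_000):
    prop = cur + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    samples.append(cur)

theta_hat = float(np.exp(np.array(samples[5_000:])).mean())
print(round(theta_hat, 2))   # posterior mean relative risk, near y/E = 1.5
```

With a weak prior the posterior mean stays close to the naive ratio y/E; in the multi-area model the prior pools information across sub-districts and shrinks unstable small-area estimates.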

  18. The Probability Density Functions to Diameter Distributions for Scots Pine Oriental Beech and Mixed Stands

    Directory of Open Access Journals (Sweden)

    Aydın Kahriman

    2011-11-01

    Determining the diameter distribution of a stand and its relation to stand age, site index, density and mixture percentage is very important both biologically and economically. The two-parameter Weibull, three-parameter Weibull, two-parameter Gamma, three-parameter Gamma, Beta, two-parameter Lognormal, three-parameter Lognormal, Normal and Johnson SB probability density functions were used to determine diameter distributions. This study compared their performance in describing different diameter distributions and identified the most successful function. The data were obtained from 162 temporary sample plots measured in Scots pine and Oriental beech mixed stands in the Black Sea Region. The results show that the four-parameter Johnson SB function is the most successful at describing the diameter distributions of both Scots pine and Oriental beech, based on error index values calculated from the differences between observed and predicted diameter distributions.
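A scaled-down version of this comparison can be sketched with SciPy's built-in distributions; the sample and the KS-based score below are stand-ins for the study's plot data and error index:

```python
import numpy as np
from scipy import stats

# Hypothetical diameter sample (cm) standing in for one plot; candidate
# densities are fitted by maximum likelihood and compared with the
# Kolmogorov-Smirnov statistic (a stand-in for the study's error index).
rng = np.random.default_rng(2)
diameters = rng.weibull(2.2, 500) * 18.0 + 5.0   # shifted Weibull-like data

candidates = {
    "weibull_min": stats.weibull_min,
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(diameters)                 # MLE, location shift included
    results[name] = stats.kstest(diameters, dist.cdf, args=params).statistic

best = min(results, key=results.get)
print(best, round(results[best], 3))
```

SciPy also implements the Johnson SB family as `scipy.stats.johnsonsb`, so the study's preferred four-parameter form could be added to the candidate dictionary in the same way.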

  19. Power law behaviors in natural and social phenomena and the double Pareto lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    方正; 王杰

    2011-01-01

    Power law behaviors are ubiquitous in natural and social phenomena, yet how to describe them accurately and explain why they occur has long been an open problem. The double Pareto lognormal distribution offers, from the stochastic process point of view, a viable approach to this problem. This article elaborates the mathematical concept of the double Pareto lognormal distribution and provides an overview of natural and social phenomena that exhibit such a distribution. These include the number of friends in social networks, Internet file sizes, stock market returns, wealth possessions in human societies, human settlement sizes, oil field reserves, and areas burnt in forest wildfires.

  20. Aliphatic polycarbonates based on carbon dioxide, furfuryl glycidyl ether, and glycidyl methyl ether: reversible functionalization and cross-linking.

    Science.gov (United States)

    Hilf, Jeannette; Scharfenberg, Markus; Poon, Jeffrey; Moers, Christian; Frey, Holger

    2015-01-01

    Well-defined poly((furfuryl glycidyl ether)-co-(glycidyl methyl ether) carbonate) (P((FGE-co-GME)C)) copolymers with varying furfuryl glycidyl ether (FGE) content in the range of 26% to 100% are prepared directly from CO2 and the respective epoxides in a solvent-free synthesis. All materials are characterized by size-exclusion chromatography (SEC), (1)H NMR spectroscopy, and differential scanning calorimetry (DSC). The furfuryl-functional samples exhibit monomodal molecular weight distributions with Mw/Mn in the range of 1.16 to 1.43 and molecular weights (Mn) between 2300 and 4300 g mol(-1). Thermal properties reflect the amorphous structure of the polymers. Both post-functionalization and cross-linking are performed via Diels-Alder chemistry using maleimide derivatives, leading to reversible network formation. This transformation is shown to be thermally reversible at 110 °C.

  1. Functional

    Directory of Open Access Journals (Sweden)

    Fedoua Gandia

    2014-07-01

    The study was carried out to investigate the effects of inhaled Mg, alone and associated with F, in the treatment of bronchial hyperresponsiveness. 43 male Wistar rats were randomly divided into four groups and exposed to inhaled NaCl 0.9%, MeCh, MgSO4 and MgF2. Pulmonary changes were assessed by means of functional tests and quantitative histological examination of the lungs and trachea. Results revealed that delivery of inhaled Mg associated with F led to a greater decrease of total lung resistance than inhaled Mg alone (p < 0.05). Histological examinations illustrated that inhaled Mg associated with F suppressed muscular hypertrophy (p = 0.034) and bronchoconstriction (p = 0.006) in MeCh-treated rats more markedly than inhaled Mg alone. No histological changes were found in the trachea. This study showed that inhaled Mg associated with F attenuated the central components of the changes in MeCh-provoked experimental asthma better than inhaled Mg alone, potentially providing a new therapeutic approach against asthma.

  2. The Distribution of the Asymptotic Number of Citations to Sets of Publications by a Researcher or From an Academic Department Are Consistent With a Discrete Lognormal Model

    CERN Document Server

    Moreira, João A G; Amaral, Luís A Nunes

    2015-01-01

    How to quantify the impact of a researcher's or an institution's body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, have become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator to quantify the scientific impact of both individual researchers and research institutions grounded on the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as "very high research activity". Our approach has three distinct advantages. First, it accurately captures the overall scien...

  3. Data assimilation in a coupled physical-biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 2-Joint physical and biological data assimilation twin experiments

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    Coupled physical and biological data assimilation is performed within the California Current System using model twin experiments. The initial conditions of physical and biological variables are estimated using the four-dimensional variational (4DVar) method under Gaussian and lognormal error distribution assumptions, respectively. Errors are assumed to be independent, yet the variables are coupled by assimilation through the model dynamics. Using a nutrient-phytoplankton-zooplankton-detritus (NPZD) model coupled to an ocean circulation model (the Regional Ocean Modeling System, ROMS), the coupled data assimilation procedure is evaluated by comparing results to experiments with no assimilation and with assimilation of physical data and biological data separately. Independent assimilation of physical (biological) data reduces the root-mean-squared error (RMSE) of physical (biological) state variables by more than 56% (43%). However, the improvement in biological (physical) state variables is less than 7% (13%). In contrast, coupled data assimilation improves both physical and biological components by 57% and 49%, respectively. Coupled data assimilation shows robust performance with varied observational errors, resulting in significantly smaller RMSEs compared to the free run. It still estimates observed variables better than the free run even in the presence of physical and biological model error, but leads to higher RMSEs for unobserved variables. A series of twin experiments illustrates that coupled physical and biological 4DVar assimilation is computationally efficient and practical, capable of providing reliable estimates of the coupled system, and ready to be examined in a realistic configuration.

  4. Data assimilation in a coupled physical-biogeochemical model of the California current system using an incremental lognormal 4-dimensional variational approach: Part 3-Assimilation in a realistic context using satellite and in situ observations

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    A fully coupled physical and biogeochemical ocean data assimilation system is tested in a realistic configuration of the California Current System using the Regional Ocean Modeling System. In situ measurements for sea surface temperature and salinity as well as satellite observations for temperature, sea level and chlorophyll are used for the year 2000. Initial conditions of the combined physical and biogeochemical state are adjusted at the start of each 3-day assimilation cycle. Data assimilation results in substantial reduction of root-mean-square error (RMSE) over unconstrained model output. RMSE for physical variables is slightly lower when assimilating only physical variables than when assimilating both physical variables and surface chlorophyll. Surface chlorophyll RMSE is lowest when assimilating both physical variables and surface chlorophyll. Estimates of subsurface, nitrate and chlorophyll show modest improvements over the unconstrained model run relative to independent, unassimilated in situ data. Assimilation adjustments to the biogeochemical initial conditions are investigated within different regions of the California Current System. The incremental, lognormal 4-dimensional data assimilation method tested here represents a viable approach to coupled physical biogeochemical state estimation at practical computational cost.

  5. Multiplicative processes and power laws in human reaction times derived from hyperbolic functions

    Energy Technology Data Exchange (ETDEWEB)

    Medina, José M., E-mail: jmanuel@fisica.uminho.pt [Center for Physics, University of Minho, Campus de Gualtar, 4710-057 Braga (Portugal)

    2012-04-09

    In sensory psychophysics, reaction time is a measure of the stochastic latency elapsed from stimulus presentation until a sensory response occurs as soon as possible. A random multiplicative model of reaction time variability is investigated for generating the reaction time probability density functions. The model describes a generic class of hyperbolic functions by Piéron's law. The results demonstrate that reaction time distributions are the combination of log-normal with power-law density functions. A transition from log-normal to power-law behavior is found and depends on the transfer of information in neurons. The conditions to obtain Zipf's law are analyzed. Highlights: human reaction time variability is examined via random multiplicative processes; a transition from power-law to log-normal distributions is described; the transition depends on the transfer of information in neurons; Zipf's law in reaction time distributions depends on the exponent of Piéron's law.

  6. Evolving Molecular Cloud Structure and the Column Density Probability Distribution Function

    CERN Document Server

    Ward, Rachel L; Sills, Alison

    2014-01-01

    The structure of molecular clouds can be characterized with the probability distribution function (PDF) of the mass surface density. In particular, the properties of the distribution can reveal the nature of the turbulence and star formation present inside the molecular cloud. In this paper, we explore how these structural characteristics evolve with time and also how they relate to various cloud properties as measured from a sample of synthetic column density maps of molecular clouds. We find that, as a cloud evolves, the peak of its column density PDF will shift to surface densities below the observational threshold for detection, resulting in an underlying lognormal distribution which has been effectively lost at late times. Our results explain why certain observations of actively star-forming, dynamically older clouds, such as the Orion molecular cloud, do not appear to have any evidence of a lognormal distribution in their column density PDFs. We also study the evolution of the slope and deviation point ...

  7. Study on Fitting Heat Release Rate of HCCI Combustion with Partition Lognormal Distribution Function%运用分段对数正态函数拟合HCCI燃烧放热规律的研究

    Institute of Scientific and Technical Information of China (English)

    张宗法; 熊锐; 罗伟欢; 周伟文

    2008-01-01

    Based on an analysis of the use of the lognormal function in studies of the HCCI heat release rate and of its shortcomings, a partition (piecewise) lognormal function is proposed for the first time and used to fit measured HCCI heat release rate curves. The results show that the partition lognormal function fully captures the staged character of HCCI combustion and yields a better fit.

  8. Inversion method based on stochastic optimization for particle sizing.

    Science.gov (United States)

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  9. The use of the Wagner function to describe poled-order relaxation processes in electrooptic polymers

    Science.gov (United States)

    Verbiest, T.; Burland, D. M.

    1995-04-01

    The Wagner (lognormal) time decay function is used to describe the decay of the second harmonic signal due to relaxation of the electric-field-poled order in the guest-host polymer system 20 wt% lophine 1 in Ultem®. This function can be related to a Gaussian distribution of Arrhenius activation energies. From the temperature dependence of the relaxation process one can determine the average value of the activation energy. In the present case a value of 40 kcal/mol is found, consistent with experimental values obtained for a variety of other thermally activated processes in polymers.
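The Wagner picture (exponential decays averaged over a Gaussian spread of log relaxation times) can be sketched numerically; tau0 and w below are illustrative, not the paper's fitted values:

```python
import numpy as np

# Wagner (lognormal) decay: a stretched relaxation obtained by averaging
# exponential decays over a Gaussian distribution of ln(tau).
# tau0 and w are illustrative parameters, not values from the paper.
def wagner_decay(t, tau0=1.0, w=2.0, n=4001):
    s = np.linspace(-8.0, 8.0, n)                       # s = ln(tau / tau0)
    ds = s[1] - s[0]
    g = np.exp(-((s / w) ** 2)) / (w * np.sqrt(np.pi))  # normalized in s
    tau = tau0 * np.exp(s)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return (g * np.exp(-t[:, None] / tau)).sum(axis=1) * ds

phi = wagner_decay([0.0, 1.0, 10.0])
print(np.round(phi, 3))   # starts at 1 and decays monotonically
```

The width w of the Gaussian in ln(tau) is what maps onto a Gaussian distribution of Arrhenius activation energies at fixed temperature.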

  10. The relationship between the prestellar core mass function and the stellar initial mass function

    CERN Document Server

    Goodwin, Simon P; Kroupa, Pavel; Ward-Thompson, Derek; Whitworth, Anthony P

    2007-01-01

    Stars form from dense molecular cores, and the mass function of these cores (the CMF) is often found to be similar to the form of the stellar initial mass function (IMF). This suggests that the form of the IMF is the result of the form of the CMF. However, most stars are thought to form in binary and multiple systems, therefore the relationship between the IMF and the CMF cannot be trivial. We test two star formation scenarios - one in which all stars form as binary or triple systems, and one in which low-mass stars form in a predominantly single mode. We show that from a log-normal CMF, similar to those observed, and expected on theoretical grounds, the model in which all stars form as multiples gives a better fit to the IMF.

  11. Mathematical functions for the representation of chromatographic peaks.

    Science.gov (United States)

    Di Marco, V B; Bombi, G G

    2001-10-05

    About ninety empirical functions for the representation of chromatographic peaks have been collected and tabulated. The table, based on almost 200 references, reports for every function: (1) the most used name; (2) the most convenient equation, with the existence intervals for the adjustable parameters and for the independent variable; (3) the applications; (4) the mathematical properties, in relation to the possible applications. The list also includes equations originally proposed to represent peaks obtained in other analytical techniques (e.g. in spectroscopy), which in many instances have proved useful in representing chromatographic peaks as well; the built-in functions employed in some commercial peak-fitting software packages were included, too. Some of the most important chromatographic functions, i.e. the Exponentially Modified Gaussian, the Poisson, the Log-normal, the Edgeworth/Cramér series and the Gram/Charlier series, have been reviewed and commented on in more detail.

  12. On the mass function of stars growing in a flocculent medium

    CERN Document Server

    Maschberger, Thomas

    2013-01-01

    Stars form in regions of very inhomogeneous densities and may have chaotic orbital motions. This leads to a time variation of the accretion rate, which will spread the masses over some mass range. We investigate the mass distribution functions that arise from fluctuating accretion rates in non-linear accretion, $\\dot{m} \\propto m^{\\alpha}$. The distribution functions evolve in time and develop a power law tail attached to a lognormal body, as in numerical simulations of star formation. Small fluctuations may be modelled by a Gaussian and develop a power-law tail $\\propto m^{-\\alpha}$ at the high-mass side for $\\alpha > 1$ and at the low-mass side for $\\alpha < 1$. Large fluctuations require that their distribution is strictly positive, for example, lognormal. For positive fluctuations the mass distribution function always develops the power-law tail at the high-mass side, independent of whether $\\alpha$ is larger or smaller than unity. Furthermore, we discuss Bondi-Hoyle accretion in a supersonically turbulent...
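    A minimal Monte Carlo sketch of the mechanism described above: many stars start at the same seed mass and accrete non-linearly with strictly positive (lognormal) fluctuations, so the mass distribution spreads with time. All parameters (α = 0.7, step size, fluctuation width) are illustrative; α < 1 is chosen, for which the abstract places the power-law tail on the low-mass side.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, dt = 50_000, 0.7, 0.01      # population size, accretion exponent, time step
m = np.ones(n)                        # identical seed masses
spread = []                           # standard deviation of log-mass over time
for _ in range(300):
    eps = rng.lognormal(0.0, 0.5, n)  # strictly positive accretion fluctuations
    m = m + dt * eps * m ** alpha     # Euler step of mdot = eps * m**alpha
    spread.append(np.log(m).std())
```

    The growing spread of log m is what turns an initially delta-like mass distribution into a lognormal-like body.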

  13. A general approach to crystalline and monomodal pore size mesoporous materials

    National Research Council Canada - National Science Library

    Poyraz, Altug S; Kuo, Chung-Hao; Biswas, Sourav; King'ondu, Cecil K; Suib, Steven L

    2013-01-01

    Mesoporous oxides attract a great deal of interest in many fields, including energy, catalysis and separation, because of their tunable structural properties such as surface area, pore volume and size...

  14. Understanding star formation in molecular clouds. III. Probability distribution functions of molecular lines in Cygnus X

    Science.gov (United States)

    Schneider, N.; Bontemps, S.; Motte, F.; Ossenkopf, V.; Klessen, R. S.; Simon, R.; Fechtenbaum, S.; Herpin, F.; Tremblin, P.; Csengeri, T.; Myers, P. C.; Hill, T.; Cunningham, M.; Federrath, C.

    2016-03-01

    The probability distribution function of column density (N-PDF) serves as a powerful tool to characterise the various physical processes that influence the structure of molecular clouds. Studies that use extinction maps or H2 column-density maps (N) that are derived from dust show that star-forming clouds can best be characterised by lognormal PDFs for the lower N range and a power-law tail for higher N, which is commonly attributed to turbulence and self-gravity and/or pressure, respectively. While PDFs from dust cover a large dynamic range (typically N ~ 10^20-10^24 cm^-2 or Av ~ 0.1-1000), PDFs obtained from molecular lines - converted into H2 column density - potentially trace more selectively different regimes of (column) densities and temperatures. They also enable us to distinguish different clouds along the line of sight using the velocity information. We report here on PDFs that were obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region, and make a comparison to a PDF that was derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av ~ 1-30, but is cut off for higher Av because of optical depth effects. The PDFs of C18O and 13CO are mostly lognormal up to Av ~ 1-15, followed by excess up to Av ~ 40. Above that value, all CO PDFs drop, which is most likely due to depletion. The high density tracers CS and N2H+ exhibit only a power law distribution between Av ~ 15 and 400, respectively. The PDF from dust is lognormal for Av ~ 3-15 and has a power-law tail up to Av ~ 500. Absolute values for the molecular line column densities are, however, rather uncertain because of abundance and excitation temperature variations. If we take the dust PDF at face value, we "calibrate" the molecular line PDF of CS to that of the dust and determine an abundance [CS]/[H2] of 10^-9. The slopes of the power-law tails of the CS, N2H+, and dust PDFs are -1.6, -1.4, and -2.3, respectively, and are thus consistent
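    The lognormal-body/power-law-tail decomposition can be illustrated with a synthetic column-density sample. The tail exponent (-2.3, as quoted for the dust PDF) is the only number taken from the abstract; the body parameters, sample sizes, and binning are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
# Lognormal "turbulent" body plus a power-law "self-gravity" tail
# p(Av) ∝ Av**-2.3 above Av ≈ 15 (slope from the dust PDF in the abstract)
body = rng.lognormal(np.log(7.0), 0.35, 90_000)
s = 2.3
u = 1.0 - rng.random(10_000)                     # uniform in (0, 1]
tail = 15.0 * u ** (-1.0 / (s - 1.0))            # inverse-CDF Pareto sampling
Av = np.concatenate([body, tail])

# Recover the tail slope from a log-log histogram well above the transition
edges = np.logspace(np.log10(40.0), np.log10(300.0), 15)
hist, _ = np.histogram(Av, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
ok = hist > 0
slope = np.polyfit(np.log10(centers[ok]), np.log10(hist[ok]), 1)[0]
```

    Fitting well above the lognormal/power-law transition avoids contaminating the slope estimate with the tail of the lognormal body.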

  15. Mapping the core mass function onto the stellar IMF: multiplicity matters

    CERN Document Server

    Holman, K; Goodwin, S P; Whitworth, A P

    2013-01-01

    Observations indicate that the central portions of the Present-Day Prestellar Core Mass Function (CMF) and the Stellar Initial Mass Function (IMF) both have approximately log-normal shapes, but that the CMF is displaced to higher mass than the IMF by a factor F = 4+/-1. This has led to suggestions that the shape of the IMF is directly inherited from the shape of the CMF - and therefore, by implication, that there is a self-similar mapping from the CMF onto the IMF. If we assume a self-similar mapping, it follows (i) that F = N0/eta, where eta is the mean fraction of a core's mass that ends up in stars, and N0 is the mean number of stars spawned by a single core; and (ii) that the stars spawned by a single core must have an approximately log-normal distribution of relative masses, with universal standard deviation sigma0. Observations can be expected to deliver ever more accurate estimates of F, but this still leaves a degeneracy between eta and N0; and sigma0 is also unconstrained by observation. Here we sho...
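    The self-similar relation F = N0/eta can be checked with a quick Monte Carlo experiment. The CMF parameters and the values eta = 0.25, N0 = 2, sigma0 = 0.3 below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cores = 200_000
core_mass = rng.lognormal(mean=0.0, sigma=0.5, size=n_cores)  # toy lognormal CMF

eta, N0, sigma0 = 0.25, 2, 0.3    # efficiency, stars per core, relative-mass spread
# Each core spawns N0 stars with lognormal relative masses that share the
# fraction eta of the core mass
rel = rng.lognormal(0.0, sigma0, size=(n_cores, N0))
rel /= rel.sum(axis=1, keepdims=True)
star_mass = (eta * core_mass[:, None] * rel).ravel()

# Displacement factor between the CMF and the IMF (medians as peak proxies);
# the self-similar mapping predicts F ≈ N0/eta
F = np.median(core_mass) / np.median(star_mass)
```

    Total stellar mass equals eta times the total core mass by construction, so the displacement comes purely from splitting and efficiency.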

  16. What does the N-point function hierarchy of the cosmological matter density field really measure ?

    CERN Document Server

    Carron, Julien

    2015-01-01

    The cosmological dark matter field is not completely described by its hierarchy of $N$-point functions, a non-perturbative effect with the consequence that only part of the theory can be probed with the hierarchy. We give here an exact characterization of the joint information of the full set of $N$-point correlators of the lognormal field. The lognormal field is the archetypal example of a field where this effect occurs, and, at the same time, one of the few tractable and insightful available models to specify fully the statistical properties of the evolved matter density field beyond the perturbative regime. Nonlinear growth in the Universe in that model is set letting the log-density field probability density functional evolve keeping its Gaussian shape, according to the diffusion equation in Euclidean space. We show that the hierarchy probes a different evolution equation, the diffusion equation defined not in Euclidean space but on the compact torus, with uniformity as the long-term solution. The extract...

  17. What does the N-point function hierarchy of the cosmological matter density field really measure?

    Science.gov (United States)

    Carron, J.; Szapudi, I.

    2017-08-01

    The cosmological dark matter field is not completely described by its hierarchy of N-point functions, a non-perturbative effect with the consequence that only part of the theory can be probed with the hierarchy. We give here an exact characterization of the joint information of the hierarchy within the lognormal field. The lognormal field is the archetypal example of a field where this effect occurs, and, at the same time, one of the few tractable and insightful available models to specify fully the statistical properties of the evolved matter density field beyond the perturbative regime. Non-linear growth in the Universe in that model is set letting the log-density field probability density functional evolve keeping its Gaussian shape, according to the diffusion equation in Euclidean space. We show that the hierarchy probes a different evolution equation, the diffusion equation defined not in Euclidean space but on the compact torus, with uniformity as the long-term solution. The extraction of the hierarchy of correlators can be recast in the form of a non-linear transformation applied to the field, 'wrapping', undergoing a sharp transition towards complete disorder in the deeply non-linear regime, where all memory of the initial conditions is lost.

  18. THE INITIAL MASS FUNCTION MODELED BY A LEFT TRUNCATED BETA DISTRIBUTION

    Energy Technology Data Exchange (ETDEWEB)

    Zaninetti, Lorenzo, E-mail: zaninetti@ph.unito.it [Dipartimento di Fisica, Via Pietro Giuria 1, I-10125 Torino (Italy)

    2013-03-10

    The initial mass function for stars is usually fitted by three straight lines, which means it has seven parameters. The presence of brown dwarfs (BDs) increases the number of straight lines to four and the number of parameters to nine. Another common fitting function is the lognormal distribution, which is characterized by two parameters. This paper is devoted to demonstrating the advantage of introducing a left truncated beta probability density function, which is characterized by four parameters. The constant of normalization, the mean, the mode, and the distribution function are calculated for the left truncated beta distribution. The normal beta distribution that results from convolving independent normally distributed and beta distributed components is also derived. The chi-square test and the Kolmogorov-Smirnov test are performed on a first sample of stars and BDs that belongs to the massive young cluster NGC 6611, and on a second sample that represents the masses of the stars of the cluster NGC 2362.
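    A sketch of the left truncated beta density: truncate the standard beta pdf at a point x_t, renormalize over [x_t, 1], and map the unit interval onto a mass range. The shape parameters, truncation point, and mass limits below are placeholders, not the values fitted to NGC 6611 or NGC 2362.

```python
import numpy as np
from scipy.stats import beta

a, b = 2.0, 3.0            # beta shape parameters (placeholders)
x_t = 0.02                 # left truncation point on the unit interval (assumed)
m_l, m_u = 0.1, 10.0       # mass range in solar masses (assumed)

# Constant of normalization: survival probability beyond the truncation point
norm = 1.0 - beta.cdf(x_t, a, b)

def pdf_trunc(x):
    """Left truncated beta probability density on [x_t, 1]."""
    return np.where(x >= x_t, beta.pdf(x, a, b) / norm, 0.0)

x = np.linspace(x_t, 1.0, 100_001)
dx = x[1] - x[0]
area = np.sum(pdf_trunc(x)) * dx            # numerical check of normalization
mean_x = np.sum(x * pdf_trunc(x)) * dx      # mean on the unit interval
mean_mass = m_l + (m_u - m_l) * mean_x      # mean mapped onto the mass range
```

    The same quadrature gives the mode and the distribution function; the paper derives closed forms for all of these.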

  19. The initial mass function modeled by a left truncated beta distribution

    CERN Document Server

    Zaninetti, L

    2013-01-01

    The initial mass function (IMF) for the stars is usually fitted by three straight lines, which means seven parameters. The presence of brown dwarfs (BD) increases the number of straight lines to four and the number of parameters to nine. Another common fitting function is the lognormal distribution, which is characterized by two parameters. This paper is devoted to demonstrating the advantage of introducing a left truncated beta probability density function, which is characterized by four parameters. The constant of normalization, the mean, the mode and the distribution function are calculated for the left truncated beta distribution. The normal-beta (NB) distribution which results from convolving independent normally distributed and beta distributed components is also derived. The chi-square test and the K-S test are performed on a first sample of stars and BDs which belongs to the massive young cluster NGC 6611 and on a second sample which represents the masses of the stars of the cluster NGC 2362.

  20. On the excited state wave functions of Dirac fermions in the random gauge potential

    Indian Academy of Sciences (India)

    H Milani Moghaddam

    2010-04-01

    In the last decade, it was shown that the Liouville field theory is an effective theory of Dirac fermions in the random gauge potential (FRGP). We show that the Dirac wave functions in FRGP can be written in terms of descendents of the Liouville vertex operator. In the quasiclassical approximation of the Liouville theory, our model predicts that the localization length $\xi$ scales with the energy $E$ as $\xi \sim E^{-b^{2}/(1+b^{2})^{2}}$, where $b$ is the strength of the disorder. The self-duality of the theory under the transformation $b \to 1/b$ is discussed. We also calculate the distribution function of $p_{0} = |\psi_{0}(x)|^{2}$ (i.e. $P(p_{0})$, where $\psi_{0}(x)$ is the ground state wave function), which behaves as the log-normal distribution function. It is also shown that for small $p_{0}$, $P(p_{0})$ behaves as a chi-square distribution.

  1. Nitrilotriacetic acid functionalized Adansonia digitata biosorbent: Preparation, characterization and sorption of Pb (II) and Cu (II) pollutants from aqueous solution

    Directory of Open Access Journals (Sweden)

    Adewale Adewuyi

    2016-11-01

    Nitrilotriacetic acid functionalized Adansonia digitata (NFAD) biosorbent has been synthesized using a simple and novel method. NFAD was characterized by X-ray Diffraction analysis technique (XRD), Scanning Electron Microscopy (SEM), Brunauer-Emmett-Teller (BET) surface area analyzer, Fourier Transform Infrared spectrometer (FTIR), particle size dispersion, zeta potential, elemental analysis (CHNS/O analyzer), thermogravimetric analysis (TGA), differential thermal analysis (DTA), derivative thermogravimetric analysis (DTG) and energy dispersive spectroscopy (EDS). The ability of NFAD as a biosorbent was evaluated for the removal of Pb (II) and Cu (II) ions from aqueous solutions. The particle distribution of NFAD was found to be monomodal, while SEM revealed the surface to be heterogeneous. The adsorption capacity of NFAD toward Pb (II) ions was 54.417 mg/g, while that of Cu (II) ions was found to be 9.349 mg/g. The adsorption of these metals was found to be monolayer, to follow second-order kinetics, and to be controlled by both intra-particle diffusion and liquid film diffusion. The results of this study compared favourably with those of some biosorbents reported in the literature. The current study has revealed NFAD to be an effective biosorbent for the removal of Pb (II) and Cu (II) from aqueous solution.

  2. Nitrilotriacetic acid functionalized Adansonia digitata biosorbent: Preparation, characterization and sorption of Pb (II) and Cu (II) pollutants from aqueous solution.

    Science.gov (United States)

    Adewuyi, Adewale; Pereira, Fabiano Vargas

    2016-11-01

    Nitrilotriacetic acid functionalized Adansonia digitata (NFAD) biosorbent has been synthesized using a simple and novel method. NFAD was characterized by X-ray Diffraction analysis technique (XRD), Scanning Electron Microscopy (SEM), Brunauer-Emmett-Teller (BET) surface area analyzer, Fourier Transform Infrared spectrometer (FTIR), particle size dispersion, zeta potential, elemental analysis (CHNS/O analyzer), thermogravimetric analysis (TGA), differential thermal analysis (DTA), derivative thermogravimetric analysis (DTG) and energy dispersive spectroscopy (EDS). The ability of NFAD as a biosorbent was evaluated for the removal of Pb (II) and Cu (II) ions from aqueous solutions. The particle distribution of NFAD was found to be monomodal, while SEM revealed the surface to be heterogeneous. The adsorption capacity of NFAD toward Pb (II) ions was 54.417 mg/g, while that of Cu (II) ions was found to be 9.349 mg/g. The adsorption of these metals was found to be monolayer, to follow second-order kinetics, and to be controlled by both intra-particle diffusion and liquid film diffusion. The results of this study compared favourably with those of some biosorbents reported in the literature. The current study has revealed NFAD to be an effective biosorbent for the removal of Pb (II) and Cu (II) from aqueous solution.

  3. High Mass Star Formation. III. The Functional Form of the Submillimeter Clump Mass Function

    CERN Document Server

    Reid, M A; Reid, Michael A.; Wilson, Christine D.

    2006-01-01

    We investigate the mass function of cold, dusty clumps in 11 low- and high-mass star-forming regions. Using a homogeneous fitting technique, we analyze the shape of each region's clump mass function and examine the commonalities among them. We find that the submillimeter continuum clump mass function in low-mass star-forming regions is typically best fit by a lognormal distribution, while that in high-mass star-forming regions is better fit by a double power law. A single power law clump mass distribution is ruled out in all cases. Fitting all of the regions with a double power law, we find the mean power law exponent at the high-mass end of each mass function is alpha_high = -2.4+/-0.1, consistent with the Salpeter result of alpha = -2.35. We find no region-to-region trend in alpha_high with the mass scale of the clumps in a given region, as characterized by their median mass. Similarly, non-parametric tests show that the shape of the clump mass function does not change much from region to region, despit...

  4. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data according to several contexts like fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. Then, the proposed statistics are tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.
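    The idea of reading shape parameters off a smoothed density rather than from raw sample moments can be sketched as follows: estimate the PDF with a Gaussian kernel, then compute skewness from the estimated curve. The sample size, bandwidth rule, and lognormal test shape are illustrative; the actual CSM-based statistics in the paper also involve PDF shape distances.

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.lognormal(0.0, 0.5, 60)       # small right-skewed sample (sEMG-like)

# Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth
h = 1.06 * sample.std(ddof=1) * sample.size ** (-1 / 5)
grid = np.linspace(sample.min() - 3 * h, sample.max() + 3 * h, 2001)
dx = grid[1] - grid[0]
dens = np.exp(-0.5 * ((grid[:, None] - sample[None, :]) / h) ** 2).sum(axis=1)
dens /= dens.sum() * dx                    # normalize to unit area

# Shape statistics computed from the smoothed density curve
mu = np.sum(grid * dens) * dx
var = np.sum((grid - mu) ** 2 * dens) * dx
skew = np.sum((grid - mu) ** 3 * dens) * dx / var ** 1.5
```

    For a right-skewed (lognormal-like) sample the density-based skewness stays positive even at this small sample size.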

  5. Modelling and validation of particle size distributions of supported nanoparticles using the pair distribution function technique

    Energy Technology Data Exchange (ETDEWEB)

    Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; Martinez-Inesta, Maria

    2017-04-13

    The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.

  6. Probability density functions for the variable solar wind near the solar cycle minimum

    CERN Document Server

    Vörös; Leitner, M; Narita, Y; Consolini, G; Kovács, P; Tóth, A; Lichtenberger, J

    2015-01-01

    Unconditional and conditional statistics are used for studying the histograms of magnetic field multi-scale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involve the magnetic data during the whole year 2008. The conditional statistics involve the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast stream leading edge compressional and trailing edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both large-scale (B) and small-scale ({\\delta}B) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, log-normal, kappa and logkappa functions. In a statistical sense the...

  7. Probabilistic density function estimation of geotechnical shear strength parameters using the second Chebyshev orthogonal polynomial

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A method to estimate the probability density function (PDF) of shear strength parameters was proposed. The second Chebyshev orthogonal polynomial (SCOP) combined with sample moments (the origin moments) was used to approximate the PDF of the parameters. A χ2 test was adopted to verify the availability of the method. It is distribution-free because no classical theoretical distributions were assumed in advance, and the inference result provides a universal form of probability density curves. The six most commonly used theoretical distributions, namely the normal, lognormal, extreme value I, gamma, beta and Weibull distributions, were used to verify the SCOP method. An example from the observed data of cohesion c of a kind of silt clay was presented for illustrative purposes. The results show that the acceptance levels in SCOP are all smaller than those in the classical finite comparative method and that the SCOP function is more accurate and effective in the reliability analysis of geotechnical engineering.
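    A toy version of the moment-based idea: expand the unknown density in Chebyshev polynomials of the second kind, whose orthogonality turns the expansion coefficients into plain sample moments, so no distribution family is assumed. The sample, interval padding, and series order are arbitrary choices; `scipy.special.eval_chebyu` supplies U_n.

```python
import numpy as np
from scipy.special import eval_chebyu

rng = np.random.default_rng(7)
sample = rng.normal(0.0, 1.0, 50_000)      # stand-in for shear strength data

# Map the data into (-1, 1), the support of the Chebyshev-U weight sqrt(1-y^2)
pad = 0.5
lo, hi = sample.min() - pad, sample.max() + pad
y = 2.0 * (sample - lo) / (hi - lo) - 1.0

# f(y) ≈ sqrt(1-y^2) * sum_n c_n U_n(y); orthogonality of U_n under this
# weight gives c_n = (2/pi) * E[U_n(Y)], i.e. plain sample moments.
order = 16
c = [2.0 / np.pi * eval_chebyu(n, y).mean() for n in range(order + 1)]

grid = np.linspace(-0.999, 0.999, 2001)
f_hat = np.sqrt(1.0 - grid ** 2) * sum(cn * eval_chebyu(n, grid) for n, cn in enumerate(c))
area = f_hat.sum() * (grid[1] - grid[0])   # ≈ 1, since c_0 = 2/pi exactly
```

    Because only the zeroth coefficient contributes to the integral, the reconstructed curve is automatically normalized, one of the conveniences of the orthogonal-polynomial route.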

  8. Local Field Distribution Function and High Order Field Moments for metal-dielectric composites.

    Science.gov (United States)

    Genov, Dentcho A.; Sarychev, Andrey K.; Shalaev, Vladimir M.

    2001-11-01

    In a span of two decades, the physics of nonlinear optics has seen vast improvement in our understanding of the optical properties of various inhomogeneous media. One such medium is the metal-dielectric composite, where the metal inclusions have a surface coverage fraction p, while the rest (1-p) is assumed to represent the dielectric host. Computations carried out using different theoretical models, together with experimental data, show the existence of giant local electric and magnetic field fluctuations. In this presentation we will introduce a newly developed 2D model that determines exactly the Local Field Distribution Function (LFDF) and all other relevant parameters of the film. The LFDF for small filling factors will be shown to transform from a lognormal distribution into a single-dipole distribution function. We will also confirm the predictions of the scaling theory for the high field moments, which have a power-law dependence on the loss factor.

  9. Neuroplasticity of sign language: implications from structural and functional brain imaging.

    Science.gov (United States)

    Meyer, Martin; Toepel, Ulrike; Keller, Joerg; Nussbaumer, Daniela; Zysset, Stefan; Friederici, Angela D

    2007-01-01

    The present study was designed to investigate the neural correlates of German Sign Language (Deutsche Gebärdensprache; DGS) processing. In particular, we expected the visuo-spatial mode of sign language to have an impact on the underlying neural networks, compared to the impact of interpreting linguistic information. For this purpose, two groups of participants took part in a functional MRI study at 3 Tesla. One group consisted of prelingually deafened users of DGS, the other group of hearing non-signers naïve to sign language. The two groups were presented with identical video sequences comprising DGS sentences in the form of dialogues. To account for the substantial interindividual anatomical variability observed in the group of deaf participants, the brain responses in the two groups of subjects were analyzed with two different procedures. Results from a multi-subject averaging approach were contrasted with an analysis that can account for the considerable inter-individual variability of gross anatomical landmarks. The anatomy-based approach indicated that individuals' responses to proper DGS processing were tied to a leftward asymmetry in the dorsolateral prefrontal cortex, anterior and middle temporal gyrus, and visual association cortices. In contrast, standard multi-subject averaging of deaf individuals during DGS perception revealed a less lateralized peri- and extrasylvian network. Furthermore, voxel-based analyses of the brains' morphometry evidenced a white-matter deficit in the left posterior longitudinal and inferior uncinate fasciculi and a steeper slope of the posterior part of the left Sylvian Fissure (SF) in the deaf individuals. These findings may imply that the cerebral anatomy of deaf individuals has undergone structural changes as a function of monomodal visual sign language perception during childhood and adolescence.

  10. Analyzing coastal environments by means of functional data analysis

    Science.gov (United States)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered-log-ratio (clr) transform to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
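    The centred-log-ratio route mentioned above is easy to sketch: treat each grain-size curve as compositional data, apply the clr transform to remove the constant-sum constraint, and run PCA on the transformed matrix. The toy data (Dirichlet-sampled frequency curves) and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy grain-size frequency curves: rows = sediment samples, columns = size classes
psd = rng.dirichlet(np.full(16, 2.0), size=40)

# Centred log-ratio transform: log of each part over the row geometric mean
clr = np.log(psd) - np.log(psd).mean(axis=1, keepdims=True)

# PCA via SVD on the column-centred clr matrix
X = clr - clr.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                         # principal-component scores per sample
explained = s ** 2 / np.sum(s ** 2)    # variance fraction per component
```

    The scores would then feed a cluster analysis, as in the vector approaches the abstract compares against.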

  11. Time-averaged probability density functions of soot nanoparticles along the centerline of a piloted turbulent diffusion flame using a scanning mobility particle sizer

    KAUST Repository

    Chowdhury, Snehaunshu

    2017-01-23

    In this study, we demonstrate the use of a scanning mobility particle sizer (SMPS) as an effective tool to measure the probability density functions (PDFs) of soot nanoparticles in turbulent flames. Time-averaged soot PDFs necessary for validating existing soot models are reported at intervals of ∆x/D = 5 along the centerline of turbulent, non-premixed, C2H4/N2 flames. The jet exit Reynolds numbers of the flames investigated were 10,000 and 20,000. A simplified burner geometry based on a published design was chosen to aid modelers. Soot was sampled directly from the flame using a sampling probe with a 0.5-mm diameter orifice and diluted with N2 by a two-stage dilution process. The overall dilution ratio was not evaluated. An SMPS system was used to analyze soot particle concentrations in the diluted samples. Sampling conditions were optimized over a wide range of dilution ratios to eliminate the effect of agglomeration in the sampling probe. Two differential mobility analyzers (DMAs) with different size ranges were used separately in the SMPS measurements to characterize the entire size range of particles. In both flames, the PDFs were found to be mono-modal in nature near the jet exit. Further downstream, the profiles were flatter with a fall-off at larger particle diameters. The geometric mean of the soot size distributions was less than 10 nm for all cases and increased monotonically with axial distance in both flames.
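    The geometric mean quoted above is the standard count-weighted statistic of a binned SMPS size distribution; a minimal sketch with made-up bin diameters and number concentrations:

```python
import numpy as np

# Hypothetical SMPS channels: mobility diameters (nm) and number
# concentrations (#/cm^3); values are invented, not from the paper
d = np.array([3.0, 4.2, 5.9, 8.2, 11.5, 16.1, 22.5])
n = np.array([120.0, 540.0, 980.0, 760.0, 310.0, 90.0, 15.0])

# Count-weighted geometric mean diameter and geometric standard deviation
ln_d = np.log(d)
gm = np.exp(np.average(ln_d, weights=n))
gsd = np.exp(np.sqrt(np.average((ln_d - np.log(gm)) ** 2, weights=n)))
```

    For this mono-modal toy distribution the geometric mean comes out below 10 nm, matching the qualitative behaviour reported near the jet exit.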

  12. The bispectrum covariance beyond Gaussianity: A log-normal approach

    CERN Document Server

    Martin, Sandra; Simon, Patrick

    2011-01-01

    To investigate and specify the statistical properties of cosmological fields with particular attention to possible non-Gaussian features, accurate formulae for the bispectrum and the bispectrum covariance are required. The bispectrum is the lowest-order statistic providing an estimate for non-Gaussianities of a distribution, and the bispectrum covariance depicts the errors of the bispectrum measurement and their correlation on different scales. Currently, there do exist fitting formulae for the bispectrum and an analytical expression for the bispectrum covariance, but the former is not very accurate and the latter contains several intricate terms and only one of them can be readily evaluated from the power spectrum of the studied field. Neglecting all higher-order terms results in the Gaussian approximation of the bispectrum covariance. We study the range of validity of this Gaussian approximation for two-dimensional non-Gaussian random fields. For this purpose, we simulate Gaussian and non-Gaussian random fi...

  13. Extraction of delta-lognormal parameters from handwriting strokes

    Institute of Scientific and Technical Information of China (English)

    Réjean Plamondon; Xiaolin Li; Moussa Djioua

    2007-01-01

    In the context of the Kinematic Theory of Rapid Human Movement, handwriting strokes are considered to be primitives that reflect the intrinsic properties of the neuromuscular system of a writer as well as the basic control strategies that the writer uses to produce such strokes. The study of these strokes relies on the extraction of the different parameters that characterize a stroke velocity profile. In this paper, we present a new method for stroke parameter extraction. The algorithm is described and evaluated under various testing conditions.

  14. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  15. Probability Distribution Functions OF 12CO(J = 1-0) Brightness and Integrated Intensity in M51: The PAWS View

    CERN Document Server

    Hughes, Annie; Schinnerer, Eva; Colombo, Dario; Pety, Jerome; Leroy, Adam K; Dobbs, Clare L; Garcia-Burillo, Santiago; Thompson, Todd A; Dumas, Gaelle; Schuster, Karl F; Kramer, Carsten

    2013-01-01

    We analyse the distribution of CO brightness temperature and integrated intensity in M51 at ~40 pc resolution using new CO data from the Plateau de Bure Arcsecond Whirlpool Survey (PAWS). We present probability distribution functions (PDFs) of the CO emission within the PAWS field, which covers the inner 11 x 7 kpc of M51. We find variations in the shape of CO PDFs within different M51 environments, and between M51 and M33 and the Large Magellanic Cloud (LMC). Globally, the PDFs for the inner disk of M51 can be represented by narrow lognormal functions that cover 1 to 2 orders of magnitude in CO brightness and integrated intensity. The PDFs for M33 and the LMC are narrower and peak at lower CO intensities. However, the CO PDFs for different dynamical environments within the PAWS field depart from the shape of the global distribution. The PDFs for the interarm region are approximately lognormal, but in the spiral arms and central region of M51, they exhibit diverse shapes with a significant excess of bright CO...

  16. On the Globular Cluster Initial Mass Function below 1 Msolar

    Science.gov (United States)

    Paresce, Francesco; De Marchi, Guido

    2000-05-01

    Accurate luminosity functions (LFs) for a dozen globular clusters have now been measured at or just beyond their half-light radius using HST. They span almost the entire cluster main sequence (MS) below 0.75 Msolar. All these clusters exhibit LFs that rise continuously from an absolute I magnitude MI~=6 to a peak at MI~=8.5-9 and then drop with increasing MI. Transformation of the LFs into mass functions (MFs) by means of mass-luminosity (ML) relations that are consistent with all presently available data on the physical properties of low-mass, low-metallicity stars shows that all the LFs observed so far can be obtained from MFs having the shape of a lognormal distribution with characteristic mass mc=0.33+/-0.03 Msolar and standard deviation σ=0.34+/-0.04. In particular, the LFs of the four clusters in the sample that extend well beyond the peak luminosity down to close to the hydrogen-burning limit (NGC 6341, NGC 6397, NGC 6752, and NGC 6809) can only be reproduced by such distributions and not by a single power law in the 0.1-0.6 Msolar range. After correction for the effects of mass segregation, the variation of the ratio of the number of higher to lower mass stars with cluster mass or any simple orbital parameter or the expected time to disruption recently computed for these clusters shows no statistically significant trend over a range of this last parameter of more than a factor of ~100. We conclude that the global MFs of these clusters have not been measurably modified by evaporation and tidal interactions with the Galaxy and, thus, should reflect the initial distribution of stellar masses. Since the lognormal function that we find is also very similar to the one obtained independently for much younger clusters and to the form expected theoretically, the implication seems to be unavoidable that it represents the true stellar initial mass function for this type of star in this mass range. Based on observations with the NASA/ESA Hubble Space Telescope
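
    The lognormal mass function described above is compact enough to write down directly. A minimal sketch (the helper name is hypothetical; the normalization is arbitrary) evaluates dN/dlog m using the quoted characteristic mass mc = 0.33 Msolar and σ = 0.34:

    ```python
    import math

    def lognormal_mf(m, m_c=0.33, sigma=0.34):
        """Lognormal mass function dN/dlog10(m), peaking at the characteristic
        mass m_c. Parameter values are those quoted for the globular-cluster
        sample (m_c = 0.33 Msolar, sigma = 0.34 in log10 mass); the
        normalization is arbitrary (peak value 1)."""
        x = math.log10(m) - math.log10(m_c)
        return math.exp(-x * x / (2.0 * sigma ** 2))

    # The MF peaks at m_c and falls off symmetrically in log mass:
    assert abs(lognormal_mf(0.33) - 1.0) < 1e-12
    assert lognormal_mf(0.1) < lognormal_mf(0.33)
    assert lognormal_mf(0.6) < lognormal_mf(0.33)
    ```

    The symmetry in log mass is what distinguishes this form from a single power law, which is monotonic over the whole 0.1-0.6 Msolar range.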

  17. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    Science.gov (United States)

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide

  18. Synthesis And Properties Of Functional Ultra-High Molecular Weight Transparent Styrene-Butadiene Block Copolymer

    Institute of Scientific and Technical Information of China (English)

    GONG Guang-bi; ZHAO Xu-tao; WANG Gui-lun

    2004-01-01

    Functional ultra-high molecular weight transparent styrene-butadiene block copolymer possesses both high transparency and impact resistance, and has comprehensive properties superior to those of other transparent resins. In this paper we not only use an anionic polymerization process that includes one addition of initiator and three additions of monomers, but also introduce a functional coupling agent for the first time to prepare the mentioned functional block copolymer. The typical preparation process is as follows: (a) adding cyclohexane, styrene and initiator to the polymerizer, the polymerization is carried out at 50~75℃; (b) adding a mixture of styrene, butadiene and cyclohexane, the polymerization is carried out at 50~70℃; (c) adding a mixture of butadiene and cyclohexane, the polymerization is finished at 60~70℃; (d) adding a coupling agent, a substituted trimethoxysilane (N-silane or O-silane) that is converted into a functional group (-NH, -OH) of the mentioned block copolymer, coupling at 75~90℃ for 1 h; (e) the amount of coupling agent is about one sixth to one third of that of the initiator; (f) treating the prepared copolymer solution with water and carbon dioxide at 50~70℃ for 15 min. The copolymer is a three-arm to six-arm monomodal radial block copolymer having 75~90% styrene, 10~25% butadiene and functional groups of -NH or -OH; its Mw is from 30×10^4 to 120×10^4, Mw/Mn from 2.0 to 2.5, Izod notched impact strength 50~65 J/m, light transmission not less than 87.5%, tensile strength not less than 45 MPa. The exploratory research shows that the mole ratio and feed rate of the randomly copolymerized styrene and butadiene, as well as the total styrene-butadiene ratio, have a great influence on the properties of the copolymer. The following model is established: Y = b0 + Σ_{j=1..3} bj·xj + Σ_{k<j} bkj·xk·xj + Σ_{j=1..3} bjj·xj², where Y is the light transmission, tensile strength, elongation, Izod notched impact
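
    The model quoted in this abstract is a standard second-order response-surface polynomial in three factors. As an illustration only (the synthetic data and coefficient values below are invented, not taken from the paper), such a model can be fitted by ordinary least squares:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def design(X):
        """Second-order design matrix: intercept, x_j, cross terms x_k*x_j
        (k < j), and squares x_j^2, matching Y = b0 + sum bj*xj
        + sum_{k<j} bkj*xk*xj + sum bjj*xj^2."""
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1 * x2, x1 * x3, x2 * x3,
                                x1 ** 2, x2 ** 2, x3 ** 2])

    # Synthetic experiment: known (invented) coefficients plus small noise.
    true_b = np.array([1.0, 0.5, -0.2, 0.3, 0.1, 0.0, -0.4, 0.2, 0.05, -0.1])
    X = rng.uniform(-1, 1, size=(200, 3))
    y = design(X) @ true_b + rng.normal(scale=0.01, size=200)

    # Ordinary least squares recovers the coefficients.
    b_hat, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    assert np.allclose(b_hat, true_b, atol=0.05)
    ```

    With one such fit per response (light transmission, tensile strength, etc.), the fitted surfaces can then be used to locate favorable factor settings.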

  19. A method to account for outliers in the development of safety performance functions.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2010-07-01

    Accident data sets can include some unusual data points that are not typical of the rest of the data. The presence of these data points (usually termed outliers) can have a significant impact on the estimates of the parameters of safety performance functions (SPFs). Few studies have considered outliers analysis in the development of SPFs. In these studies, the practice has been to identify and then exclude outliers from further analysis. This paper introduces alternative mixture models based on the multivariate Poisson lognormal (MVPLN) regression. The proposed approach presents outlier resistance modeling techniques that provide robust safety inferences by down-weighting the outlying observations rather than rejecting them. The first proposed model is a scale-mixture model that is obtained by replacing the normal distribution in the Poisson-lognormal hierarchy by the Student t distribution, which has heavier tails. The second model is a two-component mixture (contaminated normal model) where it is assumed that most of the observations come from a basic distribution, whereas the remaining few outliers arise from an alternative distribution that has a larger variance. The results indicate that the estimates of the extra-Poisson variation parameters were considerably smaller under the mixture models leading to higher precision. Also, both mixture models have identified the same set of outliers. In terms of goodness-of-fit, both mixture models have outperformed the MVPLN. The outlier rejecting MVPLN model provided a superior fit in terms of a much smaller DIC and standard deviations for the parameter estimates. However, this approach tends to underestimate uncertainty by producing too small standard deviations for the parameter estimates, which may lead to incorrect conclusions. It is recommended that the proposed outlier resistance modeling techniques be used unless the exclusion of the outlying observations can be justified because of data related reasons (e

  20. Measuring safety treatment effects using full Bayes non-linear safety performance intervention functions.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2012-03-01

    Full Bayes linear intervention models have been recently proposed to conduct before-after safety studies. These models assume linear slopes to represent the time and treatment effects across the treated and comparison sites. However, the linear slope assumption can only furnish some restricted treatment profiles. To overcome this problem, a first-order autoregressive (AR1) safety performance function (SPF) that has a dynamic regression equation (known as the Koyck model) is proposed. The non-linear 'Koyck' model is compared to the linear intervention model in terms of inference, goodness-of-fit, and application. Both models were used in association with the Poisson-lognormal (PLN) hierarchy to evaluate the safety performance of a sample of intersections that have been improved in the Greater Vancouver area. The two models were extended by incorporating random parameters to account for the correlation between sites within comparison-treatment pairs. Another objective of the paper is to compute basic components related to the novelty effects, direct treatment effects, and indirect treatment effects and to provide simple expressions for the computation of these components in terms of the model parameters. The Koyck model is shown to furnish a wider variety of treatment profiles than those of the linear intervention model. The analysis revealed that incorporating random parameters among matched comparison-treatment pairs in the specification of SPFs can significantly improve the fit, while reducing the estimates of the extra-Poisson variation. Also, the proposed PLN Koyck model fitted the data much better than the Poisson-lognormal linear intervention (PLNI) model. The novelty effects were short lived, the indirect (through traffic volumes) treatment effects were approximately within ±10%, whereas the direct treatment effects indicated a non-significant 6.5% reduction during the after period under PLNI compared to a significant 12.3% reduction in predicted collision

  1. Erbium trifluoromethanesulfonate-catalyzed Friedel–Crafts acylation using aromatic carboxylic acids as acylating agents under monomode-microwave irradiation

    DEFF Research Database (Denmark)

    Tran, Phuong Hoang; Hansen, Poul Erik; Nguyen, Hai Truong;

    2015-01-01

    Erbium trifluoromethanesulfonate is found to be a good catalyst for the Friedel–Crafts acylation of arenes containing electron-donating substituents using aromatic carboxylic acids as the acylating agents under microwave irradiation. An effective, rapid and waste-free method allows the preparation...

  2. Feasibility of monomodal analgesia with IV alfentanil during burn dressing changes at bedside (in spontaneously breathing non-intubated patients).

    Science.gov (United States)

    Fontaine, Mathieu; Latarjet, Jacques; Payre, Jacqueline; Poupelin, Jean-Charles; Ravat, François

    2017-03-01

    The severe pain related to repeated burn dressing changes at bedside is often difficult to manage. However these dressings can be performed at bedside on spontaneously breathing non-intubated patients using powerful intravenous opioids with a quick onset and a short duration of action such as alfentanil. The purpose of this study is to demonstrate the efficacy and safety of the protocol used in our burn unit for pain control during burn dressing changes. The cohort study began after a favorable opinion from the local ethics committee had been obtained. Patients' informed consent was collected. No fasting was required. Patients' vital signs were continuously monitored (non-invasive blood pressure, ECG monitoring, cutaneous oxygen saturation, respiratory rate) throughout the process. Boluses of 500 (±250) μg IV alfentanil were administered. A continuous infusion was added in case of insufficient analgesia. Adverse reactions were collected and pain intensity was measured throughout the dressing using a ten-step verbal rating scale (VRS) ranging from 0 (no pain) to 10 (worst pain conceivable). 100 dressings (35 patients) were analyzed. Median age was 45 years and median burned area 10%. We observed 3 blood pressure drops, 5 oxygen desaturations (treated with stimulation without the necessity of ventilatory support) and one episode of nausea. Most of the patients (87%) were totally conscious during the dressing and 13% were awakened by verbal stimulation. The median total dose of alfentanil used was 2000 μg for a median duration of 35 min. Pain scores during the procedure were low or moderate (mean VRS = 2.0 and maximal VRS = 5). Median satisfaction collected 2 h after the dressing was 10 on a ten-step scale. Pain control with intravenous alfentanil alone is efficient and appears safe for most bedside repeated burn dressings in hospitalized patients. It achieves satisfactory analgesia during and after the procedure.
It is now our standard analgesic method to provide repeated bedside dressings changes for burned patients. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  3. Hazard function analysis for flood planning under nonstationarity

    Science.gov (United States)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte-Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
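
    The qualitative behavior described here, that nonstationarity distorts the return period away from its stationary value, can be checked with a toy Monte-Carlo sketch. The lognormal trend model and every parameter value below are illustrative assumptions, not the paper's model:

    ```python
    import math
    import random

    random.seed(42)

    def first_exceedance_times(x0, mu0=0.0, sigma=0.5, trend=0.02,
                               n_sim=5000, t_max=10000):
        """Monte-Carlo 'time to first exceedance' of the level x0 when the
        annual maximum X_t is lognormal with a linearly drifting log-mean
        mu0 + trend*t (a toy nonstationarity; all values illustrative)."""
        times = []
        for _ in range(n_sim):
            for t in range(1, t_max + 1):
                if math.exp(random.gauss(mu0 + trend * t, sigma)) > x0:
                    times.append(t)
                    break
        return times

    T = first_exceedance_times(x0=math.exp(1.0))
    mean_T = sum(T) / len(T)

    # Stationary benchmark: with no trend, P(X > x0) = 0.5*erfc(2/sqrt(2))
    # ~ 0.023, so the stationary mean return period would be ~44 years.
    p0 = 0.5 * math.erfc(2.0 / math.sqrt(2.0))
    assert len(T) == 5000
    assert mean_T < 1.0 / p0  # the upward trend shortens the return period
    ```

    The empirical histogram of T from such a simulation is the object the paper approximates with a two-parameter Weibull distribution.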

  4. The Galactic disk mass-budget I. stellar mass-function and density

    CERN Document Server

    Chabrier, G

    2001-01-01

    In this paper, we use the general theory worked out within the past few years for the structure and the evolution of low-mass stars to derive the stellar mass-function in the Galactic disk down to the vicinity of the hydrogen-burning limit, from the observed nearby luminosity functions. The accuracy of the mass-magnitude relationships derived from the afore-mentioned theory is examined by comparison with recent, accurate observational relationships in the M-dwarf domain. The mass function is shown to flatten out below ~1 M_sun but to keep rising down to the bottom of the main sequence. Combining the present determination below 1 M_sun and Scalo's (1986) mass function for larger masses, we show that the mass function is well described over the entire stellar mass range, from ~100 M_sun to ~0.1 M_sun, by three functional forms, namely a two-segment power-law, a log-normal form or an exponential form, all normalized to the Hipparcos sample at 0.8 M_sun. Integration of this mass function yie...

  5. Size distribution and mixing state of refractory black carbon aerosol from a coastal city in South China

    Science.gov (United States)

    Wang, Qiyuan; Huang, Ru-Jin; Zhao, Zhuzi; Zhang, Ningning; Wang, Yichen; Ni, Haiyan; Tie, Xuexi; Han, Yongming; Zhuang, Mazhan; Wang, Meng; Zhang, Jieru; Zhang, Xuemin; Dusek, Uli; Cao, Junji

    2016-11-01

    An intensive measurement campaign was conducted in the coastal city of Xiamen, China to investigate the size distribution and mixing state of the refractory black carbon (rBC) aerosol. The average rBC concentration for the campaign, measured with a ground-based single particle soot photometer (SP2), was 2.3 ± 1.7 μg m⁻³, which accounted for ~ 4.3% of the PM2.5 mass. A potential source contribution function model indicated that emissions from coastal cities to the southwest were the most important source for the rBC and that shipping traffic was another likely source. The mass size distribution of the rBC particles was mono-modal and approximately lognormal, with a mass median diameter (MMD) of ~ 185 nm. Larger MMDs (~ 195 nm) occurred during polluted conditions compared with non-polluted times (~ 175 nm) due to stronger biomass burning activities during pollution episodes. Uncoated or thinly-coated particles composed the bulk of the rBC aerosol, and on average ~ 31% of the rBC was internally-mixed or thickly-coated. A positive matrix factorization model showed that organic materials were the predominant component of the rBC coatings and that mixing with nitrate increased during pollution conditions. These findings should lead to improvements in the parameterizations used to model the radiative effects of rBC.
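
    For a lognormal size distribution, the mass median diameter reported here is tied to the count median diameter by the Hatch-Choate conversion. A minimal sketch (the CMD and GSD values are assumptions chosen to land near the reported ~185 nm, not values from the study):

    ```python
    import math

    def mass_median_diameter(cmd_nm, gsd):
        """Hatch-Choate conversion: for a lognormal number size distribution
        with count median diameter CMD and geometric standard deviation GSD,
        the mass (volume) median diameter is CMD * exp(3 * ln(GSD)^2)."""
        return cmd_nm * math.exp(3.0 * math.log(gsd) ** 2)

    # Illustrative numbers: a CMD of 113 nm with an assumed GSD of 1.5
    # gives an MMD near the ~185 nm quoted for rBC in this study.
    mmd = mass_median_diameter(113.0, 1.5)
    assert 180.0 < mmd < 190.0
    ```

    Because the conversion depends only on GSD, a shift in MMD between polluted and clean periods can reflect a change in median size, in spread, or both.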

  6. The local space density of Sb-Sdm galaxies as function of their scalesize, surface brightness and luminosity

    CERN Document Server

    De Jong, R S; Jong, Roelof S. de; Lacey, Cedric

    2000-01-01

    We investigate the dependence of the local space density of spiral galaxies on luminosity, scalesize and surface brightness. We derive bivariate space density distributions in these quantities from a sample of about 1000 Sb-Sdm spiral galaxies, corrected for selection effects in luminosity and surface brightness. The structural parameters of the galaxies were corrected for internal extinction using a description depending on galaxy surface brightness. We find that the bivariate space density distribution of spiral galaxies in the (luminosity, scalesize)-plane is well described by a Schechter luminosity function in the luminosity dimension and a log-normal scale size distribution at a given luminosity. This parameterization of the scalesize distribution was motivated by a simple model for the formation of disks within dark matter halos, with halos acquiring their angular momenta through tidal torques from neighboring objects, and the disk specific angular momentum being proportional to that of the parent halo....

  7. Relationship between the fraction of backscattered light and the asymmetry parameter

    Science.gov (United States)

    Horvath, Helmuth

    2015-04-01

    location where sampling took place and the type of aerosol seems to be of minor importance. The lines in figure 1 show results of calculations for spherical particles having a lognormal monomodal size distribution of various sizes. Several approximations for the relationship of asymmetry vs. backscattering available from the literature are shown as well. Thus it appears that an unambiguous relation, fairly independent of location and type of aerosol, has been found between the asymmetry parameter and the backscattering ratio. The assumption of spherical particles seems to be a good one. Figure 1. Relation between backscattered fraction and asymmetry parameter. The cloud of dots represents about 6500 measurements of the phase function; the lines are results of calculations for aerosols consisting of monomodal spherical particles, together with approximations from the literature.

  8. Texture and lubrication properties of functional cream cheese: Effect of β-glucan and phytosterol.

    Science.gov (United States)

    Ningtyas, Dian Widya; Bhandari, Bhesh; Bansal, Nidhi; Prakash, Sangeeta

    2017-06-08

    The effect of β-glucan (BG) and phytosterols (PS) as fat replacers on textural, microstructural, and lubrication properties of reduced-fat cream cheese was investigated. Five formulations (BG-PS ester, PS ester, BG-PS emulsions, PS emulsions, and BG) of cream cheese with added β-glucan and phytosterols (in emulsified and esterified form) were investigated and compared with commercial cheese. Among the five formulations used in this experiment, the effect of β-glucan appeared to be more pronounced, imparting increased viscosity and firmness to reduced-fat cream cheese, similar to the commercial high-fat cream cheese sample. Conversely, in the lubrication study both phytosterols (esterified and emulsified) were effective in reducing the coefficient of friction, resulting in a more spreadable cream cheese. The microstructure of cream cheese with added β-glucan and phytosterols, used solo or in combination, exhibited a more open structure of the casein matrix, although differences in fat globule size were observed. Cream cheese made from PS emulsion (emulsified from phytosterols powder) resulted in a larger fat globule size than PS ester and β-glucan, as shown by confocal laser scanning microscopy. In addition, the particle size distribution of the cream cheese formulation containing β-glucan only showed a monomodal curve with small globule size, while a bimodal distribution with larger particle size was observed for cream cheese with phytosterols alone. Reducing the fat content impacts the quality characteristics of low-fat cream cheese. This research showed a novel way to incorporate β-glucan and phytosterols as fat replacers and functional ingredients in a cream cheese formulation that improves its textural and lubrication properties. In addition, this article discusses the effect of β-glucan and phytosterols used both individually and in combination on the particle size, microstructural and rheological characteristics of functional cream cheese and compares them against

  9. The bimodal initial mass function in the Orion Nebula Cloud

    CERN Document Server

    Drass, H; Chini, R; Bayo, A; Hackstein, M; Hoffmeister, V; Godoy, N; Vogt, N

    2016-01-01

    Due to its youth, proximity and richness the Orion Nebula Cloud (ONC) is an ideal testbed to obtain a comprehensive view on the Initial Mass Function (IMF) down to the planetary mass regime. Using the HAWK-I camera at the VLT, we have obtained an unprecedented deep and wide near-infrared JHK mosaic of the ONC (90% completeness at K ~ 19.0 mag, 22' x 28'). Applying the most recent isochrones and accounting for the contamination of background stars and galaxies, we find that ONC's IMF is bimodal with distinct peaks at about 0.25 and 0.025 M_sun separated by a pronounced dip at the hydrogen burning limit (0.08 M_sun), with a depth of about a factor 2-3 below the log-normal distribution. Apart from ~920 low-mass stars (M 0.005 M_sun, hence about ten times more substellar candidates than known before. The substellar IMF peak at 0.025 M_sun could be caused by BDs and IPMOs which have been ejected from multiple systems during the early star-formation process or from circumstellar disks.

  10. Optical and physical properties of stratospheric aerosols from balloon measurements in the visible and near-infrared domains. 1. Analysis of aerosol extinction spectra from the AMON and SALOMON balloonborne spectrometers.

    Science.gov (United States)

    Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel

    2002-12-20

    Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.
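
    A bimodal lognormal size distribution of the kind invoked here is simply the sum of two monomodal lognormal modes. A minimal sketch (the mode parameters below are placeholders, not the retrieved values from these flights):

    ```python
    import math

    def lognormal_mode(d_um, n_tot, d_median, gsd):
        """dN/dln(d) for a single lognormal mode with total number n_tot,
        median diameter d_median (micrometers) and geometric std dev gsd."""
        s = math.log(gsd)
        z = math.log(d_um / d_median)
        return n_tot / (math.sqrt(2.0 * math.pi) * s) * math.exp(-z * z / (2.0 * s * s))

    def bimodal(d_um):
        # Placeholder "background" fine mode plus a sparse large-particle mode;
        # illustrative values only, not the paper's retrievals.
        return (lognormal_mode(d_um, n_tot=10.0, d_median=0.08, gsd=1.6)
                + lognormal_mode(d_um, n_tot=0.1, d_median=0.5, gsd=1.3))

    # The second mode adds a large-particle shoulder that one mode cannot give:
    assert bimodal(0.5) > lognormal_mode(0.5, 10.0, 0.08, 1.6)
    ```

    Feeding such a distribution through a Mie code yields the model extinction spectrum that is compared against the measured one; the fit improves when the second mode is allowed.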

  12. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    Science.gov (United States)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. 
The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
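
    Fitting a lognormal to sampled rates, as done for the entrainment-rate PDFs above, reduces to Gaussian moment estimation on the log-transformed data. A self-contained sketch with synthetic samples (the parameter values are illustrative, not from the campaign):

    ```python
    import math
    import random
    import statistics

    random.seed(1)

    # Synthetic "entrainment rate" samples drawn from a lognormal with known
    # parameters; mu and sigma refer to ln(rate) and are illustrative only.
    mu_true, sigma_true = -1.0, 0.6
    rates = [math.exp(random.gauss(mu_true, sigma_true)) for _ in range(50000)]

    # Lognormal fitting via the log transform: mean and std of ln(rate).
    logs = [math.log(r) for r in rates]
    mu_hat = statistics.fmean(logs)
    sigma_hat = statistics.stdev(logs)

    assert abs(mu_hat - mu_true) < 0.02
    assert abs(sigma_hat - sigma_true) < 0.02
    ```

    The same log-transform trick underlies the "well fitted by lognormal distributions" statement: if the histogram of ln(rate) looks Gaussian, the rate itself is lognormal.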

  13. Functional Boxplots

    KAUST Repository

    Sun, Ying

    2011-01-01

    This article proposes an informative exploratory tool, the functional boxplot, for visualizing functional data, as well as its generalization, the enhanced functional boxplot. Based on the center outward ordering induced by band depth for functional data, the descriptive statistics of a functional boxplot are: the envelope of the 50% central region, the median curve, and the maximum non-outlying envelope. In addition, outliers can be detected in a functional boxplot by the 1.5 times the 50% central region empirical rule, analogous to the rule for classical boxplots. The construction of a functional boxplot is illustrated on a series of sea surface temperatures related to the El Niño phenomenon and its outlier detection performance is explored by simulations. As applications, the functional boxplot and enhanced functional boxplot are demonstrated on children growth data and spatio-temporal U.S. precipitation data for nine climatic regions, respectively. This article has supplementary material online. © 2011 American Statistical Association.

  14. Functionalized Calixpyrroles

    DEFF Research Database (Denmark)

    Vargas-Zúñiga, Gabriela; Sessler, Jonathan; Bähring, Steffen

    2016-01-01

    as the extraction and transport of anionic species and ion pairs including cesium halide and sulfate salts. It is divided into seven sections. The first section describes the synthetic methods employed to functionalize calix[4]pyrrole. The second section focuses on functionalized calix[4]pyrroles that display...... enhanced anion binding properties compared to the non-functionalized parent system, octamethylcalix[4]pyrrole. The use of functionalized calix[4]pyrroles containing a fluorescent group or functionalized calix[4]pyrroles as building blocks for the preparation of stimulus-responsive materials is discussed...... and the eventual development of therapeutics that function via the transport of anions across cell membranes, are discussed....

  15. The clump mass function of the dense clouds in the Carina nebula complex

    Science.gov (United States)

    Pekruhl, S.; Preibisch, T.; Schuller, F.; Menten, K.

    2013-02-01

    Context. The question of how the initial conditions in a star-forming region affect the resulting mass function of the forming stars is one of the most fundamental open topics in star formation theory. Aims: We want to characterize the properties of the cold dust clumps in the Carina nebula complex, which is one of the most massive star-forming regions in our Galaxy and shows a very high level of massive star feedback. We derive the clump mass function (ClMF), explore the reliability of different clump extraction algorithms, and investigate the influence of the temperatures within the clouds on the resulting shape of the ClMF. Methods: We analyze a 1.25° × 1.25° wide-field submillimeter map obtained with LABOCA at the APEX telescope, which provides the first spatially complete survey of the clouds in the Carina nebula complex. We use the three clump-finding algorithms CLUMPFIND, GAUSSCLUMPS and SExtractor to identify individual clumps and determine their total fluxes. In addition to assuming a common "typical" temperature for all clouds, we also employ an empirical relation between cloud column densities and temperature to determine an estimate of the individual clump temperatures, and use this to determine individual clump masses. Results: We find that the ClMFs resulting from the different extraction methods show considerable differences in their shape. While the ClMF based on the CLUMPFIND extraction is very well described by a power-law (for clump masses well above the completeness limit), the ClMFs based on the extractions with GAUSSCLUMPS and SExtractor are better represented by a log-normal distribution. We also find that the use of individual clump temperatures leads to a shallower ClMF slope than the (often used) assumption of a common temperature (e.g. 20 K) for all clumps. Conclusions: The power-law of dN/dM ∝ M^-1.95 we find for the CLUMPFIND sample is in good agreement with ClMF slopes found in previous studies of the ClMFs of other regions. The

  16. Universal functional form of 1-minute raindrop size distribution?

    Science.gov (United States)

    Cugerone, Katia; De Michele, Carlo

    2015-04-01

    Rainfall remains one of the poorly quantified phenomena of the hydrological cycle, despite its fundamental role. No universal laws describing rainfall behavior are available in the literature. This is probably due to the continuous description of rainfall, which is in fact a discrete phenomenon made of drops. From the statistical point of view, the rainfall variability at the particle size scale is described by the drop size distribution (DSD). This term denotes either the concentration of raindrops per unit volume and diameter, or the probability density function of drop diameter at the ground, according to the specific problem of interest. Raindrops represent the water exchange, in liquid form, between the atmosphere and the earth surface, and the number of drops and their sizes have impacts on a wide range of hydrologic, meteorologic, and ecologic phenomena. The DSD is used, for example, to measure the multiwavelength rain attenuation for terrestrial and satellite systems; it is an important input for the evaluation of the below-cloud scavenging coefficient of aerosol by precipitation; and it is of primary importance for estimating rainfall rate by radar. In the literature, many distributions have been used to this aim (Gamma and Lognormal above all), without statistical support and in site-specific studies. Here, we present an extensive investigation of the raindrop size distribution based on 18 datasets, consisting of 1-minute disdrometer data sampled using Joss-Waldvogel or Thies instruments at different locations on the Earth's surface. The aim is to understand whether a universal functional form of 1-minute drop diameter variability exists. The study consists of three main steps: analysis of the high-order moments, selection of the model through the AIC index, and testing of the model with goodness-of-fit tests.
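Model selection through the AIC index, as in the second step of the study, can be sketched as follows. The diameter sample is synthetic, and the Gamma fit uses moment estimates rather than full maximum likelihood, a simplification of what such a study would actually do.

```python
import numpy as np
from math import lgamma

def aic(loglik, n_params):
    # Akaike information criterion: lower values indicate the better model.
    return 2 * n_params - 2 * loglik

def loglik_lognormal(x):
    # Exact MLE for the lognormal: mean/std of ln(x).
    logs = np.log(x)
    mu, sigma = logs.mean(), logs.std()
    z = (logs - mu) / sigma
    return float(np.sum(-0.5 * z ** 2 - np.log(x * sigma * np.sqrt(2 * np.pi))))

def loglik_gamma(x):
    # Moment estimates for the Gamma shape/scale (a simplification of MLE).
    m, v = x.mean(), x.var()
    k, theta = m ** 2 / v, v / m
    return float(np.sum((k - 1) * np.log(x) - x / theta)
                 - len(x) * (k * np.log(theta) + lgamma(k)))

rng = np.random.default_rng(2)
diam = rng.lognormal(mean=0.0, sigma=0.6, size=3000)  # synthetic drop diameters (mm)
aic_ln = aic(loglik_lognormal(diam), 2)
aic_ga = aic(loglik_gamma(diam), 2)
```

Since the synthetic diameters are drawn from a lognormal, the lognormal model attains the lower (better) AIC; with field data the comparison would be run per dataset, followed by goodness-of-fit tests.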

  17. The Bolocam Galactic Plane Survey. XIII. Physical Properties and Mass Functions of Dense Molecular Cloud Structures

    CERN Document Server

    Ellsworth-Bowers, Timothy P; Riley, Allyssa; Rosolowsky, Erik; Ginsburg, Adam; Evans, Neal J; Bally, John; Battersby, Cara; Shirley, Yancy L; Merello, Manuel

    2015-01-01

    We use the distance probability density function (DPDF) formalism of Ellsworth-Bowers et al. (2013, 2015) to derive physical properties for the collection of 1,710 Bolocam Galactic Plane Survey (BGPS) version 2 sources with well-constrained distance estimates. To account for Malmquist bias, we estimate that the present sample of BGPS sources is 90% complete above 400 $M_\\odot$ and 50% complete above 70 $M_\\odot$. The mass distributions for the entire sample and astrophysically motivated subsets are generally fitted well by a lognormal function, with approximately power-law distributions at high mass. Power-law behavior emerges more clearly when the sample population is narrowed in heliocentric distance (power-law index $\\alpha = 2.0\\pm0.1$ for sources nearer than 6.5 kpc and $\\alpha = 1.9\\pm0.1$ for objects between 2 kpc and 10 kpc). The high-mass power-law indices are generally $1.85 \\leq \\alpha \\leq 2.05$ for various subsamples of sources, intermediate between that of giant molecular clouds and the stellar ...

  18. The relation between accretion rates and the initial mass function in hydrodynamical simulations of star formation

    CERN Document Server

    Maschberger, Th; Clarke, C J; Moraux, E

    2013-01-01

    We analyse a hydrodynamical simulation of star formation. Sink particles in the simulations, which represent stars, show episodic growth, which is presumably accretion from a core that can be regularly replenished in response to the fluctuating conditions in the local environment. The accretion rates follow $\dot{m} \propto m^{2/3}$, as expected from accretion in a gas-dominated potential, but with substantial variations overlaid on this. The growth times follow an exponential distribution which is tapered at long times due to the finite length of the simulation. The initial collapse masses have an approximately lognormal distribution, with the onset of a power-law already present at large masses. The sink particle mass function can be reproduced with a non-linear stochastic process, with fluctuating accretion rates $\propto m^{2/3}$, a distribution of seed masses and a distribution of growth times. All three factors contribute equally to the form of the final sink mass function. We find that the upper power law tail of...

  19. Scaling of maximum probability density functions of velocity and temperature increments in turbulent systems

    CERN Document Server

    Huang, Y X; Zhou, Q; Qiu, X; Shang, X D; Lu, Z M; Liu, Y L

    2014-01-01

    In this paper, we introduce a new way to estimate the scaling parameter of a self-similar process by considering the maximum probability density function (pdf) of its increments. We prove this for $H$-self-similar processes in general and experimentally investigate it for turbulent velocity and temperature increments. We consider a turbulent velocity database from an experimental homogeneous and nearly isotropic turbulent channel flow, and a temperature data set obtained near the sidewall of a Rayleigh-Bénard convection cell, where the turbulent flow is driven by buoyancy. For the former database, it is found that the maximum value of the increment pdf $p_{\max}(\tau)$ is in good agreement with a lognormal distribution. We also obtain a scaling exponent $\alpha\simeq 0.37$, which is consistent with the scaling exponent for the first-order structure function reported in other studies. For the latter one, we obtain a scaling exponent $\alpha_{\theta}\simeq0.33$. This index value is consistent with the Kolmogorov-Ob...

  20. Investigating different approaches to develop informative priors in hierarchical Bayesian safety performance functions.

    Science.gov (United States)

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-07-01

    The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior with a better model fit and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. Different types of informative priors' effects on the model estimations and goodness-of-fit have been compared and concluded. Finally, based on the results, recommendations for future research topics and study applications have been made.
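One reason a Poisson-lognormal model is preferred over a plain Poisson is the extra-Poisson variation it captures in crash counts. The sketch below simulates counts from a Poisson-lognormal safety performance function with hypothetical covariates and coefficients (all numerical values are assumptions, not from the study), plus a toy normal log-prior of the kind an informative prior on a coefficient might take.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical road segments: one covariate (log AADT) with assumed
# "true" coefficients -- none of these values come from the study.
n_sites = 1000
log_aadt = rng.normal(9.0, 0.5, n_sites)
beta0, beta1, sigma = -6.0, 0.7, 0.4

# Poisson-lognormal: the Poisson mean gets a multiplicative lognormal error,
# which produces the over-dispersion (variance > mean) seen in crash counts.
eps = rng.normal(0.0, sigma, n_sites)
lam = np.exp(beta0 + beta1 * log_aadt + eps)
crashes = rng.poisson(lam)

# An informative normal log-prior on beta1 (centered on a hypothetical
# historical estimate 0.65 with sd 0.1), up to an additive constant.
def log_prior_beta1(b):
    return -0.5 * ((b - 0.65) / 0.1) ** 2
```

In a full hierarchical analysis such a log-prior would be added to the Poisson-lognormal log-likelihood inside an MCMC sampler; here it only illustrates how historical data can be encoded as a prior density.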

  1. A finer view of the conditional galaxy luminosity function and magnitude-gap statistics

    Science.gov (United States)

    Trevisan, M.; Mamon, G. A.

    2017-10-01

    The gap between first- and second-ranked galaxy magnitudes in groups is often considered a tracer of their merger histories, which in turn may affect galaxy properties, and also serves to test galaxy luminosity functions (LFs). We remeasure the conditional luminosity function (CLF) of the Main Galaxy Sample of the SDSS in an appropriately cleaned subsample of groups from the Yang catalogue. We find that, at low group masses, our best-fitting CLF has steeper satellite high ends, yet higher ratios of characteristic satellite to central luminosities in comparison with the CLF of Yang et al. The observed fractions of groups with large and small magnitude gaps as well as the Tremaine & Richstone statistics are not compatible with either a single Schechter LF or with a Schechter-like satellite plus lognormal central LF. These gap statistics, which naturally depend on the size of the subsamples, and also on the maximum projected radius, Rmax, for defining the second brightest galaxy, can only be reproduced with two-component CLFs if we allow small gap groups to preferentially have two central galaxies, as expected when groups merge. Finally, we find that the trend of higher gap for higher group velocity dispersion, σv, at a given richness, discovered by Hearin et al., is strongly reduced when we consider σv in bins of richness, and virtually disappears when we use group mass instead of σv. This limits the applicability of gaps in refining cosmographic studies based on cluster counts.

  2. Entire functions

    CERN Document Server

    Markushevich, A I

    1966-01-01

    Entire Functions focuses on complex numbers and the algebraic operations on them and the basic principles of mathematical analysis.The book first elaborates on the concept of an entire function, including the natural generalization of the concept of a polynomial and power series. The text then takes a look at the maximum absolute value and the order of an entire function, as well as calculations for the coefficients of power series representing a given function, use of integrals, and complex numbers. The publication elaborates on the zeros of an entire function and the fundamen

  3. Functionalized amphipols

    DEFF Research Database (Denmark)

    Della Pia, Eduardo Antonio; Hansen, Randi Westh; Zoonens, Manuela

    2014-01-01

    Amphipols are amphipathic polymers that stabilize membrane proteins isolated from their native membrane. They have been functionalized with various chemical groups in the past years for protein labeling and protein immobilization. This large toolbox of functionalized amphipols, combined with their...... surfaces for various applications in synthetic biology. This review summarizes the properties of functionalized amphipols suitable for synthetic biology approaches....

  4. Lightness functions

    DEFF Research Database (Denmark)

    Campi, Stefano; Gardner, Richard; Gronchi, Paolo;

    2012-01-01

    Variants of the brightness function of a convex body K in n-dimensional Euclidean space are investigated. The Lambertian lightness function L(K; v, w) gives the total reflected light resulting from illumination by a light source at infinity in the direction w that is visible when looking...... in the direction v. The partial brightness function R(K; v, w) gives the area of the projection orthogonal to v of the portion of the surface of K that is both illuminated by a light source from the direction w and visible when looking in the direction v. A class of functions called lightness functions...... is introduced that includes L(K;.) and R(K;.) as special cases. Much of the theory of the brightness function, like uniqueness, stability, and the existence and properties of convex bodies of maximal and minimal volume with finitely many function values equal to those of a given convex body, is extended...

  5. Functional analysis

    CERN Document Server

    Kantorovich, L V

    1982-01-01

    Functional Analysis examines trends in functional analysis as a mathematical discipline and the ever-increasing role played by its techniques in applications. The theory of topological vector spaces is emphasized, along with the applications of functional analysis to applied analysis. Some topics of functional analysis connected with applications to mathematical economics and control theory are also discussed. Comprised of 18 chapters, this book begins with an introduction to the elements of the theory of topological spaces, the theory of metric spaces, and the theory of abstract measure space

  6. Stability Functions

    CERN Document Server

    Burns, Daniel; Wang, Zuoqin

    2008-01-01

    In this article we discuss the role of stability functions in geometric invariant theory and apply stability function techniques to problems in toric geometry. In particular we show how one can use these techniques to recover results of Burns-Guillemin-Uribe and Shiffman-Tate-Zelditch on asymptotic properties of sections of holomorphic line bundles over toric varieties.

  7. POWER FUNCTIONS

    Institute of Scientific and Technical Information of China (English)

    王雷

    2008-01-01

    A power function of degree n is a function of the form f(x)=ax^n. For large n, it appears that the graph coincides with the x-axis near the origin, but it does not; the graph actually touches the x-axis only at the origin.

  8. The Luminosity and Mass Functions of Low-Mass Stars in the Galactic Disk: I. The Calibration Region

    CERN Document Server

    Covey, Kevin R; Bochanski, John J; West, Andrew A; Reid, I Neill; Golimowski, David A; Davenport, James R A; Henry, Todd; Uomoto, Alan

    2008-01-01

    We present measurements of the luminosity and mass functions of low-mass stars constructed from a catalog of matched Sloan Digital Sky Survey (SDSS) and 2 Micron All Sky Survey (2MASS) detections. This photometric catalog contains more than 25,000 matched SDSS and 2MASS point sources spanning ~30 square degrees on the sky. We have obtained follow-up spectroscopy, complete to J=16, of more than 500 low mass dwarf candidates within a 1 square degree sub-sample, and thousands of additional dwarf candidates in the remaining 29 square degrees. This spectroscopic sample verifies that the photometric sample is complete, uncontaminated, and unbiased at the 99% level globally, and at the 95% level in each color range. We use this sample to derive the luminosity and mass functions of low-mass stars over nearly a decade in mass (0.7 M_sun > M_* > 0.1 M_sun). We find that the logarithmically binned mass function is best fit with an M_c=0.29 log-normal distribution, with a 90% confidence interval of M_c=0.20--0.50. These ...
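A log-normal mass function with characteristic mass M_c = 0.29 M_sun can be sampled as below. The log-mass dispersion (0.3 dex) and the truncation to the surveyed 0.1-0.7 M_sun range are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_lognormal_imf(n, m_c=0.29, sigma_log=0.3, m_lo=0.1, m_hi=0.7):
    # Draw masses whose log10 is normal around log10(m_c); out-of-range
    # draws are rejected to respect the surveyed mass interval.
    masses = np.empty(0)
    while masses.size < n:
        m = 10.0 ** rng.normal(np.log10(m_c), sigma_log, n)
        masses = np.concatenate([masses, m[(m >= m_lo) & (m <= m_hi)]])
    return masses[:n]

masses = sample_lognormal_imf(20000)
```

Note that the asymmetric truncation pulls the sample median slightly below M_c, which is one reason fitted characteristic masses carry the quoted confidence interval.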

  9. CARDINAL FUNCTIONS AND INTEGRAL FUNCTIONS

    OpenAIRE

    MIRCEA E. ŞELARIU; FLORENTIN SMARANDACHE; MARIAN NIŢU

    2015-01-01

    This paper presents the correspondences of the eccentric mathematics of cardinal and integral functions and centric mathematics, or ordinary mathematics. Centric functions will also be presented in the introductory section, because they are, although widely used in undulatory physics, little known.

  11. Detection of two power-law tails in the probability distribution functions of massive GMCs

    CERN Document Server

    Schneider, N; Girichidis, P; Rayner, T; Motte, F; Andre, P; Russeil, D; Abergel, A; Anderson, L; Arzoumanian, D; Benedettini, M; Csengeri, T; Didelon, P; Francesco, J D; Griffin, M; Hill, T; Klessen, R S; Ossenkopf, V; Pezzuto, S; Rivera-Ingraham, A; Spinoglio, L; Tremblin, P; Zavagno, A

    2015-01-01

    We report the novel detection of complex high-column density tails in the probability distribution functions (PDFs) for three high-mass star-forming regions (CepOB3, MonR2, NGC6334), obtained from dust emission observed with Herschel. The low column density range can be fit with a lognormal distribution. A first power-law tail starts above an extinction (Av) of ~6-14. It has a slope of alpha=1.3-2 for the rho~r^-alpha profile for an equivalent density distribution (spherical or cylindrical geometry), and is thus consistent with free-fall gravitational collapse. Above Av~40, 60, and 140, we detect an excess that can be fitted by a flatter power-law tail with alpha>2. It correlates with the central regions of the cloud (ridges/hubs) of size ~1 pc and densities above 10^4 cm^-3. This excess may be caused by physical processes that slow down collapse and reduce the flow of mass towards higher densities. Possible causes are: 1. rotation, which introduces an angular momentum barrier, 2. increasing optical depth and weaker...

  12. Young and embedded clusters in Cygnus-X: evidence for building up the initial mass function?

    Science.gov (United States)

    Maia, F. F. S.; Moraux, E.; Joncour, I.

    2016-05-01

    We provide a new view on the Cygnus-X north complex by accessing for the first time the low-mass content of young stellar populations in the region. The Canada-France-Hawaii Telescope/Wide-Field Infrared Camera was used to perform a deep near-infrared survey of this complex, sampling stellar masses down to ˜0.1 M⊙. Several analysis tools, including an extinction treatment developed in this work, were employed to identify and uniformly characterize a dozen unstudied young star clusters in the area. Investigation of their mass distributions in the low-mass domain revealed a relatively uniform log-normal initial mass function (IMF) with a characteristic mass of 0.32 ± 0.08 M⊙ and mass dispersion of 0.40 ± 0.06. In the high-mass regime, their derived slopes showed that the youngest clusters (age ...) `build up' their IMF by accreting low-mass stars formed in their vicinity during their first ˜3 Myr, before the gas expulsion phase, emerging at the age of ˜4 Myr with a fully fledged IMF. Finally, the derived distances to these clusters confirmed the existence of at least three different star-forming regions throughout the Cygnus-X north complex, at distances of 500-900 pc, 1.4-1.7 and 3.0 kpc, and revealed evidence of a possible interaction between some of these stellar populations and the Cygnus OB2 association.

  13. Globular Cluster Systems in Brightest Cluster Galaxies: A Near-Universal Luminosity Function?

    CERN Document Server

    Harris, William E; Gnedin, Oleg Y; O'Halloran, Heather; Blakeslee, John P; Whitmore, Bradley C; Cote, Patrick; Geisler, Douglas; Peng, Eric W; Bailin, Jeremy; Rothberg, Barry; Cockcroft, Robert; DeGraaff, Regina Barber

    2014-01-01

    We present the first results from our HST Brightest Cluster Galaxy (BCG) survey of seven central supergiant cluster galaxies and their globular cluster (GC) systems. We measure a total of 48000 GCs in all seven galaxies, representing the largest single GC database. We find that a log-normal shape accurately matches the observed luminosity function (LF) of the GCs down to the GCLF turnover point, which is near our photometric limit. In addition, the LF has a virtually identical shape in all seven galaxies. Our data underscore the similarity in the formation mechanism of massive star clusters in diverse galactic environments. At the highest luminosities (log L > 10^7 L_Sun) we find small numbers of "superluminous" objects in five of the galaxies; their luminosity and color ranges are at least partly consistent with those of UCDs (Ultra-Compact Dwarfs). Lastly, we find preliminary evidence that in the outer halo (R > 20 kpc), the LF turnover point shows a weak dependence on projected distance, scaling as L_0 ~ R...

  14. A Multivariate Fit Luminosity Function and World Model for Long GRBs

    CERN Document Server

    Shahmoradi, Amir

    2012-01-01

    It is proposed that the luminosity function, the comoving-frame spectral correlations and distributions of cosmological Long-duration Gamma-Ray Bursts (LGRBs) may be very well described as a multivariate log-normal distribution. This result is based on careful selection, analysis and modeling of the spectral parameters of LGRBs in the largest catalog of Gamma-Ray Bursts available to date: 2130 BATSE GRBs, while taking into account the detection threshold and possible selection effects in the observational data. Constraints on the joint quadru-variate distribution of the isotropic peak luminosity, the total isotropic emission, the comoving-frame time-integrated spectral peak energy and the comoving-frame duration of LGRBs are derived. Extensive goodness-of-fit tests are performed. The presented analysis provides evidence for a relatively large fraction of LGRBs that have been missed by the BATSE detector, with total isotropic emissions extending down to 10^49 [erg] and observed spectral peak energies as low as 5 [keV]. T...
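Sampling from a multivariate log-normal amounts to exponentiating a multivariate normal drawn in log space. The means, dispersions and correlations below are purely illustrative placeholders, not the fitted BATSE constraints.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative log10-space means, dispersions and correlations for
# (L_iso, E_iso, comoving E_peak, comoving duration); placeholder values,
# NOT the fitted BATSE constraints.
mu = np.array([51.5, 51.2, 2.4, 1.0])
sd = np.array([0.8, 0.9, 0.4, 0.45])
corr = np.array([[1.0, 0.8, 0.5, 0.2],
                 [0.8, 1.0, 0.6, 0.3],
                 [0.5, 0.6, 1.0, 0.2],
                 [0.2, 0.3, 0.2, 1.0]])
cov = corr * np.outer(sd, sd)

# A multivariate log-normal sample: exponentiate a multivariate normal.
logs = rng.multivariate_normal(mu, cov, size=50000)
sample = 10.0 ** logs
```

The convenience of this construction is that the spectral correlations quoted in such analyses are simply the linear correlations of the underlying normal in log space, which the sample reproduces.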

  15. Ecosystem functioning

    National Research Council Canada - National Science Library

    Jax, Kurt

    2010-01-01

    "In the face of decreasing biodiversity and ongoing global changes, maintaining ecosystem functioning is seen both as a means to preserve biological diversity as well as for safeguarding human well...

  16. Transfer functions

    Science.gov (United States)

    Taback, I.

    1979-01-01

    The vulnerability of electronic equipment to carbon fibers is studied. The effectiveness of interfaces, such as filters, doors, window screens, and cabinets, which affect the concentration, exposure, or deposition of carbon fibers on both (internal and external) sides of the interface is examined. The transfer functions of multilayer aluminum mesh, wet and dry, polyurethane foam, and window screen are determined as a function of air velocity. Filters installed in typical traffic control boxes and air conditioners are also considered.

  17. Functional Unparsing

    DEFF Research Database (Denmark)

    Danvy, Olivier

    1998-01-01

    A string-formatting function such as printf in C seemingly requires dependent types, because its control string determines the rest of its arguments. We show how changing the representation of the control string makes it possible to program printf in ML (which does not allow dependent types......). The result is well typed and perceptibly more efficient than the corresponding library functions in Standard ML of New Jersey and in Caml....

  18. Functional unparsing

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2000-01-01

    A string-formatting function such as printf in C seemingly requires dependent types, because its control string determines the rest of its arguments. Examples: formula here We show how changing the representation of the control string makes it possible to program printf in ML (which does not allow...... dependent types). The result is well typed and perceptibly more efficient than the corresponding library functions in Standard ML of New Jersey and in Caml....

  19. interval functions

    Directory of Open Access Journals (Sweden)

    J. A. Chatfield

    1978-01-01

    Full Text Available Suppose N is a Banach space of norm |·| and R is the set of real numbers. All integrals used are of the subdivision-refinement type. The main theorem [Theorem 3] gives a representation of TH, where H is a function from R×R to N such that H(p+,p+), H(p,p+), H(p−,p−), and H(p−,p) each exist for each p, and T is a bounded linear operator on the space of all such functions H. In particular we show that $TH = (I)\int_a^b f_H\,d\alpha + \sum_{i=1}^{\infty}[H(x_{i-1},x_{i-1}^+) - H(x_{i-1}^+,x_{i-1}^+)]\,\beta(x_{i-1}) + \sum_{i=1}^{\infty}[H(x_i^-,x_i) - H(x_i^-,x_i^-)]\,\Theta(x_{i-1},x_i)$, where each of α, β, and Θ depends only on T, α is of bounded variation, β and Θ are 0 except at a countable number of points, f_H is a function from R to N depending on H, and $\{x_i\}_{i=1}^{\infty}$ denotes the points p in [a,b] for which [H(p,p+) − H(p+,p+)] ≠ 0 or [H(p−,p) − H(p−,p−)] ≠ 0. We also define an interior interval function integral and give a relationship between it and the standard interval function integral.

  20. Bessel functions

    CERN Document Server

    Nambudiripad, K B M

    2014-01-01

    After presenting the theory in engineers' language without the unfriendly abstraction of pure mathematics, several illustrative examples are discussed in great detail to see how the various functions of the Bessel family enter into the solution of technically important problems. Axisymmetric vibrations of a circular membrane, oscillations of a uniform chain, heat transfer in circular fins, buckling of columns of varying cross-section, vibrations of a circular plate and current density in a conductor of circular cross-section are considered. The problems are formulated purely from physical considerations (using, for example, Newton's law of motion, Fourier's law of heat conduction, electromagnetic field equations, etc.). Infinite series expansions, recurrence relations, manipulation of expressions involving Bessel functions, orthogonality and expansion in Fourier-Bessel series are also covered in some detail. Some important topics such as asymptotic expansions, generating function and Sturm-Liouville theory are r...

  1. Algebraic functions

    CERN Document Server

    Bliss, Gilbert Ames

    1933-01-01

    This book, immediately striking for its conciseness, is one of the most remarkable works ever produced on the subject of algebraic functions and their integrals. The distinguishing feature of the book is its third chapter, on rational functions, which gives an extremely brief and clear account of the theory of divisors.... A very readable account is given of the topology of Riemann surfaces and of the general properties of abelian integrals. Abel's theorem is presented, with some simple applications. The inversion problem is studied for the cases of genus zero and genus unity. The chapter on t

  2. Comparable analysis of the distribution functions of runup heights of the 1896, 1933 and 2011 Japanese Tsunamis in the Sanriku area

    Directory of Open Access Journals (Sweden)

    B. H. Choi

    2012-05-01

    Full Text Available Data from a field survey of the 2011 Tohoku-oki tsunami in the Sanriku area of Japan is used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated by a theoretical log-normal curve. The characteristics of the distribution functions of the 2011 event are compared with data from two previous catastrophic tsunamis (1896 and 1933 that occurred in almost the same region. The number of observations during the last tsunami is very large, which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and the number of observed runup heights suggested by Kajiura (1983 based on a small amount of data on previous tsunamis. The distribution function of the 2011 event demonstrates the sensitivity to the number of measurements (many of them cannot be considered independent measurements and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
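A lognormal fit of runup heights, with a hand-rolled Kolmogorov-Smirnov distance as a goodness-of-fit check, can be sketched as follows. The heights are synthetic and the parameter values are assumptions, not survey data.

```python
import numpy as np
from math import erf, sqrt

def lognormal_cdf(x, mu, sigma):
    # CDF of a lognormal expressed through the error function.
    return np.array([0.5 * (1 + erf((np.log(v) - mu) / (sigma * sqrt(2)))) for v in x])

def ks_statistic(sample, mu, sigma):
    # Kolmogorov-Smirnov distance between the empirical CDF and the fitted lognormal.
    x = np.sort(sample)
    n = len(x)
    cdf = lognormal_cdf(x, mu, sigma)
    return max(np.max(np.arange(1, n + 1) / n - cdf),
               np.max(cdf - np.arange(0, n) / n))

rng = np.random.default_rng(6)
heights = rng.lognormal(mean=2.3, sigma=0.5, size=2000)  # synthetic runup heights (m)
mu_fit = np.log(heights).mean()
sigma_fit = np.log(heights).std()
D = ks_statistic(heights, mu_fit, sigma_fit)
```

A small KS distance relative to the usual critical values supports the log-normal approximation; with field measurements, the sensitivity to the number of (not fully independent) points discussed above would dominate this comparison.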

  3. POLYNOMIAL FUNCTIONS

    Institute of Scientific and Technical Information of China (English)

    王雷

    2008-01-01

    Polynomial functions are among the simplest expressions in algebra. They are easy to evaluate: only addition and repeated multiplication are required. Because of this, they are often used to approximate other more complicated...

  4. Functional dyspepsia

    NARCIS (Netherlands)

    Kleibeuker, JH; Thijs, JC

    2004-01-01

    Purpose of review Functional dyspepsia is a common disorder, most of the time of unknown etiology and with variable pathophysiology. Therapy has been and still is largely empirical. Data from recent studies provide new clues for targeted therapy based on knowledge of etiology and pathophysiologic me

  5. Functional Organometallics

    Institute of Scientific and Technical Information of China (English)

    K.H.DTZ

    2007-01-01

    1 Results The lecture will address aspects of functional organometallics related to the development of novel organometallic materials.In chromium complexes of fused arenes-regio-and diastereoselectively accessible by chromium-templated benzannulation of arylcarbenes by alkynes[1]-a haptotropic migration of the chromium fragment along the π-face of fused arenes is controlled by both thermodynamics and the substitution pattern of the arene and the metal coligand sphere,and can be applied towards an organo...

  6. Functional Literacy

    Directory of Open Access Journals (Sweden)

    Fani Nolimal

    2000-12-01

    Full Text Available The author first defines literacy as the ability of co-operation in all fields of life and points at the features of illiterate or semi-literate individuals. The main stress is laid upon the assessment of literacy and illiteracy. In her opinion the main weakness of this kind of evaluation is its vague psychometric characteristics, which leads to results valid in a single geographical or cultural environment only. She also determines the factors causing illiteracy, and she states that the level of functional literacy is more and more becoming a national indicator of successfulness.

  7. Lung function

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    2005200 The effect of body position changes on lung function, lung CT imaging and pathology in an oleic acid induced acute lung injury model. JI Xin-ping (戢新平), et al. Dept Emergency, 1st Affili Hosp, China Med Univ, Shenyang 110001. Chin J Tuberc Respir Dis, 2005;28(1):33-36. Objective: To study the effect of body position changes on lung mechanics, oxygenation, CT images and pathology in an oleic acid-induced acute lung injury (ALI) model. Methods: The study groups con-

  8. Study on damages constitutive model of rocks based on lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    LI Shu-chun; XU Jiang; TAO Yun-qi; TANG Xiao-jun

    2007-01-01

    The damage constitutive relation for the entire rock failure process was established using the theory of representative volume elements obeying the lognormal distribution law, and an integrated damage constitutive model of rock under triaxial compression was established. Comparison with triaxial compression test results shows that this model correctly reflects the stress-strain relationship. At the same time, since rock fatigue failure conforms completely to the static complete stress-strain curve, a new method of establishing a cyclic fatigue damage evolution equation was discussed; this method is simple in form, its physical meaning is clear, and it connects well with the damage relations of the static complete stress-strain curve.
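A minimal sketch of this kind of statistical damage model, assuming the common form in which the damage variable D is the lognormal CDF of strain (the fraction of failed representative volume elements) and the stress is σ = Eε(1 − D); all parameter values here are hypothetical, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def damage_stress(eps, E=20e3, mu=-4.0, s=0.5):
    """Axial stress (MPa) under a lognormal statistical damage law.

    D(eps) is the fraction of representative volume elements whose
    lognormal-distributed failure strain has been exceeded; the surviving
    fraction 1 - D still carries the elastic stress E * eps.
    Parameter values are hypothetical.
    """
    eps = np.asarray(eps, dtype=float)
    D = norm.cdf((np.log(np.maximum(eps, 1e-300)) - mu) / s)
    return E * eps * (1.0 - D)

strain = np.linspace(0.0, 0.1, 1000)
stress = damage_stress(strain)
peak_strain = strain[np.argmax(stress)]  # pre-peak hardening, post-peak softening
```

The resulting curve rises, peaks, and softens, reproducing the qualitative shape of a complete stress-strain curve.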

  9. Lognormal firing rate distribution reveals prominent fluctuation-driven regime in spinal motor networks

    DEFF Research Database (Denmark)

    Petersen, Peter C.; Berg, Rune W.

    2016-01-01

    When spinal circuits generate rhythmic movements it is important that the neuronal activity remains within stable bounds to avoid saturation and to preserve responsiveness. Here, we simultaneously record from hundreds of neurons in lumbar spinal circuits of turtles and establish the neuronal frac...

  10. Modelling the Skinner Thesis: Consequences of a Lognormal or a Bimodal Resource Base Distribution

    NARCIS (Netherlands)

    Auping, W.L.

    2014-01-01

    The copper case is often used as an example in resource depletion studies. Despite these studies, several profound uncertainties remain in the system. One of these uncertainties is the distribution of copper grades in the lithosphere. The Skinner thesis promotes the idea that copper grades may be

  12. The effect of ignoring individual heterogeneity in Weibull log-normal sire frailty models

    DEFF Research Database (Denmark)

    Damgaard, Lars Holm; Korsgaard, Inge Riis; Simonsen, J;

    2006-01-01

    The objective of this study was, by means of simulation, to quantify the effect of ignoring individual heterogeneity in Weibull sire frailty models on parameter estimates and to address the consequences for genetic inferences. Three simulation studies were evaluated, which included 3 levels...... the software Survival Kit for the incomplete sire model. For the incomplete sire model, the Monte Carlo and Survival Kit parameter estimates were similar. This study established that when unobserved individual heterogeneity was ignored, the parameter estimates that included sire effects were biased toward zero...

  13. Log-normal spray drop distribution...analyzed by two new computer programs

    Science.gov (United States)

    Gerald S. Walton

    1968-01-01

    Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...

  14. Combining sigma-lognormal modeling and classical features for analyzing graphomotor performances in kindergarten children.

    Science.gov (United States)

    Duval, Thérésa; Rémi, Céline; Plamondon, Réjean; Vaillant, Jean; O'Reilly, Christian

    2015-10-01

    This paper investigates the advantage of using the kinematic theory of rapid human movements as a complementary approach to those based on classical dynamical features to characterize and analyze kindergarten children's ability to engage in graphomotor activities as a preparation for handwriting learning. This study analyzes nine different movements taken from 48 children evenly distributed among three different school grades corresponding to pupils aged 3, 4, and 5 years. On the one hand, our results show that the ability to perform graphomotor activities depends on kindergarten grades. More importantly, this study shows which performance criteria, from sophisticated neuromotor modeling as well as more classical kinematic parameters, can differentiate children of different school grades. These criteria provide a valuable tool for studying children's graphomotor control learning strategies. On the other hand, from a practical point of view, it is observed that school grades do not clearly reflect pupils' graphomotor performances. This calls for a large-scale investigation, using a more efficient experimental design based on the various observations made throughout this study regarding the choice of the graphic shapes, the number of repetitions and the features to analyze. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Functional metamirrors

    CERN Document Server

    Asadchy, V S; Vehmas, J; Tretyakov, S A

    2014-01-01

    Conventional mirrors obey Snell's reflection law: a plane wave is reflected as a plane wave, at the same angle. To engineer spatial distributions of fields reflected from a mirror, one can either shape the reflector (for example, creating a parabolic reflector) or position some phase-correcting elements on top of a mirror surface (for example, designing a reflectarray antenna). Here we show, both theoretically and experimentally, that full-power reflection with general control over reflected wave phase is possible with a single-layer array of deeply sub-wavelength inclusions. These proposed artificial surfaces, metamirrors, provide various functions of shaped or nonuniform reflectors without utilizing any mirror. This can be achieved only if the forward and backward scattering of the inclusions in the array can be engineered independently, and we prove that it is possible using electrically and magnetically polarizable inclusions. The proposed sub-wavelength inclusions possess desired reflecting properties at...

  16. Functional analysis

    CERN Document Server

    Kesavan, S

    2009-01-01

    The material presented in this book is suited for a first course in Functional Analysis which can be followed by Masters students. While covering all the standard material expected of such a course, efforts have been made to illustrate the use of various theorems via examples taken from differential equations and the calculus of variations, either through brief sections or through exercises. In fact, this book will be particularly useful for students who would like to pursue a research career in the applications of mathematics. The book includes a chapter on weak and weak* topologies and their applications to the notions of reflexivity, separability and uniform convexity. The chapter on the Lebesgue spaces also presents the theory of one of the simplest classes of Sobolev spaces. The book includes a chapter on compact operators and the spectral theory for compact self-adjoint operators on a Hilbert space. Each chapter has a large collection of exercises at the end. These illustrate the results of the text, show ...

  17. Distribution functions to estimate radionuclide solid-liquid distribution coefficients in soils: the case of Cs

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)

    2014-07-01

    In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (K{sub d}) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The K{sub d} values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction ('cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of K{sub d} ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The K{sub d} best estimates were defined as the calculated GM values, assuming that K{sub d} values were always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the K{sub d} parameter, a suitable continuous function which contains the statistical parameters (e.g. arithmetical/geometric mean, arithmetical/geometric standard deviation, mode, etc.) that better explain the distribution among the K{sub d} values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not been proposed for radionuclide K{sub d} in soils yet. Therefore, the aim of this work is to create CDFs for
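The lognormal best-estimate convention described above can be sketched as follows: the geometric mean (GM) and geometric standard deviation (GSD) of a K_d dataset fix a lognormal CDF from which both best estimates and confidence ranges follow. The K_d values below are hypothetical, not taken from the TRS 364 database:

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical Cs K_d values (L/kg) for one soil group.
kd = np.array([120.0, 350.0, 800.0, 95.0, 510.0, 1500.0, 230.0, 670.0])

logs = np.log(kd)
gm = np.exp(logs.mean())          # geometric mean = median of the lognormal
gsd = np.exp(logs.std(ddof=1))    # geometric standard deviation

# Matching lognormal CDF: shape = ln(GSD), scale = GM.
dist = lognorm(s=np.log(gsd), scale=gm)
low, high = dist.ppf([0.025, 0.975])  # 95% confidence range around the best estimate
```

Because the GM is the median of the fitted lognormal, `dist.cdf(gm)` is exactly 0.5, which is what makes the GM a natural single-value best estimate.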

  18. HST/ACS imaging of M82: A comparison of mass and size distribution functions of the younger nuclear and older disk clusters

    CERN Document Server

    Mayya, Y D; Rodríguez-Merino, L H; Luna, A; Carrasco, L; Rosa-Gonzalez, D

    2008-01-01

    We present the results obtained from an objective search for stellar clusters, both in the currently active nuclear starburst region, and in the post-starburst disk of M82. Images obtained with the HST/ACS in F435W(B), F555W(V), and F814W(I) filters were used in the search for the clusters. We detected 653 clusters of which 393 are located outside the central 450 pc in the post-starburst disk of M82. The luminosity function of the detected clusters shows an apparent turnover at B=22 mag (M_B=-5.8), which we interpret from Monte Carlo simulations as due to incompleteness in the detection of faint clusters, rather than an intrinsic log-normal distribution. We derived a photometric mass for every detected cluster from models of simple stellar populations assuming a mean age of either 8 (nuclear clusters) or 100 (disk clusters) million years. The mass functions of the disk (older) and the nuclear (younger) clusters follow power-laws, the former being marginally flatter (alpha=1.5+/-0.1) than the latter (alph...

  19. The VIMOS Public Extragalactic Redshift Survey (VIPERS). On the recovery of the count-in-cell probability distribution function

    Science.gov (United States)

    Bel, J.; Branchini, E.; Di Porto, C.; Cucciati, O.; Granett, B. R.; Iovino, A.; de la Torre, S.; Marinoni, C.; Guzzo, L.; Moscardini, L.; Cappi, A.; Abbas, U.; Adami, C.; Arnouts, S.; Bolzonella, M.; Bottini, D.; Coupon, J.; Davidzon, I.; De Lucia, G.; Fritz, A.; Franzetti, P.; Fumana, M.; Garilli, B.; Ilbert, O.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; McCracken, H. J.; Paioro, L.; Polletta, M.; Pollo, A.; Schlagenhaufer, H.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Burden, A.; Marchetti, A.; Mellier, Y.; Nichol, R. C.; Peacock, J. A.; Percival, W. J.; Phleps, S.; Wolk, M.

    2016-04-01

    We compare three methods to measure the count-in-cell probability density function of galaxies in a spectroscopic redshift survey. From this comparison we found that, when the sampling is low (the average number of objects per cell is around unity), it is necessary to use a parametric method to model the galaxy distribution. We used a set of mock catalogues of VIPERS to verify if we were able to reconstruct the cell-count probability distribution once the observational strategy is applied. We find that, in the simulated catalogues, the probability distribution of galaxies is better represented by a Gamma expansion than a skewed log-normal distribution. Finally, we correct the cell-count probability distribution function for the angular selection effect of the VIMOS instrument and study the redshift and absolute magnitude dependency of the underlying galaxy density function in VIPERS from redshift 0.5 to 1.1. We found a very weak evolution of the probability density distribution function and that it is well approximated by a Gamma distribution, independently of the chosen tracers. Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programmes 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://www.vipers.inaf.it/

  20. Functional Programming in R

    DEFF Research Database (Denmark)

    Mailund, Thomas

    2017-01-01

    Master functions and discover how to write functional programs in R. In this book, you'll make your functions pure by avoiding side-effects; you'll write functions that manipulate other functions, and you'll construct complex functions using simpler functions as building blocks. In Functional...... functions by combining simpler functions. You will: Write functions in R including infix operators and replacement functions Create higher order functions Pass functions to other functions and start using functions as data you can manipulate Use Filter, Map and Reduce functions to express the intent behind...... code clearly and safely Build new functions from existing functions without necessarily writing any new functions, using point-free programming Create functions that carry data along with them...

  1. Special functions & their applications

    CERN Document Server

    Lebedev, N N

    1972-01-01

    Famous Russian work discusses the application of cylinder functions and spherical harmonics; gamma function; probability integral and related functions; Airy functions; hypergeometric functions; more. Translated by Richard Silverman.

  2. Functions of bounded variation

    OpenAIRE

    Lind, Martin

    2006-01-01

    The paper begins with a short survey of monotone functions. The functions of bounded variation are introduced and some basic properties of these functions are given. Finally the jump function of a function of bounded variation is defined.

  3. Understanding star formation in molecular clouds III. Probability distribution functions of molecular lines in Cygnus X

    CERN Document Server

    Schneider, N; Motte, F; Ossenkopf, V; Klessen, R S; Simon, R; Fechtenbaum, S; Herpin, F; Tremblin, P; Csengeri, T; Myers, P C; Hill, T; Cunningham, M; Federrath, C

    2015-01-01

    Column density (N) PDFs serve as a powerful tool to characterize the physical processes that influence the structure of molecular clouds. Star-forming clouds can best be characterized by lognormal PDFs for the lower N range and a power-law tail for higher N, commonly attributed to turbulence and self-gravity and/or pressure, respectively. We report here on PDFs obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region and compare to a PDF derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av~1-30, but is cut for higher Av due to optical depth effects. The PDFs of C18O and 13CO are mostly lognormal for Av~1-15, followed by excess up to Av~40. Above that value, all CO PDFs drop, most likely due to depletion. The high density tracers CS and N2H+ exhibit only a power-law distribution between Av~15 and 400, respectively. The PDF from dust is lognormal for Av~2-15 and has a power-law tail up to Av~500. Absolute values for the molecular lin...

  4. Relations between Lipschitz functions and convex functions

    Institute of Scientific and Technical Information of China (English)

    RUAN Yingbin

    2005-01-01

    We discuss the relationship between Lipschitz functions and convex functions. By these relations, we give a sufficient condition for the set of points where a Lipschitz function on a Hilbert space is Fréchet differentiable to be residual.

  5. Gaussian Functions, Γ-Functions and Wavelets

    Institute of Scientific and Technical Information of China (English)

    蔡涛; 许天周

    2003-01-01

    The relation between the Gaussian function and the Γ-function is revealed first in the one-dimensional situation. Then, the Fourier transformation of the n-dimensional Gaussian function is deduced by a lemma. Following the train of thought in the one-dimensional situation, the relation between the n-dimensional Gaussian function and the Γ-function is given. By these, the possibility of an arbitrary derivative of an n-dimensional Gaussian function being a mother wavelet is indicated. The result sheds some light on the internal relations between the Gaussian function and the Γ-function, as well as on finding high-dimensional mother wavelets.

  6. Fading probability density function of free-space optical communication channels with pointing error

    Science.gov (United States)

    Zhao, Zhijun; Liao, Rui

    2011-06-01

    The turbulent atmosphere causes wavefront distortion, beam wander, and beam broadening of a laser beam. These effects result in average power loss and instantaneous power fading at the receiver aperture and thus degrade performance of a free-space optical (FSO) communication system. In addition to the atmospheric turbulence, an FSO communication system may also suffer from laser beam pointing error. The pointing error causes excessive power loss and power fading. This paper proposes and studies an analytical method for calculating the FSO channel fading probability density function (pdf) induced by both atmospheric turbulence and pointing error. This method is based on the fast-tracked laser beam fading profile and the joint effects of beam wander and pointing error. In order to evaluate the proposed analytical method, large-scale numerical wave-optics simulations are conducted. Three types of pointing errors are studied, namely, the Gaussian random pointing error, the residual tracking error, and the sinusoidal sway pointing error. The FSO system employs a collimated Gaussian laser beam propagating along a horizontal path. The propagation distances range from 0.25 miles to 2.5 miles. The refractive index structure parameter is chosen to be Cn^2 = 5×10^-15 m^-2/3 and Cn^2 = 5×10^-13 m^-2/3. The studied cases cover from weak to strong fluctuations. The fading pdf curves of channels with pointing error calculated using the analytical method match accurately the corresponding pdf curves obtained directly from large-scale wave-optics simulations. They also give accurate average bit-error-rate (BER) curves and outage probabilities. Both the lognormal and the best-fit gamma-gamma fading pdf curves deviate from those of corresponding simulation curves, and they produce overoptimistic average BER curves and outage probabilities.
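For reference, the lognormal fading pdf that such simulations are compared against can be written down explicitly. This is the standard unit-mean lognormal irradiance model; the value of the log-irradiance variance σ² below is an assumption, not a number from the paper. The sketch checks normalization and the unit mean:

```python
import math
from scipy.integrate import quad

def lognormal_fading_pdf(I, sigma2=0.2):
    """Unit-mean lognormal irradiance pdf: ln I ~ N(-sigma2/2, sigma2)."""
    if I <= 0:
        return 0.0
    return math.exp(-(math.log(I) + sigma2 / 2) ** 2 / (2 * sigma2)) / (
        I * math.sqrt(2 * math.pi * sigma2)
    )

total, _ = quad(lognormal_fading_pdf, 0, math.inf)                    # integrates to 1
mean_I, _ = quad(lambda I: I * lognormal_fading_pdf(I), 0, math.inf)  # unit mean
```

Setting the log-mean to −σ²/2 is what normalizes the average received irradiance to 1, so that fading is measured relative to the mean power.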

  7. The Initial Mass Function of the Inner Galaxy Measured from OGLE-III Microlensing Timescales

    Science.gov (United States)

    Wegg, Christopher; Gerhard, Ortwin; Portail, Matthieu

    2017-07-01

    We use the timescale distribution of ~3000 microlensing events measured by the OGLE-III survey, together with accurate new made-to-measure dynamical models of the Galactic bulge/bar region, to measure the IMF in the inner Milky Way. The timescale of each event depends on the mass of the lensing object, together with the relative distances and velocities of the lens and source. The dynamical model statistically provides these distances and velocities, allowing us to constrain the lens mass function, and thereby infer the IMF. Parameterizing the IMF as a broken power-law, we find slopes in the main-sequence α_ms = 1.31 ± 0.10 (stat) ± 0.10 (sys), and brown dwarf region α_bd = -0.7 ± 0.9 (stat) ± 0.8 (sys), where we use a fiducial 50% binary fraction, and the systematic uncertainty covers the range of binary fractions 0%-100%. Similarly, for a log-normal IMF we conclude M_c = (0.17 ± 0.02 (stat) ± 0.01 (sys)) M_⊙ and σ_m = 0.49 ± 0.07 (stat) ± 0.06 (sys). These values are very similar to a Kroupa or Chabrier IMF, respectively, showing that the IMF in the bulge is indistinguishable from that measured locally, despite the lenses lying in the inner Milky Way where the stars are mostly ~10 Gyr old and formed on a fast α-element enhanced timescale. This therefore constrains models of IMF variation that depend on the properties of the collapsing gas cloud.
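As an illustration, the quoted log-normal IMF parameters define a number density in log10 mass. The Chabrier-like form below is a sketch with unit peak amplitude and a hypothetical mass range, not the paper's full model:

```python
import math
from scipy.integrate import quad

M_C, SIGMA_M = 0.17, 0.49  # best-fit values quoted in the abstract (Msun, dex)

def imf_logm(logm, mc=M_C, sigma=SIGMA_M):
    """dN/dlog10(m): log-normal in log10 mass, unit peak amplitude."""
    return math.exp(-((logm - math.log10(mc)) ** 2) / (2 * sigma ** 2))

# Fraction of objects below the hydrogen-burning limit (~0.08 Msun),
# counting over an assumed 0.01-1 Msun range (below the power-law regime).
norm_const, _ = quad(imf_logm, -2.0, 0.0)
bd_count, _ = quad(imf_logm, -2.0, math.log10(0.08))
brown_dwarf_fraction = bd_count / norm_const
```

The distribution peaks at M_c, so most lenses in this regime sit near 0.17 Msun, with a non-negligible brown-dwarf fraction in the tail.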

  8. On quasinearly subharmonic functions

    OpenAIRE

    Dovgoshey, O.; Riihentaus, J.

    2016-01-01

    We recall the definition of quasinearly subharmonic functions, point out that this function class includes, among others, subharmonic functions, quasisubharmonic functions, nearly subharmonic functions and essentially almost subharmonic functions. It is shown that the sum of two quasinearly subharmonic functions may not be quasinearly subharmonic. Moreover, we characterize the harmonicity via quasinearly subharmonicity.

  9. Self-Preservation of the Drop Size Distribution Function and Variation in the Stability Ratio for Rapid Coalescence of a Polydisperse Emulsion in a Simple Shear Field

    Science.gov (United States)

    Mishra; Kresta; Masliyah

    1998-01-01

    Coalescence of oil-in-water emulsion droplets in a simple shear flow produced by a Couette device is considered. A phase Doppler anemometer was used to measure the droplet size distribution as a function of time for shear rates ranging from 55 to 213 s^-1 and for sodium chloride salt concentrations from 0.095 to 0.6 M. The initial droplet size distribution was log-normal. During the coalescence process, the size distribution was self-preserving in accordance with D. L. Swift and S. K. Friedlander's analysis [J. Colloid Sci. 19, 621 (1964)]. In the limiting case of negligible repulsive force due to the electric double layer, the calculated stability ratios, corrected for droplet polydispersity, agree well with the theoretical analyses of G. R. Zeichner and W. R. Schowalter [AIChE J. 23, 243 (1977)] and D. L. Feke and W. R. Schowalter [J. Fluid Mech. 133, 17 (1983)] for the case of solid particle aggregation. The good agreement between the stability ratios for the case of coalescence of droplets in the present study and those for aggregation of solid particles indicates that resistance to film deformation and thinning present in the case of coalescence is not important compared with the collision process. Copyright 1998 Academic Press.

  10. Bayesian function-on-function regression for multilevel functional data.

    Science.gov (United States)

    Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S

    2015-09-01

    Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data-where the unit of observation is a curve or set of curves that are finely sampled over a grid-is frequently obtained. Moreover, researchers often sample multiple curves per person resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images.

  11. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    Science.gov (United States)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is tough, or even impossible. First, the tail is the part of the distribution for which we have the least empirical information available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records were examined, from all over the world and with sample sizes over 100 years, revealing that heavy-tailed distributions can describe rainfall extremes more accurately.
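The MEF diagnostic described above is simple to sketch with synthetic data (not the rainfall records): the empirical mean excess e(u) = mean of (X − u) over exceedances X > u has near-zero regression slope for an exponential sample and a clearly positive slope for a heavy (Pareto) tail:

```python
import numpy as np

def mean_excess(x, thresholds):
    """Empirical mean excess e(u) = mean of (X - u) over exceedances X > u."""
    return np.array([x[x > u].mean() - u for u in thresholds])

rng = np.random.default_rng(1)
expo = rng.exponential(scale=2.0, size=50_000)         # exponential: light tail
pareto = 2.0 * (1.0 + rng.pareto(a=3.0, size=50_000))  # Pareto(alpha=3): heavy tail

u = np.quantile(expo, np.linspace(0.5, 0.95, 10))
slope_expo = np.polyfit(u, mean_excess(expo, u), 1)[0]      # ~0 for exponential

v = np.quantile(pareto, np.linspace(0.5, 0.95, 10))
slope_pareto = np.polyfit(v, mean_excess(pareto, v), 1)[0]  # ~1/(alpha-1) = 0.5
```

For the exponential, e(u) equals the scale parameter at every threshold (memorylessness), which is why the zero-slope line is the reference case the confidence intervals are built around.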

  12. Modelling distribution functions and fragmentation functions

    CERN Document Server

    Rodrigues, J; Mulders, P J

    1995-01-01

    We present examples for the calculation of the distribution and fragmentation functions using the representation in terms of non-local matrix elements of quark field operators. As specific examples, we use a simple spectator model to estimate the leading twist quark distribution functions and the fragmentation functions for a quark into a nucleon or a pion.

  13. Deep Advanced Camera for Surveys Imaging in the Globular Cluster NGC 6397: the Cluster Color-Magnitude Diagram and Luminosity Function

    Science.gov (United States)

    Richer, Harvey B.; Dotter, Aaron; Hurley, Jarrod; Anderson, Jay; King, Ivan; Davis, Saul; Fahlman, Gregory G.; Hansen, Brad M. S.; Kalirai, Jason; Paust, Nathaniel; Rich, R. Michael; Shara, Michael M.

    2008-06-01

    We present the color-magnitude diagram (CMD) from deep Hubble Space Telescope imaging in the globular cluster NGC 6397. The Advanced Camera for Surveys (ACS) was used for 126 orbits to image a single field in two colors (F814W, F606W) 5' SE of the cluster center. The field observed overlaps that of archival WFPC2 data from 1994 and 1997 which were used to proper motion (PM) clean the data. Applying the PM corrections produces a remarkably clean CMD which reveals a number of features never seen before in a globular cluster CMD. In our field, the main-sequence stars appeared to terminate close to the location in the CMD of the hydrogen-burning limit predicted by two independent sets of stellar evolution models. The faintest observed main-sequence stars are about a magnitude fainter than the least luminous metal-poor field halo stars known, suggesting that the lowest-luminosity halo stars still await discovery. At the bright end the data extend beyond the main-sequence turnoff to well up the giant branch. A populous white dwarf cooling sequence is also seen in the cluster CMD. The most dramatic features of the cooling sequence are its turn to the blue at faint magnitudes as well as an apparent truncation near F814W = 28. The cluster luminosity and mass functions were derived, stretching from the turnoff down to the hydrogen-burning limit. They were well modeled with either a very flat power-law or a lognormal function. In order to interpret these fits more fully we compared them with similar functions in the cluster core and with a full N-body model of NGC 6397, finding satisfactory agreement between the model predictions and the data. This exercise demonstrates the important role and the effect that dynamics has played in altering the cluster initial mass function.

  14. Generalized Probability Functions

    Directory of Open Access Journals (Sweden)

    Alexandre Souto Martinez

    2009-01-01

    Full Text Available From the integration of nonsymmetrical hyperboles, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable to generalize some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn our attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, the generalized Gaussian and Laplace pdfs. Their cumulative functions and moments were also obtained analytically.
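A sketch of a one-parameter generalized logarithm and its inverse generalized exponential, in the commonly used form ln_λ(x) = (x^λ − 1)/λ (an assumption; the paper's exact parameterization may differ), which recovers ln and exp as λ → 0:

```python
import math

def gen_log(x, lam):
    """One-parameter generalized logarithm (x**lam - 1)/lam; -> ln(x) as lam -> 0."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def gen_exp(y, lam):
    """Inverse generalized exponential (1 + lam*y)**(1/lam); -> exp(y) as lam -> 0."""
    if lam == 0:
        return math.exp(y)
    return (1.0 + lam * y) ** (1.0 / lam)
```

For λ = 1 these reduce to x − 1 and 1 + y; intermediate λ interpolates between linear and logarithmic behavior, which is what lets such functions deform Gaussian and exponential pdfs into stretched-exponential and power-law forms.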

  15. Riemann Zeta Matrix Function

    OpenAIRE

    Kargın, Levent; Kurt, Veli

    2015-01-01

    In this study, by obtaining the matrix analog of Euler's reflection formula for the classical gamma function, we expand the domain of the gamma matrix function and give an infinite product expansion of sin(πxP). Furthermore, we define the Riemann zeta matrix function and evaluate some other matrix integrals. We prove a functional equation for the Riemann zeta matrix function.

  16. Functionality and homogeneity.

    NARCIS (Netherlands)

    2011-01-01

    Functionality and homogeneity are two of the five Sustainable Safety principles. The functionality principle aims for roads to have but one exclusive function and distinguishes between traffic function (flow) and access function (residence). The homogeneity principle aims at differences in mass, spe

  18. Optical dual self functions

    Institute of Scientific and Technical Information of China (English)

    华建文; 刘立人; 王宁

    1997-01-01

    A recipe to construct the exact dual self-Fourier-Fresnel-transform functions is shown, where the Dirac comb function and a transformable even periodic function are used. The mathematical proof and examples are given. Then this kind of self-transform function is extended to the feasible optical dual self-transform functions.

  19. Platelet Function Tests

    Science.gov (United States)

    ... be limited. ... their patients by ordering one or more platelet function tests. Platelet function testing may include one or more of ...

  20. Congenital platelet function defects

    Science.gov (United States)

    ... storage pool disorder; Glanzmann's thrombasthenia; Bernard-Soulier syndrome; Platelet function defects - congenital ... Congenital platelet function defects are bleeding disorders that ... function, even though there are normal platelet numbers. Most ...

  1. Extraocular muscle function testing

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/article/003397.htm Extraocular muscle function testing examines the function of the eye ...

  2. Properties of Bourbaki's Function

    CERN Document Server

    McCollum, James

    2010-01-01

    We examine Bourbaki's function, an easily-constructed continuous but nowhere-differentiable function, and explore properties including functional identities, the antiderivative, and the Hausdorff dimension of the graph.

  3. Random functions and turbulence

    CERN Document Server

    Panchev, S

    1971-01-01

    International Series of Monographs in Natural Philosophy, Volume 32: Random Functions and Turbulence focuses on the use of random functions as mathematical methods. The manuscript first offers information on the elements of the theory of random functions. Topics include determination of statistical moments by characteristic functions; functional transformations of random variables; multidimensional random variables with spherical symmetry; and random variables and distribution functions. The book then discusses random processes and random fields, including stationarity and ergodicity of random

  4. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA in ...

  5. Acquired platelet function defect

    Science.gov (United States)

    Acquired qualitative platelet disorders; Acquired disorders of platelet function ... blood clotting. Disorders that can cause problems in platelet function include: Idiopathic thrombocytopenic purpura Chronic myelogenous leukemia Multiple ...

  6. Functionalized nanosponges for controlled antibacterial and antihypocalcemic actions.

    Science.gov (United States)

    Deshmukh, Kiran; Tanwar, Yuveraj Singh; Sharma, Shailendra; Shende, Pravin; Cavalli, Roberta

    2016-12-01

    The aim of the present work was to develop lysozyme-impregnated surface-active nanosponges to maintain its conformational stability and break bacterial cell walls by catalyzing the hydrolysis of 1,4-β-linkages between N-acetyl-d-glucosamine and N-acetylmuramic acid residues present in the peptidoglycan layer surrounding the bacterial cell membrane, and to control the release of calcium under hypocalcemic conditions. Different carbonyl diimidazole cross-linked β-cyclodextrin nanosponges with and without CaCO3 and CMC were prepared by the polymer condensation method. The surface-active nanosponges were impregnated with lysozyme due to their ability to adsorb protein. Lysozyme-impregnated nanosponges had a monomodal particle size distribution of 347.46±3.07 to 550.34±5.23 nm, with a narrow distribution. The zeta potentials were sufficiently increased upon lysozyme impregnation, suggesting stable formulations by preventing aggregation. The in vitro release studies showed controlled release of lysozyme and calcium over a period of 24 h. FTIR studies confirmed the impregnation of lysozyme on the nanosponges and the encapsulation of calcium in the nanosponges. The lysozyme formulation showed promising conformational stability by DSC. It can be concluded that the stable nanosponge formulation is a promising carrier for an antibacterial protein and for preventing depletion of calcium in antibiotic-associated hypocalcemic conditions.

  7. Functional microorganisms for functional food quality.

    Science.gov (United States)

    Gobbetti, M; Cagno, R Di; De Angelis, M

    2010-09-01

    Functional microorganisms and health benefits represent a binomial with great potential for fermented functional foods. The health benefits of fermented functional foods are expressed either directly through the interactions of ingested live microorganisms with the host (probiotic effect) or indirectly as the result of the ingestion of microbial metabolites synthesized during fermentation (biogenic effect). Given the importance of high viability for the probiotic effect, two major options are currently pursued for improving it: to enhance the bacterial stress response and to use alternative products for incorporating probiotics (e.g., ice cream, cheeses, cereals, fruit juices, vegetables, and soy beans). Further, it seems that quorum sensing signal molecules released by probiotics may interact with human epithelial cells from the intestine, thus modulating several physiological functions. Under optimal processing conditions, functional microorganisms contribute to food functionality through their enzyme portfolio and the release of metabolites. Overproduction of free amino acids and vitamins are two classical examples. In addition, bioactive compounds (e.g., peptides, γ-amino butyric acid, and conjugated linoleic acid) may be released during food processing above the physiological threshold, and they may exert various in vivo health benefits. Functional microorganisms are increasingly used in novel strategies for decreasing the phenomenon of food intolerance (e.g., gluten intolerance) and allergy. Through a critical approach, this review aims to show the potential of functional microorganisms for the quality of functional foods.

  8. BANYAN. IX. The Initial Mass Function and Planetary-mass Object Space Density of the TW HYA Association

    Science.gov (United States)

    Gagné, Jonathan; Faherty, Jacqueline K.; Mamajek, Eric E.; Malo, Lison; Doyon, René; Filippazzo, Joseph C.; Weinberger, Alycia J.; Donaldson, Jessica K.; Lépine, Sébastien; Lafrenière, David; Artigau, Étienne; Burgasser, Adam J.; Looper, Dagny; Boucher, Anne; Beletsky, Yuri; Camnasio, Sara; Brunette, Charles; Arboit, Geneviève

    2017-02-01

    A determination of the initial mass function (IMF) of the current, incomplete census of the 10 Myr-old TW Hya association (TWA) is presented. This census is built from a literature compilation supplemented with new spectra and 17 new radial velocities from ongoing membership surveys, as well as a reanalysis of Hipparcos data that confirmed HR 4334 (A2 Vn) as a member. Although the dominant uncertainty in the IMF remains census incompleteness, a detailed statistical treatment is carried out to make the IMF determination independent of binning while accounting for small number statistics. The currently known high-likelihood members are fitted by a log-normal distribution with a central mass of 0.21 (−0.06, +0.11) M⊙ and a characteristic width of 0.8 (−0.1, +0.2) dex in the 12 MJup–2 M⊙ range, whereas a Salpeter power law with α = 2.2 (−0.5, +1.1) best describes the IMF slope in the 0.1–2 M⊙ range. This characteristic width is higher than in other young associations, which may be due to incompleteness in the current census of low-mass TWA stars. A tentative overpopulation of isolated planetary-mass members similar to 2MASS J11472421–2040204 and 2MASS J11193254–1137466 is identified: this indicates that there might be as many as 10 (−5, +13) similar members of TWA with hot-start model-dependent masses estimated at ∼5–7 MJup, most of which would be too faint to be detected in 2MASS. Our new radial velocity measurements corroborate the membership of 2MASS J11472421–2040204, and secure TWA 28 (M8.5 γ), TWA 29 (M9.5 γ), and TWA 33 (M4.5 e) as members. The discovery of 2MASS J09553336–0208403, a young L7-type interloper unrelated to TWA, is also presented.

  9. Every storage function is a state function

    NARCIS (Netherlands)

    Trentelman, H.L.; Willems, J.C.

    1997-01-01

    It is shown that for linear dynamical systems with quadratic supply rates, a storage function can always be written as a quadratic function of the state of an associated linear dynamical system. This dynamical system is obtained by combining the dynamics of the original system with the dynamics of t

  10. Rough function model and rough membership function

    Institute of Scientific and Technical Information of China (English)

    Wang Yun; Guan Yanyong; Huang Zhiqin

    2008-01-01

    Two pairs of approximation operators, which are the scale lower and upper approximations as well as the real line lower and upper approximations, are defined. Their properties and antithesis characteristics are analyzed. The rough function model is generalized based on rough set theory, and the scheme of rough function theory is made more distinct and complete. Therefore, the transformation of the real function analysis from real line to scale is achieved. A series of basic concepts in rough function model including rough numbers, rough intervals, and rough membership functions are defined in the new scheme of the rough function model. Operating properties of rough intervals similar to rough sets are obtained. The relationship of rough inclusion and rough equality of rough intervals is defined by two kinds of tools, known as the lower (upper) approximation operator in real numbers domain and rough membership functions. Their relative properties are analyzed and proved strictly, which provides necessary theoretical foundation and technical support for the further discussion of properties and practical application of the rough function model.

  11. Positive random fields for modeling material stiffness and compliance

    DEFF Research Database (Denmark)

    Hasofer, Abraham Michael; Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1998-01-01

    Positive random fields with known marginal properties and known correlation function are not numerous in the literature. The most prominent example is the lognormal field for which the complete distribution is known and for which the reciprocal field is also lognormal. It is of interest to supp...

  12. Entire functions sharing one small function

    Institute of Scientific and Technical Information of China (English)

    LI Yun-tong; CAO Yao

    2007-01-01

    The uniqueness problem of entire functions sharing one small function was studied. By Picard's Theorem, we proved that for two transcendental entire functions f(z) and g(z), a positive integer n ≥ 9, and a(z) (not identically equal to zero) being a common small function related to f(z) and g(z), if f^n(z)(f(z)-1)f'(z) and g^n(z)(g(z)-1)g'(z) share a(z) CM, where CM means counting multiplicity, then g(z) ≡ f(z). This is an extended version of Fang and Hong's theorem [Fang ML, Hong W, A unicity theorem for entire functions concerning differential polynomials, Journal of Indian Pure Applied Mathematics, 2001, 32(9): 1343-1348].

  13. GENERALIZED WEAK FUNCTIONS

    Institute of Scientific and Technical Information of China (English)

    丁夏畦; 罗佩珠

    2004-01-01

    In this paper the authors introduce some new ideas on generalized numbers and generalized weak functions. They prove that the product of any two weak functions is a generalized weak function. So in particular they solve the problem of the multiplication of two generalized functions.

  14. Hepatic (Liver) Function Panel

    Science.gov (United States)

    ... Blood Test: Hepatic (Liver) Function Panel ... The hepatic function panel, also known as liver function tests, is a group of seven tests ...

  15. Decomposable Effectivity Functions

    NARCIS (Netherlands)

    Otten, G.J.M.

    1995-01-01

    Decomposable effectivity functions are introduced as an extension of additive effectivity functions. Whereas additive effectivity functions are determined by pairs of additive TU-games, decomposable effectivity functions are generated by pairs of TU-games that need not be additive. It turns out that

  16. Teager Correlation Function

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen; Hamila, R.; Gabbouj, M.

    1998-01-01

    A new correlation function called the Teager correlation function is introduced in this paper. The connection between this function, the Teager energy operator and the conventional correlation function is established. Two applications are presented. The first is the minimization of the Teager error norm and the second one is the use of the instantaneous Teager correlation function for simultaneous estimation of TDOA and FDOA (Time and Frequency Difference of Arrivals).
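    The abstract names the Teager energy operator without reproducing the paper's definitions. The sketch below implements only the standard discrete Teager-Kaiser operator, ψ[n] = x[n]² − x[n−1]·x[n+1], and checks its well-known constant output on a pure sinusoid; the Teager correlation function itself is not reconstructed here.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoid x[n] = A*cos(w*n + phi), psi is the constant A^2 * sin^2(w).
n = np.arange(400)
A, w = 2.0, 0.3
x = A * np.cos(w * n + 0.7)
psi = teager_energy(x)
assert np.allclose(psi, A**2 * np.sin(w) ** 2)
```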

  17. Relations between Lipschitz functions and convex functions

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    [1]Zajicek, J., On the differentiation of convex functions in finite and infinite dimensional spaces, Czech J. Math., 1979, 29: 340-348.[2]Hu, T. C., Klee, V. L., Larman, D. G., Optimization of globally convex functions, SIAM J. Control Optim., 1989, 27: 1026-1047.[3]Cepedello Boiso, M., Approximation of Lipschitz functions by △-convex functions in Banach spaces, Israel J. Math., 1998, 106: 269-284.[4]Asplund, E., Frechet differentiability of convex functions, Acta Math., 1968, 121: 31-47.[5]Johnson, J. A., Lipschitz spaces, Pacific J. Math, 1974, 51: 177-186.[6]Stromberg, T., The operation of infimal convolution, Dissert. Math. (Rozprawy Mat.), 1996, 325: 58.[7]Kadison, R. V., Ringrose, J. R., Fundamentals of the theory of operator algebras, volume Ⅰ: Elementary Theory, Graduate Studies in Math., vol. 15, Amer. Math. Soc., 1997.[8]Phelps, R. R., Convex functions, monotone operators and differentiability, Lect. Notes in Math., vol. 1364, Springer-Verlag, 1977.[9]Lindenstrauss, J., On operators which attain their norm, Israel J. Math., 1963, 1: 139-148.[10]Preiss, D., Gateaux differentiable functions are somewhere Frechet differentiable, Rend. Circ. Mat. Palermo, 1984, 33: 122-133.[11]Preiss, D., Differentiability of Lipschitz functions on Banach spaces, J. Funct. Anal., 1990, 91: 312-345.[12]Lindenstrauss, J., Preiss, D., On Frechet differentiability of Lipschitz maps between Banach spaces, Annals of Math., 2003, 157: 257-288.[13]Preiss, D., Gateaux differentiable Lipschitz functions need not be Frechet differentiable on a residual set, Supplemento Rend. Circ. Mat. Palermo, Serie Ⅱ, 1982, 2: 217-222.

  18. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    Science.gov (United States)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fit to 250-meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible, otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid and late winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented, and should be useful to set bounds for retrievals of PSC properties from remote measurements, and to constrain model representations of PSCs.
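    Fitting a bimodal lognormal size distribution, as described above, can be sketched as a least-squares problem. The mode parameters and radius grid below are illustrative placeholders, not values from the McMurdo or Kiruna data sets.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(r, n_tot, r_med, sigma):
    """One lognormal size-distribution mode (dn/dr form): total number n_tot,
    median radius r_med, geometric width sigma in natural-log units."""
    return (n_tot / (r * sigma * np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * (np.log(r / r_med) / sigma) ** 2))

def bimodal(r, n1, r1, s1, n2, r2, s2):
    # sum of two lognormal modes
    return lognormal_mode(r, n1, r1, s1) + lognormal_mode(r, n2, r2, s2)

r = np.geomspace(0.01, 10.0, 300)            # radius grid (micrometres, illustrative)
true = (10.0, 0.08, 0.4, 0.1, 1.2, 0.3)      # two illustrative modes
y = bimodal(r, *true)

# Recover the six mode parameters from the (noiseless) synthetic distribution.
popt, _ = curve_fit(bimodal, r, y, p0=(8.0, 0.1, 0.5, 0.2, 1.0, 0.4))
assert np.allclose(bimodal(r, *popt), y, rtol=1e-3, atol=1e-6)
```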

  19. Sampling functions for geophysics

    Science.gov (United States)

    Giacaglia, G. E. O.; Lunquist, C. A.

    1972-01-01

    A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N+1)² sampling points on a sphere is given, for which the (N+1)² spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.

  20. Filter function synthesis by Gegenbauer generating function

    Directory of Open Access Journals (Sweden)

    Pavlović Vlastimir D.

    2006-01-01

    Full Text Available Low-pass all-pole transfer functions with non-monotonic amplitude characteristic in the pass-band and at least (n−1) flatness conditions at ω = 0 are considered in this paper. A new class of filters in explicit form with one free parameter is obtained by applying generating functions of Gegenbauer polynomials. This class of filters has good selectivity and a good shape of the amplitude characteristic in the pass-band. The amplitude characteristics of these transfer functions have gain in the upper part of the pass-band with respect to the gain for ω = 0. In this way we have a greater margin of attenuation in the upper part of the pass-band. This permits greater element tolerances or, for elements with given tolerances, greater ambient temperature changes. The appropriate choice of the free parameter enables us to generate filter functions obtained with Chebyshev polynomials of the first and second kind and Legendre polynomials.
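    The Gegenbauer generating function referred to in the title, (1 − 2xt + t²)^(−α) = Σₙ Cₙ^α(x) tⁿ, can be checked numerically; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.special import eval_gegenbauer

alpha, x, t = 1.5, 0.4, 0.2

# Closed form of the generating function ...
lhs = (1.0 - 2.0 * x * t + t**2) ** (-alpha)

# ... versus a truncated power series in t with Gegenbauer coefficients.
rhs = sum(eval_gegenbauer(n, alpha, x) * t**n for n in range(60))

assert np.isclose(lhs, rhs)
```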

  1. Evans Functions, Jost Functions, and Fredholm Determinants

    Science.gov (United States)

    Gesztesy, Fritz; Latushkin, Yuri; Makarov, Konstantin A.

    2007-12-01

    The principal results of this paper consist of an intrinsic definition of the Evans function in terms of newly introduced generalized matrix-valued Jost solutions for general first-order matrix-valued differential equations on the real line, and a proof of the fact that the Evans function, a finite-dimensional determinant by construction, coincides with a modified Fredholm determinant associated with a Birman-Schwinger-type integral operator up to an explicitly computable nonvanishing factor.

  2. Parton Distribution Function Uncertainties

    CERN Document Server

    Giele, Walter T.; Kosower, David A.; Giele, Walter T.; Keller, Stephane A.; Kosower, David A.

    2001-01-01

    We present parton distribution functions which include a quantitative estimate of their uncertainties. The parton distribution functions are optimized with respect to deep inelastic proton data, expressing the uncertainties as a density measure over the functional space of parton distribution functions. This leads to a convenient method of propagating the parton distribution function uncertainties to new observables, now expressing the uncertainty as a density in the prediction of the observable. New measurements can easily be included in the optimized sets as added weight functions to the density measure. With the optimized method, no compromises have to be made anywhere in the analysis with regard to the treatment of the uncertainties.

  3. A Blue Lagoon Function

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2007-01-01

    We consider a specific function of two variables whose graph surface resembles a blue lagoon. The function has a saddle point $p$, but when the function is restricted to any given straight line through $p$ it has a {\em{strict local minimum}} along that line at $p$.
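    The abstract does not give the function explicitly. A classical function with exactly this property (essentially Peano's example, not necessarily the authors' surface) is f(x, y) = (y − x²)(y − 2x²), verified numerically below.

```python
import numpy as np

def f(x, y):
    """Peano-type example: the origin is a critical point that is a strict local
    minimum along every straight line, yet not a local minimum of f."""
    return (y - x**2) * (y - 2.0 * x**2)

# Along any line through the origin, f is positive close to (0, 0) ...
t = 1e-3
for theta in np.linspace(0.0, np.pi, 13, endpoint=False):
    assert f(t * np.cos(theta), t * np.sin(theta)) > 0.0

# ... but along the parabola y = 1.5 x^2 the function is negative,
# so the origin is not a local minimum.
assert f(t, 1.5 * t**2) < 0.0
```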

  4. Functionalized boron nitride nanotubes

    Science.gov (United States)

    Sainsbury, Toby; Ikuno, Takashi; Zettl, Alexander K

    2014-04-22

    A plasma treatment has been used to modify the surface of BNNTs. In one example, the surface of the BNNT has been modified using ammonia plasma to include amine functional groups. Amine functionalization allows BNNTs to be soluble in chloroform, which had not been possible previously. Further functionalization of amine-functionalized BNNTs with thiol-terminated organic molecules has also been demonstrated. Gold nanoparticles have been self-assembled at the surface of both amine- and thiol-functionalized boron nitride Nanotubes (BNNTs) in solution. This approach constitutes a basis for the preparation of highly functionalized BNNTs and for their utilization as nanoscale templates for assembly and integration with other nanoscale materials.

  5. Belief functions on lattices

    CERN Document Server

    Grabisch, Michel

    2008-01-01

    We extend the notion of belief function to the case where the underlying structure is no longer the Boolean lattice of subsets of some universal set, but any lattice, which we will endow with a minimal set of properties according to our needs. We show that all classical constructions and definitions (e.g., mass allocation, commonality function, plausibility functions, necessity measures with nested focal elements, possibility distributions, Dempster rule of combination, decomposition w.r.t. simple support functions, etc.) remain valid in this general setting. Moreover, our proof of decomposition of belief functions into simple support functions is much simpler and more general than the original one by Shafer.

  6. Scaled density functional theory correlation functionals.

    Science.gov (United States)

    Ghouri, Mohammed M; Singh, Saurabh; Ramachandran, B

    2007-10-18

    We show that a simple one-parameter scaling of the dynamical correlation energy estimated by the density functional theory (DFT) correlation functionals helps increase the overall accuracy for several local and nonlocal functionals. The approach taken here has been described as the "scaled dynamical correlation" (SDC) method [Ramachandran, J. Phys. Chem. A 2006, 110, 396], and its justification is the same as that of the scaled external correlation (SEC) method of Brown and Truhlar. We examine five local and five nonlocal (hybrid) DFT functionals, the latter group including three functionals developed specifically for kinetics by the Truhlar group. The optimum scale factors are obtained by use of a set of 98 data values consisting of molecules, ions, and transition states. The optimum scale factors, found with a linear regression relationship, are found to differ from unity with a high degree of correlation in nearly every case, indicating that the deviations of calculated results from the experimental values are systematic and proportional to the dynamical correlation energy. As a consequence, the SDC scaling of dynamical correlation decreases the mean errors (signed and unsigned) by significant amounts in an overwhelming majority of cases. These results indicate that there are gains to be realized from further parametrization of several popular exchange-correlation functionals.
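    A one-parameter scaling with a regression-determined scale factor can be sketched as follows; all numbers and array names are invented for illustration and are not data from the paper.

```python
import numpy as np

# Hypothetical energies (hartree, illustrative): a correlation-free baseline,
# an estimated dynamical-correlation contribution, and benchmark targets.
e_base = np.array([-1.10, -2.35, -0.87, -3.02])
e_corr = np.array([-0.21, -0.45, -0.15, -0.60])
e_ref  = np.array([-1.33, -2.85, -1.04, -3.69])

# One-parameter scaling E = e_base + lam * e_corr;
# least-squares estimate of the scale factor lam.
lam = np.dot(e_ref - e_base, e_corr) / np.dot(e_corr, e_corr)
residual = e_ref - (e_base + lam * e_corr)

print(lam, np.linalg.norm(residual))
```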

  7. Functional Median Polish

    KAUST Repository

    Sun, Ying

    2012-08-03

    This article proposes functional median polish, an extension of univariate median polish, for one-way and two-way functional analysis of variance (ANOVA). The functional median polish estimates the functional grand effect and functional main factor effects based on functional medians in an additive functional ANOVA model assuming no interaction among factors. A functional rank test is used to assess whether the functional main factor effects are significant. The robustness of the functional median polish is demonstrated by comparing its performance with the traditional functional ANOVA fitted by means under different outlier models in simulation studies. The functional median polish is illustrated on various applications in climate science, including one-way and two-way ANOVA when functional data are either curves or images. Specifically, Canadian temperature data, U. S. precipitation observations and outputs of global and regional climate models are considered, which can facilitate the research on the close link between local climate and the occurrence or severity of some diseases and other threats to human health. © 2012 International Biometric Society.
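    Functional median polish extends Tukey's univariate median polish. The following is a sketch of the classical univariate algorithm it builds on (the implementation details are my own, not the article's).

```python
import numpy as np

def median_polish(table, n_iter=10):
    """Tukey's median polish: decompose a two-way table into
    grand effect + row effects + column effects + residuals."""
    resid = np.array(table, dtype=float)
    grand = 0.0
    row = np.zeros(resid.shape[0])
    col = np.zeros(resid.shape[1])
    for _ in range(n_iter):
        # sweep row medians out of the residuals
        rmed = np.median(resid, axis=1)
        resid -= rmed[:, None]
        row += rmed
        # move the median of the column effects into the grand effect
        m = np.median(col)
        col -= m
        grand += m
        # sweep column medians out of the residuals
        cmed = np.median(resid, axis=0)
        resid -= cmed[None, :]
        col += cmed
        # move the median of the row effects into the grand effect
        m = np.median(row)
        row -= m
        grand += m
    return grand, row, col, resid

# An exactly additive table is recovered exactly.
g, r, c = 10.0, np.array([-1.0, 0.0, 2.0]), np.array([0.5, -0.5, 1.5, -1.5])
table = g + r[:, None] + c[None, :]
grand, row, col, resid = median_polish(table)
assert np.allclose(grand + row[:, None] + col[None, :] + resid, table)
assert np.allclose(resid, 0.0)
```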

  8. Smart hydrogel functional materials

    CERN Document Server

    Chu, Liang-Yin; Ju, Xiao-Jie

    2014-01-01

    This book systematically introduces smart hydrogel functional materials with the configurations ranging from hydrogels to microgels. It serves as an excellent reference for designing and fabricating artificial smart hydrogel functional materials.

  9. Functional Python programming

    CERN Document Server

    Lott, Steven

    2015-01-01

    This book is for developers who want to use Python to write programs that lean heavily on functional programming design patterns. You should be comfortable with Python programming, but no knowledge of functional programming paradigms is needed.

  10. Operations Between Functions

    DEFF Research Database (Denmark)

    Gardner, Richard J.; Kiderlen, Markus

    A structural theory of operations between real-valued (or extended-real-valued) functions on a nonempty subset A of Rn is initiated. It is shown, for example, that any operation ∗ on a cone of functions containing the constant functions, which is pointwise, positively homogeneous, monotonic, and associative, must be one of 40 explicitly given types. In particular, this is the case for operations between pairs of arbitrary, or continuous, or differentiable functions. The term pointwise means that (f ∗ g)(x) = F(f(x), g(x)), for all x ∈ A and some function F of two variables. Several results in the same spirit are obtained for operations between convex functions or between support functions. For example, it is shown that ordinary addition is the unique pointwise operation between convex functions satisfying the identity property, i.e., f ∗ 0 = 0 ∗ f = f, for all convex f, while other results classify Lp

  11. Adding functionality to garments

    CSIR Research Space (South Africa)

    Hunter, L

    2014-11-01

    Full Text Available Various functionalities, such as retention of appearance, durability, comfort, handle and tailorability, can be enhanced in garments. The tests used to assess and quantify the different functionalities are described.

  12. Functionalized diamond nanoparticles

    KAUST Repository

    Beaujuge, Pierre M.

    2014-10-21

    A diamond nanoparticle can be functionalized with a substituted dienophile under ambient conditions, and in the absence of catalysts or additional reagents. The functionalization is thought to proceed through an addition reaction.

  13. Liver Function Tests

    Science.gov (United States)

    ... food, store energy, and remove poisons. Liver function tests are blood tests that check to see how well your liver ... hepatitis and cirrhosis. You may have liver function tests as part of a regular checkup. Or you ...

  14. Monotone Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Korshunov, A D [S.L. Sobolev Institute for Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2003-10-31

    Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years.
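    As a small concrete companion to the survey topic, here is a brute-force monotonicity check for a Boolean function (illustrative only; not an algorithm from the survey): f is monotone when x ≤ y componentwise implies f(x) ≤ f(y).

```python
from itertools import product

def is_monotone(f, n):
    """Check that f: {0,1}^n -> {0,1} is monotone:
    x <= y pointwise implies f(x) <= f(y)."""
    pts = list(product((0, 1), repeat=n))
    return all(f(x) <= f(y)
               for x in pts for y in pts
               if all(a <= b for a, b in zip(x, y)))

majority = lambda x: int(sum(x) >= 2)   # monotone
parity = lambda x: sum(x) % 2           # not monotone

assert is_monotone(majority, 3)
assert not is_monotone(parity, 3)
```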

  15. Phylogenetic molecular function annotation

    OpenAIRE

    Engelhardt, Barbara E.; Jordan, Michael I.; Repo, Susanna T; Brenner, Steven E.

    2009-01-01

    It is now easier to discover thousands of protein sequences in a new microbial genome than it is to biochemically characterize the specific activity of a single protein of unknown function. The molecular functions of protein sequences have typically been predicted using homology-based computational methods, which rely on the principle that homologous proteins share a similar function. However, some protein families include groups of proteins with different molecular functions. A phylogenetic ...

  16. Distributed processing; distributed functions?

    OpenAIRE

    Fox, Peter T.; FRISTON, KARL J

    2012-01-01

    After more than twenty years busily mapping the human brain, what have we learned from neuroimaging? This review (coda) considers this question from the point of view of structure–function relationships and the two cornerstones of functional neuroimaging; functional segregation and integration. Despite remarkable advances and insights into the brain’s functional architecture, the earliest and simplest challenge in human brain mapping remains unresolved: We do not have a principled way to map ...

  17. Pseudolinear functions and optimization

    CERN Document Server

    Mishra, Shashi Kant

    2015-01-01

    Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems. The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally

  18. Spheroidal wave functions

    CERN Document Server

    Flammer, Carson

    2005-01-01

    Intended to facilitate the use and calculation of spheroidal wave functions, this applications-oriented text features a detailed and unified account of the properties of these functions. Addressed to applied mathematicians, mathematical physicists, and mathematical engineers, it presents tables that provide a convenient means for handling wave problems in spheroidal coordinates. Topics include separation of the scalar wave equation in spheroidal coordinates, angle and radial functions, integral representations and relations, and expansions in spherical Bessel function products. Additional subje

  19. Functional linear models

    OpenAIRE

    2015-01-01

    This work presents two different results we have obtained in Functional Data Analysis. The first is a variable selection method in Functional Regression, an adaptation of the well-known Lasso technique. The second is a brand new Random Walk test for Functional Time Series. Since the results pertain to different areas of Functional Data Analysis, as well as of general Statistics, the introduction will be divided into three parts. Firstly we expose the fundament...

  20. Functional Cantor equation

    Science.gov (United States)

    Shabat, A. B.

    2016-12-01

    We consider the class of entire functions of exponential type in relation to the scattering theory for the Schrödinger equation with a finite potential that is a finite Borel measure. These functions have a special self-similarity and satisfy q-difference functional equations. We study their asymptotic behavior and the distribution of zeros.

  1. Degenerate Euler zeta function

    OpenAIRE

    Kim, Taekyun

    2015-01-01

    Recently, T. Kim considered the Euler zeta function, which interpolates Euler polynomials at negative integers (see [3]). In this paper, we study the degenerate Euler zeta function, a holomorphic function on the complex s-plane associated with degenerate Euler polynomials at negative integers.

  2. Phylogenetic molecular function annotation

    Science.gov (United States)

    Engelhardt, Barbara E.; Jordan, Michael I.; Repo, Susanna T.; Brenner, Steven E.

    2009-07-01

    It is now easier to discover thousands of protein sequences in a new microbial genome than it is to biochemically characterize the specific activity of a single protein of unknown function. The molecular functions of protein sequences have typically been predicted using homology-based computational methods, which rely on the principle that homologous proteins share a similar function. However, some protein families include groups of proteins with different molecular functions. A phylogenetic approach for predicting molecular function (sometimes called "phylogenomics") is an effective means to predict protein molecular function. These methods incorporate functional evidence from all members of a family that have functional characterizations using the evolutionary history of the protein family to make robust predictions for the uncharacterized proteins. However, they are often difficult to apply on a genome-wide scale because of the time-consuming step of reconstructing the phylogenies of each protein to be annotated. Our automated approach for function annotation using phylogeny, the SIFTER (Statistical Inference of Function Through Evolutionary Relationships) methodology, uses a statistical graphical model to compute the probabilities of molecular functions for unannotated proteins. Our benchmark tests showed that SIFTER provides accurate functional predictions on various protein families, outperforming other available methods.

  3. Expanding Pseudorandom Functions

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus

    2002-01-01

    Given any weak pseudorandom function, we present a general and efficient technique transforming such a function to a new weak pseudorandom function with an arbitrary length output. This implies, among other things, an encryption mode for block ciphers. The mode is as efficient as known (and widely...

  4. Diplomacy and diplomatic functions

    OpenAIRE

    2011-01-01

    Through the main lines of this study we introduce specific approaches to diplomacy, diplomatic missions, diplomatic visits, and diplomatic functions. The functions of diplomacy (representation, negotiation, information, diplomatic protection, international cooperation, and the consular function) are developed and analyzed in more depth.

  5. Two Functions of Language

    Science.gov (United States)

    Feldman, Carol Fleisher

    1977-01-01

    Author advocates the view that meaning is necessarily dependent upon the communicative function of language and examines the objections, particularly those of Noam Chomsky, to this view. Argues that while Chomsky disagrees with the idea that communication is the essential function of language, he implicitly agrees that it has a function.…

  8. What Is Functionalism?

    Science.gov (United States)

    Bates, Elizabeth; MacWhinney, Brian

    A defense of functionalism in linguistics, and more specifically the competition model of linguistic performance, examines six misconceptions about the functionalist approach. Functionalism is defined as the belief that the forms of natural languages are created, governed, constrained, acquired, and used for communicative functions. Functionalism…

  9. Clinical functional MRI. Presurgical functional neuroimaging

    Energy Technology Data Exchange (ETDEWEB)

    Stippich, C. (ed.) [Heidelberg Univ. (Germany). Div. of Neuroradiology]

    2007-07-01

    Functional magnetic resonance imaging (fMRI) permits noninvasive imaging of the "human brain at work" under physiological conditions. This is the first textbook on clinical fMRI. It is devoted to preoperative fMRI in patients with brain tumors and epilepsies, which are the most well-established clinical applications. By localizing and lateralizing specific brain functions, as well as epileptogenic zones, fMRI facilitates the selection of a safe treatment and the planning and performance of function-preserving neurosurgery. State of the art fMRI procedures are presented, with detailed consideration of the physiological and methodological background, imaging and data processing, normal and pathological findings, diagnostic possibilities and limitations, and other related techniques. All chapters are written by recognized experts in their fields, and the book is designed to be of value to beginners, trained clinicians and experts alike. (orig.)

  10. The Cosmological Mass Function

    CERN Document Server

    Monaco, P

    1997-01-01

    This thesis aims to review the cosmological mass function problem, both from the theoretical and the observational point of view, and to present a new mass function theory, based on realistic approximations for the dynamics of gravitational collapse. Chapter 1 gives a general introduction on gravitational dynamics in cosmological models. Chapter 2 gives a complete review of the mass function theory. Chapters 3 and 4 present the "dynamical" mass function theory, based on truncated Lagrangian dynamics and on the excursion set approach. Chapter 5 reviews the observational state-of-the-art and the main applications of the mass function theories described before. Finally, Chapter 6 gives conclusions and future prospects.

  11. Planar Difference Functions

    CERN Document Server

    Hall, Joanne L; Donovan, Diane

    2012-01-01

    In 1980 Alltop produced a family of cubic phase sequences that nearly meet the Welch bound for maximum non-peak correlation magnitude. This family of sequences was shown by Wootters and Fields to be useful for quantum state tomography. Alltop's construction used a function that is not planar, but whose difference function is planar. In this paper we show that Alltop type functions cannot exist in fields of characteristic 3 and that for a known class of planar functions, $x^3$ is the only Alltop type function.
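    The planarity property invoked here can be checked directly over a small prime field: f is planar when, for every nonzero shift a, the difference map x -> f(x+a) - f(x) is a bijection. A minimal sketch over GF(7) (my own illustration, not code from the paper):

```python
def is_planar(f, p):
    """f: GF(p) -> GF(p) is planar if, for every nonzero a, the
    difference map x -> f(x + a) - f(x) is a bijection of GF(p)."""
    return all(
        len({(f((x + a) % p) - f(x)) % p for x in range(p)}) == p
        for a in range(1, p)
    )

# x^2 is the classical planar function over GF(p), p an odd prime:
# its difference maps x -> 2ax + a^2 are linear, hence bijective.
print(is_planar(lambda x: x * x, 7))         # True
# x^3 is not planar over GF(7): its difference maps are quadratic.
print(is_planar(lambda x: pow(x, 3, 7), 7))  # False
```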

  12. Introduction to functional equations

    CERN Document Server

    Sahoo, Prasanna K

    2011-01-01

    Introduction to Functional Equations grew out of a set of class notes from an introductory graduate level course at the University of Louisville. This introductory text communicates an elementary exposition of valued functional equations where the unknown functions take on real or complex values. In order to make the presentation as manageable as possible for students from a variety of disciplines, the book chooses not to focus on functional equations where the unknown functions take on values on algebraic structures such as groups, rings, or fields. However, each chapter includes sections hig

  13. Managing Functional Power

    DEFF Research Database (Denmark)

    Rosenstand, Claus Andreas Foss; Laursen, Per Kyed

    2013-01-01

    How does one manage functional power relations between leading functions in vision driven digital media creation, and this from idea to master during the creation cycle? Functional power is informal, and it is understood as roles, e.g. project manager, that provide opportunities to contribute...... to the product quality. The area of interest is the vision driven digital media industry in general; however, the point of departure is the game industry due to its aesthetic complexity. The article's contribution to the area is a power graph, which shows the functional power of the leading functions according...

  15. Transfer function combinations

    KAUST Repository

    Zhou, Liang

    2012-10-01

    Direct volume rendering has been an active area of research for over two decades. Transfer function design remains a difficult task since current methods, such as traditional 1D and 2D transfer functions, are not always effective for all data sets. Various 1D or 2D transfer function spaces have been proposed to improve classification exploiting different aspects, such as using the gradient magnitude for boundary location and statistical, occlusion, or size metrics. In this paper, we present a novel transfer function method which can provide more specificity for data classification by combining different transfer function spaces. In this work, a 2D transfer function can be combined with 1D transfer functions which improve the classification. Specifically, we use the traditional 2D scalar/gradient magnitude, 2D statistical, and 2D occlusion spectrum transfer functions and combine these with occlusion and/or size-based transfer functions to provide better specificity. We demonstrate the usefulness of the new method by comparing to the following previous techniques: 2D gradient magnitude, 2D occlusion spectrum, 2D statistical transfer functions and 2D size based transfer functions. © 2012 Elsevier Ltd.

  16. Multistate nested canalizing functions

    CERN Document Server

    Adeyeye, J O; Laubenbacher, R; Li, Y

    2013-01-01

    The concept of a nested canalizing Boolean function has been studied over the course of the last decade in the context of understanding the regulatory logic of molecular interaction networks, such as gene regulatory networks. Such functions appear preferentially in published models of such networks. Recently, this concept has been generalized to include multi-state functions, and a recursive formula has been derived for their number, as a function of the number of variables. This paper carries out a detailed analysis of the class of nested canalizing functions over an arbitrary finite field. Furthermore, the paper generalizes the concept and derives a closed formula for the number of such generalized functions. The paper also derives a closed formula for the number of equivalence classes under permutation of variables. This is motivated by the fact that two nested canalizing functions that differ by a permutation of the variables share many important properties with each other. The paper contributes ...

  17. ALGEBROIDAL FUNCTION AND ITS DERIVED FUNCTION IN UNIT CIRCULAR DISC

    Institute of Scientific and Technical Information of China (English)

    Huo Yingying; Sun Daochun

    2009-01-01

    In this article, the authors define the derived function of an algebroidal function in the unit disc, prove that it is an algebroidal function, and study the order of an algebroidal function and that of its derived function in the unit disc.

  18. A MULTIVARIATE FIT LUMINOSITY FUNCTION AND WORLD MODEL FOR LONG GAMMA-RAY BURSTS

    Energy Technology Data Exchange (ETDEWEB)

    Shahmoradi, Amir, E-mail: amir@physics.utexas.edu [Institute for Fusion Studies, The University of Texas at Austin, TX 78712 (United States)

    2013-04-01

    It is proposed that the luminosity function, the rest-frame spectral correlations, and distributions of cosmological long-duration (Type-II) gamma-ray bursts (LGRBs) may be very well described as a multivariate log-normal distribution. This result is based on careful selection, analysis, and modeling of LGRBs' temporal and spectral variables in the largest catalog of GRBs available to date, the 2130 BATSE GRBs, while taking into account the detection threshold and possible selection effects. Constraints on the joint rest-frame distribution of the isotropic peak luminosity (L_iso), total isotropic emission (E_iso), time-integrated spectral peak energy (E_p,z), and duration (T_90,z) of LGRBs are derived. The presented analysis provides evidence for a relatively large fraction of LGRBs that have been missed by the BATSE detector, with E_iso extending down to ~10^49 erg and observed spectral peak energies (E_p) as low as ~5 keV. LGRBs with rest-frame duration T_90,z ≲ 1 s or observer-frame duration T_90 ≲ 2 s appear to be rare events (≲ 0.1% chance of occurrence). The model predicts a fairly strong and highly significant correlation (ρ = 0.58 ± 0.04) between E_iso and E_p,z of LGRBs. Also predicted are strong correlations of L_iso and E_iso with T_90,z and a moderate correlation between L_iso and E_p,z. The strength and significance of these correlations encourage the search for underlying mechanisms, though they undermine their capabilities as probes of dark energy's equation of state at high redshifts. The presented analysis favors, but does not necessitate, a cosmic rate for BATSE LGRBs tracing metallicity evolution consistent with a cutoff Z/Z_Sun ~ 0.2-0.5, assuming no luminosity-redshift evolution.

  19. Antisymmetric Orbit Functions

    Directory of Open Access Journals (Sweden)

    Anatoliy Klimyk

    2007-02-01

    In the paper, properties of antisymmetric orbit functions are reviewed and further developed. Antisymmetric orbit functions on the Euclidean space $E_n$ are antisymmetrized exponential functions. Antisymmetrization is fulfilled by a Weyl group, corresponding to a Coxeter-Dynkin diagram. Properties of such functions are described. These functions are closely related to irreducible characters of a compact semisimple Lie group $G$ of rank $n$. Up to a sign, values of antisymmetric orbit functions are repeated on copies of the fundamental domain $F$ of the affine Weyl group (determined by the initial Weyl group) in the entire Euclidean space $E_n$. Antisymmetric orbit functions are solutions of the corresponding Laplace equation in $E_n$, vanishing on the boundary of the fundamental domain $F$. Antisymmetric orbit functions determine a so-called antisymmetrized Fourier transform which is closely related to expansions of central functions in characters of irreducible representations of the group $G$. They also determine a transform on a finite set of points of $F$ (the discrete antisymmetric orbit function transform). Symmetric and antisymmetric multivariate exponential, sine and cosine discrete transforms are given.

  20. Testing Coverage Functions

    CERN Document Server

    Chakrabarty, Deeparnab

    2012-01-01

    A coverage function f over a ground set [m] is associated with a universe U of weighted elements and m subsets A_1,..., A_m of U, and for any subset T of [m], f(T) is defined as the total weight of the elements in the union $\cup_{j\in T} A_j$. Coverage functions are an important special case of submodular functions, and arise in many applications, for instance as a class of utility functions of agents in combinatorial auctions. Set functions such as coverage functions often lack succinct representations, and in algorithmic applications, access to a value oracle is assumed. In this paper, we ask whether one can test if a given oracle is that of a coverage function or not. We demonstrate an algorithm which makes O(m|U|) queries to an oracle of a coverage function and completely reconstructs it. This gives a polytime tester for succinct coverage functions for which |U| is polynomially bounded in m. In contrast, we demonstrate a set function which is "far" from coverage, but requires 2^{\tilde{\Theta}(m)} que...
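    The definition of a coverage function is easy to instantiate directly; the following sketch (with illustrative weights and sets, not taken from the paper) evaluates f(T) as the total weight of the covered elements:

```python
def coverage(weights, sets, T):
    """f(T) = total weight of the elements in the union of A_j, j in T."""
    covered = set().union(*(sets[j] for j in T)) if T else set()
    return sum(weights[e] for e in covered)

# illustrative universe: three weighted elements, three subsets
weights = {'a': 2.0, 'b': 1.0, 'c': 3.0}
sets = [{'a', 'b'}, {'b', 'c'}, {'c'}]

print(coverage(weights, sets, {0, 1}))  # 6.0 (a, b and c all covered)
print(coverage(weights, sets, {2}))     # 3.0 (only c)
```

    Submodularity is visible here: adding set 2 to {0, 1} gains nothing, since c is already covered.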

  1. Elliptic hypergeometric functions

    CERN Document Server

    Spiridonov, V P

    2016-01-01

    This is author's Habilitation Thesis (Dr. Sci. dissertation) submitted at the beginning of September 2004. It is written in Russian and is posted due to the continuing requests for the manuscript. The content: 1. Introduction, 2. Nonlinear chains with the discrete time and their self-similar solutions, 3. General theory of theta hypergeometric series, 4. Theta hypergeometric integrals, 5. Biorthogonal functions, 6. Elliptic hypergeometric functions with |q|=1, 7. Conclusion, 8. References. It contains an outline of a general heuristic scheme for building univariate special functions through self-similar reductions of spectral transformation chains, which allowed construction of the differential-difference q-Painleve equations, as well as of the most general known set of elliptic biorthogonal functions comprising all classical orthogonal polynomials and biorthogonal rational functions. One of the key results of the thesis consists in the discovery of genuinely transcendental elliptic hypergeometric functions d...

  2. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. This paper discusses similarity functions as fundamental elements in membership...... assignments. The paper introduces Weighted Feature Distance (WFD), and Prioritized Weighted Feature Distance (PWFD), two new distance functions that take into account the diversity in feature spaces. WFD functions perform better in supervised and unsupervised methods by comparing data objects on their feature...... spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with other similarity functions on several data sets...
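    The abstract does not give the formula for WFD, so the following is only a generic feature-weighted distance shown to fix ideas; the functional form and the weights are assumptions of this sketch, not the authors' definitions of WFD or PWFD:

```python
import math

def weighted_feature_distance(u, v, w):
    """A generic weighted Euclidean distance: feature i contributes
    proportionally to its weight w[i] (illustration only; not the
    WFD/PWFD definitions of the cited paper)."""
    return math.sqrt(sum(wi * (ui - vi) ** 2
                         for ui, vi, wi in zip(u, v, w)))

u, v = [1.0, 2.0, 3.0], [2.0, 2.0, 1.0]
uniform = weighted_feature_distance(u, v, [1.0, 1.0, 1.0])
# "prioritizing" the first feature changes how strongly it counts
prioritized = weighted_feature_distance(u, v, [4.0, 1.0, 0.5])
print(uniform, prioritized)
```

    With uniform weights this reduces to the ordinary Euclidean distance; non-uniform weights let selected features dominate the comparison, which is the general idea behind prioritizing features.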

  3. Intrinsic-Density Functionals

    CERN Document Server

    Engel, J

    2006-01-01

    The Hohenberg-Kohn theorem and Kohn-Sham procedure are extended to functionals of the localized intrinsic density of a self-bound system such as a nucleus. After defining the intrinsic-density functional, we modify the usual Kohn-Sham procedure slightly to evaluate the mean-field approximation to the functional, and carefully describe the construction of the leading corrections for a system of fermions in one dimension with a spin-degeneracy equal to the number of particles N. Despite the fact that the corrections are complicated and nonlocal, we are able to construct a local Skyrme-like intrinsic-density functional that, while different from the exact functional, shares with it a minimum value equal to the exact ground-state energy at the exact ground-state intrinsic density, to next-to-leading order in 1/N. We briefly discuss implications for real Skyrme functionals.

  4. Counting with symmetric functions

    CERN Document Server

    Mendes, Anthony

    2015-01-01

    This monograph provides a self-contained introduction to symmetric functions and their use in enumerative combinatorics.  It is the first book to explore many of the methods and results that the authors present. Numerous exercises are included throughout, along with full solutions, to illustrate concepts and also highlight many interesting mathematical ideas. The text begins by introducing fundamental combinatorial objects such as permutations and integer partitions, as well as generating functions.  Symmetric functions are considered in the next chapter, with a unique emphasis on the combinatorics of the transition matrices between bases of symmetric functions.  Chapter 3 uses this introductory material to describe how to find an assortment of generating functions for permutation statistics, and then these techniques are extended to find generating functions for a variety of objects in Chapter 4.  The next two chapters present the Robinson-Schensted-Knuth algorithm and a method for proving Pólya’s enu...

  5. Symmetric Boolean functions

    OpenAIRE

    Canteaut, Anne; Videau, Marion

    2005-01-01

    We present an extensive study of symmetric Boolean functions, especially of their cryptographic properties. Our main result establishes the link between the periodicity of the simplified value vector of a symmetric Boolean function and its degree. Besides the reduction of the amount of memory required for representing a symmetric function, this property has some consequences from a cryptographic point of view. For instance, it leads to a new general bound on the order of...

  6. Functional bowel disease

    DEFF Research Database (Denmark)

    Rumessen, J J; Gudmand-Høyer, E

    1988-01-01

    Twenty-five patients with functional bowel disease were given fructose, sorbitol, fructose-sorbitol mixtures, and sucrose. The occurrence of malabsorption was evaluated by means of hydrogen breath tests and the gastrointestinal symptoms, if any, were recorded. One patient could not be evaluated...... with functional bowel disease. The findings may have direct influence on the dietary guidance given to a major group of patients with functional bowel disease and may make it possible to define separate entities in this disease complex....

  7. Spectral Functions in QFT

    CERN Document Server

    Pisani, Pablo

    2015-01-01

    We present a pedagogical exposition of some applications of functional methods in quantum field theory: we use heat-kernel and zeta-function techniques to study the Casimir effect, the pair production in strong electric fields, quantum fields at finite temperature and beta-functions for a self-interacting scalar field, QED and pure Yang-Mills theories. The more recent application to the UV/IR mixing phenomenon in noncommutative theories is also discussed in this framework.

  8. EVALUATING FUNCTIONAL REGIONS

    Directory of Open Access Journals (Sweden)

    Samo Drobne

    2012-12-01

    In the paper, we suggest an approach to evaluate the number and composition of functional regions. The suggested approach is based on two basic characteristics of functional regions: (1) more intensive intra-regional than inter-regional interactions, and (2) internal social and economic heterogeneity. These characteristics are measured by factors estimated in a spatial interaction model. The approach to evaluate functional regions was applied to Slovenia for three time periods.
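    Spatial interaction models of the kind mentioned above are commonly of gravity type, in which interaction grows with zone sizes and decays with distance; the following sketch uses an illustrative functional form and numbers, not the paper's estimated factors:

```python
def gravity_flow(mass_i, mass_j, distance, k=1.0, beta=2.0):
    """Unconstrained gravity-type spatial interaction: flows grow with
    zone sizes and decay with distance (illustrative form only)."""
    return k * mass_i * mass_j / distance ** beta

# intra-regional interactions (short distances) dominate inter-regional ones
intra = gravity_flow(50_000, 80_000, distance=10.0)
inter = gravity_flow(50_000, 80_000, distance=60.0)
print(intra > inter)  # True
```

    The distance-decay exponent beta plays the role of an estimated factor: the larger it is, the more sharply interaction is concentrated within regions.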

  9. Balance Function Disorders

    Science.gov (United States)

    1991-01-01

    Researchers at the Balance Function Laboratory and Clinic at the Minneapolis (MN) Neuroscience Institute on the Abbott Northwestern Hospital Campus are using a rotational chair (technically a "sinusoidal harmonic acceleration system") originally developed by NASA to investigate vestibular (inner ear) function in weightlessness to diagnose and treat patients with balance function disorders. Manufactured by ICS Medical Corporation, Schaumberg, IL, the chair system turns a patient and monitors his or her responses to rotational stimulation.

  10. Theory of functions

    CERN Document Server

    Knopp, Konrad

    1996-01-01

    This is a one-volume edition of Parts I and II of the classic five-volume set The Theory of Functions prepared by renowned mathematician Konrad Knopp. Concise, easy to follow, yet complete and rigorous, the work includes full demonstrations and detailed proofs.Part I stresses the general foundation of the theory of functions, providing the student with background for further books on a more advanced level.Part II places major emphasis on special functions and characteristic, important types of functions, selected from single-valued and multiple-valued classes.

  11. Control functions in MFM

    DEFF Research Database (Denmark)

    Lind, Morten

    2011-01-01

    Multilevel Flow Modeling (MFM) has been proposed as a tool for representing goals and functions of complex industrial plants and suggested as a basis for reasoning about control situations. Lind presents an introduction to MFM but does not describe how control functions are used in the modeling....... The purpose of the present paper is to serve as a companion paper to this introduction by explaining the basic principles used in MFM for representation of control functions. A theoretical foundation for modeling control functions is presented and modeling examples are given for illustration.

  12. Functional Nausea in Children.

    Science.gov (United States)

    Kovacic, Katja; Di Lorenzo, Carlo

    2016-03-01

    Chronic nausea is a highly prevalent, bothersome, and difficult-to-treat symptom among adolescents. When chronic nausea presents as the predominant symptom and is not associated with any underlying disease, it may be considered a functional gastrointestinal disorder and named "functional nausea." The clinical features of functional nausea and its association with comorbid conditions provide clues to the underlying pathophysiological mechanisms. These may include gastrointestinal motor and sensory disturbances, autonomic imbalance, altered central nervous system pathways, or a combination of these. This review summarizes the current knowledge on mechanisms and treatment strategies for chronic, functional nausea in children.

  13. The gamma function

    CERN Document Server

    Artin, Emil

    2015-01-01

    This brief monograph on the gamma function was designed by the author to fill what he perceived as a gap in the literature of mathematics, which often treated the gamma function in a manner he described as both sketchy and overly complicated. Author Emil Artin, one of the twentieth century's leading mathematicians, wrote in his Preface to this book, "I feel that this monograph will help to show that the gamma function can be thought of as one of the elementary functions, and that all of its basic properties can be established using elementary methods of the calculus." Generations of teachers

  14. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Thomsen, Søren Steffen

    2009-01-01

    Cryptographic hash functions are commonly used in many different areas of cryptography: in digital signatures and in public-key cryptography, for password protection and message authentication, in key derivation functions, in pseudo-random number generators, etc. Recently, cryptographic hash...... well-known designs, and also some design and cryptanalysis in which the author took part. The latter includes a construction method for hash functions and four designs, of which one was submitted to the SHA-3 hash function competition, initiated by the U.S. standardisation body NIST. It also includes...

  15. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2010-01-01

    value should not serve as an image for two distinct input messages and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with the digital signature schemes. Keyed hash...... important applications has also been analysed. This successful cryptanalysis of the standard hash functions has made National Institute of Standards and Technology (NIST), USA to initiate an international public competition to select the most secure and efficient hash function as the future hash function...... based MACs are reported. The goals of NIST's SHA-3 competition and its current progress are outlined....

  16. Quantum Iterated Function Systems

    CERN Document Server

    Lozinski, A; Slomczynski, W; Lozinski, Artur; Zyczkowski, Karol; Slomczynski, Wojciech

    2003-01-01

    An iterated function system (IFS) is defined by specifying a set of functions in a classical phase space, which act randomly on the initial point. In an analogous way, we define quantum iterated function systems (QIFS), where functions act randomly with prescribed probabilities in the Hilbert space. In a more general setting, a QIFS consists of completely positive maps acting in the space of density operators. We present exemplary classical IFSs, the invariant measure of which exhibits fractal structure, and study properties of the corresponding QIFSs and their invariant state.
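    A classical IFS whose invariant measure has fractal structure, of the kind referenced in the abstract, can be simulated with the chaos game; a minimal sketch (the Sierpinski triangle, my own illustration):

```python
import random

# Three contractions of the plane, (x, y) -> ((x, y) + v_k)/2, chosen
# uniformly at random; the invariant measure of this IFS is supported
# on the Sierpinski triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(steps, seed=0):
    rng = random.Random(seed)
    x, y = 0.25, 0.25   # arbitrary starting point
    points = []
    for _ in range(steps):
        vx, vy = rng.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

pts = chaos_game(10_000)
# every iterate stays inside the unit bounding box of the attractor
print(all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))  # True
```

    After a short transient, the iterates distribute themselves according to the invariant measure, which is how the fractal structure mentioned above becomes visible in a plot.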

  17. Perceptual Audio Hashing Functions

    Directory of Open Access Journals (Sweden)

    Emin Anarım

    2005-07-01

    Full Text Available Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
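
    As a rough illustration of spectral summarization (a deliberately simplified stand-in for the paper's periodicity- and singular-value-based hashes), one can derive one bit per time-frequency band by thresholding each frame's band energies against their median. The resulting hash changes little under small additive noise but differs between distinct signals:

```python
import numpy as np

def audio_hash(signal, frame=256, bands=16):
    """Toy spectral-band hash: one bit per (frame, band), set when the
    band's energy exceeds the frame's median band energy."""
    bits = []
    for i in range(len(signal) // frame):
        spec = np.abs(np.fft.rfft(signal[i * frame:(i + 1) * frame]))
        energies = np.array([b.sum() for b in np.array_split(spec, bands)])
        bits.extend(energies > np.median(energies))
    return np.array(bits)

def hamming_fraction(a, b):
    return float(np.mean(a != b))

t = np.arange(4096) / 8000.0                      # ~0.5 s at 8 kHz
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 1e-3 * np.random.default_rng(0).standard_normal(t.size)
other = np.sin(2 * np.pi * 1760 * t)              # different content

# Robust to small distortion, yet discriminative between distinct signals.
assert hamming_fraction(audio_hash(clean), audio_hash(noisy)) < 0.2
assert hamming_fraction(audio_hash(clean), audio_hash(noisy)) < \
       hamming_fraction(audio_hash(clean), audio_hash(other))
```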

  18. Functional Object Analysis

    DEFF Research Database (Denmark)

    Raket, Lars Lau

    -effect formulations, where the observed functional signal is assumed to consist of both fixed and random functional effects. This thesis takes the initial steps toward the development of likelihood-based methodology for functional objects. We first consider analysis of functional data defined on high......-dimensional Euclidean spaces under the effect of additive spatially correlated effects, and then move on to consider how to include data alignment in the statistical model as a nonlinear effect under additive correlated noise. In both cases, we will give directions on how to generalize the methodology to more complex...

  19. ON A FUNCTIONAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    Ding Yi

    2009-01-01

    In this article, the author derives a functional equation η(s) = [(π/4)^(s−1/2) √(2/π) Γ(1−s) sin(πs/2)] η(1−s) of the analytic function η(s), which is defined by η(s) = 1^(−s) − 3^(−s) − 5^(−s) + 7^(−s) + … for complex variable s with Re s > 1, and is defined by analytic continuation for other values of s. The author proves the equation by a Ramanujan identity (see [1], [3]). The method provides a new derivation of the functional equation of the Riemann zeta function by using the Poisson summation formula.

  20. Functional foods in pediatrics.

    Science.gov (United States)

    Van den Driessche, M; Veereman-Wauters, G

    2002-01-01

    The philosophy that food can be health promoting beyond its nutritional value is gaining acceptance. Known disease preventive aspects of nutrition have led to a new science, the 'functional food science'. Functional foods, first introduced in Japan, have no universally accepted definition but can be described as foods or food ingredients that may provide health benefits and prevent diseases. Currently, there is a growing interest in these products. However, not all regulatory issues have been settled yet. Five categories of foods can be classified as functional foods: dietary fibers, vitamins and minerals, bioactive substances, fatty acids and pro-, pre- and symbiotics. The latter are currently the main focus of research. Functional foods can be applied in pediatrics: during pregnancy, nutrition is 'functional' since it has prenatal influences on the intra-uterine development of the baby, after birth, 'functional' human milk supports adequate growth of infants and pro- and prebiotics can modulate the flora composition and as such confer certain health advantages. Functional foods have also been studied in pediatric diseases. The severity of necrotising enterocolitis (NEC), diarrhea, irritable bowel syndrome, intestinal allergy and lactose intolerance may be reduced by using functional foods. Functional foods have proven to be valuable contributors to the improvement of health and the prevention of diseases in pediatric populations.

  1. Comparison of Multivariate Poisson lognormal spatial and temporal crash models to identify hot spots of intersections based on crash types.

    Science.gov (United States)

    Cheng, Wen; Gill, Gurdiljot Singh; Dasu, Ravi; Xie, Meiquan; Jia, Xudong; Zhou, Jiao

    2017-02-01

    Most studies focus on general crashes or total crash counts, with considerably less research dedicated to different crash types. This study employs the systemic approach for the detection of hotspots and comprehensively cross-validates five multivariate models of crash-type-based HSID methods which incorporate spatial and temporal random effects. It is anticipated that comparison of the crash estimation results of the five models will identify the impact of the varied random effects on HSID. Data over a ten-year period (2003-2012) were selected for analysis of a total of 137 intersections in the City of Corona, California. The crash types collected in this study include: Rear-end, Head-on, Side-swipe, Broad-side, Hit object, and Others. Statistically significant correlations among crash outcomes for the heterogeneity error term were observed, which clearly demonstrates their multivariate nature. Additionally, the spatial random effects revealed the correlations among neighboring intersections across crash types. Five cross-validation criteria, comprising Residual Sum of Squares, Kappa, Mean Absolute Deviation, Method Consistency Test, and Total Rank Difference, were applied to assess the performance of the five HSID methods at crash estimation. In terms of the accumulated results, which combined all crash types, the model with spatial random effects consistently outperformed the other competing models by a significant margin. However, the inclusion of spatial random effects in temporal models fell short of attaining the expected results. The overall observation from the model fitness and validation results failed to highlight any correlation between better model fitness and superior crash estimation.
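
    The Poisson-lognormal backbone of these models can be sketched in a few lines: each site's Poisson rate carries a lognormal heterogeneity term, which produces the overdispersion a pure Poisson model cannot capture. This is a univariate sketch only; the study's models add multivariate, spatial and temporal random effects on top of it, and all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n_sites = 1.0, 0.8, 200_000

# Each site's Poisson rate carries lognormal heterogeneity.
rates = np.exp(mu + sigma * rng.standard_normal(n_sites))
counts = rng.poisson(rates)

# The marginal mean of the counts equals the lognormal mean of the rates,
# exp(mu + sigma^2 / 2).
mean_theory = np.exp(mu + sigma**2 / 2)
assert abs(counts.mean() - mean_theory) / mean_theory < 0.05

# Overdispersion: the count variance exceeds the mean, unlike a pure Poisson.
assert counts.var() > 2 * counts.mean()
```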

  2. Normal Functions as a New Way of Defining Computable Functions

    Directory of Open Access Journals (Sweden)

    Leszek Dubiel

    2004-01-01

    Full Text Available This report sets out a new method of defining computable functions. It is a formalization of traditional function descriptions, so it allows functions to be defined in a very intuitive way. The discovery of the Ackermann function proved that not every function that can be easily computed can be as easily described within Hilbert's system of recursive functions. Normal functions lack this disadvantage.
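
    The Ackermann function mentioned above is short to state yet grows too fast to be primitive recursive. A direct recursive transcription of the standard definition (safe only for small arguments):

```python
import sys
sys.setrecursionlimit(20_000)  # the recursion is deep even for small inputs

def ackermann(m, n):
    """Total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

assert ackermann(1, 2) == 4    # A(1, n) = n + 2
assert ackermann(2, 3) == 9    # A(2, n) = 2n + 3
assert ackermann(3, 3) == 61   # A(3, n) = 2^(n+3) - 3
```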

  4. Function spaces, 1

    CERN Document Server

    Pick, Luboš; John, Oldrich; Fucík, Svatopluk

    2012-01-01

    This is the first part of the second revised and extended edition of a well established monograph. It is an introduction to function spaces defined in terms of differentiability and integrability classes. It provides a catalogue of various spaces and serves as a handbook for those who use function spaces to study other topics such as partial differential equations. Volum

  5. Functional Magnetic Resonance Imaging

    Science.gov (United States)

    Voos, Avery; Pelphrey, Kevin

    2013-01-01

    Functional magnetic resonance imaging (fMRI), with its excellent spatial resolution and ability to visualize networks of neuroanatomical structures involved in complex information processing, has become the dominant technique for the study of brain function and its development. The accessibility of in-vivo pediatric brain-imaging techniques…

  6. Fundamentals of Functional Analysis

    CERN Document Server

    Kutateladze, S S; Slovák, Jan

    2001-01-01

    A concise guide to basic sections of modern functional analysis. Included are such topics as the principles of Banach and Hilbert spaces, the theory of multinormed and uniform spaces, the Riesz-Dunford holomorphic functional calculus, the Fredholm index theory, convex analysis and duality theory for locally convex spaces with applications to the Schwartz spaces of distributions and Radon measures.

  7. ON UNIVALENT BLOCH FUNCTIONS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new characterization of univalent Bloch functions is given by investigating the growth order of an essentially increasing function. Our contribution can be considered a slight improvement of the well-known result of Pommerenke and all its generalizations, and the proof presented in this paper is developed independently.

  9. Distribution Functions of Copulas

    Institute of Scientific and Technical Information of China (English)

    LI Yong-hong; He Ping

    2007-01-01

    A general method was proposed to evaluate the distribution function of 〈C1|C2〉. Some examples were presented to validate the application of the method. Then the sufficient and necessary condition for the distribution function of 〈C1|C2〉 to be uniform was proved.

  10. Pulmonary Function Tests

    OpenAIRE

    Ranu, H; Wilde, M.; Madden, B

    2011-01-01

    Pulmonary function tests are valuable investigations in the management of patients with suspected or previously diagnosed respiratory disease. They aid diagnosis, help monitor response to treatment and can guide decisions regarding further treatment and intervention. The interpretation of pulmonary function tests requires knowledge of respiratory physiology. In this review we describe investigations routinely used and discuss their clinical implications.

  11. Construction of Resilient Functions

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jie; WEN Qiao-yan

    2005-01-01

    Based on the relationship between the nonlinearity and resiliency of a multi-output function, we present a method for constructing nonintersecting linear codes from a packing design. Through these linear codes, we obtain n-variable, m-output, t-resilient functions with very high nonlinearity. Their nonlinearities are currently the best results for most cases.

  12. Thyroid function in pregnancy☆

    OpenAIRE

    Leung, Angela M.

    2012-01-01

    Iodine is required for the production of thyroid hormones. Normal thyroid function during pregnancy is important for both the mother and developing fetus. This review discusses the changes in thyroid physiology that occur during pregnancy, the significance of thyroid function tests and thyroid antibody titers assessed during pregnancy, and the potential obstetric complications associated with maternal hypothyroidism.

  13. Functional and cognitive grammars

    Institute of Scientific and Technical Information of China (English)

    Anna Siewierska

    2011-01-01

    This paper presents a comprehensive review of the functional approach and cognitive approach to the nature of language and its relation to other aspects of human cognition. The paper starts with a brief discussion of the origins and the core tenets of the two approaches in Section 1. Section 2 discusses the similarities and differences between the three full-fledged structural functional grammars subsumed in the functional approach: Halliday's Systemic Functional Grammar (SFG), Dik's Functional Grammar (FG), and Van Valin's Role and Reference Grammar (RRG). Section 3 deals with the major features of the three cognitive frameworks: Langacker's Cognitive Grammar (CG), Goldberg's Cognitive Construction Grammar (CCG), and Croft's Radical Construction Grammar (RCG). Section 4 compares the two approaches and attempts to provide a unified functional-cognitive grammar. In the last section, the author concludes the paper with remarks on the unidirectional shift from functional grammar to cognitive grammar that may indicate a reinterpretation of the traditional relationship between functional and cognitive models of grammar.

  14. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. This paper discusses similarity functions as fundamental elements in membership...

  15. FUNCTIONS(Ⅱ)

    Institute of Scientific and Technical Information of China (English)

    王雷

    2008-01-01

    EXAMPLE 1: Determining Whether a Relation Represents a Function. Determine whether the following relations represent functions. (a) For this relation, the domain represents the employees of Sara's Pre-Owned Car Mart and the range represents their base salary.

  16. On minimal round functions

    NARCIS (Netherlands)

    Khimshiashvili, G.; Siersma, D.

    2001-01-01

    We describe the structure of minimal round functions on closed surfaces and three-folds. The minimal possible number of critical loops is determined and typical non-equisingular round function germs are interpreted in the spirit of isolated line singularities. We also discuss a version of Lusternik-

  17. Marine functional food

    NARCIS (Netherlands)

    Luten, J.B.

    2009-01-01

    This book reviews the research on seafood and health, the use and quality aspects of marine lipids and seafood proteins as ingredients in functional foods and consumer acceptance of (marine) functional food. The first chapter covers novel merging areas where seafood may prevent disease and improve h

  18. Transfer-function parameters

    Science.gov (United States)

    Seidel, R. C.

    1977-01-01

    Computer program fits linear-factored form transfer function to given frequency-response data. Program is based on conjugate-gradient search procedure that minimizes error between given frequency-response data and frequency response of transfer function that is supplied by user.
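
    The fitting idea can be sketched on the simplest case: for a first-order model H(jω) = K/(1 + jωτ), the reciprocal response 1/H = 1/K + jωτ/K is linear in the unknowns, so (K, τ) can be recovered directly, without the conjugate-gradient search the program uses for general factored forms. Synthetic, noise-free data; all names and values are illustrative:

```python
import numpy as np

# Synthetic frequency-response data from a known first-order system.
K_true, tau_true = 2.0, 0.5
w = np.logspace(-1, 2, 50)              # frequency grid, rad/s
H = K_true / (1 + 1j * w * tau_true)    # "measured" response

# 1/H = a + j*w*b with a = 1/K and b = tau/K; estimate a and b.
inv_H = 1.0 / H
a = np.mean(inv_H.real)
b = np.mean(inv_H.imag / w)
K_fit, tau_fit = 1.0 / a, b / a

assert abs(K_fit - K_true) < 1e-6
assert abs(tau_fit - tau_true) < 1e-6
```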

  19. The Colored Jones Function

    Institute of Scientific and Technical Information of China (English)

    HAN You-fa; YAN Xin-ming; LV Li-li

    2012-01-01

    In this paper, we discuss the properties of the colored Jones function of knots. In particular, we calculate the colored Jones function of some knots (3_1, 4_1, 5_1, 5_2). Furthermore, one can compute the Kashaev invariants and study some properties of the Kashaev conjecture.

  20. Monadic Functional Reactive Programming

    NARCIS (Netherlands)

    Ploeg, A.J. van der; Shan, C

    2013-01-01

    Functional Reactive Programming (FRP) is a way to program reactive systems in functional style, eliminating many of the problems that arise from imperative techniques. In this paper, we present an alternative FRP formulation that is based on the notion of a reactive computation: a monadic computatio

  1. properties and luminosity functions

    Directory of Open Access Journals (Sweden)

    Hektor Monteiro

    2007-01-01

    Full Text Available In this article, we present an investigation of a sample of 1072 stars extracted from the Villanova Catalog of Spectroscopically Identified White Dwarfs (2005 on-line version), studying their distribution in the Galaxy, their physical properties and their luminosity functions. The distances and physical properties of the white dwarfs are determined through interpolation of their (B-V) or (b-y) colors in model grids. The solar position relative to the Galactic plane, the luminosity function, as well as separate functions for each white dwarf spectral type are derived and discussed. We show that the binary fraction does not vary significantly as a function of distance from the Galactic disk out to 100 pc. We propose that the formation rates of DA and non-DAs have changed over time and/or that DAs evolve into non-DA types. The luminosity functions for DAs and DBs have peaks possibly related to a starburst event.

  2. A Functional HAZOP Methodology

    DEFF Research Database (Denmark)

    Liin, Netta; Lind, Morten; Jensen, Niels

    2010-01-01

    A HAZOP methodology is presented where a functional plant model assists in a goal oriented decomposition of the plant purpose into the means of achieving the purpose. This approach leads to nodes with simple functions from which the selection of process and deviation variables follow directly....... The functional HAZOP methodology lends itself directly for implementation into a computer aided reasoning tool to perform root cause and consequence analysis. Such a tool can facilitate finding causes and/or consequences far away from the site of the deviation. A functional HAZOP assistant is proposed...... and investigated in a HAZOP study of an industrial scale Indirect Vapor Recompression Distillation pilot Plant (IVaRDiP) at DTU-Chemical and Biochemical Engineering. The study shows that the functional HAZOP methodology provides a very efficient paradigm for facilitating HAZOP studies and for enabling reasoning...

  3. Ego functions in epilepsy

    DEFF Research Database (Denmark)

    Sørensen, A S; Hansen, H; Høgenhaven, H;

    1988-01-01

    served as controls: 15 patients with a non-neurological but relapsing disorder, psoriasis, and 15 healthy volunteers. Compared with the group of healthy volunteers, a decreased adaptive level of ego functioning was found in the epilepsy groups, regardless of seizure types and EEG findings, and......, to a lesser extent, compared with the psoriasis group. Areas of ego functioning most affected were "reality testing", "cognitive functioning", "integrative functioning" and "regulation and control of drives". Patients with more than one type of seizure were the most affected, as were patients who were younger...... than 15 years when the disease began. The number of anticonvulsants administered did not influence the results. No difference on adaptive level of ego functioning was found between the group with primary generalized epilepsy and the group with temporal lobe epilepsy. Similarly, the temporal lobe...

  4. Submodular functions and optimization

    CERN Document Server

    Fujishige, Satoru

    2005-01-01

    It has been widely recognized that submodular functions play essential roles in efficiently solvable combinatorial optimization problems. Since the publication of the 1st edition of this book fifteen years ago, submodular functions have shown further increasing importance in optimization, combinatorics, discrete mathematics, algorithmic computer science, and algorithmic economics, and remarkable developments in the theory and algorithms of submodular functions have been made. The 2nd edition of the book supplements the 1st edition with many remarks and with two new chapters: "Submodular Function Minimization" and "Discrete Convex Analysis." The present 2nd edition is still a unique book on submodular functions, which is essential to students and researchers interested in combinatorial optimization, discrete mathematics, and discrete algorithms in the fields of mathematics, operations research, computer science, and economics. Key features: - Self-contained exposition of the theory of submodular ...

  5. Functional data analysis

    CERN Document Server

    Ramsay, J O

    1997-01-01

    Scientists today collect samples of curves and other functional observations. This monograph presents many ideas and techniques for such data. Included are expressions in the functional domain of such classics as linear regression, principal components analysis, linear modelling, and canonical correlation analysis, as well as specifically functional techniques such as curve registration and principal differential analysis. Data arising in real applications are used throughout for both motivation and illustration, showing how functional approaches allow us to see new things, especially by exploiting the smoothness of the processes generating the data. The data sets exemplify the wide scope of functional data analysis; they are drawn from growth analysis, meteorology, biomechanics, equine science, economics, and medicine. The book presents novel statistical technology while keeping the mathematical level widely accessible. It is designed to appeal to students, to applied data analysts, and to experienced researc...
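
    One of the classics "expressed in the functional domain" is principal components analysis: for curves sampled on a common grid, functional PCA reduces to an SVD of the centered data matrix. A minimal sketch with simulated curves built from two known modes (all data synthetic and illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

# 50 smooth curves: a common mean function plus two random modes.
scores1 = rng.standard_normal(50)
scores2 = 0.3 * rng.standard_normal(50)
curves = (np.sin(2 * np.pi * t)
          + np.outer(scores1, np.cos(2 * np.pi * t))
          + np.outer(scores2, np.sin(4 * np.pi * t)))

# Functional PCA via SVD of the centered sample (rows = curves).
centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Two modes generated the data, so two components explain ~all variance.
assert explained[:2].sum() > 0.99
```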

  6. Function Photonic Crystals

    CERN Document Server

    Wu, Xiang-Yao; Yang, Jing-Hai; Liu, Xiao-Jing; Ba, Nuo; Wu, Yi-Heng; Wang, Qing-Cai; Li, Jing-Wu

    2010-01-01

    In the paper, we present a new kind of function photonic crystal, whose refractive index is a function of the space position, unlike conventional PCs, whose structure is grown from two materials, A and B, with different dielectric constants $\\epsilon_{A}$ and $\\epsilon_{B}$. By the Fermat principle, we give the equations of motion of light in one-dimensional, two-dimensional and three-dimensional function photonic crystals. For one-dimensional function photonic crystals, we study the dispersion relation, band-gap structure and transmissivity, and compare them with those of conventional photonic crystals. By choosing various refractive index distribution functions $n(z)$, we can obtain wider or narrower band-gap structures than in conventional photonic crystals.

  7. Implementing function spreadsheets

    DEFF Research Database (Denmark)

    Sestoft, Peter

    2008-01-01

    A large amount of end-user development is done with spreadsheets. The spreadsheet metaphor is attractive because it is visual and accommodates interactive experimentation, but as observed by Peyton Jones, Blackwell and Burnett, the spreadsheet metaphor does not admit even the most basic abstraction......: that of turning an expression into a named function. Hence they proposed a way to define a function in terms of a worksheet with designated input and output cells; we shall call it a function sheet. The goal of our work is to develop implementations of function sheets and study their application to realistic...... examples. Therefore, we are also developing a simple yet comprehensive spreadsheet core implementation for experimentation with this technology. Here we report briefly on our experiments with function sheets as well as other uses of our spreadsheet core implementation....

  9. On barely continuous functions

    Directory of Open Access Journals (Sweden)

    Richard Stephens

    1988-01-01

    Full Text Available The term barely continuous is a topological generalization of Baire-1 according to F. Gerlits of the Mathematical Institute of the Hungarian Academy of Sciences, and thus worthy of further study. This paper compares barely continuous functions and continuous functions on an elementary level. Knowing how the continuity of the identity function between topologies on a given set yields the lattice structure for those topologies, the barely continuity of the identity function between topologies on a given set is investigated and used to add to the structure of that lattice. Included are certain sublattices generated by the barely continuity of the identity function between those topologies. Much attention is given to topologies on finite sets.

  10. Time Functions as Utilities

    Science.gov (United States)

    Minguzzi, E.

    2010-09-01

    Every time function on spacetime gives a (continuous) total preordering of the spacetime events which respects the notion of causal precedence. The problem of the existence of a (semi-)time function on spacetime and the problem of recovering the causal structure starting from the set of time functions are studied. It is pointed out that these problems have an analog in the field of microeconomics known as utility theory. In a chronological spacetime the semi-time functions correspond to the utilities for the chronological relation, while in a K-causal (stably causal) spacetime the time functions correspond to the utilities for the K + relation (Seifert’s relation). By exploiting this analogy, we are able to import some mathematical results, most notably Peleg’s and Levin’s theorems, to the spacetime framework. As a consequence, we prove that a K-causal (i.e. stably causal) spacetime admits a time function and that the time or temporal functions can be used to recover the K + (or Seifert) relation which indeed turns out to be the intersection of the time or temporal orderings. This result tells us in which circumstances it is possible to recover the chronological or causal relation starting from the set of time or temporal functions allowed by the spacetime. Moreover, it is proved that a chronological spacetime in which the closure of the causal relation is transitive (for instance a reflective spacetime) admits a semi-time function. Along the way a new proof avoiding smoothing techniques is given that the existence of a time function implies stable causality, and a new short proof of the equivalence between K-causality and stable causality is given which takes advantage of Levin’s theorem and smoothing techniques.

  11. Time functions as utilities

    CERN Document Server

    Minguzzi, E

    2009-01-01

    Every time function on spacetime gives a (continuous) total preordering of the spacetime events which respects the notion of causal precedence. The problem of the existence of a (semi-)time function on spacetime and the problem of recovering the causal structure starting from the set of time functions are studied. It is pointed out that these problems have an analog in the field of microeconomics known as utility theory. In a chronological spacetime the semi-time functions correspond to the utilities for the chronological relation, while in a K-causal (stably causal) spacetime the time functions correspond to the utilities for the K^+ relation (Seifert's relation). By exploiting this analogy, we are able to import some mathematical results, most notably Peleg's and Levin's theorems, to the spacetime framework. As a consequence, we prove that a K-causal (i.e. stably causal) spacetime admits a time function and that the time or temporal functions can be used to recover the K^+ (or Seifert) relation which indeed...

  12. Learning Submodular Functions

    CERN Document Server

    Balcan, Maria-Florina

    2010-01-01

    There has recently been significant interest in the machine learning community on understanding and using submodular functions. Despite this recent interest, little is known about submodular functions from a learning theory perspective. Motivated by applications such as pricing goods in economics, this paper considers PAC-style learning of submodular functions in a distributional setting. A problem instance consists of a distribution on {0,1}^n and a real-valued function on {0,1}^n that is non-negative, monotone and submodular. We are given poly(n) samples from this distribution, along with the values of the function at those sample points. The task is to approximate the value of the function to within a multiplicative factor at subsequent sample points drawn from the same distribution, with sufficiently high probability. We prove several results for this problem. (1) If the function is Lipschitz and the distribution is a product distribution, such as the uniform distribution, then a good approximation is pos...
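
    A concrete member of the function class being learned is a coverage function, which is non-negative, monotone and submodular. The diminishing-returns inequality f(A ∪ {i}) − f(A) ≥ f(B ∪ {i}) − f(B) for A ⊆ B can be verified exhaustively on a small ground set (an illustrative check on a hypothetical instance, not the paper's learning algorithm):

```python
import itertools

# Four "sensors", each covering some elements; the coverage function
# f(S) = |union of covered elements| is non-negative, monotone and submodular.
COVER = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}

def f(S):
    return len(set().union(*(COVER[i] for i in S)))

def marginal(S, i):
    return f(S | {i}) - f(S)

ground = set(COVER)
for r in range(len(ground) + 1):
    for A in map(set, itertools.combinations(ground, r)):
        for j in ground - A:
            B = A | {j}
            for i in ground - B:
                # Diminishing returns: adding i to the smaller set A
                # helps at least as much as adding it to B ⊇ A.
                assert marginal(A, i) >= marginal(B, i)
```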

  13. Unification of Filled Function and Tunnelling Function in Global Optimization

    Institute of Scientific and Technical Information of China (English)

    Wei Wang; Yong-jian Yang; Lian-sheng Zhang

    2007-01-01

    In this paper, two auxiliary functions for global optimization are proposed. These two auxiliary functions possess all the characteristics of tunnelling functions and filled functions under certain general assumptions. Thus, they can be considered a unification of the filled function and the tunnelling function. Moreover, the process of tunnelling or filling for global optimization can be unified as the minimization of such auxiliary functions. Results of numerical experiments show that the two auxiliary functions are effective.
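
    The tunnelling idea can be sketched in one dimension: after a local search gets trapped, an auxiliary function that is negative exactly where the objective improves on the incumbent points the restart into a better basin. A toy sketch with an illustrative objective and a plain gradient descent, not the authors' unified auxiliary functions:

```python
def f(x):             # two-basin objective; global minimum near x = -1.47
    return x**4 - 4 * x**2 + x

def df(x):
    return 4 * x**3 - 8 * x + 1

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x1 = descend(2.0)                    # first run is trapped in the right basin
assert 1.0 < x1 < 1.7

# Tunnelling-style auxiliary function: negative exactly where f improves on
# f(x1); the squared-distance denominator removes the pole at x1.
def T(x, x_star):
    return (f(x) - f(x_star)) / ((x - x_star)**2 + 1e-12)

grid = [i / 100.0 for i in range(-300, 301)]
x_new = min((x for x in grid if abs(x - x1) > 0.1), key=lambda x: T(x, x1))
x2 = descend(x_new)                  # restart from the better basin

assert f(x2) < f(x1)                 # escaped to the global minimum
assert x2 < 0
```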

  14. The H-Function

    CERN Document Server

    Mathai, AM

    2009-01-01

    The two main topics emphasized in this book, special functions and fractional calculus, are currently under fast development in theory and application to many problems in statistics, physics, and engineering, particularly in condensed matter physics, plasma physics, and astrophysics. The book begins by setting forth definitions, contours, existence conditions, and particular cases of the H-function. The authors then deal with Laplace, Fourier, Hankel, and other transforms. As these relations are explored, fractional calculus and its relations to H-functions emerge with important results on fra

  15. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice...... probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications...
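
    The defining property, choice probabilities obtained from the gradient of the CPGF, is easy to check for the multinomial logit, whose CPGF is the log-sum-exp function. A minimal sketch of this special case (the paper's result covers general RUMs), using a numerical gradient against the closed-form probabilities:

```python
import numpy as np

def cpgf_logit(v):
    """CPGF of the multinomial logit: G(v) = log(sum_i exp(v_i));
    its gradient is the vector of choice probabilities."""
    return np.log(np.sum(np.exp(v)))

def numerical_gradient(fn, v, eps=1e-6):
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (fn(v + e) - fn(v - e)) / (2 * eps)
    return g

v = np.array([1.0, 0.5, -0.2])                 # illustrative utilities
probs = np.exp(v) / np.sum(np.exp(v))          # closed-form MNL probabilities
grad = numerical_gradient(cpgf_logit, v)

assert np.allclose(grad, probs, atol=1e-5)     # gradient of CPGF = probabilities
assert abs(probs.sum() - 1.0) < 1e-12
```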

  16. Mean-periodic functions

    Directory of Open Access Journals (Sweden)

    Carlos A. Berenstein

    1980-01-01

    Full Text Available We show that any mean-periodic function f can be represented in terms of exponential-polynomial solutions of the same convolution equation f satisfies, i.e., u∗f=0(μ∈E′(ℝn. This extends to n-variables the work of L. Schwartz on mean-periodicity and also extends L. Ehrenpreis' work on partial differential equations with constant coefficients to arbitrary convolutors. We also answer a number of open questions about mean-periodic functions of one variable. The basic ingredient is our work on interpolation by entire functions in one and several complex variables.

  17. Integrals of Bessel functions

    CERN Document Server

    Luke, Yudell L

    2014-01-01

    Integrals of Bessel Functions concerns definite and indefinite integrals, the evaluation of which is necessary to numerous applied problems. A massive compendium of useful information, this volume represents a resource for applied mathematicians in many areas of academia and industry as well as an excellent text for advanced undergraduates and graduate students of mathematics. Starting with an extensive introductory chapter on basic formulas, the treatment advances to indefinite integrals, examining them in terms of Lommel and Bessel functions. Subsequent chapters explore Airy functions, incomp

  18. Quantum iterated function systems.

    Science.gov (United States)

    Łoziński, Artur; Zyczkowski, Karol; Słomczyński, Wojciech

    2003-10-01

    An iterated function system (IFS) is defined by specifying a set of functions in a classical phase space, which act randomly on an initial point. In an analogous way, we define a quantum IFS (QIFS), where functions act randomly with prescribed probabilities in the Hilbert space. In a more general setting, a QIFS consists of completely positive maps acting in the space of density operators. This formalism is designed to describe certain problems of nonunitary quantum dynamics. We present exemplary classical IFSs, the invariant measure of which exhibits fractal structure, and study properties of the corresponding QIFSs and their invariant states.
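
    On the classical side, the invariant measure of a simple affine IFS can be sampled with the "chaos game". The Sierpinski-triangle IFS below is a standard illustrative example, not one of the systems studied in the paper:

```python
import random

# Three contractions p -> (p + v)/2 toward the vertices v of a triangle,
# each chosen with probability 1/3; the invariant measure of this IFS is
# supported on the Sierpinski triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n, seed=0):
    """Sample the invariant measure by iterating a random map sequence."""
    rng = random.Random(seed)
    x, y = 0.5, 0.25          # arbitrary start; iterates converge to the attractor
    pts = []
    for _ in range(n):
        vx, vy = rng.choice(VERTICES)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        pts.append((x, y))
    return pts

pts = chaos_game(20000)[100:]  # drop the transient toward the attractor
# Fractal structure shows up as a deficit of occupied grid cells relative
# to a "solid" two-dimensional region at the same resolution.
cells = {(int(x * 64), int(y * 64)) for x, y in pts}
print(f"occupied 1/64-cells: {len(cells)} of 4096")
```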

  19. Correlation Functions and Spin

    CERN Document Server

    Tyc, T

    2000-01-01

    The k-electron correlation function of a free chaotic electron beam is derived with the spin degree of freedom taken into account. It is shown that it can be expressed with the help of correlation functions for a polarized electron beam of all orders up to k and the degree of spin polarization. The form of the correlation function suggests that if the electron beam is not highly polarized, observing multi-particle correlations should be difficult. The result can be applied also to chaotic photon beams, the degree of spin polarization being replaced by the degree of polarization.

  20. Linking structural and functional connectivity in a simple runoff-runon model over soils with heterogeneous infiltrability

    Science.gov (United States)

    Harel, M.; Mouche, E.

    2012-12-01

    Runoff production on a hillslope during a rainfall event may be simplified as follows. Given a soil of constant infiltrability I, which is the maximum amount of water that the soil can infiltrate, and a constant rainfall intensity R, runoff is observed wherever R is greater than I. The infiltration rate equals the infiltrability where runoff is produced, and R otherwise. When ponding time, topography, and overall spatial and temporal variations of physical parameters, such as R and I, are neglected, the runoff equation remains simple. In this study, we consider soils of spatially variable infiltrability. As runoff can re-infiltrate on down-slope areas of higher infiltrability (the runon process), the resulting process is highly non-linear. The stationary runoff equation is Qn+1 = max(Qn + (R − In)·Δx, 0), where Qn is the runoff arriving on pixel n of size Δx [L²/T], and R and In are the rainfall intensity and infiltrability on that same pixel [L/T]. The non-linearity is due to the dependence of infiltration on R and Qn, that is, on runon. This re-infiltration process generates patterns of runoff along the slope, patterns that organise and connect differently to each other depending on the rainfall intensity and the nature of the soil heterogeneity. In order to characterize the runoff patterns and their connectivity, we use the connectivity function defined by Allard (1993) in geostatistics. Our aim is to assess, in a stochastic framework, the runoff organization on 1D and 2D slopes with random infiltrabilities (log-normal, exponential and bimodal distributions) by means of numerical simulations. Firstly, we show how runoff is produced and organized in patterns along a 2D slope according to the infiltrability distribution. We specifically illustrate and discuss the link between the statistical nature of the infiltrability and that of the flow-rate, with a special focus on the relations between the connectivities of both fields: the structural connectivity (infiltrability patterns
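
    The stationary runoff equation is directly simulable. A minimal 1-D sketch with log-normally distributed infiltrability (parameter values are illustrative, not those of the study):

```python
import numpy as np

def runoff_1d(R, I, dx=1.0):
    """Cascade Q[n+1] = max(Q[n] + (R - I[n]) * dx, 0) down a 1-D slope:
    pixel n adds its rainfall excess (R - I[n]) to the run-on Q[n]
    arriving from upslope, or absorbs run-on up to its spare capacity."""
    Q = np.zeros(len(I) + 1)
    for n in range(len(I)):
        Q[n + 1] = max(Q[n] + (R - I[n]) * dx, 0.0)
    return Q

rng = np.random.default_rng(0)
I = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # spatially variable infiltrability
Q = runoff_1d(R=0.8, I=I)

# Fraction of pixels passing runoff downslope, and flow at the outlet
print(f"wet fraction: {np.mean(Q[1:] > 0):.2f}")
print(f"outlet flow:  {Q[-1]:.2f}")
```

Re-running with larger R shows the patterns connecting: the wet fraction grows non-linearly, because run-on feeds pixels whose own rainfall excess is negative.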

  1. Platelet function in dogs

    DEFF Research Database (Denmark)

    Nielsen, Line A.; Zois, Nora Elisabeth; Pedersen, Henrik D.

    2007-01-01

    Background: Clinical studies investigating platelet function in dogs have had conflicting results that may be caused by normal physiologic variation in platelet response to agonists. Objectives: The objective of this study was to investigate platelet function in clinically healthy dogs of 4 different breeds by whole-blood aggregometry and with a point-of-care platelet function analyzer (PFA-100), and to evaluate the effect of acetylsalicylic acid (ASA) administration on the results from both methods. Methods: Forty-five clinically healthy dogs (12 Cavalier King Charles Spaniels [CKCS], 12 Cairn Terriers, 10 Boxers, and 11 Labrador Retrievers) were included in the study. Platelet function was assessed by whole-blood aggregation with ADP (1, 5, 10, and 20 µM) as agonist and by PFA-100 using collagen and epinephrine (Col + Epi) and collagen and ADP (Col + ADP) as agonists. Plasma thromboxane B2 concentration...

  2. Bioprinting: Functional droplet networks

    Science.gov (United States)

    Durmus, Naside Gozde; Tasoglu, Savas; Demirci, Utkan

    2013-06-01

    Tissue-mimicking printed networks of droplets separated by lipid bilayers that can be functionalized with membrane proteins are able to spontaneously fold and transmit electrical currents along predefined paths.

  3. A Totient Function Inequality

    Directory of Open Access Journals (Sweden)

    N. Carella

    2013-09-01

    Full Text Available A new unconditional inequality of the totient function is contributed to the literature. This result is associated with various unsolved problems about the distribution of prime numbers.  

  4. [Vascular endothelial Barrier Function].

    Science.gov (United States)

    Ivanov, A N; Puchinyan, D M; Norkin, I A

    2015-01-01

    Endothelium is an important regulator of the selective permeability of the vascular wall for different molecules and cells. This review summarizes current data on endothelial barrier function. Endothelial glycocalyx structure, its function, and its role in molecular transport and leukocyte migration across the endothelial barrier are discussed. The mechanisms of transcellular transport of macromolecules and cell migration through endothelial cells are reviewed. A special section of this article addresses the structure and function of tight and adherens endothelial junctions, as well as their importance for the regulation of paracellular transport across the endothelial barrier. Particular attention is paid to the signaling mechanisms of endothelial barrier function regulation and the factors that influence vascular permeability.

  5. On Network Functional Compression

    CERN Document Server

    Feizi, Soheil

    2010-01-01

    In this paper, we consider different aspects of the network functional compression problem where computation of a function (or, some functions) of sources located at certain nodes in a network is desired at receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions, particularly in terms of the network topology, the functions and the characteristics of the sources. In this paper, we present results that significantly relax these assumptions. Firstly, we consider this problem for an arbitrary tree network and asymptotically lossless computation. We show that, for depth one trees with correlated sources, or for general trees with independent sources, a modularized coding scheme based on graph colorings and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds. For a general tree network with independent sources, optimal computation to be performed at intermediate nodes is derived. We introduce a necessary and sufficient condition...

  6. ON BANDLIMITED SCALING FUNCTION

    Institute of Scientific and Technical Information of China (English)

    Wei Chen; Qiao Yang; Wei-jun Jiang; Si-long Peng

    2002-01-01

    This paper discusses band-limited scaling functions, especially the single-interval band and three-interval band cases; their relationship to the oversampling property and weak translation invariance is also studied. At the end, we propose an open problem.

  7. Characterisation of Functional Surfaces

    DEFF Research Database (Denmark)

    Lonardo, P.M.; De Chiffre, Leonardo; Bruzzone, A.A.

    2004-01-01

    Characterisation of surfaces is of fundamental importance to control the manufacturing process and the functional performance of the part. Many applications concern contact and tribology problems, which include friction, wear and lubrication. This paper presents the techniques and instruments for...

  8. Holographic Three point Functions

    DEFF Research Database (Denmark)

    Bissi, Agnese

    In this thesis we address the problem of computing three-point correlation functions within the AdS/CFT correspondence. In the context of the AdS5/CFT4 correspondence we present three computations. First, we compare the results of tree-level three-point functions of two giant gravitons and a point-like graviton with their dual counterpart, namely two Schur polynomials and a single-trace chiral primary. Secondly, we compute the one-loop correction to planar, non-extremal three-point functions of two heavy and one light operators, both from the gauge and string side, in the Frolov-Tseytlin regime. Finally, we generalize the scalar product of two states belonging to the SO(6) sector of N = 4 SYM, with implications for the construction of three-point functions of three non-BPS operators from the gauge theory side. On the other hand, in the AdS4/CFT3 correspondence we compare the computations...

  9. Center for Functional Nanomaterials

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Functional Nanomaterials (CFN) explores the unique properties of materials and processes at the nanoscale. The CFN is a user-oriented research center...

  10. Reasoning about Function Objects

    Science.gov (United States)

    Nordio, Martin; Calcagno, Cristiano; Meyer, Bertrand; Müller, Peter; Tschannen, Julian

    Modern object-oriented languages support higher-order implementations through function objects such as delegates in C#, agents in Eiffel, or closures in Scala. Function objects bring a new level of abstraction to the object-oriented programming model, and require a comparable extension to specification and verification techniques. We introduce a verification methodology that extends function objects with auxiliary side-effect free (pure) methods to model logical artifacts: preconditions, postconditions and modifies clauses. These pure methods can be used to specify client code abstractly, that is, independently from specific instantiations of the function objects. To demonstrate the feasibility of our approach, we have implemented an automatic prover, which verifies several non-trivial examples.
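
    The specification discipline described (pure precondition and postcondition methods attached to a function object) can be sketched in any language with first-class functions. A hypothetical Python analogue, with class and method names invented for illustration, not the authors' verification methodology:

```python
import math

class FunctionObject:
    """A function object carrying side-effect-free specification methods,
    a precondition and a postcondition, so client code can be specified
    abstractly, independently of the concrete function wrapped."""

    def __init__(self, body, pre, post):
        self.body, self.pre, self.post = body, pre, post

    def __call__(self, *args):
        assert self.pre(*args), "precondition violated"
        result = self.body(*args)
        assert self.post(result, *args), "postcondition violated"
        return result

# A sqrt agent specified abstractly: a client relying only on pre/post
# can be reasoned about without knowing the wrapped body.
sqrt_agent = FunctionObject(
    body=math.sqrt,
    pre=lambda x: x >= 0,
    post=lambda r, x: abs(r * r - x) < 1e-9 * max(1.0, x),
)
print(sqrt_agent(2.0))
```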

  11. Contributing to Functionality

    DEFF Research Database (Denmark)

    Törpel, Bettina

    2006-01-01

    The objective of this paper is the design of computer supported joint action spaces. It is argued against a view of functionality as residing in computer applications; in such a view, the creation of functionality is equivalent to the creation of computer applications. Functionality, in the view advocated in this paper, emerges in the specific dynamic interplay of actors, objectives, structures, practices and means. In this view, functionality is the result of creating, harnessing and inhabiting computer supported joint action spaces. The successful creation and further development of a computer supported joint action space comprises a whole range of appropriate design contributions. The approach is illustrated by the example of the creation of the computer supported joint action space "exchange network of voluntary union educators". As part of the effort a group of participants created...

  12. On Commuting Functions

    Science.gov (United States)

    The methods used seem to be new, and the author feels that further work in this direction may eventually yield a proof of the conjecture in the case when one of the functions is of bounded variation. (Author)

  13. Functionally Graded Media

    OpenAIRE

    Campos, Cédric M.; Epstein, Marcelo; De León, Manuel

    2007-01-01

    The notions of uniformity and homogeneity of elastic materials are reviewed in terms of Lie groupoids and frame bundles. This framework is also extended to consider the case of functionally graded media, which allows us to obtain some homogeneity conditions.

  14. Fundamentals of functional analysis

    CERN Document Server

    Farenick, Douglas

    2016-01-01

    This book provides a unique path for graduate or advanced undergraduate students to begin studying the rich subject of functional analysis with fewer prerequisites than is normally required. The text begins with a self-contained and highly efficient introduction to topology and measure theory, which focuses on the essential notions required for the study of functional analysis, and which are often buried within full-length overviews of the subjects. This is particularly useful for those in applied mathematics, engineering, or physics who need to have a firm grasp of functional analysis, but not necessarily some of the more abstruse aspects of topology and measure theory normally encountered. The reader is assumed to only have knowledge of basic real analysis, complex analysis, and algebra. The latter part of the text provides an outstanding treatment of Banach space theory and operator theory, covering topics not usually found together in other books on functional analysis. Written in a clear, concise manner,...

  15. Bosonic Partition Functions

    CERN Document Server

    Kellerstein, M; Verbaarschot, J J M

    2016-01-01

    The behavior of quenched Dirac spectra of two-dimensional lattice QCD is consistent with spontaneous chiral symmetry breaking, which is forbidden according to the Coleman-Mermin-Wagner theorem. One possible resolution of this paradox is that, because of the bosonic determinant in the partially quenched partition function, the conditions of this theorem are violated, allowing for spontaneous symmetry breaking in two dimensions or less. This goes back to work by Niedermaier and Seiler on nonamenable symmetries of the hyperbolic spin chain and earlier work by two of the authors on bosonic partition functions at nonzero chemical potential. In this talk we discuss chiral symmetry breaking for the bosonic partition function of QCD at nonzero isospin chemical potential and a bosonic random matrix theory at imaginary chemical potential, and compare the results with the fermionic counterpart. In both cases the chiral symmetry group of the bosonic partition function is noncompact.

  16. Discrete Wigner function dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Klimov, A B; Munoz, C [Departamento de Fisica, Universidad de Guadalajara, Revolucion 1500, 44410, Guadalajara, Jalisco (Mexico)

    2005-12-01

    We study the evolution of the discrete Wigner function for prime and power-of-prime dimensions using the discrete version of the star-product operation. Exact and semiclassical dynamics in the limit of large dimensions are considered.

  17. Pair Correlation Function Integrals

    DEFF Research Database (Denmark)

    Wedberg, Nils Hejle Rasmus Ingemar; O'Connell, John P.; Peters, Günther H.J.;

    2011-01-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long... [..., and J. Abildskov, Mol. Simul. 36, 1243 (2010); Fluid Phase Equilib. 302, 32 (2011)], but we describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble, with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial...
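
    The total correlation function integrals in question are Kirkwood-Buff integrals, G = 4π ∫ (g(r) − 1) r² dr. A minimal numerical sketch on a toy radial distribution function (a vanishing-density hard-sphere g(r), not the simulated fluids of the paper):

```python
import numpy as np

def kb_integral(r, g):
    """Total correlation function (Kirkwood-Buff) integral
    G = 4*pi * int_0^R (g(r) - 1) * r**2 dr, by the trapezoid rule."""
    f = (g - 1.0) * r**2
    return 4.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

# Toy RDF: hard spheres of diameter sigma at vanishing density,
# g(r) = 0 inside the core and 1 outside, for which the exact
# integral is -(4/3)*pi*sigma**3.
sigma = 1.0
r = np.linspace(0.0, 10.0, 200001)
g = np.where(r < sigma, 0.0, 1.0)
print(kb_integral(r, g))   # close to -4*pi/3, i.e. about -4.19
```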

  18. The Protostellar Luminosity Function

    CERN Document Server

    Offner, Stella

    2011-01-01

    The protostellar luminosity function (PLF) is the present-day luminosity function of the protostars in a region of star formation. It is determined using the protostellar mass function (PMF) in combination with a stellar evolutionary model that provides the luminosity as a function of instantaneous and final stellar mass. As in McKee & Offner (2010), we consider three main accretion models: the Isothermal Sphere model, the Turbulent Core model, and an approximation of the Competitive Accretion model. We also consider the effect of an accretion rate that tapers off linearly in time and an accelerating star formation rate. For each model, we characterize the luminosity distribution using the mean, median, maximum, ratio of the median to the mean, standard deviation of the logarithm of the luminosity, and the fraction of very low luminosity objects. We compare the models with bolometric luminosities observed in local star forming regions and find that models with an approximately constant accretion time, suc...
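
    The summary statistics listed can be computed for any sample of bolometric luminosities. A generic sketch on synthetic lognormal luminosities (the sample, and the 0.1 L_sun cut for very low luminosity objects, are assumptions for illustration, not the model PLFs of the paper):

```python
import numpy as np

def plf_summary(L, vello_cut=0.1):
    """Statistics used to characterize a luminosity distribution: mean,
    median, max, median-to-mean ratio, std of log10 L, and the fraction
    of very low luminosity objects (here L < vello_cut, in L_sun)."""
    L = np.asarray(L, dtype=float)
    return {
        "mean": L.mean(),
        "median": np.median(L),
        "max": L.max(),
        "median_to_mean": np.median(L) / L.mean(),
        "sigma_log10L": np.log10(L).std(),
        "f_vello": np.mean(L < vello_cut),
    }

rng = np.random.default_rng(1)
L = rng.lognormal(mean=0.0, sigma=1.5, size=5000)  # synthetic sample [L_sun]
for k, v in plf_summary(L).items():
    print(f"{k:>15s}: {v:.3f}")
```

For a right-skewed (e.g. lognormal) sample the median-to-mean ratio falls below 1, which is one of the diagnostics the paper uses to compare accretion models.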

  19. The Grindahl Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Rechberger, Christian; Thomsen, Søren Steffen

    2007-01-01

    In this paper we propose the Grindahl hash functions, which are based on components of the Rijndael algorithm. To make collision search sufficiently difficult, this design has the important feature that no low-weight characteristics form collisions, and at the same time it limits access to the state. We propose two concrete hash functions, Grindahl-256 and Grindahl-512, with claimed security levels with respect to collision, preimage and second preimage attacks of 2^128 and 2^256, respectively. Both proposals have lower memory requirements than other hash functions at comparable speeds...

  20. Catalytic Functions of Standards

    NARCIS (Netherlands)

    K. Blind (Knut)

    2009-01-01

    textabstractThe three different areas and the examples have illustrated several catalytic functions of standards for innovation. First, the standardisation process reduces the time to market of inventions, research results and innovative technologies. Second, standards themselves promote the diffusi

  1. Lectures on Functional Analysis

    CERN Document Server

    Kurepa, Svetozar; Kraljević, Hrvoje

    1987-01-01

    This volume consists of a long monographic paper by J. Hoffmann-Jorgensen and a number of shorter research papers and survey articles covering different aspects of functional analysis and its application to probability theory and differential equations.

  2. Phonon Green's function.

    OpenAIRE

    1991-01-01

    The concepts of source and quantum action principle are used to produce the phonon Green's function appropriate for an initial phonon vacuum state. An application to the Mossbauer effect is presented.

  3. Transfer-Function Simulator

    Science.gov (United States)

    Kavaya, M. J.

    1985-01-01

    Transfer function simulator constructed from analog or both analog and digital components substitute for device that has faults that confound analysis of feedback control loop. Simulator is substitute for laser and spectrophone.

  4. Functional Use Database (FUse)

    Data.gov (United States)

    U.S. Environmental Protection Agency — There are five different files for this dataset: 1. A dataset listing the reported functional uses of chemicals (FUse) 2. All 729 ToxPrint descriptors obtained from...

  6. The triad value function

    DEFF Research Database (Denmark)

    Vedel, Mette

    2016-01-01

    Purpose - The purpose of the paper is to explicate how connectedness of relationships results in varying value potentials of triads. Design/methodology/approach - First, connectedness is re-described as an actor-perceived and actor-interpreted phenomenon. The re-description is used to theorize the triad value function. Next, the applicability and validity of the concept are examined in a case study of four closed vertical supply chain triads. Findings - The case study demonstrates that the triad value function facilitates the analysis and understanding of an apparent paradox: that distributors are not dis-intermediated in spite of their limited contribution to activities in the triads. The results indicate practical adequacy of the triad value function. Research limitations/implications - The triad value function is difficult to apply in the study of expanded networks, as the number of connections...

  9. Normal Functioning Family

    Science.gov (United States)


  10. Functional Task Test: Data Review

    Science.gov (United States)

    Cromwell, Ronita

    2014-01-01

    After space flight there are changes in multiple physiological systems, including cardiovascular function, sensorimotor function, and muscle function. How do changes in these physiological systems impact astronaut functional performance?

  11. Polarized Antenna Splitting Functions

    Energy Technology Data Exchange (ETDEWEB)

    Larkoski, Andrew J.; Peskin, Michael E.; /SLAC

    2009-10-17

    We consider parton showers based on radiation from QCD dipoles or 'antennae'. These showers are built from 2 → 3 parton splitting processes. The question then arises of what functions replace the Altarelli-Parisi splitting functions in this approach. We give a detailed answer to this question, applicable to antenna showers in which partons carry definite helicity, and to both initial- and final-state emissions.

  12. Functionally graded materials

    CERN Document Server

    Mahamood, Rasheedat Modupe

    2017-01-01

    This book presents the concept of functionally graded materials as well as their uses and different fabrication processes. The authors describe the use of additive manufacturing technology for the production of very complex parts directly from the three-dimensional computer-aided design of the part by adding material layer after layer. A case study is also presented in the book on the experimental analysis of functionally graded material using the laser metal deposition process.

  13. Partly occupied Wannier functions

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Hansen, Lars Bruno; Jacobsen, Karsten Wedel

    2005-01-01

    We introduce a scheme for constructing partly occupied, maximally localized Wannier functions (WFs) for both molecular and periodic systems. Compared to the traditional occupied WFs, the partly occupied WFs possess improved symmetry and localization properties achieved through a bonding-antibonding...

  14. Production Functions Behaving Badly

    DEFF Research Database (Denmark)

    Fredholm, Thomas

    This paper reconsiders Anwar Shaikh's critique of the neoclassical theory of growth and distribution based on its use of aggregate production functions. This is done by reconstructing and extending Franklin M. Fisher's 1971 computer simulations, which Shaikh used to support his critique. Together with other recent extensions to Shaikh's seminal work, my results support and strengthen the evidence against the use of aggregate production functions.

  15. LDF (Lag Dependence Functions)

    DEFF Research Database (Denmark)

    2000-01-01

    LDF (Lag Dependence Functions) is an S-PLUS library for identification of non-linear dependencies in univariate time series. The methods can be considered generalizations of the tools applicable for linear time series.

  16. Purely Functional Structured Programming

    OpenAIRE

    Obua, Steven

    2010-01-01

    The idea of functional programming has played a big role in shaping today's landscape of mainstream programming languages. Another concept that dominates the current programming style is Dijkstra's structured programming. Both concepts have been successfully married, for example in the programming language Scala. This paper proposes how the same can be achieved for structured programming and PURELY functional programming via the notion of LINEAR SCOPE. One advantage of this proposal is that m...

  17. Structure function monitor

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, John T [Placitas, NM; Zimmer, Peter C [Albuquerque, NM; Ackermann, Mark R [Albuquerque, NM

    2012-01-24

    Methods and apparatus for a structure function monitor provide for generation of parameters characterizing a refractive medium. In an embodiment, a structure function monitor acquires images of a pupil plane and an image plane and, from these images, retrieves the phase over an aperture, unwraps the retrieved phase, and analyzes the unwrapped retrieved phase. In an embodiment, analysis yields atmospheric parameters measured at spatial scales from zero to the diameter of a telescope used to collect light from a source.
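
    Of the steps described, phase unwrapping is the most self-contained. A minimal 1-D illustration using numpy.unwrap (the monitor itself retrieves and unwraps 2-D phase over an aperture; this toy example shows only the unwrapping step):

```python
import numpy as np

# A smooth phase ramp, standing in for phase retrieved across an aperture
true_phase = np.linspace(0.0, 6.0 * np.pi, 100)   # exceeds 2*pi
wrapped = np.angle(np.exp(1j * true_phase))       # retrieved modulo 2*pi
unwrapped = np.unwrap(wrapped)                    # remove the 2*pi jumps
print(np.allclose(unwrapped, true_phase))         # prints True
```

Unwrapping succeeds here because successive samples differ by less than π; real aperture data needs 2-D unwrapping, which is considerably harder.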

  18. Inequalities for Humbert functions

    Directory of Open Access Journals (Sweden)

    Ayman Shehata

    2014-04-01

    Full Text Available This paper is motivated by an open problem of Luke's theorem. We consider the problem of developing a unified point of view on the theory of inequalities of Humbert functions, and inequalities for their general ratios are obtained. Some particular cases and refinements are given. Finally, we obtain some important results involving inequalities of Bessel and Whittaker functions as applications.

  19. Applied functional analysis

    CERN Document Server

    Griffel, DH

    2002-01-01

    A stimulating introductory text, this volume examines many important applications of functional analysis to mechanics, fluid mechanics, diffusive growth, and approximation. Detailed enough to impart a thorough understanding, the text is also sufficiently straightforward for those unfamiliar with abstract analysis. Its four-part treatment begins with distribution theory and discussions of Green's functions. Essentially independent of the preceding material, the second and third parts deal with Banach spaces, Hilbert space, spectral theory, and variational techniques. The final part outlines the

  20. Green's functions with applications

    CERN Document Server

    Duffy, Dean G

    2015-01-01

    This second edition systematically leads readers through the process of developing Green's functions for ordinary and partial differential equations. In addition to exploring the classical problems involving the wave, heat, and Helmholtz equations, the book includes special sections on leaky modes, water waves, and absolute/convective instability. The book helps readers develop an intuition about the behavior of Green's functions, and considers the questions of the computational efficiency and possible methods for accelerating the process.

  1. Proteins: Form and function

    OpenAIRE

    Roy D Sleator

    2012-01-01

    An overwhelming array of structural variants has evolved from a comparatively small number of protein structural domains, which has in turn facilitated an expanse of functional derivatives. Herein, I review the primary mechanisms which have contributed to the vastness of our existing, and expanding, protein repertoires. Protein function prediction strategies, both sequence- and structure-based, are also discussed and their associated strengths and weaknesses assessed.

  3. Nonlinear functional analysis

    Directory of Open Access Journals (Sweden)

    W. L. Fouché

    1983-03-01

    Full Text Available In this article we discuss some aspects of nonlinear functional analysis. It includes reviews of Banach's contraction theorem, Schauder's fixed point theorem, globalising techniques, and applications of homotopy theory to nonlinear functional analysis. The author emphasises that fundamentally new ideas are required in order to achieve a better understanding of phenomena which contain both nonlinear and definite infinite-dimensional features.

  4. Function, anticipation, representation

    Science.gov (United States)

    Bickhard, Mark. H.

    2001-06-01

    Function emerges in certain kinds of far-from-equilibrium systems. One important kind of function is that of interactive anticipation, an adaptedness to temporal complexity. Interactive anticipation is the locus of the emergence of normative representational content, and, thus, of representation in general: interactive anticipation is the naturalistic core of the evolution of cognition. Higher forms of such anticipation are involved in the subsequent macro-evolutionary sequence of learning, emotions, and reflexive consciousness.

  5. Functions & Features of Idioms

    Institute of Scientific and Technical Information of China (English)

    周来纳

    2015-01-01

    Idioms, or conventionalized multiword expressions, often but not always non-literal, are hardly marginal in English, though they have been relatively neglected in lexical studies of the language. This neglect is especially evident in respect of the functions of idioms. The aim of this article, accordingly, is to account for the functions of idioms by analyzing how those functions follow from the features of idioms.

  6. Handbook of functional equations functional inequalities

    CERN Document Server

    2014-01-01

    As Richard Bellman has so elegantly stated at the Second International Conference on General Inequalities (Oberwolfach, 1978), “There are three reasons for the study of inequalities: practical, theoretical, and aesthetic.” On the aesthetic aspects, he said, “As has been pointed out, beauty is in the eye of the beholder. However, it is generally agreed that certain pieces of music, art, or mathematics are beautiful. There is an elegance to inequalities that makes them very attractive.” The content of the Handbook focuses mainly on both old and recent developments on approximate homomorphisms, on a relation between the Hardy–Hilbert and the Gabriel inequality, generalized Hardy–Hilbert type inequalities on multiple weighted Orlicz spaces, half-discrete Hilbert-type inequalities, on affine mappings, on contractive operators, on multiplicative Ostrowski and trapezoid inequalities, Ostrowski type inequalities for the  Riemann–Stieltjes integral, means and related functional inequalities, Weighted G...

  7. Functional neuroimaging of sleep.

    Science.gov (United States)

    Nofzinger, Eric A

    2005-03-01

    Sleep and sleep disorders have traditionally been viewed from a polysomnographic perspective. Although these methods provide information on the timing of various stages of sleep and wakefulness, they do not provide information regarding function in brain structures that have been implicated in the generation of sleep and that may be abnormal in different sleep disorders. Functional neuroimaging methods reveal changes in brain function across the sleep-wake cycle that inform models of sleep dysregulation in a variety of sleep disorders. Early studies show reliable increases in function in limbic and anterior paralimbic cortex in rapid eye movement (REM) sleep and decreases in function in higher-order cortical regions in known thalamocortical networks during non-REM sleep. Although most of the early work in this area has been devoted to the study of normal sleep mechanisms, a collection of studies in diverse sleep disorders such as sleep deprivation, depression, insomnia, dyssomnias, narcolepsy, and sleep apnea suggests that functional neuroimaging methods have the potential to clarify the pathophysiology of sleep disorders and to guide treatment strategies.

  8. Network Flows for Functions

    CERN Document Server

    Shah, Virag; Manjunath, D

    2010-01-01

    We consider in-network computation of an arbitrary function over an arbitrary communication network. A network with capacity constraints on the links is given. Some nodes in the network generate data, e.g., sensor nodes in a sensor network. An arbitrary function of this distributed data is to be obtained at a terminal node. The structure of the function is described by a given computation schema, which in turn is represented by a directed tree. We design computing and communicating schemes to obtain the function at the terminal at the maximum rate. For this, we formulate linear programs to determine network flows that maximize the computation rate. We then develop a fast combinatorial primal-dual algorithm to obtain $\epsilon$-approximate solutions to these linear programs. We then briefly describe extensions of our techniques to the cases of multiple terminals wanting different functions, multiple computation schemas for a function, computation with a given desired precision, and to networks with energy c...
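    The rate-maximizing flows described above generalize ordinary maximum flow under link-capacity constraints. As a toy illustration (a simplification, not the authors' computation-rate LP), the following computes a plain single-commodity max-flow with the Edmonds-Karp algorithm; the paper's linear programs impose the same kind of capacity constraints while maximizing a computation rate over a schema tree.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow. capacity[u][v] is residual capacity on u->v."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in capacity.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:          # no augmenting path left
            return flow
        # Recover the path and its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(capacity[u][v] for u, v in path)
        # Push flow: decrease forward residuals, increase reverse ones.
        for u, v in path:
            capacity[u][v] -= aug
            capacity.setdefault(v, {}).setdefault(u, 0)
            capacity[v][u] += aug
        flow += aug

caps = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(caps, "s", "t"))  # → 5 (equals the capacity of the cut {s})
```

    Replacing the single commodity by per-schema-edge flows coupled through the computation tree is what turns this into a linear program rather than a plain combinatorial problem.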

  9. Functional balance tests

    Directory of Open Access Journals (Sweden)

    Parvin Raji

    2012-12-01

    Background and Aim: All activities of daily living require balance control during static and dynamic movements. In recent years, the number of functional balance assessment tools has increased markedly. Functional balance tests emphasize static and dynamic balance, balance during weight transfer, equilibrium responses to imbalance, and functional mobility. These standardized and readily available tests assess performance and require minimal or no equipment and little time to administer. Functional balance is a prerequisite for most static and dynamic activities of daily life and needs sufficient interaction between sensory and motor systems. Given the critical role of balance in everyday life, and the wide application of functional balance tests in the diagnosis and assessment of patients, a review of the functional balance tests was performed. Methods: The Google Scholar, PubMed, Science Direct, Scopus, Magiran, Iran Medex, and IranDoc databases were reviewed and the reliable and valid tests which were mostly used by Iranian researchers were assessed. Conclusion: It seems that the Berg balance scale (BBS) has been studied by Iranian and foreign researchers more than the other tests. This test has high reliability and validity in the elderly and in most neurological disorders.

  10. Microstructure Statistics Property Relations of Anisotropic Polydisperse Particulate Composites using Tomography

    Science.gov (United States)

    2012-10-09

    cosmic radiation using Minkowski functionals [21]. While these statistical descriptors have been used for decades, accurately obtaining higher... within the micro-CT scanner. The container has a diameter of 62 mm and height of 65 mm, and hemispherical beads (3 mm radius) are attached to the... fractions of a single constituent for monomodal spheres [52]. However, as the smallest semiaxes of the ellipsoids are similar to the average radius of the

  11. Space race functional responses.

    Science.gov (United States)

    Sjödin, Henrik; Brännström, Åke; Englund, Göran

    2015-02-22

    We derive functional responses under the assumption that predators and prey are engaged in a space race in which prey avoid patches with many predators and predators avoid patches with few or no prey. The resulting functional response models have a simple structure and include functions describing how the emigration of prey and predators depend on interspecific densities. As such, they provide a link between dispersal behaviours and community dynamics. The derived functional response is general but is here modelled in accordance with empirically documented emigration responses. We find that the prey emigration response to predators has stabilizing effects similar to that of the DeAngelis-Beddington functional response, and that the predator emigration response to prey has destabilizing effects similar to that of the Holling type II response. A stability criterion describing the net effect of the two emigration responses on a Lotka-Volterra predator-prey system is presented. The winner of the space race (i.e. whether predators or prey are favoured) is determined by the relationship between the slopes of the species' emigration responses. It is predicted that predators win the space race in poor habitats, where predator and prey densities are low, and that prey are more successful in richer habitats.

  12. Functional integration over geometries

    CERN Document Server

    Mottola, E

    1995-01-01

    The geometric construction of the functional integral over coset spaces {\\cal M}/{\\cal G} is reviewed. The inner product on the cotangent space of infinitesimal deformations of \\cal M defines an invariant distance and volume form, or functional integration measure on the full configuration space. Then, by a simple change of coordinates parameterizing the gauge fiber \\cal G, the functional measure on the coset space {\\cal M}/{\\cal G} is deduced. This change of integration variables leads to a Jacobian which is entirely equivalent to the Faddeev-Popov determinant of the more traditional gauge fixed approach in non-abelian gauge theory. If the general construction is applied to the case where \\cal G is the group of coordinate reparametrizations of spacetime, the continuum functional integral over geometries, {\\it i.e.} metrics modulo coordinate reparameterizations may be defined. The invariant functional integration measure is used to derive the trace anomaly and effective action for the conformal part of the me...

  13. The Functions of Sleep

    Directory of Open Access Journals (Sweden)

    Samson Z Assefa

    2015-08-01

    Full Text Available Sleep is a ubiquitous component of animal life including birds and mammals. The exact function of sleep has been one of the mysteries of biology. A considerable number of theories have been put forward to explain the reason(s for the necessity of sleep. To date, while a great deal is known about what happens when animals sleep, there is no definitive comprehensive explanation as to the reason that sleep is an inevitable part of animal functioning. It is well known that sleep is a homeostatically regulated body process, and that prolonged sleep deprivation is fatal in animals. In this paper, we present some of the theories as to the functions of sleep and provide a review of some hypotheses as to the overall physiologic function of sleep. To better understand the purpose for sleeping, we review the effects of sleep deprivation on physical, neurocognitive and psychic function. A better understanding of the purpose for sleeping will be a great advance in our understanding of the nature of the animal kingdom, including our own.

  14. Advanced carbon nanotubes functionalization

    Science.gov (United States)

    Setaro, A.

    2017-10-01

    Similar to graphene, carbon nanotubes are materials made of pure carbon in its sp2 form. Their extended conjugated π-network provides them with remarkable quantum optoelectronic properties. Frustratingly, it also brings drawbacks. The π–π stacking interaction makes as-produced tubes bundle together, blurring all their quantum properties. Functionalization aims at modifying and protecting the tubes while hindering π–π stacking. Several functionalization strategies have been developed to circumvent this limitation in order for nanotubes applications to thrive. In this review, we summarize the different approaches established so far, emphasizing the balance between functionalization efficacy and the preservation of the tubes’ properties. Much attention will be given to a functionalization strategy overcoming the covalent–noncovalent dichotomy and to the implementation of two advanced functionalization schemes: (a) conjugation with molecular switches, to yield hybrid nanosystems with chemo-physical properties that can be tuned in a controlled and reversible way, and; (b) plasmonic nanosystems, whose ability to concentrate and enhance the electromagnetic fields can be taken advantage of to enhance the optical response of the tubes.

  15. Sperm function test

    Directory of Open Access Journals (Sweden)

    Pankaj Talwar

    2015-01-01

    With completely normal semen analysis parameters it may not be necessary to move to specialized tests early, but in cases with borderline parameters or with a history of fertilization failure it becomes necessary to perform a battery of tests to evaluate different parameters of the spermatozoa. Various sperm function tests have been proposed and endorsed by different researchers in addition to the routine evaluation of fertility. These tests assess the function of a certain part of the spermatozoon and give insight into the events in fertilization of the oocyte. The sperm need to obtain nutrition from the seminal plasma in the form of fructose and citrate (assessed by qualitative and quantitative fructose estimation and citrate estimation). They should be protected from the harmful effects of pus cells and reactive oxygen species (ROS) (leukocyte detection test, ROS estimation). Their numbers should be sufficient (count) and their structure normal to be able to fertilize eggs (semen morphology). Sperm should have an intact and functioning membrane to survive the harsh environment of the vagina and uterine fluids (vitality and hypo-osmotic swelling tests), and should have good mitochondrial function to be able to provide energy (mitochondrial activity index test). They should also have satisfactory acrosome function to be able to burrow a hole in the zona pellucida (acrosome intactness test, zona penetration test). Finally, they should have properly packed DNA in the nucleus to be able to transfer the male genes to the oocyte during fertilization (nuclear chromatin decondensation test).

  16. Functional imaging and endoscopy

    Institute of Scientific and Technical Information of China (English)

    Jian-Guo Zhang; Hai-Feng Liu

    2011-01-01

    The emergence of endoscopy has brought great changes to the diagnosis and treatment of gastrointestinal diseases. The mere observation of anatomy with the imaging mode of modern endoscopy has played a significant role in this regard. However, the increasing number of endoscopies has exposed additional deficiencies and defects, such as anatomically similar diseases. Endoscopy can be used to examine lesions that are difficult to identify and diagnose. Early disease detection requires that substantive changes in biological function be observed, but in the absence of marked morphological changes, endoscopic detection and diagnosis are difficult. Disease detection requires not only anatomic but also functional imaging to achieve a comprehensive interpretation and understanding. Therefore, we must ask whether endoscopic examination can be integrated with both anatomic imaging and functional imaging. In recent years, as molecular biology and medical imaging technology have further developed, more functional imaging methods have emerged. This paper is a review of the literature related to endoscopic optical imaging methods in the hope of initiating integration of functional imaging and anatomical imaging to yield a new and more effective type of endoscopy.

  17. Power-functional network

    Science.gov (United States)

    Sun, Yong; Kurths, Jürgen; Zhan, Meng

    2017-08-01

    Power grids and their properties have been studied broadly in many aspects. In this paper, we propose a novel concept, power-flow-based power grid, as a typical power-functional network, based on the calculation of power flow distribution from power electrical engineering. We compare it with structural networks based on the shortest path length and effective networks based on the effective electrical distance and study the relationship among these three kinds of networks. We find that they have roughly positive correlations with each other, indicating that in general any close nodes in the topological structure are actually connected in function. However, we do observe some counter-examples that two close nodes in a structural network can have a long distance in a power-functional network, namely, two physically connected nodes can actually be separated in function. In addition, we find that power grids in the structural network tend to be heterogeneous, whereas those in the effective and power-functional networks tend to be homogeneous. These findings are expected to be significant not only for power grids but also for various other complex networks.

  18. Learning View Generalization Functions

    CERN Document Server

    Breuel, Thomas M

    2007-01-01

    Learning object models from views in 3D visual object recognition is usually formulated either as a function approximation problem of a function describing the view-manifold of an object, or as that of learning a class-conditional density. This paper describes an alternative framework for learning in visual object recognition, that of learning the view-generalization function. Using the view-generalization function, an observer can perform Bayes-optimal 3D object recognition given one or more 2D training views directly, without the need for a separate model acquisition step. The paper shows that view generalization functions can be computationally practical by restating two widely-used methods, the eigenspace and linear combination of views approaches, in a view generalization framework. The paper relates the approach to recent methods for object recognition based on non-uniform blurring. The paper presents results both on simulated 3D ``paperclip'' objects and real-world images from the COIL-100 database sho...

  19. The Initial Mass Function of Low-Mass Stars and Brown Dwarfs in Young Clusters

    Science.gov (United States)

    Luhman, K. L.; Rieke, G. H.; Young, Erick T.; Cotera, Angela S.; Chen, H.; Rieke, Marcia J.; Schneider, Glenn; Thompson, Rodger I.

    2000-09-01

    We have obtained images of the Trapezium Cluster (140''×140''; 0.3 pc×0.3 pc) with the Hubble Space Telescope Near-Infrared Camera and Multi-Object Spectrometer (NICMOS). Combining these data with new ground-based K-band spectra (R=800) and existing spectral types and photometry, we have constructed an H-R diagram and used it and other arguments to infer masses and ages. To allow comparison with the results of our previous studies of IC 348 and ρ Oph, we first use the models of D'Antona & Mazzitelli. With these models, the distributions of ages of comparable samples of stars in the Trapezium, ρ Oph, and IC 348 indicate median ages of ~0.4 Myr for the first two regions and ~1-2 Myr for the latter. The low-mass initial mass functions (IMFs) in these sites of clustered star formation are similar over a wide range of stellar densities (ρ Oph, n=0.2-1×10^3 pc^-3; IC 348, n=1×10^3 pc^-3; Trapezium, n=1-5×10^4 pc^-3) and other environmental conditions (e.g., presence or absence of OB stars). With current data, we cannot rule out modest variations in the substellar mass functions among these clusters. We then make the best estimate of the true form of the IMF in the Trapezium by using the evolutionary models of Baraffe et al. and an empirically adjusted temperature scale and compare this mass function to recent results for the Pleiades and the field. All of these data are consistent with an IMF that is flat or rises slowly from the substellar regime to about 0.6 M⊙ and then rolls over into a power law that continues from about 1 M⊙ to higher masses with a slope similar to or somewhat larger than the Salpeter value of 1.35. For the Trapezium, this behavior holds from our completeness limit of ~0.02 M⊙ and probably, after a modest completeness correction, even from 0.01-0.02 M⊙. These data include ~50 likely brown dwarfs. We test the predictions of theories of the IMF against (1) the shape of the IMF, which is not log-normal, in clusters and the field, (2) the
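    The IMF shape described in this abstract, roughly flat in log-mass up to a break near 0.6 solar masses and then falling with about the Salpeter slope, can be sketched as a broken power law. This is a schematic illustration matched continuously at the break, not the paper's fitted function; the break mass and slopes are taken loosely from the abstract.

```python
M_BREAK = 0.6          # solar masses (approximate break, per the abstract)
GAMMA_LOW = 0.0        # flat dN/dlogM below the break
GAMMA_HIGH = -1.35     # Salpeter slope above the break

def imf_dn_dlogm(m):
    """Relative dN/dlogM (normalized to 1 at the break), continuous there."""
    if m <= M_BREAK:
        return (m / M_BREAK) ** GAMMA_LOW
    return (m / M_BREAK) ** GAMMA_HIGH

# Above the break, doubling the mass lowers dN/dlogM by a factor 2**1.35.
print(imf_dn_dlogm(0.1), imf_dn_dlogm(0.6), imf_dn_dlogm(1.2))
```

    A log-normal IMF, which the authors argue the data disfavor, would instead curve over smoothly in dN/dlogM rather than staying flat below the break.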

  20. Algal functional annotation tool

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, D. [UCLA; Casero, D. [UCLA; Cokus, S. J. [UCLA; Merchant, S. S. [UCLA; Pellegrini, M. [UCLA

    2012-07-01

    The Algal Functional Annotation Tool is a web-based comprehensive analysis suite integrating annotation data from several pathway, ontology, and protein family databases. The current version provides annotation for the model alga Chlamydomonas reinhardtii, and in the future will include additional genomes. The site allows users to interpret large gene lists by identifying associated functional terms, and their enrichment. Additionally, expression data for several experimental conditions were compiled and analyzed to provide an expression-based enrichment search. A tool to search for functionally-related genes based on gene expression across these conditions is also provided. Other features include dynamic visualization of genes on KEGG pathway maps and batch gene identifier conversion.

  1. Partition Function of Spacetime

    CERN Document Server

    Makela, Jarmo

    2008-01-01

    We consider a microscopic model of spacetime, where spacetime is assumed to be a specific graph with Planck size quantum black holes on its vertices. As a thermodynamical system under consideration we take a certain uniformly accelerating, spacelike two-surface of spacetime which we call, for the sake of brevity and simplicity, as {\\it acceleration surface}. Using our model we manage to obtain an explicit and surprisingly simple expression for the partition function of an acceleration surface. Our partition function implies, among other things, the Unruh and the Hawking effects. It turns out that the Unruh and the Hawking effects are consequences of a specific phase transition, which takes place in spacetime, when the temperature of spacetime equals, from the point of view of an observer at rest with respect to an acceleration surface, to the Unruh temperature measured by that observer. When constructing the partition function of an acceleration surface we are forced to introduce a quantity which plays the ro...

  2. Spaces of continuous functions

    CERN Document Server

    Groenewegen, G L M

    2016-01-01

    The space C(X) of all continuous functions on a compact space X carries the structure of a normed vector space, an algebra and a lattice. On the one hand we study the relations between these structures and the topology of X, on the other hand we discuss a number of classical results according to which an algebra or a vector lattice can be represented as a C(X). Various applications of these theorems are given. Some attention is devoted to related theorems, e.g. the Stone Theorem for Boolean algebras and the Riesz Representation Theorem. The book is functional analytic in character. It does not presuppose much knowledge of functional analysis; it contains introductions into subjects such as the weak topology, vector lattices and (some) integration theory.

  3. [Functional mitral regurgitation].

    Science.gov (United States)

    Sade, Leyla Elif

    2009-07-01

    Functional mitral regurgitation (FMR) is mitral regurgitation that occurs due to myocardial disease in the presence of normal mitral valve leaflets. This scenario has different characteristics from organic mitral regurgitation. Functional mitral regurgitation is a disease of the ventricle and occurs through deformation of the mitral valve leaflets. This morbid entity is frequent and carries a poor prognosis. Functional mitral regurgitation is the result of a complex pathophysiologic process including left ventricular local and global remodeling, mitral annular and papillary muscle dysfunction, and left ventricular dysfunction. The dynamic behavior of FMR complicates the quantification of the regurgitation. Exercise stress echocardiography is of particular importance in the evaluation of the hemodynamic burden of FMR. The particular pathophysiological properties of FMR necessitate therapeutic approaches different from the current ones for classical mitral regurgitation.

  4. Ghrelin and Functional Dyspepsia

    Directory of Open Access Journals (Sweden)

    Takashi Akamizu

    2010-01-01

    The majority of patients with dyspepsia have no identifiable cause of their disease, leading to a diagnosis of functional dyspepsia (FD). While a number of different factors affect gut activity, components of the nervous and endocrine systems are essential for normal gut function. Communication between the brain and gut occurs via direct neural connections or endocrine signaling events. Ghrelin, a peptide produced by the stomach, affects gastric motility/emptying and secretion, suggesting it may play a pathophysiological role in FD. It is also possible that the functional abnormalities in FD may affect ghrelin production in the stomach. Plasma ghrelin levels are reported to be altered in FD, correlating with FD symptom score. Furthermore, some patients with FD suffer from anorexia with body-weight loss. As ghrelin increases gastric emptying and promotes feeding, ghrelin therapy may be a new approach to the treatment of FD.

  5. Quantal density functional theory

    CERN Document Server

    Sahni, Viraht

    2016-01-01

    This book deals with quantal density functional theory (QDFT) which is a time-dependent local effective potential theory of the electronic structure of matter. The treated time-independent QDFT constitutes a special case. In the 2nd edition, the theory is extended to include the presence of external magnetostatic fields. The theory is a description of matter based on the ‘quantal Newtonian’ first and second laws which is in terms of “classical” fields that pervade all space, and their quantal sources. The fields, which are explicitly defined, are separately representative of electron correlations due to the Pauli exclusion principle, Coulomb repulsion, correlation-kinetic, correlation-current-density, and correlation-magnetic effects. The book further describes Schrödinger theory from the new physical perspective of fields and quantal sources. It also describes traditional Hohenberg-Kohn-Sham DFT, and explains via QDFT the physics underlying the various energy functionals and functional derivatives o...

  6. Functional Programming Using F#

    DEFF Research Database (Denmark)

    Hansen, Michael Reichhardt; Rischel, Hans

    This comprehensive introduction to the principles of functional programming using F# shows how to apply basic theoretical concepts to produce succinct and elegant programs. It demonstrates the role of functional programming in a wide spectrum of applications including databases and systems. Coverage also includes advanced features in the .NET library, the imperative features of F# and topics such as text processing, sequences, computation expressions and asynchronous computation. With a broad spectrum of examples and exercises, the book is perfect for courses in functional programming and for self-study. Enhancing its use as a text is an accompanying website with downloadable programs, lecture slides, mini-projects and links to further F# sources.

  7. Platelet function in dogs

    DEFF Research Database (Denmark)

    Nielsen, Line A.; Zois, Nora Elisabeth; Pedersen, Henrik D.

    2007-01-01

    Background: Clinical studies investigating platelet function in dogs have had conflicting results that may be caused by normal physiologic variation in platelet response to agonists. Objectives: The objective of this study was to investigate platelet function in clinically healthy dogs of 4 different breeds by whole-blood aggregometry and with a point-of-care platelet function analyzer (PFA-100), and to evaluate the effect of acetylsalicylic acid (ASA) administration on the results from both methods. Methods: Forty-five clinically healthy dogs (12 Cavalier King Charles Spaniels [CKCS], 12 ... applied. However, the importance of these breed differences remains to be investigated. The PFA-100 method with Col + Epi as agonists, and ADP-induced platelet aggregation, appear to be sensitive to ASA in dogs.

  9. Functional illiteracy in Slovenia

    Directory of Open Access Journals (Sweden)

    Ester Možina

    1999-12-01

    The author draws attention to the fact that, in determining functional illiteracy, there remain many terminological disagreements and diverse opinions regarding illiteracy. Furthermore, there are also different methods for measuring writing abilities, leading to disparate results. The introductory section presents the dilemmas relating to the term functional illiteracy, while the second part is concerned with the various methods for measuring literacy. The author also critically assesses the research studies aimed at evaluating the scope of literacy amongst adults in Slovenia during the past decade. In this paper, she has adopted a methodology which does not determine what is functional and what is not in our society, in order to avoid limiting the richness of individual writing praxis.

  10. Harmonic function theory

    CERN Document Server

    Axler, Sheldon; Ramey, Wade

    2013-01-01

    This is a book about harmonic functions in Euclidean space. Readers with a background in real and complex analysis at the beginning graduate level will feel comfortable with the material presented here. The authors have taken unusual care to motivate concepts and simplify proofs. Topics include: basic properties of harmonic functions, Poisson integrals, the Kelvin transform, spherical harmonics, harmonic Hardy spaces, harmonic Bergman spaces, the decomposition theorem, Laurent expansions, isolated singularities, and the Dirichlet problem. The new edition contains a completely rewritten chapter on spherical harmonics, a new section on extensions of Bôcher's Theorem, new exercises and proofs, as well as revisions throughout to improve the text. A unique software package, designed by the authors and available by e-mail, supplements the text for readers who wish to explore harmonic function theory on a computer.

  11. Protein Functionalized Nanodiamond Arrays

    Directory of Open Access Journals (Sweden)

    Liu YL

    2010-01-01

    Various nanoscale elements are currently being explored for bio-applications, such as bio-imaging, bio-detection, and bio-sensing. Among them, nanodiamonds possess remarkable features such as low bio-cytotoxicity, good optical properties in fluorescence and Raman spectra, and good photostability for bio-applications. In this work, we devise techniques to position functionalized nanodiamonds on self-assembled monolayer (SAM) arrays adsorbed on silicon and ITO substrate surfaces using electron beam lithography techniques. The nanodiamond arrays were functionalized with lysozyme to target a certain biomolecule or protein specifically. The optical properties of the nanodiamond-protein complex arrays were characterized by a high-throughput confocal microscope. The synthesized nanodiamond-lysozyme complex arrays were found to still retain their functionality in interacting with E. coli.

  12. Generating functions for symmetric and shifted symmetric functions

    OpenAIRE

    Jing, Naihuan; Rozhkovskaya, Natasha

    2016-01-01

    We describe generating functions for several important families of classical symmetric functions and shifted Schur functions. The approach is originated from vertex operator realization of symmetric functions and offers a unified method to treat various families of symmetric functions and their shifted analogues.
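
    The classical prototype of such a generating function is the identity for the complete homogeneous symmetric functions, $\prod_i (1 - x_i t)^{-1} = \sum_{k\ge 0} h_k(x)\,t^k$. A small numerical check of this identity (the sample values and truncation order are our own illustration, not from the record):

    ```python
    import math
    from itertools import combinations_with_replacement

    def h(k, xs):
        # h_k(xs): the complete homogeneous symmetric polynomial, i.e. the
        # sum of all degree-k monomials in the variables xs (with repetition).
        return sum(math.prod(c) for c in combinations_with_replacement(xs, k))

    xs = [0.3, 0.5, 0.2]   # sample variable values (chosen for illustration)
    t = 0.1                # evaluated well inside the radius of convergence
    N = 30                 # truncation order of the series

    lhs = math.prod(1.0 / (1.0 - x * t) for x in xs)   # product side
    rhs = sum(h(k, xs) * t**k for k in range(N))       # truncated series side
    assert abs(lhs - rhs) < 1e-12
    ```

    With `x_i * t` at most 0.05, the neglected tail of the series is far below the tolerance, so the two sides agree to near machine precision.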

  14. Study of a monomode stirred-tank microwave reactor for chemical synthesis, with a proposed scale-up methodology

    OpenAIRE

    Ballestas Castro, Dairo

    2010-01-01

    Microwave (MW) assisted organic synthesis has been employed in many laboratories for more than 20 years. There is controversy concerning the effects of MW on reaction kinetics, since enhancements of reaction rates have been observed. While the advantages of MW heating could be of interest for process intensification, this technique has rarely been employed for large-scale production. Scaling-up methods are rare, and the existing techniques are generally empirical. The aim of our proj...

  15. Chromatin Structure and Function

    CERN Document Server

    Wolffe, Alan P

    1999-01-01

    The Third Edition of Chromatin: Structure and Function brings the reader up-to-date with the remarkable progress in chromatin research over the past three years. It has been extensively rewritten to cover new material on chromatin remodeling, histone modification, nuclear compartmentalization, DNA methylation, and transcriptional co-activators and co-repressors. The book is written in a clear and concise fashion, with 60 new illustrations. Chromatin: Structure and Function provides the reader with a concise and coherent account of the nature, structure, and assembly of chromatin and its active

  16. Algal functional annotation tool

    Energy Technology Data Exchange (ETDEWEB)

    2012-07-12

    Abstract BACKGROUND: Progress in genome sequencing is proceeding at an exponential pace, and several new algal genomes are becoming available every year. One of the challenges facing the community is the association of protein sequences encoded in the genomes with biological function. While most genome assembly projects generate annotations for predicted protein sequences, they are usually limited and integrate functional terms from a limited number of databases. Another challenge is the use of annotations to interpret large lists of 'interesting' genes generated by genome-scale datasets. Previously, these gene lists had to be analyzed across several independent biological databases, often on a gene-by-gene basis. In contrast, several annotation databases, such as DAVID, integrate data from multiple functional databases and reveal underlying biological themes of large gene lists. While several such databases have been constructed for animals, none is currently available for the study of algae. Due to renewed interest in algae as potential sources of biofuels and the emergence of multiple algal genome sequences, a significant need has arisen for such a database to process the growing compendiums of algal genomic data. DESCRIPTION: The Algal Functional Annotation Tool is a web-based comprehensive analysis suite integrating annotation data from several pathway, ontology, and protein family databases. The current version provides annotation for the model alga Chlamydomonas reinhardtii, and in the future will include additional genomes. The site allows users to interpret large gene lists by identifying associated functional terms, and their enrichment. Additionally, expression data for several experimental conditions were compiled and analyzed to provide an expression-based enrichment search. A tool to search for functionally-related genes based on gene expression across these conditions is also provided. Other features include dynamic visualization of genes

  17. Rarefied elliptic hypergeometric functions

    CERN Document Server

    Spiridonov, V P

    2016-01-01

    We prove exact evaluation formulae for two multiple rarefied elliptic beta integrals related to the simplest lens space. These integrals generalize the multiple type I and II van Diejen-Spiridonov integrals attached to the root system $C_n$. Symmetries of the rarefied elliptic analogue of the Euler-Gauss hypergeometric function are described and the corresponding generalization of the hypergeometric equation is constructed. An extension of the latter function to the root system $C_n$ and applications to some eigenvalue problems are briefly discussed.

  18. A stabilized pairing functional

    CERN Document Server

    Erler, J.; Reinhard, P.-G.

    2008-01-01

    We propose a modified pairing functional for nuclear structure calculations which avoids the abrupt phase transition between pairing and non-pairing states. The intended application is the description of nuclear collective motion where the smoothing of the transition is compulsory to remove singularities. The stabilized pairing functional allows a thoroughly variational formulation, unlike the Lipkin-Nogami (LN) scheme which is often used for the purpose of smoothing. First applications to nuclear ground states and collective excitations prove the reliability and efficiency of the proposed stabilized pairing.

  19. Complex function theory

    CERN Document Server

    Sarason, Donald

    2007-01-01

    Complex Function Theory is a concise and rigorous introduction to the theory of functions of a complex variable. Written in a classical style, it is in the spirit of the books by Ahlfors and by Saks and Zygmund. Being designed for a one-semester course, it is much shorter than many of the standard texts. Sarason covers the basic material through Cauchy's theorem and applications, plus the Riemann mapping theorem. It is suitable for either an introductory graduate course or an undergraduate course for students with adequate preparation. The first edition was published with the title Notes on Co

  20. The function genomics study

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Genomics is a biology term that appeared ten years ago, used to describe research on genome mapping, sequencing, and structure analysis. Genomics, the first journal dedicated to publishing papers on genomics research, was founded in 1986. In the past decade, the concept of genomics has been widely accepted by scientists engaged in biological research. Meanwhile, the scope of genomics research has been extended continuously, from simple gene mapping and sequencing to functional genomics. To reflect this change, genomics is now divided into two parts: structural genomics and functional genomics.