The exponentiated generalized Pareto distribution | Adeyemi | Ife ...
African Journals Online (AJOL)
Recently, Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...
Robust Bayesian inference of generalized Pareto distribution ...
African Journals Online (AJOL)
Using an exhaustive Monte Carlo study, we show that, given a suitable generalized loss function, a robust Bayesian estimator of the model can be constructed. Key words: Bayesian estimation; Extreme value; Generalized Fisher information; Generalized Pareto distribution; Monte Carlo; ...
van Zyl, J. Martin
2012-01-01
Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of the generalized Pareto distribution using transformed observations, based on the probability-weighted moment method, is tested. The transformation was found to improve the performance of the probability-weighted estimator and performs well wit...
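The transformation idea in this abstract can be sketched numerically. The following is a minimal illustration (numpy/scipy assumed; the scale parameter is taken as known, which sidesteps the estimation details of the paper): a GPD sample with positive shape ξ is mapped to a Pareto sample, whose explicit maximum likelihood estimator then recovers ξ.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
xi, sigma = 0.25, 2.0            # true GPD shape and scale
x = genpareto.rvs(c=xi, scale=sigma, size=50_000, random_state=rng)

# Transform: if X ~ GPD(xi, sigma) with xi > 0, then
# Z = 1 + xi*X/sigma is Pareto with tail index alpha = 1/xi on [1, inf).
z = 1.0 + xi * x / sigma

# Explicit Pareto MLE for the tail index: alpha_hat = n / sum(log z)
alpha_hat = len(z) / np.log(z).sum()
xi_hat = 1.0 / alpha_hat          # back-transform to the GPD shape

print(round(xi_hat, 3))
```

With 50,000 samples the estimate should land close to the true shape of 0.25.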
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
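A peaks-over-threshold GPD fit of the kind this study builds on can be sketched as follows. This is a toy example with synthetic data (numpy/scipy assumed); the varying thresholds, Bayesian estimation, and scaling analysis of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
# Synthetic daily precipitation: ~40% wet days with gamma-distributed amounts
wet = rng.random(20_000) < 0.4
precip = np.where(wet, rng.gamma(shape=0.9, scale=7.0, size=20_000), 0.0)

# Threshold chosen as a high quantile of the non-zero precipitation
u = np.quantile(precip[precip > 0], 0.95)
excess = precip[precip > u] - u          # peaks-over-threshold series

# Fit the GPD to the excesses (location fixed at 0)
xi_hat, _, sigma_hat = genpareto.fit(excess, floc=0)

# Level exceeded by roughly 1 in 100 threshold exceedances
rl = u + genpareto.ppf(0.99, c=xi_hat, scale=sigma_hat)
print(round(rl, 1))
```

The return level `rl` is the threshold plus a high quantile of the fitted excess distribution; in the paper this step is done within a Bayesian framework so that uncertainty can be propagated.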
Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder
1992-01-01
As a generalization of the common assumption of exponential distribution of the exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using estimation by both the method of moments and probability-weighted moments. Maintaining the generalized Pareto distribution as the parent exceedance distribution, the T-year event is estimated assuming the exceedances to be exponentially distributed. For moderately long-tailed exceedance distributions and small to moderate sample sizes it is found, by comparing mean square errors of the T-year event estimators, that the exponential distribution is preferable to the correct generalized Pareto distribution despite the introduced model error and despite a possible rejection of the exponential hypothesis by a test of significance. For moderately short-tailed exceedance...
A New Generalization of the Pareto Distribution and Its Application to Insurance Data
Directory of Open Access Journals (Sweden)
Mohamed E. Ghitany
2018-02-01
Full Text Available The classical Pareto distribution is one of the most attractive in statistics, particularly in actuarial statistics and finance. For example, it is widely used when calculating reinsurance premiums. In recent years, many alternative distributions have been proposed to obtain better fits, especially when the tail of the empirical distribution of the data is very long. In this work, an alternative generalization of the Pareto distribution is proposed and its properties are studied. Finally, an application of the proposed model to an earthquake insurance data set is presented.
Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder
1992-01-01
As a generalization of the common assumption of exponential distribution of the exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using estimation by both the method of moments and probability-weighted moments. ... For moderately short-tailed exceedance distributions (with a physically justified upper limit) the correct exceedance distribution should be applied despite a possible acceptance of the exponential assumption by a test of significance.
Suarez, R
2001-01-01
In this paper an alternative non-parametric historical simulation approach, the Mixing Unconditional Disturbances model with constant volatility, in which price paths are generated by reshuffling disturbances of S&P 500 Index returns over the period 1950-1998, is used to estimate a Generalized Extreme Value Distribution and a Generalized Pareto Distribution. An ordinary back-test for the period 1999-2008 was performed to verify this technique, providing higher-accuracy return levels under upper ...
GENERALIZED DOUBLE PARETO SHRINKAGE.
Armagan, Artin; Dunson, David B; Lee, Jaeyong
2013-01-01
We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferences in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys' priors. While it has a spike at zero like the Laplace density, it also has a Student's t -like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Mann, Jakob; Rathmann, Ole
2015-01-01
This study examines the various sources of uncertainty in the application of two widely used extreme value distribution functions, the generalized extreme value distribution (GEVD) and the generalized Pareto distribution (GPD). The study is done through the analysis of measurements ... as a guideline for applying GEVD and GPD to wind time series of limited length. The data analysis shows that, with a reasonable choice of relevant parameters, GEVD and GPD give consistent estimates of the return winds. For GEVD, the base period should be chosen in accordance with the occurrence of the extreme wind...
Bivariate generalized Pareto distribution for extreme atmospheric particulate matter
Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin
2015-02-01
The high particulate matter (PM10) level is a prominent issue, causing various impacts on human health and seriously affecting the economy. The asymptotic theory of extreme values is applied to analyse the relation of extreme PM10 data from two nearby air quality monitoring stations. The series of daily maxima of PM10 for the Johor Bahru and Pasir Gudang stations are considered for the years 2001 to 2010. The 85% and 95% marginal quantiles are applied to determine the threshold values and hence construct the series of exceedances over the chosen threshold. The logistic, asymmetric logistic, negative logistic and asymmetric negative logistic models are considered as the dependence function for the joint distribution of a bivariate observation. Maximum likelihood estimation is employed for parameter estimation. The best fitted model is chosen based on the Akaike Information Criterion and the quantile plots. It is found that the asymmetric logistic model gives the best fit for the bivariate extreme PM10 data and shows weak dependence between the two stations.
MATLAB implementation of satellite positioning error overbounding by generalized Pareto distribution
Ahmad, Khairol Amali; Ahmad, Shahril; Hashim, Fakroul Ridzuan
2018-02-01
In the satellite navigation community, error overbounding has been implemented in the process of integrity monitoring. In this work, MATLAB programming is used to implement the overbounding of the satellite positioning error CDF. Using a reference trajectory, the horizontal position errors (HPE) are computed and their non-parametric distribution function is given by the empirical cumulative distribution function (ECDF). According to the results, these errors have a heavy-tailed distribution. Since the ECDF of the HPE in an urban environment is not Gaussian distributed, the ECDF is overbounded with the CDF of the generalized Pareto distribution (GPD).
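The tail-overbounding check described here can be sketched in a few lines (Python with numpy/scipy rather than the paper's MATLAB; the heavy-tailed errors are simulated, not taken from a trajectory): fit a GPD to the tail of the error sample and compare its CDF with the ECDF, since an overbound requires the model CDF to sit at or below the empirical one.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
# Stand-in for heavy-tailed horizontal position errors (HPE)
hpe = np.abs(rng.standard_t(df=3, size=10_000))

u = np.quantile(hpe, 0.90)               # tail threshold
tail = hpe[hpe > u] - u
xi, _, sigma = genpareto.fit(tail, floc=0)

# Empirical CDF of the tail vs. the fitted GPD CDF
xs = np.sort(tail)
ecdf = np.arange(1, len(xs) + 1) / len(xs)
gpd_cdf = genpareto.cdf(xs, c=xi, scale=sigma)

# An overbound requires the model CDF to sit at or below the ECDF,
# i.e. to assign at least as much probability to large errors;
# here we simply report the worst violation of that requirement
max_overshoot = float(np.max(gpd_cdf - ecdf))
print(round(xi, 2), round(max_overshoot, 3))
```

A maximum-likelihood fit, as used here, only approximates an overbound; in practice the GPD parameters would be inflated until `gpd_cdf <= ecdf` holds everywhere in the tail.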
Group Acceptance Sampling Plan for Lifetime Data Using Generalized Pareto Distribution
Directory of Open Access Journals (Sweden)
Muhammad Aslam
2010-02-01
Full Text Available In this paper, a group acceptance sampling plan (GASP is introduced for the situations when lifetime of the items follows the generalized Pareto distribution. The design parameters such as minimum group size and acceptance number are determined when the consumer’s risk and the test termination time are specified. The proposed sampling plan is compared with the existing sampling plan. It is concluded that the proposed sampling plan performs better than the existing plan in terms of minimum sample size required to reach the same decision.
Directory of Open Access Journals (Sweden)
Kareema Abed Al-Kadim
2017-12-01
Full Text Available In this paper the Rayleigh Pareto distribution is introduced, denoted by R_PD. We state some useful functions and give some of its properties, such as the entropy function, mean, mode, median, variance, the rth moment about the mean, the rth moment about the origin, reliability, hazard function, and the coefficients of variation, skewness and kurtosis. Finally, we estimate the parameters; the aim of this work is to introduce a new distribution.
Higher moments method for generalized Pareto distribution in flood frequency analysis
Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.
2017-08-01
The generalized Pareto distribution (GPD) has proven to be an ideal distribution for fitting peaks-over-threshold series in flood frequency analysis. Several moment-based estimators are applied to estimate the parameters of the GPD. Higher linear moments (LH moments) and higher probability-weighted moments (HPWM) are linear combinations of probability-weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH-moments and HPWM methods, the parameter estimated by the two methods is unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives identical parameter estimates to the linear-moments (L-moments) method. This equivalence also holds for r ≥ 1 when the same-order PWM are used in the HPWM and LH-moments methods.
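The order-0 PWM estimators underlying these methods have closed forms (Hosking and Wallis, 1987). A minimal sketch, assuming numpy/scipy and noting that scipy's shape convention is ξ = −κ relative to Hosking's parametrization:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
xi_true, sigma_true = 0.2, 1.5
x = genpareto.rvs(c=xi_true, scale=sigma_true, size=100_000, random_state=rng)

# Sample probability-weighted moments a_s = E[X (1-F(X))^s], s = 0, 1
xs = np.sort(x)
n = len(xs)
a0 = xs.mean()
a1 = np.sum(xs * (n - np.arange(1, n + 1)) / (n - 1)) / n

# Hosking-Wallis PWM estimators (kappa = -xi in scipy's convention)
kappa = a0 / (a0 - 2 * a1) - 2
sigma_hat = 2 * a0 * a1 / (a0 - 2 * a1)
xi_hat = -kappa

print(round(xi_hat, 2), round(sigma_hat, 2))
```

For this sample size both estimates should sit very close to the true values (0.2, 1.5); the LH-moments and HPWM methods discussed in the abstract generalize these moments to higher orders r.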
Modelling road accident blackspots data with the discrete generalized Pareto distribution.
Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María
2014-10-01
This study shows how road traffic networks events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods of their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: Chi-square test and discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup
2015-01-01
In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF
Energy Technology Data Exchange (ETDEWEB)
Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-02-15
In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
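AIC-based scoring of a GPD tail model can be illustrated as follows. This is a simplified sketch, not the authors' threshold-selection procedure: it uses AIC to compare a GPD against a one-parameter exponential tail model on a fixed exceedance set (numpy/scipy assumed, synthetic heavy-tailed data).

```python
import numpy as np
from scipy.stats import genpareto, expon

rng = np.random.default_rng(4)
data = np.abs(rng.standard_t(df=3, size=20_000))   # heavy-tailed sample

u = np.quantile(data, 0.95)
exc = data[data > u] - u                            # fixed exceedance set

# AIC = 2k - 2*loglik, computed for two candidate tail models
xi, _, sigma = genpareto.fit(exc, floc=0)
ll_gpd = genpareto.logpdf(exc, c=xi, scale=sigma).sum()
aic_gpd = 2 * 2 - 2 * ll_gpd                        # k = 2 parameters

lam = exc.mean()                                    # exponential MLE scale
ll_exp = expon.logpdf(exc, scale=lam).sum()
aic_exp = 2 * 1 - 2 * ll_exp                        # k = 1 parameter

print(aic_gpd < aic_exp)
```

On a genuinely heavy tail the GPD wins despite its extra parameter; the paper's contribution is to use AIC computed over the overall samples to pick the threshold itself before the tail fit.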
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1–0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
On the Truncated Pareto Distribution with applications
Zaninetti, Lorenzo; Ferraro, Mario
2008-01-01
The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, and more precisely to the statistical analysis of masses of stars and of diameters of asteroids. In particular a comparison between the usual Pareto distribution and its truncated version is presented. Finally a possible physical mechanism that produces Pareto tails for the distributio...
Record Values of a Pareto Distribution.
Ahsanullah, M.
The record values of the Pareto distribution, labelled Pareto (II) (alpha, beta, nu), are reviewed. The best linear unbiased estimates of the parameters in terms of the record values are provided. The prediction of the sth record value based on the first m (s>m) record values is obtained. A classical Pareto distribution provides reasonably…
Nursamsiah; Nugroho Sugianto, Denny; Suprijanto, Jusup; Munasik; Yulianto, Bambang
2018-02-01
Information on extreme wave height return levels is required for maritime planning and management. The recommended approach for analyzing extreme waves is the Generalized Pareto Distribution (GPD). Seasonal variation is often considered in extreme wave models. This research aims to identify the best GPD model by considering seasonal variation of the extreme waves. Using the 95th percentile as the threshold of extreme significant wave height, seasonal and non-seasonal GPDs were fitted. The Kolmogorov-Smirnov test was applied to assess the goodness of fit of the GPD models. The return values from the seasonal and non-seasonal GPDs were compared, using the definition of the return value as the criterion. The Kolmogorov-Smirnov test shows that the GPD fits the data very well in both the seasonal and non-seasonal models. The seasonal return value gives better information about the wave height characteristics.
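The Kolmogorov-Smirnov check used in this study can be sketched with synthetic wave heights (numpy/scipy assumed). Note that testing against parameters fitted to the same data makes the nominal p-value optimistic; the sketch only illustrates the mechanics.

```python
import numpy as np
from scipy.stats import genpareto, kstest

rng = np.random.default_rng(5)
# Synthetic significant wave heights (m)
hs = 1.5 * rng.weibull(1.3, size=8_000)

u = np.quantile(hs, 0.95)                # 95th percentile threshold
exc = hs[hs > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0)

# Kolmogorov-Smirnov test of the fitted GPD against the exceedances
stat, pvalue = kstest(exc, genpareto(c=xi, scale=sigma).cdf)
print(round(stat, 3), pvalue > 0.05)
```

A seasonal variant would split `hs` by season and repeat the threshold, fit, and test per subset before comparing return values.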
Pareto law and Pareto index in the income distribution of Japanese companies
Ishikawa, Atushi
2004-01-01
In order to study in detail the phenomenon that income distribution follows the Pareto law, we analyze a database of high-income companies in Japan. We find a quantitative relation between the average capital of the companies and the Pareto index: the larger the average capital, the smaller the Pareto index. From this relation, we can possibly explain why the Pareto index of company income distribution hardly changes, while the Pareto index of personal income distribution chang...
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
The exponential age distribution and the Pareto firm size distribution
Coad, Alex
2008-01-01
Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm age, to obtain the empirical Pareto distribution.
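The mechanism in this abstract is easy to reproduce numerically: Brownian (Gibrat-type) log-growth stopped at an exponentially distributed age is asymmetric-Laplace distributed, so firm size is exactly double Pareto. A sketch with illustrative parameters (numpy assumed); under this model the theoretical upper-tail exponent is the positive root of (s²/2)α² + μα = λ, about 0.84 here.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
lam, mu, s = 0.05, 0.05, 0.15       # age rate, Gibrat drift and volatility
age = rng.exponential(scale=1 / lam, size=n)

# Gibrat-type growth: Brownian log-size evaluated at the firm's age;
# Brownian motion stopped at an exponential time is asymmetric Laplace,
# so firm size is exactly double Pareto
log_size = mu * age + s * np.sqrt(age) * rng.standard_normal(n)
size = np.exp(log_size)

# Hill-type MLE of the upper-tail exponent from the top 5,000 firms
tail = np.sort(size)[-5_000:]
alpha_hat = len(tail) / np.log(tail / tail[0]).sum()
print(round(alpha_hat, 2))
```

The estimated exponent should match the theoretical root closely, illustrating how a Gibrat process mixed over exponential ages produces a Pareto tail.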
Word frequencies: A comparison of Pareto type distributions
Wiegand, Martin; Nadarajah, Saralees; Si, Yuancheng
2018-03-01
Mehri and Jamaati (2017) [18] used Zipf's law to model word frequencies in Holy Bible translations for one hundred live languages. We compare the fit of Zipf's law to a number of Pareto type distributions. The latter distributions are shown to provide the best fit, as judged by a number of comparative plots and error measures. The fit of Zipf's law appears generally poor.
Analysis of a Pareto Mixture Distribution for Maritime Surveillance Radar
Directory of Open Access Journals (Sweden)
Graham V. Weinberg
2012-01-01
Full Text Available The Pareto distribution has been shown to be an excellent model for X-band high-resolution maritime surveillance radar clutter returns. Given the success of mixture distributions in radar, it is thus of interest to consider the effect of Pareto mixture models. This paper introduces a formulation of a Pareto intensity mixture distribution and investigates coherent multilook radar detector performance using this new clutter model. Clutter parameter estimates are derived from data sets produced by the Defence Science and Technology Organisation's Ingara maritime surveillance radar.
A Pareto upper tail for capital income distribution
Oancea, Bogdan; Pirjol, Dan; Andrei, Tudorel
2018-02-01
We present a study of the capital income distribution and of its contribution to the total income (capital income share) using individual tax income data in Romania, for 2013 and 2014. Using a parametric representation we show that the capital income is Pareto distributed in the upper tail, with a Pareto coefficient α ≈ 1.44, which is much smaller than the corresponding coefficient for wage- and non-wage-income (excluding capital income), α ≈ 2.53. Including the capital income contribution has the effect of increasing the overall inequality measures.
Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network
Tomohiko Konno
2013-01-01
The firm size distribution is considered to be a Pareto distribution. In the present paper, we show that the Pareto distribution of firm size results from the spillover network model introduced in Konno (2010).
Tsallis-Pareto like distributions in hadron-hadron collisions
International Nuclear Information System (INIS)
Barnafoeldi, G G; Uermoessy, K; Biro, T S
2011-01-01
Non-extensive thermodynamics is a novel approach in high-energy physics. In high-energy heavy-ion, and especially in proton-proton collisions, we are far from a canonical thermal state described by the Boltzmann-Gibbs statistics. In these reactions low and intermediate transverse momentum spectra are extremely well reproduced by the Tsallis-Pareto distribution, but the physical origin of the Tsallis parameters is still an unsettled question. Here, we analyze whether the Tsallis-Pareto energy distribution also describes hadron spectra at high pT. We fitted data measured in proton-proton (proton-antiproton) collisions over a wide center-of-mass energy range, from 200 GeV RHIC up to 7 TeV LHC energies. Furthermore, our test is extended to an investigation of a possible √s-dependence of the power in the Tsallis-Pareto distribution, motivated by QCD evolution equations. We found that Tsallis-Pareto distributions fit high-pT data well over the wide center-of-mass energy range. Deviations from the fits appear at pT > 20-30 GeV/c, especially in the CDF data. Introducing a pT-scaling ansatz, the fits at low and intermediate transverse momenta remain good, and the deviations tend to disappear at the highest-pT data.
The Burr X Pareto Distribution: Properties, Applications and VaR Estimation
Directory of Open Access Journals (Sweden)
Mustafa Ç. Korkmaz
2017-12-01
Full Text Available In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods of the model parameters are performed. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model against the generalized Pareto model for financial institutions.
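The peaks-over-threshold VaR construction mentioned here has a standard closed form. A sketch with synthetic losses and the generalized Pareto model (numpy/scipy assumed; the Burr X Pareto distribution itself is not available in scipy, so the GPD benchmark from the abstract is used):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
# Synthetic daily losses with a heavy right tail
losses = rng.standard_t(df=4, size=10_000)

u = np.quantile(losses, 0.95)            # POT threshold
exc = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0)

# Standard POT Value-at-Risk formula at level p:
#   VaR_p = u + (sigma/xi) * (((n/N_u) * (1 - p))**(-xi) - 1)
n, n_u, p = len(losses), len(exc), 0.99
var99 = u + (sigma / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
print(round(var99, 2))
```

Replacing the GPD with an alternative tail model, as the paper proposes, changes only the fitted excess distribution in this pipeline; the threshold and exceedance-rate bookkeeping stay the same.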
Income inequality in Romania: The exponential-Pareto distribution
Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan
2017-03-01
We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
[Origination of Pareto distribution in complex dynamic systems].
Chernavskiĭ, D S; Nikitin, A P; Chernavskaia, O D
2008-01-01
The Pareto distribution, whose probability density function can be approximated at sufficiently large x as ρ(x) ∝ x^(−α), with α ≥ 2, is of crucial importance from both the theoretical and practical point of view. The main reason is its qualitative distinction from the normal (Gaussian) distribution: the probability of large deviations is significantly higher. The notion of the universal applicability of the Gauss law remains widespread despite the lack of objective confirmation in a variety of application areas. The origin of the Pareto distribution in dynamic systems located in a Gaussian noise field is considered. A simple one-dimensional model is discussed in which the system response over a rather wide interval of the variable can be quite precisely approximated by this distribution.
Accelerated life testing design using geometric process for pareto distribution
Mustafa Kamal; Shazia Zarrin; Arif Ul Islam
2013-01-01
In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for Pareto Distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using Fisher information matrix are also obtained. The statistical properties of the parameters ...
Income dynamics with a stationary double Pareto distribution.
Toda, Alexis Akira
2011-04-01
Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways--(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model--and obtain the same value of 8.4.
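Cross-sectional estimation of a power-law exponent, as in method (i) of this abstract, can be sketched on synthetic double-Pareto data (numpy assumed; the paper's estimate of 8.4 is used as the true upper-tail exponent, and the lower-tail exponent is an illustrative value):

```python
import numpy as np

rng = np.random.default_rng(8)
a, b = 8.4, 3.0        # upper- and lower-tail power-law exponents
n = 200_000

# exp(asymmetric Laplace) is double Pareto: exact power laws in both tails
z = rng.exponential(1 / a, n) - rng.exponential(1 / b, n)
income = np.exp(z)

# Pareto MLE of the upper-tail exponent above a high threshold
x_min = np.quantile(income, 0.99)
tail = income[income > x_min]
alpha_hat = len(tail) / np.log(tail / x_min).sum()
print(round(alpha_hat, 1))
```

The estimator recovers the upper exponent from a single cross section; method (ii) of the abstract instead infers the dynamics parameters from panel data and deduces the stationary exponent.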
Generalized Pareto optimum and semi-classical spinors
Rouleux, M.
2018-02-01
In 1971, S. Smale presented a generalization of the Pareto optimum he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 Hamiltonian ℋ = ℋ(p) ∈ C∞(T*R²) to its normal form near a singular point of the Fresnel surface. Namely, we say that ℋ has the Pareto property if it decomposes, locally, up to conjugation with regular matrices, as ℋ(p) = u′(p)C(p)(u′(p))*, where u : R² → R² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix ("integrating factor"). In particular this applies in certain cases to the matrix Hamiltonian of elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2017-11-01
Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions
Liu, C.; Charpentier, R.R.; Su, J.
2011-01-01
Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
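The rank-plot idea behind the tail-truncated method can be illustrated with a simple log-log regression of rank on size. The shape value and sample size below are assumptions for the sketch, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated "discovered pool sizes" from a Pareto distribution (shape 1.2, assumed)
shape_true = 1.2
sizes = np.sort(rng.pareto(shape_true, 500) + 1.0)[::-1]  # descending order

# For Pareto data, rank(x) ~ n * x^(-shape), so log-rank is linear in log-size
ranks = np.arange(1, sizes.size + 1)
slope, intercept = np.polyfit(np.log(sizes), np.log(ranks), 1)
shape_hat = -slope  # OLS rank-size estimate of the Pareto shape parameter
```

OLS on rank-size plots is known to be biased and unstable in small samples, which is consistent with the paper's observation that estimates only stabilize beyond roughly 100 discoveries.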
PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM
Directory of Open Access Journals (Sweden)
S. Prakash
2012-01-01
Full Text Available
ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP with two objectives, where one objective is linear and the other one is non-linear, has been considered, with the constraints that a job is assigned to only one worker – though he may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers with two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.
AFRIKAANSE OPSOMMING (translated): A multi-objective generalised assignment problem (MGAP) with two objectives, one linear and the other non-linear, is studied, with the constraint that a job is assigned to only one worker, although more than one job may be assigned to him if time is available. An algorithm is proposed to find the set of Pareto-optimal solutions that assigns jobs to workers subject to the two objectives without assigning priorities to them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.
On the size distribution of cities: an economic interpretation of the Pareto coefficient.
Suh, S H
1987-01-01
"Both the hierarchy and the stochastic models of size distribution of cities are analyzed in order to explain the Pareto coefficient by economic variables. In hierarchy models, it is found that the rate of variation in the productivity of cities and that in the probability of emergence of cities can explain the Pareto coefficient. In stochastic models, the productivity of cities is found to explain the Pareto coefficient. New city-size distribution functions, in which the Pareto coefficient is decomposed by economic variables, are estimated." excerpt
Origin of Pareto-like spatial distributions in ecosystems.
Manor, Alon; Shnerb, Nadav M
2008-12-31
Recent studies of cluster distribution in various ecosystems revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that this patch statistics is a manifestation of the law of proportionate effect. Mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (such as desertification) manifest themselves in a drastic change of the stability properties of spatial colonies.
GAO Hongying; WU Kangping
2007-01-01
This paper estimates the Pareto exponent of the city size (population size and economy size) distribution for all provinces and three regions in China in 1997, 2000 and 2003 by OLS, comparatively analyzes the Pareto exponent across sections and over time, and empirically analyzes the factors that impact the Pareto exponents of provinces. Our analyses show that the size distributions of cities in China follow the Pareto distribution and exhibit structural features. Variations in the value of the P...
Directory of Open Access Journals (Sweden)
Achi Rinaldi
2016-06-01
Full Text Available Extreme events such as extreme rainfall have been analyzed and are of major concern for countries all around the world. There are two common distributions for extreme values: the Generalized Extreme Value (GEV) distribution and the Generalized Pareto (GP) distribution. Both have shown good performance in estimating the parameters of extreme values. This research aimed to estimate the parameters of extreme values using the GEV and GP distributions, and also to characterize the effects of extreme events such as floods. The rainfall data were taken from BMKG for 5 locations in DKI Jakarta. Both distributions showed good performance. The results showed that the Tanjung Priok station has the largest location parameter for the GEV and also the largest scale parameter for the GP, implying the greatest probability of flooding as an effect of extreme rainfall.
Houghton, J.C.
1988-01-01
The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. © 1988 International Association for Mathematical Geology.
Vicente-Serrano, S.; Beguería, S.
2003-01-01
This paper analyses fifty-year time series of daily precipitation in a region of the middle Ebro valley (northern Spain) in order to predict extreme dry-spell risk. A comparison of observed and estimated maximum dry spells (50-year return period) showed that the Generalised Pareto (GP)
Using the Pareto Distribution to Improve Estimates of Topcoded Earnings
Philip Armour; Richard V. Burkhauser; Jeff Larrimore
2014-01-01
Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...
Bayesian modeling to paired comparison data via the Pareto distribution
Directory of Open Access Journals (Sweden)
Nasir Abbas
2017-12-01
Full Text Available A probabilistic approach to building models for paired comparison experiments based on the comparison of two Pareto variables is considered. Analysis of the proposed model is carried out in classical as well as Bayesian frameworks. Informative and uninformative priors are employed to accommodate the prior information. A simulation study is conducted to assess the suitability and performance of the model under theoretical conditions. Appropriateness of fit of the model is also assessed. The entire inferential procedure is illustrated by comparing certain cricket teams using a real dataset.
Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks
Directory of Open Access Journals (Sweden)
José Raúl Machado-Fernández
2016-12-01
Full Text Available The main problem faced by naval radars is the elimination of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter, has a low computational cost, and outperforms the classic method based on Maximum Likelihood Estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses knowledge of the clutter statistics to improve the stability of the detection, among other applications.
Distributed approximation of Pareto surfaces in multicriteria radiation therapy treatment planning
International Nuclear Information System (INIS)
Bokrantz, Rasmus
2013-01-01
We consider multicriteria radiation therapy treatment planning by navigation over the Pareto surface, implemented by interpolation between discrete treatment plans. The current state of the art for calculating a discrete representation of the Pareto surface is to sandwich this set between inner and outer approximations that are updated one point at a time. In this paper, we generalize this sequential method to an algorithm that permits parallelization. The principle of the generalization is to apply the sequential method to an approximation of an inexpensive model of the Pareto surface. The information gathered from the model is subsequently used for the calculation of points from the exact Pareto surface, which are processed in parallel. The model is constructed according to the current inner and outer approximations, and given a shape that is difficult to approximate, in order to avoid incorrectly disregarding parts of the Pareto surface. Approximations of comparable quality to those generated by the sequential method are demonstrated when the degree of parallelization is up to twice the number of dimensions of the objective space. For practical applications, the number of dimensions is typically at least five, so that a speed-up of one order of magnitude is obtained.
Directory of Open Access Journals (Sweden)
Gökhan Gökdere
2014-05-01
Full Text Available In this paper, closed-form expressions for the moments of truncated Pareto order statistics are obtained by using the conditional distribution. We also derive some results for the moments which will be useful for moment computations based on ordered data.
Generalized Pareto for Pattern-Oriented Random Walk Modelling of Organisms' Movements.
Bertrand, Sophie; Joo, Rocío; Fablet, Ronan
2015-01-01
How organisms move and disperse is crucial to understand how population dynamics relates to the spatial heterogeneity of the environment. Random walk (RW) models are typical tools to describe movement patterns. Whether Lévy or alternative RW better describes forager movements is keenly debated. We get around this issue using the Generalized Pareto Distribution (GPD). GPD includes as specific cases Normal, exponential and power law distributions, which underlie Brownian, Poisson-like and Lévy walks respectively. Whereas previous studies typically confronted a limited set of candidate models, GPD lets the most likely RW model emerge from the data. We illustrate the wide applicability of the method using GPS-tracked seabird foraging movements and fishing vessel movements tracked by Vessel Monitoring System (VMS), both collected in the Peruvian pelagic ecosystem. The two parameters from the fitted GPD, a scale and a shape parameter, provide a synoptic characterization of the observed movement in terms of characteristic scale and diffusive property. They reveal and quantify the variability, among species and individuals, of the spatial strategies selected by predators foraging on a common prey field. The GPD parameters constitute relevant metrics for (1) providing a synthetic and pattern-oriented description of movement, (2) using top predators as ecosystem indicators and (3) studying the variability of spatial behaviour among species or among individuals with different personalities.
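As a concrete illustration of fitting a GPD to movement data, the sketch below estimates the shape and scale parameters by probability-weighted moments (after Hosking and Wallis, 1987) from synthetic "step lengths". The generating parameters are assumptions for the sketch; this is not the authors' fitting code.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "step lengths": GPD with shape xi = 0.3 (heavy tail) and scale 5.0 (assumed).
# Inverse-CDF sampling: x = scale * ((1-u)^(-xi) - 1) / xi
xi_true, scale_true = 0.3, 5.0
u = rng.uniform(size=50_000)
steps = scale_true * ((1.0 - u) ** (-xi_true) - 1.0) / xi_true

def gpd_pwm(x):
    """Probability-weighted-moment estimates of the GPD (shape, scale),
    following Hosking & Wallis (1987); valid for shape < 1/2."""
    xs = np.sort(x)
    n = xs.size
    a0 = xs.mean()                                  # E[X]
    a1 = np.mean(xs * (n - np.arange(1, n + 1)) / (n - 1.0))  # E[X(1-F(X))]
    k = a0 / (a0 - 2.0 * a1) - 2.0                  # Hosking's k; shape xi = -k
    scale = 2.0 * a0 * a1 / (a0 - 2.0 * a1)
    return -k, scale

xi_hat, scale_hat = gpd_pwm(steps)
```

A positive estimated shape indicates a power-law (Lévy-like) tail, a shape near zero an exponential (Poisson-like) tail, and a negative shape a bounded (Brownian-like) step distribution, which is how the GPD lets the random-walk model emerge from the data.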
Modeling air quality in main cities of Peninsular Malaysia by using a generalized Pareto model.
Masseran, Nurulkamal; Razali, Ahmad Mahir; Ibrahim, Kamarulzaman; Latif, Mohd Talib
2016-01-01
The air pollution index (API) is an important figure used for measuring the quality of air in the environment. The API is determined based on the highest average value of individual indices for all the variables which include sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3), and suspended particulate matter (PM10) at a particular hour. API values that exceed the limit of 100 units indicate an unhealthy status for the exposed environment. This study investigates the risk of occurrences of API values greater than 100 units for eight urban areas in Peninsular Malaysia for the period of January 2004 to December 2014. An extreme value model, known as the generalized Pareto distribution (GPD), has been fitted to the API values found. Based on the fitted model, return period for describing the occurrences of API exceeding 100 in the different cities has been computed as the indicator of risk. The results obtained indicated that most of the urban areas considered have a very small risk of occurrence of the unhealthy events, except for Kuala Lumpur, Malacca, and Klang. However, among these three cities, it is found that Klang has the highest risk. Based on all the results obtained, the air quality standard in urban areas of Peninsular Malaysia falls within healthy limits to human beings.
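The return-period computation for exceedances of API = 100 can be sketched as follows. The threshold, GPD parameters, and exceedance rate here are invented for illustration and are not the fitted values from the study.

```python
# Return period for API > 100 under a GPD tail model fitted above a threshold u.
# All numbers below are illustrative assumptions, not the paper's estimates.
u = 80.0                  # threshold (assumed)
xi, sigma = 0.15, 12.0    # assumed GPD shape and scale
rate = 30.0               # assumed mean number of exceedances of u per year

def return_period(x, u, xi, sigma, rate):
    """Mean waiting time (years) between API values exceeding x, for x > u."""
    p_exceed = (1.0 + xi * (x - u) / sigma) ** (-1.0 / xi)  # GPD survival function
    return 1.0 / (rate * p_exceed)

T100 = return_period(100.0, u, xi, sigma, rate)
```

A short return period for API > 100, as computed here, corresponds to the higher-risk cities identified in the study (Kuala Lumpur, Malacca, Klang); a very long return period corresponds to the healthy-limit cities.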
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
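A minimal sketch of the stitching idea: a Gaussian body joined to a generalized Pareto upper tail at a junction point. Here the junction is fixed by hand and only density continuity is imposed, unlike the full hybrid Pareto of the paper, which determines the junction and tail scale from smoothness conditions; the density below is deliberately left unnormalized.

```python
import math

def hybrid_pareto_pdf(y, mu=0.0, sigma=1.0, xi=0.3, alpha=1.5):
    """Unnormalized hybrid density: Gaussian below the junction point alpha,
    generalized Pareto tail above it, with the GP scale chosen so the two
    pieces meet continuously at alpha. Parameter values are illustrative."""
    def gauss(t):
        return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    if y <= alpha:
        return gauss(y)
    beta = 1.0 / gauss(alpha)  # GP density at its origin is 1/beta; match gauss(alpha)
    return (1.0 / beta) * (1.0 + xi * (y - alpha) / beta) ** (-1.0 / xi - 1.0)
```

The shape parameter xi plays the role of the paper's third parameter controlling tail heaviness: xi = 0 recovers an exponential tail, and larger xi gives a heavier power-law tail.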
An Investigation of the Pareto Distribution as a Model for High Grazing Angle Clutter
2011-03-01
… radar detection schemes under controlled conditions. Complicated clutter models result in mathematical difficulties in the determination of optimal and … a population [7]. It has been used in the modelling of actuarial data; an example is in excess-of-loss quotations in insurance [8]. Its usefulness as … modified Bessel functions, making it difficult to employ in radar detection schemes. The Pareto distribution is amenable to mathematical …
A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2018-02-01
Full Text Available Composite models have received much attention in the recent actuarial literature for describing heavy-tailed insurance loss data. One of the models that performs well in describing this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out estimation of parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.
Computing the Distribution of Pareto Sums Using Laplace Transformation and Stehfest Inversion
Harris, C. K.; Bourne, S. J.
2017-05-01
In statistical seismology, the properties of distributions of total seismic moment are important for constraining seismological models, such as the strain partitioning model (Bourne et al. J Geophys Res Solid Earth 119(12): 8991-9015, 2014). This work was motivated by the need to develop appropriate seismological models for the Groningen gas field in the northeastern Netherlands, in order to address the issue of production-induced seismicity. The total seismic moment is the sum of the moments of individual seismic events, which in common with many other natural processes, are governed by Pareto or "power law" distributions. The maximum possible moment for an induced seismic event can be constrained by geomechanical considerations, but rather poorly, and for Groningen it cannot be reliably inferred from the frequency distribution of moment magnitude pertaining to the catalogue of observed events. In such cases it is usual to work with the simplest form of the Pareto distribution without an upper bound, and we follow the same approach here. In the case of seismicity, the exponent β appearing in the power-law relation is small enough for the variance of the unbounded Pareto distribution to be infinite, which renders standard statistical methods concerning sums of statistical variables, based on the central limit theorem, inapplicable. Determinations of the properties of sums of moderate to large numbers of Pareto-distributed variables with infinite variance have traditionally been addressed using intensive Monte Carlo simulations. This paper presents a novel method for accurate determination of the properties of such sums that is accurate, fast and easily implemented, and is applicable to Pareto-distributed variables for which the power-law exponent β lies within the interval [0, 1]. It is based on shifting the original variables so that a non-zero density is obtained exclusively for non-negative values of the parameter and is identically zero elsewhere, a property
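The Gaver–Stehfest inversion at the core of such an approach can be sketched generically. This is a textbook implementation of the Stehfest algorithm, not the paper's code, and it is checked here on a transform pair with a known inverse rather than on a Pareto sum.

```python
import math

def stehfest_coeffs(N):
    """Stehfest weights V_k for an even number N of terms."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by the Stehfest method."""
    V = stehfest_coeffs(N)
    ln2t = math.log(2.0) / t
    return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))

# Sanity check on a pair with a known inverse: L{e^(-t)}(s) = 1/(s+1)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Stehfest inversion only samples the transform on the positive real axis, which is what makes it fast and easy to implement; the shifting of the Pareto variables described in the abstract is what ensures the transform of the density is well behaved there.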
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
Statistical inferences with jointly type-II censored samples from two Pareto distributions
Abu-Zinadah, Hanaa H.
2017-08-01
In several industrial settings, products come from more than one production line, and comparative life tests are required. This entails sampling from the different production lines, from which a joint censoring scheme arises. In this article we consider Pareto lifetime distributions under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of the proposed method.
Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.
Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel
2014-01-01
Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodel, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
Entropies of negative incomes, Pareto-distributed loss, and financial crises.
Gao, Jianbo; Hu, Jing; Mao, Xiang; Zhou, Mi; Gurbaxani, Brian; Lin, Johnny
2011-01-01
Health monitoring of world economy is an important issue, especially in a time of profound economic difficulty world-wide. The most important aspect of health monitoring is to accurately predict economic downturns. To gain insights into how economic crises develop, we present two metrics, positive and negative income entropy and distribution analysis, to analyze the collective "spatial" and temporal dynamics of companies in nine sectors of the world economy over a 19 year period from 1990-2008. These metrics provide accurate predictive skill with a very low false-positive rate in predicting downturns. The new metrics also provide evidence of phase transition-like behavior prior to the onset of recessions. Such a transition occurs when negative pretax incomes prior to or during economic recessions transition from a thin-tailed exponential distribution to the higher entropy Pareto distribution, and develop even heavier tails than those of the positive pretax incomes. These features propagate from the crisis initiating sector of the economy to other sectors.
Ikefuji, M.; Laeven, R.J.A.; Magnus, J.R.; Muris, C.H.M.
2013-01-01
In searching for an appropriate utility function in the expected utility framework, we formulate four properties that we want the utility function to satisfy. We conduct a search for such a function, and we identify Pareto utility as a function satisfying all four desired properties. Pareto utility
2011-01-01
On how the Italian economist Vilfredo Pareto arrived at his famous principle, and on the influence of this principle on present-day management. According to the Pareto principle, the greater part of our activity does not bring us closer to the result but is a waste of time. Diagram.
Directory of Open Access Journals (Sweden)
Raffaele Federici
2017-08-01
Full Text Available In this search for meaning between the end of an era and a new vision of the world there is, in the two authors, what might be called a betweenness: Pareto, almost a Franco-Italian, and Michels, an Italian-German, indeed more than Italian. Along the fault line represented by the First World War, the two sociologists stand in a double inner relation, Franco-Italian for Pareto and Italian-German for Michels, and an outer relation between the world of yesterday and the world that followed the cataclysm of the First World War, when four colossal empires were dismembered (the Russian, German, Austro-Hungarian and Ottoman Empires), at the same time as Émile Durkheim looked with unease at the disintegration of the old traditional communities, where the sense of the crisis of the age invests not only people and behaviours but the logical world itself. Their correspondence unfolds in the same land: Pareto at Céligny, on Lake Geneva, and Michels at Basel, on the banks of the Rhine. Between the two sociologists there is a deep respect, which would see Robert Michels dedicate, "to the scientist and friend Vilfredo Pareto with veneration", an important work such as "Problemi di sociologia applicata", published only three years after the master's Trattato di Sociologia Generale. In this anthology of essays by Robert Michels, probably composed between 1914 and 1917, in the years of the great cataclysm, indeed conceived before "the installation of that terrible supreme court of cassation of all our ideologies, which is the war", and hence contemporary with the Trattato, the master is cited three times, like Max Weber, but de facto Pareto's presence is continuous. In particular, the reference to the master follows two lines of research: on the one hand the reality of sociological research and its very broad spectrum of analysis, and on the other the theory of the circulation of elites. It is precisely
Directory of Open Access Journals (Sweden)
Владимир Геннадьевич Иванов
2015-12-01
Full Text Available The given article presents research on the evolution of the Russian party system. The chosen methodology is based on the heuristic potential of agent-based modelling. The author analyzes various scenarios of party competition (applying the Pareto distribution) in connection with the recent increase in the number of political parties. In addition, the author predicts the level of ideological diversity of the parties' platforms (applying the principles of the Hotelling distribution) in order to evaluate their potential competitiveness in the struggle for voters.
Active learning of Pareto fronts.
Campigotto, Paolo; Passerini, Andrea; Battiti, Roberto
2014-03-01
This paper introduces the active learning of Pareto fronts (ALP) algorithm, a novel approach to recover the Pareto front of a multiobjective optimization problem. ALP casts the identification of the Pareto front into a supervised machine learning task. This approach enables an analytical model of the Pareto front to be built. The computational effort in generating the supervised information is reduced by an active learning strategy. In particular, the model is learned from a set of informative training objective vectors. The training objective vectors are approximated Pareto-optimal vectors obtained by solving different scalarized problem instances. The experimental results show that ALP achieves an accurate Pareto front approximation with a lower computational effort than state-of-the-art estimation of distribution algorithms and widely known genetic techniques.
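ALP's supervised examples are approximate Pareto-optimal vectors obtained by solving scalarized subproblems; the mechanics of weighted-sum scalarization can be sketched on a toy convex biobjective problem (the problem, grid, and weights below are illustrative, not from the paper):

```python
import numpy as np

# Toy convex biobjective problem on x in [0, 1], both objectives minimized:
# f1(x) = x^2 pulls x toward 0, f2(x) = (x - 1)^2 pulls x toward 1.
def objectives(x):
    return np.array([x**2, (x - 1.0) ** 2])

xs = np.linspace(0.0, 1.0, 1001)   # candidate decision values
front = []
# Weighted-sum scalarization: each weight vector turns the biobjective
# problem into a single-objective one whose minimizer is Pareto optimal.
for w1 in np.linspace(0.0, 1.0, 11):
    w = np.array([w1, 1.0 - w1])
    scores = [float(w @ objectives(x)) for x in xs]
    x_best = xs[int(np.argmin(scores))]
    front.append(objectives(x_best))

front = np.array(front)  # 11 approximate Pareto-optimal objective vectors
```

Each row of `front` would serve as one training objective vector; for non-convex fronts, weighted sums miss points and other scalarizations (e.g. Tchebycheff) are used instead.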
Directory of Open Access Journals (Sweden)
José Raúl Castro
2016-02-01
Full Text Available This paper presents an efficient algorithm to solve the multi-objective (MO voltage control problem in distribution networks. The proposed algorithm minimizes the following three objectives: voltage variation on pilot buses, reactive power production ratio deviation, and generator voltage deviation. This work leverages two optimization techniques: fuzzy logic to find the optimum value of the reactive power of the distributed generation (DG and Pareto optimization to find the optimal value of the pilot bus voltage so that this produces lower losses under the constraints that the voltage remains within established limits. Variable loads and DGs are taken into account in this paper. The algorithm is tested on an IEEE 13-node test feeder and the results show the effectiveness of the proposed model.
Solomon, Sorin; Levy, Moshe
2001-06-01
The LLS stock market model (see Levy, Levy and Solomon, "Microscopic Simulation of Financial Markets: From Investor Behavior to Market Phenomena", Academic Press 2000, for a review) is a model of heterogeneous quasi-rational investors operating in a complex environment about which they have incomplete information. We review the main features of this model and several of its extensions. We study the effects of investor heterogeneity and show that predation, competition, or symbiosis may occur between different investor populations. The dynamics of the LLS model lead to the empirically observed Pareto wealth distribution. Many properties observed in actual markets appear as natural consequences of the LLS dynamics: a truncated Levy distribution of short-term returns, excess volatility, a return autocorrelation "U-shape" pattern, and a positive correlation between volume and absolute returns.
Sun, Kaibiao; Kasperski, Andrzej; Tian, Yuan
2014-10-01
The aim of this study is the optimization of a product-driven self-cycling bioprocess and presentation of a way to determine the best possible decision variables out of a set of alternatives based on the designed model. Initially, a product-driven generalized kinetic model, which allows a flexible choice of the most appropriate kinetics is designed and analysed. The optimization problem is given as the bi-objective one, where maximization of biomass productivity and minimization of unproductive loss of substrate are the objective functions. Then, the Pareto fronts are calculated for exemplary kinetics. It is found that in the designed bioprocess, a decrease of emptying/refilling fraction and an increase of substrate feeding concentration cause an increase of the biomass productivity. An increase of emptying/refilling fraction and a decrease of substrate feeding concentration cause a decrease of unproductive loss of substrate. The preferred solutions are calculated using the minimum distance from an ideal solution method, while giving proposals of their modifications derived from a decision maker's reactions to the generated solutions.
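The "minimum distance from an ideal solution" selection step can be sketched: compute the componentwise-best ideal point over the Pareto front and pick the nearest point after normalization (the front values and the normalization scheme below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Hypothetical Pareto front for (maximize productivity, minimize loss);
# productivity is negated so that both columns are minimized.
front = np.array([
    [-5.0, 0.9],   # highest productivity, highest substrate loss
    [-4.0, 0.5],
    [-3.0, 0.2],
    [-1.0, 0.1],   # lowest productivity, lowest substrate loss
])

ideal = front.min(axis=0)   # componentwise best (unattainable) point
nadir = front.max(axis=0)   # componentwise worst, used for normalization
scaled = (front - ideal) / (nadir - ideal)   # each column mapped to [0, 1]
dist = np.linalg.norm(scaled, axis=1)        # Euclidean distance to ideal
preferred = front[int(np.argmin(dist))]      # -> [-3.0, 0.2]
```

The normalization matters: without it, the objective with the larger numeric range dominates the distance.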
Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.
2017-04-01
Forbes Magazine published its list of leading or strongest publicly-traded two thousand companies in the world (G-2000) based on four independent metrics: sales or revenues, profits, assets and market value. Every one of these wealth metrics yields particular information on the corporate size or wealth size of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power-law in the higher part. These two-class structure per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto is about 49% in sales per employee, and 33% after averaging on the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be adjusted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms in 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, then the 82 points of the aggregate wealth distribution by industry per employee can be well adjusted by quasi-exponential curves for the four metrics.
DEFF Research Database (Denmark)
Mozaffari, Ahmad; Gorji-Bandpy, Mofid; Samadian, Pendar
2013-01-01
Optimizing and controlling complex engineering systems is a phenomenon that has attracted the increasing interest of numerous scientists. Until now, a variety of intelligent optimizing and controlling techniques such as neural networks, fuzzy logic, game theory, support vector machines and stochastic algorithms have been proposed to facilitate the control of engineering systems. In this study, an extended version of the mutable smart bee algorithm (MSBA), called Pareto-based mutable smart bee (PBMSB), is proposed to cope with multi-objective problems. Besides, a set of benchmark problems and four well-known Pareto-based optimizing algorithms, i.e. the multi-objective bee algorithm (MOBA), multi-objective particle swarm optimization (MOPSO) algorithm, non-dominated sorting genetic algorithm (NSGA-II), and strength Pareto evolutionary algorithm (SPEA 2), are utilized to confirm the acceptable...
Diphoton generalized distribution amplitudes
International Nuclear Information System (INIS)
El Beiyad, M.; Pire, B.; Szymanowski, L.; Wallon, S.
2008-01-01
We calculate the leading order diphoton generalized distribution amplitudes by calculating the amplitude of the process γ*γ→γγ in the low energy and high photon virtuality region at the Born order and in the leading logarithmic approximation. As in the case of the anomalous photon structure functions, the γγ generalized distribution amplitudes exhibit a characteristic ln Q² behavior and obey inhomogeneous QCD evolution equations.
Wright, Adam; Bates, David W
2010-01-01
BACKGROUND: Many natural phenomena demonstrate power-law distributions, where very common items predominate. Problems, medications and lab results represent some of the most important data elements in medicine, but their overall distribution has not been reported. OBJECTIVE: Our objective is to determine whether problems, medications and lab results demonstrate a power law distribution. METHODS: Retrospective review of electronic medical record data for 100,000 randomly selected patients seen at least twice in 2006 and 2007 at Brigham and Women's Hospital in Boston and its affiliated medical practices. RESULTS: All three data types exhibited a power law distribution. The 12.5% most frequently used problems account for 80% of all patient problems, the top 11.8% of medications account for 80% of all medication orders and the top 4.5% of lab result types account for 80% of all lab results. CONCLUSION: These three data elements exhibited power law distributions, with a small number of common items representing a substantial proportion of all orders and observations, which has implications for electronic health record design.
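The "top k% account for 80%" figures come from sorting items by frequency and accumulating counts; a minimal sketch on synthetic Zipf-like counts (the data and the Zipf exponent are illustrative stand-ins, not the hospital data):

```python
import numpy as np

def top_fraction_for_share(counts, share=0.80):
    """Smallest fraction of items whose counts cover `share` of the total."""
    c = np.sort(np.asarray(counts))[::-1]        # most frequent first
    cum = np.cumsum(c) / c.sum()
    k = int(np.searchsorted(cum, share)) + 1     # number of items needed
    return k / len(c)

# Synthetic Zipf-like usage counts standing in for problem/medication codes.
rng = np.random.default_rng(0)
counts = rng.zipf(3.0, size=10_000)

frac_80 = top_fraction_for_share(counts, 0.80)   # fraction covering 80%
frac_50 = top_fraction_for_share(counts, 0.50)
```

For a genuine power law, `frac_80` comes out far below 0.8, which is exactly the concentration the abstract reports.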
Directory of Open Access Journals (Sweden)
Sergey E. Bukhtoyarov
2005-05-01
Full Text Available A multicriterion linear combinatorial problem with a parametric principle of optimality is considered. This principle is defined by a partitioning of the partial criteria into groups, with the Pareto preference relation within each group and the lexicographic preference relation between the groups. Quasistability of the problem is investigated. This type of stability is a discrete analog of Hausdorff lower semi-continuity of the multiple-valued mapping that defines the choice function. A formula for the quasistability radius is derived for the case of the metric l∞. Some known results are stated as corollaries. Mathematics Subject Classification 2000: 90C05, 90C10, 90C29, 90C31.
Transmuted Generalized Inverse Weibull Distribution
Merovci, Faton; Elbatal, Ibrahim; Ahmed, Alaa
2013-01-01
A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We will use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions taking the generalized inverse Weibull distribution as the base value distribution by introducing a new parameter that would offer more distributional flexibility. Various structural properties including explicit expression...
Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier
2011-03-01
Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated with the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power-laws, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.
Stress-strength reliability for general bivariate distributions
Directory of Open Access Journals (Sweden)
Alaa H. Abdel-Hamid
2016-10-01
Full Text Available An expression for the stress-strength reliability R = P(X1 < X2)
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
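The lognormal-body/Pareto-tail structure can be probed with off-the-shelf likelihood fits; this is not the paper's maximum entropy test, only a hedged sketch of the standard tail comparison it is benchmarked against, run on synthetic spliced data (splice point and parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic data: lognormal body with a Pareto tail spliced on top,
# mimicking the mixture structure discussed above.
body = rng.lognormal(mean=0.0, sigma=1.0, size=9_000)
tail = (rng.pareto(2.5, size=1_000) + 1.0) * np.quantile(body, 0.99)
data = np.concatenate([body, tail])

# Compare the two candidate models on the upper tail only.
u = np.quantile(data, 0.95)
x = data[data > u]

# Pareto fit above the threshold u (location 0, scale fixed at u).
b, loc, scale = stats.pareto.fit(x, floc=0.0, fscale=u)
ll_pareto = stats.pareto.logpdf(x, b, loc, scale).sum()

# Lognormal fit to the same tail sample.
s, loc2, scale2 = stats.lognorm.fit(x, floc=0.0)
ll_lognorm = stats.lognorm.logpdf(x, s, loc2, scale2).sum()
```

Comparing `ll_pareto` against `ll_lognorm` is the naive version of the model choice; the paper's point is that such comparisons depend on the chosen threshold, which its maximum entropy test is designed to avoid.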
Directory of Open Access Journals (Sweden)
S. N. Syed Nasir
2018-01-01
Full Text Available This research focuses on optimal placement and sizing of multiple variable passive filters (VPF) to mitigate harmonic distortion due to charging stations (CS) in a 449-bus distribution network. There are 132 units of CS which are scheduled based on user behaviour within 24 hours, at 15-minute intervals. By considering the varying CS patterns and harmonic impact, a Modified Lightning Search Algorithm (MLSA) is used to find a coordination of 22 units of VPF, so that fewer harmonics are injected from the 415 V bus into the medium voltage network and power loss is also reduced. The power system harmonic flow, VPF, CS, battery, and the analysis are modelled in the MATLAB/m-file platform. High Performance Computing (HPC) is used to speed up the simulation. A Pareto-Fuzzy technique is used to obtain the sizing of VPF from all nondominated solutions. From the results, the optimal placements and sizes of VPF are able to reduce the maximum THD for voltage and current and also the total apparent losses by up to 39.14%, 52.5%, and 2.96%, respectively. Therefore, it can be concluded that the MLSA is a suitable method to mitigate harmonics and is beneficial in minimizing the impact of aggressive CS installation in the distribution network.
A Pareto scale-inflated outlier model and its Bayesian analysis
Scollnik, David P. M.
2016-01-01
This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - that may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on saving propensities. The system relaxation for fixed and distributed saving schemes are found to be different.
A heavy-traffic theorem for the GI/G/1 queue with a Pareto-type service time distribution
J.W. Cohen
1997-01-01
For the $GI/G/1$ queueing model with traffic load $a<1$, service time distribution $B(t)$ and interarrival time distribution $A(t)$, the following holds whenever, for $t \rightarrow \infty$: $$1-B(t) \sim \frac{c}{(t/\beta)^{\nu}} + \mathrm{O}(\mathrm{e}^{-\delta t}), \quad c>0, \quad 1<\nu<2, \quad \delta>0$$
Ghosh, Indranil
2011-01-01
Consider a discrete bivariate random variable (X, Y) with possible values x_1, x_2, ..., x_I for X and y_1, y_2, ..., y_J for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and of Y given values of X, are available. We…
International Nuclear Information System (INIS)
Agterberg, Frits
2017-01-01
Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that
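The step of determining Pareto coefficients "by fitting straight lines on log rank–log size plots" can be sketched; synthetic Pareto "deposit sizes" stand in for the worldwide inventory, and the tail cut-off of 500 points is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true = 1.5
# Synthetic "deposit sizes" with a Pareto upper tail: P(X > x) ~ x**(-alpha).
sizes = rng.pareto(alpha_true, size=5_000) + 1.0

# Log rank-log size plot: rank r of the r-th largest deposit vs its size.
s = np.sort(sizes)[::-1]
ranks = np.arange(1, len(s) + 1)

# Straight-line fit log(rank) = c - alpha * log(size) on the upper tail.
k = 500                                  # arbitrary tail cut-off
slope, intercept = np.polyfit(np.log(s[:k]), np.log(ranks[:k]), 1)
alpha_hat = -slope                       # Pareto coefficient estimate
```

Ordinary least squares on the rank plot is known to be somewhat biased for tail exponents; it is shown here because it is the method the abstract describes, not because it is the best estimator.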
Axiomatizations of Pareto Equilibria in Multicriteria Games
Voorneveld, M.; Vermeulen, D.; Borm, P.E.M.
1997-01-01
We focus on axiomatizations of the Pareto equilibrium concept in multicriteria games based on consistency. Axiomatizations of the Nash equilibrium concept by Peleg and Tijs (1996) and Peleg, Potters, and Tijs (1996) have immediate generalizations. The axiomatization of Norde et al. (1996) cannot be
Kullback-Leibler divergence and the Pareto-Exponential approximation.
Weinberg, G V
2016-01-01
Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
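The Kullback-Leibler comparison between the two clutter models can be sketched numerically; the Lomax (Pareto type II) parameterization below is an assumption, chosen because it is common in clutter modelling, and may not match the paper's exact form:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kl_pareto_exponential(shape, scale, rate):
    """Numerical KL divergence D(Pareto || Exponential).

    The Pareto here is the Lomax (Pareto type II) form on [0, inf); matching
    the paper's parameterization is an assumption.
    """
    def integrand(x):
        p = stats.lomax.pdf(x, shape, scale=scale)
        log_p = stats.lomax.logpdf(x, shape, scale=scale)
        log_q = stats.expon.logpdf(x, scale=1.0 / rate)
        return p * (log_p - log_q)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

# Lomax(shape=a, scale=a) tends to Exp(1) as a grows, so the divergence
# should shrink; a heavy-tailed Lomax stays far from any exponential.
d_heavy = kl_pareto_exponential(shape=1.5, scale=1.0, rate=1.0)
d_light = kl_pareto_exponential(shape=20.0, scale=20.0, rate=1.0)
```

A small divergence signals that the simpler exponential-clutter detector can be used with little loss, which is the practical question the abstract raises.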
Pareto optimality in organelle energy metabolism analysis.
Angione, Claudio; Carapezza, Giovanni; Costanza, Jole; Lió, Pietro; Nicosia, Giuseppe
2013-01-01
In low and high eukaryotes, energy is collected or transformed in compartments, the organelles. The rich variety of size, characteristics, and density of the organelles makes it difficult to build a general picture. In this paper, we make use of the Pareto-front analysis to investigate the optimization of energy metabolism in mitochondria and chloroplasts. Using the Pareto optimality principle, we compare models of organelle metabolism on the basis of single- and multiobjective optimization, approximation techniques (the Bayesian Automatic Relevance Determination), robustness, and pathway sensitivity analysis. Finally, we report the first analysis of the metabolic model for the hydrogenosome of Trichomonas vaginalis, which is found in several protozoan parasites. Our analysis has shown the importance of the Pareto optimality for such comparison and for insights into the evolution of the metabolism from cytoplasmic to organelle bound, involving a model order reduction. We report that Pareto fronts represent an asymptotic analysis useful to describe the metabolism of an organism aimed at maximizing concurrently two or more metabolite concentrations.
Monopoly, Pareto and Ramsey mark-ups
Ten Raa, T.
2009-01-01
Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.
Approximating convex Pareto surfaces in multiobjective radiotherapy planning
International Nuclear Information System (INIS)
Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.
2006-01-01
Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-by-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing.
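The dominance test behind the definition of Pareto optimality, for objectives all to be minimized, can be sketched (the plan scores below are hypothetical):

```python
import numpy as np

def pareto_optimal_mask(points):
    """Boolean mask of non-dominated rows; all objectives are minimized.

    Row i is dominated if some row j is <= in every column and < in at
    least one. The O(n^2) scan is fine for plan-database sizes.
    """
    pts = np.asarray(points, dtype=float)
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        for j in range(len(pts)):
            if i != j and (pts[j] <= pts[i]).all() and (pts[j] < pts[i]).any():
                mask[i] = False
                break
    return mask

# Hypothetical (tumor underdose, critical organ dose) scores for four plans.
plans = [(0.1, 0.9), (0.3, 0.4), (0.2, 0.8), (0.5, 0.5)]
mask = pareto_optimal_mask(plans)   # last plan is dominated by (0.3, 0.4)
```

Filtering a plan database with such a mask leaves exactly the Pareto optimal surface the abstract describes.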
Identification of Climate Change with Generalized Extreme Value (GEV) Distribution Approach
International Nuclear Information System (INIS)
Rahayu, Anita
2013-01-01
Extreme weather and climate change are events that are difficult to avoid and exert considerable influence on humans and the environment. Many such problems require knowledge of the behavior of extreme values, and one of the methods used is Extreme Value Theory (EVT). EVT is used to design systems that remain reliable under a variety of conditions, so as to minimize the risk of a major disaster. There are two methods for identifying extreme values: Block Maxima, with the Generalized Extreme Value (GEV) distribution approach, and Peaks over Threshold (POT), with the Generalized Pareto Distribution (GPD) approach. This research examines Indramayu over the period January 1961-December 2003, using the Block Maxima method with the GEV distribution approach. The results show no climate change in Indramayu over this period.
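The Block Maxima/GEV pipeline can be sketched with scipy; the synthetic daily series below merely stands in for the Indramayu record, which is not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic daily rainfall, 43 "years" x 365 days (stand-in data).
daily = rng.gamma(shape=0.5, scale=10.0, size=(43, 365))

# Block Maxima: keep one annual maximum per year.
annual_max = daily.max(axis=1)

# Fit the GEV; scipy's shape parameter `c` indexes the GEV family
# (c = 0 is the Gumbel case).
c, loc, scale = stats.genextreme.fit(annual_max)

# 100-year return level = 0.99 quantile of the fitted GEV.
rl_100 = stats.genextreme.ppf(0.99, c, loc, scale)
```

Trend tests on the fitted parameters over sub-periods would then address the climate change question; the fit itself is only the first step.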
Multi-choice stochastic transportation problem involving general form of distributions.
Quddoos, Abdul; Ull Hasan, Md Gulzar; Khalid, Mohammad Masood
2014-01-01
Many authors have presented studies of the multi-choice stochastic transportation problem (MCSTP) where availability and demand parameters follow a particular probability distribution (such as exponential, Weibull, Cauchy or extreme value). In this paper an MCSTP is considered where availability and demand parameters follow a general form of distribution, and a generalized equivalent deterministic model (GMCSTP) of the MCSTP is obtained. It is also shown that all previous models obtained by different authors can be deduced with the help of the GMCSTP. MCSTPs with Pareto, power function or Burr-XII distributions are also considered and equivalent deterministic models are obtained. To illustrate the proposed model two numerical examples are presented and solved using the LINGO 13.0 software package.
Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.
Otero-Muras, Irene; Banga, Julio R
2017-07-21
In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
DEFF Research Database (Denmark)
Bligaard, Thomas; Johannesson, Gisli Holmar; Ruban, Andrei
2003-01-01
Large databases that can be used in the search for new materials with specific properties remain an elusive goal in materials science. The problem is complicated by the fact that the optimal material for a given application is usually a compromise between a number of materials properties and the cost. In this letter we present a database consisting of the lattice parameters, bulk moduli, and heats of formation for over 64 000 ordered metallic alloys, which has been established by direct first-principles density-functional-theory calculations. Furthermore, we use a concept from economic theory, the Pareto-optimal set, to determine optimal alloy solutions for the compromise between low compressibility, high stability, and cost.
Pareto optimal pairwise sequence alignment.
DeRonne, Kevin W; Karypis, George
2013-01-01
Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
Pareto vs Simmel: residui ed emozioni
Directory of Open Access Journals (Sweden)
Silvia Fornari
2017-08-01
Full Text Available One hundred years after the publication of the Trattato di sociologia generale (Pareto 1988), we keep the study of Pareto alive and current with a contemporary rereading of his thought. Remembered by economists for his great intellectual versatility, he remains the rigorous and analytical scientist whose contributions are still discussed at the international level. We analyse the aspects that led him to approach sociology, with the introduction of his well-known distinction of social action into logical and non-logical. This dichotomy was used to account for social changes concerning the modes of action of men and women. As is well known, logical actions are those concerning behaviour moved by logic and reason, in which there is a direct cause-effect relation; such actions are the object of study of economists, and sociologists do not deal with them. Non-logical actions concern all the types of human action that fall within the domain of the social sciences, and they represent the larger part of social action. They are the actions guided by sentiments, by emotionality, by superstition, etc., illustrated by Pareto in the Trattato di sociologia generale and in later essays, where he also takes up the concept of the heterogenesis of ends, first formulated by Giambattista Vico. According to this concept, human history, while preserving in potential the realization of certain ends, is not linear, and along its evolutionary path it can happen that man, in attempting to reach one end, arrives at opposite conclusions. Pareto connects the Neapolitan philosopher's definition to the typologies of social action and to their distinction (logical, non-logical). For Pareto, the heterogenesis of ends is thus the outcome of a particular type of non-logical action of the human being and of the collectivity.
The Transmuted Generalized Inverse Weibull Distribution
Directory of Open Access Journals (Sweden)
Faton Merovci
2014-05-01
Full Text Available A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties, including explicit expressions for the moments, quantiles, and moment generating function of the new distribution, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the generalized inverse Weibull distribution.
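The quadratic rank transmutation map can be sketched directly; here the base CDF is an illustrative two-parameter inverse Weibull, not necessarily the exact parameterization used in the paper:

```python
import math

def inverse_weibull_cdf(x, theta=1.0, beta=2.0):
    # Illustrative two-parameter inverse Weibull CDF:
    # F(x) = exp(-(theta / x)**beta), x > 0.
    return math.exp(-((theta / x) ** beta))

def qrtm_cdf(base_cdf, lam):
    # Quadratic rank transmutation map (QRTM):
    #   G(x) = (1 + lam) * F(x) - lam * F(x)**2,  -1 <= lam <= 1.
    # lam = 0 recovers the base distribution F.
    def g(x):
        f = base_cdf(x)
        return (1.0 + lam) * f - lam * f * f
    return g
```

For |lam| <= 1 the derivative G'(x) = f(x)(1 + lam - 2*lam*F(x)) is non-negative, so G is a valid CDF whenever F is.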
Pareto joint inversion of 2D magnetotelluric and gravity data
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description using a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO), adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a stochastic global optimization method, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion has several advantages. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, a whole set of solutions is created in one optimization run, providing a choice of the final solution; this choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. Presented results were obtained for synthetic models, imitating real geological conditions, where
Approximations of the Generalized Wilks' Distribution
Raats, V.M.
2004-01-01
Wilks' lambda and the corresponding Wilks' distribution are well-known concepts in testing in multivariate regression models. The topic of this paper is a generalization of the Wilks distribution. This generalized Wilks' distribution is relevant for testing in multivariate regression models with
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
DEFF Research Database (Denmark)
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
Strong Convergence Bound of the Pareto Index Estimator under Right Censoring
Directory of Open Access Journals (Sweden)
Peng Zuoxiang
2010-01-01
Full Text Available Let {Xn, n≥1} be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F(x) = 1 − x^(−1/γ) lF(x) with γ > 0, where lF(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right censored Pareto index estimator under second-order regularly varying conditions.
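For orientation, the classical (uncensored) Hill estimator of the index γ of such a Pareto-type tail can be sketched as follows; the right-censored estimator studied in the note is more involved, and the sample here is a simulated strict Pareto:

```python
import math
import random

def hill_estimator(sample, k):
    # Classical Hill estimator of the extreme-value index gamma of a
    # Pareto-type tail, based on the k largest order statistics.
    xs = sorted(sample, reverse=True)
    threshold = xs[k]                      # X_(n-k), the (k+1)-th largest
    return sum(math.log(x) - math.log(threshold) for x in xs[:k]) / k

# Strict Pareto tail: X = (1 - U)**(-gamma) has F(x) = 1 - x**(-1/gamma).
random.seed(0)
gamma = 0.5
sample = [(1.0 - random.random()) ** (-gamma) for _ in range(20000)]
est = hill_estimator(sample, k=500)        # should be close to gamma
```

The choice of k trades bias (large k, slowly varying part intrudes) against variance (small k, few order statistics).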
Characterizations of Generalized Hyperexponential Distributions.
1985-05-01
There are also additional constraints that are not so obvious. Lukacs and Szász [1951] have shown that one of the roots with greatest real part must be real... The lemma is found in Kammler [1976] and is based upon the Müntz-Szász theorem (see Cheney [1966])... Convergence, 2nd ed. New York: Academic. Lukacs, E. and Szász, O. (1951). Certain Fourier Transforms of Distributions. Canadian Journal of Mathematics, 3, 140.
The use of generalized functions and distributions in general relativity
International Nuclear Information System (INIS)
Steinbauer, R; Vickers, J A
2006-01-01
We review the extent to which one can use classical distribution theory in describing solutions of Einstein's equations. We show that there are a number of physically interesting cases which cannot be treated using distribution theory but require a more general concept. We describe a mathematical theory of nonlinear generalized functions based on Colombeau algebras and show how this may be applied in general relativity. We end by discussing the concept of singularity in general relativity and show that certain solutions with weak singularities may be regarded as distributional solutions of Einstein's equations. (topical review)
The application of analytical methods to the study of Pareto - optimal control systems
Directory of Open Access Journals (Sweden)
I. K. Romanova
2014-01-01
Full Text Available The subject of this article is multicriteria optimization methods and their application to the parametric synthesis of dual-loop control systems when the individual criteria are inconsistent. The basis for solving multicriteria problems is the fundamental principle of multicriteria choice, the Edgeworth-Pareto principle. Obtaining Pareto-optimal variants in the presence of inconsistent individual criteria does not by itself yield a final decision: the resulting set only offers options to the designer (the decision maker). An important issue with traditional numerical methods is their computational cost. An example is the use of parameter-space probing methods, including those based on uniform grids and uniformly distributed sequences. Computer approximation of the Pareto boundary is also a very complex computational task. The purpose of this work is the development of fairly simple methods for finding Pareto-optimal solutions for the case of criteria given in analytical form. The proposed solution is based on studying the properties of the analytical dependences of the criteria. A case not covered so far in the literature is considered, namely a problem topology in which the indifference curves (level lines) do not touch. It is shown that compromise solutions can be identified for such problems. The angular position of the antigradient to the indifference curves in the parameter space, relative to the coordinate axes, is used. Propositions on the comonotonicity and contramonotonicity characteristics and on the angular characteristics of the antigradient are formulated to determine Pareto-optimal solutions. A general calculation algorithm is considered: determine the range of permissible parameter values; investigate the comonotonicity and contramonotonicity properties; build the level (indifference) curves; determine the touch type: one-sided (the task is not strictly multicriteria) or two-sided (the task relates to the Pareto
Experiments to Distribute Map Generalization Processes
Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas
2018-05-01
Automatic map generalization requires the use of computationally intensive processes that are often unable to deal with large datasets. Distributing the generalization process is the only way to make it scalable and usable in practice. But map generalization is a highly contextual process, and the surroundings of a generalized map feature need to be known to generalize the feature, which is a problem as distribution may partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past propositions to distribute map generalization and to identify the main remaining issues. The past propositions to distribute map generalization are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective in taking context into account. The geographical partitioning, though less effective for now, is quite promising regarding the quality of the results, as it better integrates the geographical context.
From form factors to generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Diehl, Markus
2013-06-15
I present an extraction of generalized parton distributions from selected data on the electromagnetic nucleon form factors. The extracted distributions can in particular be used to quantify the contribution to the proton spin from the total angular momentum carried by valence quarks, as well as their transverse spatial distribution inside the proton.
Stable power laws in variable economies; Lotka-Volterra implies Pareto-Zipf
Solomon, S.; Richmond, P.
2002-05-01
In recent years we have found that logistic systems of the Generalized Lotka-Volterra type (GLV), describing statistical systems of auto-catalytic elements, possess power-law distributions of the Pareto-Zipf type. In particular, when applied to economic systems, GLV leads to power laws in the relative individual wealth distribution and in market returns. These power laws and their exponent α are invariant to arbitrary variations in the total wealth of the system and to other endogenously and exogenously induced variations.
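A minimal sketch of a GLV-type update, with illustrative parameter values (uniform multiplicative noise, equal redistribution and saturation couplings), shows the kind of simulation involved; whether a clean Pareto-Zipf tail emerges depends on the parameter choices:

```python
import random

def glv_step(w, a=0.1, c=0.1, lo=0.9, hi=1.1, rng=random):
    # One synchronous GLV-type update:
    #   w_i <- lam_i * w_i + a * wbar - c * wbar * w_i,
    # with i.i.d. multiplicative noise lam_i ~ Uniform(lo, hi).
    wbar = sum(w) / len(w)
    return [rng.uniform(lo, hi) * wi + a * wbar - c * wbar * wi for wi in w]

random.seed(1)
w = [1.0] * 500
for _ in range(1000):
    w = glv_step(w)

# The claimed invariant object is the distribution of relative wealth
# v_i = w_i / wbar, not the total wealth itself.
wbar = sum(w) / len(w)
rel = sorted((wi / wbar for wi in w), reverse=True)
```

With these couplings the mean wealth is attracted to a/c, so the relative-wealth distribution can be inspected (e.g. on a log-log rank plot) for a Pareto tail.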
Designing Pareto-superior demand-response rate options
International Nuclear Information System (INIS)
Horowitz, I.; Woo, C.K.
2006-01-01
We explore three voluntary service options-real-time pricing, time-of-use pricing, and curtailable/interruptible service-that a local distribution company might offer its customers in order to encourage them to alter their electricity usage in response to changes in the electricity-spot-market price. These options are simple and practical, and make minimal information demands. We show that each of the options is Pareto-superior ex ante, in that it benefits both the participants and the company offering it, while not affecting the non-participants. The options are shown to be Pareto-superior ex post as well, except under certain exceptional circumstances. (author)
Pareto-Zipf law in growing systems with multiplicative interactions
Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi
2018-06-01
Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.
Multi-agent Pareto appointment exchanging in hospital patient scheduling
I.B. Vermeulen (Ivan); S.M. Bohte (Sander); D.J.A. Somefun (Koye); J.A. La Poutré (Han)
2007-01-01
We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment
Multi-agent Pareto appointment exchanging in hospital patient scheduling
Vermeulen, I.B.; Bohté, S.M.; Somefun, D.J.A.; Poutré, La J.A.
2007-01-01
We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment exchanging algorithm:
COMPROMISE, OPTIMAL AND TRACTIONAL ACCOUNTS ON PARETO SET
Directory of Open Access Journals (Sweden)
V. V. Lahuta
2010-11-01
Full Text Available The problem of optimal traction calculations is considered as a problem of optimal distribution of a resource. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).
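The step-by-step accumulation of Pareto-optimal (energy, time) pairs can be sketched as a dynamic program with dominance pruning; the stage costs are hypothetical, not traction data:

```python
def prune(points):
    # Keep Pareto-minimal (energy, time) pairs: p dominates q if
    # p <= q componentwise with at least one strict inequality.
    front, best_time = [], float("inf")
    for e, t in sorted(set(points)):     # ascending energy, then time
        if t < best_time:
            front.append((e, t))
            best_time = t
    return front

def dp_pareto(stages):
    # Each stage offers alternative (energy, time) costs; dominated
    # partial totals are pruned after every stage.
    front = [(0, 0)]
    for options in stages:
        front = prune([(e + de, t + dt) for e, t in front for de, dt in options])
    return front
```

Pruning after every stage is what keeps the label sets small, which is the point of the Pareto-set formulation.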
New generalized functions and multiplication of distributions
International Nuclear Information System (INIS)
Colombeau, J.F.
1984-01-01
Since its conception, Quantum Field Theory is based on 'heuristic' computations (in particular products of distributions) that, despite lots of effort, remained meaningless from a mathematical viewpoint. In this book the author presents a new mathematical theory giving a rigorous mathematical sense to these heuristic computations and, from a mathematical viewpoint, to all products of distributions. This new mathematical theory is a new theory of Generalized Functions defined on any open subset Ω of Rsup(n), which are much more general than the distributions on Ω. (Auth.)
Unraveling hadron structure with generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Andrei Belitsky; Anatoly Radyushkin
2004-10-01
The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.
Transmuted New Generalized Inverse Weibull Distribution
Directory of Open Access Journals (Sweden)
Muhammad Shuaib Khan
2017-06-01
Full Text Available This paper introduces the transmuted new generalized inverse Weibull distribution by using the quadratic rank transmutation map (QRTM scheme studied by Shaw et al. (2007. The proposed model contains twenty-three lifetime distributions as special sub-models. Some mathematical properties of the new distribution are derived, such as the quantile function, Rényi entropy, mean deviations, moments, moment generating function and order statistics. The method of maximum likelihood is used for estimating the model parameters. We illustrate the flexibility and potential usefulness of the new distribution by using reliability data.
An Evolutionary Efficiency Alternative to the Notion of Pareto Efficiency
I.P. van Staveren (Irene)
2012-01-01
textabstractThe paper argues that the notion of Pareto efficiency builds on two normative assumptions: the more general consequentialist norm of any efficiency criterion, and the strong no-harm principle of the prohibition of any redistribution during the economic process that hurts at least one
Pareto optimization in algebraic dynamic programming.
Saule, Cédric; Giegerich, Robert
2015-01-01
Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all solutions for which no other candidate solution scores better under all objectives. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. Pareto optimization naturally occurs with genetic algorithms, albeit in a heuristic fashion. Non-heuristic Pareto optimization so far has been used only with a few applications in bioinformatics. We study exact Pareto optimization for two objectives in a dynamic programming framework. We define a binary Pareto product operator [Formula: see text] on arbitrary scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme [Formula: see text] correctly performs Pareto optimization over the same search space. We study different implementations of the Pareto operator with respect to their asymptotic and empirical efficiency. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective. For RNA structure prediction under the minimum free energy versus the maximum expected accuracy model, we show that the empirical size of the Pareto front remains within reasonable bounds. Pareto optimization lends itself to the comparative investigation of the behavior of two alternative scoring schemes for the same purpose. For the above scoring schemes, we observe that the Pareto front can be seen as a composition of a few macrostates, each consisting of several microstates that differ in the same limited way. We also study the relationship between abstract shape analysis and the Pareto front, and find that they extract information of a different nature from the folding space and can be meaningfully combined.
Determining the distribution of fitness effects using a generalized Beta-Burr distribution.
Joyce, Paul; Abdo, Zaid
2017-07-12
In Beisel et al. (2007), a likelihood framework, based on extreme value theory (EVT), was developed for determining the distribution of fitness effects for adaptive mutations. In this paper we extend this framework beyond the extreme distributions and develop a likelihood framework for testing whether or not extreme value theory applies. By making two simple adjustments to the Generalized Pareto Distribution (GPD) we introduce a new simple five-parameter probability density function that incorporates nearly every common (continuous) probability model ever used. This means that all of the common models are nested. This has important implications for model selection beyond determining the distribution of fitness effects. We demonstrate the use of this distribution utilizing likelihood ratio testing to evaluate alternative distributions to the Gumbel and Weibull domains of attraction of fitness effects. We use a bootstrap strategy, utilizing importance sampling, to determine where in the parameter space the test will be most powerful in detecting deviations from these domains, and at what sample size, with a focus on small sample sizes (n<20). Our results indicate that the likelihood ratio test is most powerful in detecting deviation from the Gumbel domain when the shape parameters of the model are small, while the test is more powerful in detecting deviations from the Weibull domain when these parameters are large. As expected, an increase in sample size improves the power of the test. This improvement is observed to occur quickly, with sample size n≥10 in tests related to the Gumbel domain and n≥15 in the case of the Weibull domain. This manuscript is in tribute to the contributions of Dr. Paul Joyce to the areas of Population Genetics, Probability Theory and Mathematical Statistics. A Tribute section is provided at the end that includes Paul's original writing in the first iterations of this manuscript. The Introduction and Alternatives to the GPD sections
An introduction to the Generalized Parton Distributions
International Nuclear Information System (INIS)
Michel Garcon
2002-01-01
The concepts of Generalized Parton Distributions (GPD) are reviewed in an introductory and phenomenological fashion. These distributions provide a rich and unifying picture of the nucleon structure. Their physical meaning is discussed. The GPD are in principle measurable through exclusive deeply virtual production of photons (DVCS) or of mesons (DVMP). Experiments are starting to test the validity of these concepts. First results are discussed and new experimental projects presented, with an emphasis on this program at Jefferson Lab
Generalized Analysis of a Distribution Separation Method
Directory of Open Access Journals (Sweden)
Peng Zhang
2016-04-01
Full Text Available Separating two probability distributions from a mixture model that is made up of the combinations of the two is essential to a wide range of applications. For example, in information retrieval (IR, there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF, where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM’s theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence, the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence. Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF approach. We prove that MMF also complies with the linear combination assumption, and then, DSM’s linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results demonstrate the advantages of our DSM approach.
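DSM's linear separation step can be sketched for discrete distributions; this sketch assumes the mixing weight is already known, whereas DSM also estimates it, and the outcome labels are hypothetical:

```python
def separate(mixture, seed, lam):
    # Recover the hidden component R of a discrete mixture
    #   M(k) = lam * seed(k) + (1 - lam) * R(k),
    # assuming the mixing weight lam is known (DSM's estimation of lam
    # is omitted here). Distributions are dicts: outcome -> probability.
    keys = set(mixture) | set(seed)
    r = {k: (mixture.get(k, 0.0) - lam * seed.get(k, 0.0)) / (1.0 - lam)
         for k in keys}
    r = {k: max(v, 0.0) for k, v in r.items()}   # clip numerical negatives
    z = sum(r.values())
    return {k: v / z for k, v in r.items()}
```

When the linear combination assumption holds exactly, the subtraction recovers the hidden component without any iterative EM steps.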
Existence of pareto equilibria for multiobjective games without compactness
Shiraishi, Yuya; Kuroiwa, Daishi
2013-01-01
In this paper, we investigate the existence of Pareto and weak Pareto equilibria for multiobjective games without compactness. By employing an existence theorem of Pareto equilibria due to Yu and Yuan ([10]), several existence theorems of Pareto and weak Pareto equilibria for the multiobjective games are established in a similar way to Flores-Bazán.
Karian, Zaven A
2000-01-01
Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from (all with their own formulas, tables, diagrams, and general properties) continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances, covering bivariate as well as univariate distributions, and including situations where moments do...
Strong Convergence Bound of the Pareto Index Estimator under Right Censoring
Directory of Open Access Journals (Sweden)
Bao Tao
2010-01-01
Full Text Available Let {Xn, n≥1} be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F(x) = 1 − x^(−1/γ) lF(x) with γ > 0, where lF(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right censored Pareto index estimator under second-order regularly varying conditions.
Robustness analysis of bogie suspension components Pareto optimised values
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
Tractable Pareto Optimization of Temporal Preferences
Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent
2003-01-01
This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
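Over a finite, explicitly enumerated candidate set, re-applying weakest-link optimization roughly amounts to a leximin comparison of preference vectors (maximize the worst value, then the second worst, and so on); this toy sketch ignores the temporal-CSP structure the paper actually works with, and the preference values are hypothetical:

```python
def leximin_best(solutions):
    # Iterated weakest-link selection over an explicit candidate set:
    # compare each solution's preference values sorted ascending, so the
    # minimum is maximized first, then the second-smallest value, etc.
    return max(solutions, key=lambda prefs: sorted(prefs))
```

Plain WLO would accept any candidate whose minimum is maximal; the iterated version additionally improves the non-weakest values, which is what makes the result Pareto optimal.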
Experimental studies of generalized parton distributions
International Nuclear Information System (INIS)
Kabuss, E.M.
2014-01-01
Generalized parton distributions (GPD) provide a new way to study the nucleon structure. Experimentally they can be accessed using hard exclusive processes such as deeply virtual Compton scattering and meson production. First insights into GPDs have already been obtained from measurements at DESY, JLAB and CERN, while ambitious new studies are planned at the upgraded JLAB at 12 GeV and at CERN. Here, some emphasis will be put on the planned COMPASS II programme. (author)
Momentum transfer dependence of generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Sharma, Neetika [Indian Institute of Science Education and Research Mohali, S.A.S. Nagar, Punjab (India)
2016-11-15
We revisit the model for parametrization of the momentum dependence of nucleon generalized parton distributions in the light of recent MRST measurements of parton distribution functions (A.D. Martin et al., Eur. Phys. J. C 63, 189 (2009)). Our parametrization method, with a minimum set of free parameters, gives a sufficiently good description of the data for the Dirac and Pauli electromagnetic form factors of the proton and neutron at small and intermediate values of momentum transfer. We also calculate the GPDs for up- and down-quarks by decomposing the electromagnetic form factors of the nucleon using charge and isospin symmetry, and we study the evolution of the GPDs to a higher scale. We further investigate the transverse charge densities for both the unpolarized and transversely polarized nucleon and compare our results with Kelly's distribution. (orig.)
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
Analysis of extreme drinking in patients with alcohol dependence using Pareto regression.
Das, Sourish; Harel, Ofer; Dey, Dipak K; Covault, Jonathan; Kranzler, Henry R
2010-05-20
We developed a novel Pareto regression model with an unknown shape parameter to analyze extreme drinking in patients with Alcohol Dependence (AD). We used the generalized linear model (GLM) framework and the log-link to include the covariate information through the scale parameter of the generalized Pareto distribution. We proposed a Bayesian method based on a Ridge prior and Zellner's g-prior for the regression coefficients. A simulation study indicated that the proposed Bayesian method performs better than the existing likelihood-based inference for the Pareto regression. We examined two issues of importance in the study of AD. First, we tested whether a single nucleotide polymorphism within the GABRA2 gene, which encodes a subunit of the GABA(A) receptor and has been associated with AD, influences 'extreme' alcohol intake, and second, the efficacy of three psychotherapies for alcoholism in treating extreme drinking behavior. We found an association between extreme drinking behavior and GABRA2. We also found that, at baseline, men with a high-risk GABRA2 allele had a significantly higher probability of extreme drinking than men with no high-risk allele. However, men with a high-risk allele responded to the therapy better than those with two copies of the low-risk allele. Women with high-risk alleles also responded to the therapy better than those with two copies of the low-risk allele, while women who received the cognitive behavioral therapy had better outcomes than those receiving either of the other two therapies. Among men, motivational enhancement therapy was the best for the treatment of the extreme drinking behavior. Copyright 2010 John Wiley & Sons, Ltd.
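The log-link construction can be sketched as a negative log-likelihood for generalized Pareto responses whose scale depends on a covariate; the single-covariate form and parameter names are illustrative, and the Bayesian machinery (Ridge and g-priors) is omitted:

```python
import math

def gpd_sample(sigma, xi, u):
    # Inverse-CDF draw from GPD(sigma, xi), xi != 0:
    #   y = sigma * ((1 - u)**(-xi) - 1) / xi,  u in (0, 1).
    return sigma * ((1.0 - u) ** (-xi) - 1.0) / xi

def gpd_neg_loglik(beta, xi, xs, ys):
    # Negative log-likelihood of GPD responses y_i >= 0 with log-link scale
    #   sigma_i = exp(beta[0] + beta[1] * x_i)
    # and a common shape xi != 0.
    nll = 0.0
    for x, y in zip(xs, ys):
        sigma = math.exp(beta[0] + beta[1] * x)
        z = 1.0 + xi * y / sigma
        if z <= 0.0:
            return float("inf")        # observation outside the support
        nll += math.log(sigma) + (1.0 / xi + 1.0) * math.log(z)
    return nll
```

The log-link keeps the scale positive for any real coefficients, which is what lets the GLM framework carry covariates into the GPD.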
Generalized Parton Distributions and their Singularities
Energy Technology Data Exchange (ETDEWEB)
Anatoly Radyushkin
2011-04-01
A new approach to building models of generalized parton distributions (GPDs) is discussed that is based on the factorized DD (double distribution) Ansatz within the single-DD formalism. The latter was not used before, because when reconstructing GPDs from the forward limit one should start in this case with a very singular function $f(\beta)/\beta$ rather than with the usual parton density $f(\beta)$. This results in a non-integrable singularity at $\beta=0$, exaggerated by the fact that the $f(\beta)$'s, on their own, have a singular $\beta^{-a}$ Regge behavior for small $\beta$. It is shown that the singularity is regulated within the GPD model of Szczepaniak et al., in which the Regge behavior is implanted through a subtracted dispersion relation for the hadron-parton scattering amplitude. It is demonstrated that proper softening of the quark-hadron vertices in the regions of large parton virtualities results in model GPDs $H(x,\xi)$ that are finite and continuous at the "border point" $x=\xi$. Using a simple input forward distribution, we illustrate the implementation of the new approach for the explicit construction of model GPDs. As a further development, a more general method of regulating the $\beta=0$ singularities is proposed, based on the separation of the initial single DD $f(\beta, \alpha)$ into a "plus" part $[f(\beta,\alpha)]_{+}$ and the $D$-term. It is demonstrated that the "DD+D" separation method allows one to (re)derive GPD sum rules that relate the difference between the forward distribution $f(x)=H(x,0)$ and the border function $H(x,x)$ to the $D$-term function $D(\alpha)$.
General distributed control system for fusion experiments
International Nuclear Information System (INIS)
Klingner, P.L.; Levings, S.J.; Wilkins, R.W.
1986-01-01
A general control system using distributed LSI-11 microprocessors is being developed. Common software resides in each LSI-11 and is tailored to an application by control specifications downloaded from a host computer. The microprocessors, their control interfaces, and the micro-to-host communications are CAMAC based. The host computer also supports an operator interface, coordination of multiple microprocessors, and utilities to create and maintain the control specifications. Typical applications include monitoring safety interlocks as well as controlling vacuum systems, high voltage charging systems, and diagnostics.
Giller, C A
2011-12-01
The use of conformity indices to optimize Gamma Knife planning is common, but does not address important tradeoffs between dose to tumor and normal tissue. Pareto analysis has been used for this purpose in other applications, but not for Gamma Knife (GK) planning. The goal of this work is to use computer models to show that Pareto analysis may be feasible for GK planning to identify dosimetric tradeoffs. We define a GK plan A to be Pareto dominant to B if the prescription isodose volume of A covers more tumor but not more normal tissue than B, or if A covers less normal tissue but not less tumor than B. A plan is Pareto optimal if it is not dominated by any other plan. Two different Pareto optimal plans represent different tradeoffs between dose to tumor and normal tissue, because neither plan dominates the other. 'GK simulator' software calculated dose distributions for GK plans, and was called repetitively by a genetic algorithm to calculate Pareto dominant plans. Three irregular tumor shapes were tested in 17 trials using various combinations of shots. The mean number of Pareto dominant plans/trial was 59 ± 17 (sd). Different planning strategies were identified by large differences in shot positions, and 70 of the 153 coordinate plots (46%) showed differences of 5 mm or more. The Pareto dominant plans dominated other nearby plans. Pareto dominant plans represent different dosimetric tradeoffs and can be systematically calculated using genetic algorithms. Automatic identification of non-intuitive planning strategies may be feasible with these methods.
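The dominance relation defined above translates directly into code (a sketch; the plan objects and coverage values are hypothetical stand-ins for the simulator output):

```python
from dataclasses import dataclass

@dataclass
class Plan:
    tumor_cov: float    # tumor volume inside the prescription isodose (cc)
    normal_cov: float   # normal tissue inside the prescription isodose (cc)

def dominates(a: Plan, b: Plan) -> bool:
    """A dominates B if A covers more tumor without covering more normal
    tissue, or covers less normal tissue without covering less tumor."""
    more_tumor = a.tumor_cov > b.tumor_cov and a.normal_cov <= b.normal_cov
    less_normal = a.normal_cov < b.normal_cov and a.tumor_cov >= b.tumor_cov
    return more_tumor or less_normal

a = Plan(tumor_cov=9.8, normal_cov=1.2)
b = Plan(tumor_cov=9.5, normal_cov=1.2)
print(dominates(a, b))   # True: same normal tissue, more tumor covered
print(dominates(b, a))   # False: neither condition holds
```

A plan is then Pareto optimal exactly when no other generated plan dominates it under this test.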
A general framework for updating belief distributions.
Bissiri, P G; Holmes, C C; Walker, S G
2016-11-01
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
On Generalized Type 1 Logistic Distribution | Ahsanullah | Afrika ...
African Journals Online (AJOL)
Some distributional properties of the generalized type 1 logistic distribution are given. Based on these distributional properties, a characterization of this distribution is presented. Key words: Conditional Expectation; Reversed Hazard Rate; Characterization.
Kinetics of wealth and the Pareto law.
Boghosian, Bruce M
2014-04-01
An important class of economic models involve agents whose wealth changes due to transactions with other agents. Several authors have pointed out an analogy with kinetic theory, which describes molecules whose momentum and energy change due to interactions with other molecules. We pursue this analogy and derive a Boltzmann equation for the time evolution of the wealth distribution of a population of agents for the so-called Yard-Sale Model of wealth exchange. We examine the solutions to this equation by a combination of analytical and numerical methods and investigate its long-time limit. We study an important limit of this equation for small transaction sizes and derive a partial integrodifferential equation governing the evolution of the wealth distribution in a closed economy. We then describe how this model can be extended to include features such as inflation, production, and taxation. In particular, we show that the model with taxation exhibits the basic features of the Pareto law, namely, a lower cutoff to the wealth density at small values of wealth, and approximate power-law behavior at large values of wealth.
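A minimal Monte Carlo sketch of the Yard-Sale Model described above (population size, transaction fraction, and step count are illustrative choices, not values from the paper):

```python
import random

def yard_sale(n_agents=1000, n_steps=200_000, frac=0.1, seed=1):
    """Simulate the Yard-Sale Model: in each transaction two random agents
    wager a fraction of the poorer agent's wealth on a fair coin flip."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents          # start from perfect equality
    for _ in range(n_steps):
        i, j = rng.sample(range(n_agents), 2)
        stake = frac * min(wealth[i], wealth[j])
        if rng.random() < 0.5:
            wealth[i] += stake; wealth[j] -= stake
        else:
            wealth[i] -= stake; wealth[j] += stake
    return wealth

w = yard_sale()
print(f"total wealth: {sum(w):.1f}")              # conserved by construction
print(f"top 1% share: {sum(sorted(w)[-10:]) / sum(w):.2f}")
```

Because each wager is a fraction of the *poorer* agent's wealth, no agent can go negative, total wealth is conserved in the closed economy, and inequality grows over time, which is the starting point for the paper's Boltzmann-equation analysis.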
On chiral-odd Generalized Parton Distributions
Energy Technology Data Exchange (ETDEWEB)
Wallon, Samuel [Laboratoire de Physique Theorique d' Orsay - LPT, Bat. 210, Univ. Paris-Sud 11, 91405 Orsay Cedex (France); UPMC Univ. Paris 6, Paris (France); Pire, Bernard [Centre de Physique Theorique - CPHT, UMR 7644, Ecole Polytechnique, Bat. 6, RDC, F91128 Palaiseau Cedex (France); Szymanowski, Lech [Soltan Institute for Nuclear Studies, Hoza 69, 00691, Warsaw (Poland)
2010-07-01
The chiral-odd transversity generalized parton distributions of the nucleon can be accessed experimentally through the exclusive photoproduction process $\gamma + N \to \pi + \rho + N'$, in the kinematics where the meson pair has a large invariant mass and the final nucleon has a small transverse momentum, provided the vector meson is produced in a transversally polarized state. Estimated counting rates show that the experiment is feasible with real or quasi-real photon beams expected at JLab at 12 GeV and in the COMPASS experiment. (Phys. Lett. B 688, 154 (2010)) In addition, a consistent classification of the chiral-odd pion GPDs beyond the leading twist 2 is presented. Based on QCD equations of motion and on the invariance under rotation on the light-cone of any scattering amplitude involving such GPDs, we reduce the basis of these chiral-odd GPDs to a minimal set. (author)
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: to further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the authors' previous work, a methodology was proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in the authors' previous work, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose-volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve the target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction-type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape
Generalized parton distribution for non zero skewness
International Nuclear Information System (INIS)
Kumar, Narinder; Dahiya, Harleen; Teryaev, Oleg
2012-01-01
In the theory of strong interactions the main open question is how the nucleon and other hadrons are built from quarks and gluons, the fundamental degrees of freedom in QCD. An essential tool for investigating hadron structure is the study of deep inelastic scattering processes, where individual quarks and gluons can be resolved. The parton densities extracted from such processes encode the distribution of longitudinal momentum and polarization carried by quarks, antiquarks and gluons within a fast moving hadron, and have done much to shape the physical picture of hadron structure. In recent years, it has become clear that appropriate exclusive scattering processes may provide the additional information encoded in generalized parton distributions (GPDs). Here, we investigate the GPDs for deeply virtual Compton scattering (DVCS) at nonzero skewness, expressing them in terms of overlaps of light front wave functions (LFWFs). The work represents a spin 1/2 system as a composite of a spin 1/2 fermion and a spin 1 boson with arbitrary masses.
Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement.
Yuan, Wei; You, Xinge; Xu, Jing; Leung, Henry; Zhang, Tianhang; Chen, Chun Lung Philip
2016-01-01
In linear cooperative spectrum sensing, the weights of secondary users and detection threshold should be optimally chosen to minimize missed detection probability and to maximize secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, here, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method to these SOO problems, nor any indication on the optimal number of Pareto solutions. Furthermore, NC has no preference over all Pareto solutions, while a designer may be only interested in some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.
Diversity comparison of Pareto front approximations in many-objective optimization.
Li, Miqing; Yang, Shengxiang; Liu, Xiaohui
2014-12-01
Diversity assessment of Pareto front approximations is an important issue in the stochastic multiobjective optimization community. Most of the diversity indicators in the literature were designed to work for any number of objectives of Pareto front approximations in principle, but in practice many of these indicators are infeasible or not workable when the number of objectives is large. In this paper, we propose a diversity comparison indicator (DCI) to assess the diversity of Pareto front approximations in many-objective optimization. DCI evaluates relative quality of different Pareto front approximations rather than provides an absolute measure of distribution for a single approximation. In DCI, all the concerned approximations are put into a grid environment so that there are some hyperboxes containing one or more solutions. The proposed indicator only considers the contribution of different approximations to nonempty hyperboxes. Therefore, the computational cost does not increase exponentially with the number of objectives. In fact, the implementation of DCI is of quadratic time complexity, which is fully independent of the number of divisions used in grid. Systematic experiments are conducted using three groups of artificial Pareto front approximations and seven groups of real Pareto front approximations with different numbers of objectives to verify the effectiveness of DCI. Moreover, a comparison with two diversity indicators used widely in many-objective optimization is made analytically and empirically. Finally, a parametric investigation reveals interesting insights of the division number in grid and also offers some suggested settings to the users with different preferences.
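The grid environment underlying DCI can be illustrated by mapping objective vectors to hyperbox indices (a simplified sketch; the function names, division number, and bounds handling are illustrative, not the indicator's exact definition):

```python
def hyperbox(point, lower, upper, divisions):
    """Map an objective vector to the integer index of its grid hyperbox."""
    idx = []
    for x, lo, hi in zip(point, lower, upper):
        k = int((x - lo) / (hi - lo) * divisions)
        idx.append(min(k, divisions - 1))   # clamp points on the upper bound
    return tuple(idx)

def nonempty_boxes(approximation, lower, upper, divisions):
    """Hyperboxes occupied by an approximation set; DCI credits each
    approximation only for its contribution to such nonempty boxes."""
    return {hyperbox(p, lower, upper, divisions) for p in approximation}

A = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]        # hypothetical approximation
boxes = nonempty_boxes(A, lower=(0, 0), upper=(1, 1), divisions=4)
print(boxes)   # three distinct hyperboxes -> well-spread set
```

Because only nonempty hyperboxes are ever touched, the cost scales with the number of solutions rather than with the exponential number of cells, which is the point made in the abstract.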
Pareto fronts in clinical practice for pinnacle.
Janssen, Tomas; van Kesteren, Zdenko; Franssen, Gijs; Damen, Eugène; van Vliet, Corine
2013-03-01
Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. To generate the Pareto fronts, we used the native scripting language of Pinnacle(3) (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Generating 3000 plans for 10 patients in parallel requires on average 96 h for IMRT and 483 h for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI(95%)) by 0.02 (P=.005), and the rectal wall V(65 Gy) by 1.1% (P=.008). We showed the feasibility of automatically generating Pareto fronts with Pinnacle(3). Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT. Copyright © 2013 Elsevier Inc. All rights reserved.
Pareto Fronts in Clinical Practice for Pinnacle
International Nuclear Information System (INIS)
Janssen, Tomas; Kesteren, Zdenko van; Franssen, Gijs; Damen, Eugène; Vliet, Corine van
2013-01-01
Purpose: Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. Methods and Materials: To generate the Pareto fronts, we used the native scripting language of Pinnacle(3) (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Results: Generating 3000 plans for 10 patients in parallel requires on average 96 h for IMRT and 483 h for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI(95%)) by 0.02 (P=.005), and the rectal wall V(65 Gy) by 1.1% (P=.008). Conclusions: We showed the feasibility of automatically generating Pareto fronts with Pinnacle(3). Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT
Pareto-optimal phylogenetic tree reconciliation.
Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S; Kellis, Manolis
2014-06-15
Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomic and cophylogenetic studies. Our Python tools are freely available at www.cs.hmc.edu/~hadas/xscape. © The Author 2014. Published by Oxford University Press.
Post Pareto optimization-A case
Popov, Stoyan; Baeva, Silvia; Marinova, Daniela
2017-12-01
Simulation performance may be evaluated according to multiple quality measures that are in competition, and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them, and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of the proposed approach by applying it to a use case with multiple stochastic quality measures. We formulate the performance criteria of this use case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto frontier, we analyze it and prescribe preference-dependent configurations for optimal simulation training.
Energy Technology Data Exchange (ETDEWEB)
2016-12-21
The JMP Add-In TopN-PFS provides an automated tool for finding layered Pareto fronts to identify the top N solutions from an enumerated list of candidates when optimizing multiple criteria. The approach constructs the N layers of Pareto fronts and then provides a suite of graphical tools to explore the alternatives based on different prioritizations of the criteria. The tool is designed to provide a set of alternatives from which the decision-maker can select the best option for their study goals.
Project management under uncertainty beyond beta: The generalized bicubic distribution
Directory of Open Access Journals (Sweden)
José García Pérez
2016-01-01
The beta distribution has traditionally been employed in the PERT methodology and generally used for modeling bounded continuous random variables based on expert judgment. The impossibility of estimating four parameters from the three values provided by the expert when the beta distribution is assumed to be the underlying distribution has been widely debated. This paper presents the generalized bicubic distribution as a good alternative to the beta distribution since, when the variance depends on the mode, the generalized bicubic distribution approximates the kurtosis of the Gaussian distribution better than the beta distribution does. In addition, this distribution has good properties in the PERT methodology with respect to the moderation and conservatism criteria. Two empirical applications are presented to demonstrate the adequacy of this new distribution.
How Well Do We Know Pareto Optimality?
Mathur, Vijay K.
1991-01-01
Identifies sources of ambiguity in economics textbooks' discussions of the condition for an efficient output mix. Points out that diverse statements without accompanying explanations create confusion among students. Argues that conflicting views concerning the concept of Pareto optimality are one source of ambiguity. Suggests clarifying additions to…
Performance-based Pareto optimal design
Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.
2008-01-01
A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or make consistent their impartial estimations from case to case. Fuzzy logic and soft computing are
A Further Note on Generalized Hyperexponential Distributions
1989-11-15
functions. The inverse transform of each of the m factors is of a standard form. The requirement that θ_i < η_i yields a mixture of an atom at the origin and a continuous component. Suppose the θ_i are real and (θ_i + θ_{i+1})/2 < Re(η_i) whenever (η_i, η̄_i) are a complex conjugate pair; then the inverse transform of f*(s) is a probability distribution.
McDonald Generalized Linear Failure Rate Distribution
Directory of Open Access Journals (Sweden)
Ibrahim Elbatal
2014-10-01
We introduce in this paper a new six-parameter generalized version of the generalized linear failure rate (GLFR) distribution, called the McDonald generalized linear failure rate (McGLFR) distribution. The new distribution is quite flexible and can be used effectively in modeling survival data and reliability problems. It can have a constant, decreasing, increasing, upside-down bathtub-shaped, or bathtub-shaped failure rate function depending on its parameters. It includes some well-known lifetime distributions as special sub-models. Some structural properties of the new distribution are studied. Moreover, we discuss maximum likelihood estimation of the unknown parameters of the new model.
Size-biased distributions in the generalized beta distribution family, with applications to forestry
Mark J. Ducey; Jeffrey H. Gove
2015-01-01
Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...
The Pareto Analysis for Establishing Content Criteria in Surgical Training.
Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N
2016-01-01
Current surgical training is still highly dependent on expensive operating room (OR) experience. Although there have been many attempts to transfer more training to the skills laboratory, little research is focused on which technical behaviors can lead to the highest profit when they are trained outside the OR. The Pareto principle states that in any population that contributes to a common effect, a few account for the bulk of the effect. This principle has been widely used in business management to increase company profits. This study uses the Pareto principle for establishing content criteria for more efficient surgical training. A retrospective study was conducted to assess verbal guidance provided by 9 supervising surgeons to 12 trainees performing 64 laparoscopic cholecystectomies in the OR. The verbal corrections were documented, tallied, and clustered according to the aimed change in novice behavior. The corrections were rank ordered, and a cumulative distribution curve was used to calculate which corrections accounted for 80% of the total number of verbal corrections. In total, 253 different verbal corrections were uttered 1587 times and were categorized into 40 different clusters of aimed changes in novice behaviors. The 35 highest-ranking verbal corrections (14%) and the 11 highest-ranking clusters (28%) accounted for 80% of the total number of given verbal corrections. Following the Pareto principle, we were able to identify the aspects of trainee behavior that account for most corrections given by supervisors during a laparoscopic cholecystectomy on humans. This strategy can be used for the development of new training programs to prepare the trainee in advance for the challenges encountered in the clinical setting in an OR. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
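The rank-ordering and 80% cumulative-cutoff procedure used in the study can be sketched generically (the correction clusters and tallies below are hypothetical, not the study's data):

```python
def pareto_cutoff(counts, threshold=0.80):
    """Return the smallest set of highest-ranking items whose counts
    account for at least `threshold` of the total."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(counts.values())
    selected, cum = [], 0
    for item, n in ranked:
        selected.append(item)
        cum += n
        if cum / total >= threshold:
            break
    return selected

# Hypothetical tallies of verbal corrections per behavior cluster:
tallies = {"camera handling": 400, "instrument angle": 350,
           "tissue traction": 250, "dissection plane": 150,
           "energy use": 50, "other": 50}
print(pareto_cutoff(tallies))  # the few clusters covering 80% of corrections
```

Applied to real tallies, the clusters returned by such a cutoff are the candidates for focused skills-laboratory training.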
Building a generalized distributed system model
Mukkamala, R.
1993-01-01
The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, an interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems; in particular, we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.
Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin
Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.
2018-01-01
Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
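The notion of an ε-approximate Pareto frontier can be illustrated with a generic geometric-rounding sketch (not the paper's DP scheme; the function and data are hypothetical): keeping one representative per cell of a grid whose boundaries grow by factors of (1+ε) guarantees every candidate is matched to within a factor (1+ε) in each objective, and the number of kept points is polynomial in 1/ε.

```python
import math

def epsilon_pareto(points, eps):
    """Keep one representative point per geometric grid cell; every point in
    `points` (positive, minimized objectives) is then within a factor
    (1 + eps) of some kept point in each objective."""
    cells = {}
    base = math.log1p(eps)
    for p in points:
        cell = tuple(math.floor(math.log(x) / base) for x in p)
        # keep, e.g., the lexicographically smallest point in each cell
        if cell not in cells or p < cells[cell]:
            cells[cell] = p
    return list(cells.values())

pts = [(1.00, 8.0), (1.01, 7.9), (2.0, 4.0), (4.0, 2.0), (4.05, 1.99)]
approx = epsilon_pareto(pts, eps=0.1)
print(len(approx))  # near-duplicate points collapse into shared cells
```

The FPTAS in the paper exploits the tree structure of the network to build such a succinct curve exactly once per subtree, rather than filtering an enumerated set as done here.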
Modified Stieltjes Transform and Generalized Convolutions of Probability Distributions
Directory of Open Access Journals (Sweden)
Lev B. Klebanov
2018-01-01
The classical Stieltjes transform is modified in such a way as to generalize both Stieltjes and Fourier transforms. This transform allows the introduction of new classes of commutative and non-commutative generalized convolutions. A particular case of such a convolution for degenerate distributions appears to be the Wigner semicircle distribution.
Nucleon generalized parton distributions from full lattice QCD
International Nuclear Information System (INIS)
Haegler, P.; Schroers, W.; Bratt, J.; Negele, J.W.; Pochinsky, A.V.
2007-07-01
We present a comprehensive study of the lowest moments of nucleon generalized parton distributions in N_f = 2+1 lattice QCD using domain wall valence quarks and improved staggered sea quarks. Our investigation includes helicity dependent and independent generalized parton distributions for pion masses as low as 350 MeV and volumes as large as (3.5 fm)^3. (orig.)
Determination of Pareto frontier in multi-objective maintenance optimization
International Nuclear Information System (INIS)
Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco
2011-01-01
The objective of a maintenance policy is generally the minimization of the global maintenance cost, which involves not only the direct costs of the maintenance actions and the spare parts, but also those due to the system stopping for preventive maintenance and the downtime after failure. For some operating systems the failure event can be dangerous, so they are required to operate with a very high reliability level between two consecutive fixed stops. The present paper attempts to identify the set of elements on which to perform maintenance actions so that the system can assure the required reliability level until the next fixed stop for maintenance, minimizing both the global maintenance cost and the total maintenance time. To solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto optimal frontier is generally a troublesome task. The paper proposes an algorithm able to rapidly overcome this problem, and its effectiveness is shown by an application to a case study regarding a complex series-parallel system.
Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.
Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine
2018-01-01
Preanalytical steps are the major source of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but stringent quality checks are also needed in the preanalytical area, as these processes are carried out outside the laboratory. The sigma value depicts the performance of a laboratory and its quality measures. Hence, in the present study, Six Sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate the performance of a clinical biochemistry laboratory. This observational study was carried out over a period of 1 year, from November 2015 to November 2016. A total of 144,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information, missing sample collection details on the forms, and hemolysed, lipemic, inappropriate, or insufficient samples; the total number of errors was calculated and converted to defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and the cumulative percentage. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9; for other errors, such as sample receiving time, stat requests, and sample type, the sigma values were 2.9, 2.6, and 2.8, respectively. For insufficient samples and an improper ratio of blood to anticoagulant, the sigma value was 4.3. The Pareto chart shows that about 80% of the errors in requisition forms are contributed by a small set of causes, such as missing diagnosis information. The development of quality indicators and the application of Six Sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
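The conversion from error counts to the sigma scale, and the cumulative percentages behind a Pareto chart, can be sketched as follows (illustrative Python; the category names and counts are invented, not the paper's data, and the conventional 1.5-sigma shift is assumed):

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Convert a defect count to defects-per-million-opportunities (DPMO)
    and a short-term sigma level using the conventional 1.5-sigma shift."""
    dpmo = defects / opportunities * 1_000_000
    return dpmo, NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

def pareto_table(error_counts):
    """Sort error categories by frequency and attach the cumulative
    percentages that a Pareto chart plots."""
    total = sum(error_counts.values())
    rows, cum = [], 0.0
    for cat, n in sorted(error_counts.items(), key=lambda kv: -kv[1]):
        cum += 100 * n / total
        rows.append((cat, n, round(cum, 1)))
    return rows

# Invented counts for illustration only
errors = {"missing diagnosis": 750, "hemolysed": 120,
          "insufficient": 80, "lipemic": 50}
table = pareto_table(errors)
dpmo, sigma = sigma_level(100, 1_000_000)
```

With 100 defects per million opportunities, the shifted sigma level lands a little above 5, which is why very low error rates map to high sigma values.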
Pareto-Efficiency, Hayek’s Marvel, and the Invisible Executor
Kakarot-Handtke, Egmont
2014-01-01
This non-technical contribution to the RWER-Blog deals with the interrelations of market clearing, efficient information processing through the price system, and distribution. The point of entry is a transparent example of Pareto-efficiency taken from the popular book How Markets Fail.
General results for the Marshall and Olkin's family of distributions
Directory of Open Access Journals (Sweden)
WAGNER BARRETO-SOUZA
2013-03-01
Marshall and Olkin (1997) introduced an interesting method of adding a parameter to a well-established distribution. However, they did not investigate general mathematical properties of their family of distributions. We provide for this family of distributions general expansions for the density function and explicit expressions for the moments and the moments of the order statistics. Several special models are investigated. We discuss estimation of the model parameters. An application to a real data set is presented for illustrative purposes.
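The Marshall and Olkin construction acts directly on the survival function; a minimal sketch, assuming their standard transform S(x) = p·S0(x) / (1 − (1 − p)·S0(x)) with tilt parameter p > 0:

```python
import math

def mo_survival(base_sf, p):
    """Marshall-Olkin transform of a base survival function base_sf:
    S(x) = p * S0(x) / (1 - (1 - p) * S0(x)), with tilt parameter p > 0.
    p = 1 recovers the base distribution."""
    def sf(x):
        s0 = base_sf(x)
        return p * s0 / (1 - (1 - p) * s0)
    return sf

# Example: Marshall-Olkin extended exponential distribution
sf = mo_survival(lambda x: math.exp(-x), p=2.0)
```

Note that sf(0) is 1 for any p, and the transform preserves monotonicity, so the result is a valid survival function.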
Pareto front estimation for decision making.
Giagkiozis, Ioannis; Fleming, Peter J
2014-01-01
The set of available multi-objective optimisation algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests that several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited to a particular problem, it is difficult, mainly due to the computational cost, to use a population large enough to ensure the likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimisation algorithm, for two-objective and three-objective problem instances.
Multiclass gene selection using Pareto-fronts.
Rajapakse, Jagath C; Mundra, Piyushkumar A
2013-01-01
Filter methods are often used for selection of genes in multiclass sample classification by using microarray data. Such techniques usually tend to bias toward a few classes that are easily distinguishable from other classes due to imbalances of strong features and sample sizes of different classes. It could therefore lead to selection of redundant genes while missing the relevant genes, leading to poor classification of tissue samples. In this manuscript, we propose to decompose multiclass ranking statistics into class-specific statistics and then use Pareto-front analysis for selection of genes. This alleviates the bias induced by class intrinsic characteristics of dominating classes. The use of Pareto-front analysis is demonstrated on two filter criteria commonly used for gene selection: F-score and KW-score. A significant improvement in classification performance and reduction in redundancy among top-ranked genes were achieved in experiments with both synthetic and real-benchmark data sets.
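The core Pareto-front filtering step over class-specific scores can be sketched generically (this is a plain non-dominated filter, not the authors' code; higher per-class scores are assumed better, as for F-scores):

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every objective and
    strictly better in at least one (maximization convention)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Return the indices of the non-dominated score vectors."""
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Toy per-class scores for four genes across two classes: gene 3 is
# dominated by gene 1, so only genes 0-2 form the front.
scores = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8), (0.4, 0.4)]
front = pareto_front(scores)
```

The point of the decomposition is visible in the toy data: gene 0 would win a single aggregate ranking, yet genes 1 and 2 survive on the front because each is best for a different class.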
Towards a seascape typology. I. Zipf versus Pareto laws
Seuront, Laurent; Mitchell, James G.
Two data analysis methods, referred to as the Zipf and Pareto methods, initially introduced in economics and linguistics two centuries ago and subsequently used in a wide range of fields (word frequency in languages and literature, human demographics, finance, city formation, genomics and physics), are described and proposed here as a potential tool to classify space-time patterns in marine ecology. The aim of this paper is, first, to present the theoretical bases of Zipf and Pareto laws, and to demonstrate that they are strictly equivalent. In that way, we provide a one-to-one correspondence between their characteristic exponents and argue that the choice of technique is a matter of convenience. Second, we argue that the appeal of this technique is that it is assumption-free for the distribution of the data and regularity of sampling interval, as well as being extremely easy to implement. Finally, in order to allow marine ecologists to identify and classify any structure in their data sets, we provide a step by step overview of the characteristic shapes expected for Zipf's law for the cases of randomness, power law behavior, power law behavior contaminated by internal and external noise, and competing power laws illustrated on the basis of typical ecological situations such as mixing processes involving non-interacting and interacting species, phytoplankton growth processes and differential grazing by zooplankton.
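The claimed one-to-one correspondence between the characteristic exponents can be checked numerically: for a Pareto(α) sample, the Zipf (rank-size) log-log plot has slope approximately −1/α. A minimal sketch with a known exponent:

```python
import math
import random

# Draw a Pareto(alpha) sample and fit the Zipf rank-size slope by
# least squares in log-log coordinates; it should be close to -1/alpha.
random.seed(42)
alpha = 2.0
n = 20000
samples = sorted((random.paretovariate(alpha) for _ in range(n)), reverse=True)

xs = [math.log(r) for r in range(1, n + 1)]   # log rank
ys = [math.log(s) for s in samples]           # log size
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))    # expect roughly -1/2
```

This is the equivalence the authors exploit: whether one regresses size on rank (Zipf) or counts exceedances (Pareto) is a matter of convenience, since the exponents determine each other.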
Model-based problem solving through symbolic regression via pareto genetic programming
Vladislavleva, E.
2008-01-01
Pareto genetic programming methodology is extended by additional generic model selection and generation strategies that (1) drive the modeling engine to creation of models of reduced non-linearity and increased generalization capabilities, and (2) improve the effectiveness of the search for robust
Comparative analysis of Pareto surfaces in multi-criteria IMRT planning
Energy Technology Data Exchange (ETDEWEB)
Teichert, K; Suess, P; Serna, J I; Monz, M; Kuefer, K H [Department of Optimization, Fraunhofer Institute for Industrial Mathematics (ITWM), Fraunhofer Platz 1, 67663 Kaiserslautern (Germany); Thieke, C, E-mail: katrin.teichert@itwm.fhg.de [Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)
2011-06-21
In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context, optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head and neck cases. In all cases no configuration dominates its competitor over the whole Pareto set. For example, in one of the head and neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than with the classical method of comparing single treatment plans.
The Forbes 400, the Pareto power-law and efficient markets
Klass, O. S.; Biham, O.; Levy, M.; Malcai, O.; Solomon, S.
2007-01-01
Statistical regularities at the top end of the wealth distribution in the United States are examined using the Forbes 400 lists of richest Americans, published between 1988 and 2003. It is found that the wealths are distributed according to a power-law (Pareto) distribution. This result is explained using a simple stochastic model of multiple investors that incorporates the efficient market hypothesis as well as the multiplicative nature of financial market fluctuations.
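A standard way to estimate the Pareto exponent of a wealth tail (not necessarily the estimator used by the authors) is the Hill estimator computed from the k largest observations; a sketch on synthetic data with a known exponent:

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimate of the Pareto tail exponent alpha from the k largest
    order statistics of the sample."""
    tail = sorted(data, reverse=True)[: k + 1]
    logs = [math.log(x) for x in tail]
    return k / sum(logs[i] - logs[k] for i in range(k))

# Synthetic 'wealth' sample with known exponent alpha = 1.5; these are
# not the Forbes 400 data.
random.seed(0)
wealth = [random.paretovariate(1.5) for _ in range(5000)]
alpha_hat = hill_estimator(wealth, 500)   # should be near 1.5
```

The standard error of the Hill estimate scales like α/√k, so with k = 500 the recovered exponent is tight around the true value.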
Tsallis distribution as a standard maximum entropy solution with 'tail' constraint
International Nuclear Information System (INIS)
Bercher, J.-F.
2008-01-01
We show that Tsallis distributions can be derived from the standard (Shannon) maximum entropy setting by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find that the underlying entropy is the Rényi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the 'maximum entropy tail distribution' is identified as a generalized Pareto distribution.
RNA-Pareto: interactive analysis of Pareto-optimal RNA sequence-structure alignments.
Schnattinger, Thomas; Schöning, Uwe; Marchfelder, Anita; Kestler, Hans A
2013-12-01
Incorporating secondary structure information into the alignment process improves the quality of RNA sequence alignments. Instead of using fixed weighting parameters, sequence and structure components can be treated as different objectives and optimized simultaneously. The result is not a single solution but a Pareto set of equally optimal solutions, which all represent different possible weighting parameters. We now provide the interactive graphical software tool RNA-Pareto, which allows a direct inspection of all feasible results to the pairwise RNA sequence-structure alignment problem and greatly facilitates the exploration of the optimal solution set.
A generalized statistical model for the size distribution of wealth
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2012-01-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature.
A general algorithm for distributing information in a graph
Aji, Srinivas M.; McEliece, Robert J.
1997-01-01
We present a general “message-passing” algorithm for distributing information in a graph. This algorithm may help us to understand the approximate correctness of both the Gallager-Tanner-Wiberg algorithm, and the turbo-decoding algorithm.
Insight into nucleon structure from generalized parton distributions
International Nuclear Information System (INIS)
J.W. Negele; R.C. Brower; P. Dreher; R. Edwards; G. Fleming; Ph. Hagler; Th. Lippert; A.V. Pochinsky; D.B. Renner; D. Richards; K. Schilling; W. Schroers
2004-01-01
The lowest three moments of generalized parton distributions are calculated in full QCD and provide new insight into the behavior of nucleon electromagnetic form factors, the origin of the nucleon spin, and the transverse structure of the nucleon.
Probability distribution of extreme share returns in Malaysia
Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin
2014-09-01
The objective of this study is to investigate suitable probability distributions for modeling the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used for parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions to represent the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.
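The L-moment fitting step can be sketched for the GPA case using Hosking's closed-form estimators (probability-weighted-moment formulas; this is a generic sketch, not the authors' code, and exponential data are used below as a GPA with shape k = 0):

```python
import random

def sample_lmoments(data):
    """First three sample L-moments (l1, l2, and the L-skewness t3)
    via unbiased probability weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def fit_gpa(l1, l2, t3):
    """Hosking's L-moment estimators for the generalized Pareto (GPA)
    parameters: location xi, scale a, shape k (k = 0 is exponential)."""
    k = (1 - 3 * t3) / (1 + t3)
    a = l2 * (1 + k) * (2 + k)
    xi = l1 - a / (1 + k)
    return xi, a, k

random.seed(1)
data = [random.expovariate(1.0) for _ in range(5000)]  # GPA with k = 0
xi, a, k = fit_gpa(*sample_lmoments(data))
```

For unit exponential data the fit should recover location near 0, scale near 1, and shape near 0, which is a quick sanity check on the estimator chain.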
Homogeneity and scale testing of generalized gamma distribution
International Nuclear Information System (INIS)
Stehlik, Milan
2008-01-01
The aim of this paper is to derive the exact distributions of the likelihood ratio tests of the homogeneity and scale hypotheses when the observations are generalized gamma distributed. The special cases of exponential, Rayleigh, Weibull and gamma distributed observations are discussed explicitly. The photoemulsion experiment analysis and a scale test with missing time-to-failure observations are presented to illustrate the applications of the methods discussed.
Pareto Optimal Design for Synthetic Biology.
Patanè, Andrea; Santoro, Andrea; Costanza, Jole; Carapezza, Giovanni; Nicosia, Giuseppe
2015-08-01
Recent advances in synthetic biology call for robust, flexible and efficient in silico optimization methodologies. We present a Pareto design approach for the bi-level optimization problem associated with the overproduction of specific metabolites in Escherichia coli. Our method efficiently explores the high-dimensional genetic manipulation space, finding a number of trade-offs between synthetic and biological objectives, hence furnishing deeper biological insight into the addressed problem and important results for industrial purposes. We demonstrate the computational capabilities of our Pareto-oriented approach by comparing it with state-of-the-art heuristics in the overproduction problems of (i) 1,4-butanediol, (ii) myristoyl-CoA, (iii) malonyl-CoA, (iv) acetate and (v) succinate. We show that our algorithms are able to gracefully adapt and scale to more complex models and more biologically relevant simulations of the allowed genetic manipulations. The results obtained for 1,4-butanediol overproduction significantly outperform those previously obtained, in terms of the 1,4-butanediol to biomass formation ratio and knock-out costs. In particular, the overproduction improvement is +662.7%, from 1.425 mmol h⁻¹ gDW⁻¹ (wild type) to 10.869 mmol h⁻¹ gDW⁻¹, with a knockout cost of 6. Moreover, the Pareto-optimal designs we have found in the fatty acid optimizations strictly dominate those obtained by the other methodologies, e.g., biomass and myristoyl-CoA exportation improvements of +21.43% (0.17 h⁻¹) and +5.19% (1.62 mmol h⁻¹ gDW⁻¹), respectively. Furthermore, the CPU time required by our heuristic approach is more than halved. Finally, we implement pathway-oriented sensitivity analysis, epsilon-dominance analysis and robustness analysis to enhance our biological understanding of the problem and to improve the capabilities of the optimization algorithm.
A Pareto-Improving Minimum Wage
Eliav Danziger; Leif Danziger
2014-01-01
This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...
Directory of Open Access Journals (Sweden)
Yang Sun
2018-01-01
Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions, and we compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators with decision-making, specifically defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
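The advantage of an achievement-function scalarization such as STOM over linear scalarization can be illustrated on a toy non-convex front. The Chebyshev-type min-max function below is a generic stand-in for STOM's achievement form, not the paper's exact implementation:

```python
def linear_scalarize(rewards, weights):
    """Weighted-sum scalarization of a reward vector (larger is better)."""
    return sum(w * r for w, r in zip(weights, rewards))

def chebyshev_scalarize(rewards, aspiration, weights):
    """Chebyshev-type achievement function (smaller is better): the worst
    weighted shortfall from an aspiration point, as in min-max methods."""
    return max(w * (a - r) for w, a, r in zip(weights, aspiration, rewards))

# A non-convex two-objective front: the compromise point (0.45, 0.45) lies
# in a concave region, so no positive linear weighting can select it,
# while the Chebyshev achievement function does.
front = [(0.0, 1.0), (0.45, 0.45), (1.0, 0.0)]
best_linear = max(front, key=lambda p: linear_scalarize(p, (0.5, 0.5)))
best_cheb = min(front, key=lambda p: chebyshev_scalarize(p, (1.0, 1.0), (1.0, 1.0)))
```

This is the standard argument for preferring achievement-based scalarizations when the Pareto front of a game may be non-convex.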
Pareto-Optimal Estimates of California Precipitation Change
Langenbrunner, Baird; Neelin, J. David
2017-12-01
In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.
Pareto Improving Price Regulation when the Asset Market is Incomplete
Herings, P.J.J.; Polemarchakis, H.M.
1999-01-01
When the asset market is incomplete, competitive equilibria are constrained suboptimal, which provides scope for Pareto improving interventions. Price regulation can be such a Pareto improving policy, even when the welfare effects of rationing are taken into account. An appealing aspect of price
Pareto optimality in infinite horizon linear quadratic differential games
Reddy, P.V.; Engwerda, J.C.
2013-01-01
In this article we derive conditions for the existence of Pareto optimal solutions for linear quadratic infinite horizon cooperative differential games. First, we present a necessary and sufficient characterization for Pareto optimality which translates to solving a set of constrained optimal
Pareto 80/20 Law: Derivation via Random Partitioning
Lipovetsky, Stan
2009-01-01
The Pareto 80/20 Rule, also known as the Pareto principle or law, states that a small number of causes (20%) is responsible for a large percentage (80%) of the effect. Although widely recognized as a heuristic rule, this proportion has not been given a theoretical basis. The article considers the derivation of this 80/20 rule and some other standard…
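The specific exponent for which a Pareto distribution reproduces an exact 80/20 split can be computed in closed form: the top fraction p of a Pareto(α) population holds a share p^(1 − 1/α) of the total (for α > 1, so the mean is finite). A minimal sketch:

```python
import math

def top_share(alpha, p=0.2):
    """Share of the total held by the top fraction p of a Pareto(alpha)
    population; requires alpha > 1 for a finite mean."""
    return p ** (1 - 1 / alpha)

# Solve p**(1 - 1/alpha) = 0.8 for p = 0.2: the exponent yielding an
# exact 80/20 split (approximately 1.16).
alpha_8020 = 1 / (1 - math.log(0.8) / math.log(0.2))
share = top_share(alpha_8020)   # 0.8 by construction
```

Exponents near 1.16 thus correspond to the heuristic 80/20 proportion, while larger α gives more equal splits.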
A new generalization of the Pareto–geometric distribution
Directory of Open Access Journals (Sweden)
M. Nassar
2013-07-01
In this paper we introduce a new distribution called the beta Pareto–geometric. We provide a comprehensive treatment of the mathematical properties of the proposed distribution and derive expressions for its moment generating function and the rth generalized moment. We discuss estimation of the parameters by maximum likelihood and obtain the information matrix, which is easily determined numerically. We also demonstrate its usefulness on a real data set.
The size distributions of all Indian cities
Luckstead, Jeff; Devadoss, Stephen; Danforth, Diana
2017-05-01
We apply five distributions (lognormal, double-Pareto lognormal, lognormal-upper tail Pareto, Pareto tails-lognormal, and Pareto tails-lognormal with differentiability restrictions) to estimate the size distribution of all Indian cities. Since India contains numerous small cities, it is important to explicitly model the lower-tail behavior when studying the distribution of all Indian cities. Our results rigorously confirm, using both graphical and formal statistical tests, that among these five distributions the Pareto tails-lognormal is the better-suited parametrization of the Indian city size data, verifying that the Indian city size distribution exhibits a strong reverse Pareto in the lower tail, lognormal in the mid-range body, and Pareto in the upper tail.
Feasibility of estimating generalized extreme-value distribution of floods
International Nuclear Information System (INIS)
Ferreira de Queiroz, Manoel Moises
2004-01-01
Flood frequency analysis by the generalized extreme-value (GEV) probability distribution has found increased application in recent years, given its flexibility in dealing with the three asymptotic forms of extreme distribution derived from different initial probability distributions. Estimation of higher quantiles of floods is usually accomplished by extrapolating one of the three inverse forms of the GEV distribution fitted to the experimental data for return periods much higher than those actually observed. This paper studies the feasibility of fitting the GEV distribution by moments of linear combinations of higher-order statistics (LH moments) using synthetic annual flood series with varying characteristics and lengths. As hydrologic events in nature, such as daily discharge, occur with finite values, their annual maxima are expected to follow the asymptotic form of the limited GEV distribution. Synthetic annual flood series were thus obtained from stochastic sequences of 365 daily discharges generated by Monte Carlo simulation on the basis of the limited probability distribution underlying the limited GEV distribution. The results show that parameter estimation by LH moments of this distribution, fitted to annual flood samples of less than 100-year length derived from an initial limited distribution, may indicate any form of extreme-value distribution, not just the limited form as expected, and with large uncertainty in the fitted parameters. A frequency analysis, on the basis of the GEV distribution and LH moments, of annual flood series of lengths varying between 13 and 73 years observed at 88 gauge stations on the Parana River in Brazil indicated all three forms of the GEV distribution.
Distributed Systems of Generalizing as the Basis of Workplace Learning
Virkkunen, Jaakko; Pihlaja, Juha
2004-01-01
This article proposes a new way of conceptualizing workplace learning as distributed systems of appropriation, development and the use of practice-relevant generalizations fixed within mediational artifacts. It maintains that these systems change historically as technology and increasingly sophisticated forms of production develop.
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Dušek, Jiří; Stehlík, M.
2013-01-01
Vol. 128, OCT 15 (2013), pp. 124-134, ISSN 0169-7439. R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073. Institutional support: RVO:67179843. Keywords: environmental chemistry; ebullition of methane; mixed Poisson processes; renewal process; Pareto distribution; moving average process; robust statistics; sedge–grass marsh. Subject RIV: EH - Ecology, Behaviour. Impact factor: 2.381, year: 2013
Liu, Xian
2010-02-10
This paper shows that optical signal transmission over intersatellite links with swaying transmitters can be described as an equivalent fading model. In this model, the instantaneous signal-to-noise ratio is stochastic and follows the reciprocal Pareto distribution. With this model, we show that the transmitter power can be minimized, subject to a specified outage probability, by appropriately adjusting some system parameters, such as the transmitter gain.
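Under such a model the outage calculation reduces to evaluating a power-law CDF. A minimal sketch, assuming the parametrization P(SNR ≤ x) = (x/γ_max)^α on (0, γ_max] for the reciprocal Pareto law (the exact form and parameter names in the paper may differ):

```python
def outage_probability(gamma_th, gamma_max, alpha):
    """Outage probability for a reciprocal-Pareto distributed SNR on
    (0, gamma_max]: P(SNR <= x) = (x / gamma_max) ** alpha (assumed form)."""
    if gamma_th >= gamma_max:
        return 1.0
    return (gamma_th / gamma_max) ** alpha

def required_gamma_max(gamma_th, alpha, eps):
    """Smallest gamma_max (proportional to transmitter power) meeting a
    target outage probability eps at threshold gamma_th."""
    return gamma_th / eps ** (1 / alpha)

p_out = outage_probability(1.0, 100.0, 2.0)        # (1/100)**2
gamma_needed = required_gamma_max(1.0, 2.0, 1e-4)  # inverts the CDF
```

Inverting the outage expression in this way is what allows the transmitter power to be minimized subject to a specified outage probability.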
The κ-generalized distribution: A new descriptive model for the size distribution of incomes
Clementi, F.; Di Matteo, T.; Gallegati, M.; Kaniadakis, G.
2008-05-01
This paper proposes the κ-generalized distribution as a model for describing the distribution and dispersion of income within a population. Formulas for the shape, moments and standard tools for inequality measurement-such as the Lorenz curve and the Gini coefficient-are given. A method for parameter estimation is also discussed. The model is shown to fit extremely well the data on personal income distribution in Australia and in the United States.
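The κ-generalized survival function has the closed form P(X > x) = exp_κ(−βx^α), where exp_κ is the Kaniadakis κ-exponential; a direct sketch (the parameter values below are arbitrary, chosen only for illustration):

```python
import math

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

def kgen_ccdf(x, alpha, beta, kappa):
    """Complementary CDF of the kappa-generalized distribution:
    P(X > x) = exp_kappa(-beta * x**alpha)."""
    return exp_kappa(-beta * x ** alpha, kappa)
```

The construction interpolates between an exponential-like body (small x) and a Pareto power-law tail (large x), which is why it fits income data well across the whole range.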
Craft, David; Monz, Michael
2010-02-01
To introduce a method to simultaneously explore a collection of Pareto surfaces. The method will allow radiotherapy treatment planners to interactively explore treatment plans for different beam angle configurations as well as different treatment modalities. The authors assume a convex optimization setting and represent the Pareto surface for each modality or given beam set by a set of discrete points on the surface. Weighted averages of these discrete points produce a continuous representation of each Pareto surface. The authors calculate a set of Pareto surfaces and use linear programming to navigate across the individual surfaces, allowing switches between surfaces. The switches are organized such that the plan profits in the requested way, while trying to keep the change in dose as small as possible. The system is demonstrated on a phantom pancreas IMRT case using 100 different five beam configurations and a multicriteria formulation with six objectives. The system has intuitive behavior and is easy to control. Also, because the underlying linear programs are small, the system is fast enough to offer real-time exploration for the Pareto surfaces of the given beam configurations. The system presented offers a sound starting point for building clinical systems for multicriteria exploration of different modalities and offers a controllable way to explore hundreds of beam angle configurations in IMRT planning, allowing the users to focus their attention on the dose distribution and treatment planning objectives instead of spending excessive time on the technicalities of delivery.
Projections onto the Pareto surface in multicriteria radiation therapy optimization.
Bokrantz, Rasmus; Miettinen, Kaisa
2015-10-01
To eliminate or reduce the error relative to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as the initial navigated plan with respect to all objective functions. An augmented form of the projection is also suggested, in which dose-volume histogram constraints are used to prevent the projection from violating clinical goals. The projections were evaluated for intensity modulated radiation therapy delivered by step-and-shoot and sliding window techniques, and for spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate case and a head and neck case. The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. There are situations in which Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.
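The projection step described above can be illustrated with a toy discrete version (illustrative data; `project` is a hypothetical helper, not the authors' solver, which works on a continuous convex representation): among candidate plans, find one that is at least as good in every objective and best overall.

```python
# Sketch of the projection idea (hypothetical data): given a navigated plan
# and a discrete approximation of the Pareto surface, pick the candidate that
# is at least as good in every objective and best in total. All objectives
# are to be minimized.

def project(navigated, candidates):
    feasible = [c for c in candidates
                if all(ci <= ni for ci, ni in zip(c, navigated))]
    if not feasible:              # nothing dominates the navigated plan
        return navigated
    return min(feasible, key=sum)  # best total improvement among dominators

surface = [(1.0, 4.0), (1.6, 2.5), (2.0, 2.0), (3.0, 1.2)]
navigated = (2.2, 2.6)
print(project(navigated, surface))  # (2.0, 2.0): better in both objectives
```

The dose-volume-constrained variant in the paper corresponds to restricting the feasible set further before taking the minimum.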
Projections onto the Pareto surface in multicriteria radiation therapy optimization
International Nuclear Information System (INIS)
Bokrantz, Rasmus; Miettinen, Kaisa
2015-01-01
Purpose: To eliminate or reduce the error relative to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as the initial navigated plan with respect to all objective functions. An augmented form of the projection is also suggested, in which dose–volume histogram constraints are used to prevent the projection from violating clinical goals. The projections were evaluated for intensity modulated radiation therapy delivered by step-and-shoot and sliding window techniques, and for spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate case and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose–volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations in which Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan
Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.
Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K
2010-03-21
We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p < 0.05), and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
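The Pareto-based comparison of detector operating points used in this record rests on nondominated filtering, which can be sketched in a few lines (the operating points below are illustrative, not the study's data): keep a (sensitivity, false-positive-rate) pair only if no other point is at least as sensitive with no more false positives.

```python
# Minimal nondominated filter for detector operating points.
# Sensitivity is maximized, false-positive rate minimized.
# The data are illustrative, not from the study.

def pareto_front(points):
    front = []
    for s, fp in points:
        dominated = any(s2 >= s and fp2 <= fp and (s2, fp2) != (s, fp)
                        for s2, fp2 in points)
        if not dominated:
            front.append((s, fp))
    return sorted(front)

ops = [(0.80, 5.0), (0.85, 8.0), (0.85, 6.0), (0.90, 9.0), (0.78, 7.0)]
print(pareto_front(ops))  # [(0.8, 5.0), (0.85, 6.0), (0.9, 9.0)]
```

Evolutionary algorithms, as used in the study, explore the parameter space and return an approximation of this front rather than filtering a fixed list.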
A new mechanism for maintaining diversity of Pareto archive in multi-objective optimization
Czech Academy of Sciences Publication Activity Database
Hájek, J.; Szöllös, A.; Šístek, Jakub
2010-01-01
Vol. 41, No. 7-8 (2010), pp. 1031-1057 ISSN 0965-9978 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords: multi-objective optimization * micro-genetic algorithm * diversity * Pareto archive Subject RIV: BA - General Mathematics Impact factor: 1.004, year: 2010 http://www.sciencedirect.com/science/article/pii/S0965997810000451
Topology Identification of General Dynamical Network with Distributed Time Delays
International Nuclear Information System (INIS)
Zhao-Yan, Wu; Xin-Chu, Fu
2009-01-01
General dynamical networks with distributed time delays are studied. The topology of such a network is viewed as a set of unknown parameters that need to be identified. Auxiliary systems (also called network estimators) are designed to achieve this goal. Both linear feedback control and an adaptive strategy are applied in designing these network estimators. Based on linear matrix inequalities and the Lyapunov function method, a sufficient condition for achieving topology identification is obtained. The method can also monitor the switching topology of dynamical networks. Illustrative examples are provided to show its effectiveness.
Chiral perturbation theory for nucleon generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Diehl, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Manashov, A. [Regensburg Univ. (Germany). Inst. fuer Physik 1 - Theoretische Physik]|[Sankt-Petersburg State Univ. (Russian Federation). Dept. of Theoretical Physics; Schaefer, A. [Regensburg Univ. (Germany). Inst. fuer Physik 1 - Theoretische Physik
2006-08-15
We analyze the moments of the isosinglet generalized parton distributions H, E, H̃, Ẽ of the nucleon at one-loop order in heavy-baryon chiral perturbation theory. We discuss in detail the construction of the operators in the effective theory that are required to obtain all corrections to a given order in the chiral power counting. The results will serve to improve the extrapolation of lattice results to the chiral limit. (orig.)
Timelike Compton scattering off the neutron and generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Boer, M.; Guidal, M. [CNRS-IN2P3, Universite Paris-Sud, Institut de Physique Nucleaire d' Orsay, Orsay (France); Vanderhaeghen, M. [Johannes Gutenberg Universitaet, Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany)
2016-02-15
We study the exclusive photoproduction of an electron-positron pair on a neutron target in the Jefferson Lab energy domain. The reaction consists of two processes: Bethe-Heitler and Timelike Compton Scattering. The latter process potentially provides access to the Generalized Parton Distributions (GPDs) of the nucleon. We calculate all the unpolarized, single- and double-spin observables of the reaction and study their sensitivities to GPDs. (orig.)
Directory of Open Access Journals (Sweden)
Jorge Caldera-Serrano
2015-09-01
The reuse of the audiovisual collections of television networks is analyzed to determine whether the Pareto index holds, providing mechanisms for controlling and exploiting the least used part of the audiovisual collection. The Pareto correlation is found to hold not only for usage but also for the presence of thematic and onomastic elements in the archive and in the distribution of content, so forms of control are proposed both for the integration of information into the collection and for the allocation of resources in distribution. The paper also describes the Pareto index, Media Asset Management systems and the paradigm shift to digital, all essential for understanding the problems, and their solutions, in retrieval and in shaping the collection. Keywords: Information processing. Television. Electronic media. Information systems evaluation.
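The Pareto-type concentration this record tests for can be checked with a few lines of code (the usage counts and the 20% cutoff below are illustrative assumptions, not data from the study): compute what share of total reuse comes from the most-used fifth of archive items.

```python
# Sketch: measure Pareto-type concentration in archive usage, i.e. the share
# of total reuse contributed by the top `fraction` of items. Illustrative data.

def top_share(counts, fraction=0.2):
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * fraction))   # size of the "vital few" group
    return sum(ranked[:k]) / sum(ranked)

usage = [120, 95, 80, 10, 8, 6, 5, 4, 3, 2]   # reuse counts per archive item
print(round(top_share(usage), 2))  # 0.65: top 20% of items yield 65% of reuse
```

A value far above 0.2 signals the concentration the article discusses; the classic "80/20" rule is the special case where the share reaches 0.8.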
Citation distribution profile in Brazilian journals of general medicine.
Lustosa, Luiggi Araujo; Chalco, Mario Edmundo Pastrana; Borba, Cecília de Melo; Higa, André Eizo; Almeida, Renan Moritz Varnier Rodrigues
2012-01-01
Impact factors are currently the bibliometric index most used for evaluating scientific journals. However, the way in which they are used, for instance concerning the study or journal types analyzed, can markedly interfere with estimate reliability. This study aimed to analyze the citation distribution pattern in three Brazilian journals of general medicine. This was a descriptive study based on numbers of citations of scientific studies published by three Brazilian journals of general medicine. The journals analyzed were São Paulo Medical Journal, Clinics and Revista da Associação Médica Brasileira. This survey used data available from the Institute for Scientific Information (ISI) platform, from which the total number of papers published in each journal in 2007-2008 and the number of citations of these papers in 2009 were obtained. From these data, the citation distribution was derived and journal impact factors (average number of citations) were estimated. These factors were then compared with those directly available from the ISI Journal of Citation Reports (JCR). Respectively, 134, 203 and 192 papers were published by these journals during the period analyzed. The observed citation distributions were highly skewed, such that many papers had few citations and a small percentage had many citations. It was not possible to identify any specific pattern for the most cited papers or to exactly reproduce the JCR impact factors. Use of measures like "impact factors", which characterize citations through averages, does not adequately represent the citation distribution in the journals analyzed.
A Pareto Optimal Auction Mechanism for Carbon Emission Rights
Directory of Open Access Journals (Sweden)
Mingxi Wang
2014-01-01
Carbon emission rights do not fit well into the framework of existing multi-item auction mechanisms because of their unique features. This paper proposes a new auction mechanism which converges to a unique Pareto optimal equilibrium in a finite number of periods. In the proposed auction mechanism, the assignment outcome is Pareto efficient and the carbon emission rights' resources are efficiently used. For commercial application and theoretical completeness, both discrete and continuous markets, represented by discrete and continuous bid prices, respectively, are examined, and the results show the existence of a Pareto optimal equilibrium under the constraint of individual rationality. With no ties, the Pareto optimal equilibrium can further be proven to be unique.
Variational principle for the Pareto power law.
Chakraborti, Anirban; Patriarca, Marco
2009-11-27
A mechanism is proposed for the appearance of power-law distributions in various complex systems. It is shown that in a conservative mechanical system composed of subsystems with different numbers of degrees of freedom a robust power-law tail can appear in the equilibrium distribution of energy as a result of certain superpositions of the canonical equilibrium energy densities of the subsystems. The derivation only uses a variational principle based on the Boltzmann entropy, without assumptions outside the framework of canonical equilibrium statistical mechanics. Two examples are discussed, free diffusion on a complex network and a kinetic model of wealth exchange. The mechanism is illustrated in the general case through an exactly solvable mechanical model of a dimensionally heterogeneous system.
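The superposition mechanism described in this abstract can be sketched schematically (the notation here is generic, not taken verbatim from the paper): each subsystem with a given number of degrees of freedom contributes a canonical, gamma-type energy density, and mixing over subsystem dimensions can turn the exponential cutoff into a power-law tail.

```latex
% Canonical equilibrium density of a subsystem with n degrees of freedom
p_n(E) \;\propto\; E^{\,n/2-1}\, e^{-\beta E}
% Superposition over a distribution f(n) of subsystem dimensions
p(E) \;=\; \int f(n)\, p_n(E)\, \mathrm{d}n
% For suitable weights f(n), the integral develops an algebraic tail
p(E) \;\sim\; E^{-\alpha} \quad (E \to \infty)
```

In the paper's framework, the Boltzmann-entropy variational principle determines which superpositions of these canonical densities arise at equilibrium.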
Phase transitions in Pareto optimal complex networks.
Seoane, Luís F; Solé, Ricard
2015-09-01
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks, or in graphs where cost constraints are at work, as occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need for drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization play a determining role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior
Directory of Open Access Journals (Sweden)
Carrillo RafaelE
2010-01-01
Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.
The geometry of the Pareto front in biological phenotype space
Sheftel, Hila; Shoval, Oren; Mayo, Avi; Alon, Uri
2013-01-01
When organisms perform a single task, selection leads to phenotypes that maximize performance at that task. When organisms need to perform multiple tasks, a trade-off arises because no phenotype can optimize all tasks. Recent work addressed this question, and assumed that the performance at each task decays with distance in trait space from the best phenotype at that task. Under this assumption, the best-fitness solutions (termed the Pareto front) lie on simple low-dimensional shapes in trait space: line segments, triangles and other polygons. The vertices of these polygons are specialists at a single task. Here, we generalize this finding, by considering performance functions of general form, not necessarily functions that decay monotonically with distance from their peak. We find that, except for performance functions with highly eccentric contours, simple shapes in phenotype space are still found, but with mildly curving edges instead of straight ones. In a wide range of systems, complex data on multiple quantitative traits, which might be expected to fill a high-dimensional phenotype space, is predicted instead to collapse onto low-dimensional shapes; phenotypes near the vertices of these shapes are predicted to be specialists, and can thus suggest which tasks may be at play. PMID:23789060
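The geometric claim in this abstract, that with distance-decaying performance functions the Pareto front collapses onto the segment between archetypes, can be verified numerically on a toy example (the 2-D trait space, grid, and archetype positions are illustrative assumptions):

```python
# Toy check of the geometry: two tasks whose performances decay with Euclidean
# distance from task-specific optima (archetypes). Nondominated phenotypes on
# a grid should lie on the segment joining the two archetypes.

import itertools, math

A1, A2 = (0.0, 0.0), (1.0, 1.0)        # archetypes: best phenotype per task

def perf(x, a):                        # performance: higher is better
    return -math.dist(x, a)

grid = [(i / 20, j / 20) for i in range(21) for j in range(21)]
front = [x for x in grid
         if not any(perf(y, A1) >= perf(x, A1) and perf(y, A2) >= perf(x, A2)
                    and y != x for y in grid)]

# All nondominated phenotypes lie on the diagonal segment between A1 and A2
print(all(abs(p[0] - p[1]) < 1e-9 for p in front))  # True
```

With more than two tasks, the same construction yields triangles and higher polygons, matching the polygonal shapes described in the abstract; non-Euclidean performance functions bend the edges, as the paper's generalization shows.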
Citation distribution profile in Brazilian journals of general medicine
Directory of Open Access Journals (Sweden)
Luiggi Araujo Lustosa
CONTEXT AND OBJECTIVE: Impact factors are currently the bibliometric index most used for evaluating scientific journals. However, the way in which they are used, for instance concerning the study or journal types analyzed, can markedly interfere with estimate reliability. This study aimed to analyze the citation distribution pattern in three Brazilian journals of general medicine. DESIGN AND SETTING: This was a descriptive study based on numbers of citations of scientific studies published by three Brazilian journals of general medicine. METHODS: The journals analyzed were São Paulo Medical Journal, Clinics and Revista da Associação Médica Brasileira. This survey used data available from the Institute for Scientific Information (ISI) platform, from which the total number of papers published in each journal in 2007-2008 and the number of citations of these papers in 2009 were obtained. From these data, the citation distribution was derived and journal impact factors (average number of citations) were estimated. These factors were then compared with those directly available from the ISI Journal of Citation Reports (JCR). RESULTS: Respectively, 134, 203 and 192 papers were published by these journals during the period analyzed. The observed citation distributions were highly skewed, such that many papers had few citations and a small percentage had many citations. It was not possible to identify any specific pattern for the most cited papers or to exactly reproduce the JCR impact factors. CONCLUSION: Use of measures like "impact factors", which characterize citations through averages, does not adequately represent the citation distribution in the journals analyzed.
Helicity-dependent generalized parton distributions for nonzero skewness
Energy Technology Data Exchange (ETDEWEB)
Mondal, Chandan [Chinese Academy of Sciences, Institute of Modern Physics, Lanzhou (China)
2017-09-15
We investigate the helicity-dependent generalized parton distributions (GPDs) in momentum as well as transverse position (impact) space for the u and d quarks in a proton when the momentum transfer in both the transverse and the longitudinal directions is nonzero. The GPDs are evaluated using the light-front wave functions of a quark-diquark model for the nucleon, where the wave functions are constructed via the soft-wall AdS/QCD correspondence. We also express the GPDs in the boost-invariant longitudinal position space. (orig.)
Modeling the brain morphology distribution in the general aging population
Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.
2016-03-01
Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.
Classification as clustering: a Pareto cooperative-competitive GP approach.
McIntyre, Andrew R; Heywood, Malcolm I
2011-01-01
Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.
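The Gaussian local-membership step described above can be sketched in isolation (the helper name and the outcome values are illustrative assumptions, not the authors' implementation): cluster an individual's outcomes, then use the cluster's mean and spread as the parameters of a Gaussian membership function.

```python
# Sketch: parameterize a Gaussian local membership function from the cluster
# of outcomes an evolved individual produces on its exemplars. Illustrative
# data; the GP mapping that produces these outcomes is not modeled here.

import math, statistics

def fit_membership(outcomes):
    mu = statistics.fmean(outcomes)
    sigma = statistics.pstdev(outcomes) or 1.0   # guard degenerate clusters
    return lambda z: math.exp(-((z - mu) ** 2) / (2 * sigma ** 2))

member = fit_membership([0.9, 1.1, 1.0, 0.95, 1.05])
print(member(1.0) > member(2.0))  # True: outcomes near the cluster score higher
```

An individual then claims only the exemplars whose outcomes fall inside its membership region, which is what lets team members specialize on subsets of the classification task.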
Estimating the parameters of a generalized lambda distribution
International Nuclear Information System (INIS)
Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Rupin, N.; Bigerelle, M.; Wilcox, R.; Fournier, B.
2007-01-01
The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution time with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs) and that reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice of u is studied, and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of previously proposed estimators. Naturally, all estimators are biased, and here it is found that the bias becomes negligible with sample sizes n ≥ 2×10^3. The 0.025 and 0.975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed it is shown that, up to n = 10^6, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests. (authors)
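The ingredient shared by percentile-type GLD estimators is that the GLD is defined through its quantile function, so fitting reduces to matching selected sample quantiles. A minimal sketch (RS parameterization; the summary statistics and the tuning quantile `u` follow the usual percentile-method recipe, and the parameter values are illustrative):

```python
# Sketch: the GLD (RS parameterization) is defined by its quantile function,
# so percentile-type estimators match quantile-based summary statistics to
# their sample counterparts. Parameter values below are illustrative.

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function Q(u) of the RS-parameterized GLD."""
    return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

def summary(lams, u=0.1):
    """Four percentile-based statistics, with u the tuning quantile."""
    q = lambda p: gld_quantile(p, *lams)
    median = q(0.5)
    spread = q(1 - u) - q(u)                     # scale
    skew = (q(1 - u) + q(u) - 2 * median) / spread
    kurt = (q(0.75) - q(0.25)) / spread          # tail/peakedness ratio
    return median, spread, skew, kurt

print(summary((0.0, 1.0, 0.5, 0.5)))  # symmetric GLD: median and skew are 0
```

Estimation then solves for the four λ's that equate these statistics with the sample values; the abstract's point is that the best choice of `u` depends on the modeling goal.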
Pareto-optimal multi-objective design of airplane control systems
Schy, A. A.; Johnson, K. G.; Giesy, D. P.
1980-01-01
A constrained minimization algorithm for the computer aided design of airplane control systems to meet many requirements over a set of flight conditions is generalized using the concept of Pareto-optimization. The new algorithm yields solutions on the boundary of the achievable domain in objective space in a single run, whereas the older method required a sequence of runs to approximate such a limiting solution. However, Pareto-optimality does not guarantee a satisfactory design, since such solutions may emphasize some objectives at the expense of others. The designer must still interact with the program to obtain a well-balanced set of objectives. Using the example of a fighter lateral stability augmentation system (SAS) design over five flight conditions, several effective techniques are developed for obtaining well-balanced Pareto-optimal solutions. For comparison, one of these techniques is also used in a recently developed algorithm of Kreisselmeier and Steinhauser, which replaces the hard constraints with soft constraints, using a special penalty function. It is shown that comparable results can be obtained.
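The Kreisselmeier-Steinhauser device mentioned at the end of this abstract, replacing hard constraints with a smooth penalty, has a standard form that can be sketched directly (the constraint values below are illustrative): a single smooth envelope that upper-bounds the worst constraint and tightens as the aggregation parameter grows.

```python
# Sketch of the Kreisselmeier-Steinhauser aggregation: hard constraints
# g_i(x) <= 0 are replaced by one smooth function
#     KS(g) = (1/rho) * ln( sum_i exp(rho * g_i) ),
# which upper-bounds max_i g_i and approaches it as rho -> infinity.
# Constraint values below are illustrative.

import math

def ks(g, rho=50.0):
    m = max(g)                                   # shift for numerical stability
    return m + math.log(sum(math.exp(rho * (gi - m)) for gi in g)) / rho

g = [-0.3, 0.1, -0.05]                           # constraint values at a design
print(ks(g, 5.0) >= max(g))                      # True: KS overestimates the max
print(abs(ks(g, 500.0) - max(g)) < 1e-3)         # True: tight for large rho
```

Minimizing KS therefore drives down the worst objective or constraint smoothly, which is why it serves as a soft-constraint substitute in the algorithm compared here.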
Tolerating Correlated Failures for Generalized Cartesian Distributions via Bipartite Matching
International Nuclear Information System (INIS)
Ali, Nawab; Krishnamoorthy, Sriram; Halappanavar, Mahantesh; Daily, Jeffrey A.
2011-01-01
Faults are expected to play an increasingly important role in how algorithms and applications are designed to run on future extreme-scale systems. A key ingredient of any approach to fault tolerance is effective support for fault tolerant data storage. A typical application execution consists of phases in which certain data structures are modified while others are read-only. Often, read-only data structures constitute a large fraction of total memory consumed. Fault tolerance for read-only data can be ensured through the use of checksums or parities, without resorting to expensive in-memory duplication or checkpointing to secondary storage. In this paper, we present a graph-matching approach to compute and store parity data for read-only matrices that is compatible with fault tolerant linear algebra (FTLA). Typical approaches only support blocked data distributions with each process holding one block and the parity located on additional processes. The matrices are assumed to be blocked by a Cartesian grid with each block assigned to a process. We consider a generalized distribution in which each process can be assigned arbitrary blocks. We also account for the fact that multiple processes might be part of the same failure unit, say an SMP node. The flexibility enabled by our novel application of graph matching extends fault tolerance support to data distributions beyond those supported by prior work. We evaluate the matching implementations and the cost to compute the parity and recover lost data, demonstrating the low overhead incurred by our approach.
Dual parametrization of generalized parton distributions in two equivalent representations
International Nuclear Information System (INIS)
Müller, D.; Polyakov, M.V.; Semenov-Tian-Shansky, K.M.
2015-01-01
The dual parametrization and the Mellin-Barnes integral approach represent two frameworks for handling the double partial wave expansion of generalized parton distributions (GPDs) in the conformal partial waves and in the t-channel SO(3) partial waves. Within the dual parametrization framework, GPDs are represented as integral convolutions of forward-like functions whose Mellin moments generate the conformal moments of GPDs. The Mellin-Barnes integral approach is based on the analytic continuation of the GPD conformal moments to complex values of the conformal spin. GPDs are then represented as Mellin-Barnes-type integrals in the complex conformal spin plane. In this paper we explicitly show the equivalence of these two independently developed GPD representations. Furthermore, we clarify the notions of the J=0 fixed pole and the D-form factor. We also provide some insight into GPD modeling and map the phenomenologically successful Kumerički-Müller GPD model to the dual parametrization framework by presenting the set of corresponding forward-like functions. Finally, we construct a reparametrization procedure that allows the double distribution representation of GPDs to be recast in the Mellin-Barnes integral framework, and we present the explicit formula for mapping double distributions into the space of double partial wave amplitudes with complex conformal spin.
Pareto-optimal estimates that constrain mean California precipitation change
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
Nucleon form factors, generalized parton distributions and quark angular momentum
Energy Technology Data Exchange (ETDEWEB)
Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kroll, Peter [Bergische Univ., Wuppertal (Germany). Fachbereich Physik; Regensburg Univ. (Germany). Institut fuer Theoretische Physik
2013-02-15
We extract the individual contributions from u and d quarks to the Dirac and Pauli form factors of the proton, after a critical examination of the available measurements of electromagnetic nucleon form factors. From these data we determine generalized parton distributions for valence quarks, assuming a particular form for their functional dependence. The result allows us to study various aspects of nucleon structure in the valence region. In particular, we evaluate Ji's sum rule and estimate the total angular momentum carried by valence quarks at the scale μ = 2 GeV to be J^u_v = 0.230^{+0.009}_{-0.024} and J^d_v = -0.004^{+0.010}_{-0.016}.
Nucleon form factors, generalized parton distributions and quark angular momentum
International Nuclear Information System (INIS)
Diehl, Markus; Kroll, Peter; Regensburg Univ.
2013-02-01
We extract the individual contributions from u and d quarks to the Dirac and Pauli form factors of the proton, after a critical examination of the available measurements of electromagnetic nucleon form factors. From these data we determine generalized parton distributions for valence quarks, assuming a particular form for their functional dependence. The result allows us to study various aspects of nucleon structure in the valence region. In particular, we evaluate Ji's sum rule and estimate the total angular momentum carried by valence quarks at the scale μ = 2 GeV to be J^u_v = 0.230^{+0.009}_{-0.024} and J^d_v = -0.004^{+0.010}_{-0.016}.
Moments of nucleon generalized parton distributions from lattice QCD
International Nuclear Information System (INIS)
Alexandrou, C.; Cyprus Institute, Nicosia; Carbonell, J.; Harraud, P.A.; Papinutto, M.; Constantinou, M.; Kallidonis, C.; Guichon, P.; Jansen, K.; Korzec, T.; Humboldt Univ. Berlin
2011-07-01
We present results on the lower moments of the nucleon generalized parton distributions within lattice QCD using two dynamical flavors of degenerate twisted mass fermions. Our simulations are performed on lattices with three different values of the lattice spacings, namely a=0.089 fm, a=0.070 fm and a=0.056 fm, allowing the investigation of cut-off effects. The volume dependence is examined using simulations on two lattices of spatial length L=2.1 fm and L=2.8 fm. The simulations span pion masses in the range of 260-470 MeV. Our results are renormalized nonperturbatively and the values are given in the MS scheme at a scale μ=2 GeV. They are chirally extrapolated to the physical point in order to compare with experiment. The consequences of these results on the spin carried by the quarks in the nucleon are investigated. (orig.)
Residual distribution for general time-dependent conservation laws
International Nuclear Information System (INIS)
Ricchiuto, Mario; Csik, Arpad; Deconinck, Herman
2005-01-01
We consider the second-order accurate numerical solution of general time-dependent hyperbolic conservation laws over unstructured grids in the framework of the Residual Distribution method. In order to achieve full conservation of the linear, monotone and first-order space-time schemes of (Csik et al., 2003) and (Abgrall et al., 2000), we extend the conservative residual distribution (CRD) formulation of (Csik et al., 2002) to prismatic space-time elements. We then study the design of second-order accurate and monotone schemes via the nonlinear mapping of the local residuals of linear monotone schemes. We derive sufficient and necessary conditions for the well-posedness of the mapping. We prove that the schemes obtained with the CRD formulation satisfy these conditions by construction. Thus the nonlinear schemes proposed in this paper are always well defined. The performance of the linear and nonlinear schemes is evaluated on a series of test problems involving the solution of the Euler equations and of a two-phase flow model. We consider the resolution of strong shocks and complex interacting flow structures. The results demonstrate the robustness, accuracy and non-oscillatory character of the proposed schemes.
Birds shed RNA-viruses according to the pareto principle.
Jankowski, Mark D; Williams, Christopher J; Fair, Jeanne M; Owen, Jennifer C
2013-01-01
A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.
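The Gini coefficient and the "~20% shed 80%" summary used above can be illustrated with simulated Weibull counts. The shape parameter below is chosen only so that the theoretical Gini lands near the reported mean of 0.687; it is not fitted to the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(x):
    """Gini coefficient: 0 = perfectly equal shedding, 1 = maximal inequality."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def top_share_fraction(x, share=0.8):
    """Fraction of individuals (largest first) accounting for `share` of the total."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    cum = np.cumsum(x)
    return (np.searchsorted(cum, share * cum[-1]) + 1) / x.size

# Illustrative "viral counts": Weibull with shape 0.6, whose theoretical
# Gini, 1 - 2**(-1/0.6) ≈ 0.685, is close to the reported mean of 0.687.
counts = rng.weibull(0.6, size=10_000)
print(gini(counts))                # close to 0.685
print(top_share_fraction(counts))  # roughly 0.2: ~20% of birds shed 80% of virus
```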
Birds shed RNA-viruses according to the pareto principle.
Directory of Open Access Journals (Sweden)
Mark D Jankowski
Full Text Available A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.
Chiral perturbation theory for generalized parton distributions and baryon distribution amplitudes
Energy Technology Data Exchange (ETDEWEB)
Wein, Philipp
2016-05-06
In this thesis we apply low-energy effective field theory to the first moments of generalized parton distributions and to baryon distribution amplitudes, which are both highly relevant for the parametrization of the nonperturbative part in hard processes. These quantities yield complementary information on hadron structure, since the former treat hadrons as a whole and, thus, give information about the (angular) momentum carried by an entire parton species on average, while the latter parametrize the momentum distribution within an individual Fock state. By performing one-loop calculations within covariant baryon chiral perturbation theory, we obtain sensible parametrizations of the quark mass dependence that are ideally suited for the subsequent analysis of lattice QCD data.
Directory of Open Access Journals (Sweden)
I. K. Romanova
2015-01-01
Full Text Available The article concerns multi-criteria optimization (MCO), which assumes that the operation quality criteria of a system are independent, and specifies a way to improve the values of these criteria. Mutual contradiction of some criteria is a major problem in MCO. One of the most important areas of research is to obtain the so-called Pareto-optimal options. The subject of research is the Pareto front, also called the Pareto frontier. The article discusses front classifications by geometric representation for the case of a two-criterion task. It presents a mathematical description of the front characteristics using gradients and their projections. A review of current domestic and foreign literature has revealed that the aim of works on constructing the Pareto frontier is to conduct research under conditions of uncertainty, in the stochastic statement, with no restrictions. The topology of both the two- and the three-dimensional case is considered. The targets of modern applications are multi-agent systems and groups of players in differential games. However, none of the considered works addresses active management of the front. The objective of this article is to discuss the Pareto frontier research problem in a new formulation, namely, with the active participation of the developers of the systems and/or the decision makers (DM) in the management of the Pareto frontier. Such a formulation differs from the traditionally accepted approach based on the analysis of already existing solutions. The article discusses three ways to describe the quality of an object management system. The first is to use direct quality criteria for the model of a closed system as the vibrational level of the general form. The second is to study a specific two-loop aircraft control system using the angular velocity and normal acceleration loops. The third is the use of integrated quality criteria. In all three cases, the selected criteria are
PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning
International Nuclear Information System (INIS)
Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew
2011-01-01
Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows
PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.
Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew
2011-09-01
In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number
Can we reach Pareto optimal outcomes using bottom-up approaches?
V. Sanchez-Anguix (Victor); R. Aydoğan (Reyhan); T. Baarslag (Tim); C.M. Jonker (Catholijn)
2016-01-01
Classically, disciplines like negotiation and decision making have focused on reaching Pareto optimal solutions due to their stability and efficiency properties. Despite the fact that many practical and theoretical algorithms have successfully attempted to provide Pareto optimal solutions,
Best Statistical Distribution of flood variables for Johor River in Malaysia
Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.
2012-12-01
A complex flood event is always characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July - June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggested that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. Comparison between Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the Cumulative Distribution Function of peakflow
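A minimal sketch of this kind of fit-and-test workflow, here with SciPy's Generalized Pareto implementation on synthetic data (the real study used 45 years of hourly Johor River records, which are not reproduced here):

```python
import numpy as np
from scipy import stats

# Illustrative peaks-over-threshold excesses; parameters are arbitrary.
sample = stats.genpareto.rvs(c=0.2, scale=100.0, size=500, random_state=42)

# Fit the Generalized Pareto Distribution; fixing the location at zero is
# the usual choice when the data are excesses over a known threshold.
shape, loc, scale = stats.genpareto.fit(sample, floc=0)

# Kolmogorov-Smirnov goodness of fit at the 5% significance level.
ks = stats.kstest(sample, "genpareto", args=(shape, loc, scale))
print(ks.pvalue > 0.05)  # → True: the fitted GPD is not rejected
```

The same pattern, with `stats.genextreme` in place of `stats.genpareto`, covers the GEV fit mentioned for peakflow.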
The Urbanik generalized convolutions in the non-commutative ...
Indian Academy of Sciences (India)
−sν(dx) < ∞. Now we apply this construction to the Kendall convolution case, starting with the weakly stable measure δ1. Example 1. Let △ be the Kendall convolution, i.e. the generalized convolution with the probability kernel δ1△δa = (1 − a)δ1 + aπ2 for a ∈ [0, 1], where π2 is the Pareto distribution with the density π2(dx) =.
Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing
Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.
2006-01-01
The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval
Efficient approximation of black-box functions and Pareto sets
Rennen, G.
2009-01-01
In the case of time-consuming simulation models or other so-called black-box functions, we determine a metamodel which approximates the relation between the input- and output-variables of the simulation model. To solve multi-objective optimization problems, we approximate the Pareto set, i.e. the
Overview of contaminant arrival distributions as general evaluation requirements
International Nuclear Information System (INIS)
Anon.
1977-01-01
The environmental consequences of subsurface contamination problems can be completely and effectively evaluated by fulfilling the following five requirements: Determine each present or future outflow boundary of contaminated groundwater; provide the location/arrival-time distributions; provide the location/outflow-quantity distributions; provide these distributions for each individual chemical or biological constituent of environmental importance; and use the arrival distributions to determine the quantity and concentration of each contaminant that will interface with the environment as time passes. The arrival distributions on which these requirements are based provide a reference point for communication among scientists and public decision makers by enabling complicated scientific analyses to be presented as simple summary relationships
A generalization information management system applied to electrical distribution
Energy Technology Data Exchange (ETDEWEB)
Geisler, K.I.; Neumann, S.A.; Nielsen, T.D.; Bower, P.K. (Empros Systems International (US)); Hughes, B.A.
1990-07-01
This article presents a system solution approach that meets the requirements being imposed by industry trends and the electric utility customer. Specifically, the solution addresses electric distribution management systems. Electrical distribution management is a particularly well suited area of application because it involves a high diversity of tasks, which are currently supported by a proliferation of automated islands. Islands of automation which currently exist include (among others) distribution operations, load management, automated mapping, facility management, work order processing, and planning.
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence to the GEV distribution of these monthly, quarterly, half yearly and yearly maxima. Return levels are then estimated for prediction and planning purposes. The results show that all maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, where the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. The return level estimate, i.e., the level (in this study, the return amount) that is expected to be exceeded on average once every T time periods, starts to appear in the confidence interval at T = 50 for the quarterly, half yearly and yearly maxima.
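The return-level calculation described here can be sketched with SciPy's GEV implementation on synthetic yearly maxima (all numbers are illustrative assumptions, not the Malaysian share-return data):

```python
import numpy as np
from scipy import stats

# Illustrative yearly maximum returns; parameters are arbitrary.
yearly_max = stats.genextreme.rvs(c=-0.1, loc=0.05, scale=0.02,
                                  size=60, random_state=1)

# Fit the stationary GEV by maximum likelihood.
c, loc, scale = stats.genextreme.fit(yearly_max)

# T-period return level: the quantile exceeded on average once every T periods.
T = 50
return_level = stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(return_level)  # the amount expected to be exceeded once in ~50 periods
```

The non-stationary Model 2 of the study would instead let `loc` grow linearly with time, which requires a custom likelihood rather than the stock `fit` call.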
The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space
Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri
2015-01-01
When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes—phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass. PMID:26465336
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Nondominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity for a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space.
Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri
2015-10-01
When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes--phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass.
The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space.
Directory of Open Access Journals (Sweden)
Pablo Szekely
2015-10-01
Full Text Available When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes--phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass.
Extending the alias Monte Carlo sampling method to general distributions
International Nuclear Information System (INIS)
Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.
1991-01-01
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
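For the discrete case that the paper starts from, Walker's alias construction can be sketched as follows. This is a textbook version, not the authors' vectorized continuous-distribution extension:

```python
import random

def build_alias(probs):
    """Walker's alias tables for a discrete distribution (probs sum to 1)."""
    n = len(probs)
    prob = [p * n for p in probs]   # scale so the average bin height is 1.0
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                # bin s tops up from bin l's surplus
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """O(1) draw: pick a bin uniformly, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

random.seed(0)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0, 0, 0, 0]
for _ in range(100_000):
    counts[alias_sample(prob, alias)] += 1
print([c / 100_000 for c in counts])  # close to 0.1, 0.2, 0.3, 0.4
```

Each draw costs one uniform index and one comparison regardless of the table size, which is the speed advantage the abstract refers to.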
Generalization of Poisson distribution for the case of changing probability of consequential events
International Nuclear Information System (INIS)
Kushnirenko, E.
1995-01-01
The Poisson distribution is generalized to the case of changing probabilities of consequential events. It is shown that the classical Poisson distribution is a special case of this generalized distribution, obtained when the probabilities of the consequential events are constant. Using the generalized Poisson distribution makes it possible in some cases to obtain an analytical result instead of performing a Monte Carlo calculation.
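The abstract gives no formulas, but the idea of event probabilities that change with the number of preceding events can be pictured with a small simulation. The step-probability functions and all numbers below are our own illustrative assumptions: with a constant probability the count is approximately Poisson, while a probability that falls as events accumulate departs from it, showing underdispersion:

```python
import random

def count_events(p_of_k, trials, rng):
    """Count successes over `trials` Bernoulli steps, where the success
    probability p_of_k(k) may depend on the number k of events so far."""
    k = 0
    for _ in range(trials):
        if rng.random() < p_of_k(k):
            k += 1
    return k

rng = random.Random(0)
N, trials = 5_000, 500

# Constant probability: the classical (approximately Poisson) case, mean ~ 2.
const = [count_events(lambda k: 0.004, trials, rng) for _ in range(N)]

# Probability that falls as events accumulate: consequential events suppress
# later ones, a departure from the classical Poisson case.
damped = [count_events(lambda k: 0.004 / (1 + k), trials, rng) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(var(const) / mean(const))    # ~1: Poisson-like dispersion
print(var(damped) / mean(damped))  # < 1: self-limiting events are underdispersed
```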
Generalized parton distributions and transversity from full lattice QCD
Göckeler, M.; Hägler, Ph.; Horsley, R.; Pleiter, D.; Rakow, P. E. L.; Schäfer, A.; Schierholz, G.; Zanotti, J. M.; Qcdsf Collaboration
2005-06-01
We present here the latest results from the QCDSF collaboration for moments of generalized parton distributions and transversity in two-flavour QCD, including a preliminary analysis of the pion mass dependence.
International Nuclear Information System (INIS)
Gharari, Rahman; Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi
2016-01-01
In this research, for the first time, a new optimization method, i.e., the strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is sought for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (K_eff) to gain possibly longer operation cycles, along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare obtained results. In general, results reveal the acceptable performance and high strength of SPEA, particularly its new version, SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor
Estimations of parameters in Pareto reliability model in the presence of masked data
International Nuclear Information System (INIS)
Sarhan, Ammar M.
2003-01-01
Estimation of the parameters of the individual lifetime distributions of the components of a series system is considered in this paper, based on masked system life test data. We consider a series system of two independent components, each having a Pareto-distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are carried out in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained
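For a single (unmasked) Pareto sample with known scale x_m, the maximum likelihood estimator mentioned above has a closed form, alpha_hat = n / Σ log(x_i / x_m). A minimal sketch of that special case only; the paper's masked, two-component setting is considerably more involved:

```python
import math
import random

def pareto_mle_alpha(sample, x_m=1.0):
    """Closed-form ML estimate of the Pareto shape alpha when the
    scale x_m is known: alpha_hat = n / sum(log(x_i / x_m))."""
    return len(sample) / sum(math.log(x / x_m) for x in sample)

# Seeded check against random.paretovariate, which draws from a
# Pareto distribution with scale x_m = 1.
rng = random.Random(7)
sample = [rng.paretovariate(3.0) for _ in range(5000)]
alpha_hat = pareto_mle_alpha(sample)
```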
Risk finance for catastrophe losses with Pareto-calibrated Lévy-stable severities.
Powers, Michael R; Powers, Thomas Y; Gao, Siwei
2012-11-01
For catastrophe losses, the conventional risk finance paradigm of enterprise risk management identifies transfer, as opposed to pooling or avoidance, as the preferred solution. However, this analysis does not necessarily account for differences between light- and heavy-tailed characteristics of loss portfolios. Of particular concern are the decreasing benefits of diversification (through pooling) as the tails of severity distributions become heavier. In the present article, we study a loss portfolio characterized by nonstochastic frequency and a class of Lévy-stable severity distributions calibrated to match the parameters of the Pareto II distribution. We then propose a conservative risk finance paradigm that can be used to prepare the firm for worst-case scenarios with regard to both (1) the firm's intrinsic sensitivity to risk and (2) the heaviness of the severity's tail. © 2012 Society for Risk Analysis.
Learning general phonological rules from distributional information: a computational model.
Calamaro, Shira; Jarosz, Gaja
2015-04-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.
Pareto-depth for multiple-query image retrieval.
Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O
2015-02-01
Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state of the art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
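The core of a Pareto-front retrieval scheme is a dominance check over per-query dissimilarities. A minimal 2-D sketch (both coordinates to be minimized, distinct points assumed), ignoring the paper's manifold-ranking component:

```python
def pareto_front(points):
    """Return the non-dominated subset (first Pareto front) of a list
    of 2-D points, where both coordinates are to be minimized.
    Assumes the points are distinct."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

In the retrieval setting, each point would be a database item's pair of dissimilarities to the two query images, and items on the first fronts are returned first.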
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
Small Sample Robust Testing for Normality against Pareto Tails
Czech Academy of Sciences Publication Activity Database
Stehlík, M.; Fabián, Zdeněk; Střelec, L.
2012-01-01
Roč. 41, č. 7 (2012), s. 1167-1194 ISSN 0361-0918 Grant - others:Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords : consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012
Inference for exponentiated general class of distributions based on record values
Directory of Open Access Journals (Sweden)
Samah N. Sindi
2017-09-01
The main objective of this paper is to suggest and study a new exponentiated general class (EGC) of distributions. Maximum likelihood, Bayesian and empirical Bayesian estimators of the parameter of the EGC of distributions based on lower record values are obtained. Furthermore, Bayesian prediction of future records is considered. Based on lower record values, the exponentiated Weibull distribution, its special cases, and the exponentiated Gompertz distribution are applied as members of the EGC of distributions.
Pareto optimal design of sectored toroidal superconducting magnet for SMES
Energy Technology Data Exchange (ETDEWEB)
Bhunia, Uttam, E-mail: ubhunia@vecc.gov.in; Saha, Subimal; Chakrabarti, Alok
2014-10-15
Highlights: • The optimization approach minimizes both the magnet size and the necessary cable length of a sectored toroidal SMES unit. • The design approach is suitable for low temperature superconducting cable for a medium size SMES unit. • It investigates coil parameters with respect to practical engineering aspects. - Abstract: A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of necessary superconductor length and torus overall size or volume, which determines a significant part of the cost of realizing SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in required superconducting cable length, or vice versa. The final choice among Pareto optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low temperature niobium–titanium based Rutherford type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with reasonably acceptable accuracy.
Multicriteria Similarity-Based Anomaly Detection Using Pareto Depth Analysis.
Hsiao, Ko-Jen; Xu, Kevin S; Calder, Jeff; Hero, Alfred O
2016-06-01
We consider the problem of identifying patterns in a data set that exhibits anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g., as measured by the nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains, there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including nonmetric measures, and one can test for anomalies by scalarizing using a nonnegative linear combination of them. If the relative importance of the different dissimilarity measures are not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multicriteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria, and shows superior performance on experiments with synthetic and real data sets.
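Pareto depth can be illustrated by repeatedly peeling non-dominated fronts: the first front gets depth 1, the front of the remainder depth 2, and so on. This is a simplified sketch of the depth idea only; the paper's PDA algorithm operates on dyads of multi-criteria dissimilarities, which we omit:

```python
def dominates(p, q):
    """p dominates q if it is no worse in every coordinate and the
    points differ (minimization, distinct points assumed)."""
    return all(a <= b for a, b in zip(p, q)) and p != q

def pareto_depths(points):
    """Assign each point its Pareto depth by front peeling."""
    remaining = list(points)
    depth = {}
    d = 1
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        for p in front:
            depth[p] = d
        remaining = [p for p in remaining if p not in front]
        d += 1
    return depth
```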
Computing gap free Pareto front approximations with stochastic search algorithms.
Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali
2010-01-01
Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of epsilon-dominance. Though bounds on the quality of the limit approximation (which are entirely determined by the archiving strategy and the value of epsilon) have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we are aiming in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of epsilon-dominance can be integrated into this approach in a suitable way.
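Epsilon-dominance archiving can be sketched with box indices: each point is mapped to an epsilon-box, the archive keeps at most one representative per box, and boxes dominated by a newcomer's box are discarded. This is a simplified variant for minimization; real archivers (including those in the paper) also resolve ties inside a box, whereas here an incumbent always wins:

```python
import math

def eps_box(p, eps):
    """Index of the epsilon-box containing point p (minimization)."""
    return tuple(math.floor(x / eps) for x in p)

def eps_archive_update(archive, p, eps):
    """Insert p into an epsilon-Pareto archive of tuples."""
    b = eps_box(p, eps)
    for q in archive:
        # p's box is dominated by (or equal to) an occupied box: reject p.
        if all(x <= y for x, y in zip(eps_box(q, eps), b)):
            return archive
    # Drop incumbents whose boxes are dominated by p's box, then add p.
    archive = [q for q in archive
               if not all(x <= y for x, y in zip(b, eps_box(q, eps)))]
    return archive + [p]
```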
Entropy-based derivation of generalized distributions for hydrometeorological frequency analysis
Chen, Lu; Singh, Vijay P.
2018-02-01
Frequency analysis of hydrometeorological and hydrological extremes is needed for the design of hydraulic and civil infrastructure facilities as well as water resources management. A multitude of distributions have been employed for frequency analysis of these extremes. However, no single distribution has been accepted as a global standard. Employing entropy theory, this study derived five generalized distributions for frequency analysis that used different kinds of information encoded as constraints. These distributions were the generalized gamma (GG), the generalized beta distribution of the second kind (GB2), and the Halphen type A (Hal-A), Halphen type B (Hal-B) and Halphen type inverse B (Hal-IB) distributions, among which the GG and GB2 distributions were previously derived by Papalexiou and Koutsoyiannis (2012), while the Halphen family is first derived using entropy theory in this paper. The entropy theory allowed the parameters of the distributions to be estimated in terms of the constraints used for their derivation. The distributions were tested using extreme daily and hourly rainfall data. Results show that the root mean square error (RMSE) values were very small, indicating that the five generalized distributions fit the extreme rainfall data well. Among them, according to the Akaike information criterion (AIC) values, the GB2 and the Halphen family generally gave a better fit. Therefore, these generalized distributions are among the best choices for frequency analysis. The entropy-based derivation leads to a new way of performing frequency analysis of hydrometeorological extremes.
International Nuclear Information System (INIS)
Jiang Haimei; Liu Xinjian; Qiu Lin; Li Fengju
2014-01-01
Based on the meteorological data from weather stations around several domestic nuclear power plants, the statistical results of extreme minimum temperatures, minimum central pressures of tropical cyclones and some other parameters are calculated using the extreme value type I distribution function (EV-I), the generalized extreme value distribution function (GEV) and the generalized Pareto distribution function (GP), respectively. The influence of different distribution functions and parameter estimation methods on the statistical results of extreme values is investigated. Results indicate that the generalized extreme value function has better applicability than the other two distribution functions in the determination of standard meteorological parameters for nuclear power plants. (authors)
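For the GP branch of such analyses, the threshold-excess CDF and quantile have closed forms when the shape ξ ≠ 0. A minimal sketch with the threshold taken as 0 (so x is an excess over the threshold); this illustrates the distribution only, not the study's parameter estimation procedure:

```python
def gpd_cdf(x, sigma, xi):
    """CDF of threshold excesses for a generalized Pareto distribution
    with scale sigma and shape xi (xi != 0 branch):
    F(x) = 1 - (1 + xi * x / sigma) ** (-1 / xi)."""
    return 1.0 - (1.0 + xi * x / sigma) ** (-1.0 / xi)

def gpd_quantile(p, sigma, xi):
    """Inverse CDF: the excess level exceeded with probability 1 - p."""
    return (sigma / xi) * ((1.0 - p) ** (-xi) - 1.0)
```

With a positive shape (heavy tail), quantiles grow rapidly with the non-exceedance probability, which is why the shape parameter dominates extreme return levels.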
Generalized Load Sharing for Homogeneous Networks of Distributed Environment
Directory of Open Access Journals (Sweden)
A. Satheesh
2008-01-01
We propose a method for job migration policies by considering effective usage of global memory in addition to CPU load sharing in distributed systems. When a node is identified as lacking sufficient memory space to serve jobs, one or more jobs of the node will be migrated to remote nodes with low memory allocations. If the memory space is sufficiently large, the jobs will be scheduled by a CPU-based load sharing policy. Following the principle of sharing both CPU and memory resources, we present several load sharing alternatives. Our objective is to reduce the number of page faults caused by unbalanced memory allocations for jobs among distributed nodes, so that the overall performance of a distributed system can be significantly improved. We have conducted trace-driven simulations to compare CPU-based load sharing policies with our policies. We show that our load sharing policies not only improve performance of memory-bound jobs, but also maintain the same load sharing quality as the CPU-based policies for CPU-bound jobs. Regarding remote execution and preemptive migration strategies, our experiments indicate that strategy selection in load sharing depends on the memory demand of jobs: remote execution is more effective for memory-bound jobs, and preemptive migration is more effective for CPU-bound jobs. Our CPU-memory-based policy, using either the high-performance or the high-throughput approach together with the remote execution strategy, performs the best for both CPU-bound and memory-bound jobs in homogeneous networks of distributed environments.
Moments of nucleon spin-dependent generalized parton distributions
International Nuclear Information System (INIS)
Schroers, W.; Brower, R.C.; Dreher, P.; Edwards, R.; Fleming, G.; Haegler, Ph.; Heller, U.M.; Lippert, Th.; Negele, J.W.; Pochinsky, A.V.; Renner, D.B.; Richards, D.; Schilling, K.
2004-01-01
We present a lattice measurement of the first two moments of the spin-dependent GPD H̃(x, ξ, t). From these we obtain the axial coupling constant and the second moment of the spin-dependent forward parton distribution. The measurements are done in full QCD using Wilson fermions. In addition, we also present results from a first exploratory study of full QCD using Asqtad sea and domain-wall valence fermions
On the limit distribution of lower extreme generalized order statistics
Indian Academy of Sciences (India)
Abstract. In a wide subclass of generalized order statistics (gOs), which contains most of the known and important models of ordered random variables, weak convergence of lower extremes is developed. A recent result of extreme value theory of m-gOs (as well as the classical extreme value theory of ordinary order ...
A generalized Dirichlet distribution accounting for singularities of the variables
DEFF Research Database (Denmark)
Lewy, Peter
1996-01-01
compared to the empirical moments. In general the estimates based on maximum likelihood are superior to the empirical moments in the small sample case. However, the main advantage of ML is not in computing the mean value, but rather in estimating the precision of the variables. In cases with many zero...
Energy-momentum distribution: A crucial problem in general relativity
Sharif, M.; Fatima, T.
2005-01-01
This paper is aimed to elaborate the problem of energy–momentum in general relativity. In this connection, we use the prescriptions of Einstein, Landau–Lifshitz, Papapetrou and Möller to compute the energy–momentum densities for two exact solutions of Einstein field equations. The space–times under
Rank distributions: A panoramic macroscopic outlook
Eliazar, Iddo I.; Cohen, Morrel H.
2014-01-01
This paper presents a panoramic macroscopic outlook of rank distributions. We establish a general framework for the analysis of rank distributions, which classifies them into five macroscopic "socioeconomic" states: monarchy, oligarchy-feudalism, criticality, socialism-capitalism, and communism. Oligarchy-feudalism is shown to be characterized by discrete macroscopic rank distributions, and socialism-capitalism is shown to be characterized by continuous macroscopic size distributions. Criticality is a transition state between oligarchy-feudalism and socialism-capitalism, which can manifest allometric scaling with multifractal spectra. Monarchy and communism are extreme forms of oligarchy-feudalism and socialism-capitalism, respectively, in which the intrinsic randomness vanishes. The general framework is applied to three different models of rank distributions—top-down, bottom-up, and global—and unveils each model's macroscopic universality and versatility. The global model yields a macroscopic classification of the generalized Zipf law, an omnipresent form of rank distributions observed across the sciences. An amalgamation of the three models establishes a universal rank-distribution explanation for the macroscopic emergence of a prevalent class of continuous size distributions, ones governed by unimodal densities with both Pareto and inverse-Pareto power-law tails.
Directory of Open Access Journals (Sweden)
M.M. Mohie El-Din
2011-10-01
In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two samples case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.
Distribution of scholarly publications among academic radiology departments.
Morelli, John N; Bokhari, Danial
2013-03-01
The aim of this study was to determine whether the distribution of publications among academic radiology departments in the United States is Gaussian (i.e., the bell curve) or Paretian. The search affiliation feature of the PubMed database was used to search for publications in 3 general radiology journals with high Impact Factors, originating at radiology departments in the United States affiliated with residency training programs. The distribution of the number of publications among departments was examined using χ² test statistics to determine whether it followed a Pareto or a Gaussian distribution more closely. A total of 14,219 publications contributed since 1987 by faculty members in 163 departments with residency programs were available for assessment. The data acquired were more consistent with a Pareto (χ² = 80.4) than a Gaussian (χ² = 659.5) distribution. The mean number of publications per department was 79.9 ± 146 (range, 0-943). The median number of publications was 16.5. The publication output of academic departments with residency programs thus followed a Pareto rather than a normal distribution. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
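The model comparison above can be sketched with Pearson's χ² statistic on binned counts against each candidate CDF. The counts, bin edges, and model parameters below are hypothetical illustrations (chosen to be heavy-tailed), not the study's data:

```python
import math

def norm_cdf(x, mu, sd):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def pareto_cdf(x, x_m, alpha):
    """Pareto (Type I) CDF with scale x_m and shape alpha."""
    return 0.0 if x < x_m else 1.0 - (x_m / x) ** alpha

def chi2_stat(counts, edges, cdf):
    """Pearson chi-square of binned counts against a model CDF;
    edges has len(counts) + 1 entries and may end in math.inf."""
    n = sum(counts)
    s = 0.0
    for i, c in enumerate(counts):
        p = cdf(edges[i + 1]) - cdf(edges[i])
        if p > 0.0:
            s += (c - n * p) ** 2 / (n * p)
    return s

# Hypothetical per-department publication counts in the bins
# [1, 4), [4, 16), [16, 64), [64, inf) -- a heavy-tailed profile.
counts = [122, 31, 8, 2]
edges = [1.0, 4.0, 16.0, 64.0, math.inf]
chi2_pareto = chi2_stat(counts, edges, lambda x: pareto_cdf(x, 1.0, 1.0))
chi2_normal = chi2_stat(counts, edges, lambda x: norm_cdf(x, 10.0, 20.0))
```

The smaller statistic indicates the better-fitting model, mirroring the 80.4 vs 659.5 comparison reported above.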
Modeling fractal structure of city-size distributions using correlation functions.
Chen, Yanguang
2011-01-01
Zipf's law is one of the most conspicuous empirical facts for cities; however, there is no convincing explanation for the scaling relation between rank and size and its scaling exponent. Using ideas from general fractals and scaling, I propose a dual competition hypothesis of city development to explain the value intervals and the special value, 1, of the power exponent. Zipf's law and Pareto's law can be mathematically transformed into one another, but represent different processes of urban evolution, respectively. Based on the Pareto distribution, a frequency correlation function can be constructed. By scaling analysis and the multifractal spectrum, the parameter interval of the Pareto exponent is derived as (0.5, 1]; based on the Zipf distribution, a size correlation function can be built, and it is opposite to the first one. By the second correlation function and the multifractal notion, the Pareto exponent interval is derived as [1, 2). Thus the process of urban evolution falls into two effects: one is the Pareto effect indicating city number increase (external complexity), and the other the Zipf effect indicating city size growth (internal complexity). Because of the struggle between the two effects, the scaling exponent varies from 0.5 to 2; but if the two effects reach equilibrium with each other, the scaling exponent approaches 1. A series of mathematical experiments on hierarchical correlation are employed to verify the models, and a conclusion can be drawn that if cities in a given region follow Zipf's law, the frequency and size correlations will follow the scaling law. This theory can be generalized to interpret the inverse power-law distributions in various fields of the physical and social sciences.
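The rank-size (Zipf) exponent discussed above is commonly estimated by a log-log regression of size on rank. A minimal sketch, demonstrated on exact Zipf data, for which the exponent is 1 (and hence the corresponding Pareto exponent 1/q is also 1):

```python
import math

def zipf_exponent(sizes):
    """Estimate the Zipf (rank-size) exponent q by least-squares
    regression of log(size) on log(rank). For a pure Zipf law
    size = C * rank**(-q), the Pareto (frequency) exponent is 1/q."""
    sizes = sorted(sizes, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return -beta
```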
Ultrawide Bandwidth Receiver Based on a Multivariate Generalized Gaussian Distribution
Ahmed, Qasim Zeeshan
2015-04-01
Multivariate generalized Gaussian density (MGGD) is used to approximate the multiple access interference (MAI) and additive white Gaussian noise in pulse-based ultrawide bandwidth (UWB) system. The MGGD probability density function (pdf) is shown to be a better approximation of a UWB system as compared to multivariate Gaussian, multivariate Laplacian and multivariate Gaussian-Laplacian mixture (GLM). The similarity between the simulated and the approximated pdf is measured with the help of modified Kullback-Leibler distance (KLD). It is also shown that MGGD has the smallest KLD as compared to Gaussian, Laplacian and GLM densities. A receiver based on the principles of minimum bit error rate is designed for the MGGD pdf. As the requirement is stringent, the adaptive implementation of the receiver is also carried out in this paper. Training sequence of the desired user is the only requirement when implementing the detector adaptively. © 2002-2012 IEEE.
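The divergence used to compare candidate densities can be illustrated for the discrete case; a minimal sketch of the plain Kullback-Leibler divergence (the paper uses a modified KLD, which we do not reproduce):

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence KL(p || q) between two discrete
    distributions given as aligned probability lists; terms with
    p_i = 0 contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

A smaller KLD against the empirical distribution is what justifies preferring one approximating density (here, the MGGD) over another.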
International Nuclear Information System (INIS)
Ottosson, Rickard O.; Sjoestroem, David; Behrens, Claus F.; Karlsson, Anna; Engstroem, Per E.; Knoeoes, Tommy; Ceberg, Crister
2009-01-01
Pareto optimality is a concept that formalises the trade-off between a given set of mutually contradicting objectives. A solution is said to be Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the other. A set of Pareto optimal solutions constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics of a treatment planning system (TPS), treatment strategy or delivery technique, Pareto fronts for a given case are likely to differ. The aim of this study was to investigate the feasibility of using Pareto fronts as a comparative tool for TPSs, treatment strategies and delivery techniques. In order to sample Pareto fronts, multiple treatment plans with varying target conformity and dose sparing of OAR were created for a number of prostate and head and neck IMRT cases. The DVHs of each plan were evaluated with respect to target coverage and dose to relevant OAR. Pareto fronts were successfully created for all studied cases. The results did indeed follow the definition of the Pareto concept, i.e. dose sparing of the OAR could not be improved without target coverage being impaired or vice versa. Furthermore, various treatment techniques resulted in distinguished and well separated Pareto fronts. Pareto fronts may be used to evaluate a number of parameters within radiotherapy. Examples are TPS optimization algorithms, the variation between accelerators or delivery techniques and the degradation of a plan during the treatment planning process. The issue of designing a model for unbiased comparison of parameters with such large inherent discrepancies, e.g. different TPSs, is problematic and should be carefully considered
Effects of the financial crisis on the wealth distribution of Korea's companies
Lim, Kyuseong; Kim, Soo Yong; Swanson, Todd; Kim, Jooyun
2017-02-01
We investigated the distribution functions of Korea's top-rated companies during two financial crises. A power-law scaling for rank distribution, as well as cumulative probability distribution, was found and observed as a general pattern. Similar distributions can be shown in other studies of wealth and income distributions. In our study, the Pareto exponents designating the distribution differed before and after the crisis. The companies covered in this research are divided into two subgroups during a period when the subprime mortgage crisis occurred. Various industrial sectors of Korea's companies were found to respond differently during the two financial crises, especially the construction sector, financial sectors, and insurance groups.
Pareto-Optimal Model Selection via SPRINT-Race.
Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2018-02-01
In machine learning, multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that simultaneously optimize, by compromise, more than one predefined objective. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
Application of Pareto optimization method for ontology matching in nuclear reactor domain
International Nuclear Information System (INIS)
Meenachi, N. Madurai; Baba, M. Sai
2017-01-01
This article describes the need for ontology matching and the methods to achieve it. The implementation of a semantic-web-based knowledge management system for the nuclear domain necessitated the development of methods for ontology matching, which is used to exchange information in a distributed environment. The constraints in matching ontologies are also discussed. A Pareto-based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms such as the Jaro-Winkler distance, the Needleman-Wunsch algorithm, bigrams, and the Kullback and cosine divergences are employed to demonstrate ontology matching. A case study was carried out to analyse ontology matching across diverse ontologies in the nuclear reactor domain, and the results are illustrated.
Pareto-optimal electricity tariff rates in the Republic of Armenia
International Nuclear Information System (INIS)
Kaiser, M.J.
2000-01-01
The economic impact of electricity tariff rates on the residential sector of Yerevan, Armenia, is examined. The effect of tariff design on revenue generation and equity measures is considered, and the combination of energy pricing and compensatory social policies which provides the best mix of efficiency and protection for poor households is examined. An equity measure is defined in terms of a cumulative distribution function which describes the percent of the population that spends x percent or less of their income on electricity consumption. An optimal (Pareto-efficient) tariff is designed based on the analysis of survey data and an econometric model, and the Armenian tariff rate effective 1 January 1997 to 15 September 1997 is shown to be non-optimal relative to this rate. 22 refs
Application of Pareto optimization method for ontology matching in nuclear reactor domain
Energy Technology Data Exchange (ETDEWEB)
Meenachi, N. Madurai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Planning and Human Resource Management Div.; Baba, M. Sai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Resources Management Group
2017-12-15
This article describes the need for ontology matching and the methods to achieve it. The implementation of a semantic-web-based knowledge management system for the nuclear domain necessitated the development of methods for ontology matching, which is used to exchange information in a distributed environment. The constraints in matching ontologies are also discussed. A Pareto-based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms such as the Jaro-Winkler distance, the Needleman-Wunsch algorithm, bigrams, and the Kullback and cosine divergences are employed to demonstrate ontology matching. A case study was carried out to analyse ontology matching across diverse ontologies in the nuclear reactor domain, and the results are illustrated.
Evolutionary tradeoffs, Pareto optimality and the morphology of ammonite shells.
Tendler, Avichai; Mayo, Avraham; Alon, Uri
2015-03-07
Organisms that need to perform multiple tasks face a fundamental tradeoff: no design can be optimal at all tasks at once. Recent theory based on Pareto optimality showed that such tradeoffs lead to a highly defined range of phenotypes, which lie in low-dimensional polyhedra in the space of traits. The vertices of these polyhedra are called archetypes: the phenotypes that are optimal at a single task. Rigorously testing this theory requires measurements of thousands of species over hundreds of millions of years of evolution. Ammonoid fossil shells provide an excellent model system for this purpose. Ammonoids have a well-defined geometry that can be parameterized using three dimensionless features of their logarithmic-spiral-shaped shells. Their evolutionary history includes repeated mass extinctions. We find that ammonoids fill out a pyramid in morphospace, suggesting five specific tasks, one for each vertex of the pyramid. After mass extinctions, surviving species evolve to refill essentially the same pyramid, suggesting that the tasks are unchanging. We infer putative tasks for each archetype, related to economy of shell material, rapid shell growth, hydrodynamics and compactness. These results support Pareto optimality theory as an approach to study evolutionary tradeoffs, and demonstrate how this approach can be used to infer the putative tasks that may shape the natural selection of phenotypes.
A. Bouter (Anton); K. Pirpinia (Kleopatra); T. Alderliesten (Tanja); P.A.N. Bosman (Peter)
2017-01-01
A multi-objective optimization approach is often followed by an a posteriori decision-making process, during which the most appropriate solution of the Pareto set is selected by a professional in the field. Conventional visualization methods do not correct for Pareto fronts with
Directory of Open Access Journals (Sweden)
E. SCHNEIDER
2014-07-01
The article is part of a special issue on the occasion of the publication of the entire scientific correspondence of Vilfredo Pareto with Maffeo Pantaleoni. The author reconstructs the beginning of their correspondence and the debate in pure mathematical economics, and draws the main conclusions on the different views of Pareto with respect to Marshall, Edgeworth and Fisher. JEL: B16, B31, C02, C60
Energy distributions of Bianchi type-VIh Universe in general relativity ...
Indian Academy of Sciences (India)
2017-03-16
Mar 16, 2017 ... energy distributions in the Bianchi type-VIh metric for different gravitation theories. ... Bianchi VIh Universe; general relativity; teleparallel gravity; energy–momentum distribution. ... In §3, we introduce the energy–momentum definitions of Einstein, ...
Evaluation of the League General Insurance Company child safety seat distribution program
1982-05-01
This report presents an evaluation of the child safety seat distribution initiated by the League General Insurance Company in June 1979. The program provides child safety seats as a benefit under the company's auto insurance policies to policy-holder...
A general purpose subroutine for fast fourier transform on a distributed memory parallel machine
Dubey, A.; Zubair, M.; Grosch, C. E.
1992-01-01
One issue which is central in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. It is possible that different users would like to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation on a distributed memory parallel machine Intel iPSC/860 is evaluated.
Bao, T.; Diks, C.; Li, H.
We estimate the CAPM model on European stock market data, allowing for asymmetric and fat-tailed return distributions using independent and identically asymmetric power distributed (IIAPD) innovations. The results indicate that the generalized CAPM with IIAPD errors has desirable properties. It is
Dictatorship, liberalism and the Pareto rule: Possible and impossible
Directory of Open Access Journals (Sweden)
Boričić Branislav
2009-01-01
The current economic crisis has shaken belief in the capacity of neoliberal 'free market' policies. Numerous supporters of state intervention have arisen, and interest in social choice theory has revived. In this paper we consider three standard properties for aggregating individual preferences into social preferences: dictatorship, liberalism and the Pareto rule, together with their formal negations. The context of pure first-order classical logic makes it possible to show how some combinations of the above-mentioned conditions, under the hypothesis of unrestricted domain, form simple and reasonable examples of possible or impossible social choice systems. Due to their simplicity, these examples, including the famous 'liberal paradox', could have a particular didactic value.
Optimal PMU Placement with Uncertainty Using Pareto Method
Directory of Open Access Journals (Sweden)
A. Ketabi
2012-01-01
This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization exercise in which the objective function is the number of unobservable buses, determined by Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is then formulated. To achieve this, a DE algorithm based on the Pareto optimum method is proposed. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
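The SVD-based observability measure mentioned in this abstract can be sketched as a rank-deficiency count: the number of near-zero singular values of a measurement matrix gives the dimension of the unobservable subspace. The matrix H below is hypothetical, and equating rank deficiency with an unobservable-bus count is a simplification of the paper's procedure.

```python
import numpy as np

def rank_deficiency(H, tol=1e-10):
    """Number of state directions unresolved by the measurements in H."""
    s = np.linalg.svd(H, compute_uv=False)
    return H.shape[1] - int(np.sum(s > tol))

# Hypothetical 4-state system where one measurement row is redundant:
H = np.array([[1.0, -1.0,  0.0, 0.0],
              [1.0, -1.0,  0.0, 0.0],   # duplicate of the first row
              [0.0,  1.0, -1.0, 0.0]])
deficiency = rank_deficiency(H)  # 4 columns, rank 2, so deficiency 2
```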
Pareto analysis of critical factors affecting technical institution evaluation
Directory of Open Access Journals (Sweden)
Victor Gambhir
2012-08-01
With the change of education policy in 1991, more and more technical institutions are being set up in India. Some of these institutions provide quality education, but others merely concentrate on quantity. Stakeholders are in a state of confusion about how to select the best institute for their higher educational studies. Although various agencies, including the print media, provide rankings of these institutions every year, their results are controversial and biased. In this paper, the authors endeavour to find the critical factors for technical institution evaluation from a literature survey. A Pareto analysis has also been performed to find the intensity of these critical factors in evaluation. This will not only help stakeholders take the right decisions but will also help the management of institutions in benchmarking, by identifying the most important critical areas to improve the existing system. This will in turn help the Indian economy.
Pareto optimization of an industrial ecosystem: sustainability maximization
Directory of Open Access Journals (Sweden)
J. G. M.-S. Monteiro
2010-09-01
This work investigates a procedure to design an Industrial Ecosystem for sequestrating CO2 and consuming glycerol in a Chemical Complex with 15 integrated processes. The Complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks Inc.) is connected to UNISIM to enable optimization. The objective is to achieve maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.
Using the Pareto principle in genome-wide breeding value estimation.
Yu, Xijiang; Meuwissen, Theo H E
2011-11-01
Genome-wide breeding value (GWEBV) estimation methods can be classified based on the prior distribution assumptions of marker effects. Genome-wide BLUP methods assume a normal prior distribution for all markers with a constant variance, and are computationally fast. In Bayesian methods, more flexible prior distributions of SNP effects are applied that allow for very large SNP effects although most are small or even zero, but these methods are computationally demanding as they rely on Markov chain Monte Carlo sampling. In this study, we adopted the Pareto principle to weight available marker loci, i.e., we consider that x% of the loci explain (100 - x)% of the total genetic variance. Assuming this principle, it is also possible to define the variances of the prior distributions of the 'big' and 'small' SNP. The relatively few large SNP explain a large proportion of the genetic variance, while the majority of the SNP show small effects and explain a minor proportion of the genetic variance. We name this method MixP, since the prior distribution is a mixture of two normal distributions, i.e. one with a big variance and one with a small variance. Simulation results, using a real Norwegian Red cattle pedigree, show that MixP is at least as accurate as the other methods in all studied cases. The method also reduces the hyper-parameters of the prior distribution from two (proportion and variance of SNP with big effects) to one (proportion of SNP with big effects), assuming the overall genetic variance is known. The mixture-of-normals prior makes it possible to solve the equations iteratively, which greatly reduces the computational load, by two orders of magnitude. In the era of marker densities reaching millions and whole-genome sequence data, MixP provides a computationally feasible Bayesian method of analysis.
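The Pareto-principle split described above fixes both prior variances of the MixP mixture once the proportion of big-effect SNP is chosen. A minimal sketch, with hypothetical values of the proportion p, marker count m and total genetic variance Vg:

```python
def mixp_prior_variances(p, m, vg):
    """Prior variances when a fraction p of m loci explains (1 - p) of vg."""
    var_big = (1.0 - p) * vg / (p * m)        # few loci share most variance
    var_small = p * vg / ((1.0 - p) * m)      # many loci share the remainder
    return var_big, var_small

# Hypothetical: 10% of 10,000 loci explain 90% of a unit genetic variance.
vb, vs = mixp_prior_variances(p=0.1, m=10_000, vg=1.0)
# vb = 0.9/1000 = 9e-4; vs = 0.1/9000, roughly 1.1e-5
```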
A novel generalized normal distribution for human longevity and other negatively skewed data.
Robertson, Henry T; Allison, David B
2012-01-01
Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution.
Directory of Open Access Journals (Sweden)
Willi Pabst
2017-03-01
A generalized formulation of transformation matrices is given for the reconstruction of sphere diameter distributions from their section circle diameter distributions. This generalized formulation is based on a weight shift parameter that can be adjusted from 0 to 1. It includes the well-known Saltykov and Cruz-Orive transformations as special cases (for parameter values of 0 and 0.5, respectively). The physical meaning of this generalization is explained (showing, among other things, that the Woodhead transformation should be bounded by the Saltykov transformation on one side and by our transformation on the other) and its numerical performance is investigated. In particular, it is shown that our generalized transformation is numerically highly unstable, i.e. it introduces numerical artefacts (oscillations or even unphysical negative sphere frequencies) into the reconstruction, and can lead to completely wrong results when a critical value of the parameter (usually in the range 0.7-0.9, depending on the type of distribution) is exceeded. It is shown that this numerical instability is an intrinsic feature of these transformations that depends not only on the weight shift parameter value but also on the type and position of the distribution. It occurs in a natural way also for the Cruz-Orive and other transformations with finite weight shift parameter values, and is not just caused by inadequate input data (e.g. as a consequence of an insufficient number of objects counted), as commonly assumed. Finally, it is shown that an even more general class of transformation matrices can be defined that includes, in addition to the aforementioned transformations, also the Wicksell transformation.
Regular distributive efficiency and the distributive liberal social contract.
Jean Mercier Ythier
2009-01-01
We consider abstract social systems of private property, made of n individuals endowed with non-paternalistic interdependent preferences, who interact through exchanges on competitive markets and Pareto-efficient lump-sum transfers. The transfers follow from a distributive liberal social contract, defined as a redistribution of initial endowments such that the resulting market equilibrium allocation is both Pareto-efficient relative to individual interdependent preferences and unanimously we...
Competition and fragmentation: a simple model generating lognormal-like distributions
International Nuclear Information System (INIS)
Schwaemmle, V; Queiros, S M D; Brigatti, E; Tchumatchenko, T
2009-01-01
The current distribution of language size in terms of speaker population is generally described using a lognormal distribution. Analyzing the original real data we show how the double-Pareto lognormal distribution can give an alternative fit that indicates the existence of a power law tail. A simple Monte Carlo model is constructed based on the processes of competition and fragmentation. The results reproduce the power law tails of the real distribution well and give better results for a poorly connected topology of interactions.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
International Nuclear Information System (INIS)
Hoffmann, Aswin L; Siem, Alex Y D; Hertog, Dick den; Kaanders, Johannes H A M; Huizenga, Henk
2006-01-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning
DEFF Research Database (Denmark)
Madsen, Henrik; Rosbjerg, Dan
1997-01-01
A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified...
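The generalized Pareto fitting of exceedances underlying this model can be sketched with the classical probability-weighted-moments (PWM) estimators of Hosking and Wallis. The data below are synthetic, and the parameterization F(x) = 1 - (1 - k*x/sigma)**(1/k) is assumed, so k = 0 recovers the exponential distribution.

```python
import numpy as np

def gpd_pwm(exceedances):
    """PWM (Hosking-Wallis) estimates of the GP shape k and scale sigma."""
    x = np.sort(exceedances)
    n = x.size
    a0 = x.mean()                                        # E[X]
    a1 = np.sum((n - np.arange(1, n + 1)) / (n - 1) * x) / n  # E[X(1-F)]
    k = a0 / (a0 - 2.0 * a1) - 2.0
    sigma = 2.0 * a0 * a1 / (a0 - 2.0 * a1)
    return k, sigma

# Synthetic exceedances from an exponential law (k = 0, sigma = 2):
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=20_000)
k_hat, sigma_hat = gpd_pwm(sample)  # close to (0, 2) for large samples
```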
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2018-04-01
We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5-parameter halo occupation distribution (HOD) model with various halo-scale physics and assembly bias. We describe the methodology of four different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
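The "standard 5-parameter HOD" that GRAND-HOD generalizes is commonly written in the Zheng et al. (2005) form; the sketch below assumes that form, with illustrative parameter values that are not those used by the package.

```python
import math

def n_cen(M, logMmin=13.0, sigma=0.5):
    """Mean central occupation of a halo of mass M (assumed Msun/h)."""
    return 0.5 * (1.0 + math.erf((math.log10(M) - logMmin) / sigma))

def n_sat(M, M0=1e13, M1=1e14, alpha=1.0, **cen_kw):
    """Mean satellite occupation, modulated by the central term."""
    if M <= M0:
        return 0.0
    return n_cen(M, **cen_kw) * ((M - M0) / M1) ** alpha

# Occupation rises from near 0 in small halos to roughly one central
# galaxy plus a power-law number of satellites in cluster-scale halos:
low, high = n_cen(1e12), n_cen(1e15)
sats = n_sat(1e15)
```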
Moments of generalized Husimi distributions and complexity of many-body quantum states
International Nuclear Information System (INIS)
Sugita, Ayumu
2003-01-01
We consider generalized Husimi distributions for many-body systems, and show that their moments are good measures of complexity of many-body quantum states. Our construction of the Husimi distribution is based on the coherent state of the single-particle transformation group. Then the coherent states are independent-particle states, and, at the same time, the most localized states in the Husimi representation. Therefore delocalization of the Husimi distribution, which can be measured by the moments, is a sign of many-body correlation (entanglement). Since the delocalization of the Husimi distribution is also related to chaoticity of the dynamics, it suggests a relation between entanglement and chaos. Our definition of the Husimi distribution can be applied not only to systems of distinguishable particles, but also to those of identical particles, i.e., fermions and bosons. We derive an algebraic formula to evaluate the moments of the Husimi distribution
Directory of Open Access Journals (Sweden)
Purczyński Jan
2014-07-01
This paper examines the application of the so-called generalized Student's t-distribution in modeling the distribution of empirical return rates on selected Warsaw Stock Exchange indexes. Distribution parameters are estimated by means of the method of logarithmic moments, the maximum likelihood method and the method of moments. The generalized Student's t-distribution ensures a better fit to empirical data than the classical Student's t-distribution.
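A method-of-moments step of the kind described above can be sketched for the classical Student's t, whose excess kurtosis is 6/(nu - 4) for nu > 4 degrees of freedom, giving the moment estimate nu_hat = 6/k_excess + 4. The synthetic "return rates" below are illustrative only.

```python
import numpy as np

def t_dof_from_kurtosis(x):
    """Moment estimate of the t degrees of freedom from excess kurtosis."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    k_excess = np.mean(z**4) / np.mean(z**2) ** 2 - 3.0
    return 6.0 / k_excess + 4.0

# Synthetic heavy-tailed returns drawn from a t distribution with nu = 12:
rng = np.random.default_rng(2)
returns = rng.standard_t(df=12, size=200_000)
nu_hat = t_dof_from_kurtosis(returns)  # roughly 12 for large samples
```

Sample kurtosis converges slowly for heavy tails, which is one reason the paper also considers likelihood-based estimators.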
Statistical distribution for generalized ideal gas of fractional-statistics particles
International Nuclear Information System (INIS)
Wu, Y.
1994-01-01
We derive the occupation-number distribution in a generalized ideal gas of particles obeying fractional statistics, including mutual statistics, by adopting a state-counting definition. When there is no mutual statistics, the statistical distribution interpolates between bosons and fermions, and respects a fractional exclusion principle (except for bosons). Anyons in a strong magnetic field at low temperatures constitute such a physical system. Applications to the thermodynamic properties of quasiparticle excitations in the Laughlin quantum Hall fluid are discussed
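The interpolating occupation-number distribution derived in this paper can be evaluated numerically: the mean occupation is n = 1/(w + g), where w solves w**g * (1 + w)**(1 - g) = exp((eps - mu)/kT), recovering bosons at g = 0 and fermions at g = 1. A minimal sketch using bisection:

```python
def occupation(zeta, g):
    """Mean occupation for statistics parameter g, zeta = exp((eps-mu)/kT).

    Solves w**g * (1 + w)**(1 - g) = zeta for w by bisection in log space
    (the left-hand side is increasing in w), then returns n = 1/(w + g).
    """
    lo, hi = 1e-15, 1e15
    for _ in range(200):
        w = (lo * hi) ** 0.5  # geometric midpoint: w spans many decades
        if w**g * (1.0 + w) ** (1.0 - g) < zeta:
            lo = w
        else:
            hi = w
    return 1.0 / (w + g)

zeta = 2.0
bose = occupation(zeta, 0.0)   # boson limit: 1/(zeta - 1) = 1.0
fermi = occupation(zeta, 1.0)  # fermion limit: 1/(zeta + 1) = 1/3
```

Intermediate g values interpolate between these limits, illustrating the fractional exclusion principle the abstract describes.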
Garrido-Balsells, José María; Jurado-Navas, Antonio; Paris, José Francisco; Castillo-Vazquez, Miguel; Puerta-Notario, Antonio
2015-03-09
In this paper, a novel and deeper physical interpretation of the recently published Málaga or ℳ statistical distribution is provided. This distribution, which is gaining wide acceptance in the scientific community, models the optical irradiance scintillation induced by atmospheric turbulence. Here, the analytical expressions previously published are modified in order to express them by a mixture of the known Generalized-K and discrete Binomial and Negative Binomial distributions. In particular, the probability density function (pdf) of the ℳ model is now obtained as a linear combination of Generalized-K pdfs, in which the coefficients depend directly on the parameters of the ℳ distribution. In this way, the Málaga model can be physically interpreted as a superposition of different optical sub-channels, each of them described by the corresponding Generalized-K fading model and weighted by the ℳ-dependent coefficients. The expressions proposed here are simpler than the equations of the original ℳ model and are validated by means of numerical simulations, by generating ℳ-distributed random sequences and their associated histograms. This novel interpretation of the Málaga statistical distribution provides a valuable tool for analyzing the performance of atmospheric optical channels under every turbulence condition.
Pire, B.; Szymanowski, L.
2017-12-01
We calculate at the leading order in αs the QCD amplitude for exclusive neutrino production of a D* or Ds* charmed vector meson on a nucleon. We work in the framework of the collinear QCD approach, where generalized parton distributions (GPDs) factorize from perturbatively calculable coefficient functions. We include O(mc) terms in the coefficient functions and the O(mD) term in the definition of heavy meson distribution amplitudes. We show that the analysis of the angular distribution of the decay D(s)* → D(s)π allows us to access the transversity gluon GPDs.
Feynman quasi probability distribution for spin-(1/2), and its generalizations
International Nuclear Information System (INIS)
Colucci, M.
1999-01-01
We examine Feynman's paper Negative probability, in which, after a discussion of the possibility of attributing a real physical meaning to quasi-probability distributions, he introduces a new kind of distribution for spin-1/2, with a possible method of generalization to systems with an arbitrary number of states. The principal aim of this article is to shed light upon the method of construction of these distributions, taking into consideration their application to some experiments, and discussing their positive and negative aspects.
Pareto-Optimization of HTS CICC for High-Current Applications in Self-Field
Directory of Open Access Journals (Sweden)
Giordano Tomassetti
2018-01-01
Full Text Available The ENEA superconductivity laboratory developed a novel design for Cable-in-Conduit Conductors (CICCs) comprised of stacks of 2nd-generation REBCO coated conductors. In its original version, the cable was made up of 150 HTS tapes distributed in five slots, twisted along an aluminum core. In this work, taking advantage of a 2D finite element model able to estimate the cable’s current distribution in the cross-section, a multiobjective optimization procedure was implemented. The aim of the optimization was to simultaneously maximize both the engineering current density and the total current flowing inside the tapes when operating in self-field, by varying the cross-section layout. Since the optimization process involved both integer and real geometrical variables, an evolutionary search algorithm was necessary; in this multiobjective setting, the problem was approached numerically with a nonstandard, fast-converging optimization algorithm. By means of this algorithm, the Pareto frontiers for the different configurations were calculated, providing a powerful tool for the designer to achieve the desired preliminary operating conditions in terms of engineering current density and/or total current, depending on the specific application field, that is, power transmission cables and bus bar systems.
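The Pareto frontier mentioned here is built from the dominance relation between candidate configurations. A minimal, generic sketch of extracting the nondominated set when both objectives are maximized (not the ENEA model itself; the candidate points are made up):

```python
def pareto_front(points):
    """Return the nondominated subset of (f1, f2) pairs, both maximized.
    A point is dominated if another distinct point is >= in both
    objectives (ties between exact duplicates are not handled here)."""
    front = []
    for p in points:
        if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points):
            front.append(p)
    return front

candidates = [(1.0, 5.0), (2.0, 4.0), (1.5, 4.5), (1.0, 4.0), (3.0, 1.0)]
print(sorted(pareto_front(candidates)))
# → [(1.0, 5.0), (1.5, 4.5), (2.0, 4.0), (3.0, 1.0)]
# (1.0, 4.0) is dominated by (1.0, 5.0); the rest are trade-offs
```

This brute-force check is O(n²); evolutionary methods such as the one in the abstract maintain and refine such a nondominated set across generations.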
Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels
International Nuclear Information System (INIS)
Chhaiba, Hassan; Demni, Nizar; Mouayn, Zouhair
2016-01-01
To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.
Accident investigation of construction sites in Qom city using Pareto chart (2009-2012)
Directory of Open Access Journals (Sweden)
M. H. Beheshti
2015-07-01
Conclusions: Employing Pareto charts as a method for analyzing and identifying accident causes can play an effective role in the management of work-related accidents and the proper allocation of funds and time.
Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games
Directory of Open Access Journals (Sweden)
Victoria Lozan
2013-10-01
Full Text Available The Pareto-Nash equilibrium set (PNES is described as intersection of graphs of efficient response mappings. The problem of PNES computing in finite multi-objective mixed-strategy games (Pareto-Nash games is considered. A method for PNES computing is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.
He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris
2016-01-01
In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
Saborido, Rubén; Ruiz, Ana B; Luque, Mariano
2017-01-01
In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the class of aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (the utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximate the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II on two-, three-, and five-objective problems. The computational results permit us to conclude that Global WASF-GA achieves better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
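The fitness computation described above can be illustrated with a toy Tchebychev-type achievement scalarizing function; the reference points, weight vector, and objective values below are made up for illustration:

```python
import numpy as np

def asf(f, ref, weights):
    """Tchebychev-type achievement scalarizing function: the largest
    weighted deviation of objective vector f from reference point ref
    (all objectives to be minimized)."""
    return float(np.max(weights * (f - ref)))

utopian = np.array([0.0, 0.0])   # best imaginable objective values
nadir   = np.array([1.0, 1.0])   # worst values over the Pareto front
f       = np.array([0.3, 0.6])   # one individual's objective vector
w       = np.array([0.5, 0.5])   # one vector from a well-distributed set

# Global WASF-GA ranks individuals by ASF values computed with respect
# to both reference points simultaneously:
print(asf(f, utopian, w), asf(f, nadir, w))  # 0.3 -0.2
```

Lower ASF values indicate solutions closer (in the weighted Tchebychev sense) to the reference point along the direction the weight vector encodes.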
Income- and energy-taxation for redistribution in general equilibrium
International Nuclear Information System (INIS)
FitzRoy, F.R.
1993-01-01
In a 3-factor General Equilibrium (GE) model with a continuum of ability, the employed choose optimal labour supply, and equilibrium unemployment is determined by benefits funded by wage- and energy-taxes. Aggregate labour and the net wage may increase or decrease with taxation (and unemployment), and conditions for a reduction in redistributive wage-taxes to be Pareto-improving are derived. A small energy tax always raises the net wage, provided the wage tax is reduced to maintain constant employment and a balanced budget. High ability households prefer higher energy taxes when externalities are uniformly distributed and non-distorting. (author)
International Nuclear Information System (INIS)
Manoukian, E.B.
1986-01-01
Generalized conditions (rules) are set up for the existence of the distributional zero-mass limit of renormalized Feynman amplitudes in Minkowski space. These rules are generalizations of rules that we have set up earlier and hence are applicable to a larger class of graphs. The study is very general, as the masses are allowed to vanish at different rates. All subtractions of renormalization are carried out directly in momentum space, about the origin, with the degree of divergence of a subtraction coinciding with the dimensionality of the corresponding subdiagram.
Pareto Efficient Solutions of Attack-Defence Trees
DEFF Research Database (Denmark)
Aslanyan, Zaruhi; Nielson, Flemming
2015-01-01
Attack-defence trees are a promising approach for representing threat scenarios and possible countermeasures in a concise and intuitive manner. An attack-defence tree describes the interaction between an attacker and a defender, and is evaluated by assigning parameters to the nodes, such as probability or cost of attacks and defences. In the case of multiple parameters most analytical methods optimise one parameter at a time, e.g., minimise cost or maximise probability of an attack. Such methods may lead to sub-optimal solutions when optimising conflicting parameters, e.g., minimising cost while maximising probability. In order to tackle this challenge, we devise automated techniques that optimise all parameters at once. Moreover, in the case of conflicting parameters our techniques compute the set of all optimal solutions, defined in terms of Pareto efficiency. The developments are carried out...
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
Pareto-Optimal Multi-objective Inversion of Geophysical Data
Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham
2018-01-01
In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets, and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
A general approach to double-moment normalization of drop size distributions
Lee, G.W.; Zawadzki, I.; Szyrmer, W.; Sempere Torres, D.; Uijlenhoet, R.
2004-01-01
Normalization of drop size distributions (DSDs) is reexamined here. First, an extension of the scaling normalization that uses one moment of the DSD as a scaling parameter to a more general scaling normalization that uses two moments as scaling parameters of the normalization is presented.
Nucleon-generalized parton distributions in the light-front quark model
Indian Academy of Sciences (India)
2016-01-12
Jan 12, 2016 ... 1. Introduction. Generalized parton distributions (GPDs) are an important set of parameters that give us ... The AdS/CFT correspondence relates string theory on a higher-dimensional anti-de Sitter space ... matching the soft-wall model of AdS/QCD and light-front QCD for EFFs of hadrons with arbitrary ...
International Nuclear Information System (INIS)
Lee, S.J.; Mekjian, A.Z.
2004-01-01
Various phenomenological models of particle multiplicity distributions are discussed using a general form of a unified model which is based on the grand canonical partition function and Feynman's path integral approach to statistical processes. These models can be written as special cases of a more general distribution which has three control parameters: a, x, z. The relation of these parameters to various physical quantities is discussed. A connection of the parameter a with Fisher's critical exponent τ is developed. Using this grand canonical approach, moments, cumulants and combinants are discussed; a physical interpretation of the combinants is given and their behavior connected to the critical exponent τ. Various physical phenomena such as hierarchical structure, void scaling relations, Koba-Nielsen-Olesen or KNO scaling features, clan variables, and branching laws are shown in terms of this general approach. Several of these features, previously developed in terms of the negative binomial distribution, are found to be more general. Both hierarchical structure and void scaling relations depend on the Fisher exponent τ. Applications of our approach to the charged particle multiplicity distributions in jets from L3 and H1 data are given.
Effect of a generalized particle momentum distribution on plasma nuclear fusion rates
International Nuclear Information System (INIS)
Kim, Yeong E.; Zubarev, Alexander L.
2006-01-01
We investigate the effect of a generalized particle momentum distribution derived by Galitskii and Yakimets (GY) on nuclear reaction rates in plasma. We derive an approximate semi-analytical formula for the nuclear fusion reaction rate between nuclei in a plasma (quantum plasma nuclear fusion, or QPNF). The QPNF formula is applied to calculate the deuteron-deuteron fusion rate in a plasma, and the results are compared with the results calculated with the conventional Maxwell-Boltzmann velocity distribution. As an application, we investigate the deuteron-deuteron fusion rate for mobile deuterons in a deuterated metal/alloy. The calculated deuteron-deuteron fusion rates at low energies are enormously enhanced due to the modified tail of GY's generalized momentum distribution. Our preliminary estimates indicate also that the deuteron-lithium (D+Li) fusion rate and the proton-lithium (p+Li) fusion rate in a metal/alloy at ambient temperatures are also substantially enhanced. (author)
Directory of Open Access Journals (Sweden)
Chi Zhang
2015-05-01
Full Text Available To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter λ), named Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related testing hypotheses, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.
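The zero-inflation mechanism underlying these models can be illustrated with a plain univariate, ordinary-Poisson sketch; the paper's ZIGP replaces the Poisson kernel with a generalized Poisson and couples two such margins, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def zip_sample(phi, lam, size, rng):
    """Zero-inflated Poisson: with probability phi emit a structural zero,
    otherwise draw Poisson(lam)."""
    structural_zero = rng.random(size) < phi
    counts = rng.poisson(lam, size)
    counts[structural_zero] = 0
    return counts

x = zip_sample(0.3, 2.0, 200_000, rng)
# Mean of a ZIP is (1 - phi) * lam = 1.4 under these parameters, and the
# zero probability is phi + (1 - phi) * exp(-lam), well above Poisson's.
print(round(x.mean(), 2))
```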
Directory of Open Access Journals (Sweden)
Paddy K.C. Janssen
2016-03-01
Full Text Available Purpose: To find the most accurate mathematical description of the intravaginal ejaculation latency time (IELT) distribution in the general male population. Materials and Methods: We compared the fitness of various well-known mathematical distributions with the IELT distribution of two previously published stopwatch studies of the Caucasian general male population and a stopwatch study of Dutch Caucasian men with lifelong premature ejaculation (PE). The accuracy of fitness is expressed by the Goodness of Fit (GOF). The smaller the GOF, the more accurate is the fitness. Results: The 3 IELT distributions are gamma distributions, but the IELT distribution of lifelong PE is another gamma distribution than the IELT distribution of men in the general male population. The Lognormal distribution of the gamma distributions most accurately fits the IELT distribution of 965 men in the general population, with a GOF of 0.057. The Gumbel Max distribution most accurately fits the IELT distribution of 110 men with lifelong PE with a GOF of 0.179. There are more men with lifelong PE ejaculating within 30 and 60 seconds than can be extrapolated from the probability density curve of the Lognormal IELT distribution of men in the general population. Conclusions: Men with lifelong PE have a distinct IELT distribution, e.g., a Gumbel Max IELT distribution, that can only be retrieved from the general male population Lognormal IELT distribution when thousands of men would participate in an IELT stopwatch study. The mathematical formula of the Lognormal IELT distribution is useful for epidemiological research of the IELT.
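The fit-and-compare workflow described above can be sketched with standard tools; the synthetic data and parameters below are illustrative, and the Kolmogorov-Smirnov statistic stands in for the study's (unspecified) GOF measure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic positive "latency" data; a lognormal is the family the study
# found best for the general population (parameters here are made up).
data = rng.lognormal(mean=1.6, sigma=0.8, size=965)

# Fit a lognormal with location fixed at zero; scipy returns
# (shape, loc, scale) with shape = sigma and scale = exp(mu).
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# One common goodness-of-fit summary: the Kolmogorov-Smirnov statistic
ks = stats.kstest(data, "lognorm", args=(shape, loc, scale))
print(round(shape, 2), round(np.log(scale), 2))  # estimates near 0.8, 1.6
print(round(ks.statistic, 3))                    # small value -> good fit
```

Fitting several candidate families this way and ranking them by the GOF statistic mirrors the comparison performed in the study.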
Progress in Application of Generalized Wigner Distribution to Growth and Other Problems
Einstein, T. L.; Morales-Cifuentes, Josue; Pimpinelli, Alberto; Gonzalez, Diego Luis
We recap the use of the (single-parameter) Generalized Wigner Distribution (GWD) to analyze capture-zone distributions associated with submonolayer epitaxial growth. We discuss recent applications to physical systems, as well as key simulations. We pay particular attention to how this method compares with other methods to assess the critical nucleus size characterizing growth. The following talk discusses a particular case when special insight is needed to reconcile the various methods. We discuss improvements that can be achieved by going to a 2-parameter fragmentation approach. At a much larger scale we have applied this approach to various distributions in socio-political phenomena (areas of secondary administrative units [e.g., counties] and distributions of subway stations). Work at UMD supported by NSF CHE 13-05892.
An R Package for a General Class of Inverse Gaussian Distributions
Directory of Open Access Journals (Sweden)
Victor Leiva
2007-03-01
Full Text Available The inverse Gaussian distribution is a positively skewed probability model that has received great attention in the last 20 years. Recently, a family that generalizes this model, called inverse Gaussian type distributions, has been developed. The new R package named ig has been designed to analyze data from inverse Gaussian type distributions. This package contains basic probabilistic functions, lifetime indicators and a random number generator for this model. Also, parameter estimates and diagnostic analyses can be obtained using likelihood methods by means of this package. In addition, goodness-of-fit methods are implemented in order to assess the suitability of the model to the data. The capabilities and features of the ig package are illustrated using simulated and real data sets. Furthermore, some new results related to the inverse Gaussian type distribution are also obtained. Moreover, a simulation study is conducted for evaluating the estimation method implemented in the ig package.
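The ig package itself is written in R; a rough Python analogue of its basic facilities (density evaluation and random generation for the plain inverse Gaussian) can be sketched with scipy, noting that scipy parameterizes `invgauss` so that the mean is `mu * scale`:

```python
import numpy as np
from scipy import stats

# IG(mu, lambda) with mean mu and shape lambda; scipy's invgauss takes
# mu_scipy = mu / lam and scale = lam.
mu, lam = 2.0, 1.0
dist = stats.invgauss(mu / lam, scale=lam)

x = dist.rvs(size=100_000, random_state=0)
print(round(x.mean(), 2))     # close to mu = 2.0
print(round(dist.pdf(1.0), 4))  # density of IG(2, 1) at x = 1
```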
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there have been few studies of the confidence intervals that indicate the prediction accuracy of quantiles for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. As the results show, the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
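A common quantile function for the GL distribution in frequency analysis (assumed here, since the abstract does not state the parametrization) is x(F) = ξ + (α/κ)[1 − ((1−F)/F)^κ] for κ ≠ 0, reducing to the ordinary logistic quantile as κ → 0. A short sketch:

```python
import numpy as np

def gl_quantile(F, xi, alpha, kappa):
    """Quantile function of the generalized logistic distribution as used
    in frequency analysis: x(F) = xi + (alpha/kappa)*(1 - ((1-F)/F)**kappa)
    for kappa != 0, and xi - alpha*log((1-F)/F) in the kappa -> 0 limit."""
    F = np.asarray(F, dtype=float)
    if kappa == 0.0:
        return xi - alpha * np.log((1.0 - F) / F)
    return xi + (alpha / kappa) * (1.0 - ((1.0 - F) / F) ** kappa)

# A T-year return level corresponds to the non-exceedance probability
# F = 1 - 1/T; the parameter values below are illustrative only.
T = np.array([10.0, 50.0, 100.0])
print(gl_quantile(1.0 - 1.0 / T, xi=100.0, alpha=20.0, kappa=-0.1))
```

Confidence intervals for such quantiles can then be built from the asymptotic variance of the parameter estimators, which is what the MOM/ML/PWM comparison in the abstract evaluates.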
Log-concavity property for some well-known distributions
Directory of Open Access Journals (Sweden)
G. R. Mohtashami Borzadaran
2011-12-01
Full Text Available Interesting properties and propositions in many branches of science, such as economics, have been obtained from the property that the cumulative distribution function of a random variable is a concave function. Caplin and Nalebuff (1988, 1989), Bagnoli and Khanna (1989) and Bagnoli and Bergstrom (1989, 2005) have discussed the log-concavity property of probability distributions and their applications, especially in economics. Log-concavity concerns a twice-differentiable real-valued function g whose domain is an interval on the extended real line. Such a function g is said to be log-concave on the interval (a,b) if ln(g) is a concave function on (a,b). Log-concavity of g on (a,b) is equivalent to g'/g being monotone decreasing on (a,b), or to (ln(g))'' being non-positive there. These authors have obtained log-concavity results for distributions such as the normal, logistic, extreme-value, exponential, Laplace, Weibull, power function, uniform, gamma, beta, Pareto, log-normal, Student's t, Cauchy and F distributions. We have discussed and introduced continuous versions of the Pearson family, established log-concavity for this family in general, and then obtained the log-concavity property for each distribution that is a member of the Pearson family. The same cases have been calculated for the Burr family and for each distribution that belongs to it. Also, log-concavity results for distributions such as generalized gamma distributions, Feller-Pareto distributions, generalized inverse Gaussian distributions and generalized log-normal distributions have been obtained.
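The log-concavity criterion discussed above (ln(g) concave, i.e., its second derivative non-positive) can be checked numerically for a given density; a minimal sketch:

```python
import numpy as np

def is_log_concave(pdf, a, b, n=2001):
    """Numerically check log-concavity of a density on (a, b) by testing
    that the discrete second difference of ln(pdf) is <= 0 throughout."""
    x = np.linspace(a, b, n)
    lg = np.log(pdf(x))
    second_diff = lg[2:] - 2.0 * lg[1:-1] + lg[:-2]
    return bool(np.all(second_diff <= 1e-9))

normal_pdf = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
cauchy_pdf = lambda x: 1.0 / (np.pi * (1.0 + x**2))

print(is_log_concave(normal_pdf, -5.0, 5.0))  # True: ln(pdf) = -x^2/2 + c
print(is_log_concave(cauchy_pdf, -5.0, 5.0))  # False: fails beyond |x| = 1
```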
International Nuclear Information System (INIS)
Khromov, V.V.
1978-01-01
The notion of neutron importance is considered as applied to nuclear reactor statics problems described by time-independent homogeneous equations of neutron transport, with provision for normalization of the neutron distribution. An equation has been obtained for the neutron importance function in a conditionally critical reactor with respect to an arbitrary nonlinear functional determined for the normalized neutron distribution. The relation between this function and the generalized Green function of the self-conjugate operator of the reactor equation is determined, and the formula of small perturbations for the functionals of a conditionally critical reactor is deduced.
Universality of Generalized Parton Distributions in Light-Front Holographic QCD
de Téramond, Guy F.; Liu, Tianbo; Sufian, Raza Sabbir; Dosch, Hans Günter; Brodsky, Stanley J.; Deur, Alexandre; Hlfhs Collaboration
2018-05-01
The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function w(x) which incorporates Regge behavior at small x and inclusive counting rules at x → 1. A simple ansatz for w(x) that fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.
International Nuclear Information System (INIS)
Suryawan, Herry P.; Gunarso, Boby
2017-01-01
The generalized mixed fractional Brownian motion is defined by taking linear combinations of a finite number of independent fractional Brownian motions with different Hurst parameters. It is a Gaussian process with stationary increments, possesses the self-similarity property, and, in general, is neither a Markov process nor a martingale. In this paper we study the generalized mixed fractional Brownian motion within the white noise analysis framework. As a main result, we prove that for any spatial dimension and for arbitrary Hurst parameter the self-intersection local times of the generalized mixed fractional Brownian motions, after a suitable renormalization, are well-defined as Hida white noise distributions. The chaos expansions of the self-intersection local times in terms of Wick powers of white noises are also presented. (paper)
Nikov, V.S.; Nikova, S.I.; Preneel, B.; Vandewalle, J.; Menezes, A.; Sarkar, P.
2002-01-01
A Key Distribution Center of a network is a server enabling private communications within groups of users. A Distributed Key Distribution Center is a set of servers that jointly realizes a Key Distribution Center. In this paper we build a robust Distributed Key Distribution Center Scheme secure
Song, Q Chelsea; Wee, Serena; Newman, Daniel A
2017-12-01
To reduce adverse impact potential and improve diversity outcomes from personnel selection, one promising technique is De Corte, Lievens, and Sackett's (2007) Pareto-optimal weighting strategy. De Corte et al.'s strategy has been demonstrated on (a) a composite of cognitive and noncognitive (e.g., personality) tests (De Corte, Lievens, & Sackett, 2008) and (b) a composite of specific cognitive ability subtests (Wee, Newman, & Joseph, 2014). Both studies illustrated how Pareto weighting (in contrast to unit weighting) could lead to substantial improvement in diversity outcomes (i.e., diversity improvement), sometimes more than doubling the number of job offers for minority applicants. The current work addresses a key limitation of the technique: the possibility of shrinkage, especially diversity shrinkage, in the Pareto-optimal solutions. Using Monte Carlo simulations, sample size and predictor combinations were varied and cross-validated Pareto-optimal solutions were obtained. Although diversity shrinkage was sizable for a composite of cognitive and noncognitive predictors when sample size was at or below 500, diversity shrinkage was typically negligible for a composite of specific cognitive subtest predictors when sample size was at least 100. Diversity shrinkage was larger when the Pareto-optimal solution suggested substantial diversity improvement. When sample size was at least 100, cross-validated Pareto-optimal weights typically outperformed unit weights, suggesting that diversity improvement is often possible, despite diversity shrinkage. Implications for Pareto-optimal weighting, adverse impact, sample size of validation studies, and optimizing the diversity-job performance tradeoff are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.
Bouguila, Nizar; Ziou, Djemel
2010-01-01
In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the number of mixture components to be specified in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using a Gibbs sampler. Through applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
Energy Technology Data Exchange (ETDEWEB)
Guzey, Vadim; Goeke, Klaus; Siddikov, Marat
2009-01-01
We generalize the leading twist theory of nuclear shadowing and calculate quark and gluon generalized parton distributions (GPDs) of spinless nuclei. We predict very large nuclear shadowing for nuclear GPDs. In the limit of purely transverse momentum transfer, our nuclear GPDs become impact parameter dependent nuclear parton distributions (PDFs). Nuclear shadowing induces non-trivial correlations between the impact parameter b and the light-cone fraction x. We make predictions for the deeply virtual Compton scattering (DVCS) amplitude and the DVCS cross section on 208Pb at high energies. We calculate the cross section of the Bethe-Heitler (BH) process and address the issue of the extraction of the DVCS signal from the eA → eγA cross section. We find that the eA → eγA differential cross section is dominated by DVCS at momentum transfer t near the minima of the nuclear form factor. We also find that nuclear shadowing leads
Topology of event distributions as a generalized definition of phase transitions in finite systems
International Nuclear Information System (INIS)
Chomaz, Ph.; Duflot, V.; Gulminelli, F.; Duflot, V.
2000-01-01
We propose a definition of phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. This generalizes all the definitions based on curvature anomalies of thermodynamical potentials and provides a natural definition of order parameters. It is directly operational from the experimental point of view and allows the study of phase transitions in Gibbs equilibria as well as in other ensembles, such as the Tsallis ensemble. (author)
Baryon form factors at high momentum transfer and generalized parton distributions
International Nuclear Information System (INIS)
Stoler, Paul
2002-01-01
Nucleon form factors at high momentum transfer t are treated in the framework of generalized parton distributions (GPDs). The possibility of obtaining information about high transverse momentum components of partons by applying GPDs to form factors is discussed. This is illustrated by applying an ad hoc 2-body parton wave function to the elastic nucleon form factors F1 and F2, the N→Δ transition magnetic form factor GM*, and the wide-angle Compton scattering form factor R1.
Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods
International Nuclear Information System (INIS)
Procaccia, H.; Villain, B.; Clarotti, C.A.
1996-01-01
EDF and ENEA carried out a joint research program for developing the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then, the following further steps have been taken: input data have been generalized to the case where observed lives are censored both on the right and on the left; allowable life distributions are Weibull and gamma, whose parameters are both unknown and can be statistically dependent; allowable priors are histograms relative to different parametrizations of the life distribution of concern; and first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of the PWR steam generator system is presented. (authors)
International Nuclear Information System (INIS)
Mace, R.L.
1996-01-01
We report on a new form for the dielectric tensor for a plasma containing superthermal particles. The individual particle components are modelled by 3-dimensional isotropic kappa, or generalized Lorentzian, distributions with arbitrary real-valued index κ. The new dielectric tensor is valid for arbitrary wavevectors. The dielectric tensor, which resembles Trubnikov's dielectric tensor for a relativistic plasma, is compared with the familiar Maxwellian form. When the dielectric tensor is used in the plasma dispersion relation for waves propagating parallel to the magnetic field it reproduces previously derived dispersion relations for various electromagnetic and electrostatic waves in plasmas modelled by Lorentzian particle distributions. Within the constraints of propagation parallel to the ambient magnetic field, we extend the above results to incorporate loss-cone Lorentzian particle distributions, which have important applications in laboratory mirror devices, as well as in space and astrophysical environments. (orig.)
Analysis of nonlocal neural fields for both general and gamma-distributed connectivities
Hutt, Axel; Atay, Fatihcan M.
2005-04-01
This work studies the stability of equilibria in spatially extended neuronal ensembles. We first derive the model equation from statistical properties of the neuron population. The obtained integro-differential equation includes synaptic and space-dependent transmission delay for both general and gamma-distributed synaptic connectivities. The latter connectivity type reveals infinite, finite, and vanishing self-connectivities. The work derives conditions for stationary and nonstationary instabilities for both kernel types. In addition, a nonlinear analysis for general kernels yields the order parameter equation of the Turing instability. To compare the results to findings for partial differential equations (PDEs), two typical PDE-types are derived from the examined model equation, namely the general reaction-diffusion equation and the Swift-Hohenberg equation. Hence, the discussed integro-differential equation generalizes these PDEs. In the case of the gamma-distributed kernels, the stability conditions are formulated in terms of the mean excitatory and inhibitory interaction ranges. As a novel finding, we obtain Turing instabilities in fields with local inhibition-lateral excitation, while wave instabilities occur in fields with local excitation and lateral inhibition. Numerical simulations support the analytical results.
Level Diagrams analysis of Pareto Front for multiobjective system redundancy allocation
International Nuclear Information System (INIS)
Zio, E.; Bazzo, R.
2011-01-01
Reliability-based and risk-informed design, operation, maintenance and regulation lead to multiobjective (multicriteria) optimization problems. In this context, the Pareto Front and Set found in a multiobjective optimality search provide a family of solutions among which the decision maker has to look for the best choice according to his or her preferences. Efficient visualization techniques for Pareto Front and Set analyses are needed for helping decision makers in the selection task. In this paper, we consider the multiobjective optimization of system redundancy allocation and use the recently introduced Level Diagrams technique for graphically representing the resulting Pareto Front and Set. Each objective and decision variable is represented on separate diagrams where the points of the Pareto Front and Set are positioned according to their proximity to ideally optimal points, as measured by a metric of normalized objective values. All diagrams are synchronized across all objectives and decision variables. On the basis of the analysis of the Level Diagrams, we introduce a procedure for reducing the number of solutions in the Pareto Front; from the reduced set of solutions, the decision maker can more easily identify his or her preferred solution.
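To make the proximity metric concrete, the scoring at the heart of the Level Diagrams technique can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' code: the 1-norm variant is assumed, objectives are normalized to [0, 1] over the front, and the (cost, unreliability) pairs are hypothetical.

```python
def level_diagram_norms(front, p=1):
    """Normalize each objective to [0, 1] over the Pareto front (0 = best
    observed, 1 = worst observed), then score each point by the p-norm of
    its normalized objectives; lower scores lie closer to the ideal point."""
    n_obj = len(front[0])
    lo = [min(pt[i] for pt in front) for i in range(n_obj)]
    hi = [max(pt[i] for pt in front) for i in range(n_obj)]
    norms = []
    for pt in front:
        z = [(pt[i] - lo[i]) / (hi[i] - lo[i]) if hi[i] > lo[i] else 0.0
             for i in range(n_obj)]
        norms.append(sum(v ** p for v in z) ** (1.0 / p))
    return norms

# Hypothetical non-dominated (cost, unreliability) designs:
front = [(100.0, 0.30), (120.0, 0.12), (200.0, 0.05)]
print(level_diagram_norms(front))
```

On this toy front the middle design gets the smallest norm, i.e., it sits closest to the ideal point, which is exactly the synchronization the Level Diagrams plots exploit across objectives.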
Directory of Open Access Journals (Sweden)
Ziaul Huque
2012-01-01
A Computational Fluid Dynamics (CFD) and response surface-based multiobjective design optimization were performed for six different 2D airfoil profiles, and the Pareto optimal front of each airfoil is presented. FLUENT, a commercial CFD simulation code, was used to determine the relevant aerodynamic loads. The Lift Coefficient (CL) and Drag Coefficient (CD) data at a range of 0° to 12° angles of attack (α) and at three different Reynolds numbers (Re = 68,459; 479,210; and 958,422) were obtained for all six airfoils. A realizable k-ε turbulence model with a second-order upwind solution method was used in the simulations. The standard least squares method was used to generate the response surfaces with the statistical code JMP. The elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to determine the Pareto optimal set based on the response surfaces. Each Pareto optimal solution represents a different compromise between design objectives. This gives the designer the choice of a design compromise that best suits the requirements from a set of optimal solutions. The Pareto solution set is presented in the form of a Pareto optimal front.
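As background, the core of NSGA-II's selection is non-dominated sorting. The following is a minimal, quadratic-time Python sketch of that sorting step alone, not the full genetic algorithm and not the study's code; the (drag, negated lift) pairs are invented for illustration, with both coordinates minimized.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Peel points into Pareto fronts: front 0 is the non-dominated set,
    front 1 is non-dominated once front 0 is removed, and so on."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy (drag, -lift) pairs: minimizing -lift maximizes lift.
pts = [(0.02, -1.1), (0.03, -1.3), (0.05, -1.2), (0.04, -1.4)]
print(non_dominated_sort(pts))
```

Here three of the four designs are mutually non-dominated and form front 0, while the remaining one is pushed to front 1; NSGA-II then ranks candidates by front membership.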
Bíró, Gábor; Barnaföldi, Gergely Gábor; Biró, Tamás Sándor; Shen, Keming
2018-02-01
The latest high-accuracy identified hadron spectra measurements in high-energy nuclear collisions led us to investigate strongly interacting particles and collective effects in small systems. Since microscopical processes result in a statistical Tsallis-Pareto distribution, the fit parameters q and T are well suited for identifying system size scalings and initial conditions. Moreover, the parameter values provide information on the deviation from the extensive Boltzmann-Gibbs statistics in finite volumes. We apply here the fit procedure developed in our earlier study for proton-proton collisions [1, 2]. The observed mass and center-of-mass energy trends in the hadron production are compared to RHIC dAu and LHC pPb data in different centrality/multiplicity classes. Here we present new results on mass hierarchy in pp and pA from light to heavy hadrons.
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
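The supported/unsupported distinction above can be demonstrated on a three-point toy instance. In this illustrative Python sketch (invented objective values, not the article's data), sweeping every weight of a weighted-sum scalarization recovers only the two supported points and never the unsupported Pareto efficient point (2, 9), which lies above the convex hull of the front.

```python
def pareto_set(points):
    """Non-dominated subset when both objectives are minimized."""
    return [p for p in points
            if not any(q != p and all(a <= b for a, b in zip(q, p))
                       and any(a < b for a, b in zip(q, p))
                       for q in points)]

def weighted_sum_optima(points, weights):
    """Solutions found by minimizing w*f1 + (1-w)*f2 over a weight grid:
    only *supported* Pareto points (on the convex hull) can ever appear."""
    found = set()
    for w in weights:
        found.add(min(points, key=lambda p: w * p[0] + (1 - w) * p[1]))
    return found

# (2, 9) is Pareto efficient but unsupported relative to (1, 10) and (10, 1).
pts = [(1, 10), (2, 9), (10, 1)]
supported = weighted_sum_optima(pts, [i / 100 for i in range(101)])
print(sorted(pareto_set(pts)), sorted(supported))
```

All three points are Pareto efficient, yet no weight ever selects (2, 9); this is exactly why the article argues for heuristics that can also reach unsupported solutions.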
Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.
Elhossini, Ahmed; Areibi, Shawki; Dony, Robert
2010-01-01
This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.
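As an aside, the "strength Pareto" fitness assignment that SPEA2 (and hence the proposed PSO variant) builds on can be sketched briefly. This Python sketch shows only the strength and raw-fitness computation, not the density estimation or the swarm dynamics, and the objective vectors are invented.

```python
def spea2_raw_fitness(points):
    """SPEA2-style fitness (minimization): strength S(i) counts how many
    solutions point i dominates; raw fitness R(i) sums the strengths of all
    points dominating i, so non-dominated solutions get R(i) = 0."""
    n = len(points)

    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    strength = [sum(dominates(points[i], points[j]) for j in range(n))
                for i in range(n)]
    raw = [sum(strength[j] for j in range(n)
               if dominates(points[j], points[i]))
           for i in range(n)]
    return strength, raw

# Invented bi-objective values; the last point is dominated by several others.
pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(spea2_raw_fitness(pts))
```

Heavily dominated points accumulate large raw fitness and are selected against, which is the mechanism the hybrid EA-PSO algorithms inherit.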
Directory of Open Access Journals (Sweden)
Jarosław Rudy
2015-01-01
In this paper the job shop scheduling problem (JSP) with two criteria minimized simultaneously is considered. JSP is a frequently used model in real-world applications of combinatorial optimization. Multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and a genetic algorithm (GA), for MOJSP. Both methods employ a technique taken from multi-criteria decision analysis in order to establish a ranking of solutions. ACO and GA differ in the method of keeping information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by these algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by the two methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.
Rozenberg, P
2017-06-01
Ultrasound measurement of cervical length in the general population enables the identification of women at risk for spontaneous preterm delivery. Vaginal progesterone is effective in reducing the risk of preterm delivery in this population. This screening associated with treatment by vaginal progesterone is cost-effective. Universal screening of cervical length can therefore be considered justified. Nonetheless, this screening will not appreciably reduce the preterm birth prevalence: in France or UK, where the preterm delivery rate is around 7.4%, this strategy would make it possible to reduce it only to 7.0%. This small benefit must be set against the considerable effort required in terms of screening ultrasound scans. Universal ultrasound screening of cervical length is the inverse of Pareto's principle: a small benefit against a considerable effort. © 2016 Royal College of Obstetricians and Gynaecologists.
Calculation of the dielectric tensor for a generalized Lorentzian (kappa) distribution function
International Nuclear Information System (INIS)
Summers, D.; Xue, S.; Thorne, R.M.
1994-01-01
Expressions are derived for the elements of the dielectric tensor for linear waves propagating at an arbitrary angle to a uniform magnetic field in a fully hot plasma whose constituent particle species σ are modeled by generalized Lorentzian distribution functions. The expressions involve readily computable single integrals whose integrands involve only elementary functions, Bessel functions, and modified plasma dispersion functions, the latter being available in the form of finite algebraic series. Analytical forms for the integrals are derived in the limits λ → 0 and λ → ∞, where λ = (k⊥ρLσ)²/2, with k⊥ the component of the wave vector perpendicular to the ambient magnetic field and ρLσ the Larmor radius for particle species σ. Consideration is given to the important limits of wave propagation parallel and perpendicular to the ambient magnetic field, and also to the cold plasma limit. Since most space plasmas are well modeled by generalized Lorentzian particle distribution functions, the results obtained in this paper provide a powerful tool for analyzing kinetic (micro-) instabilities in space plasmas in a very general context, limited only by the assumptions of linear plasma theory.
Dan, Youquan; Xu, Yonggen
2018-04-01
The evolution law of arbitrary order moments of the Wigner distribution function, which can be applied to the different spatial power spectra, is obtained for partially coherent general beams propagating in atmospheric turbulence using the extended Huygens-Fresnel principle. A coupling coefficient of radiant intensity distribution (RID) in turbulence is introduced. Analytical expressions of the evolution of the first five-order moments, kurtosis parameter, coupling coefficient of RID for general beams in turbulence are derived, and the formulas are applied to Airy beams. Results show that there exist two types for general beams in turbulence. A larger value of kurtosis parameter for Airy beams also reveals that coupling effect due to turbulence is stronger. Both theoretical analysis and numerical results show that the maximum value of kurtosis parameter for an Airy beam in turbulence is independent of turbulence strength parameter and is only determined by inner scale of turbulence. Relative angular spread, kurtosis and coupling coefficient are less influenced by turbulence for Airy beams with a smaller decay factor and a smaller initial width of the first lobe.
Directory of Open Access Journals (Sweden)
Yan Sun
2015-09-01
Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, optimizing from the two viewpoints of cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is gained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, the sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality have good performance in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of the variation of demand and supply on the multi-modal transportation organization. Therefore, this method can be promoted to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. The Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
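The notion of Pareto-optimality over calibration targets can be made concrete with a small sketch. The following Python is illustrative only (not the authors' models): four hypothetical input sets are scored on two calibration targets, and two different weighted-sum GOF schemes pick different "best-fitting" sets, both of which lie on the Pareto frontier.

```python
def pareto_frontier(errors):
    """Indices of input sets whose per-target fit errors are not matched or
    beaten on every calibration target by some other set (minimization)."""
    keep = []
    for i, e in enumerate(errors):
        dominated = any(all(f[k] <= e[k] for k in range(len(e)))
                        and any(f[k] < e[k] for k in range(len(e)))
                        for j, f in enumerate(errors) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical fit errors of 4 candidate input sets on 2 calibration targets.
errors = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.8, 0.8)]
frontier = pareto_frontier(errors)
# Two weighted-sum GOF schemes disagree on which input set fits "best":
best_w1 = min(range(4), key=lambda i: 0.9 * errors[i][0] + 0.1 * errors[i][1])
best_w2 = min(range(4), key=lambda i: 0.1 * errors[i][0] + 0.9 * errors[i][1])
print(frontier, best_w1, best_w2)
```

The dominated fourth set is excluded by the frontier regardless of weights, while the choice among frontier members, and thus the downstream health economic conclusion, depends entirely on the weighting scheme.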
Boer, Marie
2017-09-01
Generalized Parton Distributions (GPDs) contain the correlation between the partons' longitudinal momentum and their transverse distribution. They are accessed through hard exclusive processes, such as Deeply Virtual Compton Scattering (DVCS). DVCS has already been measured in several experiments, and several models allow for extracting GPDs from these measurements. Timelike Compton Scattering (TCS) is, at leading order, the time-reversal equivalent process to DVCS and accesses GPDs at the same kinematics. Comparing GPDs extracted from DVCS and TCS is a unique way of proving GPD universality. Combining fits from the two processes will also allow for better constraining the GPDs. We will present our method for extracting GPDs from DVCS and TCS pseudo-data. We will compare fit results from the two processes in similar conditions and present what can be expected in terms of constraints on GPDs from combined fits.
Evaluation of water vapor distribution in general circulation models using satellite observations
Soden, Brian J.; Bretherton, Francis P.
1994-01-01
This paper presents a comparison of the water vapor distribution obtained from two general circulation models, the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM), with satellite observations of total precipitable water (TPW) from Special Sensor Microwave/Imager (SSM/I) and upper tropospheric relative humidity (UTH) from GOES. Overall, both models are successful in capturing the primary features of the observed water vapor distribution and its seasonal variation. For the ECMWF model, however, a systematic moist bias in TPW is noted over well-known stratocumulus regions in the eastern subtropical oceans. Comparison with radiosonde profiles suggests that this problem is attributable to difficulties in modeling the shallowness of the boundary layer and large vertical water vapor gradients which characterize these regions. In comparison, the CCM is more successful in capturing the low values of TPW in the stratocumulus regions, although it tends to exhibit a dry bias over the eastern half of the subtropical oceans and a corresponding moist bias in the western half. The CCM also significantly overestimates the daily variability of the moisture fields in convective regions, suggesting a problem in simulating the temporal nature of moisture transport by deep convection. Comparison of the monthly mean UTH distribution indicates generally larger discrepancies than were noted for TPW owing to the greater influence of large-scale dynamical processes in determining the distribution of UTH. In particular, the ECMWF model exhibits a distinct dry bias along the Intertropical Convergence Zone (ITCZ) and a moist bias over the subtropical descending branches of the Hadley cell, suggesting an underprediction in the strength of the Hadley circulation. The CCM, on the other hand, demonstrates greater discrepancies in UTH than are observed for the ECMWF model, but none that are as
DEFF Research Database (Denmark)
Andersen, Kurt Munk; Sandqvist, Allan
1997-01-01
We investigate the domain of definition and the domain of values for the successor function of a cooperative differential system x'=f(t,x), where the coordinate functions are concave in x for any fixed value of t. Moreover, we give a characterization of a weakly Pareto optimal solution.
On Usage of Pareto curves to Select Wind Turbine Controller Tunings to the Wind Turbulence Level
DEFF Research Database (Denmark)
Odgaard, Peter Fogh
2015-01-01
Model predictive control has in recent publications shown its potential for lowering the cost of energy of modern wind turbines. Pareto curves can be used to evaluate the performance of these controllers with multiple conflicting objectives of power and fatigue loads. In this paper an approach...... to update a model predictive wind turbine controller tuning as the wind turbulence increases, as increased turbulence levels result in higher loads for the same controller tuning. In this paper the Pareto curves are computed using an industrial high-fidelity aero-elastic model. Simulations show......
Cross-channel analysis of quark and gluon generalized parton distributions with helicity flip
International Nuclear Information System (INIS)
Pire, B.; Semenov-Tian-Shansky, K.; Szymanowski, L.; Wallon, S.
2014-01-01
Quark and gluon helicity flip generalized parton distributions (GPDs) address the transversity quark and gluon structure of the nucleon. In order to construct a theoretically consistent parametrization of these hadronic matrix elements, we work out the set of combinations of those GPDs suitable for the SO(3) partial wave (PW) expansion in the cross-channel. This universal result will help to build up a flexible parametrization of these important hadronic non-perturbative quantities, using, for instance, the approaches based on the conformal PW expansion of GPDs such as the Mellin-Barnes integral or the dual parametrization techniques. (orig.)
Very short-term probabilistic forecasting of wind power with generalized logit-Normal distributions
DEFF Research Database (Denmark)
Pinson, Pierre
2012-01-01
Very-short-term probabilistic forecasts, which are essential for an optimal management of wind generation, ought to account for the non-linear and double-bounded nature of that stochastic process. They take here the form of discrete-continuous mixtures of generalized logit-normal distributions and probability masses at the bounds. Both auto-regressive and conditional parametric auto-regressive models are considered for the dynamics of their location and scale parameters. Estimation is performed in a recursive least squares framework with exponential forgetting. The superiority of this proposal over......
Directory of Open Access Journals (Sweden)
K. Gawdzińska
2011-04-01
This author discusses the use of selected quality management tools, i.e. the Pareto chart and Ishikawa fishbone diagram, for the description of composite casting defects. The Pareto chart makes it possible to determine defect priorities for metallic composite castings, while the Ishikawa diagram indicates the causes of defect formation and enables calculating defect weights.
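For illustration, the bookkeeping behind a Pareto chart, sorting defect categories by frequency and accumulating their percentage share, can be sketched in Python. The defect categories and tallies below are hypothetical, not taken from the cited study.

```python
def pareto_chart_data(defect_counts):
    """Sort defect categories by descending frequency and attach cumulative
    percentages; the 'vital few' are the categories that together reach
    roughly 80% of the cumulative share."""
    total = sum(defect_counts.values())
    rows, cum = [], 0
    for category, n in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
        cum += n
        rows.append((category, n, round(100 * cum / total, 1)))
    return rows

# Hypothetical casting-defect tallies:
counts = {"porosity": 48, "shrinkage": 27, "inclusions": 15,
          "cold shuts": 7, "misruns": 3}
for row in pareto_chart_data(counts):
    print(row)
```

Plotting the counts as bars and the third column as a cumulative line gives the familiar chart; in this toy data the top two categories already account for 75% of all defects.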
International Nuclear Information System (INIS)
Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.
2007-01-01
Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered solved when the Pareto optimal set, i.e., the set of non-dominated solutions, is found. Multi-objective evolutionary algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from it. Thus, a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the region of the Pareto frontier that best matches those preferences. This method was tested on benchmark problems with two and three criteria, and on a polymer extrusion problem. It efficiently selects the best Pareto-frontier region for the specified relative importance of the criteria.
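The selection step can be sketched as follows. A plain weighted Chebyshev scalarization stands in for the paper's weighted stress function (whose exact form is not given here), and the four-point front is invented for illustration.

```python
def select_preferred(pareto_points, weights):
    """Pick one point from a Pareto set by a weighted Chebyshev distance to
    the ideal point (all objectives minimised; a higher weight means the
    criterion matters more, so deviations in it are penalised more)."""
    m = len(weights)
    ideal = [min(p[i] for p in pareto_points) for i in range(m)]
    nadir = [max(p[i] for p in pareto_points) for i in range(m)]
    def score(p):
        return max(w * (p[i] - ideal[i]) / ((nadir[i] - ideal[i]) or 1.0)
                   for i, w in enumerate(weights))
    return min(pareto_points, key=score)

front = [(0.0, 1.0), (0.2, 0.5), (0.5, 0.2), (1.0, 0.0)]
print(select_preferred(front, (10.0, 1.0)))  # → (0.0, 1.0)  (objective 1 prioritised)
print(select_preferred(front, (1.0, 10.0)))  # → (1.0, 0.0)  (objective 2 prioritised)
```

Shifting the weight vector moves the selected point along the front, which is the behaviour the DM-preference methods above aim for.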
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to guarantee convergence to a Pareto optimum.
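The staged idea can be sketched on a finite candidate set, a toy stand-in for the continuous constrained problems of the paper. The candidate vectors are invented, and in this simplified discrete form ties can prevent full Pareto optimality, so it illustrates the mechanism rather than reproducing the convergence guarantee.

```python
def staged_pareto_search(candidates, stages):
    """Repeated single-objective minimisation with threshold-of-acceptability
    constraints: after each stage, the values achieved by the incumbent become
    upper bounds for every objective in later stages, so each stage can only
    search among non-worsening candidates.

    candidates: finite list of objective vectors (all objectives minimised).
    stages: objective indices to minimise, one per stage.
    """
    n_obj = len(candidates[0])
    thresholds = [float("inf")] * n_obj
    best = None
    for obj in stages:
        feasible = [c for c in candidates
                    if all(c[i] <= thresholds[i] for i in range(n_obj))]
        best = min(feasible, key=lambda c: c[obj])
        thresholds = list(best)  # tighten the acceptability thresholds
    return best

candidates = [(3, 5), (4, 2), (2, 6), (3, 3), (5, 1)]
print(staged_pareto_search(candidates, stages=[0, 1]))  # → (2, 6)
print(staged_pareto_search(candidates, stages=[1, 0]))  # → (5, 1)
```

The stage order encodes priority: minimising objective 0 first lands on a different Pareto point than minimising objective 1 first.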
Jean Mercier-Ythier
2010-01-01
We consider abstract social systems of private property, made of n individuals endowed with non-paternalistic interdependent preferences, who interact through exchanges on competitive markets and Pareto-efficient lump-sum transfers. The transfers follow from a distributive liberal social contract defined as a redistribution of initial endowments such that the resulting market equilibrium allocation is both Pareto-efficient relative to individual interdependent preferences, and unanimously weak...
Distributing Correlation Coefficients of Linear Structure-Activity/Property Models
Directory of Open Access Journals (Sweden)
Sorana D. BOLBOACA
2011-12-01
Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk-assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the distribution pattern of correlation coefficients associated with simple linear relationships linking the compounds' structure with their activities. A set of the most common ordnance compounds found at naval facilities, with a limited data set covering a range of toxicities to the aquatic ecosystem and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% the Generalized Pareto distribution, and 12% the Pert distribution.
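As an illustration of the distribution-fitting step, a Beta law can be fitted to a batch of correlation coefficients by moment matching. This is only a sketch: the paper does not state its fitting procedure, and the r values below are invented.

```python
def beta_moment_fit(xs):
    """Method-of-moments estimates (alpha, beta) for a Beta distribution
    fitted to values in the open interval (0, 1)."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Hypothetical correlation coefficients from statistically significant models.
r_values = [0.71, 0.78, 0.82, 0.85, 0.88, 0.90, 0.93, 0.95]
alpha, beta = beta_moment_fit(r_values)
print(round(alpha, 1), round(beta, 1))
```

With alpha much larger than beta the fitted density piles up near 1, the shape one would expect for coefficients of models that passed a significance screen.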
Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis
Das, Samiran
2018-04-01
The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered, and its performance is assessed against the other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show their dependence on shape parameter and sample size. The results of the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values, and recommends further assessments on flow data sets of rivers with various hydrological regimes.
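The EDF-test machinery can be illustrated with a hand-rolled Kolmogorov-Smirnov statistic against a fully specified normal distribution. This is a sketch only: in the paper's setting the parameters are estimated from the sample by L-moments, which changes the null distribution of the statistic and is exactly why the authors tabulate custom critical values. The sample values here are made up.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, mu, sigma):
    """Kolmogorov-Smirnov distance between the sample EDF and a fully
    specified normal(mu, sigma) CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = norm_cdf(x, mu, sigma)
        d = max(d, f - i / n, (i + 1) / n - f)
    return d

sample = [-1.5, -0.5, 0.0, 0.5, 1.5]
print(round(ks_statistic(sample, 0.0, 1.0), 3))  # small distance: plausible fit
print(round(ks_statistic(sample, 3.0, 1.0), 3))  # large distance: poor fit
```

The statistic is the largest gap between the stepwise EDF and the hypothesised CDF, evaluated on both sides of each jump.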
Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao
2013-10-01
Near a charged surface, counterions of different valences and sizes cluster, and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of the Debye length due to the effect of ionic sizes.
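A size-modified Boltzmann relation can be sketched in its simplest, uniform-ion-volume (lattice-gas, Bikerman-type) form, used here only as a stand-in for the paper's more general unequal-size functional. All quantities are dimensionless and the parameter values are illustrative.

```python
import math

def steric_boltzmann(u, bulk, valences, v):
    """Size-modified (lattice-gas, Bikerman-type) Boltzmann concentrations
    at dimensionless potential u = e*psi/(kB*T), assuming one uniform ion
    volume v. bulk holds the bulk concentrations c_i^b, valences the z_i.
    Unlike the classical Boltzmann factor alone, the concentrations
    saturate near 1/v instead of growing without bound."""
    boltz = [c * math.exp(-z * u) for c, z in zip(bulk, valences)]
    denom = 1.0 + v * (sum(boltz) - sum(bulk))
    return [b / denom for b in boltz]

print(steric_boltzmann(0.0, [0.1, 0.1], [1, -1], 0.5))  # → [0.1, 0.1]
```

At zero potential the bulk values are recovered, while at large potential the packing fraction v * sum(c) stays below one: this is the saturation effect that drives the stratification discussed above.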
Han, Fang; Liu, Han
2017-02-01
The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix for estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that, after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we present for the first time a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
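The transform behind the rank-based estimator is the standard elliptical-copula identity rho = sin(pi * tau / 2) applied entrywise. A single entry can be sketched with a pure-Python pairwise tau (O(n^2), no tie handling; the data are invented):

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Sample Kendall's tau via O(n^2) pair counting (ties ignored)."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(list(zip(x, y)), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

def latent_correlation(x, y):
    """Transformed estimator sin(pi/2 * tau) of the latent Pearson
    correlation under an elliptical copula."""
    return math.sin(0.5 * math.pi * kendall_tau(x, y))

# Rank-based, so any monotone marginal transformation leaves it unchanged:
print(latent_correlation([1, 2, 3, 4, 5], [1, 8, 27, 64, 125]))  # → 1.0
```

Because it depends only on ranks, the estimator is immune to the unspecified marginal transformations of the transelliptical family, which is the robustness property the paper analyses.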
Structure of general-population antibody titer distributions to influenza A virus.
Nhat, Nguyen Thi Duy; Todd, Stacy; de Bruin, Erwin; Thao, Tran Thi Nhu; Vy, Nguyen Ha Thao; Quan, Tran Minh; Vinh, Dao Nguyen; van Beek, Janko; Anh, Pham Hong; Lam, Ha Minh; Hung, Nguyen Thanh; Thanh, Nguyen Thi Le; Huy, Huynh Le Anh; Ha, Vo Thi Hong; Baker, Stephen; Thwaites, Guy E; Lien, Nguyen Thi Nam; Hong, Tran Thi Kim; Farrar, Jeremy; Simmons, Cameron P; Chau, Nguyen Van Vinh; Koopmans, Marion; Boni, Maciej F
2017-07-20
Seroepidemiological studies aim to understand population-level exposure and immunity to infectious diseases. Their results are normally presented as binary outcomes describing the presence or absence of pathogen-specific antibody, despite the fact that many assays measure continuous quantities. A population's natural distribution of antibody titers to an endemic infectious disease may include information on multiple serological states - naiveté, recent infection, non-recent infection, childhood infection - depending on the disease in question and the acquisition and waning patterns of immunity. In this study, we investigate 20,152 general-population serum samples from southern Vietnam collected between 2009 and 2013 from which we report antibody titers to the influenza virus HA1 protein using a continuous titer measurement from a protein microarray assay. We describe the distributions of antibody titers to subtypes 2009 H1N1 and H3N2. Using a model selection approach to fit mixture distributions, we show that 2009 H1N1 antibody titers fall into four titer subgroups and that H3N2 titers fall into three subgroups. For H1N1, our interpretation is that the two highest-titer subgroups correspond to recent and historical infection, which is consistent with 2009 pandemic attack rates. Similar interpretations are available for H3N2, but right-censoring of titers makes these interpretations difficult to validate.
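The mixture-model idea can be illustrated with a minimal EM fit of a two-component normal mixture; the study itself fits richer mixtures with three or four components and selects among them. The titer values below are made up to show two well-separated subgroups.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def em_two_normals(data, iters=300):
    """Minimal EM fit of a two-component 1-D Gaussian mixture.
    Returns (w, mu1, s1, mu2, s2), with w the weight of component 1."""
    xs = sorted(data)
    n = len(xs)
    # crude but deterministic initialisation from the lower/upper quartiles
    w, mu1, mu2 = 0.5, xs[n // 4], xs[3 * n // 4]
    s1 = s2 = max(xs[-1] - xs[0], 1e-6) / 4.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in xs:
            p1 = w * normal_pdf(x, mu1, s1)
            p2 = (1.0 - w) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and standard deviations
        n1 = sum(r)
        n2 = n - n1
        w = n1 / n
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum((1.0 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2) or 1e-6
    return w, mu1, s1, mu2, s2

# Hypothetical log-titers with two well-separated subgroups.
titers = [0.9, 0.95, 1.0, 1.0, 1.05, 1.1, 4.9, 4.95, 5.0, 5.0, 5.05, 5.1]
w, mu1, s1, mu2, s2 = em_two_normals(titers)
print(round(mu1, 2), round(mu2, 2))  # → 1.0 5.0
```

The fitted component means and weights are what get interpreted epidemiologically, e.g. as "naive" versus "recently infected" subgroups.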
International Nuclear Information System (INIS)
Girod, F.X.
2006-12-01
The structure of the nucleon, among the first fundamental problems in hadronic physics, is the subject of a renewed interest. The lightest baryonic state has historically been described in two complementary approaches: through elastic scattering, measuring form factors which reflect the spatial shape of charge distributions, and through deep inelastic scattering, providing access to parton distribution functions which encode the momentum content carried by the constituents. The recently developed formalism of Generalized Parton Distributions unifies those approaches and provides access to new information. The cleanest process sensitive to GPDs is the deeply virtual Compton scattering (DVCS) contributing to the ep → epγ reaction. This work deals with a dedicated experiment accomplished with the CLAS detector, complemented with two specific pieces of equipment: a lead tungstate calorimeter covering photon detection at small angles, and a superconducting solenoid actively shielding the electromagnetic background. The entire project is covered: from the upgrade of the experimental setup, through the update of the software, data taking and analysis, up to a first comparison of the beam spin asymmetry to model predictions. (author)
Age-related changes in abdominal fat distribution in Japanese adults in the general population.
Sugihara, Masako; Oka, Rie; Sakurai, Masaru; Nakamura, Koshi; Moriuchi, Tadashi; Miyamoto, Susumu; Takeda, Yoshiyu; Yagi, Kunimasa; Yamagishi, Masakazu
2011-01-01
Early studies have indicated that body fat shifts from peripheral stores to central stores with aging. The objective of this study was to investigate age-related changes in the abdominal fat distribution of Japanese men and women of the general population over a wide range of body mass indices (BMI). A total of 2,220 non-diabetic, apparently healthy Japanese adults (1,240 men and 980 women; age range 40-69 years) were included in the study sample. All subjects underwent a CT scan at the level of the umbilicus, and the areas of visceral adipose tissue (AT) and subcutaneous AT were quantified. When the subjects were stratified by BMI into 18.5-23.0 kg/m(2), 23.0-27.5 kg/m(2), and 27.5 kg/m(2) or higher, visceral AT was positively correlated with age in all of the BMI strata in both genders. With regard to abdominal fat distribution, women retained the subcutaneous-dominant type of fat distribution up to 70 years.
TU-C-17A-01: A Data-Based Development for Practical Pareto Optimality Assessment and Identification
International Nuclear Information System (INIS)
Ruan, D; Qi, S; DeMarco, J; Kupelian, P; Low, D
2014-01-01
Purpose: To develop an efficient Pareto optimality assessment scheme to support plan comparison and practical determination of best-achievable treatment plan goals. Methods: Pareto efficiency reflects the tradeoffs among competing target coverage and normal tissue sparing in multi-criterion optimization (MCO) based treatment planning. Assessing and understanding Pareto optimality provides insightful guidance for future planning. However, current MCO-driven Pareto estimation makes relaxed assumptions about the Pareto structure and insufficiently accounts for practical limitations in beam complexity, leading to performance upper bounds that may be unachievable. This work proposed an alternative data-driven approach that implicitly incorporates the practical limitations, and identifies the Pareto frontier subset by eliminating dominated plans incrementally using the Edgeworth Pareto hull (EPH). The exactness of this elimination process also permits the development of a hierarchical procedure for speedup when the plan cohort size is large, by partitioning the cohort and performing elimination in each subset before a final aggregated elimination. The developed algorithm was first tested in 2D and 3D, where accuracy can be reliably assessed. As a specific application, the algorithm was applied to compare systematic plan quality for the lower head-and-neck among four competing treatment modalities. Results: The algorithm agrees exactly with brute-force pairwise comparison and visual inspection in low dimensions. The hierarchical algorithm shows a sqrt(k)-fold speedup, where k is the number of data points in the plan cohort, demonstrating good efficiency enhancement for heavy testing tasks. Application to plan performance comparison showed superiority of tomotherapy plans for the lower head-and-neck, and revealed a potential nonconvex Pareto frontier structure. Conclusion: An accurate and efficient scheme to identify the Pareto frontier from a plan cohort has been developed.
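The incremental elimination and its partition-based speedup can be sketched as follows, with simple pairwise dominance standing in for the EPH construction; criteria are minimised and the chunk size is illustrative.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every criterion and strictly
    better in at least one (all criteria minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Eliminate dominated plans by pairwise comparison."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q != p)]

def hierarchical_front(plans, chunk=64):
    """Partition the cohort, eliminate within each chunk, then run a final
    aggregated elimination -- the speedup device described above."""
    survivors = []
    for i in range(0, len(plans), chunk):
        survivors.extend(pareto_front(plans[i:i + chunk]))
    return pareto_front(survivors)
```

The two functions return the same frontier; the hierarchical version only reduces the number of pairwise comparisons when most plans are dominated within their own chunk.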
TU-C-17A-01: A Data-Based Development for Practical Pareto Optimality Assessment and Identification
Energy Technology Data Exchange (ETDEWEB)
Ruan, D; Qi, S; DeMarco, J; Kupelian, P; Low, D [UCLA Department of Radiation Oncology, Los Angeles, CA (United States)
2014-06-15
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2015-11-01
In economics and the social sciences, inequality measures such as the Gini index and the Pietra index are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income. The most widely known of these models are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to US family total money income in 2009, 2011 and 2013, and their performance relative to the generalized beta of the second kind family is compared.
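For reference, the ordinary Gini index that the generalized version extends can be computed from a sample with the sorted-rank formula; the income figures below are illustrative.

```python
def gini(incomes):
    """Sample Gini index via the sorted-rank formula:
    G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n,
    where x_(i) is the i-th smallest value."""
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * sum(xs)) - (n + 1.0) / n

print(gini([5, 5, 5, 5]))                                 # → 0.0 (perfect equality)
print(round(gini([20_000, 30_000, 50_000, 100_000]), 3))  # → 0.325
```

A value of 0 means perfect equality and values approaching 1 mean extreme concentration, which is the quantity the maximum-entropy construction above takes as a constraint.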
Directory of Open Access Journals (Sweden)
Carlos Pozo
Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study
Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano
2012-01-01
Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes the
DEFF Research Database (Denmark)
Barmby, Tim; Smith, Nina
1996-01-01
This paper analyses the labour supply behaviour of households in Denmark and Britain. It employs models in which the preferences of individuals within the household are explicitly represented. The households are then assumed to decide on their labour supply in a Pareto-Optimal fashion. Describing...
Spectral-Efficiency - Illumination Pareto Front for Energy Harvesting Enabled VLC System
Abdelhady, Amr Mohamed Abdelaziz
2017-12-13
The continuous improvement in optical energy harvesting devices motivates visible light communication (VLC) system developers to utilize such freely available energy sources. An outdoor VLC system is considered where an optical base station sends data to multiple users that are capable of harvesting the optical energy. The proposed VLC system serves multiple users using time division multiple access (TDMA) with unequal time and power allocation, which are allocated to improve the system performance. The adopted optical system provides users with illumination and data communication services. The outdoor optical design objective is to maximize the illumination, while the communication design objective is to maximize the spectral efficiency (SE). The design objectives are shown to be conflicting; therefore, a multiobjective optimization problem is formulated to obtain the Pareto front performance curve for the proposed system. To this end, the marginal optimization problems are solved first using low complexity algorithms. Then, based on the proposed algorithms, a low complexity algorithm is developed to obtain an inner bound of the Pareto front for the illumination-SE tradeoff. The inner bound for the Pareto front is shown to be close to the optimal Pareto frontier via several simulation scenarios for different system parameters.
Approximating the Pareto set of multiobjective linear programs via robust optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference from existing techniques is that we optimize a
Reddy, P.V.; Engwerda, J.C.
2011-01-01
In this article we derive necessary and sufficient conditions for the existence of Pareto optimal solutions for infinite horizon cooperative differential games. We consider games defined by non-autonomous and discounted autonomous systems. The obtained results are used to analyze the regular
Kyroudi, Archonteia; Petersson, Kristoffer; Ghandour, Sarah; Pachoud, Marc; Matzinger, Oscar; Ozsahin, Mahmut; Bourhis, Jean; Bochud, François; Moeckli, Raphaël
2016-08-01
Multi-criteria optimization provides decision makers with a range of clinical choices through Pareto plans that can be explored during real time navigation and then converted into deliverable plans. Our study shows that dosimetric differences can arise between the two steps, which could compromise the clinical choices made during navigation.
Huang, Hui; Ning, Jixian
2017-01-01
Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ-paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solutions of set optimization problems.
Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives cannot be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust
Searching for the Pareto frontier in multi-objective protein design.
Nanda, Vikas; Belure, Sandeep V; Shir, Ofer M
2017-08-01
The goal of protein engineering and design is to identify sequences that adopt three-dimensional structures of desired function. Often, this is treated as a single-objective optimization problem, identifying the sequence-structure solution with the lowest computed free energy of folding. However, many design problems are multi-state, multi-specificity, or otherwise require concurrent optimization of multiple objectives. There may be tradeoffs among objectives, where improving one feature requires compromising another. The challenge lies in determining solutions that are part of the Pareto optimal set: designs where no further improvement can be achieved in any of the objectives without degrading one of the others. Pareto optimality problems are found in all areas of study, from economics to engineering to biology, and computational methods have been developed specifically to identify the Pareto frontier. We review progress in multi-objective protein design, the development of Pareto optimization methods, and present a specific case study using multi-objective optimization methods to model the tradeoff between three parameters (stability, specificity, and complexity) of a set of interacting synthetic collagen peptides.
Fernández Caballero, Juan Carlos; Martínez, Francisco José; Hervás, César; Gutiérrez, Pedro Antonio
2010-05-01
This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate and a high classification rate for each class. This last objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm. We consider a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (the extremes of the Pareto front). These methodologies are applied to solve 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.
Karanikas, Nektarios
2016-01-01
Although reengineering is strategically advantageous for organisations in order to remain functional and sustainable, safety must remain a priority and respective efforts need to be maintained. This paper suggests the combination of soft system methodology (SSM) and Pareto analysis on the scope of
International Nuclear Information System (INIS)
Rausch, Sebastian; Metcalf, Gilbert E.; Reilly, John M.
2011-01-01
Many policies to limit greenhouse gas emissions have at their core efforts to put a price on carbon emissions. Carbon pricing impacts households both by raising the cost of carbon intensive products and by changing factor prices. A complete analysis requires taking both effects into account. The impact of carbon pricing is determined by heterogeneity in household spending patterns across income groups as well as heterogeneity in factor income patterns across income groups. It is also affected by precise formulation of the policy (how is the revenue from carbon pricing distributed) as well as the treatment of other government policies (e.g. the treatment of transfer payments). What is often neglected in analyses of policy is the heterogeneity of impacts across households even within income or regional groups. In this paper, we incorporate 15,588 households from the U.S. Consumer Expenditure Survey data as individual agents in a comparative-static general equilibrium framework. These households are represented within the MIT USREP model, a detailed general equilibrium model of the U.S. economy. In particular, we categorize households by full household income (factor income as well as transfer income) and apply various measures of lifetime income to distinguish households that are temporarily low-income (e.g., retired households drawing down their financial assets) from permanently low-income households. We also provide detailed within-group distributional measures of burden impacts from various policy scenarios. - Highlights: → We develop a simulation model with 15,588 households to study the distributional impacts of carbon pricing in the US. → Sources side impacts have typically been ignored in the literature biasing studies towards finding carbon pricing to be regressive. → Our general equilibrium framework allows us to capture uses and sources side impacts from carbon pricing. → We find that variation in impacts within broad socioeconomic groups may
Barodka, S.; Krasovsky, A.; Shalamyansky, A.
2014-12-01
The height of the tropopause, which divides the stratosphere and the troposphere, is the result of two rival categories of processes: tropospheric vertical convection and the radiative heating of the stratosphere resulting from the ozone cycle. Hence, it is natural that tropospheric and stratospheric phenomena can affect each other through manifold processes of stratosphere-troposphere interaction. In the present study we focus our attention on the "top-down" side of the interaction: the impact of stratospheric ozone distribution on the features of tropospheric circulation and the associated weather patterns and regional climate conditions. We proceed from analyses of the observational data performed at the A.I. Voeikov Main Geophysical Observatory, which suggest a distinct correlation between stratospheric ozone distribution, synoptic formations and air-mass boundaries in the upper troposphere and the temperature field of the lower stratosphere [1]. Furthermore, we analyze local features of atmospheric general circulation and stratospheric ozone distribution from atmospheric reanalyses and general circulation model data, focusing our attention on the instantaneous positions of the subtropical and polar stationary atmospheric fronts, which define regional characteristics of the general circulation cells in the troposphere, separate global tropospheric air masses, and correspond to distinct meteorological regimes in the TOC field [2, 3]. We assume that by altering the tropopause height, stratospheric ozone-related processes can have an impact on the location of the stationary atmospheric fronts, thereby exerting influence on circulation processes in the troposphere and lower stratosphere. For midlatitudes, the tropopause height controls the position of the polar stationary front, which has a direct impact on the trajectory of motion of active vortices on synoptic tropospheric levels, thereby controlling weather patterns in that region and the regional climate. This
Trade-off bounds for the Pareto surface approximation in multi-criteria IMRT planning
International Nuclear Information System (INIS)
Serna, J I; Monz, M; Küfer, K H; Thieke, C
2009-01-01
One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included into the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, the so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects. The proposed scheme is applied to two artificial cases and one clinical case of a paraspinal tumor. For all cases, the quality of the Pareto surface approximation is measured with respect to the number of computed plans, and the range of values occurring in the approximation for different criteria is compared. Through enforcing trade-off bounds, the scheme disregards clinically irrelevant plans during the approximation. Thereby, the number of plans necessary to achieve a good approximation quality can be significantly reduced. Thus, trade-off bounds are an effective tool to focus the planning and to reduce computation time.
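The pruning idea described above can be sketched as a simple post-hoc filter: a plan is dropped when some other plan is never significantly worse on any criterion yet significantly better on at least one. The thresholds `eps` and `delta` and the array layout are illustrative assumptions; the paper integrates the bounds into the approximation scheme itself rather than filtering afterwards.

```python
import numpy as np

def tradeoff_filter(plans, eps, delta):
    """Drop plans for which another plan is never worse by more than eps
    in any criterion but better by more than delta in at least one.
    plans: (n_plans, n_criteria) array; every criterion is minimized."""
    keep = np.ones(len(plans), dtype=bool)
    for i, p in enumerate(plans):
        for j, q in enumerate(plans):
            if i == j or not keep[j]:
                continue
            diff = p - q                      # > 0 where q is better than p
            if diff.min() > -eps and diff.max() > delta:
                keep[i] = False               # q makes p clinically irrelevant
                break
    return keep
```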
Ranking of microRNA target prediction scores by Pareto front analysis.
Sahoo, Sudhakar; Albrecht, Andreas A
2010-12-01
Over the past ten years, a variety of microRNA target prediction methods have been developed, and many of these methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is restricted to selected mRNAs related to a specific disease or cell type. For experimental validation it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and - unlike standard thresholding methods - utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373 and prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score, with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNAs, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistently, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts. Copyright © 2010 Elsevier Ltd. All rights reserved.
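The recurrent Pareto-front ranking can be sketched as follows; the toy scores, the assumption that lower scores are better, and the scalar tie-break array (standing in for the STarMir score) are illustrative, not the paper's data.

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows (all columns minimized)."""
    n = len(scores)
    return [i for i in range(n)
            if not any(np.all(scores[j] <= scores[i]) and
                       np.any(scores[j] < scores[i])
                       for j in range(n) if j != i)]

def recurrent_ranking(scores, tiebreak):
    """Repeatedly pick the front element with the lowest tie-break score,
    remove it, and recompute the front on the remaining predictions."""
    scores = np.asarray(scores, dtype=float)
    remaining = list(range(len(scores)))
    order = []
    while remaining:
        front = [remaining[i] for i in pareto_front(scores[remaining])]
        best = min(front, key=lambda i: tiebreak[i])
        order.append(best)
        remaining.remove(best)
    return order
```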
Pareto navigation: algorithmic foundation of interactive multi-criteria IMRT planning.
Monz, M; Küfer, K H; Bortfeld, T R; Thieke, C
2008-02-21
Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection' and 'restriction'. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle -- a set constructed from pre-computed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions. Pareto navigation was implemented in the MIRA navigator software and allows real-time manipulation of the current plan and the set of considered plans. The changes are triggered by simple mouse operations on the so-called navigation star and lead to real-time updates of the navigation star and the dose visualizations. Since any Pareto-optimal plan in the plan bundle can be found with just a few navigation operations the MIRA navigator allows a fast and directed plan determination. Besides, the concept allows for a refinement of the plan bundle, thus offering a middle course between single plan computation and multi-criteria optimization. Pareto navigation offers so far unmatched real-time interactions, ease of use and plan variety, setting it apart from the multi-criteria IMRT planning methods proposed so far.
International Nuclear Information System (INIS)
Serov, I.V.; Hoogenboom, J.E.
1994-01-01
Physical quantities can be obtained by utilizing different informational sources. The available information is usually associated with systematic and statistical errors. If the informational sources are utilized simultaneously, it is possible to obtain posterior estimates of the quantities with better statistical properties than exhibited by any of the prior estimates. A general technique for the confluence of any number of possibly dependent informational sources can be developed. Insight into the nature of an informational source allows different types of data associated with that source to be improved. The formulas of the technique are presented and applied to the power distribution determination for the research reactor HOR of the Delft University of Technology, employing calculational and experimental data. (authors)
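For the special case of two independent, unbiased estimates of the same quantity, the confluence reduces to the familiar inverse-variance weighting; the general technique in the record also covers dependent sources, which this sketch does not.

```python
def combine(x1, var1, x2, var2):
    """Inverse-variance (precision-weighted) confluence of two independent,
    unbiased estimates of the same quantity.  Returns the posterior
    estimate and its variance, which is smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)
```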
mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons
Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris
2018-02-01
mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to alternate parameterizations of the same form via the reparameterize module.
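The TGGD at the heart of mrpy has a closed-form normalization in terms of the upper incomplete gamma function. The sketch below is not mrpy's API; it assumes (alpha + 1) / beta > 0, whereas MRP fits to halo mass functions typically need the generalized (possibly negative-argument) normalization that the package implements.

```python
import numpy as np
from scipy.special import gamma, gammaincc

def tggd_pdf(m, scale, alpha, beta, m_min):
    """pdf of the truncated generalized gamma distribution on [m_min, inf),
    f(m) proportional to (m/scale)^alpha * exp(-(m/scale)^beta), normalized
    with the upper incomplete gamma function; needs (alpha + 1)/beta > 0."""
    a = (alpha + 1.0) / beta
    x0 = (m_min / scale) ** beta
    # gammaincc is the regularized upper incomplete gamma; multiply by
    # gamma(a) to get the unregularized value in the normalization.
    norm = (scale / beta) * gammaincc(a, x0) * gamma(a)
    return (m / scale) ** alpha * np.exp(-(m / scale) ** beta) / norm
```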
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model that is linear in the location parameter, while the Kota Kinabalu and Sibu stations are better suited to a model with a trend in the logarithm of the scale parameter. The return level, the level of events (maximum temperature) expected to be exceeded once, on average, in a given number of years, is obtained.
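For the stationary case, the GEV fit and the return level can be sketched with scipy (note that scipy's genextreme uses shape c = -xi relative to the usual GEV convention); the non-stationary models with trends in the location or log-scale parameter require a custom likelihood, which this sketch omits.

```python
import numpy as np
from scipy.stats import genextreme

def gev_return_level(maxima, T):
    """Fit a stationary GEV to a series of annual maxima and return the
    T-year return level, i.e. the level exceeded once on average in T years."""
    c, loc, scale = genextreme.fit(maxima)
    return genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
```

For shape xi != 0 this reproduces the closed-form return level z_T = mu + (sigma/xi) * ((-log(1 - 1/T))^(-xi) - 1).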
Modeling extreme PM10 concentration in Malaysia using generalized extreme value distribution
Hasan, Husna; Mansor, Nadiah; Salleh, Nur Hanim Mohd
2015-05-01
Extreme PM10 concentrations from the Air Pollutant Index (API) at thirteen monitoring stations in Malaysia are modeled using the Generalized Extreme Value (GEV) distribution. The data are blocked into monthly selection periods. The Mann-Kendall (MK) test suggests a non-stationary model, so two models are considered for the stations with a trend. The likelihood ratio test is used to determine the best-fitting model, and the results show that only two stations favor the non-stationary model (Model 2) while the other eleven stations favor the stationary model (Model 1). The return level of PM10 concentration that is expected to be exceeded once within a selected period is obtained.
A distributed-memory hierarchical solver for general sparse linear systems
Energy Technology Data Exchange (ETDEWEB)
Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering
2017-12-20
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
Double Differential Cross Sections and Generalized Oscillator Strength Distributions of Ammonia
International Nuclear Information System (INIS)
Yamamoto, Karin; Nogami, Keisuke; Hino, Yuta; Sakai, Yasuhiro
2011-01-01
The absolute double differential cross section (DDCS), the generalized oscillator strength distribution (GOSD), and the ionization efficiency of ammonia (NH3) were investigated from the threshold to 40 eV under the conditions of 200 and 400 eV incident electron energies and 6 and 8 degree scattering angles, using electron energy-loss spectroscopy and electron-ion coincidence techniques. To determine the absolute values, we used a mixture of helium (He) and NH3 and normalized the measured relative DDCS spectrum by the differential cross section for the 2^1P excitation of He. Our results are in close agreement with previous dipole (e, e) spectroscopy, although the incident electron energy is lower. The ionization efficiency curve obtained from coincidence measurements indicated the existence of doubly excited states that cause neutral dissociation.
Bedlinskiy, I.; Kubarovsky, V.; Stoler, P.; Adhikari, K. P.; Akbar, Z.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Biselli, A. S.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Cao, T.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Ciullo, G.; Clark, L.; Colaneri, L.; Cole, P. L.; Contalbrigo, M.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dupre, R.; Alaoui, A. El; Fassi, L. El; Elouadrhiri, L.; Eugenio, P.; Fanchini, E.; Fedotov, G.; Fersch, R.; Filippi, A.; Fleming, J. A.; Forest, T. A.; Garçon, M.; Gevorgyan, N.; Ghandilyan, Y.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Gleason, C.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Hattawy, M.; Hicks, K.; Hughes, S. M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jenkins, D.; Jiang, H.; Jo, H. S.; Joo, K.; Joosten, S.; Keller, D.; Khachatryan, G.; Khachatryan, M.; Khandaker, M.; Kim, A.; Kim, W.; Klein, F. J.; Kuhn, S. E.; Kuleshov, S. V.; Lanza, L.; Lenisa, P.; Livingston, K.; MacGregor, I. J. D.; Markov, N.; McKinnon, B.; Meziani, Z. E.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Movsisyan, A.; Munoz Camacho, C.; Nadel-Turonski, P.; Net, L. A.; Ni, A.; Niccolai, S.; Niculescu, G.; Osipenko, M.; Ostrovidov, A. I.; Paolone, M.; Paremuzyan, R.; Park, K.; Pasyuk, E.; Peng, P.; Phelps, W.; Pisano, S.; Pogorelko, O.; Price, J. W.; Prok, Y.; Protopopescu, D.; Puckett, A. J. R.; Raue, B. A.; Ripani, M.; Rizzo, A.; Rosner, G.; Rossi, P.; Roy, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schumacher, R. A.; Sharabian, Y. G.; Skorodumina, Iu.; Smith, G. D.; Sokhan, D.; Sparveris, N.; Stepanyan, S.; Strakovsky, I. I.; Strauch, S.; Taiuti, M.; Tian, Ye; Torayev, B.; Turisini, M.; Ungaro, M.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Watts, D. P.; Wei, X.; Weinstein, L. B.; Wood, M. 
H.; Yurov, M.; Zachariou, N.; Zhang, J.; Zonta, I.; CLAS Collaboration
2017-03-01
The cross section of the exclusive η electroproduction reaction ep → e'p'η was measured at Jefferson Laboratory with a 5.75 GeV electron beam and the CLAS detector. Differential cross sections d^4σ/(dt dQ^2 dx_B dφ_η) and structure functions σ_U = σ_T + ε σ_L, σ_TT, and σ_LT, as functions of t, were obtained over a wide range of Q^2 and x_B. The η structure functions are compared with those previously measured for π0 at the same kinematics. At low t, both π0 and η are described reasonably well by generalized parton distributions (GPDs) in which chiral-odd transversity GPDs are dominant. The π0 and η data, when taken together, can facilitate the flavor decomposition of the transversity GPDs.
Jo, H. S.; Girod, F. X.; Avakian, H.; Burkert, V. D.; Garçon, M.; Guidal, M.; Kubarovsky, V.; Niccolai, S.; Stoler, P.; Adhikari, K. P.; Adikaram, D.; Amaryan, M. J.; Anderson, M. D.; Anefalos Pereira, S.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Biselli, A. S.; Boiarinov, S.; Briscoe, W. J.; Brooks, W. K.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Colaneri, L.; Cole, P. L.; Compton, N.; Contalbrigo, M.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dupre, R.; Alaoui, A. El; Fassi, L. El; Elouadrhiri, L.; Fedotov, G.; Fegan, S.; Filippi, A.; Fleming, J. A.; Garillon, B.; Gevorgyan, N.; Ghandilyan, Y.; Gilfoyle, G. P.; Giovanetti, K. L.; Goetz, J. T.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guegan, B.; Guler, N.; Guo, L.; Hafidi, K.; Hakobyan, H.; Harrison, N.; Hattawy, M.; Hicks, K.; Hirlinger Saylor, N.; Ho, D.; Holtrop, M.; Hughes, S. M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Jenkins, D.; Joo, K.; Joosten, S.; Keller, D.; Khachatryan, G.; Khandaker, M.; Kim, A.; Kim, W.; Klein, A.; Klein, F. J.; Kuhn, S. E.; Kuleshov, S. V.; Lenisa, P.; Livingston, K.; Lu, H. Y.; MacGregor, I. J. D.; McKinnon, B.; Meziani, Z. E.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moutarde, H.; Movsisyan, A.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Net, L. A.; Niculescu, G.; Osipenko, M.; Ostrovidov, A. I.; Paolone, M.; Park, K.; Pasyuk, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Price, J. W.; Procureur, S.; Prok, Y.; Puckett, A. J. R.; Raue, B. A.; Ripani, M.; Rizzo, A.; Rosner, G.; Rossi, P.; Roy, P.; Sabatié, F.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Simonyan, A.; Skorodumina, Iu.; Smith, G. D.; Sokhan, D.; Sparveris, N.; Stepanyan, S.; Strakovsky, I. I.; Strauch, S.; Sytnik, V.; Tian, Ye; Tkachenko, S.; Ungaro, M.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Watts, D. P.; Wei, X.; Weinstein, L. B.; Wood, M. 
H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.; CLAS Collaboration
2015-11-01
Unpolarized and beam-polarized fourfold cross sections (d^4σ/(dQ^2 dx_B dt dφ)) for the ep → e'p'γ reaction were measured using the CLAS detector and the 5.75-GeV polarized electron beam of the Jefferson Lab accelerator, for 110 (Q^2, x_B, t) bins over the widest phase space ever explored in the valence-quark region. Several models of generalized parton distributions (GPDs) describe the data well at most of our kinematics. This increases our confidence that we understand the GPD H, expected to be the dominant contributor to these observables. Through a leading-twist extraction of Compton form factors, these results support the model predictions of a larger nucleon size at lower quark-momentum fraction x_B.
Alvarez-Martinez, R.; Martinez-Mekler, G.; Cocho, G.
2011-01-01
The behavior of rank-ordered distributions of phenomena present in a variety of fields such as biology, sociology, linguistics, finance and geophysics has been a matter of intense research. Often power laws have been encountered; however, their validity tends to hold mainly for an intermediate range of rank values. In a recent publication (Martínez-Mekler et al., 2009 [7]), a generalization of the functional form of the beta distribution has been shown to give excellent fits for many systems of very diverse nature, valid for the whole range of rank values, regardless of whether or not a power law behavior has been previously suggested. Here we give some insight on the significance of the two free parameters which appear as exponents in the functional form, by looking into discrete probabilistic branching processes with conflicting dynamics. We analyze a variety of realizations of these so-called expansion-modification models first introduced by Wentian Li (1989) [10]. We focus our attention on an order-disorder transition we encounter as we vary the modification probability p. We characterize this transition by means of the fitting parameters. Our numerical studies show that one of the fitting exponents is related to the presence of long-range correlations exhibited by power spectrum scale invariance, while the other registers the effect of disordering elements leading to a breakdown of these properties. In the absence of long-range correlations, this parameter is sensitive to the occurrence of unlikely events. We also introduce an approximate calculation scheme that relates this dynamics to multinomial multiplicative processes. A better understanding through these models of the meaning of the generalized beta-fitting exponents may contribute to their potential for identifying and characterizing universality classes.
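The two-exponent generalized beta of the rank-ordered distribution, f(r) = A (N+1-r)^b / r^a, is log-linear in its parameters, so a fit can be sketched with ordinary least squares; the data below are synthetic.

```python
import numpy as np

def fit_rank_beta(values):
    """Fit f(r) = A * (N + 1 - r)^b / r^a to rank-ordered data
    (rank 1 = largest value) by linear least squares on log f(r)."""
    N = len(values)
    r = np.arange(1, N + 1)
    X = np.column_stack([np.ones(N), -np.log(r), np.log(N + 1.0 - r)])
    coef, *_ = np.linalg.lstsq(X, np.log(values), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]   # A, a, b
```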
Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S
2017-11-20
Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
Coherent deeply virtual Compton scattering off 3He and neutron generalized parton distributions
Directory of Open Access Journals (Sweden)
Rinaldi Matteo
2014-06-01
It has recently been proposed to study coherent deeply virtual Compton scattering (DVCS) off 3He nuclei to access neutron generalized parton distributions (GPDs). In particular, it has been shown that, in Impulse Approximation (IA) and at low momentum transfer, the sum of the quark helicity conserving GPDs of 3He, H and E, is dominated by the neutron contribution. This peculiar result makes the 3He target very promising for accessing the neutron information. We present here the IA calculation of the spin-dependent GPD H̃ of 3He. Also for this quantity the neutron contribution is found to be the dominant one at low momentum transfer. The known forward limit of the IA calculation of H̃, yielding the polarized parton distributions of 3He, is correctly recovered. The extraction of the neutron information could anyway be non-trivial, so a procedure able to take into account the nuclear effects encoded in the IA analysis is proposed. These calculations, essential for the evaluation of the coherent DVCS cross section asymmetries, which depend on the GPDs H, E and H̃, represent a crucial step for planning possible experiments at Jefferson Lab.
Lambert, Amaury
2011-07-01
We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate λ. Thus, the population size is a homogeneous, binary Crump-Mode-Jagers process (which is not necessarily a Markov process). We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate μ (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate θ. We are interested in the species abundance distribution, i.e., in the numbers, denoted I_n(k) in the immigration model and A_n(k) in the mutation model, of species represented by k individuals, k = 1, 2, . . . , n, when there are n individuals in the total population. In the immigration model, we prove that the numbers (I_t(k); k ≥ 1) of species represented by k individuals at time t are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal n, this results in species abundance distributions given by Ewens' sampling formula. In particular, I_n(k) converges as n → ∞ to a Poisson r.v. with mean γ/k, where γ := μ/λ. In the mutation model, as n → ∞, we obtain the almost sure convergence of n^{-1} A_n(k) to a nonrandom explicit constant. In the case of a critical, linear birth-death process, this constant is given by Fisher's log-series, namely n^{-1} A_n(k) converges to α^k/k, where α := λ/(λ + θ). In both models, the abundances of the most abundant species are briefly discussed.
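The immigration model's Ewens sampling formula can be simulated with Hoppe's urn: individual i founds a new species with probability γ/(γ+i), otherwise it copies the species of a uniformly chosen earlier individual. The sketch below checks the asymptotic Poisson mean γ/k only for singletons (k = 1); the parameter values are illustrative.

```python
import random
from collections import Counter

def species_abundance(n, gamma, rng):
    """One draw of the species abundance distribution under Hoppe's urn,
    matching Ewens' sampling formula for the immigration model.
    Returns a Counter mapping species size k to the number of species."""
    labels = []
    for i in range(n):
        if rng.random() < gamma / (gamma + i):
            labels.append(i)                         # founder of a new species
        else:
            labels.append(labels[rng.randrange(i)])  # copy an earlier type
    return Counter(Counter(labels).values())
```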
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform densities, separated or not from zero. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for mouse B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
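The linear chain trick mentioned at the end replaces an Erlang-distributed delay with a cascade of ordinary differential equations: the k-th chain variable equals the input convolved with an Erlang(k, k/tau) density of mean delay tau. A minimal sketch (the input signal and parameters are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def erlang_delayed(u, k, tau, t_end):
    """Solve dz_i/dt = (k/tau) * (z_{i-1} - z_i) with z_0 = u(t), z_i(0) = 0.
    z_k(t) is u convolved with an Erlang(k, rate k/tau) delay kernel."""
    rate = k / tau
    def rhs(t, z):
        dz = np.empty(k)
        dz[0] = rate * (u(t) - z[0])
        dz[1:] = rate * (z[:-1] - z[1:])
        return dz
    return solve_ivp(rhs, (0.0, t_end), np.zeros(k), dense_output=True)
```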
"From Plato to Pareto": The Western Civilization Course Reconsidered.
Mullaney, Marie Marmo
1986-01-01
Discusses the importance of historical study within general education. Reviews the rise and fall of the Western Civilization course as the core of general education in the humanities. Suggests ways a revised version of this course can be restored to a central place in the curriculum. (AYC)
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
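For a two-objective convex front, the quality of a discrete representation can be measured as the worst-case gap between the true front and the piecewise-linear surface spanned by convex combinations of adjacent points; the vertical-distance proxy and the 1/x front below are illustrative simplifications of the paper's quality measure.

```python
import numpy as np

def coverage_error(front_pts, rep_pts):
    """Max vertical gap between a dense sampling of the true front and the
    piecewise-linear representation built on rep_pts (both (n, 2) arrays)."""
    rep = rep_pts[np.argsort(rep_pts[:, 0])]
    interp = np.interp(front_pts[:, 0], rep[:, 0], rep[:, 1])
    return np.max(np.abs(interp - front_pts[:, 1]))

# Convex Pareto front y = 1/x sampled densely on [1, 4]
x = np.linspace(1.0, 4.0, 400)
front = np.column_stack([x, 1.0 / x])
err5 = coverage_error(front, front[np.linspace(0, 399, 5).astype(int)])
err9 = coverage_error(front, front[np.linspace(0, 399, 9).astype(int)])
```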
Directory of Open Access Journals (Sweden)
Pelumi E. Oguntunde
2017-01-01
Developing new compound distributions that are more flexible than existing distributions has become a trend in distribution theory. In the present study, the Lomax distribution was extended using the Gompertz family of distributions, the resulting densities and statistical properties were carefully derived, and the method of maximum likelihood estimation was proposed for estimating the model parameters. A simulation study to assess the performance of the parameters of the Gompertz Lomax distribution was provided, and an application to real-life data was provided to assess the potential of the newly derived distribution. The analysis indicates that the Gompertz Lomax distribution performed better than the Beta Lomax, Weibull Lomax, and Kumaraswamy Lomax distributions.
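One common form of the Gompertz-G family has cdf F(x) = 1 - exp{(theta/gam) [1 - (1 - G(x))^(-gam)]}; with a Lomax baseline G(x) = 1 - (1 + x/lam)^(-alpha) this gives a closed-form density. The parameterization below is an assumption (conventions differ across papers), verified only by checking that the density integrates to one.

```python
import numpy as np

def gompertz_lomax_pdf(x, theta, gam, alpha, lam):
    """Density of the Gompertz-G family with a Lomax baseline, under the
    assumed cdf F(x) = 1 - exp((theta/gam) * (1 - (1 - G(x))**(-gam)))."""
    sbar = (1.0 + x / lam) ** (-alpha)                      # Lomax survival 1 - G
    g = (alpha / lam) * (1.0 + x / lam) ** (-alpha - 1.0)   # Lomax density
    u = (theta / gam) * (1.0 - sbar ** (-gam))
    return theta * g * sbar ** (-gam - 1.0) * np.exp(u)
```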
International Nuclear Information System (INIS)
Shojaeefard, Mohammad Hasan; Behnagh, Reza Abdi; Akbari, Mostafa; Givi, Mohammad Kazem Besharati; Farhani, Foad
2013-01-01
Highlights: ► Defect-free friction stir welds have been produced for AA5083-O/AA7075-O. ► Back-propagation was sufficient for predicting hardness and tensile strength. ► A hybrid multi-objective algorithm is proposed to deal with this MOP. ► Multi-objective particle swarm optimization was used to find the Pareto solutions. ► TOPSIS is used to rank the given alternatives of the Pareto solutions. -- Abstract: Friction Stir Welding (FSW) has been successfully used to weld similar and dissimilar cast and wrought aluminium alloys, especially aircraft aluminium alloys that generally exhibit low weldability with the traditional fusion welding process. This paper focuses on the microstructural and mechanical properties of the Friction Stir Welding (FSW) of AA7075-O to AA5083-O aluminium alloys. Weld microstructures, hardness, and tensile properties were evaluated in the as-welded condition. Tensile tests indicated that the mechanical properties of the joint were better than those of the base metals. An Artificial Neural Network (ANN) model was developed to simulate the correlation between the Friction Stir Welding parameters and the mechanical properties. Performance of the ANN model was excellent, and the model was employed to predict the ultimate tensile strength and hardness of the AA7075–AA5083 butt joint as functions of weld and rotational speeds. Multi-objective particle swarm optimization was used to obtain the Pareto-optimal set. Finally, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was applied to determine the best compromise solution.
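The final TOPSIS ranking step can be sketched as follows; the alternative values (tensile strength, hardness) and the equal weights are hypothetical, not the paper's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria;
    benefit[j] is True if criterion j should be maximized."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    best = [max(v[i][j] for i in range(m)) if benefit[j]
            else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical Pareto-optimal joints: (tensile strength in MPa, hardness in HV)
alternatives = [[290.0, 85.0], [310.0, 80.0], [300.0, 83.0]]
scores = topsis(alternatives, weights=[0.5, 0.5], benefit=[True, True])
```

The alternative with the highest closeness score is the best compromise; the weights encode the decision maker's preference between the conflicting objectives.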
International Nuclear Information System (INIS)
Quinn, K.G.
1992-01-01
The application of benefit-cost analysis to environmental problems in general, and to global warming as demonstrated by Kosobud in particular, is a very useful tool. Depending upon the limitations of the relevant data available, benefit-cost analysis can offer information to society about how to improve its condition. However, beyond the criticism of its estimate of the Pareto optimal point, benefit-cost analysis suffers from a fundamental weakness: it cannot speak to the distribution of the net benefits of implementation of an international greenhouse policy. Within an individual country, debate on a particular policy intervention can effectively separate the issues of achieving a potential Pareto optimum and distributing the benefits necessary to actually accomplish Pareto optimality. This situation occurs because (theoretically, anyway) these decisions are made in the presence of a binding enforcement regime that can redistribute benefits as seen fit. A policy can then be introduced in the manner that achieves the best overall net benefits, and the allocation of these benefits can be treated as a stand-alone problem.
Gauy, Henrique Matheus; Ramos-Caro, Javier
2018-03-01
By considering the Einstein-Vlasov system for static spherically symmetric distributions of matter, we show that configurations with constant anisotropy parameter β, leading to asymptotically flat spacetimes, necessarily have a distribution function (DF) of the form F = l^(-2β) ξ(ɛ), where ɛ = E/m and l = L/m are the relativistic energy and angular momentum per unit rest mass, respectively. We exploit this result to obtain DFs for the general relativistic extension of the hypervirial family introduced by Nguyen and Lingam [Mon. Not. R. Astron. Soc. 436, 2014 (2013), 10.1093/mnras/stt1719], whose Newtonian potential is given by ϕ(r) = -ϕo/[1 + (r/a)^n]^(1/n) (a and ϕo are positive free parameters, n = 1, 2, …). Such DFs can be written in the form Fn = l^(n-2) ξn(ɛ). For odd n, we find that ξn is a polynomial of order 2n+1 in ɛ, as in the case of the Hernquist model (n = 1), for which F1 ∝ l^(-1)(2ɛ - 1)(ɛ - 1)^2. For even n, we can write ξn in terms of incomplete beta functions (the Plummer model, n = 2, is an example). Since we demand that F ≥ 0 throughout phase space, the particular form of each ξn leads to restrictions on the values of ϕo. For example, for the Hernquist model we find that 0 ≤ ϕo ≤ 2/3, i.e., an upper bounding value less than the one obtained by Nguyen and Lingam (0 ≤ ϕo ≤ 1), based on energy conditions.
Directory of Open Access Journals (Sweden)
Tara Zhafira
2017-03-01
Background: Appendicitis is a medical emergency and a common cause of emergency surgeries worldwide. Its frequency varies with many factors, including age and sex. Histopathologic examination is the gold standard for diagnosis, and complications such as gangrene formation and perforation lead to high mortality and morbidity in almost all age groups. This study was conducted to describe the distribution pattern of appendicitis according to age, sex, and histopathologic type. Methods: This cross-sectional study was carried out in the Department of Pathology Anatomy, Dr. Hasan Sadikin General Hospital, Bandung, Indonesia, from August–October 2013. Secondary data were obtained from medical records dated January 1st to December 31st, 2012. A total of 503 out of 516 cases were included for review. Age, sex, and histopathologic type from medical records were then evaluated. Any specific case and perforation were also noted. Results: Data showed the highest prevalence of appendicitis occurred in the 10–19 age group (28.4%) and in the female group (52.3%). Acute appendicitis was more common than chronic appendicitis in both sexes and all age groups. The perforation rate was high (41.4%), and perforation was more prevalent in males (54.9%) and in the 0–9 age group (65.7%). Conclusions: Appendicitis, both acute and chronic, is most distributed in the second decade, and is slightly more prevalent in females. Acute cases are more common than chronic ones. The perforation rate is significant and peaks in the first decade and in males. [AMJ.2017;4(1):36–41]
Multivariate Pareto Minification Processes | Umar | Journal of the ...
African Journals Online (AJOL)
Autoregressive (AR) and autoregressive moving average (ARMA) processes with multivariate exponential (ME) distribution are presented and discussed. The theory of positive dependence is used to show that in many cases, multivariate exponential autoregressive (MEAR) and multivariate autoregressive moving average ...
Pareto Principle in Datamining: an Above-Average Fencing Algorithm
Directory of Open Access Journals (Sweden)
K. Macek
2008-01-01
This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of asymmetry in the error distribution. The paper provides a novel algorithm for this datamining problem and compares it with clustering of above-average individuals.
Ding, Chunguang; Pan, Yajuan; Zhang, Aihua; Zhu, Chun; Liu, Deye; Xu, Guang; Zheng, Yuxin; Yan, Huifang
2015-12-01
To investigate the distribution of rubidium (Rb), cesium (Cs), beryllium (Be), strontium (Sr), and barium (Ba) in blood and urine in the general Chinese population. A total of 18 120 subjects aged 6~60 years were enrolled from 24 regions in 8 provinces in Eastern, Central, and Western China from 2009 to 2010, based on the method of cluster random sampling. A questionnaire survey was conducted to collect data on living environment and health status. Blood and urine samples were collected from these subjects, and the levels of Rb, Cs, Be, Sr, and Ba in these samples were determined by inductively coupled plasma mass spectrometry. The distribution of these elements in blood and urine in male and female subjects living in different regions was analyzed statistically. In the general Chinese population, the concentration of Be in the whole blood was below the detection limit (0.06 μg/L); the geometric mean (GM) of Ba in the whole blood was below the detection limit (0.45 μg/L), with a 95th percentile (P95) of 1.37 μg/L; the GMs (95% CI) of Rb, Cs, and Sr in the whole blood were 2 374 (2 357~2 392) μg/L, 2.01 (1.98~2.05) μg/L, and 23.5 (23.3~23.7) μg/L, respectively; in males and females, the GMs (95% CI) of blood Rb, Cs, and Sr were 2 506 (2 478~2 533) μg/L and 2 248 (2 227~2 270) μg/L, 1.88 (1.83~1.94) μg/L and 2.16 (2.11~2.20) μg/L, and 23.4 (23.1~23.7) μg/L and 23.6 (23.3~23.9) μg/L, respectively (P0.05, and P>0.05). In the general Chinese population, the GM of urine Be was below the detection limit (0.06 μg/L), while the GMs (95% CI) of urine Rb, Cs, Sr, and Ba were 854 (836~873) μg/L, 3.65 (3.56~3.74) μg/L, 39.5 (38.4~40.6) μg/L, and 1.10 (1.07~1.12) μg/L, respectively; in males and females, the GMs (95% CI) of urine Rb, Cs, Sr, and Ba were 876 (849~904) μg/L and 832 (807~858) μg/L, 3.83 (3.70~3.96) μg/L and 3.47 (3.35~3.60) μg/L, 42.5 (40.9~44.2) μg/L and 36.6 (35.1~38.0) μg/L, and 1.15 (1.12~1.19) μg/L and 1.04 (1.01~1.07) μg/L, respectively (all P< 0
Bedlinskiy, I; Kubarovsky, V; Niccolai, S; Stoler, P; Adhikari, K P; Aghasyan, M; Amaryan, M J; Anghinolfi, M; Avakian, H; Baghdasaryan, H; Ball, J; Baltzell, N A; Battaglieri, M; Bennett, R P; Biselli, A S; Bookwalter, C; Boiarinov, S; Briscoe, W J; Brooks, W K; Burkert, V D; Carman, D S; Celentano, A; Chandavar, S; Charles, G; Contalbrigo, M; Crede, V; D'Angelo, A; Daniel, A; Dashyan, N; De Vita, R; De Sanctis, E; Deur, A; Djalali, C; Doughty, D; Dupre, R; Egiyan, H; El Alaoui, A; El Fassi, L; Elouadrhiri, L; Eugenio, P; Fedotov, G; Fegan, S; Fleming, J A; Forest, T A; Fradi, A; Garçon, M; Gevorgyan, N; Giovanetti, K L; Girod, F X; Gohn, W; Gothe, R W; Graham, L; Griffioen, K A; Guegan, B; Guidal, M; Guo, L; Hafidi, K; Hakobyan, H; Hanretty, C; Heddle, D; Hicks, K; Holtrop, M; Ilieva, Y; Ireland, D G; Ishkhanov, B S; Isupov, E L; Jo, H S; Joo, K; Keller, D; Khandaker, M; Khetarpal, P; Kim, A; Kim, W; Klein, F J; Koirala, S; Kubarovsky, A; Kuhn, S E; Kuleshov, S V; Kvaltine, N D; Livingston, K; Lu, H Y; MacGregor, I J D; Mao, Y; Markov, N; Martinez, D; Mayer, M; McKinnon, B; Meyer, C A; Mineeva, T; Mirazita, M; Mokeev, V; Moutarde, H; Munevar, E; Munoz Camacho, C; Nadel-Turonski, P; Niculescu, G; Niculescu, I; Osipenko, M; Ostrovidov, A I; Pappalardo, L L; Paremuzyan, R; Park, K; Park, S; Pasyuk, E; Anefalos Pereira, S; Phelps, E; Pisano, S; Pogorelko, O; Pozdniakov, S; Price, J W; Procureur, S; Prok, Y; Protopopescu, D; Puckett, A J R; Raue, B A; Ricco, G; Rimal, D; Ripani, M; Rosner, G; Rossi, P; Sabatié, F; Saini, M S; Salgado, C; Saylor, N; Schott, D; Schumacher, R A; Seder, E; Seraydaryan, H; Sharabian, Y G; Smith, G D; Sober, D I; Sokhan, D; Stepanyan, S S; Stepanyan, S; Strauch, S; Taiuti, M; Tang, W; Taylor, C E; Tian, Ye; Tkachenko, S; Ungaro, M; Vineyard, M F; Vlassov, A; Voskanyan, H; Voutier, E; Walford, N K; Watts, D P; Weinstein, L B; Weygand, D P; Wood, M H; Zachariou, N; Zhang, J; Zhao, Z W; Zonta, I
2012-09-14
Exclusive π0 electroproduction at a beam energy of 5.75 GeV has been measured with the Jefferson Lab CLAS spectrometer. Differential cross sections were measured at more than 1800 kinematic values in Q2, xB, t, and ϕπ, in the Q2 range from 1.0 to 4.6 GeV2, -t up to 2 GeV2, and xB from 0.1 to 0.58. Structure functions σT+ϵσL, σTT, and σLT were extracted as functions of t for each of 17 combinations of Q2 and xB. The data were compared directly with two handbag-based calculations including both longitudinal and transversity generalized parton distributions (GPDs). Inclusion of only longitudinal GPDs very strongly underestimates σT+ϵσL and fails to account for σTT and σLT, while inclusion of transversity GPDs brings the calculations into substantially better agreement with the data. There is very strong sensitivity to the relative contributions of nucleon helicity-flip and helicity nonflip processes. The results confirm that exclusive π0 electroproduction offers direct experimental access to the transversity GPDs.
Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart
2010-01-01
Multi-objective optimization of the externalities of traffic is performed by solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.
A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...
Directory of Open Access Journals (Sweden)
Akbar A. Tabriz
2011-07-01
Concurrent engineering (CE) is one of the most widely known techniques for simultaneous planning of product and process design. In concurrent engineering, design processes are often complicated by multiple conflicting criteria and discrete sets of feasible alternatives. Thus, multi-criteria decision making (MCDM) techniques are integrated into CE to perform concurrent design. This paper proposes a design framework governed by MCDM techniques, with criteria in conflict in the sense of competing for common resources to achieve different performance objectives such as financial, functional, and environmental ones. The Pareto MCDM model is applied to polyethylene pipe concurrent design governed by four criteria to determine the best alternative, a Pareto-compromise design.
Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives
International Nuclear Information System (INIS)
Warmflash, Aryeh; Siggia, Eric D; Francois, Paul
2012-01-01
The computational evolution of gene networks functions like a forward genetic screen to generate, without preconceptions, all networks that can be assembled from a defined list of parts to implement a given function. Frequently networks are subject to multiple design criteria that cannot all be optimized simultaneously. To explore how these tradeoffs interact with evolution, we implement Pareto optimization in the context of gene network evolution. In response to a temporal pulse of a signal, we evolve networks whose output turns on slowly after the pulse begins, and shuts down rapidly when the pulse terminates. The best performing networks under our conditions do not fall into categories such as feed forward and negative feedback that also encode the input–output relation we used for selection. Pareto evolution can more efficiently search the space of networks than optimization based on a single ad hoc combination of the design criteria.
Directory of Open Access Journals (Sweden)
Kristoffer Petersson
2017-07-01
We present a clinical distance measure for Pareto front evaluation studies in radiotherapy, which we show strongly correlates (r = 0.74 and 0.90) with clinical plan quality evaluation. For five prostate cases, sub-optimal treatment plans located at a clinical distance value of >0.32 (0.28–0.35) from fronts of Pareto optimal plans were assessed to be of lower plan quality by our 12 observers (p < .05). In conclusion, the clinical distance measure can be used to determine whether the difference between a front and a given plan (or between different fronts) corresponds to a clinically significant difference in plan quality.
A Pareto Algorithm for Efficient De Novo Design of Multi-functional Molecules.
Daeyaert, Frits; Deem, Michael W
2017-01-01
We have introduced a Pareto sorting algorithm into Synopsis, a de novo design program that generates synthesizable molecules with desirable properties. We give a detailed description of the algorithm and illustrate its working in 2 different de novo design settings: the design of putative dual and selective FGFR and VEGFR inhibitors, and the successful design of organic structure determining agents (OSDAs) for the synthesis of zeolites. We show that the introduction of Pareto sorting not only enables the simultaneous optimization of multiple properties but also greatly improves the performance of the algorithm to generate molecules with hard-to-meet constraints. This in turn allows us to suggest approaches to address the problem of false positive hits in de novo structure based drug design by introducing structural and physicochemical constraints in the designed molecules, and by forcing essential interactions between these molecules and their target receptor. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
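The core of any Pareto sorting step is extracting the non-dominated set from a pool of candidates scored on several objectives. A minimal sketch, not the Synopsis implementation; the objective vectors are toy values, and both objectives are assumed to be minimized:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy objective vectors, e.g. (predicted activity score, synthesis difficulty)
points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(points)
```

Candidates on the front represent the best available trade-offs; ranking by dominance rather than by a single weighted score is what allows several properties to be optimized simultaneously.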
Concentration and size distribution of particles in abstracted groundwater.
van Beek, C G E M; de Zwart, A H; Balemans, M; Kooiman, J W; van Rosmalen, C; Timmer, H; Vandersluys, J; Stuyfzand, P J
2010-02-01
Particle number concentrations have been counted and particle size distributions calculated in groundwater derived from abstraction wells. Both concentration and size distribution are governed by the discharge rate: the higher this rate, the higher the concentration and the higher the proportion of larger particles. However, the particle concentration in groundwater derived from abstraction wells, with high groundwater flow velocities, is much lower than in groundwater from monitor wells, with minimal flow velocities. This inconsistency points to exhaustion of the particle supply in the aquifer around wells due to groundwater abstraction over many years. The particle size distribution can be described with the help of a power law or Pareto distribution. Comparing the measured particle size distribution with the Pareto distribution shows that particles with a diameter >7 µm are under-represented. As the particle size distribution is dependent on the flow velocity, so is the value of the "Pareto" slope β. (c) 2009 Elsevier Ltd. All rights reserved.
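The "Pareto" slope β of a size distribution with N(>d) ∝ d^(-β) can be estimated from counted particle diameters by a closed-form maximum likelihood (Hill-type) estimator. A sketch on synthetic diameters; the true slope and size cutoff are illustrative, not values from the study:

```python
import math
import random

def pareto_slope_mle(diams, d_min):
    """Hill-type MLE of the Pareto slope beta for a size distribution
    with N(>d) proportional to d^(-beta), restricted to d >= d_min."""
    xs = [d for d in diams if d >= d_min]
    return len(xs) / sum(math.log(d / d_min) for d in xs)

rng = random.Random(7)
beta_true, d_min = 2.5, 1.0  # hypothetical slope; diameters in micrometres
# Inverse-CDF sampling from the Pareto distribution: d = d_min * U^(-1/beta)
diams = [d_min * rng.random() ** (-1.0 / beta_true) for _ in range(20000)]
beta_hat = pareto_slope_mle(diams, d_min)
```

Comparing the fitted power law with the empirical counts in the largest size classes is what reveals the under-representation of coarse particles noted above.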
Directory of Open Access Journals (Sweden)
J. Blanchet
2015-12-01
SCHADEX method for extreme flood estimation. Regional evaluation scores are used in a split-sample framework to compare the MEWP distribution with more general heavy-tailed distributions, in this case the Multi Generalized Pareto Weather Pattern (MGPWP) distribution. The analysis shows the clear benefit obtained from seasonal and weather-pattern-based subsampling for extreme value estimation. The MEWP distribution is found to have an overall better performance than the MGPWP, which tends to overfit the data and lacks robustness. Finally, we take advantage of the split-sample framework to present evidence for an increase in extreme rainfall in the southwestern part of Norway during the period 1979–2009, relative to 1948–1978.
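For peaks-over-threshold series such as those underlying these analyses, the generalized Pareto distribution can be fitted to threshold excesses. A simple method-of-moments sketch, not the Bayesian weather-pattern fitting of the paper; the shape and scale values are illustrative, and the estimator assumes shape ξ < 1/2:

```python
import random

def gpd_moment_fit(excesses):
    """Method-of-moments fit of the generalized Pareto distribution
    (finite variance requires shape xi < 1/2):
    xi = (1 - m^2/s^2)/2, sigma = m*(1 - xi)."""
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((x - m) ** 2 for x in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / s2)
    sigma = m * (1.0 - xi)
    return xi, sigma

rng = random.Random(1)
xi_true, sigma_true = 0.1, 1.0  # illustrative shape and scale
# Inverse-CDF sampling: X = (sigma/xi) * ((1-U)^(-xi) - 1)
excesses = [(sigma_true / xi_true) * ((1.0 - rng.random()) ** (-xi_true) - 1.0)
            for _ in range(100000)]
xi_hat, sigma_hat = gpd_moment_fit(excesses)
```

In an operational setting maximum likelihood or Bayesian estimation is usually preferred; the moment fit is shown only because it makes the role of the shape parameter explicit.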
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.
Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric
2010-07-20
Accurate protein loop structure models are important to understand the functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model-ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length using a functional space composed of several carefully selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.
Optimal Reinsurance Design for Pareto Optimum: From the Perspective of Multiple Reinsurers
Directory of Open Access Journals (Sweden)
Xing Rong
2016-01-01
This paper investigates optimal reinsurance strategies for an insurer which cedes the insured risk to multiple reinsurers. Assuming that the insurer and every reinsurer apply coherent risk measures, we find necessary and sufficient conditions for the reinsurance market to achieve Pareto optimum; that is, every ceded-loss function and the retention function are in the form of "multiple layers reinsurance."
He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris
2012-03-01
In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability versus novelty, affinity versus specificity, activity versus immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not "dominated"; that is, no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, Protein Engineering Pareto FRontier (PEPFR), that hierarchically subdivides the objective space, using appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. Copyright © 2011 Wiley Periodicals, Inc.
From microscopic taxation and redistribution models to macroscopic income distributions
Bertotti, Maria Letizia; Modanese, Giovanni
2011-10-01
We present here a general framework, expressed by a system of nonlinear differential equations, suitable for modeling taxation and redistribution in a closed society. This framework allows one to describe the evolution of the income distribution over the population and to explain the emergence of collective features based on knowledge of the individual interactions. By making different choices of the framework parameters, we construct different models, whose long-time behavior is then investigated. Asymptotic stationary distributions are found, which enjoy properties similar to those observed in empirical distributions. In particular, they exhibit power-law tails of Pareto type, and their Lorenz curves and Gini indices are consistent with some real-world ones.
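The Gini index mentioned above can be computed directly from a finite income sample; a minimal sketch using the standard sorted-sample formula (the income lists are toy values):

```python
def gini(incomes):
    """Gini index of a sample: G = 2*sum(i*x_(i))/(n*sum(x)) - (n+1)/n,
    with the sample sorted ascending and ranks i = 1..n."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Perfect equality gives G = 0; concentration at the top raises G
g_equal = gini([1.0, 1.0, 1.0, 1.0])        # 0.0
g_skewed = gini([1.0, 1.0, 1.0, 1.0, 6.0])  # 0.4
```

Applying the same function to the stationary distributions produced by such a model is how the consistency with empirical Gini indices can be checked.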
A κ-generalized statistical mechanics approach to income analysis
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2009-01-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
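The building block of κ-generalized statistics is the κ-exponential, which reduces to the ordinary exponential as κ → 0 and produces a Pareto power-law tail for large arguments. A sketch; the shape, scale, and κ values in the survival function are illustrative, not fitted values from the paper:

```python
import math

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential: (sqrt(1 + k^2 x^2) + k x)^(1/k);
    it reduces to exp(x) in the limit kappa -> 0."""
    if kappa == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa ** 2 * x ** 2) + kappa * x) ** (1.0 / kappa)

def survival(x, alpha=2.0, beta=1.0, kappa=0.5):
    """Survival function of a kappa-generalized model,
    S(x) = exp_kappa(-(x/beta)^alpha); alpha, beta, kappa illustrative."""
    return exp_kappa(-((x / beta) ** alpha), kappa)
```

For small incomes the survival function behaves like a stretched exponential, while for large x it decays as a power law with exponent alpha/kappa, which is the Pareto regime the paper refers to.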
A κ-generalized statistical mechanics approach to income analysis
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah
2017-04-20
This research proposes various versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Precisely, the MCS algorithm is proposed by incorporating the roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard strength Pareto evolutionary algorithm (SPEA), forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined on Zitzler-Deb-Thiele's (ZDT's) test functions for MO optimization. Pareto optimum trade-offs are made to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms other comparable competitors in attaining high antenna directivity, small half-power beamwidth (HPBW), low average side lobe level (SLL) suppression, and/or significant predefined null mitigation, simultaneously.
Application of the Pareto principle to identify and address drug-therapy safety issues.
Müller, Fabian; Dormann, Harald; Pfistermeister, Barbara; Sonst, Anja; Patapovas, Andrius; Vogler, Renate; Hartmann, Nina; Plank-Kiegele, Bettina; Kirchner, Melanie; Bürkle, Thomas; Maas, Renke
2014-06-01
Adverse drug events (ADE) and medication errors (ME) are common causes of morbidity in patients presenting at emergency departments (ED). Recognition of ADE as being drug related and prevention of ME are key to enhancing pharmacotherapy safety in ED. We assessed the applicability of the Pareto principle (~80 % of effects result from 20 % of causes) to address locally relevant problems of drug therapy. In 752 cases consecutively admitted to the nontraumatic ED of a major regional hospital, ADE, ME, contributing drugs, preventability, and detection rates of ADE by ED staff were investigated. Symptoms, errors, and drugs were sorted by frequency in order to apply the Pareto principle. In total, 242 ADE were observed, and 148 (61.2 %) were assessed as preventable. ADE contributed to 110 inpatient hospitalizations. The ten most frequent symptoms were causally involved in 88 (80.0 %) inpatient hospitalizations. Only 45 (18.6 %) ADE were recognized as drug-related problems until discharge from the ED. A limited set of 33 drugs accounted for 184 (76.0 %) ADE; ME contributed to 57 ADE. Frequency-based listing of ADE, ME, and the drugs involved allowed identification of the most relevant problems and development of easy-to-implement safety measures, such as wall and pocket charts. The Pareto principle provides a method for identifying the locally most relevant ADE, ME, and involved drugs. This permits subsequent development of interventions to increase patient safety in the ED admission process that best suit local needs.
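The frequency-based 80/20 screening described in the study boils down to sorting causes by count and cutting at the chosen coverage share. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
from collections import Counter

def pareto_core(events, share=0.8):
    """Smallest leading set of items that together account for >= `share` of all events."""
    counts = Counter(events).most_common()  # sorted by frequency, descending
    total = sum(c for _, c in counts)
    core, covered = [], 0
    for item, c in counts:
        if covered >= share * total:
            break
        core.append(item)
        covered += c
    return core
```

Applied to a symptom or drug list, the returned core is the short list worth targeting with safety measures first.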
Directory of Open Access Journals (Sweden)
Ajibade Oluwaseyi Ayodele
2016-01-01
Full Text Available In this second part of a discussion on tapped density optimisation for four agricultural wastes (particles of coconut, periwinkle, palm kernel and egg shells), a performance analysis is made on a comparative basis. This paper pioneers a study direction in which optimisation of process variables is pursued using the Taguchi method integrated with the Pareto 80-20 rule. Negative percentage improvements resulted when the optimal tapped density was compared with the average tapped density. However, the performance analysis between the optimal tapped density and the peak tapped density values yielded positive percentage improvements for the four filler particles. The performance analysis results validate the effectiveness of using the Taguchi method in improving the tapped density properties of the filler particles. The application of the Pareto 80-20 rule to the table of parameters and levels produced revised tables of parameters and levels, which helped to identify the factor-level position of each parameter that is economical to optimality. The Pareto 80-20 rule also produced revised S/N response tables, which were used to identify the S/N ratios relevant to optimality.
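S/N response tables such as those mentioned above are built from per-run signal-to-noise ratios; for a larger-the-better response like tapped density, the standard Taguchi form is SN = −10·log10((1/n)·Σ 1/y²). A sketch assuming that standard form (the paper may use a different quality characteristic):

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / v**2 for v in values) / n)
```

Higher SN is better; the factor level with the largest mean SN across runs is the one Taguchi analysis selects.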
Variation of rain intensity and drop size distribution with General Weather Patterns (GWL)
Ghada, Wael; Buras, Allan; Lüpke, Marvin; Menzel, Annette
2017-04-01
Short-duration rainfall extremes may cause flash floods in certain catchments (e.g. cities or fast-responding watersheds) and pose a great risk to affected communities. In order to predict their occurrence under future climate change scenarios, their link to atmospheric circulation patterns needs to be well understood. We used a comprehensive data set of meteorological data (temperature, rain gauge precipitation) and precipitation spectra measured by a disdrometer (OTT PARSIVEL) between October 2008 and June 2010 at Freising, southern Germany. For the 21 months of the study period, we integrated the disdrometer spectra over intervals of 10 minutes to correspond to the temporal resolution of the weather station data and discarded measurements with air temperatures below 0°C. Daily General Weather Patterns ("Großwetterlagen", GWL) were downloaded from the website of the German Meteorological Service. Out of the 29 GWL, the 14 for which we had at least 12 rain events during our study period were included in the analysis. For the definition of a rain event, we tested different lengths of minimum inter-event times and chose 30 min as a good compromise between the number and length of the resulting events; rain events started when more than 0.001 mm/h (the sensitivity of the disdrometer) was recorded. The length of the rain events ranged between 10 min and 28 h (median 130 min), with the maximum rain intensity recorded being 134 mm/h on 24-07-2009. Seasonal differences were identified for rain event average intensities and maximum intensities per event. The influence of GWL on rain properties such as rain intensity and drop size distribution per time step and per event was investigated based on the above-mentioned rain event definition. Pairwise Wilcoxon tests revealed that higher rain intensities and larger drops were associated with the GWL "Low over the British Isles" (TB), whereas low rain intensities and fewer drops per interval were associated with the GWL "High over Central Europe
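The rain-event definition above (a new event begins after a minimum dry spell, here 30 min) can be made concrete directly. A sketch, assuming timestamps in minutes for the intervals with non-zero rain (names illustrative):

```python
def split_events(times, min_gap=30):
    """Group sorted timestamps (minutes) into events separated by >= min_gap dry minutes."""
    events, current = [], []
    for t in times:
        if current and t - current[-1] >= min_gap:
            events.append(current)  # dry spell long enough: close the event
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events
```

Shortening `min_gap` yields more, shorter events; lengthening it merges bursts into fewer, longer events, which is exactly the trade-off the study tuned.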
Directory of Open Access Journals (Sweden)
Li Ran
2017-01-01
Full Text Available The optimal allocation of generalized power sources in a distribution network is investigated. A simple index of voltage stability is put forward. Considering the investment and operation benefit, the voltage stability, and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established. A multi-objective particle swarm optimization algorithm is proposed to solve the model. In order to improve the global search ability, the strategies of fast non-dominated sorting, elitism and crowding distance are adopted in this algorithm. Finally, the model and algorithm are tested on the IEEE 33-node system to find the best configuration of generalized power sources. The computed results show that with reasonable access of generalized power sources to the active distribution network, the investment benefit and the voltage stability of the system are improved, and the proposed algorithm has better global search capability.
Generalized parton distributions and rapidity gap survival in exclusive diffractive pp scattering
Energy Technology Data Exchange (ETDEWEB)
Leonid Frankfurt; Charles Hyde-Wright; Mark Strikman; Christian Weiss
2007-03-01
We propose a new approach to the problem of rapidity gap survival (RGS) in the production of high-mass systems (H = dijet, heavy quarkonium, Higgs boson) in double-gap exclusive diffractive pp scattering, pp-->p + (gap) + H + (gap) + p. It is based on the idea that hard and soft interactions proceed over widely different time- and distance scales and are thus approximately independent. The high-mass system is produced in a hard scattering process with exchange of two gluons between the protons. Its amplitude is calculable in terms of the gluon generalized parton distributions (GPDs) in the protons, which can be measured in J/ψ production in exclusive ep scattering. The hard scattering process is modified by soft spectator interactions, which we calculate in a model-independent way in terms of the pp elastic scattering amplitude. Contributions from inelastic intermediate states are suppressed. A simple geometric picture of the interplay of hard and soft interactions in diffraction is obtained. The onset of the black-disk limit in pp scattering at TeV energies strongly suppresses diffraction at small impact parameters and is the main factor in determining the RGS probability. Correlations between hard and soft interactions (e.g. due to scattering from the long-range pion field of the proton, or due to possible short-range transverse correlations between partons) further decrease the RGS probability. We also investigate the dependence of the diffractive cross section on the transverse momenta of the final-state protons (the "diffraction pattern"). By measuring this dependence one can perform detailed tests of the interplay of hard and soft interactions, and even extract information about the gluon GPD in the proton. Such studies appear to be feasible with the planned forward detectors at the LHC.
Generalized parton distributions and rapidity gap survival in exclusive diffractive pp scattering
International Nuclear Information System (INIS)
Leonid Frankfurt; Charles Hyde-Wright; Mark Strikman; Christian Weiss
2006-01-01
Financial Intermediation, Moral Hazard and Pareto Inferior Trade
DEFF Research Database (Denmark)
Olai Hansen, Bodil; Keiding, Hans
2004-01-01
We consider a simple model of international trade under uncertainty, where production takes time and is subject to uncertainty. The riskiness of production depends on the choices of the producers, not observable to the general public, and these choices are influenced by the availability and cost... the model, the market may not be able to supply credits to one of the countries. The introduction of financial intermediaries with the ability to control the debtors may change this situation in a direction which is welfare improving (in a suitable sense) by increasing expected output in the country with high... interest rates, while opening up for new problems of asymmetric information with respect to the monitoring activity of the banks. Keywords: Capital outflow, financial intermediaries, moral hazard. JEL classification: F36, D92, E44...
Directory of Open Access Journals (Sweden)
Isis Didier Lins
2018-03-01
Full Text Available The Generalized Renewal Process (GRP) is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from Tsallis' non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and the Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power-law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative to the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for parameter estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems, and the obtained results suggest that GRP plus q-distributions are promising techniques for the analysis of repairable systems.
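The q-Exponential that gives these models their power-law tail is a one-line generalization of the ordinary exponential. A sketch of the density in a common parameterization, f(t) = (2−q)·λ·exp_q(−λt) for 1 ≤ q < 2; the parameter names are illustrative and may differ from the paper's:

```python
import math

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if q == 1:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_exponential_pdf(t, lam, q):
    """q-exponential density (1 <= q < 2): heavy power-law tail for q > 1."""
    return (2.0 - q) * lam * exp_q(-lam * t, q)
```

For q > 1 the density decays like t^(−1/(q−1)) at large t, which is what lets it absorb extreme observations that a Weibull fit would treat as outliers.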
The randomly renewed general item and the randomly inspected item with exponential life distribution
International Nuclear Information System (INIS)
Schneeweiss, W.G.
1979-01-01
For a randomly renewed item the probability distributions of the time to failure and of the duration of down time and the expectations of these random variables are determined. Moreover, it is shown that the same theory applies to randomly checked items with exponential probability distribution of life such as electronic items. The case of periodic renewals is treated as an example. (orig.) [de
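The mean undetected down time of a periodically inspected item with exponential life is easy to estimate by simulation. A sketch, assuming failures at rate `lam` that are only discovered at the next inspection of period `T` (names illustrative; this is a check of the kind of quantity the paper derives, not its exact formulas):

```python
import random

def mean_downtime(lam, T, n=100_000, seed=1):
    """Monte Carlo mean of the undetected down time: the item fails at an
    Exp(lam)-distributed time and the failure is found at the next check k*T."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)
        k = int(x // T) + 1  # index of the first inspection after the failure
        total += k * T - x   # time the failure went unnoticed
    return total / n
```

For short inspection intervals (λT small) the estimate approaches T/2, the intuitive "fail halfway through an interval" value.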
Kistner, Emily O.; Muller, Keith E.
2004-01-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact…
Directory of Open Access Journals (Sweden)
J. L. Guardado
2014-01-01
Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.
Guardado, J L; Rivas-Davalos, F; Torres, J; Maximov, S; Melgoza, E
2014-01-01
International Nuclear Information System (INIS)
Qin, Hong; Chung, Moses; Davidson, Ronald C.
2009-01-01
In an uncoupled lattice, the Kapchinskij-Vladimirskij (KV) distribution function first analyzed in 1959 is the only known exact solution of the nonlinear Vlasov-Maxwell equations for high-intensity beams including self-fields in a self-consistent manner. The KV solution is generalized here to high-intensity beams in a coupled transverse lattice using the recently developed generalized Courant-Snyder invariant for coupled transverse dynamics. This solution projects to a rotating, pulsating elliptical beam in transverse configuration space, determined by the generalized matrix envelope equation.
Directory of Open Access Journals (Sweden)
Juan Carlos Osorio
2012-12-01
Full Text Available The scheduling problem is one of the most widely treated problems in the literature; however, it is a complex NP-hard problem. When, in addition, more than one objective is involved, it becomes one of the most complex problems in the field of operations research. A bi-objective model for job shop scheduling is therefore presented, which includes the makespan and the mean flow time. To solve the model, an approach combining the Simulated Annealing (SA) metaheuristic and the Pareto approach is used. The model is evaluated on three problems from the literature, of sizes 6x6, 10x5 and 10x10. The results of the model are compared with other metaheuristics, and the model is found to give good results on the three problems evaluated.
General specifications for the development of a USL/DBMS NASA/PC R and D distributed workstation
Dominick, Wayne D. (Editor); Chum, Frank Y.
1984-01-01
The general specifications for the development of a PC-based distributed workstation (PCDWS) for an information storage and retrieval systems environment are defined. This research proposes the development of a PCDWS prototype as part of the University of Southwestern Louisiana Data Base Management System (USL/DBMS) NASA/PC R and D project in the PC-based workstation environment.
Directory of Open Access Journals (Sweden)
Sanjay Kumar Singh
2011-06-01
Full Text Available In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and its reliability functions under the General Entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the Squared Error loss function and with the maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
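For the general entropy loss in its usual form, L(θ̂, θ) ∝ (θ̂/θ)^c − c·ln(θ̂/θ) − 1, the Bayes estimator is (E[θ^(−c)])^(−1/c), which is straightforward to evaluate from posterior draws. A hedged sketch under that standard form (the paper's exact parameterization may differ):

```python
def bayes_ge(posterior_samples, c):
    """Bayes estimate under general entropy loss: (E[theta^-c])^(-1/c), c != 0."""
    n = len(posterior_samples)
    m = sum(th ** (-c) for th in posterior_samples) / n  # posterior mean of theta^-c
    return m ** (-1.0 / c)
```

In this form, c = −1 recovers the posterior mean, i.e. the Squared Error loss estimator the comparison is made against; positive c penalizes overestimation more heavily.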
International Nuclear Information System (INIS)
Fukuda, Ikuo; Nakamura, Haruki
2010-01-01
Several molecular dynamics techniques applying the Tsallis generalized distribution are presented. We have developed a deterministic dynamics to generate an arbitrary smooth density function ρ. It creates a measure-preserving flow with respect to the measure ρdω and realizes the density ρ under the assumption of ergodicity. It can thus be used to investigate physical systems that obey such a distribution density. Using this technique, the Tsallis distribution density based on a full energy function form along with the Tsallis index q ≥ 1 can be created. From the fact that an effective support of the Tsallis distribution in the phase space is broad compared with that of the conventional Boltzmann-Gibbs (BG) distribution, and the fact that the corresponding energy-surface deformation does not change energy minimum points, the dynamics enhances the physical state sampling, in particular for a rugged energy surface spanned by a complicated system. Another feature of the Tsallis distribution is that it provides a greater degree of nonlinearity than the BG distribution in the deterministic dynamics equation, which is very useful for effectively attaining the ergodicity of the dynamical system constructed according to the scheme. Combining such methods with the reconstruction technique of the BG distribution, we can obtain information consistent with the BG ensemble and create the corresponding free energy surface. We demonstrate several sampling results obtained from systems typical for benchmark tests in MD and from biomolecular systems.
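The reconstruction of Boltzmann-Gibbs (BG) averages from a Tsallis-sampled trajectory amounts to importance reweighting. A minimal sketch, assuming sampled configuration energies and a Tsallis density ρ_q(E) ∝ [1 − (1−q)βE]^(1/(1−q)); normalization constants cancel once the weights are normalized (names illustrative):

```python
import math

def bg_weights(energies, beta, q):
    """Normalized reweighting factors to recover BG averages from samples
    drawn from a Tsallis density rho_q(E) ~ [1 - (1-q)*beta*E]^(1/(1-q))."""
    def rho_q(E):
        if q == 1:
            return math.exp(-beta * E)  # Tsallis reduces to BG at q = 1
        return max(1.0 - (1.0 - q) * beta * E, 0.0) ** (1.0 / (1.0 - q))
    raw = [math.exp(-beta * E) / rho_q(E) for E in energies]
    s = sum(raw)
    return [w / s for w in raw]
```

A BG ensemble average of any observable is then the weight-weighted mean of that observable along the Tsallis trajectory.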
Yin, Fancheng; Yu, Xiaoyan
2015-01-01
This paper is concerned with the existence of stationary distribution and extinction for multispecies stochastic Lotka-Volterra predator-prey system. The contributions of this paper are as follows. (a) By using Lyapunov methods, the sufficient conditions on existence of stationary distribution and extinction are established. (b) By using the space decomposition technique and the continuity of probability, weaker conditions on extinction of the system are obtained. Finally, a numer...
Distribution of cocaine on banknotes in general circulation in England and Wales.
Aitken, C G G; Wilson, A; Sleeman, R; Morgan, B E M; Huish, J
2017-01-01
A study of the quantities of cocaine on banknotes in general circulation was conducted to investigate regional variations across England and Wales. No meaningful support was found for the proposition that there is regional variation in the quantities of cocaine in banknotes in general circulation in England and Wales. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Lina Yang
2018-02-01
Full Text Available Land-use allocation is of great significance in urban development. This type of allocation is usually considered to be a complex multi-objective spatial optimization problem, whose optimized result is a set of Pareto-optimal solutions (the Pareto front) reflecting different tradeoffs in several objectives. However, obtaining a Pareto front is a challenging task, and the Pareto fronts obtained by state-of-the-art algorithms are still not sufficient. To achieve better Pareto solutions, taking the grid-representative land-use allocation problem with two objectives as an example, an artificial bee colony optimization algorithm for multi-objective land-use allocation (ABC-MOLA) is proposed. In this algorithm, the traditional ABC's search direction guiding scheme and solution maintaining process are modified. In addition, a knowledge-informed neighborhood search strategy, which utilizes the auxiliary knowledge of natural geography and spatial structures to facilitate the neighborhood spatial search around each solution, is developed to further improve the Pareto front's quality. A series of comparison experiments (a simulated experiment with a small data volume and a real-world data experiment for a large area) shows that all the Pareto fronts obtained by ABC-MOLA completely dominate the Pareto fronts of the other algorithms, which demonstrates ABC-MOLA's effectiveness in achieving Pareto fronts of high quality.
La narrazione dell’azione sociale: spunti dal Trattato di Vilfredo Pareto
Directory of Open Access Journals (Sweden)
Ilaria Riccioni
2017-08-01
Full Text Available Rereading the classics always involves a twofold operation: on the one hand, a return to reflections, rhythms and historical contexts that often seem outdated; on the other, the rediscovery of the origins of contemporary phenomena from points of view that reveal their deep interconnections, no longer visible at the stage of development at which we observe them today. This greater clarity is perhaps due to the fact that every phenomenon is more clearly identifiable in its dawning phase than in its later stages, where its primary characteristics tend to dissolve into the dominant features of the contemporary world, losing themselves in everyday practices that conceal their origin. If sociology is a process of gaining knowledge of the reality of phenomena, the central point of social science requires a distinction between those sciences that schematize reality into functional and working formal equations (the economic and normative systems) and the social sciences that deal with reality and its complexity, which as sciences must concern themselves not so much with what reality ought to be, but with what reality is, with how it presents itself and how it manifests the desiring, deep movements of collective life beyond the system that manages its functioning. The point Pareto seems to discern, with extreme lucidity, is the need to overturn the role of economic logic in social organization: from a science that dictates reality to a science that proposes a scheme for managing it. Economics attempts to dictate reality, but economics, from the modern Greek Oikòs, Oikòsgeneia (house and generation), the term used to define the family unit, is not in fact "reality", Pareto seems to tell us in several digressions, but rather the art and science of managing family and productive units. Reality remains in shadow and can only be "approached" by a science that records it, and possibly
Directory of Open Access Journals (Sweden)
Ana Milstein
1979-01-01
Full Text Available The vertical distribution of each developmental stage of Paracalanus crassirostris was studied at a shallow-water station at Ubatuba, SP, Brazil (23º30'S-45º07'W). Samples were collected monthly at the surface, at 2 m and near the bottom. Salinity, temperature, dissolved oxygen, tide height, light penetration and solar radiation were also recorded. Data were analysed by the general linear model. It showed that the number of individuals at any developmental stage is affected diversely by hour, depth, hour-depth interaction and environmental factors throughout the year, and that these effects are stronger in summer. All developmental stages were spread through the water column, showing no regular vertical migrations. On the other hand, the number of organisms caught at a particular hour seemed to depend more on the tide than on the animals' behaviour. The results of the present paper showed, as observed by some other authors, the lack of vertical migration of a coastal copepod which grazes on fine particles throughout its life. The vertical distribution of the different developmental stages of P. crassirostris was studied for one year (June 1976 - May 1977) at a shallow station (5 m) in Ubatuba. Samples were collected monthly, at three depths, every four hours, with a 9-l van Dorn bottle, and environmental data were recorded. The data were processed with the least-squares technique, in the form of a regression analysis of a linear model including covariates. The model was built a priori, considering the density of organisms per sample, environmental factors, differences between samples from different depths and hours, as well as interactions between hour and depth. For each stage of P. crassirostris, the model was run 9 times, with data from two months at a time, in order to obtain the variation of the responses over the year. The model results indicated that the number of indiv
Energy Technology Data Exchange (ETDEWEB)
Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B [University Medical Center Utrecht, Utrecht (Netherlands); Breedveld, S; Sharfo, A; Heijmen, B [Erasmus University Medical Center Rotterdam, Rotterdam (Netherlands)
2016-06-15
Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan
International Nuclear Information System (INIS)
Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B; Breedveld, S; Sharfo, A; Heijmen, B
2016-01-01
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which is of noteworthy importance for application in practice. In addition, the approach allows the user to bring human insight into the problem to examine evolved models and pick the best-performing programs out for further analysis.
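The moving-average pre-processing ingredient is the simplest part of the pipeline to make concrete. A sketch of a trailing-window filter (the window length is a modeling choice, not a value taken from the paper):

```python
def moving_average(series, window=3):
    """Trailing moving average; the first few points average over a shorter window."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

Smoothing the input series this way dampens the one-step lag that purely autoregressive data-driven predictors tend to exhibit on streamflow records.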
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
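The Pareto-optimality criterion behind the non-dominated sorting step can be illustrated with a minimal sketch for two objectives to be minimized (such as MRE and MCE); this is a generic illustration, not the authors' implementation:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

A genetic algorithm such as NSGA-II repeatedly applies this dominance test to rank candidate networks into fronts.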
Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1
Langenbrunner, B.; Neelin, J. D.
2017-09-01
Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
Zalazinsky, A. G.; Kryuchkov, D. I.; Nesterenko, A. V.; Titov, V. G.
2017-12-01
The results of an experimental study of the mechanical properties of pressed and sintered briquettes consisting of powders obtained from a high-strength VT-22 titanium alloy by plasma spraying, with additives of PTM-1 titanium powder obtained by the hydride-calcium method and of PV-N70Yu30 nickel-aluminum alloy powder, are presented. The task addressed is the choice of an optimal charge composition of the composite material that provides the required mechanical characteristics and cost of semi-finished products and items. Pareto optimal values for the composition of the composite material charge have been obtained.
Pareto law of the expenditure of a person in convenience stores
Mizuno, Takayuki; Toriyama, Masahiro; Terano, Takao; Takayasu, Misako
2008-06-01
We study the statistical laws of the expenditure of a person in convenience stores by analyzing around 100 million receipts. The density function of expenditure exhibits a fat tail that follows a power law. Using the Lorenz curve, the Gini coefficient is estimated to be 0.70; this implies that loyal customers contribute significantly to a store's sales. We observe the Pareto principle: the top 25% of customers account for 80% of the store's sales, and the top 2% account for 25%.
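A Lorenz-curve-based Gini estimate like the 0.70 reported above can be computed directly from per-customer expenditures; the sketch below uses the standard sorted-sample formula and is an illustration, not the authors' code:

```python
def gini(values):
    """Gini coefficient of a sample (0 = perfect equality,
    values near 1 = expenditure concentrated in few customers).

    Uses the sorted-sample formula
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    with x sorted ascending and i running from 1 to n."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * total) - (n + 1.0) / n
```

Applied to per-customer totals, a value around 0.70 would indicate the strong concentration the abstract describes.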
Inferring biological tasks using Pareto analysis of high-dimensional data.
Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri
2015-03-01
We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization
Directory of Open Access Journals (Sweden)
Na Tian
2015-01-01
A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. A comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order performs comparably to QPSO with sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
Finding the Pareto Optimal Equitable Allocation of Homogeneous Divisible Goods Among Three Players
Directory of Open Access Journals (Sweden)
Marco Dall'Aglio
2017-01-01
We consider the allocation of a finite number of homogeneous divisible items among three players. Under the assumption that each player assigns a positive value to every item, we develop a simple algorithm that returns a Pareto optimal and equitable allocation. This is based on the tight relationship between two geometric objects of fair division: the Individual Pieces Set (IPS) and the Radon-Nikodym Set (RNS). The algorithm can be considered as an extension of the Adjusted Winner procedure by Brams and Taylor to the three-player case, without the guarantee of envy-freeness.
DEFF Research Database (Denmark)
Chen, Shuheng; Hu, Weihao; Chen, Zhe
2014-01-01
Based on generalized chain-table storage structure (GCTSS), a novel power flow method is proposed, which can be used to solve the power flow of weakly meshed distribution networks with multiple distributed generators (DGs). GCTSS is designed based on chain-table technology and its target is to describe the topology of radial distribution networks with a clear logic and a small memory size. The strategies of compensating the equivalent currents of break-point branches and the reactive power outputs of PV-type DGs are presented on the basis of the superposition theorem. Their formulations are simplified to be the final multi-variable linear functions. Furthermore, an accelerating factor is applied to the outer-layer reactive power compensation for improving the convergence procedure. Finally, the proposed power flow method is implemented in VC++ 6.0, and numerical tests have been...
Directory of Open Access Journals (Sweden)
A. D. Kachan
2004-01-01
The paper proposes utilizing the discharged heat of gas-piston engines (GPE) or contact steam-gas plants (SGP) to heat gas at gas-distribution stations (GDS) of combined power plants with turbine and gas-expansion units. Calculations prove significant economic efficiency of the proposed variant in comparison with the application of ordinary gas-turbine units. Technical and economic calculation is used to determine the gas-piston engine or contact steam-gas plant power for specific operational conditions of gas-distribution stations and the utilization rate of discharged heat.
Distribution flow: a general process in the top layer of water repellent soils
Ritsema, C.J.; Dekker, L.W.
1995-01-01
Distribution flow is the process of water and solute flowing in a lateral direction over and through the very first millimetre or centimetre of the soil profile. A potassium bromide tracer was applied in two water-repellent sandy soils to follow the actual flow paths of water and solutes in the
International Nuclear Information System (INIS)
Xie, M.; Goh, T.N.; Tang, Y.
2004-01-01
The failure rate function and mean residual life function are two important characteristics in reliability analysis. Although many papers have studied distributions with bathtub-shaped failure rate and their properties, few have focused on the underlying associations between the mean residual life and failure rate function of these distributions, especially with respect to their change points. It is known that the change point for mean residual life can be much earlier than that of the failure rate function. In fact, the failure rate function should be flat for a long period of time for a distribution to be useful in practice. When the difference between the change points is large, the flat portion tends to be longer. This paper investigates the change points and focuses on the difference between them. The exponentiated Weibull, a modified Weibull, and an extended Weibull distribution, all with bathtub-shaped failure rate functions, will be used. Some other issues related to the flatness of the bathtub curve are discussed.
Directory of Open Access Journals (Sweden)
Joon-Ho Choi
2013-09-01
A distribution system was designed and operated by considering unidirectional power flow from a utility source to end-use loads. Large penetrations of distributed generation (DG) into the existing distribution system cause a variety of technical problems, such as frequent tap changing of the on-load tap changer (OLTC) transformer, local voltage rise, protection coordination, exceeding short-circuit capacity, and harmonic distortion. In view of voltage regulation, the intermittent fluctuation of the DG output power results in frequent tap changing operations of the OLTC transformer. Thus, many utilities limit the penetration level of DG and are eager to find reasonable penetration limits of DG in the distribution system. To overcome this technical problem, utilities have developed a new voltage regulation method for distribution systems with a large DG penetration level. In this paper, the impact of DG on the OLTC operations controlled by the line drop compensation (LDC) method is analyzed. In addition, a generalized methodology for determining the DG penetration limits in a distribution substation transformer is proposed. The proposed DG penetration limits could be adopted for a simplified interconnection process in DG interconnection guidelines.
Han, Fang; Liu, Han
2016-01-01
Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions. As a robust alternative, Han and Liu [J. Am. Stat. Assoc. 109 (2015) 275-2...
A two-component generalized extreme value distribution for precipitation frequency analysis
Czech Academy of Sciences Publication Activity Database
Rulfová, Zuzana; Buishand, A.; Roth, M.; Kyselý, Jan
2016-01-01
Vol. 534, March (2016), pp. 659-668. ISSN 0022-1694. R&D Projects: GA ČR (CZ) GA14-18675S. Institutional support: RVO:68378289. Keywords: precipitation extremes; two-component extreme value distribution; regional frequency analysis; convective precipitation; stratiform precipitation; Central Europe. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 3.483, year: 2016. http://www.sciencedirect.com/science/article/pii/S0022169416000500
General regularities of Sr 90 distribution in system soil-plant under natural conditions
International Nuclear Information System (INIS)
Gudeliene, I.; Marchiulioniene, D.; Petroshius, R.
2006-01-01
Sr-90 distribution in the system 'soil - underground plant part - aboveground plant part' was investigated. It was determined that the Sr-90 activity concentration in the underground and aboveground parts of plants and in mosses did not depend on its activity concentration in soil. There was a direct dependence of the Sr-90 activity concentration in the aboveground parts on that in the underground parts of plants. The Sr-90 transfer factor from soil to the underground parts of plants and to mosses was directly dependent on this radionuclide's activity concentration in them. (authors)
Productivity Growth and the General X-factor in Austria's Gas Distribution
Gugler, Klaus; Liebensteiner, Mario
2016-01-01
We estimate cost functions to derive productivity growth using a unique database on costs and outputs of essentially all regulated Austrian gas distribution companies over the period 2002-2013, covering the times before and after the introduction of incentive regulation in 2008. We estimate a concave relation between total costs and time, and a significant one-off but permanent reduction in real costs after an imposed reduction in granted costs in the course of the introduction of incentive regulation.
Mapping the Pareto optimal design space for a functionally deimmunized biotherapeutic candidate.
Salvat, Regina S; Parker, Andrew S; Choi, Yoonjoo; Bailey-Kellogg, Chris; Griswold, Karl E
2015-01-01
The immunogenicity of biotherapeutics can bottleneck development pipelines and poses a barrier to widespread clinical application. As a result, there is a growing need for improved deimmunization technologies. We have recently described algorithms that simultaneously optimize proteins for both reduced T cell epitope content and high-level function. In silico analysis of this dual objective design space reveals that there is no single global optimum with respect to protein deimmunization. Instead, mutagenic epitope deletion yields a spectrum of designs that exhibit tradeoffs between immunogenic potential and molecular function. The leading edge of this design space is the Pareto frontier, i.e. the undominated variants for which no other single design exhibits better performance in both criteria. Here, the Pareto frontier of a therapeutic enzyme has been designed, constructed, and evaluated experimentally. Various measures of protein performance were found to map a functional sequence space that correlated well with computational predictions. These results represent the first systematic and rigorous assessment of the functional penalty that must be paid for pursuing progressively more deimmunized biotherapeutic candidates. Given this capacity to rapidly assess and design for tradeoffs between protein immunogenicity and functionality, these algorithms may prove useful in augmenting, accelerating, and de-risking experimental deimmunization efforts.
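For two objectives (here: epitope count to be minimized against retained molecular function to be maximized), the undominated set described above can be extracted with a single sort; this is a generic sketch with hypothetical variable names, not the authors' design algorithm:

```python
def pareto_frontier_2d(designs):
    """Return the undominated (epitope_count, activity) pairs:
    minimize epitopes, maximize activity.

    After sorting by epitope count (highest activity first on ties),
    a design is kept iff its activity exceeds that of every
    less-immunogenic design seen so far."""
    frontier, best_activity = [], float("-inf")
    for epitopes, activity in sorted(designs, key=lambda d: (d[0], -d[1])):
        if activity > best_activity:
            frontier.append((epitopes, activity))
            best_activity = activity
    return frontier
```

Walking along the returned list traces the immunogenicity/function trade-off curve the abstract calls the Pareto frontier.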
Single Cell Dynamics Causes Pareto-Like Effect in Stimulated T Cell Populations.
Cosette, Jérémie; Moussy, Alice; Onodi, Fanny; Auffret-Cariou, Adrien; Neildez-Nguyen, Thi My Anh; Paldi, Andras; Stockholm, Daniel
2015-12-09
Cell fate choice during differentiation may obey deterministic or stochastic rules. In order to discriminate between these two strategies, we used time-lapse microscopy of individual murine CD4+ T cells, which allows investigating the dynamics of proliferation and fate commitment. We observed highly heterogeneous division and death rates between individual clones, resulting in a Pareto-like dominance of a few clones at the end of the experiment. Commitment to the Treg fate was monitored using the expression of a GFP reporter gene under the control of the endogenous Foxp3 promoter. All possible combinations of proliferation and differentiation were observed and resulted in exclusively GFP-, exclusively GFP+, or mixed-phenotype clones of very different population sizes. We simulated the process of proliferation and differentiation using a simple mathematical model of stochastic decision-making based on the experimentally observed parameters. The simulations show that a stochastic scenario is fully compatible with the observed Pareto-like imbalance in the final population.
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2015-01-01
Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.
Directory of Open Access Journals (Sweden)
J. S. Sadaghiani
2014-04-01
The flexible job shop scheduling problem is a key factor in the efficient use of production systems. This paper attempts to simultaneously optimize three objectives: minimization of the makespan, the total workload, and the maximum workload of jobs. Since the multi-objective flexible job shop scheduling problem is strongly NP-hard, an integrated heuristic approach has been used to solve it. The proposed approach is based on a floating search procedure that employs local heuristic algorithms, decomposing the considered problem into two subproblems: assignment and sequencing. Search is first performed over the assignment space until an acceptable solution is achieved, and then continues over the sequencing space using a heuristic algorithm. This paper uses a multi-objective approach for producing Pareto solutions; thus the proposed approach was adapted to the NSGA-II algorithm and evaluated using Pareto archives. The elements and parameters of the proposed algorithms were adjusted through preliminary experiments. Finally, computational results were used to analyze the efficiency of the proposed algorithm, and these results showed that it is capable of producing efficient solutions.
A Pareto-based multi-objective optimization algorithm to design energy-efficient shading devices
International Nuclear Information System (INIS)
Khoroshiltseva, Marina; Slanzi, Debora; Poli, Irene
2016-01-01
Highlights: • We present a multi-objective optimization algorithm for shading design. • We combine Harmony search and Pareto-based procedures. • Thermal and daylighting performances of external shading were considered. • We applied the optimization process to a residential social housing in Madrid. - Abstract: In this paper we address the problem of designing new energy-efficient static daylight devices that will surround the external windows of a residential building in Madrid. Shading devices can in fact largely influence solar gains in a building and improve thermal and lighting comforts by selectively intercepting the solar radiation and by reducing the undesirable glare. A proper shading device can therefore significantly increase the thermal performance of a building by reducing its energy demand in different climate conditions. In order to identify the set of optimal shading devices that allow a low energy consumption of the dwelling while maintaining high levels of thermal and lighting comfort for the inhabitants we derive a multi-objective optimization methodology based on Harmony Search and Pareto front approaches. The results show that the multi-objective approach here proposed is an effective procedure in designing energy efficient shading devices when a large set of conflicting objectives characterizes the performance of the proposed solutions.
Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas
2015-01-01
This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.
The Reduction of Modal Sensor Channels through a Pareto Chart Methodology
Directory of Open Access Journals (Sweden)
Kaci J. Lemler
2015-01-01
Presented herein is a new experimental sensor placement procedure developed to assist in placing sensors in key locations in an efficient manner, reducing the number of channels required for a full modal analysis. It is a fast, noncontact method that uses a laser vibrometer to gather a candidate set of sensor locations. These locations are then evaluated using a Pareto chart to obtain a reduced set of sensor locations that still captures the motion of the structure. The Pareto chart is employed to identify the points on a structure that have the largest reaction to an input excitation, and thus reduce the number of channels while capturing the most significant data. This method enhances the correct and efficient placement of sensors, which is crucial in modal testing; previously this required the development and/or use of a complicated model or set of equations. The new technique is applied in a case study on a small unmanned aerial system. The test procedure is presented and the results are discussed.
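The Pareto-chart reduction amounts to ranking candidate locations by response magnitude and keeping the smallest set that explains a chosen share of the total. A minimal sketch follows; the 80% cut-off and channel names are illustrative assumptions, not values from the study:

```python
def pareto_chart_selection(channel_response, coverage=0.8):
    """Keep the highest-response channels whose cumulative share of the
    total response first reaches `coverage` (a classic Pareto-chart cut).

    `channel_response` maps a channel name to its measured response
    magnitude (e.g. RMS velocity from the laser vibrometer)."""
    total = sum(channel_response.values())
    kept, running = [], 0.0
    for name, value in sorted(channel_response.items(),
                              key=lambda kv: kv[1], reverse=True):
        kept.append(name)
        running += value
        if running >= coverage * total:
            break
    return kept
```

The returned channel list is the reduced sensor set to instrument for the full modal test.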
Directory of Open Access Journals (Sweden)
Parsa M.
2014-01-01
Mean residual life and failure rate functions are ubiquitously employed in reliability analysis. The useful period of lifetime distributions with bathtub-shaped failure rate functions refers to the flat region of this function and has attracted researchers in reliability, actuarial science, and survival analysis. In recent years, the change points of the mean residual life and failure rate functions have been extensively utilized in determining the optimum burn-in time. In this paper we investigate the difference between the change points of the failure rate and mean residual life functions of some generalized gamma type distributions, due to the capability of these distributions in modeling various bathtub-shaped failure rate functions.
van de Schoot, A J A J; Visser, J; van Kesteren, Z; Janssen, T M; Rasch, C R N; Bel, A
2016-02-21
The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D(99%)) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D(99%), rectum V(30Gy) and bladder V(40Gy) to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D(99%) on average by 0.2 Gy and decreased the median rectum V(30Gy) and median bladder V(40Gy) on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal
International Nuclear Information System (INIS)
Gollub, C; De Vivie-Riedle, R
2009-01-01
A multi-objective genetic algorithm is applied to optimize picosecond laser fields driving vibrational quantum processes. Our examples are state-to-state transitions and unitary transformations. The approach allows features of the shaped laser fields and of the excitation mechanisms to be controlled simultaneously with the quantum yield. Within the parameter range accessible to the experiment, we focus on short pulse durations and low pulse energies in order to preferentially obtain robust laser fields. Multidimensional Pareto fronts for these conflicting objectives could be constructed. Comparison with previous work showed that the solutions from Pareto optimizations and from optimal control theory match very well.
Vlach, Haley A; Sandhofer, Catherine M
2012-01-01
The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than on other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.
Reddy, P.V.; Engwerda, J.C.
2010-01-01
In this article we derive necessary and sufficient conditions for the existence of Pareto optimal solutions for an N player cooperative infinite horizon differential game. Firstly, we write the problem of finding Pareto candidates as solving N constrained optimal control subproblems. We derive some
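A standard route to Pareto candidates in such cooperative games (the usual weighted-sum scalarization, not necessarily the constrained-subproblem formulation used in the article) is to minimize a convex combination of the players' cost functionals:

$$ \min_{u} \sum_{i=1}^{N} \alpha_i J_i(u), \qquad \alpha_i > 0, \quad \sum_{i=1}^{N} \alpha_i = 1, $$

where every minimizer obtained for a strictly positive weight vector $\alpha$ is Pareto optimal; sweeping over admissible weights traces out candidate points on the Pareto frontier.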
Directory of Open Access Journals (Sweden)
Claudia eCasellato
2015-02-01
The cerebellum plays a crucial role in motor learning and acts as a predictive controller. Modeling it and embedding it into sensorimotor tasks allows us to create functional links between plasticity mechanisms, neural circuits and behavioral learning. Moreover, if applied to real-time control of a neurorobot, the cerebellar model has to deal with a real, noisy and changing environment, thus showing its robustness and effectiveness in learning. A biologically inspired cerebellar model with distributed plasticity, both at cortical and nuclear sites, has been used. Two cerebellum-mediated paradigms have been designed: an associative Pavlovian task and a vestibulo-ocular reflex, with multiple sessions of acquisition and extinction and with different stimuli and perturbation patterns. The cerebellar controller succeeded in generating conditioned responses and finely tuned eye-movement compensation, thus reproducing human-like behaviors. Through a productive plasticity transfer from cortical to nuclear sites, the distributed cerebellar controller showed in both tasks the capability to optimize learning on multiple time scales, to store motor memory and to effectively adapt to dynamic ranges of stimuli.
Directory of Open Access Journals (Sweden)
S. C. Oukouomi Noutchie
2014-01-01
We make use of Laplace transform techniques and the method of characteristics to solve fragmentation equations explicitly. Our result is a breakthrough in the analysis of pure fragmentation equations, as this is the first instance in which an exact solution is provided for the fragmentation evolution equation with general fragmentation rates. This paper is the key to resolving most of the open problems in fragmentation theory, including "shattering" and the sudden appearance of infinitely many particles in some systems with an initially finite number of particles.
DEFF Research Database (Denmark)
Seven, Ekim; Thuesen, Betina H; Linneberg, Allan
2016-01-01
Abdominal obesity is a major risk factor for hypertension. However, different distributions of abdominal adipose tissue may affect hypertension risk differently. The main purpose of this study was to explore the association of subcutaneous abdominal adipose tissue (SAT) and visceral adipose tissue ... )). We constructed multiple logistic regression models to compute standardized odds ratios with 95% confidence intervals per SD increase in SAT and VAT. Of the 2119 normotensive participants at baseline, 1432, with mean SAT of 2.8 cm and mean VAT of 5.7 cm, returned 5 years later for a follow-up examination and among them 203 had developed hypertension. In models including both VAT and SAT, the Framingham Hypertension Risk Score variables (age, sex, smoking status, family history of hypertension, and baseline blood pressure) and glycated hemoglobin, odds ratio (95% confidence interval) for incident ...
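The "standardized odds ratio per SD increase" computation can be sketched on simulated data as follows; all numbers are illustrative, and this is not the study's model (which also adjusts for the Framingham risk-score variables):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: visceral adipose tissue (VAT, cm) and 5-year
# incident hypertension (0/1), loosely inspired by the study design.
n = 2000
vat = rng.normal(5.7, 1.5, n)
logit_p = -2.0 + 0.4 * (vat - 5.7) / 1.5      # true OR per SD = exp(0.4)
y = rng.random(n) < 1 / (1 + np.exp(-logit_p))

# Standardize the predictor so exp(beta) is the odds ratio per SD increase.
x = (vat - vat.mean()) / vat.std()
X = np.column_stack([np.ones(n), x])

# Newton-Raphson fit of the logistic regression (no external packages).
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score
    hess = (X * (p * (1 - p))[:, None]).T @ X # observed information
    beta += np.linalg.solve(hess, grad)

or_per_sd = np.exp(beta[1])                   # standardized odds ratio
```

With this simulated effect size, `or_per_sd` should land near exp(0.4) ≈ 1.5, i.e. roughly 50% higher odds of incident hypertension per SD increase in VAT.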
Directory of Open Access Journals (Sweden)
Leonid Berezansky
2003-02-01
We study a scalar delay differential equation with a bounded distributed delay, $$ \dot{x}(t) + \int_{h(t)}^t x(s)\,d_s R(t,s) - \int_{g(t)}^t x(s)\,d_s T(t,s) = 0, $$ where $R(t,s)$, $T(t,s)$ are nonnegative and nondecreasing in $s$ for any $t$, $$ R(t,h(t)) = T(t,g(t)) = 0, \quad R(t,s) \geq T(t,s). $$ We establish a connection between non-oscillation of this differential equation and the corresponding differential inequalities, and between positiveness of the fundamental function and the existence of a nonnegative solution for a nonlinear integral inequality that is constructed explicitly. We also present comparison theorems, and explicit non-oscillation and oscillation results. In a separate publication (part II), we will consider applications of this theory to differential equations with several concentrated delays, integrodifferential, and mixed equations.
A general approach to double-moment normalization of drop size distributions
Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.
2003-04-01
Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. Thus, a unified vision of the question of DSD normalization and a good model representation of DSDs are given. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.
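A minimal sketch of the two-moment normalization, using the third and fourth moments (the pair underlying the Testud et al. normalization); the exponential DSD and its parameters below are illustrative, not data from the paper:

```python
import numpy as np

# Synthetic exponential DSD: N(D) = N0 * exp(-Lam * D).
D = np.linspace(0.01, 8.0, 4001)         # drop diameter (mm)
N0, Lam = 8000.0, 2.0                    # illustrative parameters
N = N0 * np.exp(-Lam * D)                # N(D), mm^-1 m^-3

def moment(n):
    """n-th moment of the DSD by trapezoidal integration."""
    f = D**n * N
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(D)))

M3, M4 = moment(3), moment(4)
Dm = M4 / M3                             # mass-weighted mean diameter (mm)
Nw = (4**4 / 6.0) * M3**5 / M4**4        # scaled intercept parameter

# Dimensionless normalized DSD: h(x) = N(D)/Nw as a function of x = D/Dm.
x, h = D / Dm, N / Nw
```

For an exponential DSD the analytic values are Dm = 4/Lam = 2.0 mm and Nw = N0, so the numerical result serves as a consistency check of the moment-based normalization.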
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best-bid and best-ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
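A minimal sketch of such a waiting-time model, assuming a two-component mixture of exponentials with made-up weight and rates (not the values fitted to the 7 markets); the mixture keeps the renewal structure of a Poisson flow but has a heavier tail:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mixture of exponentials: with prob. p a "fast" regime, else a "slow" one.
p, mean_fast, mean_slow = 0.7, 2.0, 20.0     # illustrative, seconds

n = 200_000
fast = rng.random(n) < p
tau = np.where(fast,
               rng.exponential(mean_fast, n),
               rng.exponential(mean_slow, n))

mean_tau = tau.mean()
theory = p * mean_fast + (1 - p) * mean_slow  # 0.7*2 + 0.3*20 = 7.4 s

# A pure Poisson order flow would give an exponential waiting time with
# coefficient of variation 1; the mixture's CV exceeds 1 (heavier tail).
cv = tau.std() / mean_tau
```

Comparing the empirical survival function of `tau` against a single exponential with the same mean is the kind of test that distinguishes the mixture from a plain Poisson process.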
CyNC - towards a General Tool for Performance Analysis of Complex Distributed Real Time Systems
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens F. Dalsgaard
2005-01-01
The paper addresses the current state and the ongoing activities of a tool for performance analysis of complex real-time systems. The tool, named CyNC, is based on network calculus, allowing for the computation of backlogs and delays in a system from specified lower and upper bounds of external ... workflow and computational resources. The current version of the tool implements an extension to previous work in that it allows for general workflow and resource bounds and provides optimal solutions even for systems with cyclic dependencies. Despite the virtues of the current tool, improvements ... and extensions still remain, which are the focus of ongoing activities. Improvements include accounting for phase information to improve bounds, whereas the tool awaits extension to include flow-control models, which both depend on the possibility of accounting for propagation delay. Since the current version ...
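For the simplest arrival and service curves handled by network-calculus tools of this kind, the backlog and delay bounds have closed forms; the sketch below (token-bucket arrivals, rate-latency service) is illustrative and does not use CyNC's own interface:

```python
# Network-calculus bounds for a token-bucket arrival curve
# a(t) = b + r*t served by a rate-latency curve B(t) = R*max(t - T, 0).

def backlog_bound(b, r, R, T):
    """Maximum vertical deviation between arrival and service curves."""
    assert r <= R, "stability requires the arrival rate not to exceed R"
    return b + r * T

def delay_bound(b, r, R, T):
    """Maximum horizontal deviation: worst-case delay."""
    assert r <= R
    return T + b / R

# Example: 2 kbit bucket, 1 Mbit/s mean rate, 10 Mbit/s server, 1 ms latency.
d = delay_bound(b=2e3, r=1e6, R=10e6, T=1e-3)    # 1.2 ms
q = backlog_bound(b=2e3, r=1e6, R=10e6, T=1e-3)  # 3000 bits
```

Tools like CyNC generalize exactly this computation to arbitrary bounding curves and to networks with cyclic dependencies, where no closed form exists.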
Generalized Gibbs distribution and energy localization in the semiclassical FPU problem
Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli
2011-03-01
We investigate the dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.
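For reference, the generalized Gibbs ensemble mentioned above has the standard form, with one Lagrange multiplier $\lambda_k$ per conserved (here, slowly varying) quantity $\hat{I}_k$:

$$ \hat{\rho}_{\mathrm{GGE}} = \frac{1}{Z} \exp\Big( -\sum_k \lambda_k \hat{I}_k \Big), \qquad Z = \mathrm{Tr}\, \exp\Big( -\sum_k \lambda_k \hat{I}_k \Big), $$

where the $\lambda_k$ are fixed by requiring $\mathrm{Tr}\,(\hat{\rho}_{\mathrm{GGE}} \hat{I}_k) = \langle \hat{I}_k \rangle$, i.e. by matching the expectation values of the conserved quantities.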
Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt
2018-01-12
With the shift of focus of the regulatory bodies from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of the separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives, productivity and yield, for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m³ column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m³ column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration
Distribution and determinants of QRS rotation of black and white persons in the general population.
Prineas, Ronald J; Zhang, Zhu-Ming; Stevens, Cladd E; Soliman, Elsayed Z
The prevalence and determinants of QRS transition zones are not well established. We examined the distributions of Normal, clockwise (CW) and counterclockwise (CCW) QRS transition zones and their relations to disease, body size and demographics in 4624 black and white men and women free of cardiovascular disease and major ECG abnormalities enrolled in the NHANES-III survey. CW transition zones were least observed (6.2%) and CCW were most prevalent (60.1%), with Normal in an intermediate position (33.7%). In multivariable logistic regression analysis, the adjusted significant predictors for CCW compared to Normal were a greater proportion of blacks and women, fewer thin people (BMI<20), a greater ratio of chest depth to chest width, and an LV mass index <80 g. By contrast, CW persons were older, had larger QRS/T angles and a smaller ratio of chest depth to chest width, had a greater proportion of subjects with low-voltage QRS, more pulmonary disease, a greater proportion with high heart rates, shorter QRS duration, and were more obese (BMI≥30). Normal, rather than being the most prevalent transition zone, was intermediate in frequency between the most frequently encountered CCW and the least frequently encountered CW. Differences in the predictors of CW and CCW exist. This requires further investigation to examine how far these differences explain the published prognostic differences between CW and CCW. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Vimal Savsani
2017-01-01
Most modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named multiobjective heat transfer search (MOHTS), which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approaches of the elitist-based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and for preserving the diversity among the optimal set of solutions, respectively. The capability of MOHTS to yield a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which has a set of five second-order linear ordinary differential equations. A half-car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves a better nondominated Pareto front with a widespread (diverse) set of optimal solutions compared to NSGA-II, and the comparison of the extreme points of the obtained Pareto fronts further reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.
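The crowding-distance measure that MOHTS borrows from NSGA-II can be sketched as follows (an illustrative implementation, not the authors' code):

```python
import numpy as np

def crowding_distance(front):
    """NSGA-II crowding distance for one nondominated front.

    `front` is an (n_points, n_objectives) array; boundary points get
    infinite distance so they are always preserved.
    """
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(front[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = front[order[-1], k] - front[order[0], k]
        if span == 0:
            continue
        # Interior points accumulate the normalized gap between their
        # two neighbours along this objective.
        dist[order[1:-1]] += (front[order[2:], k] - front[order[:-2], k]) / span
    return dist

# Illustrative 2-objective front (both objectives minimized).
front = [[0.0, 1.0], [0.25, 0.6], [0.5, 0.5], [1.0, 0.0]]
d = crowding_distance(front)   # boundary points -> inf, interior ranked
```

Points with larger crowding distance sit in sparser regions of the front, so selecting by nondomination level and then by crowding distance preserves diversity.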
International Nuclear Information System (INIS)
El Beiyad, M.; Pire, B.; Segond, M.; Szymanowski, L.; Wallon, S.
2010-01-01
The chiral-odd transversity generalized parton distributions (GPDs) of the nucleon can be accessed experimentally through the exclusive photoproduction process γ + N → π + ρ + N′, in the kinematics where the meson pair has a large invariant mass and the final nucleon has a small transverse momentum, provided the vector meson is produced in a transversally polarized state. We calculate perturbatively the scattering amplitude at leading order in α_s. We build a simple model for the dominant transversity GPD H_T(x,ξ,t) based on the concept of double distribution. We estimate the unpolarized differential cross section for this process in the kinematics of the JLab and COMPASS experiments. Counting rates show that the experiment looks feasible with the real photon beam characteristics expected at JLab at 12 GeV, and with the quasi-real photon beam in the COMPASS experiment.
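The double-distribution construction invoked for H_T is conventionally written as follows (a sketch of the standard ansatz; the profile function $f_T$ is model-dependent and is typically taken as a profile in $\alpha$ times the forward transversity distribution in $\beta$):

$$ H_T(x,\xi,t=0) = \int_{-1}^{1} d\beta \int_{-1+|\beta|}^{\,1-|\beta|} d\alpha \; \delta(x - \beta - \alpha\xi)\, f_T(\beta,\alpha), $$

which builds the correct polynomiality properties of the GPD into the model by construction.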