Active learning of Pareto fronts.
Campigotto, Paolo; Passerini, Andrea; Battiti, Roberto
2014-03-01
This paper introduces the active learning of Pareto fronts (ALP) algorithm, a novel approach to recover the Pareto front of a multiobjective optimization problem. ALP casts the identification of the Pareto front into a supervised machine learning task. This approach enables an analytical model of the Pareto front to be built. The computational effort in generating the supervised information is reduced by an active learning strategy. In particular, the model is learned from a set of informative training objective vectors. The training objective vectors are approximated Pareto-optimal vectors obtained by solving different scalarized problem instances. The experimental results show that ALP achieves an accurate Pareto front approximation with a lower computational effort than state-of-the-art estimation of distribution algorithms and widely known genetic techniques.
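The scalarization step that supplies ALP's training objective vectors can be sketched as follows. The two-objective toy problem, the weighted-sum scalarization, and the grid search are illustrative assumptions only, not the paper's actual benchmark problems or its ASF choice.

```python
# Hypothetical two-objective toy problem: f1(x) = x^2, f2(x) = (x - 2)^2.
# Each scalarized instance yields one (approximate) Pareto-optimal
# objective vector, usable as a supervised training point.
def scalarized_minimum(w, xs):
    # Solve one weighted-sum scalarization: min_x w*f1(x) + (1-w)*f2(x).
    best = min(xs, key=lambda x: w * x**2 + (1 - w) * (x - 2)**2)
    return (best**2, (best - 2)**2)   # the resulting objective vector

xs = [i / 100 for i in range(201)]    # discretized decision space [0, 2]
training = [scalarized_minimum(k / 10, xs) for k in range(11)]
print(training[0], training[-1])      # the two extreme trade-offs
```

A front model would then be fit to `training`; varying the weight sweeps out different trade-offs between the two objectives.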
Saborido, Rubén; Ruiz, Ana B; Luque, Mariano
2017-01-01
In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximate the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II in two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA gets better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
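A minimal sketch of a Tchebychev-type achievement scalarizing function of the kind Global WASF-GA uses as fitness. The reference point, weight vectors, and example objective vector below are illustrative assumptions; the actual algorithm evaluates the ASF with both the utopian and the nadir vectors simultaneously and draws weights from a well-distributed set.

```python
def asf(f, reference, weights):
    # Weighted Tchebychev distance from a reference point: the maximum
    # weighted deviation over all objectives (minimization assumed).
    return max(w * (fi - ri) for fi, ri, w in zip(f, reference, weights))

utopian = [0.0, 0.0]                  # assumed ideal objective values
f = [2.0, 6.0]                        # an example objective vector
# Varying the weight vector steers which region of the front a given
# ASF value rewards.
print(asf(f, utopian, [1.0, 0.5]), asf(f, utopian, [0.2, 1.0]))
```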
Pareto fronts in clinical practice for pinnacle.
Janssen, Tomas; van Kesteren, Zdenko; Franssen, Gijs; Damen, Eugène; van Vliet, Corine
2013-03-01
Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. To generate the Pareto fronts, we used the native scripting language of Pinnacle³ (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Generating 3000 plans for 10 patients in parallel requires on average 96 h for IMRT and 483 h for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI95%) by 0.02 (P=.005), and the rectal wall V65Gy by 1.1% (P=.008). We showed the feasibility of automatically generating Pareto fronts with Pinnacle³. Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT.
2016-12-21
The JMP Add-In TopN-PFS provides an automated tool for finding layered Pareto fronts to identify the top N solutions from an enumerated list of candidates subject to optimizing multiple criteria. The approach constructs the N layers of Pareto fronts and then provides a suite of graphical tools to explore the alternatives based on different prioritizations of the criteria. The tool is designed to provide a set of alternatives from which the decision-maker can select the best option for their study goals.
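The layered fronts can be computed by repeatedly peeling off the current non-dominated set. The sketch below (for minimization of all criteria) illustrates the idea only and is not the add-in's actual implementation.

```python
def dominates(a, b):
    # Minimization: a dominates b if it is no worse in every criterion
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_layers(points, n_layers):
    # Peel successive non-dominated fronts until n_layers are collected.
    remaining, layers = list(points), []
    while remaining and len(layers) < n_layers:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        layers.append(front)
        remaining = [p for p in remaining if p not in front]
    return layers

points = [(1, 4), (2, 3), (3, 2), (2, 2), (4, 4)]
layers = pareto_layers(points, 2)
print(layers)
```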
Multiclass gene selection using Pareto-fronts.
Rajapakse, Jagath C; Mundra, Piyushkumar A
2013-01-01
Filter methods are often used for selection of genes in multiclass sample classification by using microarray data. Such techniques usually tend to bias toward a few classes that are easily distinguishable from other classes due to imbalances of strong features and sample sizes of different classes. It could therefore lead to selection of redundant genes while missing the relevant genes, leading to poor classification of tissue samples. In this manuscript, we propose to decompose multiclass ranking statistics into class-specific statistics and then use Pareto-front analysis for selection of genes. This alleviates the bias induced by class intrinsic characteristics of dominating classes. The use of Pareto-front analysis is demonstrated on two filter criteria commonly used for gene selection: F-score and KW-score. A significant improvement in classification performance and reduction in redundancy among top-ranked genes were achieved in experiments with both synthetic and real-benchmark data sets.
Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.
Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K
2010-03-21
We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower, and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
Pareto front estimation for decision making.
Giagkiozis, Ioannis; Fleming, Peter J
2014-01-01
The set of available multi-objective optimisation algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited for a particular problem, it is difficult, mainly due to the computational cost, to use a population large enough to ensure the likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimisation algorithm for two-objective and three-objective problem instances.
Computing gap free Pareto front approximations with stochastic search algorithms.
Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali
2010-01-01
Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of epsilon-dominance. Though bounds on the quality of the limit approximation, which are entirely determined by the archiving strategy and the value of epsilon, have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we are aiming in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of epsilon-dominance can be integrated into this approach in a suitable way.
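An epsilon-dominance archive of the kind analyzed here can be sketched as follows (minimization, scalar epsilon per objective). Note that, as the abstract points out, such a basic archive bounds approximation quality but does not by itself guarantee a gap free front; the epsilon value and update rule below are illustrative choices.

```python
def eps_dominates(a, b, eps):
    # a epsilon-dominates b if a, shifted by eps, is no worse than b
    # in every objective (minimization).
    return all(x - eps <= y for x, y in zip(a, b))

def update_archive(archive, p, eps):
    # Accept p only if no archived point epsilon-dominates it; then
    # discard archived points that p epsilon-dominates.
    if any(eps_dominates(a, p, eps) for a in archive):
        return archive
    return [a for a in archive if not eps_dominates(p, a, eps)] + [p]

archive = []
for p in [(1.0, 1.0), (1.05, 1.05), (0.5, 2.0)]:
    archive = update_archive(archive, p, eps=0.1)
print(archive)   # (1.05, 1.05) is absorbed by (1.0, 1.0)
```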
Ottosson, Rickard O.; Sjoestroem, David; Behrens, Claus F.; Karlsson, Anna; Engstroem, Per E.; Knoeoes, Tommy; Ceberg, Crister
2009-01-01
Pareto optimality is a concept that formalises the trade-off between a given set of mutually contradicting objectives. A solution is said to be Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the others. A set of Pareto optimal solutions constitutes the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics of a treatment planning system (TPS), treatment strategy or delivery technique, Pareto fronts for a given case are likely to differ. The aim of this study was to investigate the feasibility of using Pareto fronts as a comparative tool for TPSs, treatment strategies and delivery techniques. In order to sample Pareto fronts, multiple treatment plans with varying target conformity and dose sparing of OAR were created for a number of prostate and head and neck IMRT cases. The DVHs of each plan were evaluated with respect to target coverage and dose to relevant OAR. Pareto fronts were successfully created for all studied cases. The results did indeed follow the definition of the Pareto concept, i.e. dose sparing of the OAR could not be improved without target coverage being impaired or vice versa. Furthermore, various treatment techniques resulted in distinguished and well separated Pareto fronts. Pareto fronts may be used to evaluate a number of parameters within radiotherapy. Examples are TPS optimization algorithms, the variation between accelerators or delivery techniques and the degradation of a plan during the treatment planning process. The issue of designing a model for unbiased comparison of parameters with such large inherent discrepancies, e.g. different TPSs, is problematic and should be carefully considered.
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
The geometry of the Pareto front in biological phenotype space
Sheftel, Hila; Shoval, Oren; Mayo, Avi; Alon, Uri
2013-01-01
When organisms perform a single task, selection leads to phenotypes that maximize performance at that task. When organisms need to perform multiple tasks, a trade-off arises because no phenotype can optimize all tasks. Recent work addressed this question, and assumed that the performance at each task decays with distance in trait space from the best phenotype at that task. Under this assumption, the best-fitness solutions (termed the Pareto front) lie on simple low-dimensional shapes in trait space: line segments, triangles and other polygons. The vertices of these polygons are specialists at a single task. Here, we generalize this finding, by considering performance functions of general form, not necessarily functions that decay monotonically with distance from their peak. We find that, except for performance functions with highly eccentric contours, simple shapes in phenotype space are still found, but with mildly curving edges instead of straight ones. In a wide range of systems, complex data on multiple quantitative traits, which might be expected to fill a high-dimensional phenotype space, is predicted instead to collapse onto low-dimensional shapes; phenotypes near the vertices of these shapes are predicted to be specialists, and can thus suggest which tasks may be at play. PMID:23789060
A. Bouter (Anton); K. Pirpinia (Kleopatra); T. Alderliesten (Tanja); P.A.N. Bosman (Peter)
2017-01-01
A multi-objective optimization approach is often followed by an a posteriori decision-making process, during which the most appropriate solution of the Pareto set is selected by a professional in the field. Conventional visualization methods do not correct for Pareto fronts with
Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1
Langenbrunner, B.; Neelin, J. D.
2017-09-01
Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
Diversity comparison of Pareto front approximations in many-objective optimization.
Li, Miqing; Yang, Shengxiang; Liu, Xiaohui
2014-12-01
Diversity assessment of Pareto front approximations is an important issue in the stochastic multiobjective optimization community. Most of the diversity indicators in the literature were designed to work for any number of objectives of Pareto front approximations in principle, but in practice many of these indicators are infeasible or not workable when the number of objectives is large. In this paper, we propose a diversity comparison indicator (DCI) to assess the diversity of Pareto front approximations in many-objective optimization. DCI evaluates the relative quality of different Pareto front approximations rather than providing an absolute measure of distribution for a single approximation. In DCI, all the concerned approximations are put into a grid environment so that there are some hyperboxes containing one or more solutions. The proposed indicator only considers the contribution of different approximations to nonempty hyperboxes. Therefore, the computational cost does not increase exponentially with the number of objectives. In fact, the implementation of DCI is of quadratic time complexity, which is fully independent of the number of divisions used in the grid. Systematic experiments are conducted using three groups of artificial Pareto front approximations and seven groups of real Pareto front approximations with different numbers of objectives to verify the effectiveness of DCI. Moreover, a comparison with two diversity indicators used widely in many-objective optimization is made analytically and empirically. Finally, a parametric investigation reveals interesting insights into the division number of the grid and also offers some suggested settings to the users with different preferences.
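The grid mechanism behind DCI can be illustrated by mapping objective vectors to hyperboxes and pooling the nonempty boxes across approximations. The coverage ratio below is a deliberate simplification for illustration; DCI proper scores each approximation's contribution to every nonempty hyperbox via grid distances rather than simple membership.

```python
def hyperbox(point, lo, hi, divisions):
    # Grid cell index of a point, clamped to the last cell on each axis.
    return tuple(min(divisions - 1, int((p - l) * divisions / (h - l)))
                 for p, l, h in zip(point, lo, hi))

approx_a = [(0.1, 0.9), (0.5, 0.5)]      # two toy approximation sets
approx_b = [(0.12, 0.88), (0.9, 0.1)]
boxes_a = {hyperbox(p, (0, 0), (1, 1), 4) for p in approx_a}
boxes_b = {hyperbox(p, (0, 0), (1, 1), 4) for p in approx_b}
nonempty = boxes_a | boxes_b             # hyperboxes occupied by anyone
print(len(boxes_a & nonempty) / len(nonempty))   # crude coverage of A
```

Because only occupied hyperboxes are considered, the bookkeeping stays proportional to the number of solutions rather than the number of grid cells.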
Level Diagrams analysis of Pareto Front for multiobjective system redundancy allocation
Zio, E.; Bazzo, R.
2011-01-01
Reliability-based and risk-informed design, operation, maintenance and regulation lead to multiobjective (multicriteria) optimization problems. In this context, the Pareto Front and Set found in a multiobjective optimality search provide a family of solutions among which the decision maker has to look for the best choice according to his or her preferences. Efficient visualization techniques for Pareto Front and Set analyses are needed for helping decision makers in the selection task. In this paper, we consider the multiobjective optimization of system redundancy allocation and use the recently introduced Level Diagrams technique for graphically representing the resulting Pareto Front and Set. Each objective and decision variable is represented on separate diagrams where the points of the Pareto Front and Set are positioned according to their proximity to ideally optimal points, as measured by a metric of normalized objective values. All diagrams are synchronized across all objectives and decision variables. On the basis of the analysis of the Level Diagrams, we introduce a procedure for reducing the number of solutions in the Pareto Front; from the reduced set of solutions, the decision maker can more easily identify his or her preferred solution.
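The y-axis value that a Level Diagram assigns to each Pareto point is a norm of the normalized objective vector, measuring proximity to the ideal point. The Euclidean norm is used here for illustration (the technique also admits 1-norm and infinity-norm variants), and the ideal and nadir vectors are assumed known.

```python
import math

def level(point, ideal, nadir):
    # Normalize each objective to [0, 1] between ideal and nadir, then
    # take the Euclidean norm: the y-value shared by this point across
    # all synchronized Level Diagrams.
    z = [(p - i) / (n - i) for p, i, n in zip(point, ideal, nadir)]
    return math.sqrt(sum(v * v for v in z))

# The ideal point sits at level 0; the nadir at the maximum level.
print(level((0.2, 0.7), (0.0, 0.0), (1.0, 1.0)))
```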
Kristoffer Petersson
2017-07-01
We present a clinical distance measure for Pareto front evaluation studies in radiotherapy, which we show strongly correlates (r = 0.74 and 0.90) with clinical plan quality evaluation. For five prostate cases, sub-optimal treatment plans located at a clinical distance value of >0.32 (0.28–0.35) from fronts of Pareto optimal plans were assessed to be of lower plan quality by our 12 observers (p < .05). In conclusion, the clinical distance measure can be used to determine if the difference between a front and a given plan (or between different fronts) corresponds to a clinically significant plan quality difference.
Ziaul Huque
2012-01-01
A Computational Fluid Dynamics (CFD) and response surface-based multiobjective design optimization were performed for six different 2D airfoil profiles, and the Pareto optimal front of each airfoil is presented. FLUENT, a commercial CFD simulation code, was used to determine the relevant aerodynamic loads. The Lift Coefficient (CL) and Drag Coefficient (CD) data at a range of 0° to 12° angles of attack (α) and at three different Reynolds numbers (Re = 68,459; 479,210; and 958,422) for all six airfoils were obtained. The realizable k-ε turbulence model with a second-order upwind solution method was used in the simulations. The standard least square method was used to generate response surfaces with the statistical code JMP. The elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to determine the Pareto optimal set based on the response surfaces. Each Pareto optimal solution represents a different compromise between design objectives. This gives the designer a choice to select a design compromise that best suits the requirements from a set of optimal solutions. The Pareto solution set is presented in the form of a Pareto optimal front.
Ranking of microRNA target prediction scores by Pareto front analysis.
Sahoo, Sudhakar; Albrecht, Andreas A
2010-12-01
Over the past ten years, a variety of microRNA target prediction methods has been developed, and many of the methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is reduced to selected mRNAs that are related to a specific disease or cell type. For the experimental validation it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and, unlike standard thresholding methods, utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373 and prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNAs, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistently, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts.
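The recurrent ranking scheme (compute the Pareto front of the remaining candidates, promote its best-scoring member, remove it, recompute) can be sketched as below. The two per-candidate scores and the tiebreak function stand in for the PITA/RNAhybrid and STarMir scores, and higher is assumed better; this is an illustration, not the paper's exact scoring.

```python
def front(cands):
    # Non-dominated set under maximization of both scores; a candidate
    # is dropped if some other candidate is at least as good in both.
    return [p for p in cands
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in cands)]

def rank_by_recursive_fronts(cands, tiebreak):
    # Repeatedly promote the front member with the best tiebreak score.
    ranking, pool = [], list(cands)
    while pool:
        top = max(front(pool), key=tiebreak)
        ranking.append(top)
        pool.remove(top)
    return ranking

ranking = rank_by_recursive_fronts([(3, 1), (1, 3), (2, 2), (1, 1)],
                                   tiebreak=lambda p: p[0])
print(ranking)
```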
Spectral-Efficiency - Illumination Pareto Front for Energy Harvesting Enabled VLC System
Abdelhady, Amr Mohamed Abdelaziz
2017-12-13
The continuous improvement in optical energy harvesting devices motivates visible light communication (VLC) system developers to utilize such available free energy sources. An outdoor VLC system is considered where an optical base station sends data to multiple users that are capable of harvesting the optical energy. The proposed VLC system serves multiple users using time division multiple access (TDMA) with unequal time and power allocation, which are allocated to improve the system performance. The adopted optical system provides users with illumination and data communication services. The outdoor optical design objective is to maximize the illumination, while the communication design objective is to maximize the spectral efficiency (SE). The design objectives are shown to be conflicting; therefore, a multiobjective optimization problem is formulated to obtain the Pareto front performance curve for the proposed system. To this end, the marginal optimization problems are solved first using low complexity algorithms. Then, based on the proposed algorithms, a low complexity algorithm is developed to obtain an inner bound of the Pareto front for the illumination-SE tradeoff. The inner bound is shown to be close to the optimal Pareto front via several simulation scenarios for different system parameters.
Langenbrunner, B.; Neelin, J. D.
2016-12-01
Despite increasing complexity and process representation in global climate models (GCMs), accurate climate simulation is limited by uncertainties in sub-grid scale model physics, where cloud processes and precipitation occur, and the interaction with large-scale dynamics. Identifying highly sensitive parameters and constraining them against observations is therefore a valuable step in narrowing uncertainty. However, changes in parameterizations often improve some variables or aspects of the simulation while degrading others. This analysis addresses means of improving GCM simulation of present-day tropical Pacific climate in the face of these tradeoffs. Focusing on the deep convection scheme in the fully coupled Community Earth System Model (CESM) version 1, four parameters were systematically sampled, and a metamodel or model emulator was used to reconstruct the parameter space of this perturbed physics ensemble. Using this metamodel, a Pareto front is constructed to visualize multiobjective tradeoffs in model performance, and results highlight the most important aspects of model physics as well as the most sensitive parameter ranges. For example, parameter tradeoffs arise in the tropical Pacific where precipitation cannot improve without sea surface temperature getting worse. Tropical precipitation sensitivity is found to be highly nonlinear for low values of entrainment in convecting plumes, though it is fairly insensitive at the high end of the plausible range. Increasing the adjustment timescale for convective closure causes the centroid of tropical precipitation to vary as much as two degrees latitude, highlighting the effect these physics can have on large-scale features of the hydrological cycle. The optimization procedure suggests that simultaneously increasing the maximum downdraft mass flux fraction and the adjustment timescale can yield improvements to surface temperature and column water vapor without degrading the simulation of precipitation. These
van de Schoot, A J A J; Visser, J; van Kesteren, Z; Janssen, T M; Rasch, C R N; Bel, A
2016-02-21
The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans, of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configurations. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal.
Van de Schoot, A J A J; Visser, J; Van Kesteren, Z; Rasch, C R N; Bel, A; Janssen, T M
2016-01-01
The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations were derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated for robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied to IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, the proportion of non-approved IMPT plans was determined, and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans, of which on average 29% were non-approved. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configurations. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal.
Patient feature based dosimetric Pareto front prediction in esophageal cancer radiotherapy.
Wang, Jiazhou; Jin, Xiance; Zhao, Kuaike; Peng, Jiayuan; Xie, Jiang; Chen, Junchao; Zhang, Zhen; Studenski, Matthew; Hu, Weigang
2015-02-01
To investigate the feasibility of dosimetric Pareto front (PF) prediction based on patients' anatomic and dosimetric parameters for esophageal cancer patients. Eighty esophageal cancer patients at the authors' institution were enrolled in this study. A total of 2928 intensity-modulated radiotherapy plans were obtained and used to generate a PF for each patient; on average, each patient had 36.6 plans. The anatomic and dosimetric features were extracted from these plans. The mean lung dose (MLD), mean heart dose (MHD), spinal cord maximum dose, and PTV homogeneity index were recorded for each plan. Principal component analysis was used to extract overlap volume histogram (OVH) features between the PTV and other organs at risk. The full dataset was separated into two parts: a training dataset and a validation dataset. The prediction outcomes were the MHD and MLD. Spearman's rank correlation coefficient was used to evaluate the correlation between the anatomical and dosimetric features. The stepwise multiple regression method was used to fit the PF, and cross-validation was used to evaluate the model. With 1000 repetitions, the mean prediction error of the MHD was 469 cGy; the most correlated factors were the first principal component of the OVH between heart and PTV and the overlap between heart and PTV along the Z-axis. The mean prediction error of the MLD was 284 cGy; the most correlated factors were the first principal component of the OVH between heart and PTV and the overlap between lung and PTV along the Z-axis. It is feasible to use patients' anatomic and dosimetric features to generate a predicted Pareto front. Additional samples and further studies are required to improve the prediction model.
Gollub, C; De Vivie-Riedle, R
2009-01-01
A multi-objective genetic algorithm is applied to optimize picosecond laser fields, driving vibrational quantum processes. Our examples are state-to-state transitions and unitary transformations. The approach allows features of the shaped laser fields and of the excitation mechanisms to be controlled simultaneously with the quantum yield. Within the parameter range accessible to the experiment, we focus on short pulse durations and low pulse energies to optimize preferably robust laser fields. Multidimensional Pareto fronts for these conflicting objectives could be constructed. Comparison with previous work showed that the solutions from Pareto optimizations and from optimal control theory match very well.
Jiang, Shouyong; Yang, Shengxiang
2016-02-01
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics; for example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrade the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) divides the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
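As background for the decomposition idea that MOEA/D and its variants build on, the following is a minimal sketch of the weighted Tchebycheff scalarization, which turns one weight vector into one scalar subproblem targeting one point on the front. The objective values, weights, and ideal point below are illustrative, not taken from the paper.

```python
def tchebycheff(fvals, weights, ideal):
    """Weighted Tchebycheff scalarization: the weighted max deviation from
    the ideal point. Minimizing it over x targets one point on the front."""
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, ideal))

# Hypothetical bi-objective values for two candidate solutions.
ideal = (0.0, 0.0)
a = (1.0, 4.0)
b = (3.0, 2.0)
w = (0.5, 0.5)
# Under equal weights, b is preferred: max(1.5, 1.0) = 1.5 < max(0.5, 2.0) = 2.0
print(tchebycheff(a, w, ideal), tchebycheff(b, w, ideal))
```

Varying the weight vector across a well-spread set, as MOEA/D does, produces subproblems whose optima approximate the whole front.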
Cilla, Savino; Ianiro, Anna; Deodato, Francesco; Macchia, Gabriella; Digesù, Cinzia; Valentini, Vincenzo; Morganti, Alessio G
2017-11-27
We explored the Pareto front mathematical strategy to determine the optimal block margin and prescription isodose for stereotactic body radiotherapy (SBRT) treatments of liver metastases using the volumetric-modulated arc therapy (VMAT) technique. Three targets (planning target volumes [PTVs] = 20, 55, and 101 cc) were selected. A single-fraction dose of 26 Gy was prescribed (prescription dose [PD]). VMAT plans were generated for 3 different beam energies. Pareto fronts based on (1) different multileaf collimator (MLC) block margins around the PTV and (2) different prescription isodose lines (IDL) were produced. For each block margin, the greatest IDL fulfilling the coverage criterion (95% of the PTV receiving 100% of the PD) was considered as providing the optimal clinical plan for PTV coverage. Liver Dmean, V7Gy, and V12Gy were plotted against PTV coverage to generate the fronts. Gradient indexes (GI and mGI), homogeneity index (HI), and healthy liver irradiation in terms of Dmean, V7Gy, and V12Gy were calculated to compare different plans. In addition, each target was also optimized with a full-inverse planning engine to obtain a direct comparison with anatomy-based treatment planning system (TPS) results. About 900 plans were calculated to generate the fronts. GI and mGI show a U-shaped behavior as a function of block margin, with minimal values obtained at a +1 mm MLC margin. For these plans, the IDL ranges from 74% to 86%. GI and mGI also show a V-shaped behavior with respect to the HI index, with minimum values at 1 mm for all metrics, independent of tumor dimensions and beam energy. Full-inverse optimized plans gave worse results than the Pareto plans. In conclusion, Pareto fronts provide a rigorous strategy to choose clinically optimal plans in SBRT treatments. We show that a 1-mm MLC block margin provides the best results with regard to healthy liver tissue irradiation and steepness of dose falloff.
Lechner, Wolfgang; Kragl, Gabriele; Georg, Dietmar
2013-12-01
To investigate the differences in treatment plan quality between IMRT and VMAT with and without a flattening filter, using Pareto optimal fronts, for two treatment sites of different anatomic complexity. Pareto optimal fronts (POFs) were generated for six prostate and head-and-neck cancer patients by stepwise reduction of the constraint on the primary organ-at-risk (OAR) during the optimization process. 9-static-field IMRT and 360° single-arc VMAT plans with flattening filter (FF) and without flattening filter (FFF) were compared. The volume receiving 5 Gy or more (V5Gy) was used to estimate the low-dose exposure. Furthermore, the number of monitor units (MUs) and measurements of the delivery time (T) were used to assess the efficiency of the treatment plans. A significant increase in MUs was found when using FFF beams, while the treatment plan quality was at least equivalent to that of FF beams. T decreased by 18% for prostate cases and by 4% for head-and-neck cases for IMRT with FFF beams, but increased by 22% and 16%, respectively, for VMAT. A reduction of up to 5% in V5Gy was found for IMRT prostate cases with FFF beams. The evaluation of the POFs showed at least comparable treatment plan quality for FFF beams compared to FF beams for both treatment sites and modalities. For smaller targets, the advantageous characteristics of FFF beams could be better exploited.
Van Kesteren, Z; Janssen, T M; Damen, E; Van Vliet-Vroegindeweij, C
2012-01-01
To evaluate, in an objective way, the effect of leaf interdigitation and leaf width on volumetric modulated arc therapy plans in Pinnacle. Three multileaf collimators (MLCs) were modeled: two 10 mm leaf width MLCs, with and without interdigitating leaves, and a 5 mm leaf width MLC with interdigitating leaves. Three rectum patients and three prostate patients were used for the planning study. To compare treatment techniques objectively, a Pareto front comparison was carried out: 200 plans were generated in an automated way per patient per MLC model, resulting in a total of 3600 plans. From these plans, Pareto-optimal plans were selected and evaluated for various dosimetric variables. The capability of leaf interdigitation showed little dosimetric impact on the treatment plans when comparing the 10 mm leaf width MLC with and without leaf interdigitation. When comparing the 10 mm leaf width MLC with the 5 mm leaf width MLC, both with interdigitating leaves, an improvement in plan quality was observed. For both patient groups, the integral dose was reduced by 0.6 J for the thin MLC. For the prostate patients, the mean dose to the anal sphincter was reduced by 1.8 Gy and the conformity of the V95% by 0.02 using the thin MLC; the V65% of the rectum was reduced by 0.1% and the dose homogeneity by 1.5%. For rectum patients, the mean dose to the bowel was reduced by 1.4 Gy and the mean dose to the bladder by 0.8 Gy for the thin MLC. The conformity of the V95% was equivalent for the 10 and 5 mm leaf width MLCs for the rectum patients. We have objectively compared three types of MLCs in a planning study for prostate and rectum patients by analyzing Pareto-optimal plans generated in an automated way. Interdigitation of MLC leaves does not generate better plans using the SmartArc algorithm in Pinnacle. Changing the MLC leaf width from 10 to 5 mm generates better treatment plans, although the clinical relevance remains to be proven.
Zio, E.; Bazzo, R.
2010-01-01
In this paper, a framework is developed for identifying a limited number of representative solutions of a multiobjective optimization problem concerning the inspection intervals of the components of a safety system of a nuclear power plant. Pareto front solutions are first clustered into 'families', each of which is then synthetically represented by a 'head of the family' solution. Three clustering methods are analyzed. Level Diagrams are then used to represent, analyse and interpret the Pareto fronts reduced to their head-of-the-family solutions. Two decision situations are considered, without and with decision-maker preferences; the latter implies the introduction of a scoring system to rank the solutions with respect to the different objectives, and a fuzzy preference assignment is employed for this purpose. The results of applying the framework to the problem of optimizing the inspection intervals of a nuclear power plant safety system show that the clustering-based reduction maintains the Pareto front shape and relevant characteristics, while making it easier for the decision maker to select the final solution.
Kirlik, G; Zhang, H [University of Maryland School of Medicine, Baltimore, MD (United States)
2015-06-15
Purpose: To present a novel multi-criteria optimization (MCO) solution approach that generates a well-dispersed representation of the Pareto front for radiation treatment planning. Methods: Different algorithms have been proposed, and implemented in commercial planning software, to generate MCO plans for external-beam radiation therapy; these algorithms consider convex optimization problems. We propose a grid-based algorithm to generate well-dispersed treatment plans over the Pareto front. Our method is able to handle nonconvexity in the problem, so as to deal with dose-volume objectives/constraints and biological objectives such as equivalent uniform dose (EUD), tumor control probability (TCP), and normal tissue complication probability (NTCP). In addition, our algorithm is able to provide a single MCO plan when clinicians are targeting narrow bounds of objectives for patients; in this situation, usually none of the generated plans is within the bounds, and a solution is difficult to identify via manual navigation. We use the subproblem formulation of the grid-based algorithm to obtain a plan within the specified bounds. The subproblem aims to generate a solution that maps into the rectangle defined by the bounds; if such a solution does not exist, it generates the solution closest to the rectangle. We tested our method with 10 locally advanced head and neck cancer cases. Results: 8 objectives were used, including 3 objectives for the primary, high-risk, and low-risk target volumes, and 5 objectives, one for each of the organs-at-risk (OARs) (two parotids, spinal cord, brain stem, and oral cavity). Given tight bounds, uniform dose was achieved for all targets, while as much as 26% improvement was achieved in OAR sparing compared to clinical plans without MCO and a previously proposed MCO method. Conclusion: Our method is able to obtain well-dispersed treatment plans that better approximate convex and nonconvex Pareto fronts.
De Kerf, Geert; Van Gestel, Dirk; Mommaerts, Lobke; Van den Weyngaert, Danielle; Verellen, Dirk
2015-09-17
Modulation factor (MF) and pitch have an impact on Helical TomoTherapy (HT) plan quality, and HT users mostly use vendor-recommended settings. This study analyses the effect of these two parameters on both plan quality and treatment time for plans made with the TomoEdge planning software, using the concept of Pareto optimal fronts. More than 450 plans with different combinations of pitch [0.10-0.50] and MF [1.2-3.0] were produced. These HT plans, with a field width (FW) of 5 cm, were created for five head and neck patients, and the homogeneity index, conformity index, dose-near-maximum (D2), and dose-near-minimum (D98) were analysed for the planning target volumes, as well as the mean dose and D2 for the most critical organs at risk. For every dose metric, the median value was plotted against treatment time, and a Pareto-like analysis was used to show how pitch and MF influence both treatment time and plan quality. For small pitches (≤0.20), MF does not influence treatment time. The contrary is true for larger pitches (≥0.25), as lowering MF decreases both treatment time and plan quality until the maximum gantry speed is reached; at that point, treatment time saturates and only plan quality decreases further. The Pareto front analysis showed optimal combinations of pitch [0.23-0.45] and MF > 2.0 for a FW of 5 cm. Outside this range, plans become less optimal. As the vendor-recommended settings fall within this range, their use is validated.
Kangji Li
2017-02-01
This paper is concerned with the development of a high-resolution, control-friendly optimization framework for enclosed environments that helps improve thermal comfort, indoor air quality (IAQ), and the energy costs of the heating, ventilation and air conditioning (HVAC) system simultaneously. A computational fluid dynamics (CFD)-based optimization method, which couples algorithms implemented in Matlab with CFD simulation, is proposed. The key part of this method is a data-interaction mechanism that efficiently passes parameters between CFD simulations and optimization functions. A two-person office room is modeled for the numerical optimization. The multi-objective evolutionary algorithm, the non-dominated-and-crowding Sorting Genetic Algorithm II (NSGA-II), is used to explore the environment/energy Pareto front of the enclosed space. A performance analysis demonstrates the effectiveness of the presented optimization method.
Ikefuji, M.; Laeven, R.J.A.; Magnus, J.R.; Muris, C.H.M.
2013-01-01
In searching for an appropriate utility function in the expected utility framework, we formulate four properties that we want the utility function to satisfy. We conduct a search for such a function, and we identify Pareto utility as a function satisfying all four desired properties.
2011-01-01
On how the Italian economist Vilfredo Pareto arrived at his famous principle, and on that principle's influence on modern management. According to the Pareto principle, the greater part of our activity does not bring us closer to results but is a waste of time. Includes a diagram.
Petersson, Kristoffer; Ceberg, Crister; Engström, Per; Benedek, Hunor; Nilsson, Per; Knöös, Tommy
2011-06-01
small. The evaluation of the head-and-neck cases also showed that the plans generated in SharePlan were improved when more beams were used. The SharePlan Pareto front came close to the front for the TomoTherapy system when a sufficient number of beams were added. The results for plans generated with varying numbers of beams and segments demonstrated that the number of segments could be minimized, with maintained agreement between SharePlan and TomoTherapy plans, when 10-19 beams were used. This study showed, using Pareto front evaluation, that the plans generated in SharePlan are comparable to plans generated in other TPSs, and that they could be improved with the use of more beams. To minimize the number of segments needed in a plan while maintaining agreement between the converted IMRT plans and the original TomoTherapy plans, 10-19 beams should be used, depending on target complexity. SharePlan has proved to be useful and should thereby be a time-saving complement as a backup system for clinics with a single TomoTherapy system installed alongside conventional C-arm linacs.
Pareto optimization in algebraic dynamic programming.
Saule, Cédric; Giegerich, Robert
2015-01-01
Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all solutions for which no other candidate solution scores better under all objectives. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. Pareto optimization naturally occurs with genetic algorithms, albeit in a heuristic fashion. Non-heuristic Pareto optimization so far has been used only with a few applications in bioinformatics. We study exact Pareto optimization for two objectives in a dynamic programming framework. We define a binary Pareto product operator [Formula: see text] on arbitrary scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme [Formula: see text] correctly performs Pareto optimization over the same search space. We study different implementations of the Pareto operator with respect to their asymptotic and empirical efficiency. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective. For RNA structure prediction under the minimum free energy versus the maximum expected accuracy model, we show that the empirical size of the Pareto front remains within reasonable bounds. Pareto optimization lends itself to the comparative investigation of the behavior of two alternative scoring schemes for the same purpose. For the above scoring schemes, we observe that the Pareto front can be seen as a composition of a few macrostates, each consisting of several microstates that differ in the same limited way. We also study the relationship between abstract shape analysis and the Pareto front, and find that they extract information of a different nature from the folding space and can be meaningfully combined.
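The definition above, the set of candidates for which no other candidate scores better under all objectives, can be sketched for two minimized objectives with a generic sort-and-sweep filter. This is an illustration of the Pareto front concept, not the authors' Pareto product operator; the points are made up.

```python
def pareto_front(points):
    """Return the non-dominated points for two minimized objectives.
    Sort by the first objective, then keep each point that strictly
    improves the best second-objective value seen so far."""
    pts = sorted(points)                 # ascending in objective 1
    front, best2 = [], float("inf")
    for p in pts:
        if p[1] < best2:                 # strictly better in objective 2
            front.append(p)
            best2 = p[1]
    return front

candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)]
print(pareto_front(candidates))  # [(1, 5), (2, 2), (4, 1)]
```

Note that (2, 3) and (3, 4) are dropped because (2, 2) is at least as good in both objectives; the survivors are mutually incomparable, which is exactly the "better information than an amalgamated score" the abstract refers to.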
Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano
2012-01-01
Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes the
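The epsilon-constraint method the authors build their heuristic on can be sketched on a finite candidate set: minimize one objective subject to a bound on the other, and sweep the bound to trace Pareto optimal alternatives. The candidate objective pairs below are invented for illustration, not drawn from the paper's metabolic models.

```python
def epsilon_constraint(points, epsilons):
    """Epsilon-constraint sketch over a finite candidate set: for each
    bound eps on the second objective, pick the feasible point with the
    smallest first objective. Sweeping eps traces the Pareto front."""
    pareto = []
    for eps in epsilons:
        feasible = [p for p in points if p[1] <= eps]
        if feasible:
            best = min(feasible)         # minimal objective 1 among feasible
            if best not in pareto:
                pareto.append(best)
    return pareto

candidates = [(1, 5), (2, 3), (4, 1), (3, 4)]
print(epsilon_constraint(candidates, [1, 3, 5]))  # [(4, 1), (2, 3), (1, 5)]
```

In the paper's setting, each inner `min` is a full nonlinear GMA optimization rather than a lookup, which is why the authors focus on reducing the number of epsilon levels that must be solved.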
Pareto optimality in organelle energy metabolism analysis.
Angione, Claudio; Carapezza, Giovanni; Costanza, Jole; Lió, Pietro; Nicosia, Giuseppe
2013-01-01
In lower and higher eukaryotes, energy is collected or transformed in compartments, the organelles. The rich variety of organelle sizes, characteristics, and densities makes it difficult to build a general picture. In this paper, we make use of Pareto-front analysis to investigate the optimization of energy metabolism in mitochondria and chloroplasts. Using the Pareto optimality principle, we compare models of organelle metabolism on the basis of single- and multiobjective optimization, approximation techniques (Bayesian Automatic Relevance Determination), robustness, and pathway sensitivity analysis. Finally, we report the first analysis of the metabolic model of the hydrogenosome of Trichomonas vaginalis, which is found in several protozoan parasites. Our analysis has shown the importance of Pareto optimality for such comparisons and for insights into the evolution of the metabolism from cytoplasmic to organelle-bound, involving a model-order reduction. We report that Pareto fronts represent an asymptotic analysis useful for describing the metabolism of an organism aimed at maximizing two or more metabolite concentrations concurrently.
Southern Philippines and the policy of the second front in the global war on terrorism
Choi, Jihoon P.
2009-01-01
Approved for public release, distribution unlimited This thesis analyzes the effects of the United States' policy of the second front in the global war on terrorism (GWOT) on the conflict in the southern Philippines. The policy's reliance on intervention measures that are both "preemptive" and "direct" by military means echoes Mearsheimer's argument that "simply put, great powers are primed for offense." The question may be asked: how effective is the second front policy in terms of resolv...
Pareto joint inversion of 2D magnetotelluric and gravity data
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used; additionally, seismic data were used to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach, with the model described by a set of polygons, was used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO), adapted to handle two or more target functions at once, and an additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front; to optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages to the proposed approach to joint inversion problems. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution; this choice can be based on qualitative data that are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice; it is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions.
Kareema Abed Al-Kadim
2017-12-01
In this paper the Rayleigh Pareto distribution, denoted R_PD, is introduced. We state some useful functions and give some of its properties, such as the entropy function, mean, mode, median, variance, the r-th moment about the mean, the r-th moment about the origin, reliability, hazard functions, and the coefficients of variation, skewness, and kurtosis. Finally, we estimate the parameters; the aim of this work is to introduce a new distribution.
GENERALIZED DOUBLE PARETO SHRINKAGE.
Armagan, Artin; Dunson, David B; Lee, Jaeyong
2013-01-01
We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inference in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys priors. While it has a spike at zero like the Laplace density, it also has Student's t-like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.
Pareto-depth for multiple-query image retrieval.
Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O
2015-02-01
Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
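The Pareto-depth idea underlying such retrieval methods ranks items by which successive Pareto front they fall on. The sketch below is ours, not the authors' implementation, and assumes a simple two-criterion setting with both criteria minimized:

```python
def pareto_front(points):
    """Return the nondominated subset of 2-D points (both criteria minimized)."""
    # Sweep in sorted order of the first criterion; a point is nondominated
    # iff it improves the running minimum of the second criterion.
    front, best_y = [], float("inf")
    for x, y in sorted(points):
        if y < best_y:
            front.append((x, y))
            best_y = y
    return front

def pareto_depths(points):
    """Peel successive Pareto fronts; a point's depth is the index of the
    front it belongs to (duplicates are collapsed onto one front)."""
    remaining = list(points)
    fronts = []
    while remaining:
        f = pareto_front(remaining)
        fronts.append(f)
        remaining = [p for p in remaining if p not in f]
    return fronts
```

In a retrieval setting, each point would hold one dissimilarity value per query, and items on shallower fronts would be returned first.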
Lina Yang
2018-02-01
Land-use allocation is of great significance in urban development. This type of allocation is usually considered to be a complex multi-objective spatial optimization problem, whose optimized result is a set of Pareto-optimal solutions (the Pareto front) reflecting different tradeoffs in several objectives. However, obtaining a Pareto front is a challenging task, and the Pareto fronts obtained by state-of-the-art algorithms are still not sufficient. To achieve better Pareto solutions, taking the grid-representative land-use allocation problem with two objectives as an example, an artificial bee colony optimization algorithm for multi-objective land-use allocation (ABC-MOLA) is proposed. In this algorithm, the traditional ABC's search direction guiding scheme and solution maintaining process are modified. In addition, a knowledge-informed neighborhood search strategy, which utilizes the auxiliary knowledge of natural geography and spatial structures to facilitate the neighborhood spatial search around each solution, is developed to further improve the Pareto front's quality. A series of comparison experiments (a simulated experiment with a small data volume and a real-world data experiment for a large area) shows that all the Pareto fronts obtained by ABC-MOLA totally dominate the Pareto fronts of the other algorithms, which demonstrates ABC-MOLA's effectiveness in achieving Pareto fronts of high quality.
Bligaard, Thomas; Johannesson, Gisli Holmar; Ruban, Andrei
2003-01-01
Large databases that can be used in the search for new materials with specific properties remain an elusive goal in materials science. The problem is complicated by the fact that the optimal material for a given application is usually a compromise between a number of materials properties and the cost. In this letter we present a database consisting of the lattice parameters, bulk moduli, and heats of formation for over 64 000 ordered metallic alloys, which has been established by direct first-principles density-functional-theory calculations. Furthermore, we use a concept from economic theory, the Pareto-optimal set, to determine optimal alloy solutions for the compromise between low compressibility, high stability, and cost.
Pareto optimal pairwise sequence alignment.
DeRonne, Kevin W; Karypis, George
2013-01-01
Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
Global response to pandemic flu: more research needed on a critical front
Lim Meng-Kin
2006-10-01
If and when sustained human-to-human transmission of H5N1 becomes a reality, the world will no longer be dealing with sporadic avian flu borne along migratory flight paths of birds, but aviation flu – winged at subsonic speed along commercial air conduits to every corner of planet Earth. Given that air transportation is the one feature that most differentiates present day transmission scenarios from those in 1918, our present inability to prevent the spread of influenza by international air travel, as reckoned by the World Health Organization, constitutes a major weakness in the current global preparedness plan against pandemic flu. Despite the lessons of SARS, it is surprising that aviation-related health policy options have not been more rigorously evaluated, nor scientific research aimed at strengthening public health measures on the air transportation front more energetically pursued.
Naud, Catherine M.; Posselt, Derek J.; van den Heever, Susan C.
2015-01-01
The distribution of cloud and precipitation properties across oceanic extratropical cyclone cold fronts is examined using four years of combined CloudSat radar and CALIPSO lidar retrievals. The global annual mean cloud and precipitation distributions show that low-level clouds are ubiquitous in the post frontal zone while higher-level cloud frequency and precipitation peak in the warm sector along the surface front. Increases in temperature and moisture within the cold front region are associated with larger high-level but lower mid-/low level cloud frequencies and precipitation decreases in the cold sector. This behavior seems to be related to a shift from stratiform to convective clouds and precipitation. Stronger ascent in the warm conveyor belt tends to enhance cloudiness and precipitation across the cold front. A strong temperature contrast between the warm and cold sectors also encourages greater post-cold-frontal cloud occurrence. While the seasonal contrasts in environmental temperature, moisture, and ascent strength are enough to explain most of the variations in cloud and precipitation across cold fronts in both hemispheres, they do not fully explain the differences between Northern and Southern Hemisphere cold fronts. These differences are better explained when the impact of the contrast in temperature across the cold front is also considered. In addition, these large-scale parameters do not explain the relatively large frequency in springtime post frontal precipitation.
On the Truncated Pareto Distribution with applications
Zaninetti, Lorenzo; Ferraro, Mario
2008-01-01
The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, and more precisely to the statistical analysis of the masses of stars and the diameters of asteroids. In particular, a comparison between the usual Pareto distribution and its truncated version is presented. Finally, a possible physical mechanism that produces Pareto tails for the distribution is presented.
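As a hedged illustration of the truncated Pareto distribution discussed above (not code from the paper; the function and parameter names are ours), one can draw samples by inverting its CDF:

```python
import random

def sample_truncated_pareto(alpha, lo, hi, n, seed=0):
    """Inverse-CDF sampling from a Pareto(alpha) distribution truncated to
    [lo, hi].  CDF: F(x) = (1 - (lo/x)**alpha) / (1 - (lo/hi)**alpha)."""
    rng = random.Random(seed)
    c = 1.0 - (lo / hi) ** alpha  # normalizing factor from the truncation
    # Solve F(x) = u for x, with u uniform on (0, 1).
    return [lo * (1.0 - rng.random() * c) ** (-1.0 / alpha) for _ in range(n)]
```

Unlike the untruncated Pareto, all moments of the truncated version are finite, which is what makes it attractive for bounded quantities such as asteroid diameters.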
Turner, Stephen J.; Langmuir, Charles H.
2015-07-01
Petrogenetic models for convergent margins should be consistent with the global systematics of convergent margin volcanic compositions. A newly developed tool for compiling and screening data from the GEOROC database was used to generate a global dataset of whole rock chemical analyses from arc front stratovolcano samples. Data from 227 volcanoes within 31 volcanic arc segments were first averaged by volcano and then by arc to explore global systematics. Three different methods of data normalization produce consistent results that persist across a wide range of Mg# [Mg# =Mg / (Mg +Fe) ]. Remarkably coherent systematics are present among major and trace element concentrations and ratios, with the exception of three arcs influenced by mantle plumes and Peru/N. Chile, which is built on exceptionally thick crust. Chemical parameters also correlate with the thickness of the overlying arc crust. In addition to previously established correlations of Na6.0 with Ca6.0 and crustal thickness, correlations are observed among major elements, trace elements, and trace element ratios (e.g. La/Yb, Dy/Yb, Zr/Sm, Zr/Ti). Positive correlations include "fluid mobile," "high field strength," and "large ion lithophile" element groups, with concentrations that vary by a factor of five in all groups. Incompatible element enrichments also correlate well with crustal thickness, with the greatest enrichment found at arcs with the thickest crust. Intra-crustal processes, however, do not reproduce the global variations. High pressure fractionation produces intermediate magmas enriched in aluminum, but such magmas are rare. Furthermore, differences among magma compositions at various volcanic arcs persist from primitive to evolved compositions, which is inconsistent with the possibility that global variations are produced by crystal fractionation at any pressure. Linear relationships among elements appear to be consistent with mixing between depleted primary magma and an enriched contaminant
Record Values of a Pareto Distribution.
Ahsanullah, M.
The record values of the Pareto distribution, labelled Pareto (II) (alpha, beta, nu), are reviewed. The best linear unbiased estimates of the parameters in terms of the record values are provided. The prediction of the s-th record value based on the first m (s>m) record values is obtained. A classical Pareto distribution provides reasonably…
Pareto law and Pareto index in the income distribution of Japanese companies
Ishikawa, Atushi
2004-01-01
In order to study in detail the phenomenon that income distribution follows the Pareto law, we analyze a database of high-income companies in Japan. We find a quantitative relation between the average capital of the companies and the Pareto index: the larger the average capital becomes, the smaller the Pareto index becomes. From this relation, we can possibly explain why the Pareto index of company income distribution hardly changes, while the Pareto index of personal income distribution changes.
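The Pareto index in such income studies is typically estimated from the tail of the sample by maximum likelihood (the Hill estimator). A minimal sketch, not taken from the paper:

```python
import math

def pareto_index_mle(values, x_min):
    """Maximum-likelihood (Hill) estimate of the Pareto index alpha for the
    tail above x_min:  alpha = n / sum(log(x_i / x_min))."""
    tail = [x for x in values if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)
```

A smaller estimated alpha corresponds to a heavier tail, i.e. income more concentrated in the largest companies, which is the quantity the study relates to average capital.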
Tractable Pareto Optimization of Temporal Preferences
Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent
2003-01-01
This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
Wu Shiliang; Li Wantong
2009-01-01
This paper deals with the global asymptotic stability and uniqueness (up to translation) of bistable traveling fronts in a class of reaction-diffusion systems. The known results do not apply in solving these problems because the reaction terms do not satisfy the required monotone condition. To overcome the difficulty, a weak monotone condition is proposed for the reaction terms, which is called interval monotone condition. Under such a weak monotone condition, the existence and comparison theorem of solutions is first established for reaction-diffusion systems on R by appealing to the theory of abstract differential equations. The global asymptotic stability and uniqueness (up to translation) of bistable traveling fronts are then proved by the elementary super- and sub-solution comparison and squeezing methods for nonlinear evolution equations. Finally, these abstract results are applied to a two species competition-diffusion model and a system modeling man-environment-man epidemics.
Existence of pareto equilibria for multiobjective games without compactness
Shiraishi, Yuya; Kuroiwa, Daishi
2013-01-01
In this paper, we investigate the existence of Pareto and weak Pareto equilibria for multiobjective games without compactness. By employing an existence theorem of Pareto equilibria due to Yu and Yuan ([10]), several existence theorems of Pareto and weak Pareto equilibria for multiobjective games are established in a similar way to Flores-Bazán.
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
The exponentiated generalized Pareto distribution
Recently Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...
Boubendir, Yassine; Mendez, Vicenc; Rotstein, Horacio G.
2010-01-01
We study the evolution of fronts in a bistable equation with time-delayed global feedback in the fast reaction and slow diffusion regime. This equation generalizes the Hodgkin-Grafstein and Allen-Cahn equations. We derive a nonlinear equation governing the motion of fronts, which includes a term with delay. In the one-dimensional case this equation is linear. We study the motion of one- and two-dimensional fronts, finding a much richer dynamics than for the previously studied cases (without time-delayed global feedback). We explain the mechanism by which localized fronts created by inhibitory global coupling lose stability in a Hopf bifurcation as the delay time increases. We show that for certain delay times, the prevailing phase is different from that corresponding to the system in the absence of global coupling. Numerical simulations of the partial differential equation are in agreement with the analytical predictions.
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
Benazzo, Piero
2010-01-01
The hypothesis is that Pareto and Kaldor-Hicks efficiency have an aspect of sustainability in relation to inequality. The analysis finds that efficient situations reached by increasing inequality diminish long-term effective demand by a larger measure than the counterbalancing increases due to total factor productivity growth. Equity and efficiency in welfare economics, rather than being quite contrasting objectives, are thus related and mutually necessary. As such countries are called...
Phase transitions in Pareto optimal complex networks.
Seoane, Luís F; Solé, Ricard
2015-09-01
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement.
Yuan, Wei; You, Xinge; Xu, Jing; Leung, Henry; Zhang, Tianhang; Chen, Chun Lung Philip
2016-01-01
In linear cooperative spectrum sensing, the weights of secondary users and detection threshold should be optimally chosen to minimize missed detection probability and to maximize secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, here, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method to these SOO problems, nor any indication on the optimal number of Pareto solutions. Furthermore, NC has no preference over all Pareto solutions, while a designer may be only interested in some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.
He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris
2016-01-01
In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
Fernández Caballero, Juan Carlos; Martínez, Francisco José; Hervás, César; Gutiérrez, Pedro Antonio
2010-05-01
This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate and a high classification rate for each class. This last objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm: a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (the extremes of the Pareto front). These methodologies are applied to 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.
Vimal Savsani
2017-01-01
Most modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named multiobjective heat transfer search (MOHTS), which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approach of the elitist nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and for preserving diversity among the optimal set of solutions, respectively. Its capability to yield a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which has a set of five second-order linear ordinary differential equations. A half-car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves a better nondominated Pareto front with a widespread (diverse) set of optimal solutions as compared to NSGA-II, and a comparison of the extreme points of the obtained Pareto fronts further reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.
Spam nation the inside story of organized cybercrime-from global epidemic to your front door
Krebs, Brian
2014-01-01
In Spam Nation, investigative journalist and cybersecurity expert Brian Krebs unmasks the criminal masterminds driving some of the biggest spam and hacker operations targeting Americans and their bank accounts. Tracing the rise, fall, and alarming resurrection of the digital mafia behind the two largest spam pharmacies-and countless viruses, phishing, and spyware attacks-he delivers the first definitive narrative of the global spam problem and its threat to consumers everywhere. Blending cutting-edge research, investigative reporting, and firsthand interviews, this terrifying true story reveals how we unwittingly invite these digital thieves into our lives every day. From unassuming computer programmers right next door to digital mobsters like "Cosma"-who unleashed a massive malware attack that has stolen thousands of Americans' logins and passwords-Krebs uncovers the shocking lengths to which these people will go to profit from our data and our wallets. Not only are hundreds of thousands of Americans expos...
Post Pareto optimization-A case
Popov, Stoyan; Baeva, Silvia; Marinova, Daniela
2017-12-01
Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
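The paper's maximum entropy test is specialized, but the underlying question (whether a Pareto or a lognormal model fits a tail sample better) can be illustrated with a plain maximum-likelihood comparison. This sketch is illustrative only, not the authors' method, and the function names are ours:

```python
import math

def loglik_pareto(xs, x_min):
    """Maximized log-likelihood of a Pareto model on xs (all >= x_min).
    Density: f(x) = alpha * x_min**alpha * x**-(alpha + 1)."""
    n = len(xs)
    alpha = n / sum(math.log(x / x_min) for x in xs)  # MLE of the index
    return (n * math.log(alpha) + n * alpha * math.log(x_min)
            - (alpha + 1) * sum(math.log(x) for x in xs))

def loglik_lognormal(xs):
    """Maximized log-likelihood of a lognormal model on xs."""
    logs = [math.log(x) for x in xs]
    mu = sum(logs) / len(logs)
    s2 = sum((l - mu) ** 2 for l in logs) / len(logs)  # MLE variance of log x
    return sum(-math.log(x * math.sqrt(2 * math.pi * s2))
               - (math.log(x) - mu) ** 2 / (2 * s2) for x in xs)
```

Comparing the two maximized log-likelihoods (or an information criterion built from them) gives a crude version of the model-selection question the maximum entropy test addresses more rigorously.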
Determination of Pareto frontier in multi-objective maintenance optimization
Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco
2011-01-01
The objective of a maintenance policy generally is the minimization of the global maintenance cost, which involves not only the direct costs of the maintenance actions and spare parts, but also those due to system stops for preventive maintenance and downtime upon failure. For some operating systems, the failure event can be dangerous, so they are required to operate with a very high reliability level between two consecutive fixed stops. The present paper attempts to identify the set of elements on which maintenance actions should be performed so that the system can assure the required reliability level until the next fixed stop for maintenance, minimizing both the global maintenance cost and the total maintenance time. In order to solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto optimal frontier is generally a troublesome task. The paper proposes an algorithm able to rapidly overcome this problem, and its effectiveness is shown by an application to a case study regarding a complex series-parallel system.
Robust bayesian inference of generalized Pareto distribution ...
Using an exhaustive Monte Carlo study, we show that, with an appropriate generalized loss function, a robust Bayesian estimator of the model can be constructed. Key words: Bayesian estimation; Extreme value; Generalized Fisher information; Generalized Pareto distribution; Monte Carlo; ...
Axiomatizations of Pareto Equilibria in Multicriteria Games
Voorneveld, M.; Vermeulen, D.; Borm, P.E.M.
1997-01-01
We focus on axiomatizations of the Pareto equilibrium concept in multicriteria games based on consistency. Axiomatizations of the Nash equilibrium concept by Peleg and Tijs (1996) and Peleg, Potters, and Tijs (1996) have immediate generalizations. The axiomatization of Norde et al. (1996) cannot be
How Well Do We Know Pareto Optimality?
Mathur, Vijay K.
1991-01-01
Identifies sources of ambiguity in economics textbooks' discussion of the condition for efficient output mix. Points out that diverse statements without accompanying explanations create confusion among students. Argues that conflicting views concerning the concept of Pareto optimality are one source of ambiguity. Suggests clarifying additions to…
Performance-based Pareto optimal design
Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.
2008-01-01
A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or to make its impartial estimation consistent from case to case. Fuzzy logic and soft computing are
I. K. Romanova
2015-01-01
The article concerns multi-criteria optimization (MCO), which assumes that the quality criteria of system operation are independent, and specifies a way to improve the values of these criteria. Mutual contradiction of some criteria is a major problem in MCO. One of the most important areas of research is obtaining the so-called Pareto-optimal options. The subject of research is the Pareto front, also called the Pareto frontier. The article discusses front classifications by geometric representation for the two-criterion case. It presents a mathematical description of front characteristics using gradients and their projections. A review of current domestic and foreign literature has revealed that work on constructing the Pareto frontier aims at research under uncertainty, in the stochastic setting, and without restrictions. The topology of both the two- and the three-dimensional case is considered. The targets of modern applications are multi-agent systems and groups of players in differential games. However, none of the considered works addresses active management of the front. The objective of this article is to discuss the Pareto frontier problem in a new formulation, namely, with the active participation of system co-developers and/or decision makers (DM) in the management of the Pareto frontier. Such a formulation differs from the traditionally accepted approach based on the analysis of already existing solutions. The article discusses three ways to describe the quality of an object management system. The first is to use direct quality criteria for the model of a closed system of the general oscillatory form. The second is to study a specific two-loop aircraft control system using the angular velocity and normal acceleration loops. The third is the use of integrated quality criteria. In all three cases, the selected criteria are
van Zyl, J. Martin
2012-01-01
Random variables from the generalized Pareto distribution can be transformed into Pareto-distributed variables. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of the generalized Pareto distribution using transformed observations, based on the probability weighted method, is tested. It was found to improve the performance of the probability weighted estimator and performs well wit...
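The transformation the abstract relies on can be sketched numerically. This is an illustrative sketch only: it uses the closed-form maximum likelihood (Hill) estimator for the Pareto index rather than the paper's probability weighted estimator, and it assumes the GPD shape and scale are known for the transformation step.

```python
import numpy as np

# Known fact used here: if X ~ GPD(shape xi > 0, scale sigma), then
# Y = 1 + xi * X / sigma is classical Pareto with tail index alpha = 1/xi.
rng = np.random.default_rng(0)

xi, sigma, n = 0.5, 1.0, 200_000
u = rng.random(n)
x = sigma / xi * ((1.0 - u) ** -xi - 1.0)   # inverse-CDF sampling of the GPD

y = 1.0 + xi * x / sigma                    # Pareto(alpha = 1/xi) sample on [1, inf)
alpha_hat = n / np.log(y).sum()             # closed-form ML (Hill) estimate
xi_hat = 1.0 / alpha_hat
print(round(xi_hat, 3))
```

With 200,000 samples the recovered shape estimate lands very close to the true xi = 0.5.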
Pareto-Optimal Estimates of California Precipitation Change
Langenbrunner, Baird; Neelin, J. David
2017-12-01
In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.
Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.
Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine
2018-01-01
Preanalytical steps are the major source of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but stringent quality checks are needed in the preanalytical area as these processes are performed outside the laboratory. The sigma value depicts the performance of the laboratory and its quality measures. Hence, in the present study Six Sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate clinical biochemistry laboratory performance. This observational study was carried out over a period of 1 year, from November 2015 to 2016. A total of 1,44,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information, missing sample collection details in forms, and hemolysed, lipemic, inappropriate, or insufficient samples, and the total number of errors was calculated and converted into defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and the cumulative percentage. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9, and for other errors such as sample receiving time, stat requests, and type of sample the sigma values were 2.9, 2.6, and 2.8, respectively. For insufficient sample and improper ratio of blood to anticoagulant the sigma value was 4.3. The Pareto chart shows that 80% of the errors in requisition forms are contributed by a small share (20%) of causes, such as missing diagnosis information. The development of quality indicators and the application of Six Sigma and Pareto's principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
Pareto vs Simmel: residui ed emozioni
Silvia Fornari
2017-08-01
A hundred years after the publication of the Trattato di sociologia generale (Pareto 1988), we keep the study of Pareto alive and current with a contemporary rereading of his thought. Remembered by economists for his great intellectual versatility, he remains a rigorous and analytical scientist whose contributions are still discussed internationally. We analyze the aspects that led him to approach sociology, with the introduction of his well-known distinction of social action: logical and non-logical. This is a dichotomy used to account for social changes concerning the modes of action of men and women. As is well known, logical actions are behaviors driven by logic and reasoning, in which there is a direct cause-effect relation; they are the object of study of economists and are not dealt with by sociologists. Non-logical actions comprise all the types of human action that fall within the scope of the social sciences, and they represent the largest part of social action. They are actions guided by feelings, emotion, superstition, and so on, illustrated by Pareto in the Trattato di sociologia generale and in later essays, where he also takes up the concept of the heterogenesis of ends, first formulated by Giambattista Vico. According to this concept, human history, while retaining in potential the realization of certain ends, is not linear, and along its evolutionary path it may happen that man, in attempting to reach one goal, arrives at opposite outcomes. Pareto links the Neapolitan philosopher's definition to the types of social action and their distinction (logical, non-logical). For Pareto, the heterogenesis of ends is thus the outcome of a particular type of non-logical action of human beings and of the collectivity.
Monopoly, Pareto and Ramsey mark-ups
Ten Raa, T.
2009-01-01
Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.
He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris
2012-03-01
In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability versus novelty, affinity versus specificity, activity versus immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not "dominated"; that is, no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, Protein Engineering Pareto FRontier (PEPFR), that hierarchically subdivides the objective space, using appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. Copyright © 2011 Wiley Periodicals, Inc.
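PEPFR itself couples divide-and-conquer with dynamic or integer programming; as background only, the dominance relation it builds on can be illustrated with a brute-force non-dominated filter. This is a generic sketch, not the PEPFR algorithm, and the `designs` data is hypothetical.

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.

    Minimization is assumed in every coordinate: p dominates q when p is
    <= q in all objectives and strictly < in at least one.
    """
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (stability_cost, novelty_cost) pairs for five candidate designs.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(designs))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The quadratic scan is fine for illustration; the point of PEPFR is precisely to avoid enumerating dominated designs like this.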
RNA-Pareto: interactive analysis of Pareto-optimal RNA sequence-structure alignments.
Schnattinger, Thomas; Schöning, Uwe; Marchfelder, Anita; Kestler, Hans A
2013-12-01
Incorporating secondary structure information into the alignment process improves the quality of RNA sequence alignments. Instead of using fixed weighting parameters, sequence and structure components can be treated as different objectives and optimized simultaneously. The result is not a single, but a Pareto-set of equally optimal solutions, which all represent different possible weighting parameters. We now provide the interactive graphical software tool RNA-Pareto, which allows a direct inspection of all feasible results to the pairwise RNA sequence-structure alignment problem and greatly facilitates the exploration of the optimal solution set.
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape
Pareto Optimal Design for Synthetic Biology.
Patanè, Andrea; Santoro, Andrea; Costanza, Jole; Carapezza, Giovanni; Nicosia, Giuseppe
2015-08-01
Recent advances in synthetic biology call for robust, flexible and efficient in silico optimization methodologies. We present a Pareto design approach for the bi-level optimization problem associated with the overproduction of specific metabolites in Escherichia coli. Our method efficiently explores the high-dimensional genetic manipulation space, finding a number of trade-offs between synthetic and biological objectives, hence furnishing deeper biological insight into the addressed problem and important results for industrial purposes. We demonstrate the computational capabilities of our Pareto-oriented approach by comparing it with state-of-the-art heuristics in the overproduction problems of i) 1,4-butanediol, ii) myristoyl-CoA, iii) malonyl-CoA, iv) acetate and v) succinate. We show that our algorithms are able to gracefully adapt and scale to more complex models and more biologically relevant simulations of the genetic manipulations allowed. The results obtained for 1,4-butanediol overproduction significantly outperform those previously obtained, in terms of the 1,4-butanediol to biomass formation ratio and knock-out costs. In particular, the overproduction percentage is +662.7%, from 1.425 mmolh⁻¹gDW⁻¹ (wild type) to 10.869 mmolh⁻¹gDW⁻¹, with a knockout cost of 6. Moreover, the Pareto-optimal designs we have found in the fatty acid optimizations strictly dominate those obtained by the other methodologies, e.g., biomass and myristoyl-CoA exportation improvements of +21.43% (0.17 h⁻¹) and +5.19% (1.62 mmolh⁻¹gDW⁻¹), respectively. Furthermore, the CPU time required by our heuristic approach is more than halved. Finally, we implement pathway-oriented sensitivity analysis, epsilon-dominance analysis and robustness analysis to enhance our biological understanding of the problem and to improve the capabilities of the optimization algorithm.
A Pareto-Improving Minimum Wage
Eliav Danziger; Leif Danziger
2014-01-01
This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.
Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric
2010-07-20
Accurate protein loop structure models are important to understand the functions of many proteins. Identifying the native or near-native models by distinguishing them from misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model-ranking approach that integrates multiple knowledge- or physics-based scoring functions. The procedure for identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4- to 12-residue length using a functional space composed of several carefully selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the others within a loop model set.
Chapman, N.; McKinley, I.; Shea, M.; Smellie, J.
1993-01-01
This article describes the investigations of redox fronts performed at the Osamu Utsumi mine. Results obtained by the modelling groups on the rate of movement of the redox fronts and on the chemical reactions involved are discussed. Some of the most important rock-water interactions which occur at redox fronts can be modelled reasonably well, but the complex redox chemistry of elements like sulphur is poorly simulated. The observed enrichment of many trace elements close to the redox fronts could be of significance for high-level waste repositories, but cannot be quantified by existing models.
Pareto Improving Price Regulation when the Asset Market is Incomplete
Herings, P.J.J.; Polemarchakis, H.M.
1999-01-01
When the asset market is incomplete, competitive equilibria are constrained suboptimal, which provides scope for Pareto improving interventions. Price regulation can be such a Pareto improving policy, even when the welfare effects of rationing are taken into account. An appealing aspect of price
Pareto optimality in infinite horizon linear quadratic differential games
Reddy, P.V.; Engwerda, J.C.
2013-01-01
In this article we derive conditions for the existence of Pareto optimal solutions for linear quadratic infinite horizon cooperative differential games. First, we present a necessary and sufficient characterization for Pareto optimality which translates to solving a set of constrained optimal
Pareto 80/20 Law: Derivation via Random Partitioning
Lipovetsky, Stan
2009-01-01
The Pareto 80/20 Rule, also known as the Pareto principle or law, states that a small number of causes (20%) is responsible for a large percentage (80%) of the effect. Although widely recognized as a heuristic rule, this proportion has not been theoretically based. The article considers derivation of this 80/20 rule and some other standard…
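The textbook identity behind such derivations can be checked directly: for a Pareto Type I distribution with tail index alpha > 1, the richest fraction p of the population holds the share p^(1 - 1/alpha) of the total, and alpha = log 5 / log 4 ≈ 1.161 gives exactly 80/20. This is the standard closed form, not Lipovetsky's random-partitioning argument.

```python
import math

def top_share(p, alpha):
    """Share of total wealth held by the richest fraction p of a
    Pareto Type I population with tail index alpha > 1."""
    return p ** (1.0 - 1.0 / alpha)

alpha_8020 = math.log(5) / math.log(4)        # ~1.161: index yielding exactly 80/20
print(round(top_share(0.2, alpha_8020), 3))   # -> 0.8
```

The identity follows because 0.2^(ln 1.25 / ln 5) = exp(-ln 1.25) = 0.8; other alpha values give other splits (e.g. larger alpha means a more equal distribution).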
The exponential age distribution and the Pareto firm size distribution
Coad, Alex
2008-01-01
Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firms' ages to obtain the empirical Pareto distribution.
Mukhopadhyay, Somparna; Hazra, Lakshminarayan
2015-11-01
Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique.
Projections onto the Pareto surface in multicriteria radiation therapy optimization.
Bokrantz, Rasmus; Miettinen, Kaisa
2015-10-01
To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose-volume histogram constraints are used to prevent that the projection causes a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.
Analysis of a Pareto Mixture Distribution for Maritime Surveillance Radar
Graham V. Weinberg
2012-01-01
The Pareto distribution has been shown to be an excellent model for X-band high-resolution maritime surveillance radar clutter returns. Given the success of mixture distributions in radar, it is thus of interest to consider the effect of Pareto mixture models. This paper introduces a formulation of a Pareto intensity mixture distribution and investigates coherent multilook radar detector performance using this new clutter model. Clutter parameter estimates are derived from data sets produced by the Defence Science and Technology Organisation's Ingara maritime surveillance radar.
Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.
Otero-Muras, Irene; Banga, Julio R
2017-07-21
In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaption, and fold-change detection, respectively.
A Pareto Optimal Auction Mechanism for Carbon Emission Rights
Mingxi Wang
2014-01-01
The carbon emission rights do not fit well into the framework of existing multi-item auction mechanisms because of their own unique features. This paper proposes a new auction mechanism which converges to a unique Pareto optimal equilibrium in a finite number of periods. In the proposed auction mechanism, the assignment outcome is Pareto efficient and the carbon emission rights' resources are efficiently used. For commercial application and theoretical completeness, both discrete and continuous markets (represented by discrete and continuous bid prices, respectively) are examined, and the results show the existence of a Pareto optimal equilibrium under the constraint of individual rationality. With no ties, the Pareto optimal equilibrium can be further proven to be unique.
Kullback-Leibler divergence and the Pareto-Exponential approximation.
Weinberg, G V
2016-01-01
Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
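The abstract does not reproduce the divergence itself. As a sketch under simplifying assumptions (classical Pareto Type I on [1, ∞) and an exponential shifted to the same support, rather than the Pareto Type II model typical of the radar clutter literature), the Kullback-Leibler integral can be computed numerically and checked against a hand-derived closed form:

```python
import numpy as np

# Assumed setup (not from the abstract): P = Pareto Type I on [1, inf) with
# index alpha, Q = exponential shifted to the same support with rate lam.
alpha, lam = 3.0, 2.0

x = np.logspace(0.0, 3.0, 400_001)            # grid on [1, 1000]
p = alpha * x ** (-alpha - 1.0)               # Pareto pdf
log_q = np.log(lam) - lam * (x - 1.0)         # shifted-exponential log-pdf

# KL(P || Q) by the trapezoid rule (the tail beyond x = 1000 is negligible here).
f = p * (np.log(p) - log_q)
kl_numeric = float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Closed form, hand-derived from E_p[log x] = 1/alpha and E_p[x] = alpha/(alpha-1):
kl_exact = np.log(alpha / lam) - (alpha + 1.0) / alpha + lam / (alpha - 1.0)
print(round(kl_numeric, 4), round(kl_exact, 4))
```

Sweeping lam for a fixed alpha and minimizing this divergence is one way to pick the "optimal Exponential approximation" in the sense the abstract describes.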
Approximating convex Pareto surfaces in multiobjective radiotherapy planning
Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.
2006-01-01
Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-to-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing
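For a convex Pareto surface, well-distributed points can be generated by sweeping scalarization weights. The following toy biobjective is an illustrative assumption, not the article's IMRT planning problem or its specific point-distribution algorithm; it is chosen because the weighted-sum minimizer has the closed form x*(w) = 1 - w, so the sweep traces the convex front exactly.

```python
import numpy as np

# Toy convex biobjective: f1(x) = x^2, f2(x) = (x - 1)^2 on x in [0, 1].
# Setting d/dx [w*f1 + (1-w)*f2] = 0 gives x*(w) = 1 - w.
weights = np.linspace(0.0, 1.0, 11)
xs = 1.0 - weights
front = np.column_stack((xs ** 2, (xs - 1.0) ** 2))

for f1, f2 in front:
    print(f"{f1:.2f} {f2:.2f}")
```

Each printed pair is non-dominated: as the weight on f1 grows, f1 falls monotonically while f2 rises, which is the defining trade-off shape of a Pareto front.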
Pareto-optimal estimates that constrain mean California precipitation change
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
Kinetics of wealth and the Pareto law.
Boghosian, Bruce M
2014-04-01
An important class of economic models involve agents whose wealth changes due to transactions with other agents. Several authors have pointed out an analogy with kinetic theory, which describes molecules whose momentum and energy change due to interactions with other molecules. We pursue this analogy and derive a Boltzmann equation for the time evolution of the wealth distribution of a population of agents for the so-called Yard-Sale Model of wealth exchange. We examine the solutions to this equation by a combination of analytical and numerical methods and investigate its long-time limit. We study an important limit of this equation for small transaction sizes and derive a partial integrodifferential equation governing the evolution of the wealth distribution in a closed economy. We then describe how this model can be extended to include features such as inflation, production, and taxation. In particular, we show that the model with taxation exhibits the basic features of the Pareto law, namely, a lower cutoff to the wealth density at small values of wealth, and approximate power-law behavior at large values of wealth.
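The paper's analysis proceeds through a Boltzmann equation, but the underlying exchange rule is easy to simulate. Below is a minimal Monte Carlo sketch of the Yard-Sale model; the population size, transaction fraction gamma, and step count are arbitrary choices for illustration, not taken from the paper, and the sketch omits the paper's inflation, production, and taxation extensions.

```python
import numpy as np

def yard_sale(n_agents=1000, n_steps=50_000, gamma=0.1, seed=1):
    """Monte Carlo sketch of the Yard-Sale wealth-exchange model: in each
    pairwise transaction a fraction gamma of the *poorer* agent's wealth
    moves to the winner of a fair coin flip."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)                  # equal initial wealth
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue                       # need two distinct agents
        stake = gamma * min(w[i], w[j])
        if rng.random() < 0.5:
            i, j = j, i                    # swap so i is the winner
        w[i] += stake
        w[j] -= stake
    return w

w = yard_sale()
print(round(w.sum(), 3))                   # total wealth is conserved
```

Because the stake is a fraction of the poorer agent's wealth, wealth never goes negative, yet repeated play concentrates it: exactly the condensation behavior whose taxed variant yields Pareto-law features in the paper.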
Pareto-optimal phylogenetic tree reconciliation.
Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S; Kellis, Manolis
2014-06-15
Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomics and cophylogenetic studies. Our Python tools are freely available at www.cs.hmc.edu/~hadas/xscape. © The Author 2014. Published by Oxford University Press.
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP, and finally the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem, examine the evolved models, and pick out the best performing programs for further analysis.
Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt
2018-01-12
With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives productivity and yield for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m³ column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m³ column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration
Denio Dias Arrais
2009-03-01
This article examines the historical context that influenced marketing activities in the late 1980s, considering the relationships between advertising and the social, cultural, economic and communication events of the period. It traces the effects of this environment on the development of production and marketing strategies. The intention is not to question whether this environment brought benefits or harm, but to point out its effects on communication, particularly on the forms of dissemination of products, services or ideas. The 1980s, regarded by economists as the lost decade in terms of economic development, nevertheless showed considerable cultural and communicational ferment in Brazil. This was a period of maturation and consolidation of the global village, a stage of interaction between peoples made possible by mass communication and, not least, by considerable support from advertising. Marketing and advertising professionals, craftsmen of their time, drew inspiration from this environment. It is conventionally against this scenario that one speaks of globalization.
A Pareto-based multi-objective optimization algorithm to design energy-efficient shading devices
Khoroshiltseva, Marina; Slanzi, Debora; Poli, Irene
2016-01-01
Highlights: • We present a multi-objective optimization algorithm for shading design. • We combine Harmony search and Pareto-based procedures. • Thermal and daylighting performances of external shading were considered. • We applied the optimization process to a residential social housing in Madrid. - Abstract: In this paper we address the problem of designing new energy-efficient static daylight devices that will surround the external windows of a residential building in Madrid. Shading devices can in fact largely influence solar gains in a building and improve thermal and lighting comforts by selectively intercepting the solar radiation and by reducing the undesirable glare. A proper shading device can therefore significantly increase the thermal performance of a building by reducing its energy demand in different climate conditions. In order to identify the set of optimal shading devices that allow a low energy consumption of the dwelling while maintaining high levels of thermal and lighting comfort for the inhabitants we derive a multi-objective optimization methodology based on Harmony Search and Pareto front approaches. The results show that the multi-objective approach here proposed is an effective procedure in designing energy efficient shading devices when a large set of conflicting objectives characterizes the performance of the proposed solutions.
Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas
2015-01-01
This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.
Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization
Na Tian
2015-01-01
A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. The comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order has performance comparable to sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
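The external repository of nondominated solutions mentioned above reduces to a simple dominance filter. A minimal sketch for a minimization problem follows; the function names are illustrative, not from the paper:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """External repository update: discard the candidate if it is dominated;
    otherwise add it and drop any archive members it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(2, 3), (1, 4), (3, 1), (2, 2), (5, 5)]:
    archive = update_archive(archive, point)
print(sorted(archive))  # the nondominated set
```

Methods such as preference order and sigma value then differ only in how a global best is drawn from this archive.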
A Pareto upper tail for capital income distribution
Oancea, Bogdan; Pirjol, Dan; Andrei, Tudorel
2018-02-01
We present a study of the capital income distribution and of its contribution to the total income (capital income share) using individual tax income data in Romania, for 2013 and 2014. Using a parametric representation we show that the capital income is Pareto distributed in the upper tail, with a Pareto coefficient α ∼ 1 . 44 which is much smaller than the corresponding coefficient for wage- and non-wage-income (excluding capital income), of α ∼ 2 . 53. Including the capital income contribution has the effect of increasing the overall inequality measures.
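The tail coefficient α reported above can be estimated from the upper order statistics; the Hill estimator is the standard maximum-likelihood choice for a Pareto tail. A sketch on synthetic data follows; the sample size and tail fraction k are arbitrary choices for illustration, not the paper's procedure:

```python
import math
import random

def hill_alpha(data, k):
    """Hill estimator of the Pareto tail index alpha from the k largest
    observations: alpha_hat = k / sum(log(x_i / x_(k+1)))."""
    xs = sorted(data, reverse=True)
    x_k1 = xs[k]  # (k+1)-th largest value, used as the tail threshold
    return k / sum(math.log(x / x_k1) for x in xs[:k])

# Sanity check on exact Pareto samples with alpha = 1.44 (the value from the study),
# drawn by inverse-CDF sampling with scale 1.
rng = random.Random(0)
alpha_true = 1.44
sample = [(1 - rng.random()) ** (-1 / alpha_true) for _ in range(20000)]
print(hill_alpha(sample, 2000))
```

On real income data the choice of k (how deep into the tail to go) is the delicate step; here the whole sample is Pareto, so any moderate k recovers α.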
PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.
Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew
2011-09-01
In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number
Designing Pareto-superior demand-response rate options
Horowitz, I.; Woo, C.K.
2006-01-01
We explore three voluntary service options (real-time pricing, time-of-use pricing, and curtailable/interruptible service) that a local distribution company might offer its customers in order to encourage them to alter their electricity usage in response to changes in the electricity-spot-market price. These options are simple and practical, and make minimal information demands. We show that each of the options is Pareto-superior ex ante, in that it benefits both the participants and the company offering it, while not affecting the non-participants. The options are shown to be Pareto-superior ex post as well, except under certain exceptional circumstances.
Pareto-Zipf law in growing systems with multiplicative interactions
Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi
2018-06-01
Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.
Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network
Tomohiko Konno
2013-01-01
The firm size distribution is known to follow a Pareto distribution. In the present paper, we show that the Pareto distribution of firm size results from the spillover network model introduced in Konno (2010).
Can we reach Pareto optimal outcomes using bottom-up approaches?
V. Sanchez-Anguix (Victor); R. Aydoğan (Reyhan); T. Baarslag (Tim); C.M. Jonker (Catholijn)
2016-01-01
Classically, disciplines like negotiation and decision making have focused on reaching Pareto optimal solutions due to their stability and efficiency properties. Despite the fact that many practical and theoretical algorithms have successfully attempted to provide Pareto optimal solutions,
A Pareto scale-inflated outlier model and its Bayesian analysis
Scollnik, David P. M.
2016-01-01
This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...
Spectral-Efficiency - Illumination Pareto Front for Energy Harvesting Enabled VLC System
Abdelhady, Amr Mohamed Abdelaziz; Amin, Osama; Chaaban, Anas; Alouini, Mohamed-Slim
2017-01-01
The adopted optical system provides users with illumination and data communication services. The outdoor optical design objective is to maximize the illumination, while the communication design objective is to maximize the spectral efficiency (SE). The design
Multi-agent Pareto appointment exchanging in hospital patient scheduling
Vermeulen, I.B.; Bohté, S.M.; Somefun, D.J.A.; Poutré, La J.A.
2007-01-01
We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment exchanging algorithm:
Word frequencies: A comparison of Pareto type distributions
Wiegand, Martin; Nadarajah, Saralees; Si, Yuancheng
2018-03-01
Mehri and Jamaati (2017) [18] used Zipf's law to model word frequencies in Holy Bible translations for one hundred live languages. We compare the fit of Zipf's law to a number of Pareto type distributions. The latter distributions are shown to provide the best fit, as judged by a number of comparative plots and error measures. The fit of Zipf's law appears generally poor.
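Zipf's law predicts a word frequency inversely proportional to rank, i.e. a slope near -1 on a log-log rank-frequency plot. A quick empirical check via least squares is sketched below on synthetic frequencies; this is illustrative only, not the paper's corpus or error measures:

```python
import math

def zipf_exponent(freqs):
    """Least-squares slope of log(frequency) against log(rank).
    Zipf's law corresponds to a slope near -1."""
    freqs = sorted(freqs, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )

# Frequencies generated exactly by Zipf's law recover a slope of -1.
freqs = [1000.0 / r for r in range(1, 501)]
print(round(zipf_exponent(freqs), 6))
```

Comparing such a fit against Pareto-type alternatives, as the paper does, amounts to asking whether the empirical log-log plot is truly linear or curves away in the tail.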
Robustness analysis of bogie suspension components Pareto optimised values
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of a bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and that the probability of failure is small for parameter uncertainties with COV up to 0.1.
An Evolutionary Efficiency Alternative to the Notion of Pareto Efficiency
I.P. van Staveren (Irene)
2012-01-01
The paper argues that the notion of Pareto efficiency builds on two normative assumptions: the more general consequentialist norm of any efficiency criterion, and the strong no-harm principle of the prohibition of any redistribution during the economic process that hurts at least one
Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing
Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.
2006-01-01
The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval
Efficient approximation of black-box functions and Pareto sets
Rennen, G.
2009-01-01
In the case of time-consuming simulation models or other so-called black-box functions, we determine a metamodel which approximates the relation between the input- and output-variables of the simulation model. To solve multi-objective optimization problems, we approximate the Pareto set, i.e. the
Tsallis-Pareto like distributions in hadron-hadron collisions
Barnaföldi, G G; Ürmössy, K; Bíró, T S
2011-01-01
Non-extensive thermodynamics is a novel approach in high energy physics. In high-energy heavy-ion, and especially in proton-proton collisions, we are far from a canonical thermal state described by Boltzmann-Gibbs statistics. In these reactions low and intermediate transverse momentum spectra are extremely well reproduced by the Tsallis-Pareto distribution, but the physical origin of the Tsallis parameters is still an unsettled question. Here, we analyze whether the Tsallis-Pareto energy distribution also overlaps with hadron spectra at high-pT. We fitted data measured in proton-proton (proton-antiproton) collisions over a wide center-of-mass energy range, from 200 GeV at RHIC up to 7 TeV at the LHC. Furthermore, our test is extended to an investigation of a possible √s-dependence of the power in the Tsallis-Pareto distribution, motivated by QCD evolution equations. We found that Tsallis-Pareto distributions fit the high-pT data well over the wide center-of-mass energy range. Deviations from the fits appear at pT > 20-30 GeV/c, especially in the CDF data. Introducing a pT-scaling ansatz, the fits at low and intermediate transverse momenta still remain good, and the deviations tend to disappear at the highest-pT data.
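The Tsallis-Pareto spectrum used in such fits has the standard form f(pT) = A(1 + pT/(nT))^(-n), which interpolates between a Boltzmann exponential exp(-pT/T) for large n and a power law ~pT^(-n) at high pT. A sketch of this form follows; the parameter values are illustrative, not the fitted ones from the paper:

```python
import math

def tsallis_pareto(pt, n, T, A=1.0):
    """Tsallis-Pareto spectrum shape A * (1 + pt/(n*T))**(-n).
    As n -> infinity it tends to the Boltzmann factor A*exp(-pt/T);
    for pt >> n*T it falls off as a power law ~ pt**(-n)."""
    return A * (1.0 + pt / (n * T)) ** (-n)

# Large n recovers the Boltzmann-Gibbs exponential at pt = 1, T = 0.5:
print(abs(tsallis_pareto(1.0, 10**6, 0.5) - math.exp(-2.0)) < 1e-5)
```

The √s-dependence investigated in the paper concerns how the power n varies with collision energy, which this fixed-parameter sketch does not model.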
Compromise-optimal traction calculations on a Pareto set
V. V. Lahuta
2010-11-01
The problem of optimum traction calculations is considered as a problem of optimum resource allocation. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).
Geochemistry of Natural Redox Fronts
Hofmann, B.A.
1999-05-01
Redox fronts are important geochemical boundaries which need to be considered in safety assessment of deep repositories for radioactive waste. In most cases, selected host-rock formations will be reducing due to the presence of ferrous minerals, sulphides, etc. During construction and operation of the repository, air will be introduced into the formation. After repository closure, oxidising conditions may persist locally until all oxygen is consumed. In the case of high-level waste, radiolysis of water may provide an additional source of oxidants. Oxidising conditions within a repository are thus possible and potentially have a strong influence on the mobility of many elements. The rate of movement of redox fronts, the boundary between oxidising and reducing environments, and their influence on migrating radionuclides are thus important factors influencing repository performance. The present report is a review of elemental behaviour at natural redox fronts, based on published information and work of the author. Redox fronts are geochemically and geometrically variable manifestations of a global interface between generally oxidising geochemical milieux in contact with the atmosphere and generally reducing milieux in contact with rocks containing ferrous iron, sulphide and/or organic carbon. A classification of redox fronts based on a subdivision into continental near-surface, marine near-surface, and deep environments is proposed. The global redox interface is often located close to the surface of rocks and sediments and, sometimes, within bodies of water. Temperature conditions are close to ambient. A deeper penetration of the global redox front to depths of several kilometres is found in basins containing oxidised sediments (red beds) and in some hydrothermal circulation systems. Temperatures at such deep redox fronts may reach 200 °C. Both near-surface and deep redox fronts are sites of formation of economic deposits of redox-sensitive elements, particularly of
Mapping the Pareto optimal design space for a functionally deimmunized biotherapeutic candidate.
Salvat, Regina S; Parker, Andrew S; Choi, Yoonjoo; Bailey-Kellogg, Chris; Griswold, Karl E
2015-01-01
The immunogenicity of biotherapeutics can bottleneck development pipelines and poses a barrier to widespread clinical application. As a result, there is a growing need for improved deimmunization technologies. We have recently described algorithms that simultaneously optimize proteins for both reduced T cell epitope content and high-level function. In silico analysis of this dual objective design space reveals that there is no single global optimum with respect to protein deimmunization. Instead, mutagenic epitope deletion yields a spectrum of designs that exhibit tradeoffs between immunogenic potential and molecular function. The leading edge of this design space is the Pareto frontier, i.e. the undominated variants for which no other single design exhibits better performance in both criteria. Here, the Pareto frontier of a therapeutic enzyme has been designed, constructed, and evaluated experimentally. Various measures of protein performance were found to map a functional sequence space that correlated well with computational predictions. These results represent the first systematic and rigorous assessment of the functional penalty that must be paid for pursuing progressively more deimmunized biotherapeutic candidates. Given this capacity to rapidly assess and design for tradeoffs between protein immunogenicity and functionality, these algorithms may prove useful in augmenting, accelerating, and de-risking experimental deimmunization efforts.
Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi
2017-07-01
The current manufacturing environment has changed from traditional single-plant to multi-site supply chains where multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
Income inequality in Romania: The exponential-Pareto distribution
Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan
2017-03-01
We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
[Origination of Pareto distribution in complex dynamic systems].
Chernavskiĭ, D S; Nikitin, A P; Chernavskaia, O D
2008-01-01
The Pareto distribution, whose probability density function can be approximated at sufficiently large χ as ρ(χ) ∼ χ^(−α), where α ≥ 2, is of crucial importance from both the theoretical and practical points of view. The main reason is its qualitative distinction from the normal (Gaussian) distribution: the probability of large deviations is significantly higher. The notion that the Gauss law is universally applicable nevertheless remains widespread, despite the lack of objective confirmation in a variety of application areas. The origin of the Pareto distribution in dynamic systems embedded in a Gaussian noise field is considered. A simple one-dimensional model is discussed in which the system response, over a rather wide interval of the variable, can be quite precisely approximated by this distribution.
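The qualitative distinction stressed here, a far higher probability of large deviations than under the Gaussian, is easy to demonstrate numerically. A sketch with arbitrary illustrative parameters (Pareto tail index 1.5, threshold 10) follows:

```python
import random

def tail_fraction(samples, threshold):
    """Empirical probability that a sample exceeds the threshold."""
    return sum(1 for x in samples if x > threshold) / len(samples)

rng = random.Random(1)
n = 100_000
gauss = [rng.gauss(0.0, 1.0) for _ in range(n)]
# Pareto(scale=1, shape=1.5) drawn by inverse-CDF sampling.
pareto = [(1 - rng.random()) ** (-1 / 1.5) for _ in range(n)]

# P(X > 10) is 10**-1.5, about 3%, for this Pareto; for a standard
# Gaussian it is ~1e-23, i.e. unobservable in any finite sample.
print(tail_fraction(gauss, 10.0), tail_fraction(pareto, 10.0))
```

This is exactly the practical danger of assuming Gaussianity: events that a Gaussian model declares impossible occur routinely under a Pareto law.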
Using the Pareto Distribution to Improve Estimates of Topcoded Earnings
Philip Armour; Richard V. Burkhauser; Jeff Larrimore
2014-01-01
Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...
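Under the Pareto-tail assumption used in such cell-mean series, the mean of earnings above a topcode t has a closed form, E[X | X > t] = αt/(α − 1). A hedged sketch (the topcode and exponent below are made-up numbers, not the paper's estimates):

```python
def pareto_cell_mean(topcode, alpha):
    """Conditional mean above a topcode t, assuming a Pareto upper tail
    P(X > x) = (t / x)**alpha for x >= t; finite only for alpha > 1."""
    if alpha <= 1.0:
        raise ValueError("the Pareto mean is infinite for alpha <= 1")
    return alpha * topcode / (alpha - 1.0)

# Hypothetical: a $150,000 topcode with an estimated tail exponent of 2.5
print(pareto_cell_mean(150_000.0, 2.5))
```

The smaller the exponent (the heavier the tail), the further the imputed cell mean sits above the topcode, which is why understating the tail weight understates top earnings.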
Accelerated life testing design using geometric process for Pareto distribution
Mustafa Kamal; Shazia Zarrin; Arif Ul Islam
2013-01-01
In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for Pareto Distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using Fisher information matrix are also obtained. The statistical properties of the parameters ...
Small Sample Robust Testing for Normality against Pareto Tails
Stehlík, M.; Fabián, Zdeněk; Střelec, L.
2012-01-01
Vol. 41, No. 7 (2012), pp. 1167-1194. ISSN 0361-0918. Keywords: consistency; Hill estimator; t-Hill estimator; location functional; Pareto tail; power comparison; returns; robust tests for normality.
Pareto optimal design of sectored toroidal superconducting magnet for SMES
Bhunia, Uttam, E-mail: ubhunia@vecc.gov.in; Saha, Subimal; Chakrabarti, Alok
2014-10-15
Highlights: • The optimization approach minimizes both the magnet size and the necessary cable length of a sectored toroidal SMES unit. • The design approach is suitable for low temperature superconducting cable for a medium size SMES unit. • It investigates coil parameters with respect to practical engineering aspects. - Abstract: A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering the practical engineering constraints. The objectives include the minimization of the necessary superconductor length and of the torus overall size or volume, which determines a significant part of the cost of realizing the SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, and vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low temperature niobium–titanium based Rutherford type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with reasonably acceptable accuracy.
Multicriteria Similarity-Based Anomaly Detection Using Pareto Depth Analysis.
Hsiao, Ko-Jen; Xu, Kevin S; Calder, Jeff; Hero, Alfred O
2016-06-01
We consider the problem of identifying patterns in a data set that exhibits anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g., as measured by the nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains, there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including nonmetric measures, and one can test for anomalies by scalarizing using a nonnegative linear combination of them. If the relative importance of the different dissimilarity measures are not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multicriteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria, and shows superior performance on experiments with synthetic and real data sets.
Generalized Pareto optimum and semi-classical spinors
Rouleux, M.
2018-02-01
In 1971, S. Smale presented a generalization of the Pareto optimum which he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 Hamiltonian ℋ = ℋ(p) ∈ C∞(T*R²) to its normal form near a singular point of the Fresnel surface. Namely, we say that ℋ has the Pareto property if it decomposes, locally, up to conjugation with regular matrices, as ℋ(p) = u′(p) C(p) (u′(p))*, where u : R² → R² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix ("integrating factor"). In particular this applies in certain cases to the matrix Hamiltonian of elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.
A. P. Karpenko
2015-01-01
We consider a relatively new and rapidly developing class of methods for solving multi-objective optimization problems, based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front, of the problem. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed by J.-H. Ryu, S. Kim and H. Wan, is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradients and Hessian matrices of the objective functions. To build the (quadratic) meta objective functions, this work uses methods of experimental design theory, which involve calculating the values of these functions at the nodes of a grid covering the trust region (a sensing method of the search domain). Two groups of sensing methods are considered: hypercube-based and hypersphere-based methods. For each of these groups, a number of test multi-objective optimization tasks has been used to study the efficiency of the following grids: the Latin hypercube; a grid that is uniformly random in each dimension; and a grid based on LP sequences.
On the size distribution of cities: an economic interpretation of the Pareto coefficient.
Suh, S H
1987-01-01
"Both the hierarchy and the stochastic models of size distribution of cities are analyzed in order to explain the Pareto coefficient by economic variables. In hierarchy models, it is found that the rate of variation in the productivity of cities and that in the probability of emergence of cities can explain the Pareto coefficient. In stochastic models, the productivity of cities is found to explain the Pareto coefficient. New city-size distribution functions, in which the Pareto coefficient is decomposed by economic variables, are estimated." excerpt
The Pareto Analysis for Establishing Content Criteria in Surgical Training.
Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N
2016-01-01
Current surgical training is still highly dependent on expensive operating room (OR) experience. Although there have been many attempts to transfer more training to the skills laboratory, little research is focused on which technical behaviors can lead to the highest profit when they are trained outside the OR. The Pareto principle states that in any population that contributes to a common effect, a few account for the bulk of the effect. This principle has been widely used in business management to increase company profits. This study uses the Pareto principle for establishing content criteria for more efficient surgical training. A retrospective study was conducted to assess verbal guidance provided by 9 supervising surgeons to 12 trainees performing 64 laparoscopic cholecystectomies in the OR. The verbal corrections were documented, tallied, and clustered according to the aimed change in novice behavior. The corrections were rank ordered, and a cumulative distribution curve was used to calculate which corrections accounted for 80% of the total number of verbal corrections. In total, 253 different verbal corrections were uttered 1587 times and were categorized into 40 different clusters of aimed changes in novice behaviors. The 35 highest-ranking verbal corrections (14%) and the 11 highest-ranking clusters (28%) accounted for 80% of the total number of given verbal corrections. Following the Pareto principle, we were able to identify the aspects of trainee behavior that account for most corrections given by supervisors during a laparoscopic cholecystectomy on humans. This strategy can be used for the development of new training programs to prepare the trainee in advance for the challenges encountered in the clinical setting in an OR.
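The cumulative-distribution step described above (rank-order the tallies, then find how many top items account for 80% of the total) can be sketched generically; the tally below is invented for illustration:

```python
from itertools import accumulate

def pareto_cutoff(counts, share=0.80):
    """Return how many of the highest-ranking items account for `share`
    of the total count (the Pareto-analysis cutoff)."""
    ranked = sorted(counts, reverse=True)
    total = sum(ranked)
    for i, running in enumerate(accumulate(ranked), start=1):
        if running >= share * total:
            return i
    return len(ranked)

# Hypothetical tally of corrections per behavior cluster:
tally = [400, 300, 200, 150, 100, 80, 60, 40, 30, 20, 10, 5, 3, 2]
print(pareto_cutoff(tally))
```

With this toy tally, the 5 highest-ranking clusters already cover 80% of all corrections, which is the kind of concentration the Pareto principle predicts.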
Pareto-Optimal Model Selection via SPRINT-Race.
Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2018-02-01
In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that jointly optimize more than one predefined objective. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio test with indifference zone. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal model or mistakenly returning any clearly dominated model is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
Income dynamics with a stationary double Pareto distribution.
Toda, Alexis Akira
2011-04-01
Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways--(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model--and obtain the same value of 8.4.
Bayesian modeling to paired comparison data via the Pareto distribution
Nasir Abbas
2017-12-01
A probabilistic approach to building models for paired comparison experiments based on the comparison of two Pareto variables is considered. Analysis of the proposed model is carried out in classical as well as Bayesian frameworks. Informative and uninformative priors are employed to accommodate the prior information. A simulation study is conducted to assess the suitability and performance of the model under theoretical conditions. Appropriateness of fit of the model is also examined. The entire inferential procedure is illustrated by comparing certain cricket teams using a real dataset.
Towards a seascape typology. I. Zipf versus Pareto laws
Seuront, Laurent; Mitchell, James G.
Two data analysis methods, referred to as the Zipf and Pareto methods, initially introduced in economics and linguistics two centuries ago and subsequently used in a wide range of fields (word frequency in languages and literature, human demographics, finance, city formation, genomics and physics), are described and proposed here as a potential tool to classify space-time patterns in marine ecology. The aim of this paper is, first, to present the theoretical bases of Zipf and Pareto laws, and to demonstrate that they are strictly equivalent. In that way, we provide a one-to-one correspondence between their characteristic exponents and argue that the choice of technique is a matter of convenience. Second, we argue that the appeal of this technique is that it is assumption-free for the distribution of the data and regularity of sampling interval, as well as being extremely easy to implement. Finally, in order to allow marine ecologists to identify and classify any structure in their data sets, we provide a step by step overview of the characteristic shapes expected for Zipf's law for the cases of randomness, power law behavior, power law behavior contaminated by internal and external noise, and competing power laws illustrated on the basis of typical ecological situations such as mixing processes involving non-interacting and interacting species, phytoplankton growth processes and differential grazing by zooplankton.
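The one-to-one correspondence between the Zipf and Pareto exponents described above can be verified on a synthetic, noise-free power-law data set: if the CCDF (Pareto) slope is −α, the rank-size (Zipf) slope is −1/α. A minimal sketch:

```python
import math

def slope(xs, ys):
    # least-squares slope of ys regressed on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

alpha, n = 2.0, 1000
# Deterministic Pareto-like sample: the r-th largest value of a size-n
# sample with CCDF P(X > x) = x**(-alpha) is roughly (n / r)**(1 / alpha).
values = [(n / r) ** (1 / alpha) for r in range(1, n + 1)]

# Zipf plot: log(size) against log(rank) -> slope -1/alpha
zipf = slope([math.log(r) for r in range(1, n + 1)],
             [math.log(v) for v in values])

# Pareto plot: log(empirical CCDF) against log(size) -> slope -alpha
pareto = slope([math.log(v) for v in values],
               [math.log(r / n) for r in range(1, n + 1)])

print(zipf, pareto)
```

The two fitted slopes are reciprocals of each other (here −1/2 and −2), so which representation to use is indeed a matter of convenience.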
PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM
S. Prakash
2012-01-01
ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP) with two objectives, where one objective is linear and the other non-linear, has been considered, with the constraint that a job is assigned to only one worker, though a worker may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers with two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.
Evolutionary tradeoffs, Pareto optimality and the morphology of ammonite shells.
Tendler, Avichai; Mayo, Avraham; Alon, Uri
2015-03-07
Organisms that need to perform multiple tasks face a fundamental tradeoff: no design can be optimal at all tasks at once. Recent theory based on Pareto optimality showed that such tradeoffs lead to a highly defined range of phenotypes, which lie in low-dimensional polyhedra in the space of traits. The vertices of these polyhedra are called archetypes: the phenotypes that are optimal at a single task. Rigorously testing this theory requires measurements of thousands of species over hundreds of millions of years of evolution. Ammonoid fossil shells provide an excellent model system for this purpose. Ammonoids have a well-defined geometry that can be parameterized using three dimensionless features of their logarithmic-spiral-shaped shells. Their evolutionary history includes repeated mass extinctions. We find that ammonoids fill out a pyramid in morphospace, suggesting five specific tasks, one for each vertex of the pyramid. After mass extinctions, surviving species evolve to refill essentially the same pyramid, suggesting that the tasks are unchanging. We infer putative tasks for each archetype, related to economy of shell material, rapid shell growth, hydrodynamics and compactness. These results support Pareto optimality theory as an approach to study evolutionary tradeoffs, and demonstrate how this approach can be used to infer the putative tasks that may shape the natural selection of phenotypes.
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
Strong Convergence Bound of the Pareto Index Estimator under Right Censoring
Peng Zuoxiang
2010-01-01
Let {Xn, n ≥ 1} be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F(x) = 1 − x^(−1/γ) ℓF(x) with γ > 0, where ℓF(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right censored Pareto index estimator under second-order regularly varying conditions.
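For context, the classical (uncensored) Hill estimator of the Pareto-type index γ, which censored-data estimators such as the one in this record refine, can be sketched as follows; a deterministic quantile grid stands in for a random sample so the result is reproducible:

```python
import math

def hill_estimator(sample, k):
    """Classical Hill estimator of the extreme-value index gamma for a
    Pareto-type distribution F(x) = 1 - x**(-1/gamma) * l(x), based on
    the k largest order statistics of the sample."""
    xs = sorted(sample, reverse=True)
    threshold = xs[k]  # the (k+1)-th largest value
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k

# Deterministic quantile grid of an exact Pareto law with gamma = 0.5:
n, gamma = 2000, 0.5
grid = [(n / r) ** gamma for r in range(1, n + 1)]
print(hill_estimator(grid, k=200))
```

On this noise-free grid the estimate lands close to the true γ = 0.5; on real data the choice of k trades bias against variance.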
E. SCHNEIDER
2014-07-01
The article is part of a special issue on the occasion of the publication of the entire scientific correspondence of Vilfredo Pareto with Maffeo Pantaleoni. The author reconstructs the beginning of their correspondence and the debate in pure mathematical economics, and draws the main conclusions on the differences between Pareto's views and those of Marshall, Edgeworth and Fisher. JEL: B16, B31, C02, C60.
GAO Hongying; WU Kangping
2007-01-01
This paper estimates the Pareto exponent of the city size (population size and economy size) distribution for all provinces and three regions in China in 1997, 2000 and 2003 by OLS, comparatively analyzes the Pareto exponents across sections and over time, and empirically analyzes the factors that impact the Pareto exponents of provinces. Our analyses show that the size distributions of cities in China follow the Pareto distribution and exhibit structural features. Variations in the value of the P...
Dictatorship, liberalism and the Pareto rule: Possible and impossible
Boričić Branislav
2009-01-01
The current economic crisis has shaken belief in the capacity of neoliberal 'free market' policies. Numerous supporters of state intervention have arisen, and interest in social choice theory has revived. In this paper we consider three standard properties for aggregating individual preferences into social preferences: dictatorship, liberalism and the Pareto rule, together with their formal negations. The context of pure first-order classical logic makes it possible to show how some combinations of the above-mentioned conditions, under the hypothesis of unrestricted domain, form simple and reasonable examples of possible or impossible social choice systems. Due to their simplicity, these examples, including the famous 'liberal paradox', could have particular didactic value.
Optimal PMU Placement with Uncertainty Using Pareto Method
A. Ketabi
2012-01-01
This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization exercise in which the objective function is chosen to be the number of unobservable buses, determined by Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is then formulated. To achieve this, a DE algorithm based on the Pareto optimum method is proposed. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
Pareto analysis of critical factors affecting technical institution evaluation
Victor Gambhir
2012-08-01
With the change of education policy in 1991, more and more technical institutions are being set up in India. Some of these institutions provide quality education, while others merely concentrate on quantity. Stakeholders are left in a state of confusion when deciding on the best institute for their higher educational studies. Although various agencies, including the print media, publish rankings of these institutions every year, their results are controversial and biased. In this paper, the authors have endeavoured to identify the critical factors for technical institution evaluation from a literature survey. A Pareto analysis has also been performed to find the intensity of these critical factors in evaluation. This will not only help stakeholders take the right decisions, but will also help the management of institutions in benchmarking, by identifying the most important critical areas in which to improve the existing system. This will in turn help the Indian economy.
Origin of Pareto-like spatial distributions in ecosystems.
Manor, Alon; Shnerb, Nadav M
2008-12-31
Recent studies of cluster distribution in various ecosystems revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that these patch statistics are a manifestation of the law of proportionate effect. Mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (such as desertification) manifest themselves in a drastic change of the stability properties of spatial colonies.
Pareto optimization of an industrial ecosystem: sustainability maximization
J. G. M.-S. Monteiro
2010-09-01
This work investigates a procedure to design an industrial ecosystem for sequestering CO2 and consuming glycerol in a chemical complex with 15 integrated processes. The complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks Inc.) is connected to UNISIM to enable optimization. The objective is to attain maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
Giller, C A
2011-12-01
The use of conformity indices to optimize Gamma Knife planning is common, but does not address important tradeoffs between dose to tumor and normal tissue. Pareto analysis has been used for this purpose in other applications, but not for Gamma Knife (GK) planning. The goal of this work is to use computer models to show that Pareto analysis may be feasible for GK planning to identify dosimetric tradeoffs. We define a GK plan A to be Pareto dominant to B if the prescription isodose volume of A covers more tumor but not more normal tissue than B, or if A covers less normal tissue but not less tumor than B. A plan is Pareto optimal if it is not dominated by any other plan. Two different Pareto optimal plans represent different tradeoffs between dose to tumor and normal tissue, because neither plan dominates the other. 'GK simulator' software calculated dose distributions for GK plans, and was called repeatedly by a genetic algorithm to calculate Pareto dominant plans. Three irregular tumor shapes were tested in 17 trials using various combinations of shots. The mean number of Pareto dominant plans per trial was 59 ± 17 (sd). Different planning strategies were identified by large differences in shot positions, and 70 of the 153 coordinate plots (46%) showed differences of 5 mm or more. The Pareto dominant plans dominated other nearby plans. Pareto dominant plans represent different dosimetric tradeoffs and can be systematically calculated using genetic algorithms. Automatic identification of non-intuitive planning strategies may be feasible with these methods.
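The dominance relation defined in this record (plan A dominates B if it covers more tumour with no more normal tissue, or less normal tissue with no less tumour) translates directly into a Pareto filter. A sketch with hypothetical coverage volumes:

```python
def dominates(a, b):
    """Plan a = (tumour_covered, normal_covered) dominates plan b if it covers
    more tumour without covering more normal tissue, or covers less normal
    tissue without covering less tumour (the definition in the record)."""
    tumour_a, normal_a = a
    tumour_b, normal_b = b
    return ((tumour_a > tumour_b and normal_a <= normal_b) or
            (normal_a < normal_b and tumour_a >= tumour_b))

def pareto_optimal(plans):
    """Keep only the plans not dominated by any other plan."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

# Hypothetical (tumour, normal-tissue) coverage volumes:
plans = [(9.0, 2.0), (8.0, 1.0), (9.5, 3.5), (8.0, 2.5), (7.0, 1.0)]
print(pareto_optimal(plans))
```

Each surviving plan represents a distinct tradeoff: one can only gain tumour coverage by accepting more normal-tissue dose.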
Accident investigation of construction sites in Qom city using Pareto chart (2009-2012)
M. H. Beheshti
2015-07-01
Conclusions: Employing Pareto charts as a method for analyzing and identifying accident causes can play an effective role in the management of work-related accidents and the proper allocation of funds and time.
Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games
Victoria Lozan
2013-10-01
The Pareto-Nash equilibrium set (PNES) is described as the intersection of graphs of efficient response mappings. The problem of computing the PNES in finite multi-objective mixed-strategy games (Pareto-Nash games) is considered, and a method for PNES computing is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.
Strong Convergence Bound of the Pareto Index Estimator under Right Censoring
Bao Tao
2010-01-01
Let {Xn, n≥1} be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F(x) = 1 − x^(−1/γ) l_F(x), γ > 0, where l_F(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right-censored Pareto index estimator under second-order regularly varying conditions.
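For the uncensored case, the classical Hill estimator of the Pareto index γ from the k largest order statistics is the standard reference point. The sketch below is an illustration of that classical estimator only; the paper itself studies a right-censored variant, which this code does not implement:

```python
import math

def hill_estimator(sample, k):
    """Classical Hill estimator of the Pareto index gamma, computed from
    the k largest order statistics (uncensored case; illustration only)."""
    x = sorted(sample, reverse=True)   # x[0] >= x[1] >= ... >= x[n-1]
    if not 0 < k < len(x):
        raise ValueError("need 0 < k < n")
    threshold = x[k]                   # the (k+1)-th largest observation
    # average log-excess of the top-k observations over the threshold
    return sum(math.log(x[i] / threshold) for i in range(k)) / k
```

For an exact Pareto tail (l_F constant) the estimator is consistent as k grows with the sample size.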
Pareto Efficient Solutions of Attack-Defence Trees
Aslanyan, Zaruhi; Nielson, Flemming
2015-01-01
Attack-defence trees are a promising approach for representing threat scenarios and possible countermeasures in a concise and intuitive manner. An attack-defence tree describes the interaction between an attacker and a defender, and is evaluated by assigning parameters to the nodes, such as probability or cost of attacks and defences. In case of multiple parameters most analytical methods optimise one parameter at a time, e.g., minimise cost or maximise probability of an attack. Such methods may lead to sub-optimal solutions when optimising conflicting parameters, e.g., minimising cost while maximising probability. In order to tackle this challenge, we devise automated techniques that optimise all parameters at once. Moreover, in the case of conflicting parameters our techniques compute the set of all optimal solutions, defined in terms of Pareto efficiency. The developments are carried out...
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
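The peaks-over-threshold step and a GPD parameter fit can be sketched briefly. The paper uses Bayesian estimation; the method-of-moments fit below is a simpler illustrative stand-in, using mean = σ/(1−ξ) and var = σ²/((1−ξ)²(1−2ξ)), valid for shape ξ < 1/2:

```python
import statistics

def pot_exceedances(series, threshold):
    """Peaks-over-threshold: excesses of the series above the threshold."""
    return [x - threshold for x in series if x > threshold]

def gpd_fit_mom(excesses):
    """Method-of-moments GPD fit (illustrative stand-in for the paper's
    Bayesian approach). Inverting the GPD mean/variance formulas gives
    xi = (1 - m^2/v) / 2 and sigma = m * (1 - xi)."""
    m = statistics.fmean(excesses)
    v = statistics.variance(excesses)     # sample variance (Bessel)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma
```

For exponential excesses the fitted shape ξ is close to zero, recovering the exponential as the ξ→0 limit of the GPD.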
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
Classification as clustering: a Pareto cooperative-competitive GP approach.
McIntyre, Andrew R; Heywood, Malcolm I
2011-01-01
Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.
Pareto-Optimal Multi-objective Inversion of Geophysical Data
Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham
2018-01-01
In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
Igor Kaganovich
2000-01-01
Negative ions tend to stratify in electronegative plasmas with hot electrons (electron temperature Te much larger than ion temperature Ti, Te > Ti). The boundary separating a plasma containing negative ions from a plasma without negative ions is usually thin, so that the negative ion density falls rapidly to zero, forming a negative ion density front. We review theoretical, experimental and numerical results giving the spatio-temporal evolution of negative ion density fronts during plasma ignition, the steady state, and extinction (afterglow). During plasma ignition, negative ion fronts are the result of the break of smooth plasma density profiles during nonlinear convection. In a steady-state plasma, the fronts are boundary layers with steepening of ion density profiles, again due to nonlinear convection. During plasma extinction, however, the ion fronts are of a completely different nature. Negative ions diffuse freely in the plasma core (no convection), whereas the negative ion front propagates towards the chamber walls with a nearly constant velocity. The concept of fronts turns out to be very effective in the analysis of plasma density profile evolution in strongly non-isothermal plasmas.
Global stability-based design optimization of truss structures using ...
The quality of the current Pareto front obtained at the end of a whole genetic search is assessed according to its closeness to the ... better optimal design with a lower displacement value of 0.3075 in., satisfying the service ...
Tulio Rosembuj
2006-12-01
There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, and that the subjects who perform it are different, as are its objectives. The global is an invisible invasion of materials and immediate effects.
Birds shed RNA-viruses according to the pareto principle.
Jankowski, Mark D; Williams, Christopher J; Fair, Jeanne M; Owen, Jennifer C
2013-01-01
A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.
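The two heterogeneity summaries used in the meta-analysis, the Gini coefficient and the share of virus shed by the top 20% of hosts, are straightforward to compute from a list of per-host shedding counts. A minimal sketch (the input values are hypothetical, not data from the study):

```python
def gini(values):
    """Gini coefficient via the sorted-rank (mean absolute difference)
    formula: G = 2 * sum(i * x_(i)) / (n * total) - (n + 1) / n."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def top_share(values, fraction=0.20):
    """Fraction of total shedding contributed by the top `fraction`
    of hosts (e.g. the '20% shed 80%' Pareto Principle statistic)."""
    xs = sorted(values, reverse=True)
    k = max(1, round(fraction * len(xs)))
    return sum(xs[:k]) / sum(xs)
```

A perfectly even population gives G = 0, while concentration of shedding in a few hosts pushes G towards 1 and the top-20% share towards 1.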
Sánchez, M S; Sarabia, L A; Ortiz, M C
2012-11-19
Experimental designs for a given task should be selected on the basis of the problem being solved and of some criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different aspects. Technically this is a problem of multi-criteria optimization, which can be tackled from different views. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them with the property of being Pareto-optimal in the criteria needed by the user. Besides, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, usual in exchange algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
Song, Q Chelsea; Wee, Serena; Newman, Daniel A
2017-12-01
To reduce adverse impact potential and improve diversity outcomes from personnel selection, one promising technique is De Corte, Lievens, and Sackett's (2007) Pareto-optimal weighting strategy. De Corte et al.'s strategy has been demonstrated on (a) a composite of cognitive and noncognitive (e.g., personality) tests (De Corte, Lievens, & Sackett, 2008) and (b) a composite of specific cognitive ability subtests (Wee, Newman, & Joseph, 2014). Both studies illustrated how Pareto-weighting (in contrast to unit weighting) could lead to substantial improvement in diversity outcomes (i.e., diversity improvement), sometimes more than doubling the number of job offers for minority applicants. The current work addresses a key limitation of the technique: the possibility of shrinkage, especially diversity shrinkage, in the Pareto-optimal solutions. Using Monte Carlo simulations, sample size and predictor combinations were varied and cross-validated Pareto-optimal solutions were obtained. Although diversity shrinkage was sizable for a composite of cognitive and noncognitive predictors when sample size was at or below 500, diversity shrinkage was typically negligible for a composite of specific cognitive subtest predictors when sample size was at least 100. Diversity shrinkage was larger when the Pareto-optimal solution suggested substantial diversity improvement. When sample size was at least 100, cross-validated Pareto-optimal weights typically outperformed unit weights, suggesting that diversity improvement is often possible, despite diversity shrinkage. Implications for Pareto-optimal weighting, adverse impact, sample size of validation studies, and optimizing the diversity-job performance tradeoff are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Ottosson, R O; Karlsson, A; Behrens, C F
2010-08-21
The pencil beam dose calculation method is frequently used in modern radiation therapy treatment planning regardless of the fact that it is documented inaccurately for cases involving large density variations. The inaccuracies are larger for higher beam energies. As a result, low energy beams are conventionally used for lung treatments. The aim of this study was to analyze the advantages and disadvantages of dynamic IMRT treatment planning for high and low photon energy in order to assess if deviating from the conventional low energy approach could be favorable in some cases. Furthermore, the influence of motion on the dose distribution was investigated. Four non-small cell lung cancer cases were selected for this study. Inverse planning was conducted using Varian Eclipse. A total number of 31 dynamic IMRT plans, distributed amongst the four cases, were created ranging from PTV conformity weighted to normal tissue sparing weighted. All optimized treatment plans were calculated using three different calculation algorithms (PBC, AAA and MC). In order to study the influence of motion, two virtual lung phantoms were created. The idea was to mimic two different situations: one where the GTV is located centrally in the PTV and another where the GTV was close to the edge of the PTV. PBC is in poor agreement with MC and AAA for all cases and treatment plans. AAA overestimates the dose, compared to MC. This effect is more pronounced for 15 than 6 MV. AAA and MC both predict similar perturbations in dose distributions when moving the GTV to the edge of the PTV. PBC, however, predicts results contradicting those of AAA and MC. This study shows that PB-based dose calculation algorithms are clinically insufficient for patient geometries involving large density inhomogeneities. AAA is in much better agreement with MC, but even a small overestimation of the dose level by the algorithm might lead to a large part of the PTV being underdosed. 
It is advisable to use low energy as a default for tumor sites involving lungs. However, there might be situations where it is favorable to use high energy. In order to deviate from the recommended low energy convention, an accurate dose calculation algorithm (e.g. MC) should be consulted. The study underlines the inaccuracies introduced when calculating dose using a PB-based algorithm in geometries involving large density variations. PBC, in contrast to other algorithms (AAA and MC), predicts a decrease in dose when the density is increased.
Andrușcă Maria Carmen
2013-01-01
The field of globalization has highlighted an interdependence that implies a more harmonious understanding, shaped by the daily interaction between nations through the promotion of peace and the management of the streamlining and effectiveness of the global economy. For globalization to function, the developing countries, which can be helped by the developed ones, must be involved. The international community can contribute to the institution of the development environment of the gl...
Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin
Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.
2018-01-01
Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
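The core idea behind keeping the ε-approximate curve polynomially succinct is to round objective values geometrically (into powers of 1+ε) and keep one representative per cell before filtering dominated points. The sketch below illustrates that pruning step for two maximized objectives; it is a simplified illustration of the FPTAS idea, not the paper's tree-structured DP merge:

```python
import math

def eps_prune(points, eps):
    """Keep a succinct eps-approximate Pareto set for two maximized
    objectives: bucket each coordinate by powers of (1 + eps), keep one
    representative per bucket pair, then filter dominated points."""
    def cell(v):
        return 0 if v <= 0 else int(math.log(v) / math.log(1.0 + eps))
    best = {}
    for p in points:
        key = (cell(p[0]), cell(p[1]))
        if key not in best or p > best[key]:
            best[key] = p
    # non-dominated filter: sweep by decreasing first objective and keep
    # points that strictly improve the second objective
    kept = []
    for p in sorted(best.values(), key=lambda t: (-t[0], -t[1])):
        if not kept or p[1] > kept[-1][1]:
            kept.append(p)
    return kept
```

With a tiny ε the function reduces to a plain Pareto filter; larger ε trades accuracy for a smaller representative set.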
Distributed approximation of Pareto surfaces in multicriteria radiation therapy treatment planning
Bokrantz, Rasmus
2013-01-01
We consider multicriteria radiation therapy treatment planning by navigation over the Pareto surface, implemented by interpolation between discrete treatment plans. Current state of the art for calculation of a discrete representation of the Pareto surface is to sandwich this set between inner and outer approximations that are updated one point at a time. In this paper, we generalize this sequential method to an algorithm that permits parallelization. The principle of the generalization is to apply the sequential method to an approximation of an inexpensive model of the Pareto surface. The information gathered from the model is subsequently used for the calculation of points from the exact Pareto surface, which are processed in parallel. The model is constructed according to the current inner and outer approximations, and given a shape that is difficult to approximate, in order to avoid that parts of the Pareto surface are incorrectly disregarded. Approximations of comparable quality to those generated by the sequential method are demonstrated when the degree of parallelization is up to twice the number of dimensions of the objective space. For practical applications, the number of dimensions is typically at least five, so that a speed-up of one order of magnitude is obtained.
Mahmoodabadi, M J; Taherkhorsandi, M; Bagheri, A
2014-01-01
An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of the controller are usually determined by a tedious trial-and-error process. To eliminate this process and design the parameters of the proposed controller, multiobjective evolutionary algorithms, that is, the proposed method, modified NSGA-II, the Sigma method, and MATLAB's MOGA toolbox, are employed in this study. Among the evolutionary optimization algorithms used to design the controller for biped robots, the proposed method performs better, since it provides ample opportunities for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and the normalized summation of control effort. The obtained results elucidate the efficiency of the proposed controller in controlling a biped robot.
Nouiri, Issam
2017-11-01
This paper presents the development of multi-objective Genetic Algorithms to optimize chlorination design and management in drinking water networks (DWN). Three objectives have been considered: the improvement of chlorination uniformity (healthy objective), and the minimization of the number of chlorine booster stations and of the injected chlorine mass (economic objectives). The problem has been decomposed into medium-term and short-term subproblems. The proposed methodology was tested on hypothetical and real DWN. Results proved the ability of the developed optimization tool to identify relationships between the healthy and economic objectives as Pareto fronts. The proposed approach was efficient in computing solutions ensuring better chlorination uniformity while requiring the lowest injected chlorine mass when compared to other approaches. For the real DWN studied, chlorination optimization yielded a substantial improvement of free-chlorine-dosing uniformity and a meaningful chlorine mass reduction, in comparison with the conventional chlorination.
Sun, Kaibiao; Kasperski, Andrzej; Tian, Yuan
2014-10-01
The aim of this study is the optimization of a product-driven self-cycling bioprocess and the presentation of a way to determine the best possible decision variables out of a set of alternatives based on the designed model. Initially, a product-driven generalized kinetic model, which allows a flexible choice of the most appropriate kinetics, is designed and analysed. The optimization problem is posed as a bi-objective one, where maximization of biomass productivity and minimization of unproductive loss of substrate are the objective functions. Then, the Pareto fronts are calculated for exemplary kinetics. It is found that in the designed bioprocess, a decrease of the emptying/refilling fraction and an increase of the substrate feeding concentration cause an increase of the biomass productivity, while an increase of the emptying/refilling fraction and a decrease of the substrate feeding concentration cause a decrease of the unproductive loss of substrate. The preferred solutions are calculated using the minimum distance from an ideal solution method, with proposals for their modification derived from a decision maker's reactions to the generated solutions.
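The "minimum distance from an ideal solution" selection described above can be sketched as follows; the front values, the range normalization, and the function name are illustrative assumptions, not data or code from the study:

```python
# Pick a preferred point from a Pareto front as the one closest (in
# range-normalized Euclidean distance) to the ideal objective vector.
# Both objectives are treated as maximized; the front below is hypothetical.

def pick_closest_to_ideal(front):
    """front: list of (f1, f2) objective vectors, both maximized."""
    f1s = [p[0] for p in front]
    f2s = [p[1] for p in front]
    ideal = (max(f1s), max(f2s))              # componentwise best values
    r1 = (max(f1s) - min(f1s)) or 1.0         # normalization ranges
    r2 = (max(f2s) - min(f2s)) or 1.0
    def sq_dist(p):
        return ((ideal[0] - p[0]) / r1) ** 2 + ((ideal[1] - p[1]) / r2) ** 2
    return min(front, key=sq_dist)

# e.g. (biomass productivity, negated substrate loss), both maximized
front = [(1.0, -0.9), (0.8, -0.4), (0.5, -0.1)]
print(pick_closest_to_ideal(front))
```

The normalization keeps the two objectives commensurable; other distance weightings would encode different decision-maker preferences.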
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto
Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks
José Raúl Machado-Fernández
2016-12-01
The main problem faced by naval radars is the elimination of the clutter input, a distortion signal appearing mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter, has a low computational cost, and outperforms the classic method, which uses Maximum Likelihood Estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses the knowledge of clutter statistics to improve the stability of the detection, among other applications.
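For context, the classic maximum-likelihood estimator that the neural-network scheme is compared against has a simple closed form when the scale parameter is known. A minimal sketch on synthetic data; the parameter values and sample size are assumptions for illustration:

```python
import math
import random

def pareto_shape_mle(samples, x_m):
    """Classic ML estimate of the Pareto shape parameter alpha when the
    scale x_m is known: alpha_hat = n / sum(log(x_i / x_m))."""
    return len(samples) / sum(math.log(x / x_m) for x in samples)

# sanity check on synthetic Pareto(alpha=3, x_m=1) data, drawn by
# inverse-CDF sampling: x = x_m * u**(-1/alpha) for uniform u in (0, 1)
random.seed(0)
alpha, x_m = 3.0, 1.0
samples = [x_m * random.random() ** (-1.0 / alpha) for _ in range(50_000)]
print(pareto_shape_mle(samples, x_m))  # should be close to 3.0
```

The standard error of this estimator scales like alpha/sqrt(n), which is one baseline against which a learned estimator's precision can be judged.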
The Burr X Pareto Distribution: Properties, Applications and VaR Estimation
Mustafa Ç. Korkmaz
2017-12-01
In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods of the model parameters are performed. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR) by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model against the generalized Pareto model for financial institutions.
Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances
Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder
1992-01-01
As a generalization of the common assumption of exponential distribution of the exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using both the method of moments and probability-weighted moments. Maintaining the generalized Pareto distribution as the parent exceedance distribution, the T-year event is estimated assuming the exceedances to be exponentially distributed. For moderately long-tailed exceedance distributions and small to moderate sample sizes it is found, by comparing mean square errors of the T-year event estimators, that the exponential distribution is preferable to the correct generalized Pareto distribution despite the introduced model error and despite a possible rejection of the exponential hypothesis by a test of significance. For moderately short-tailed exceedance...
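Under the exponential-exceedance assumption used for the T-year event above, the estimate has the closed form x_T = u + mu·ln(lambda·T), with lambda the mean annual number of exceedances of threshold u and mu the mean exceedance. A minimal sketch; the threshold, record length, and exceedance values are hypothetical:

```python
import math

def t_year_event_exponential(threshold, exceedances, years_of_record, T):
    """T-year event for a partial duration series with exponentially
    distributed exceedances: x_T = u + mu * ln(lambda * T)."""
    lam = len(exceedances) / years_of_record   # annual exceedance rate
    mu = sum(exceedances) / len(exceedances)   # mean exceedance magnitude
    return threshold + mu * math.log(lam * T)

# hypothetical series: 4 exceedances of a 100-unit threshold in 2 years
print(t_year_event_exponential(100.0, [5.0, 10.0, 20.0, 40.0], 2.0, 100.0))
```

Replacing the exponential with a fitted generalized Pareto changes only the quantile formula; the paper's point is that for small samples the simpler model can have the lower mean square error.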
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
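A plain dominance filter recovers the full Pareto-efficient subset of a finite candidate list, including the unsupported points that every weighted-sum scan misses. A sketch with an assumed point set (not from the cited matrix permutation problems):

```python
def pareto_efficient(points):
    """Return the Pareto-efficient subset of a finite point list
    (all objectives minimized), preserving input order."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# (3, 3) is Pareto-efficient but unsupported: it lies above the segment
# through (0, 4) and (4, 0), so no positive weighted sum attains its
# minimum there, yet dominance filtering keeps it.
points = [(0, 4), (3, 3), (4, 0), (4, 4)]
print(pareto_efficient(points))
```

For large candidate sets generated heuristically, this filter is the final step that separates supported and unsupported efficient solutions from dominated ones.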
Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.
Elhossini, Ahmed; Areibi, Shawki; Dony, Robert
2010-01-01
This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.
Jarosław Rudy
2015-01-01
In this paper the job shop scheduling problem (JSP) with minimizing two criteria simultaneously is considered. JSP is a frequently used model in real-world applications of combinatorial optimization, yet multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and genetic algorithm (GA), for MOJSP. Both methods employ a technique taken from multi-criteria decision analysis to establish a ranking of solutions. ACO and GA differ in the method of keeping information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by said algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by the two methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.
Brodsky, S.
2004-11-30
In these lectures, I survey a number of applications of light-front methods to hadron and nuclear physics phenomenology and dynamics, including light-front statistical physics. Light-front Fock-state wavefunctions provide a frame-independent representation of hadrons in terms of their fundamental quark and gluon degrees of freedom. Nonperturbative methods for computing LFWFs in QCD are discussed, including string/gauge duality which predicts the power-law fall-off at high momentum transfer of light-front Fock-state hadronic wavefunctions with an arbitrary number of constituents and orbital angular momentum. The AdS/CFT correspondence has important implications for hadron phenomenology in the conformal limit, including an all-orders derivation of counting rules for exclusive processes. One can also compute the hadronic spectrum of near-conformal QCD assuming a truncated AdS/CFT space. Given the LFWFs, one can compute form factors, heavy hadron decay amplitudes, hadron distribution amplitudes, and the generalized parton distributions underlying deeply virtual Compton scattering. The quantum fluctuations represented by the light-front Fock expansion lead to novel QCD phenomena such as color transparency, intrinsic heavy quark distributions, diffractive dissociation, and hidden-color components of nuclear wavefunctions. A new test of hidden color in deuteron photodisintegration is proposed. The origin of leading-twist phenomena such as the diffractive component of deep inelastic scattering, single-spin asymmetries, nuclear shadowing and antishadowing is also discussed; these phenomena cannot be described by light-front wavefunctions of the target computed in isolation. Part of the anomalous NuTeV results for the weak mixing angle θ_W could be due to the non-universality of nuclear antishadowing for charged and neutral currents.
Yan Sun
2015-09-01
Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization considered from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is gained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, the sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality have good performance in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of the variation of the demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted in practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. The Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
Comparative analysis of Pareto surfaces in multi-criteria IMRT planning
Teichert, K; Suess, P; Serna, J I; Monz, M; Kuefer, K H [Department of Optimization, Fraunhofer Institute for Industrial Mathematics (ITWM), Fraunhofer Platz 1, 67663 Kaiserslautern (Germany); Thieke, C, E-mail: katrin.teichert@itwm.fhg.de [Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)
2011-06-21
In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head neck cases. In all cases no configuration is dominating its competitor over the whole Pareto set. For example, in one of the head neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than with the classical method of comparing single treatment plans.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
Reiser, M.
1982-01-01
An intense relativistic electron beam cannot propagate in a metal drift tube when the current exceeds the space charge limit. Very high charge density and electric field gradients (10^2 to 10^3 MV/m) develop at the beam front and the electrons are reflected. When a neutral gas or a plasma is present, collective acceleration of positive ions occurs, and the resulting charge neutralization enables the beam to propagate. Experimental results, theoretical understanding, and schemes to achieve high ion energies by external control of the beam front velocity will be reviewed.
A New Generalization of the Pareto Distribution and Its Application to Insurance Data
Mohamed E. Ghitany
2018-02-01
The classical Pareto distribution is one of the most attractive in statistics, particularly in the scenario of actuarial statistics and finance. For example, it is widely used when calculating reinsurance premiums. In recent years, many alternative distributions have been proposed to obtain better adjustments, especially when the tail of the empirical distribution of the data is very long. In this work, an alternative generalization of the Pareto distribution is proposed and its properties are studied. Finally, an application of the proposed model to the earthquake insurance data set is presented.
Andersen, Kurt Munk; Sandqvist, Allan
1997-01-01
We investigate the domain of definition and the domain of values for the successor function of a cooperative differential system x'=f(t,x), where the coordinate functions are concave in x for any fixed value of t. Moreover, we give a characterization of a weakly Pareto optimal solution.
On Usage of Pareto curves to Select Wind Turbine Controller Tunings to the Wind Turbulence Level
Odgaard, Peter Fogh
2015-01-01
Model predictive control has in recent publications shown its potential for lowering the cost of energy of modern wind turbines. Pareto curves can be used to evaluate the performance of these controllers with the multiple conflicting objectives of power and fatigue loads. In this paper an approach is presented to update a model predictive wind turbine controller tuning as the wind turbulence increases, since increased turbulence levels result in higher loads for the same controller tuning. In this paper the Pareto curves are computed using an industrial high-fidelity aero-elastic model. Simulations show...
Agterberg, Frits
2017-01-01
Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that
K. Gawdzińska
2011-04-01
This author discusses the use of selected quality management tools, i.e. the Pareto chart and the Ishikawa fishbone diagram, for the description of composite casting defects. The Pareto chart allows one to determine defect priority related with metallic composite castings, while the Ishikawa diagram indicates the causes of defect formation and enables calculating defect weights.
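The tabular core of a Pareto chart, ordering defect categories by frequency and attaching cumulative percentages, can be sketched as follows; the defect names and counts are hypothetical, not data from the article:

```python
def pareto_chart_data(defect_counts):
    """Sort defect categories by descending count and attach cumulative
    percentages -- the tabular basis of a Pareto chart."""
    total = sum(defect_counts.values())
    rows, cumulative = [], 0
    for name, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
        cumulative += count
        rows.append((name, count, round(100.0 * cumulative / total, 1)))
    return rows

# hypothetical tallies of metallic composite casting defects
counts = {"porosity": 42, "inclusions": 27, "cold shuts": 18, "misruns": 13}
for row in pareto_chart_data(counts):
    print(row)
```

Reading the cumulative column off such a table identifies the "vital few" defect categories that account for most occurrences.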
Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.
2007-01-01
Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered solved when the Pareto optimal set, i.e., the set of non-dominated solutions, is found. Multi-objective evolutionary algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from it. Thus, a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the best region of the Pareto frontier in accordance with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. The methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria.
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
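For a finite candidate set, the threshold-of-acceptability idea resembles an epsilon-constraint scan: minimize one objective while capping the other at successively relaxed thresholds. A minimal sketch under that interpretation; the candidate values, thresholds, and tie-breaking rule are assumptions, not the paper's construction:

```python
def threshold_scan(candidates, thresholds):
    """For each acceptability threshold t on f2, minimize f1 subject to
    f2 <= t (ties broken on f2). For a discrete candidate set each
    minimizer is Pareto-optimal; candidates are (f1, f2), both minimized."""
    front = []
    for t in thresholds:
        feasible = [p for p in candidates if p[1] <= t]
        if feasible:
            best = min(feasible, key=lambda p: (p[0], p[1]))
            if best not in front:
                front.append(best)
    return front

candidates = [(1, 5), (2, 3), (4, 2), (6, 1), (5, 4)]
print(threshold_scan(candidates, thresholds=[1, 2, 3, 5]))
```

The secondary tie-break on f2 is what upgrades each stage's minimizer from weakly Pareto-optimal to Pareto-optimal, mirroring the convergence guarantee the abstract mentions.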
Dipolarization Fronts from Reconnection Onset
Sitnov, M. I.; Swisdak, M. M.; Merkin, V. G.; Buzulukova, N.; Moore, T. E.
2012-12-01
Dipolarization fronts observed in the magnetotail are often viewed as signatures of bursty magnetic reconnection. However, until recently spontaneous reconnection was considered to be fully prohibited in the magnetotail geometry because of the linear stability of the ion tearing mode. Recent theoretical studies showed that spontaneous reconnection could be possible in the magnetotail geometries with the accumulation of magnetic flux at the tailward end of the thin current sheet, a distinctive feature of the magnetotail prior to substorm onset. That result was confirmed by open-boundary full-particle simulations of 2D current sheet equilibria, where two magnetotails were separated by an equilibrium X-line and weak external electric field was imposed to nudge the system toward the instability threshold. To investigate the roles of the equilibrium X-line, driving electric field and other parameters in the reconnection onset process we performed a set of 2D PIC runs with different initial settings. The investigated parameter space includes the critical current sheet thickness, flux tube volume per unit magnetic flux and the north-south component of the magnetic field. Such an investigation is critically important for the implementation of kinetic reconnection onset criteria into global MHD codes. The results are compared with Geotail visualization of the magnetotail during substorms, as well as Cluster and THEMIS observations of dipolarization fronts.
Plum, Maja
Globalization is often referred to as external to education - a state of affairs facing the modern curriculum with numerous challenges. In this paper it is examined as internal to curriculum; analysed as a problematization in a Foucaultian sense, that is, as a complex of attentions, worries, and ways of reasoning, producing curricular variables. The analysis is made through an example of early childhood curriculum in Danish pre-school, and the way the curricular variable of the pre-school child comes into being through globalization as a problematization, carried forth by the comparative practices of PISA...
F. Gerard Adams
2008-01-01
The rapid globalization of the world economy is causing fundamental changes in patterns of trade and finance. Some economists have argued that globalization has arrived and that the world is "flat". While the geographic scope of markets has increased, the author argues that new patterns of trade and finance are a result of the discrepancies between "old" countries and "new". As the differences are gradually wiped out, particularly if knowledge and technology spread worldwide, the t...
TU-C-17A-01: A Data-Based Development for Practical Pareto Optimality Assessment and Identification
Ruan, D; Qi, S; DeMarco, J; Kupelian, P; Low, D
2014-01-01
Purpose: To develop an efficient Pareto optimality assessment scheme to support plan comparison and practical determination of best-achievable treatment plan goals. Methods: Pareto efficiency reflects the tradeoffs among competing target coverage and normal tissue sparing in multi-criterion optimization (MCO) based treatment planning. Assessing and understanding Pareto optimality provides insightful guidance for future planning. However, current MCO-driven Pareto estimation makes relaxed assumptions about the Pareto structure and insufficiently accounts for practical limitations in beam complexity, leading to performance upper bounds that may be unachievable. This work proposed an alternative data-driven approach that implicitly incorporates the practical limitations, and identifies the Pareto frontier subset by eliminating dominated plans incrementally using the Edgeworth Pareto hull (EPH). The exactness of this elimination process also permits the development of a hierarchical procedure for speedup when the plan cohort size is large, by partitioning the cohort and performing elimination in each subset before a final aggregated elimination. The developed algorithm was first tested on 2D and 3D cases where accuracy can be reliably assessed. As a specific application, the algorithm was applied to compare systematic plan quality for the lower head-and-neck, amongst 4 competing treatment modalities. Results: The algorithm agrees exactly with brute-force pairwise comparison and visual inspection in low dimensions. The hierarchical algorithm shows a sqrt(k)-fold speedup, with k being the number of data points in the plan cohort, demonstrating good efficiency enhancement for heavy testing tasks. Application to plan performance comparison showed superiority of tomotherapy plans for the lower head-and-neck, and revealed a potential nonconvex Pareto frontier structure. Conclusion: An accurate and efficient scheme to identify Pareto frontier from a plan cohort has been
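The core elimination step, removing every plan dominated by another, can be sketched in a few lines. This is the brute-force pairwise baseline the abstract compares against, not the authors' hierarchical EPH procedure, and it assumes all criteria are to be minimized:

```python
# Brute-force Pareto-frontier identification by eliminating dominated points.
# "dominates" assumes every criterion is to be minimized.

def dominates(a, b):
    """a dominates b if a is no worse in every criterion and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(plans):
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

plans = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.5), (2.5, 2.5), (4.0, 4.0)]
print(pareto_frontier(plans))  # the dominated plans (2.5, 2.5) and (4.0, 4.0) drop out
```

The pairwise scan is O(k^2) in the cohort size k, which is exactly what motivates the hierarchical partition-then-aggregate speedup described above.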
Pareto-Efficiency, Hayek’s Marvel, and the Invisible Executor
Kakarot-Handtke, Egmont
2014-01-01
This non-technical contribution to the RWER-Blog deals with the interrelations of market clearing, efficient information processing through the price system, and distribution. The point of entry is a transparent example of Pareto-efficiency taken from the popular book How Markets Fail.
Barmby, Tim; Smith, Nina
1996-01-01
This paper analyses the labour supply behaviour of households in Denmark and Britain. It employs models in which the preferences of individuals within the household are explicitly represented. The households are then assumed to decide on their labour supply in a Pareto-Optimal fashion. Describing...
Yang Sun
2018-01-01
Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators on decision-making, specifically on defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
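Scalarization turns a reward vector into a single score before standard reinforcement learning is applied. A minimal sketch contrasting linear scalarization with a Chebyshev-style scalarization toward an ideal point (the reward vectors, weights, and ideal point are invented; STOM itself involves an interactively adjusted reference point and is not reproduced here):

```python
# Two common ways to scalarize a multi-objective reward vector (maximization).
# Linear scalarization can miss solutions on non-convex Pareto fronts;
# Chebyshev-style scalarization (weighted distance to an ideal point) can reach them.

def linear(rewards, weights):
    return sum(w * r for w, r in zip(weights, rewards))

def chebyshev(rewards, weights, ideal):
    # negative of the worst weighted shortfall from the ideal point
    return -max(w * (z - r) for w, r, z in zip(weights, rewards, ideal))

candidates = [(10.0, 1.0), (5.0, 5.0), (1.0, 10.0)]   # (security, availability)
weights, ideal = (0.5, 0.5), (10.0, 10.0)

best_linear = max(candidates, key=lambda r: linear(r, weights))
best_cheby = max(candidates, key=lambda r: chebyshev(r, weights, ideal))
print(best_linear, best_cheby)
```

With equal weights, the linear score ties the two extreme points and skips the balanced solution, while the Chebyshev score selects it, which is the kind of gap that motivates non-linear scalarization methods such as STOM.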
Approximating the Pareto set of multiobjective linear programs via robust optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a
Reddy, P.V.; Engwerda, J.C.
2011-01-01
In this article we derive necessary and sufficient conditions for the existence of Pareto optimal solutions for infinite horizon cooperative differential games. We consider games defined by non autonomous and discounted autonomous systems. The obtained results are used to analyze the regular
Kyroudi, Archonteia; Petersson, Kristoffer; Ghandour, Sarah; Pachoud, Marc; Matzinger, Oscar; Ozsahin, Mahmut; Bourhis, Jean; Bochud, François; Moeckli, Raphaël
2016-08-01
Multi-criteria optimization provides decision makers with a range of clinical choices through Pareto plans that can be explored during real time navigation and then converted into deliverable plans. Our study shows that dosimetric differences can arise between the two steps, which could compromise the clinical choices made during navigation.
Huang, Hui; Ning, Jixian
2017-01-01
Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ -paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of a Pareto minimal solution of set optimization problems.
Gökhan Gökdere
2014-05-01
In this paper, closed form expressions for the moments of the truncated Pareto order statistics are obtained by using the conditional distribution. We also derive some results for the moments which will be useful for moment computations based on ordered data.
Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives can not be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust
Searching for the Pareto frontier in multi-objective protein design.
Nanda, Vikas; Belure, Sandeep V; Shir, Ofer M
2017-08-01
The goal of protein engineering and design is to identify sequences that adopt three-dimensional structures of desired function. Often, this is treated as a single-objective optimization problem, identifying the sequence-structure solution with the lowest computed free energy of folding. However, many design problems are multi-state, multi-specificity, or otherwise require concurrent optimization of multiple objectives. There may be tradeoffs among objectives, where improving one feature requires compromising another. The challenge lies in determining solutions that are part of the Pareto optimal set: designs where no further improvement can be achieved in any of the objectives without degrading one of the others. Pareto optimality problems are found in all areas of study, from economics to engineering to biology, and computational methods have been developed specifically to identify the Pareto frontier. We review progress in multi-objective protein design, the development of Pareto optimization methods, and present a specific case study using multi-objective optimization methods to model the tradeoff between three parameters, stability, specificity, and complexity, of a set of interacting synthetic collagen peptides.
Karanikas, Nektarios
2016-01-01
Although reengineering is strategically advantageous for organisations in order to keep functional and sustainable, safety must remain a priority and respective efforts need to be maintained. This paper suggests the combination of soft system methodology (SSM) and Pareto analysis on the scope of
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
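The hybrid Pareto density can be sketched by stitching a generalized Pareto tail onto a Gaussian body at a junction point, choosing the tail so the two pieces join continuously and renormalizing. This is a simplified single-component sketch with illustrative parameter values; the paper's parameterization, which also controls the junction through the tail parameters and embeds the component in a mixture with neural-network-predicted parameters, is more involved:

```python
import math

def hybrid_pareto_pdf(y, mu=0.0, sigma=1.0, u=1.0, xi=0.3):
    """Gaussian body for y <= u, generalized Pareto tail for y > u.

    The tail is scaled to match the Gaussian height at the junction u,
    and Z renormalizes the stitched density to integrate to 1.
    Simplified sketch of the hybrid Pareto construction; mu, sigma, u, xi
    are illustrative values.
    """
    def npdf(x):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    def ncdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    beta = 1.0                       # GPD scale; tail mass below is computed for this choice
    Z = ncdf(u) + npdf(u) * beta     # the GPD piece integrates to npdf(u) * beta
    if y <= u:
        return npdf(y) / Z
    return npdf(u) * (1.0 + xi * (y - u) / beta) ** (-1.0 / xi - 1.0) / Z

# crude numeric check that the stitched density integrates to ~1
step, total, y = 0.001, 0.0, -10.0
while y < 200.0:
    total += hybrid_pareto_pdf(y) * step
    y += step
print(round(total, 2))
```

The shape parameter xi controls how heavy the upper tail is, which is exactly the third parameter the abstract describes beyond location and spread.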
Model-based problem solving through symbolic regression via pareto genetic programming
Vladislavleva, E.
2008-01-01
Pareto genetic programming methodology is extended by additional generic model selection and generation strategies that (1) drive the modeling engine to creation of models of reduced non-linearity and increased generalization capabilities, and (2) improve the effectiveness of the search for robust
The application of analytical methods to the study of Pareto - optimal control systems
I. K. Romanova
2014-01-01
The subject of this article is methods of multicriteria optimization and their application to the parametric synthesis of double-circuit control systems when individual criteria conflict. The basis for solving multicriteria problems is the fundamental principle of multicriteria choice, the Edgeworth-Pareto principle. Obtaining Pareto-optimal variants under conflicting criteria does not mean reaching a final decision; the resulting set of options is only offered to the designer (the decision maker). An important issue with traditional numerical methods is their computational cost. Examples are methods for probing the parameter space, including those using uniform grids and uniformly distributed sequences; computational approximation of the Pareto bounds is a very complex task. The purpose of this work is the development of fairly simple search methods for Pareto-optimal solutions for the case of criteria given in analytical form. The proposed solution is based on studying the properties of the analytical dependences of the criteria, for a case not covered so far in the literature, namely a problem topology in which the indifference curves (level lines) do not touch. It is shown that compromise solutions can be identified for such problems. The angular position of the antigradient to the indifference curves in the parameter space, relative to the coordinate axes, is used. Propositions on the comonotonicity and contramonotonicity characteristics and on the angular characteristics of the antigradient are formulated to determine Pareto-optimal solutions. A general calculation algorithm is considered: determine the admissible range of parameter values; investigate the comonotonicity and contramonotonicity properties; build the level (indifference) curves; determine the touch type: one-sided (the problem is not strictly multicriteria) or two-sided (the objective relates to the Pareto
Pelce, Pierre
1989-01-01
In recent years, much progress has been made in the understanding of interface dynamics of various systems: hydrodynamics, crystal growth, chemical reactions, and combustion. Dynamics of Curved Fronts is an important contribution to this field and will be an indispensable reference work for researchers and graduate students in physics, applied mathematics, and chemical engineering. The book consists of a 100-page introduction by the editor and 33 seminal articles from various disciplines.
Trade-off bounds for the Pareto surface approximation in multi-criteria IMRT planning
Serna, J I; Monz, M; Kuefer, K H; Thieke, C
2009-01-01
One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included into the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, the so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects. The proposed scheme is applied to two artificial cases and one clinical case of a paraspinal tumor. For all cases, the quality of the Pareto surface approximation is measured with respect to the number of computed plans, and the range of values occurring in the approximation for different criteria is compared. Through enforcing trade-off bounds, the scheme disregards clinically irrelevant plans during the approximation. Thereby, the number of plans necessary to achieve a good approximation quality can be significantly reduced. Thus, trade-off bounds are an effective tool to focus the planning and to reduce computation time.
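The effect of a trade-off bound can be illustrated with an epsilon-dominance-style filter: a plan is discarded when another plan is better by at least a clinically significant margin in some criterion while never trailing by more than an insignificant tolerance elsewhere. The margins and plan values below are invented for illustration; the paper integrates such bounds into the surface-approximation scheme itself rather than post-filtering:

```python
# Epsilon-dominance-style filtering: drop plans whose advantage over a
# clearly better plan is clinically insignificant. All criteria minimized.
# "tol" (insignificant difference) and "gain" (significant difference)
# are illustrative thresholds.

def practically_dominated(p, q, tol=0.1, gain=1.0):
    """True if q beats p by >= gain somewhere and never trails p by more than tol."""
    return (all(qi <= pi + tol for qi, pi in zip(q, p))
            and any(qi <= pi - gain for qi, pi in zip(q, p)))

def filter_plans(plans):
    return [p for p in plans
            if not any(practically_dominated(p, q) for q in plans if q is not p)]

# two Pareto-optimal plans: the second is only 0.05 better in criterion 1
# but 2.0 worse in criterion 2, so it is removed as clinically irrelevant
plans = [(3.00, 5.0), (2.95, 7.0)]
print(filter_plans(plans))
```

Both plans are Pareto-optimal in the strict sense, which is exactly why plain dominance cannot prune the second one and a relative-quality bound is needed.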
Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions
Liu, C.; Charpentier, R.R.; Su, J.
2011-01-01
Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. ?? 2011 International Association for Mathematical Geosciences.
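The tail-truncated method's plot of field size versus size rank corresponds to the familiar log-log rank-size regression for a Pareto tail. A minimal sketch under simplifying assumptions (complete sampling, no discovery bias, plain ordinary least squares; the paper's estimators include truncation corrections and discovery-process modeling not shown here):

```python
import math
import random

# Estimate a Pareto shape parameter from the slope of log(rank) vs log(size).
# For a Pareto(alpha) sample, rank(x) ~ n * x**(-alpha), so
# log(rank) ~ log(n) - alpha * log(size).

def pareto_shape_from_ranks(sizes):
    xs = sorted(sizes, reverse=True)                    # rank 1 = largest field
    pts = [(math.log(x), math.log(r + 1)) for r, x in enumerate(xs)]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, y in pts))      # OLS slope
    return -slope                                       # alpha estimate

# synthetic check: sample from Pareto(alpha=1.5) by inverse-CDF sampling
random.seed(1)
alpha_true = 1.5
sample = [(1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(5000)]
print(round(pareto_shape_from_ranks(sample), 2))
```

With thousands of synthetic "discoveries" the estimate stabilizes near the true shape; with fewer than 100 it fluctuates noticeably, mirroring the stability behavior reported in the abstract.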
Pareto navigation: algorithmic foundation of interactive multi-criteria IMRT planning.
Monz, M; Küfer, K H; Bortfeld, T R; Thieke, C
2008-02-21
Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection' and 'restriction'. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle -- a set constructed from pre-computed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions. Pareto navigation was implemented in the MIRA navigator software and allows real-time manipulation of the current plan and the set of considered plans. The changes are triggered by simple mouse operations on the so-called navigation star and lead to real-time updates of the navigation star and the dose visualizations. Since any Pareto-optimal plan in the plan bundle can be found with just a few navigation operations the MIRA navigator allows a fast and directed plan determination. Besides, the concept allows for a refinement of the plan bundle, thus offering a middle course between single plan computation and multi-criteria optimization. Pareto navigation offers so far unmatched real-time interactions, ease of use and plan variety, setting it apart from the multi-criteria IMRT planning methods proposed so far.
Radiative thermal conduction fronts
Borkowski, K.J.; Balbus, S.A.; Fristrom, C.C.
1990-01-01
The discovery of the O VI interstellar absorption lines in our Galaxy by the Copernicus observatory was a turning point in our understanding of the Interstellar Medium (ISM). It implied the presence of widespread hot (approximately 10^6 K) gas in disk galaxies. The detection of highly ionized species in quasi-stellar objects' absorption spectra may be the first indirect observation of this hot phase in external disk galaxies. Previous efforts to understand extensive O VI absorption line data from our Galaxy were not very successful in locating the regions where this absorption originates. The location at interfaces between evaporating ISM clouds and hot gas was favored, but recent studies of steady-state conduction fronts in spherical clouds by Ballet, Arnaud, and Rothenflug (1986) and Bohringer and Hartquist (1987) rejected evaporative fronts as the absorption sites. The researchers report here on time-dependent nonequilibrium calculations of planar conductive fronts whose properties match well with observations, and suggest reasons for the difference between their results and the above. Magnetic fields were included in additional models, not reported here, and the conclusions are not affected by their presence.
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets.
Reinhold Steinacker
2016-12-01
In 1906, the Austrian scientist Max Margules published a paper on temperature stratification in resting and non-accelerated moving air. The paper derives conditions for stationary slopes of air mass boundaries and was an important forerunner of frontal theories. Its formulation of relations between changes in density and geostrophic wind across the front is basically a discrete version of the thermal wind balance equation. The paper was highly influential and is still being cited to the present day. This paper accompanies an English translation of Margules’ seminal paper. We conclude here our “Classic Papers” series of the Meteorologische Zeitschrift.
Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart
2010-01-01
Multi objective optimization of externalities of traffic is performed solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.
A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...
Akbar A. Tabriz
2011-07-01
Concurrent engineering (CE) is one of the most widely known techniques for simultaneous planning of product and process design. In concurrent engineering, design processes are often complicated by multiple conflicting criteria and discrete sets of feasible alternatives. Thus, multi-criteria decision making (MCDM) techniques are integrated into CE to perform concurrent design. This paper proposes a design framework governed by MCDM techniques, which are in conflict in the sense of competing for common resources to achieve variously different performance objectives such as financial, functional, environmental, etc. The Pareto MCDM model is applied to polyethylene pipe concurrent design governed by four criteria to determine the best alternative design to Pareto-compromise design.
Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives
Warmflash, Aryeh; Siggia, Eric D; Francois, Paul
2012-01-01
The computational evolution of gene networks functions like a forward genetic screen to generate, without preconceptions, all networks that can be assembled from a defined list of parts to implement a given function. Frequently networks are subject to multiple design criteria that cannot all be optimized simultaneously. To explore how these tradeoffs interact with evolution, we implement Pareto optimization in the context of gene network evolution. In response to a temporal pulse of a signal, we evolve networks whose output turns on slowly after the pulse begins, and shuts down rapidly when the pulse terminates. The best performing networks under our conditions do not fall into categories such as feed forward and negative feedback that also encode the input–output relation we used for selection. Pareto evolution can more efficiently search the space of networks than optimization based on a single ad hoc combination of the design criteria.
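Pareto optimization inside an evolutionary loop typically ranks the population into successive non-dominated fronts and selects from the best fronts first. A minimal sketch of that ranking step, assuming all objectives are minimized (the selection, mutation, and gene-network model themselves are omitted, and the toy objective vectors are invented):

```python
# Rank a population into successive Pareto fronts (all objectives minimized).
# Front 0 holds the non-dominated individuals; removing it exposes front 1, etc.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(points):
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pop = [(1.0, 3.0), (3.0, 1.0), (2.0, 2.0), (2.0, 3.0), (4.0, 4.0)]
for i, f in enumerate(pareto_fronts(pop)):
    print(i, sorted(f))
```

Selecting parents front by front preserves the whole trade-off surface across generations instead of collapsing it onto a single ad hoc weighting of the criteria.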
Houghton, J.C.
1988-01-01
The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. ?? 1988 International Association for Mathematical Geology.
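A truncated shifted Pareto distribution can be sketched directly from its definition: take a Pareto-type tail, shift it, truncate it to a finite size range, and renormalize. The parameter names and values below are illustrative, not Houghton's calibration for the Minnelusa play:

```python
# Truncated shifted Pareto sketch: density proportional to (x + s)**(-alpha - 1)
# on the field-size range [0, T], renormalized. alpha, s, T are illustrative.

def tsp_cdf(x, alpha=0.8, s=1.0, T=100.0):
    """Closed-form CDF of the shifted Pareto density truncated to [0, T]."""
    x = min(max(x, 0.0), T)
    num = s ** -alpha - (x + s) ** -alpha
    den = s ** -alpha - (T + s) ** -alpha
    return num / den

# most of the probability mass sits at small sizes: the "J-shape" noted above
print(round(tsp_cdf(10.0), 3))
```

The simple analytic CDF is what allows exact computation of statistics of interest, in contrast to the lognormal alternative mentioned in the abstract.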
A Pareto Algorithm for Efficient De Novo Design of Multi-functional Molecules.
Daeyaert, Frits; Deem, Michael W
2017-01-01
We have introduced a Pareto sorting algorithm into Synopsis, a de novo design program that generates synthesizable molecules with desirable properties. We give a detailed description of the algorithm and illustrate its working in two different de novo design settings: the design of putative dual and selective FGFR and VEGFR inhibitors, and the successful design of organic structure-directing agents (OSDAs) for the synthesis of zeolites. We show that the introduction of Pareto sorting not only enables the simultaneous optimization of multiple properties but also greatly improves the performance of the algorithm in generating molecules with hard-to-meet constraints. This in turn allows us to suggest approaches to address the problem of false positive hits in de novo structure-based drug design by introducing structural and physicochemical constraints in the designed molecules, and by forcing essential interactions between these molecules and their target receptor. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
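A Pareto (non-dominated) sort of the kind described above can be sketched in a few lines. The candidate tuples and property values below are hypothetical illustrations, not output of Synopsis; both objectives are assumed to be minimized:

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (score_target1, score_target2) pairs for five candidate molecules.
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
front = pareto_front(candidates)
# (3.0, 3.0) and (4.0, 4.0) are dominated; the other three form the front.
```

This quadratic-time sketch suffices for small candidate pools; production multi-objective optimizers typically use the faster non-dominated sorting of NSGA-II.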
Jordanova, P.; Dušek, Jiří; Stehlík, M.
2013-01-01
Vol. 128, Oct 15 (2013), pp. 124-134. ISSN 0169-7439. R&D Projects: GA ČR (CZ) GAP504/11/1151; GA MŠk (CZ) ED1.1.00/02.0073. Keywords: environmental chemistry; ebullition of methane; mixed Poisson processes; renewal process; Pareto distribution; moving average process; robust statistics; sedge-grass marsh. Impact factor: 2.381 (2013).
A new mechanism for maintaining diversity of Pareto archive in multi-objective optimization
Hájek, J.; Szöllös, A.; Šístek, Jakub
2010-01-01
Vol. 41, 7-8 (2010), pp. 1031-1057. ISSN 0965-9978. R&D Projects: GA AV ČR IAA100760702. Keywords: multi-objective optimization; micro-genetic algorithm; diversity; Pareto archive. Impact factor: 1.004 (2010). http://www.sciencedirect.com/science/article/pii/S0965997810000451
The Forbes 400, the Pareto power-law and efficient markets
Klass, O. S.; Biham, O.; Levy, M.; Malcai, O.; Solomon, S.
2007-01-01
Statistical regularities at the top end of the wealth distribution in the United States are examined using the Forbes 400 lists of richest Americans, published between 1988 and 2003. It is found that the wealths are distributed according to a power-law (Pareto) distribution. This result is explained using a simple stochastic model of multiple investors that incorporates the efficient market hypothesis as well as the multiplicative nature of financial market fluctuations.
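A power-law (Pareto) tail claim of this kind is commonly checked with the standard maximum-likelihood tail-exponent estimator. The sketch below uses synthetic data drawn by inverse-transform sampling, not the Forbes data:

```python
import math
import random

def pareto_alpha_mle(samples, xmin):
    """Maximum-likelihood (Hill-type) estimate of the Pareto exponent for
    the tail x >= xmin: alpha_hat = n / sum(ln(x_i / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic "wealth" sample drawn from a pure Pareto law by inverse transform:
# if U ~ Uniform(0,1), then xmin * (1-U)^(-1/alpha) is Pareto(alpha).
random.seed(0)
alpha_true, xmin = 1.5, 1.0
data = [xmin * (1 - random.random()) ** (-1.0 / alpha_true) for _ in range(20000)]
alpha_hat = pareto_alpha_mle(data, xmin)
# alpha_hat should be close to alpha_true = 1.5
```

In practice the choice of the tail cutoff xmin matters as much as the estimator itself; here it is known because the data are synthetic.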
Liu, Xian
2010-02-10
This paper shows that optical signal transmission over intersatellite links with swaying transmitters can be described by an equivalent fading model. In this model, the instantaneous signal-to-noise ratio is stochastic and follows the reciprocal Pareto distribution. With this model, we show that the transmitter power can be minimized, subject to a specified outage probability, by appropriately adjusting some system parameters, such as the transmitter gain.
An Investigation of the Pareto Distribution as a Model for High Grazing Angle Clutter
2011-03-01
...radar detection schemes under controlled conditions. Complicated clutter models result in mathematical difficulties in the determination of optimal and... a population [7]. It has been used in the modelling of actuarial data; an example is in excess-of-loss quotations in insurance [8]. Its usefulness as... modified Bessel functions, making it difficult to employ in radar detection schemes. The Pareto distribution is amenable to mathematical...
Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances
Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder
1992-01-01
As a generalization of the common assumption of exponentially distributed exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using both the method of moments and probability-weighted moments... [For] distributions (with a physically justified upper limit), the correct exceedance distribution should be applied despite a possible acceptance of the exponential assumption by a test of significance.
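The method-of-moments estimator mentioned above can be sketched as follows. This is a minimal illustration on synthetic exceedances, not the authors' hydrological data; it uses the textbook moment relations of the generalized Pareto distribution (GPD):

```python
import random

def gpd_mom(exceedances):
    """Method-of-moments estimates (shape xi, scale sigma) of the generalized
    Pareto distribution, valid when the sample variance is finite (xi < 1/2):
    xi = (1 - m^2/s^2) / 2,  sigma = m * (m^2/s^2 + 1) / 2."""
    n = len(exceedances)
    m = sum(exceedances) / n
    s2 = sum((x - m) ** 2 for x in exceedances) / (n - 1)
    xi = 0.5 * (1.0 - m * m / s2)
    sigma = 0.5 * m * (m * m / s2 + 1.0)
    return xi, sigma

# Synthetic exceedances drawn from a GPD by inverse-transform sampling:
# x = sigma/xi * ((1-U)^(-xi) - 1) for xi != 0.
random.seed(1)
xi_true, sigma_true = 0.1, 2.0
draws = [sigma_true / xi_true * ((1 - random.random()) ** (-xi_true) - 1)
         for _ in range(50000)]
xi_hat, sigma_hat = gpd_mom(draws)
```

Probability-weighted moments, the other estimator the abstract names, follow the same pattern but replace the raw sample variance with order-statistic-weighted sums, which behave better for heavy tails.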
Optimal Reinsurance Design for Pareto Optimum: From the Perspective of Multiple Reinsurers
Xing Rong
2016-01-01
This paper investigates optimal reinsurance strategies for an insurer that cedes the insured risk to multiple reinsurers. Assuming that the insurer and every reinsurer apply coherent risk measures, we find the necessary and sufficient conditions for the reinsurance market to achieve a Pareto optimum; that is, every ceded-loss function and the retention function are in the form of "multiple-layer reinsurance."
Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah
2017-04-20
This research proposes several versions of a modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Specifically, the MCS algorithm incorporates a roulette-wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard SPEA, forming MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined on Zitzler-Deb-Thiele's (ZDT's) MO test functions. Pareto-optimal trade-offs are made to generate a set of three non-dominated solutions: the locations, excitation amplitudes, and excitation phases of the array elements. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms the other competitors, attaining high antenna directivity, a small half-power beamwidth (HPBW), a low average side-lobe level (SLL), and/or significant predefined-null mitigation simultaneously.
Pareto-optimal multi-objective design of airplane control systems
Schy, A. A.; Johnson, K. G.; Giesy, D. P.
1980-01-01
A constrained minimization algorithm for the computer aided design of airplane control systems to meet many requirements over a set of flight conditions is generalized using the concept of Pareto-optimization. The new algorithm yields solutions on the boundary of the achievable domain in objective space in a single run, whereas the older method required a sequence of runs to approximate such a limiting solution. However, Pareto-optimality does not guarantee a satisfactory design, since such solutions may emphasize some objectives at the expense of others. The designer must still interact with the program to obtain a well-balanced set of objectives. Using the example of a fighter lateral stability augmentation system (SAS) design over five flight conditions, several effective techniques are developed for obtaining well-balanced Pareto-optimal solutions. For comparison, one of these techniques is also used in a recently developed algorithm of Kreisselmeier and Steinhauser, which replaces the hard constraints with soft constraints, using a special penalty function. It is shown that comparable results can be obtained.
Application of the Pareto principle to identify and address drug-therapy safety issues.
Müller, Fabian; Dormann, Harald; Pfistermeister, Barbara; Sonst, Anja; Patapovas, Andrius; Vogler, Renate; Hartmann, Nina; Plank-Kiegele, Bettina; Kirchner, Melanie; Bürkle, Thomas; Maas, Renke
2014-06-01
Adverse drug events (ADE) and medication errors (ME) are common causes of morbidity in patients presenting at emergency departments (ED). Recognition of ADE as being drug related and prevention of ME are key to enhancing pharmacotherapy safety in ED. We assessed the applicability of the Pareto principle (~80 % of effects result from 20 % of causes) to address locally relevant problems of drug therapy. In 752 cases consecutively admitted to the nontraumatic ED of a major regional hospital, ADE, ME, contributing drugs, preventability, and detection rates of ADE by ED staff were investigated. Symptoms, errors, and drugs were sorted by frequency in order to apply the Pareto principle. In total, 242 ADE were observed, and 148 (61.2 %) were assessed as preventable. ADE contributed to 110 inpatient hospitalizations. The ten most frequent symptoms were causally involved in 88 (80.0 %) inpatient hospitalizations. Only 45 (18.6 %) ADE were recognized as drug-related problems until discharge from the ED. A limited set of 33 drugs accounted for 184 (76.0 %) ADE; ME contributed to 57 ADE. Frequency-based listing of ADE, ME, and drugs involved allowed identification of the most relevant problems and development of easy-to-implement safety measures, such as wall and pocket charts. The Pareto principle provides a method for identifying the locally most relevant ADE, ME, and involved drugs. This permits subsequent development of interventions to increase patient safety in the ED admission process that best suit local needs.
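The frequency-based "80/20" sorting described above reduces to a short cumulative-share computation. The drug names and tallies below are invented for illustration, not data from the study:

```python
from collections import Counter

def pareto_cut(events, share=0.8):
    """Return the smallest frequency-ordered set of categories that together
    account for at least `share` of all observed events (the '80/20' cut)."""
    counts = Counter(events).most_common()  # sorted by descending frequency
    total = sum(c for _, c in counts)
    chosen, cum = [], 0
    for category, c in counts:
        chosen.append(category)
        cum += c
        if cum / total >= share:
            break
    return chosen

# Hypothetical drug-event tallies.
events = ["drugA"] * 50 + ["drugB"] * 25 + ["drugC"] * 15 + ["drugD"] * 10
top = pareto_cut(events)
# drugA, drugB, drugC cover 90 of 100 events (>= 80%), so drugD is excluded.
```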
Ajibade Oluwaseyi Ayodele
2016-01-01
In this work, the second part of a discussion of tapped-density optimisation for four agricultural wastes (particles of coconut, periwinkle, palm kernel and egg shells), a comparative performance analysis is made. The paper pioneers a study direction in which optimisation of process variables is pursued using the Taguchi method integrated with the Pareto 80-20 rule. Negative percentage improvements resulted when the optimal tapped density was compared with the average tapped density; however, comparing the optimal tapped density with the peak tapped density yielded positive percentage improvements for all four filler particles. These results validate the effectiveness of the Taguchi method in improving the tapped-density properties of the filler particles. Applying the Pareto 80-20 rule to the table of parameters and levels produced revised tables that helped to identify, for each parameter, the factor level that is economical for optimality. The Pareto 80-20 rule also produced revised S/N response tables identifying the S/N ratios relevant to optimality.
Front propagation in flipping processes
Antal, T; Ben-Avraham, D; Ben-Naim, E; Krapivsky, P L
2008-01-01
We study a directed flipping process that underlies the performance of the random edge simplex algorithm. In this stochastic process, which takes place on a one-dimensional lattice whose sites may be either occupied or vacant, occupied sites become vacant at a constant rate and simultaneously cause all sites to the right to change their state. This random process exhibits rich phenomenology. First, there is a front, defined by the position of the leftmost occupied site, that propagates at a nontrivial velocity. Second, the front involves a depletion zone with an excess of vacant sites. The total excess Δk increases logarithmically, Δk ≅ ln k, with the distance k from the front. Third, the front exhibits ageing: young fronts are vigorous but old fronts are sluggish. We investigate these phenomena using a quasi-static approximation, direct solutions of small systems and numerical simulations.
Photoionization effects in ionization fronts
Arrayas, Manuel; Fontelos, Marco A; Trueba, Jose L
2006-01-01
In this paper we study the effects of photoionization processes on the propagation of both negative and positive ionization fronts in streamer discharge. We show that negative fronts accelerate in the presence of photoionization events. The appearance and propagation of positive ionization fronts travelling with constant velocity is explained as the result of the combined effects of photoionization and electron diffusion. The photoionization range plays an important role in the selection of the velocity of the ionization front, as we show in this work.
Juan Carlos Osorio
2012-12-01
The scheduling problem is one of the most widely treated problems in the literature; however, it is a complex NP-hard problem. When more than one objective is involved, it becomes one of the most complex problems in the field of operations research. We present a bi-objective model for job-shop scheduling that includes the makespan and the mean flow time. To solve the model, we use an approach combining the Simulated Annealing (SA) metaheuristic with the Pareto approach. The model is evaluated on three problems from the literature, of sizes 6x6, 10x5 and 10x10. Its results are compared with other metaheuristics, and the model shows good results on all three problems.
The Narration of Social Action: Insights from Vilfredo Pareto's Trattato
Ilaria Riccioni
2017-08-01
Rereading the classics always involves a twofold operation: on the one hand, a return to reflections, rhythms and historical contexts that often seem already superseded; on the other, the rediscovery of the origins of contemporary phenomena from points of view that reveal their deep interconnections, no longer visible at the advanced stage at which we observe them today. This greater clarity is perhaps due to the fact that every phenomenon is more clearly identifiable in its dawning phase than in its later phases, when its primary characteristics tend to dissolve into the dominant features of the contemporary world, losing themselves in the everyday practices that conceal their origin. If sociology is a process of gaining knowledge of the reality of phenomena, the central point of social science lies in distinguishing between the sciences that schematize the real into functional, working formal equations (the economic and normative systems) and the social sciences that deal with reality and its complexity, which as sciences must concern themselves not so much with what reality ought to be as with what reality is, with how it presents itself and how it manifests the deep, desiring movements of collective life beyond the system that manages its functioning. The point Pareto seems to grasp, with extreme lucidity, is the need to overturn the role of economic logic in social organization, from a science that dictates reality to a science that proposes a scheme for managing it: economics attempts to dictate reality, but economics, from the modern Greek Oikòs, Oikòsgeneia (house and generation, the term used to define the family unit), is not in fact "reality", Pareto seems to tell us in several digressions, but rather the art and science of managing family and productive units. Reality remains in shadow and can only be "approached" by a science that records it, and possibly
Computing the Distribution of Pareto Sums Using Laplace Transformation and Stehfest Inversion
Harris, C. K.; Bourne, S. J.
2017-05-01
In statistical seismology, the properties of distributions of total seismic moment are important for constraining seismological models, such as the strain partitioning model (Bourne et al. J Geophys Res Solid Earth 119(12): 8991-9015, 2014). This work was motivated by the need to develop appropriate seismological models for the Groningen gas field in the northeastern Netherlands, in order to address the issue of production-induced seismicity. The total seismic moment is the sum of the moments of individual seismic events, which, in common with many other natural processes, are governed by Pareto or "power law" distributions. The maximum possible moment for an induced seismic event can be constrained by geomechanical considerations, but rather poorly, and for Groningen it cannot be reliably inferred from the frequency distribution of moment magnitude pertaining to the catalogue of observed events. In such cases it is usual to work with the simplest form of the Pareto distribution without an upper bound, and we follow the same approach here. In the case of seismicity, the exponent β appearing in the power-law relation is small enough for the variance of the unbounded Pareto distribution to be infinite, which renders standard statistical methods concerning sums of statistical variables, based on the central limit theorem, inapplicable. Determinations of the properties of sums of moderate to large numbers of Pareto-distributed variables with infinite variance have traditionally been addressed using intensive Monte Carlo simulations. This paper presents a novel method for determining the properties of such sums that is accurate, fast and easily implemented, and is applicable to Pareto-distributed variables for which the power-law exponent β lies within the interval [0, 1]. It is based on shifting the original variables so that a non-zero density is obtained exclusively for non-negative values of the parameter and is identically zero elsewhere, a property
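The traditional Monte Carlo baseline that the paper improves upon can be sketched as follows. This is a toy simulation, not the Laplace-transform/Stehfest method itself, and the parameter values are arbitrary illustrations:

```python
import random

def pareto_sum_samples(beta, n_terms, n_samples, xmin=1.0, seed=42):
    """Monte Carlo draws of the sum of n_terms unbounded Pareto(beta)
    variables, generated by inverse-transform sampling. For beta < 1 even
    the mean of each term is infinite, so only quantiles of the sum are
    meaningful summary statistics."""
    rng = random.Random(seed)
    return [sum(xmin * (1 - rng.random()) ** (-1.0 / beta)
                for _ in range(n_terms))
            for _ in range(n_samples)]

sums = sorted(pareto_sum_samples(beta=0.7, n_terms=100, n_samples=2000))
median_sum = sums[len(sums) // 2]  # each term is >= xmin, so every sum >= 100
```

The slow Monte Carlo convergence for such heavy-tailed sums (dominated by rare, huge single events) is exactly what motivates the semi-analytic Laplace/Stehfest route.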
The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space
Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri
2015-01-01
When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes—phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass. PMID:26465336
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity for a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
Analysis of extreme drinking in patients with alcohol dependence using Pareto regression.
Das, Sourish; Harel, Ofer; Dey, Dipak K; Covault, Jonathan; Kranzler, Henry R
2010-05-20
We developed a novel Pareto regression model with an unknown shape parameter to analyze extreme drinking in patients with Alcohol Dependence (AD). We used the generalized linear model (GLM) framework and the log-link to include the covariate information through the scale parameter of the generalized Pareto distribution. We proposed a Bayesian method based on a ridge prior and Zellner's g-prior for the regression coefficients. A simulation study indicated that the proposed Bayesian method performs better than the existing likelihood-based inference for the Pareto regression. We examined two issues of importance in the study of AD. First, we tested whether a single nucleotide polymorphism within the GABRA2 gene, which encodes a subunit of the GABA(A) receptor and has been associated with AD, influences 'extreme' alcohol intake, and second, the efficacy of three psychotherapies for alcoholism in treating extreme drinking behavior. We found an association between extreme drinking behavior and GABRA2. We also found that, at baseline, men with a high-risk GABRA2 allele had a significantly higher probability of extreme drinking than men with no high-risk allele. However, men with a high-risk allele responded to the therapy better than those with two copies of the low-risk allele. Women with high-risk alleles also responded to the therapy better than those with two copies of the low-risk allele, while women who received the cognitive behavioral therapy had better outcomes than those receiving either of the other two therapies. Among men, motivational enhancement therapy was the best for the treatment of the extreme drinking behavior. Copyright 2010 John Wiley & Sons, Ltd.
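The likelihood setup of such a model (covariates entering the scale of the generalized Pareto through a log-link) can be sketched as below. This is a plain likelihood evaluation on synthetic data, not the authors' Bayesian estimator with ridge or g-priors; all coefficient values are invented for illustration:

```python
import math
import random

def gpd_reg_nll(xi, beta, X, y):
    """Negative log-likelihood of a generalized Pareto regression in which
    the scale is exp(X @ beta) (log-link) and the shape xi is shared."""
    nll = 0.0
    for row, yi in zip(X, y):
        sigma = math.exp(sum(b * v for b, v in zip(beta, row)))
        z = 1.0 + xi * yi / sigma
        if z <= 0.0:  # observation outside the support
            return float("inf")
        nll += math.log(sigma) + (1.0 / xi + 1.0) * math.log(z)
    return nll

# Synthetic data: an intercept plus one covariate driving the scale.
random.seed(7)
b0, b1, xi_true = 0.5, 1.0, 0.2
X = [(1.0, random.random()) for _ in range(3000)]
y = [math.exp(b0 + b1 * x1) / xi_true * ((1 - random.random()) ** (-xi_true) - 1)
     for (_, x1) in X]

nll_true = gpd_reg_nll(xi_true, [b0, b1], X, y)
nll_null = gpd_reg_nll(xi_true, [b0, 0.0], X, y)
# dropping the covariate should fit markedly worse than the true coefficients
```

Maximizing this likelihood (or placing priors on xi and beta and sampling the posterior) recovers the regression coefficients; the evaluation above only demonstrates the log-link structure.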
Craft, David; Monz, Michael
2010-02-01
To introduce a method to simultaneously explore a collection of Pareto surfaces. The method will allow radiotherapy treatment planners to interactively explore treatment plans for different beam angle configurations as well as different treatment modalities. The authors assume a convex optimization setting and represent the Pareto surface for each modality or given beam set by a set of discrete points on the surface. Weighted averages of these discrete points produce a continuous representation of each Pareto surface. The authors calculate a set of Pareto surfaces and use linear programming to navigate across the individual surfaces, allowing switches between surfaces. The switches are organized such that the plan profits in the requested way, while trying to keep the change in dose as small as possible. The system is demonstrated on a phantom pancreas IMRT case using 100 different five beam configurations and a multicriteria formulation with six objectives. The system has intuitive behavior and is easy to control. Also, because the underlying linear programs are small, the system is fast enough to offer real-time exploration for the Pareto surfaces of the given beam configurations. The system presented offers a sound starting point for building clinical systems for multicriteria exploration of different modalities and offers a controllable way to explore hundreds of beam angle configurations in IMRT planning, allowing the users to focus their attention on the dose distribution and treatment planning objectives instead of spending excessive time on the technicalities of delivery.
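The navigation step described above, moving across convex (weighted-average) combinations of discrete Pareto points under linear constraints, can be sketched as a small linear program. The plan data below are hypothetical placeholders, not the clinical dose data; only the mechanism follows the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete Pareto points for 3 objectives (rows = plans);
# smaller values are better for every objective.
P = np.array([
    [1.0, 5.0, 4.0],
    [3.0, 2.0, 5.0],
    [4.0, 4.0, 1.0],
    [2.0, 3.0, 3.0],
])

def navigate(points, bounds):
    """Minimize objective 0 over convex combinations of the discrete
    Pareto points, subject to upper bounds on the other objectives."""
    k, _ = points.shape
    c = points[:, 0]                      # objective 0 of each plan
    A_ub = points[:, 1:].T                # remaining objectives
    A_eq = np.ones((1, k))                # convex weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=np.asarray(bounds),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, 1)] * k)
    return res.x, res.x @ points

w, plan = navigate(P, bounds=[3.5, 3.5])
```

Because the feasible set is a polytope of plan mixtures, each navigation request is a small LP and can run at interactive rates, which is the point made in the abstract.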
Zalazinsky, A. G.; Kryuchkov, D. I.; Nesterenko, A. V.; Titov, V. G.
2017-12-01
The results of an experimental study of the mechanical properties of pressed and sintered briquettes consisting of powders obtained from a high-strength VT-22 titanium alloy by plasma spraying, with additives of PTM-1 titanium powder obtained by the hydride-calcium method and of PV-N70Yu30 nickel-aluminum alloy powder, are presented. The task addressed is the choice of an optimal charge composition for a composite material that provides the required mechanical characteristics and cost of semi-finished products and items. Pareto optimal values for the composition of the composite material charge have been obtained.
Pareto law of the expenditure of a person in convenience stores
Mizuno, Takayuki; Toriyama, Masahiro; Terano, Takao; Takayasu, Misako
2008-06-01
We study the statistical laws of the expenditure of a person in convenience stores by analyzing around 100 million receipts. The density function of expenditure exhibits a fat tail that follows a power law. Using the Lorenz curve, the Gini coefficient is estimated to be 0.70; this implies that loyal customers contribute significantly to a store's sales. We observe the Pareto principle: the top 25% and the top 2% of customers account for 80% and 25% of the store's sales, respectively.
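The Gini estimate quoted above comes from the empirical Lorenz curve. A minimal sketch of that estimator, using synthetic Pareto-distributed expenditures rather than the (non-public) receipt data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical expenditure sample from a Pareto distribution with tail
# index 1.5 (true Gini for this index is 1/(2*1.5 - 1) = 0.5)
x = rng.pareto(1.5, 100_000) + 1.0

def gini(values):
    """Gini coefficient computed from the empirical Lorenz curve."""
    v = np.sort(values)
    n = v.size
    cum = np.cumsum(v)
    # Equivalent to 1 - 2 * (area under the Lorenz curve)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

g = gini(x)
```

A heavier tail (smaller index) pushes the estimate toward 1, consistent with the 0.70 reported for the receipt data.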
Inferring biological tasks using Pareto analysis of high-dimensional data.
Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri
2015-03-01
We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
MATLAB implementation of satellite positioning error overbounding by generalized Pareto distribution
Ahmad, Khairol Amali; Ahmad, Shahril; Hashim, Fakroul Ridzuan
2018-02-01
In the satellite navigation community, error overbounding has been implemented in the process of integrity monitoring. In this work, MATLAB programming is used to implement the overbounding of the satellite positioning error CDF. Using a reference trajectory, the horizontal position errors (HPE) are computed and their non-parametric distribution function is given by the empirical cumulative distribution function (ECDF). According to the results, these errors have a heavy-tailed distribution. Since the ECDF of the HPE in an urban environment is not Gaussian distributed, the ECDF is overbounded with the CDF of the generalized Pareto distribution (GPD).
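The ECDF-plus-GPD-tail construction can be sketched as follows. The error sample here is synthetic (a heavy-tailed Student-t surrogate, not the urban trajectory data), and the fit is a plain peaks-over-threshold fit; a true overbound would additionally inflate the tail model until it dominates the ECDF everywhere.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
# Hypothetical heavy-tailed horizontal position errors (HPE)
hpe = np.abs(rng.standard_t(df=3, size=5000))

# Empirical CDF of the errors
x = np.sort(hpe)
ecdf = np.arange(1, x.size + 1) / x.size

# Fit a GPD to the exceedances over a high threshold (90th percentile)
u = np.quantile(hpe, 0.90)
exc = hpe[hpe > u] - u
c, _, scale = genpareto.fit(exc, floc=0.0)

# Unconditional tail survival implied by the GPD model:
# P(HPE > x) ~= P(HPE > u) * SF_GPD(x - u)
tail_x = x[x > u]
emp_sf = 1.0 - ecdf[x > u]
gpd_sf = 0.10 * genpareto.sf(tail_x - u, c, loc=0.0, scale=scale)
```

Comparing `gpd_sf` against `emp_sf` on a log scale is the usual visual check that the parametric tail tracks (and, after inflation, bounds) the empirical one.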
Group Acceptance Sampling Plan for Lifetime Data Using Generalized Pareto Distribution
Muhammad Aslam
2010-02-01
In this paper, a group acceptance sampling plan (GASP) is introduced for situations when the lifetime of the items follows the generalized Pareto distribution. The design parameters, such as minimum group size and acceptance number, are determined when the consumer's risk and the test termination time are specified. The proposed sampling plan is compared with the existing sampling plan. It is concluded that the proposed sampling plan performs better than the existing plan in terms of the minimum sample size required to reach the same decision.
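The design search behind such a plan can be sketched with binomial acceptance probabilities. The numbers below (group size, acceptance number, failure probability, risk level) are illustrative assumptions; in the paper the per-item failure probability at the termination time would come from the generalized Pareto lifetime CDF.

```python
from math import comb

def p_accept(p, r, c, g):
    """Lot acceptance probability: each of the g groups of r items must
    show at most c failures, with each item failing by the termination
    time independently with probability p."""
    per_group = sum(comb(r, i) * p**i * (1 - p)**(r - i)
                    for i in range(c + 1))
    return per_group ** g

def min_groups(p, r, c, beta):
    """Smallest number of groups g such that the consumer's risk
    P(accept a bad lot) <= beta."""
    g = 1
    while p_accept(p, r, c, g) > beta:
        g += 1
    return g

# Hypothetical design: groups of r=5 items, acceptance number c=1,
# per-item failure probability p=0.5, consumer's risk beta=0.10
g = min_groups(0.5, 5, 1, 0.10)
```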
A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution
Enrique Calderín-Ojeda
2018-02-01
Composite models have received much attention in the recent actuarial literature as descriptions of heavy-tailed insurance loss data. One of the models that performs well in describing this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out estimation of parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.
Stable power laws in variable economies; Lotka-Volterra implies Pareto-Zipf
Solomon, S.; Richmond, P.
2002-05-01
In recent years we have found that logistic systems of the Generalized Lotka-Volterra type (GLV), describing statistical systems of auto-catalytic elements, possess power law distributions of the Pareto-Zipf type. In particular, when applied to economic systems, GLV leads to power laws in the relative individual wealth distribution and in market returns. These power laws and their exponent α are invariant to arbitrary variations in the total wealth of the system and to other endogenously and exogenously induced variations.
Finding the Pareto Optimal Equitable Allocation of Homogeneous Divisible Goods Among Three Players
Marco Dall'Aglio
2017-01-01
We consider the allocation of a finite number of homogeneous divisible items among three players. Under the assumption that each player assigns a positive value to every item, we develop a simple algorithm that returns a Pareto optimal and equitable allocation. This is based on the tight relationship between two geometric objects of fair division: the Individual Pieces Set (IPS) and the Radon-Nikodym Set (RNS). The algorithm can be considered as an extension of the Adjusted Winner procedure by Brams and Taylor to the three-player case, without the guarantee of envy-freeness.
"Front" hotshet izvinitsja / Aleksandr Ikonnikov
Ikonnikov, Aleksandr
2003-01-01
Representatives of the movement "Front", which mainly unites young people of Russian nationality, plan to meet with the US ambassador to Estonia and apologize for the demonstration in front of the embassy last spring that ended in violence.
Energy conversion at dipolarization fronts
Khotyaintsev, Yu. V.; Divin, A.; Vaivads, A.; André, M.; Markidis, S.
2017-02-01
We use multispacecraft observations by Cluster in the Earth's magnetotail and 3-D particle-in-cell simulations to investigate conversion of electromagnetic energy at the front of a fast plasma jet. We find that the major energy conversion is happening in the Earth (laboratory) frame, where the electromagnetic energy is being transferred from the electromagnetic field to particles. This process operates in a region with a size of the order of several ion inertial lengths across the jet front, and the primary contribution to E·j comes from the motional electric field and the ion current. In the frame of the front we find fluctuating energy conversion with localized loads and generators at sub-ion scales, primarily related to the lower hybrid drift instability excited at the front; however, these provide relatively small net energy conversion.
Mozaffari, Ahmad; Gorji-Bandpy, Mofid; Samadian, Pendar
2013-01-01
Optimizing and controlling complex engineering systems is a phenomenon that has attracted increasing interest from numerous scientists. Until now, a variety of intelligent optimizing and controlling techniques, such as neural networks, fuzzy logic, game theory, support vector machines and stochastic algorithms, were proposed to facilitate controlling of the engineering systems. In this study, an extended version of the mutable smart bee algorithm (MSBA) called Pareto based mutable smart bee (PBMSB) is inspired to cope with multi-objective problems. Besides, a set of benchmark problems and four well-known Pareto based optimizing algorithms, i.e. the multi-objective bee algorithm (MOBA), the multi-objective particle swarm optimization (MOPSO) algorithm, the non-dominated sorting genetic algorithm (NSGA-II), and the strength Pareto evolutionary algorithm (SPEA 2), are utilized to confirm the acceptable…
Enrique Calderín-Ojeda
2017-11-01
Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
Jorge Caldera-Serrano
2015-09-01
The reuse of the audiovisual collections of television networks is analyzed in order to determine whether the Pareto index holds, providing mechanisms for controlling and exploiting the least-used part of the audiovisual collection. The Pareto correlation is found to hold not only in usage but also in the presence of thematic and onomastic elements in the archive and in the distribution of content, so forms of control are proposed for the integration of information into the collection and of resources in distribution. The Pareto index, Media Asset Management, and the paradigm shift to digital are also described, as elements essential to understanding the problems, and their solutions, in retrieval and in shaping the collection. Keywords: Information processing. Television. Electronic media. Information systems evaluation.
Single Cell Dynamics Causes Pareto-Like Effect in Stimulated T Cell Populations.
Cosette, Jérémie; Moussy, Alice; Onodi, Fanny; Auffret-Cariou, Adrien; Neildez-Nguyen, Thi My Anh; Paldi, Andras; Stockholm, Daniel
2015-12-09
Cell fate choice during the process of differentiation may obey deterministic or stochastic rules. In order to discriminate between these two strategies we used time-lapse microscopy of individual murine CD4+ T cells, which allows investigation of the dynamics of proliferation and fate commitment. We observed highly heterogeneous division and death rates between individual clones, resulting in a Pareto-like dominance of a few clones at the end of the experiment. Commitment to the Treg fate was monitored using the expression of a GFP reporter gene under the control of the endogenous Foxp3 promoter. All possible combinations of proliferation and differentiation were observed and resulted in exclusively GFP-, GFP+ or mixed phenotype clones of very different population sizes. We simulated the process of proliferation and differentiation using a simple mathematical model of stochastic decision-making based on the experimentally observed parameters. The simulations show that a stochastic scenario is fully compatible with the observed Pareto-like imbalance in the final population.
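The core mechanism, that heterogeneous per-clone division rates alone produce Pareto-like dominance, can be sketched in a few lines. The rate distribution and time horizon below are arbitrary assumptions, not the experimentally observed parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stochastic scenario: each starting cell draws its own
# division rate; clones then grow exponentially, so rate heterogeneity
# alone concentrates the population into a few dominant clones.
n_clones, t = 100, 6.0
rates = rng.normal(0.8, 0.25, n_clones).clip(min=0.0)   # divisions per unit time
sizes = np.exp(rates * t)                                # deterministic growth proxy

# Share of the final population held by the 5 largest clones
share_top5 = np.sort(sizes)[::-1][:5].sum() / sizes.sum()
```

Because clone size is exponential in the rate, modest rate differences are amplified into a heavy-tailed (lognormal-like) size distribution, which is the Pareto-like imbalance the abstract describes.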
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2015-01-01
Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.
J. S. Sadaghiani
2014-04-01
The flexible job shop scheduling problem is a key factor in the efficient use of production systems. This paper attempts to simultaneously optimize three objectives: minimization of the makespan, the total workload, and the maximum workload of jobs. Since the multi-objective flexible job shop scheduling problem is strongly NP-hard, an integrated heuristic approach is used to solve it. The proposed approach is based on a floating search procedure that employs several local heuristic algorithms and splits the problem into two subproblems: assignment and sequencing. The search first explores the assignment space until an acceptable solution is reached, and then continues over the sequencing space using a heuristic algorithm. A multi-objective approach is used to produce Pareto solutions; the proposed approach is thus adapted to the NSGA-II algorithm and evaluated with Pareto archives. The elements and parameters of the proposed algorithms were tuned in preliminary experiments. Finally, computational results were used to analyze the efficiency of the proposed algorithm and showed that it is capable of producing efficient solutions.
The Reduction of Modal Sensor Channels through a Pareto Chart Methodology
Kaci J. Lemler
2015-01-01
Presented herein is a new experimental sensor placement procedure developed to assist in placing sensors in key locations in an efficient manner, reducing the number of channels for a full modal analysis. It is a fast, noncontact method that uses a laser vibrometer to gather a candidate set of sensor locations. These locations are then evaluated using a Pareto chart to obtain a reduced set of sensor locations that still captures the motion of the structure. The Pareto chart is employed to identify the points on a structure that have the largest reaction to an input excitation and thus reduce the number of channels while capturing the most significant data. This method enhances the correct and efficient placement of sensors, which is crucial in modal testing. Previously this required the development and/or use of a complicated model or set of equations. This new technique is applied in a case study on a small unmanned aerial system. The test procedure is presented and the results are discussed.
Reddy, P.V.; Engwerda, J.C.
2010-01-01
In this article we derive necessary and sufficient conditions for the existence of Pareto optimal solutions for an N-player cooperative infinite horizon differential game. First, we write the problem of finding Pareto candidates as solving N constrained optimal control subproblems. We derive some
Characteristic wave fronts in magnetohydrodynamics
Menon, V.V.; Sharma, V.D.
1981-01-01
The influence of magnetic field on the process of steepening or flattening of the characteristic wave fronts in a plane and cylindrically symmetric motion of an ideal plasma is investigated. This aspect of the problem has not been considered until now. Remarkable differences between plane, cylindrical diverging, and cylindrical converging waves are discovered. The discontinuity in the velocity gradient at the wave front is shown to satisfy a Bernoulli-type equation. The discussion of the solutions of such equations reported in the literature is shown to be incomplete, and three general theorems are established. 18 refs
Application of Pareto optimization method for ontology matching in nuclear reactor domain
Meenachi, N. Madurai; Baba, M. Sai
2017-01-01
This article describes the need for ontology matching and the methods to achieve it. Efforts were put into the implementation of a semantic-web-based knowledge management system for the nuclear domain, which necessitated use of methods for the development of ontology matching. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching ontologies are also discussed. A Pareto based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms like the Jaro-Winkler distance, the Needleman-Wunsch algorithm, Bigram, Kullback and Cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyze ontology matching across diversity in the nuclear reactor domain, and the same is illustrated.
Estimations of parameters in Pareto reliability model in the presence of masked data
Sarhan, Ammar M.
2003-01-01
Estimation of the parameters included in the individual distributions of the lifetimes of system components in a series system is considered in this paper, based on masked system life test data. We consider a series system of two independent components, each with a Pareto distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are done in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained.
Coordinated Pitch & Torque Control of Large-Scale Wind Turbine Based on Pareto Efficiency Analysis
Lin, Zhongwei; Chen, Zhenyu; Wu, Qiuwei
2018-01-01
For the existing pitch and torque control of the wind turbine generator system (WTGS), further development of coordinated control is necessary to improve effectiveness in practical applications. In this paper, the WTGS is modeled as a coupling combination of two subsystems: the generator torque control subsystem and the blade pitch control subsystem. Then, the pole positions in each control subsystem are adjusted coordinately to evaluate the controller participation and used as the objective of optimization. A two-level parameters-controllers coordinated optimization scheme is proposed and applied to optimize the controller coordination based on Pareto optimization theory. Three solutions are obtained through optimization: the optimal torque solution, the optimal power solution, and a satisfactory solution. Detailed comparisons evaluate the performance of the three selected solutions…
Pareto-optimal electricity tariff rates in the Republic of Armenia
Kaiser, M.J.
2000-01-01
The economic impact of electricity tariff rates on the residential sector of Yerevan, Armenia, is examined. The effect of tariff design on revenue generation and equity measures is considered, and the combination of energy pricing and compensatory social policies which provides the best mix of efficiency and protection for poor households is examined. An equity measure is defined in terms of a cumulative distribution function which describes the percent of the population that spends x percent or less of their income on electricity consumption. An optimal (Pareto-efficient) tariff is designed based on the analysis of survey data and an econometric model, and the Armenian tariff rate effective 1 January 1997 to 15 September 1997 is shown to be non-optimal relative to this rate. 22 refs
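The equity measure defined above, the fraction of the population spending at most x percent of income on electricity, is a plain empirical CDF over spending shares. A minimal sketch on synthetic household data (the Yerevan survey data are not reproduced here; incomes and bills below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical household incomes and monthly electricity bills
income = rng.lognormal(8.0, 0.5, 1000)
bill = 20.0 + 0.01 * income + rng.normal(0.0, 2.0, 1000)

def equity_cdf(share):
    """Fraction of households spending at most `share` of their income
    on electricity (the equity measure from the abstract)."""
    return float(np.mean(bill / income <= share))

low, high = equity_cdf(0.02), equity_cdf(0.05)
```

Comparing this curve across candidate tariff schedules is what allows tariffs to be ranked for Pareto efficiency with respect to both revenue and the burden on poor households.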
Higher moments method for generalized Pareto distribution in flood frequency analysis
Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.
2017-08-01
The generalized Pareto distribution (GPD) has proven to be the ideal distribution for fitting peak-over-threshold series in flood frequency analysis. Several moments-based estimators are applied to estimating the parameters of the GPD. Higher linear moments (LH moments) and higher probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH moments and HPWM methods, the parameters estimated by these two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives identical parameter estimates to the linear moments (L-moments) method. Additionally, this phenomenon is significant when r ≥ 1 and the same-order PWM are used in the HPWM and LH moments methods.
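The classical order-0 PWM route mentioned above can be sketched concretely. This uses the standard Hosking-Wallis PWM estimators for the GPD in the parameterization F(x) = 1 - (1 - kx/a)^(1/k); the test data are simulated exceedances, for which k = 0 reduces the GPD to an exponential with scale a.

```python
import numpy as np

def pwm_a(x, r):
    """Unbiased sample probability weighted moment a_r = E[X (1-F(X))^r]."""
    xs = np.sort(x)
    n = xs.size
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for j in range(1, r + 1):
        w *= (n - i - j + 1) / (n - j)
    return np.mean(w * xs)

def gpd_pwm(x):
    """PWM estimators of the GPD shape k and scale a (Hosking-Wallis
    parameterization F(x) = 1 - (1 - k x / a)^(1/k))."""
    a0, a1 = pwm_a(x, 0), pwm_a(x, 1)
    k = a0 / (a0 - 2 * a1) - 2
    a = 2 * a0 * a1 / (a0 - 2 * a1)
    return k, a

rng = np.random.default_rng(2)
# Simulated peaks-over-threshold sample: exponential = GPD with k=0, a=1
x = rng.exponential(1.0, 20_000)
k_hat, a_hat = gpd_pwm(x)
```

The LH-moment and HPWM estimators discussed in the abstract are linear combinations of the same `pwm_a` quantities, which is why they coincide with the L-moment results when the same PWM orders are used.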
Statistical inferences with jointly type-II censored samples from two Pareto distributions
Abu-Zinadah, Hanaa H.
2017-08-01
In several fields of industry the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, from which the joint censoring scheme arises. In this article we consider the lifetime Pareto distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
Risk finance for catastrophe losses with Pareto-calibrated Lévy-stable severities.
Powers, Michael R; Powers, Thomas Y; Gao, Siwei
2012-11-01
For catastrophe losses, the conventional risk finance paradigm of enterprise risk management identifies transfer, as opposed to pooling or avoidance, as the preferred solution. However, this analysis does not necessarily account for differences between light- and heavy-tailed characteristics of loss portfolios. Of particular concern are the decreasing benefits of diversification (through pooling) as the tails of severity distributions become heavier. In the present article, we study a loss portfolio characterized by nonstochastic frequency and a class of Lévy-stable severity distributions calibrated to match the parameters of the Pareto II distribution. We then propose a conservative risk finance paradigm that can be used to prepare the firm for worst-case scenarios with regard to both (1) the firm's intrinsic sensitivity to risk and (2) the heaviness of the severity's tail. © 2012 Society for Risk Analysis.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects. Depending on the range of α, the large-N limit leads either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
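The role of the Box-Cox transformation in these schemes, making residual spread roughly independent of simulated flow, can be illustrated on synthetic data. The flow and error models below are arbitrary assumptions (multiplicative lognormal errors), not the study's catchment data or hydrological models.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 gives the log scheme."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

rng = np.random.default_rng(3)
# Hypothetical simulated vs observed streamflow with heteroscedastic
# (multiplicative) errors: spread grows with the simulated flow
sim = rng.gamma(2.0, 5.0, 10_000) + 0.1
obs = sim * np.exp(rng.normal(0.0, 0.3, sim.size))

raw = obs - sim
trans = boxcox(obs, 0.2) - boxcox(sim, 0.2)   # lambda = 0.2 scheme

# Heteroscedasticity check: correlation between |error| and simulated
# flow should shrink after the transformation
r_raw = np.corrcoef(sim, np.abs(raw))[0, 1]
r_trans = np.corrcoef(sim, np.abs(trans))[0, 1]
```

The residual error scheme then treats `trans` (rather than `raw`) as approximately i.i.d., which is what makes the fixed-λ Box-Cox schemes competitive in the study's Pareto set.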
Perry, Jim
1995-01-01
Discussion of management styles and front-end analysis focuses on a review of Douglas McGregor's theories. Topics include Theories X, Y, and Z; leadership skills; motivational needs of employees; intrinsic and extrinsic rewards; and faulty implementation of instructional systems design processes. (LRW)
Travelling fronts in stochastic Stokes’ drifts
Blanchet, Adrien
2008-10-01
Using analytical methods, we study the large-time properties of the solution of a simple one-dimensional model of stochastic Stokes' drift. Semi-explicit formulae allow us to characterize the behaviour of the solutions and compute global quantities such as the asymptotic speed of the center of mass or the effective diffusion coefficient. Using an equivalent tilted ratchet model, we observe that the speed of the center of mass converges exponentially to its limiting value. A diffuse, oscillating front attached to the center of mass appears. The front is described using an asymptotic expansion. The asymptotic solution attracts all solutions at an algebraic rate which is determined by the effective diffusion coefficient. The proof relies on an entropy estimate based on homogenized logarithmic Sobolev inequalities. In the travelling frame, the macroscopic profile obeys an isotropic diffusion. Compared with the original diffusion, diffusion is enhanced or reduced, depending on the regime. At least in the limit cases, the rate of convergence to the effective profile is always decreased. These considerations allow us to define a notion of efficiency for coherent transport, characterized by a dimensionless number, which is illustrated on two simple examples of travelling potentials: sinusoidal in the first case and sawtooth in the second.
Frozen reaction fronts in steady flows: A burning-invariant-manifold perspective
Mahoney, John R.; Li, John; Boyer, Carleen; Solomon, Tom; Mitchell, Kevin A.
2015-12-01
The dynamics of fronts, such as chemical reaction fronts, propagating in two-dimensional fluid flows can be remarkably rich and varied. For time-invariant flows, the front dynamics may simplify, settling in to a steady state in which the reacted domain is static, and the front appears "frozen." Our central result is that these frozen fronts in the two-dimensional fluid are composed of segments of burning invariant manifolds, invariant manifolds of front-element dynamics in x y θ space, where θ is the front orientation. Burning invariant manifolds (BIMs) have been identified previously as important local barriers to front propagation in fluid flows. The relevance of BIMs for frozen fronts rests in their ability, under appropriate conditions, to form global barriers, separating reacted domains from nonreacted domains for all time. The second main result of this paper is an understanding of bifurcations that lead from a nonfrozen state to a frozen state, as well as bifurcations that change the topological structure of the frozen front. Although the primary results of this study apply to general fluid flows, our analysis focuses on a chain of vortices in a channel flow with an imposed wind. For this system, we present both experimental and numerical studies that support the theoretical analysis developed here.
Submesoscale CO2 variability across an upwelling front off Peru
Köhn, Eike E.; Thomsen, Sören; Arévalo-Martínez, Damian L.; Kanzow, Torsten
2017-12-01
As a major source for atmospheric CO2, the Peruvian upwelling region exhibits strong variability in surface fCO2 on short spatial and temporal scales. Understanding the physical processes driving the strong variability is of fundamental importance for constraining the effect of marine emissions from upwelling regions on the global CO2 budget. In this study, a frontal decay on length scales of 𝒪(10 km) was observed off the Peruvian coast following a pronounced decrease in down-frontal (equatorward) wind speed with a time lag of 9 h. Simultaneously, the sea-to-air flux of CO2 on the inshore (cold) side of the front dropped from up to 80 to 10 mmol m-2 day-1, while the offshore (warm) side of the front was constantly outgassing at a rate of 10-20 mmol m-2 day-1. Based on repeated ship transects the decay of the front was observed to occur in two phases. The first phase was characterized by a development of coherent surface temperature anomalies which gained in amplitude over 6-9 h. The second phase was characterized by a disappearance of the surface temperature front within 6 h. Submesoscale mixed-layer instabilities were present but seem too slow to completely remove the temperature gradient in this short time period. Dynamics such as a pressure-driven gravity current appear to be a likely mechanism behind the evolution of the front.
Light-front QCD. II. Two-component theory
Zhang, W.; Harindranath, A.
1993-01-01
The light-front gauge A_a^+ = 0 is known to be a convenient gauge in practical QCD calculations of short-distance behavior, but there are persistent concerns about its use because of its "singular" nature. The study of nonperturbative field theory, quantizing on a light-front plane for hadronic bound states, requires a priori systematic control of such gauge singularities. In this second paper of the series we study the two-component old-fashioned perturbation theory and the various severe infrared divergences occurring in old-fashioned light-front Hamiltonian calculations for QCD. We also analyze the ultraviolet divergences associated with large transverse momentum and examine three currently used regulators: an explicit transverse cutoff, transverse dimensional regularization, and a global cutoff. We discuss possible difficulties caused by the light-front gauge singularity in applications of light-front QCD to both old-fashioned perturbative calculations of short-distance physics and upcoming nonperturbative investigations of hadronic bound states.
Raffaele Federici
2017-08-01
In this search for meaning between the end of an era and a new vision of the world, there is in the two authors what might be called a betweenness: Pareto, almost a Franco-Italian, and Michels, an Italian-German, indeed more than Italian. Along the fault line represented by the First World War, the two sociologists stand in a double relationship: an inner one, Franco-Italian for Pareto and Italo-German for Michels, and an outer one between the world of yesterday and the world that followed the cataclysm of the First World War, when four colossal empires had been dismembered (the Russian, German, Austro-Hungarian, and Ottoman Empires), at the same time as Émile Durkheim looked with unease at the disintegration of the old traditional communities, where the sense of the crisis of the age invested not only people and behaviour but the logical world itself. Their correspondence took place in the same land: Pareto at Céligny, on Lake Geneva, and Michels in Basel, on the banks of the Rhine. Between the two sociologists there was a deep respect, which would see Robert Michels dedicate an important work, "Problemi di sociologia applicata", published only three years after the Master's Trattato di Sociologia Generale, to the "scientist and friend Vilfredo Pareto, with veneration". In this anthology of essays by Robert Michels, probably composed between 1914 and 1917, in the years of the great cataclysm, indeed conceived before "the installation of that terrible supreme court of cassation of all our ideologies, which is the war", and thus contemporary with the Trattato, the Master is cited three times, like Max Weber, but de facto Pareto's presence is continuous. In particular, the appeal to the Master follows two lines of research: on one side the reality of sociological research and its very broad spectrum of analysis, and on the other the theory of the circulation of elites. It is precisely
Light front quantum chromodynamics: Towards phenomenology
Light front dynamics; quantum chromodynamics; deep inelastic scattering. PACS Nos 11.10. ... What makes light front dynamics appealing from a high energy phenomenology point of view? ... given in terms of Poincaré generators by M² = W P ...
Front Propagation in Stochastic Neural Fields
Bressloff, Paul C.; Webber, Matthew A.
2012-01-01
We analyze the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement
Seabirds and fronts: a brief overview
Schneider, David C.
1990-01-01
Oceanographic fronts are the sites of enhanced physical and biological activity, including locally concentrated feeding by marine birds. Two general hypotheses relating marine birds to fronts have been developed. The first is that enhanced primary production at fronts increases prey supply through increased animal growth, reproduction, or immigration. The second is that prey patches develop at fronts either through behavioural responses of prey to thermal or salinity gradients, or through int...
Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan
2018-03-01
In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted into non-dominated fronts and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. It not only obtains the shortest test time but also optimizes the power consumption of the 3D NoC.
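The non-dominated sorting step described above can be sketched as follows. This is a minimal O(n²)-per-front version for minimization problems; the paper's cloud-model update and competition strategy are not reproduced here.

```python
def dominates(a, b):
    """Minimization: a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Partition objective vectors into fronts; front 0 is the Pareto set."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        # A point belongs to the current front if nothing still remaining dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

For (test time, power) pairs such as [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)], the first front contains the three mutually non-dominated points, and the dominated points fall into later fronts.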
Anat Lerner
2014-04-01
We characterize the efficiency space of deterministic, dominant-strategy incentive compatible, individually rational, and Pareto-optimal combinatorial auctions in a model with two players and k nonidentical items. We examine a model with multidimensional types, private values, and quasilinear preferences for the players, with one relaxation: one of the players is subject to a publicly known budget constraint. We show that if it is publicly known that the valuation for the largest bundle is less than the budget for at least one of the players, then Vickrey-Clarke-Groves (VCG) uniquely fulfills the basic properties of being deterministic, dominant-strategy incentive compatible, individually rational, and Pareto optimal. Our characterization of the efficiency space for deterministic budget-constrained combinatorial auctions is similar in spirit to that of Maskin (2000) for Bayesian single-item constrained-efficiency auctions and comparable with Ausubel and Milgrom (2002) for non-constrained combinatorial auctions.
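For intuition, the VCG mechanism in this two-player, k-item setting (ignoring the budget constraint, which is the paper's actual subject) can be computed by brute force over all bundle splits. The encoding and function names below are illustrative, not from the paper.

```python
def vcg_two_players(k, v1, v2):
    """Exhaustive VCG for two players and k items.
    v1, v2 map a bundle (bitmask over k items) to a valuation.
    Returns (bundle1, bundle2, payment1, payment2)."""
    full = (1 << k) - 1
    # Efficient allocation: player 1 gets bundle s, player 2 the complement.
    s = max(range(full + 1), key=lambda b: v1(b) + v2(full ^ b))
    # Clarke pivot payments: the externality imposed on the other player.
    p1 = max(v2(b) for b in range(full + 1)) - v2(full ^ s)
    p2 = max(v1(b) for b in range(full + 1)) - v1(s)
    return s, full ^ s, p1, p2
```

With a single item (k = 1) and valuations 10 and 7, player 1 wins and pays the second price, 7, recovering the familiar Vickrey auction as a special case.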
Fluctuation charge effects in ionization fronts
Arrayas, Manuel; Trueba, Jose L [Area de Electromagnetismo, Universidad Rey Juan Carlos, Camino del Molino s/n, 28943 Fuenlabrada, Madrid (Spain); Baltanas, J P [Departamento de Fisica Aplicada II, Universidad de Sevilla, Av. Reina Mercedes 2, 41012 Sevilla (Spain)
2008-05-21
In this paper, we study the effects of charge fluctuations on the propagation of both negative and positive ionization fronts in streamer discharges. We show that fronts accelerate when random charge creation events are present. This effect might play a similar role to photoionization in order to make the front move faster.
Suarez, R
2001-01-01
In this paper an alternative non-parametric historical simulation approach, the Mixing Unconditional Disturbances model with constant volatility, in which price paths are generated by reshuffling disturbances of S&P 500 Index returns over the period 1950-1998, is used to estimate a Generalized Extreme Value Distribution and a Generalized Pareto Distribution. An ordinary back-test for the period 1999-2008 was performed to verify this technique, providing higher accuracy returns level under upper ...
Single Event Upsets in the ATLAS IBL Front End ASICs
Rozanov, Alexander; The ATLAS collaboration
2018-01-01
During operation at instantaneous luminosities of up to 2.1 × 10^34 cm^-2 s^-1, the front-end chips of the ATLAS innermost pixel layer (IBL) experienced single event upsets affecting their global registers as well as the settings for individual pixels, causing, among other things, loss of occupancy, noisy pixels, and silent pixels. A quantitative analysis of the single event upsets, as well as the operational issues and mitigation techniques, will be presented.
Amanifard, N.; Nariman-Zadeh, N.; Borji, M.; Khalkhali, A.; Habibdoust, A.
2008-01-01
Three-dimensional heat transfer characteristics and the pressure drop of water flow in a set of rectangular microchannels are numerically investigated using Fluent and compared with experimental results. Two metamodels based on evolved group method of data handling (GMDH)-type neural networks are then obtained for modelling both the pressure drop (ΔP) and the Nusselt number (Nu) with respect to design variables such as the geometrical parameters of the microchannels, the amount of heat flux, and the Reynolds number. Using the obtained polynomial neural networks, a multi-objective genetic algorithm (the non-dominated sorting genetic algorithm, NSGA-II) with a new diversity-preserving mechanism is then used for Pareto-based optimization of the microchannels, considering the two conflicting objectives ΔP and Nu. It is shown that some interesting and important relationships, serving as useful optimal design principles for the performance of microchannels, can be discovered by Pareto-based multi-objective optimization of the obtained polynomial metamodels representing their heat transfer and flow characteristics. Such optimal principles would not have been obtained without the combination of GMDH-type neural network modelling and the Pareto optimization approach.
TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.
Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald
2018-01-01
Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without a statistical background to identify suitable decision trees confidently and efficiently.
Pareto-Optimization of HTS CICC for High-Current Applications in Self-Field
Giordano Tomassetti
2018-01-01
The ENEA superconductivity laboratory developed a novel design for Cable-in-Conduit Conductors (CICCs) comprising stacks of 2nd-generation REBCO coated conductors. In its original version, the cable was made up of 150 HTS tapes distributed in five slots, twisted along an aluminum core. In this work, taking advantage of a 2D finite element model able to estimate the cable's current distribution in the cross-section, a multiobjective optimization procedure was implemented. The aim of the optimization was to simultaneously maximize both the engineering current density and the total current flowing inside the tapes when operating in self-field, by varying the cross-section layout. Since the optimization process involved both integer and real geometrical variables, a nonstandard, fast-converging evolutionary search algorithm was the natural choice for approaching the problem numerically. By means of this algorithm, the Pareto frontiers for the different configurations were calculated, providing the designer with a powerful tool to achieve the desired preliminary operating conditions in terms of engineering current density and/or total current, depending on the specific application field, that is, power transmission cables and bus bar systems.
Entropies of negative incomes, Pareto-distributed loss, and financial crises.
Gao, Jianbo; Hu, Jing; Mao, Xiang; Zhou, Mi; Gurbaxani, Brian; Lin, Johnny
2011-01-01
Health monitoring of the world economy is an important issue, especially in a time of profound economic difficulty worldwide. The most important aspect of health monitoring is to accurately predict economic downturns. To gain insights into how economic crises develop, we present two metrics, positive and negative income entropy and distribution analysis, to analyze the collective "spatial" and temporal dynamics of companies in nine sectors of the world economy over a 19 year period from 1990-2008. These metrics provide accurate predictive skill with a very low false-positive rate in predicting downturns. The new metrics also provide evidence of phase transition-like behavior prior to the onset of recessions. Such a transition occurs when negative pretax incomes prior to or during economic recessions transition from a thin-tailed exponential distribution to the higher entropy Pareto distribution, and develop even heavier tails than those of the positive pretax incomes. These features propagate from the crisis initiating sector of the economy to other sectors.
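The thin-tailed-to-heavy-tailed transition the authors describe can be illustrated numerically: for the same mean loss, a Pareto distribution produces far more extreme losses than an exponential one. The distribution parameters below are illustrative, not fitted to the paper's income data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Exponential (thin-tailed) and classical Pareto (heavy-tailed) losses, both with mean 1.
expo = rng.exponential(scale=1.0, size=n)
pareto = (rng.pareto(a=2.0, size=n) + 1.0) * 0.5  # shape 2, minimum 0.5, mean 1

# Fraction of losses exceeding ten times the mean.
frac_expo = np.mean(expo > 10.0)      # about exp(-10), i.e. a few in 100,000
frac_pareto = np.mean(pareto > 10.0)  # about (0.5/10)^2 = 0.0025
```

Even though both samples share the same mean, the Pareto sample exceeds ten times the mean orders of magnitude more often, which is the signature such entropy and tail metrics are designed to detect.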
Enrique Carlos Canessa-Terrazas
2016-01-01
We present the use of Data Envelopment Analysis (DEA) to prioritize and select solutions found by a Pareto Genetic Algorithm (PGA) for robust design problems in multiresponse systems with many control and noise factors. The efficiency analysis of the solutions with DEA shows that the PGA finds a good approximation to the efficient frontier. In addition, DEA is used to determine the combination of the mean-adjustment level and the variation of the system responses, with the aim of minimizing the economic cost of reaching those targets. By combining that cost with other technical and/or economic considerations, the solution that best fits a predetermined quality level can be selected more appropriately.
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
Sen, Satyabrata [ORNL
2014-01-01
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients to further improve system performance. First, we develop a parametric OFDM-STAP measurement model that considers the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and the target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramér-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the estimates of the target scattering coefficients. We present several numerical examples to demonstrate the performance improvement achieved by the adaptive waveform design.
Sensitivity analysis for decision-making using the MORE method-A Pareto approach
Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.
2009-01-01
Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method of sensitivity analysis; a new method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension proposes using a multi-objective Pareto optimal search to locate minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account shows the importance of ensuring that any sensitivity analysis accounts for these changes.
Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.
Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel
2014-01-01
Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
Modeling air quality in main cities of Peninsular Malaysia by using a generalized Pareto model.
Masseran, Nurulkamal; Razali, Ahmad Mahir; Ibrahim, Kamarulzaman; Latif, Mohd Talib
2016-01-01
The air pollution index (API) is an important figure used for measuring the quality of air in the environment. The API is determined based on the highest average value of individual indices for all the variables which include sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3), and suspended particulate matter (PM10) at a particular hour. API values that exceed the limit of 100 units indicate an unhealthy status for the exposed environment. This study investigates the risk of occurrences of API values greater than 100 units for eight urban areas in Peninsular Malaysia for the period of January 2004 to December 2014. An extreme value model, known as the generalized Pareto distribution (GPD), has been fitted to the API values found. Based on the fitted model, return period for describing the occurrences of API exceeding 100 in the different cities has been computed as the indicator of risk. The results obtained indicated that most of the urban areas considered have a very small risk of occurrence of the unhealthy events, except for Kuala Lumpur, Malacca, and Klang. However, among these three cities, it is found that Klang has the highest risk. Based on all the results obtained, the air quality standard in urban areas of Peninsular Malaysia falls within healthy limits to human beings.
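The return-period computation behind such an analysis is compact: given a GPD fitted to API exceedances over a threshold u (scale σ, shape ξ) and the per-observation exceedance rate, the mean number of observations between exceedances of a level x is 1 / (rate · SF(x − u)). A sketch with illustrative parameter values, not the paper's fitted ones:

```python
import numpy as np

def gpd_sf(y, sigma, xi):
    """GPD survival function for exceedances y >= 0."""
    y = np.asarray(y, dtype=float)
    if xi == 0.0:
        return np.exp(-y / sigma)
    return np.maximum(1.0 + xi * y / sigma, 0.0) ** (-1.0 / xi)

def return_period(level, threshold, sigma, xi, exceed_rate):
    """Mean number of observations between exceedances of `level`."""
    return 1.0 / (exceed_rate * gpd_sf(level - threshold, sigma, xi))
```

If, say, 1% of hourly readings exceed the threshold itself, the return period of the threshold is 100 observations, and the return period grows monotonically with the level considered.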
Using Pareto optimality to explore the topology and dynamics of the human connectome.
Avena-Koenigsberger, Andrea; Goñi, Joaquín; Betzel, Richard F; van den Heuvel, Martijn P; Griffa, Alessandra; Hagmann, Patric; Thiran, Jean-Philippe; Sporns, Olaf
2014-10-05
Graph theory has provided a key mathematical framework to analyse the architecture of human brain networks. This architecture embodies an inherently complex relationship between connection topology, the spatial arrangement of network elements, and the resulting network cost and functional performance. An exploration of these interacting factors and driving forces may reveal salient network features that are critically important for shaping and constraining the brain's topological organization and its evolvability. Several studies have pointed to an economic balance between network cost and network efficiency with networks organized in an 'economical' small-world favouring high communication efficiency at a low wiring cost. In this study, we define and explore a network morphospace in order to characterize different aspects of communication efficiency in human brain networks. Using a multi-objective evolutionary approach that approximates a Pareto-optimal set within the morphospace, we investigate the capacity of anatomical brain networks to evolve towards topologies that exhibit optimal information processing features while preserving network cost. This approach allows us to investigate network topologies that emerge under specific selection pressures, thus providing some insight into the selectional forces that may have shaped the network architecture of existing human brains.
Wu, Hao; Ihme, Matthias
2017-11-01
The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models are developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is the assessment as to whether a particular combustion model is adequate in representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.
Generalized Pareto for Pattern-Oriented Random Walk Modelling of Organisms' Movements.
Bertrand, Sophie; Joo, Rocío; Fablet, Ronan
2015-01-01
How organisms move and disperse is crucial to understand how population dynamics relates to the spatial heterogeneity of the environment. Random walk (RW) models are typical tools to describe movement patterns. Whether Lévy or alternative RW better describes forager movements is keenly debated. We get around this issue using the Generalized Pareto Distribution (GPD). GPD includes as specific cases Normal, exponential and power law distributions, which underlie Brownian, Poisson-like and Lévy walks respectively. Whereas previous studies typically confronted a limited set of candidate models, GPD lets the most likely RW model emerge from the data. We illustrate the wide applicability of the method using GPS-tracked seabird foraging movements and fishing vessel movements tracked by Vessel Monitoring System (VMS), both collected in the Peruvian pelagic ecosystem. The two parameters from the fitted GPD, a scale and a shape parameter, provide a synoptic characterization of the observed movement in terms of characteristic scale and diffusive property. They reveal and quantify the variability, among species and individuals, of the spatial strategies selected by predators foraging on a common prey field. The GPD parameters constitute relevant metrics for (1) providing a synthetic and pattern-oriented description of movement, (2) using top predators as ecosystem indicators and (3) studying the variability of spatial behaviour among species or among individuals with different personalities.
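Fitting the GPD and reading off the shape parameter is the core of the approach: shape ≈ 0 points to an exponential (Poisson-like) step-length law, shape > 0 to a power-law (Lévy-like) one. A minimal method-of-moments fit is sketched below; the authors most likely use maximum likelihood, and this estimator is only valid when the true shape is below 1/2.

```python
import numpy as np

def gpd_fit_moments(x):
    """Method-of-moments estimates (sigma, xi) for GPD-distributed data x > 0."""
    m, v = x.mean(), x.var()
    xi = 0.5 * (1.0 - m * m / v)         # shape: 0 exponential, > 0 heavy-tailed
    sigma = 0.5 * m * (1.0 + m * m / v)  # scale
    return sigma, xi
```

Exponentially distributed step lengths, the Poisson-like case, should yield a shape estimate near 0 and a scale estimate near the exponential's mean.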
Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup
2015-01-01
In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
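A simplified version of the threshold-scan idea can be sketched as follows. Note this is not the authors' exact AIC construction (which uses the overall sample); the per-exceedance normalization and the candidate quantiles below are assumptions made so that fits on different numbers of exceedances remain comparable.

```python
# Simplified sketch: pick a GPD tail threshold by scanning candidates and
# scoring each fit with a (per-exceedance) Akaike information criterion.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
data = rng.standard_normal(20000)          # stand-in for response samples

best_u, best_score = None, np.inf
for q in (0.80, 0.85, 0.90, 0.95):
    u = np.quantile(data, q)
    exc = data[data > u] - u               # exceedances over threshold u
    c, _, scale = genpareto.fit(exc, floc=0.0)
    loglik = genpareto.logpdf(exc, c, 0.0, scale).sum()
    score = (2 * 2 - 2 * loglik) / len(exc)  # AIC with k = 2, normalized per point
    if score < best_score:
        best_u, best_score = u, score
print(f"selected threshold u = {best_u:.3f}")
```

The GPD over the selected threshold would then be refit and used as the tail model in the reliability estimation.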
Pareto frontier analyses based decision making tool for transportation of hazardous waste
Das, Arup; Mazumder, T.N.; Gupta, A.K.
2012-01-01
Highlights: A posteriori method using a multi-objective approach to solve a bi-objective routing problem. System optimization (with multiple source–destination pairs) in a capacity-constrained network using non-dominated sorting. Tools like cost elasticity and angle-based focus used to analyze the Pareto frontier to help stakeholders make informed decisions. A real-life case study of the Kolkata Metropolitan Area to explain the workability of the model. Abstract: Transportation of hazardous wastes through a region poses an immense threat to development along its road network. The risk to the population exposed to such activities has been documented in the past. However, a comprehensive framework for routing hazardous wastes has often been overlooked. A regional hazardous waste management scheme should incorporate a comprehensive framework for hazardous waste transportation. This framework would incorporate the various stakeholders involved in decision making. Hence, a multi-objective approach is required to safeguard the interests of all the concerned stakeholders. The objective of this study is to design a methodology for routing hazardous wastes between the generating units and the disposal facilities through a capacity-constrained network. The proposed methodology uses an a posteriori method with a multi-objective approach to find non-dominated solutions for a system consisting of multiple origins and destinations. A case study of transportation of hazardous wastes in the Kolkata Metropolitan Area is provided to elucidate the methodology.
Modelling road accident blackspots data with the discrete generalized Pareto distribution.
Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María
2014-10-01
This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspot datasets analyzed.
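One common genesis of a discrete generalized Pareto distribution, which may correspond to the construction used here, discretizes the continuous GPD cdf so that P(X = k) = F(k+1) − F(k) for counts k = 0, 1, 2, …; the parameter values below are illustrative, not fitted to the DGT data.

```python
# Sketch: discrete generalized Pareto pmf via discretization of the continuous cdf.
import numpy as np
from scipy.stats import genpareto

def discrete_gpd_pmf(k, shape, scale):
    """P(X = k) = F(k + 1) - F(k) for the continuous GPD cdf F with location 0."""
    k = np.asarray(k, dtype=float)
    return (genpareto.cdf(k + 1, shape, loc=0.0, scale=scale)
            - genpareto.cdf(k, shape, loc=0.0, scale=scale))

pmf = discrete_gpd_pmf(np.arange(200), shape=0.3, scale=2.0)
print(pmf[:4].round(4))   # heavy-tailed, monotonically decreasing count model
```

A positive shape gives the heavy tail that distinguishes this family from the thinner-tailed negative binomial baseline mentioned in the abstract.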
Entropies of negative incomes, Pareto-distributed loss, and financial crises.
Jianbo Gao
Health monitoring of the world economy is an important issue, especially in a time of profound economic difficulty world-wide. The most important aspect of health monitoring is to accurately predict economic downturns. To gain insights into how economic crises develop, we present two metrics, positive and negative income entropy, together with distribution analysis, to analyze the collective "spatial" and temporal dynamics of companies in nine sectors of the world economy over the 19-year period 1990-2008. These metrics provide accurate predictive skill with a very low false-positive rate in predicting downturns. The new metrics also provide evidence of phase-transition-like behavior prior to the onset of recessions. Such a transition occurs when negative pretax incomes prior to or during economic recessions transition from a thin-tailed exponential distribution to the higher-entropy Pareto distribution, and develop even heavier tails than those of the positive pretax incomes. These features propagate from the crisis-initiating sector of the economy to other sectors.
Pareto-Optimal Evaluation of Ultimate Limit States in Offshore Wind Turbine Structural Analysis
Michael Muskulus
2015-12-01
The ultimate capacity of support structures is checked with extreme loads. This is straightforward when the limit state equations depend on a single load component, and it has become common to report maxima for each load component. However, if more than one load component is influential, e.g., both axial force and bending moments, it is not straightforward how to define an extreme load. The combination of univariate maxima can be too conservative, and many different combinations of load components can result in the worst value of the limit state equations. The use of contemporaneous load vectors is typically non-conservative. Therefore, in practice, limit state checks are done for each possible load vector, from each time step of a simulation. This is not feasible when performing reliability assessments and structural optimization, where additional, time-consuming computations are involved for each load vector. We therefore propose to use Pareto-optimal loads, which are a small set of loads that together represent all possible worst case scenarios. Simulations with two reference wind turbines show that this approach can be very useful for jacket structures, whereas the design of monopiles is often governed by the bending moment only. Even in this case, the approach might be useful when approaching the structural limits during optimization.
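The reduction from all per-time-step load vectors to a small representative set can be sketched as a non-dominated filter; the two load components and the random data below are illustrative assumptions, not turbine simulation output.

```python
# Sketch: keep only Pareto-optimal load vectors, i.e. those not dominated
# componentwise by any other vector (larger load = more critical here).
import numpy as np

def pareto_front(loads: np.ndarray) -> np.ndarray:
    """Return rows of `loads` not dominated by any other row."""
    keep = []
    for i, v in enumerate(loads):
        # v is dominated if some row is >= v everywhere and > v somewhere
        dominated = np.any(np.all(loads >= v, axis=1) & np.any(loads > v, axis=1))
        if not dominated:
            keep.append(i)
    return loads[keep]

rng = np.random.default_rng(2)
loads = rng.random((1000, 2))        # per-time-step (axial force, bending moment) pairs
front = pareto_front(loads)
print(len(front), "Pareto-optimal load vectors out of", len(loads))
```

Limit state checks then only need to be evaluated on the small front rather than on every time step, which is the computational saving the abstract describes.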
Hydrodynamic instabilities in an ablation front
Piriz, A R; Portugues, R F
2004-01-01
The hydrodynamic stability of an ablation front is studied for situations in which the wavelength of the perturbations is larger than the distance to the critical surface where the driving radiation is absorbed. An analytical model is presented, and it shows that under conditions in which the thermal flux is limited within the supercritical region of the ablative corona, the front may behave like a flame or like an ablation front, depending on the perturbation wavelength. For relatively long wavelengths the critical and ablation surfaces practically lump together into a unique surface and the front behaves like a flame, whereas for the shortest wavelengths the ablation front substructure is resolved.
Front Propagation in Stochastic Neural Fields
Bressloff, Paul C.
2012-01-01
We analyze the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein-Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive.
Penrod, Nadia M; Greene, Casey S; Moore, Jason H
2014-01-01
Molecularly targeted drugs promise a safer and more effective treatment modality than conventional chemotherapy for cancer patients. However, tumors are dynamic systems that readily adapt to these agents, activating alternative survival pathways as they evolve resistant phenotypes. Combination therapies can overcome resistance, but finding the optimal combinations efficiently presents a formidable challenge. Here we introduce a new paradigm for the design of combination therapy treatment strategies that exploits the tumor adaptive process to identify context-dependent essential genes as druggable targets. We have developed a framework to mine high-throughput transcriptomic data, based on differential coexpression and Pareto optimization, to investigate drug-induced tumor adaptation. We use this approach to identify tumor-essential genes as druggable candidates. We apply our method to a set of ER(+) breast tumor samples, collected before (n = 58) and after (n = 60) neoadjuvant treatment with the aromatase inhibitor letrozole, to prioritize genes as targets for combination therapy with letrozole treatment. We validate letrozole-induced tumor adaptation through coexpression and pathway analyses in an independent data set (n = 18). We find pervasive differential coexpression between the untreated and letrozole-treated tumor samples as evidence of letrozole-induced tumor adaptation. Based on patterns of coexpression, we identify ten genes as potential candidates for combination therapy with letrozole including EPCAM, a letrozole-induced essential gene and a target to which drugs have already been developed as cancer therapeutics. Through replication, we validate six letrozole-induced coexpression relationships and confirm the epithelial-to-mesenchymal transition as a process that is upregulated in the residual tumor samples following letrozole treatment. To derive the greatest benefit from molecularly targeted drugs it is critical to design combination
Using the Pareto principle in genome-wide breeding value estimation.
Yu, Xijiang; Meuwissen, Theo H E
2011-11-01
Genome-wide breeding value (GWEBV) estimation methods can be classified based on the prior distribution assumptions of marker effects. Genome-wide BLUP methods assume a normal prior distribution for all markers with a constant variance, and are computationally fast. In Bayesian methods, more flexible prior distributions of SNP effects are applied that allow for very large SNP effects although most are small or even zero, but these prior distributions are often also computationally demanding as they rely on Markov chain Monte Carlo sampling. In this study, we adopted the Pareto principle to weight available marker loci, i.e., we consider that x% of the loci explain (100 - x)% of the total genetic variance. Assuming this principle, it is also possible to define the variances of the prior distribution of the 'big' and 'small' SNP. The relatively few large SNP explain a large proportion of the genetic variance and the majority of the SNP show small effects and explain a minor proportion of the genetic variance. We name this method MixP, where the prior distribution is a mixture of two normal distributions, i.e. one with a big variance and one with a small variance. Simulation results, using a real Norwegian Red cattle pedigree, show that MixP is at least as accurate as the other methods in all studied cases. This method also reduces the hyper-parameters of the prior distribution from 2 (proportion and variance of SNP with big effects) to 1 (proportion of SNP with big effects), assuming the overall genetic variance is known. The mixture of normal distribution prior made it possible to solve the equations iteratively, which greatly reduced computation loads by two orders of magnitude. In the era of marker density reaching million(s) and whole-genome sequence data, MixP provides a computationally feasible Bayesian method of analysis.
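The way the Pareto principle pins down the two prior variances can be sketched as follows: a fraction p of loci ("big") explains a fraction (1 − p) of the genetic variance V_g, and vice versa. The function name, interface and numbers are illustrative, not taken from the paper.

```python
# Sketch: per-locus prior variances for a MixP-style two-component normal mixture,
# derived from the Pareto principle (p of the loci explain 1 - p of the variance).
def mixp_prior_variances(v_g: float, n_loci: int, p_big: float):
    v_big = (1.0 - p_big) * v_g / (p_big * n_loci)      # per-locus variance, 'big' SNPs
    v_small = p_big * v_g / ((1.0 - p_big) * n_loci)    # per-locus variance, 'small' SNPs
    return v_big, v_small

v_big, v_small = mixp_prior_variances(v_g=1.0, n_loci=10_000, p_big=0.10)
print(v_big, v_small)

# Sanity check: the mixture reproduces the total genetic variance.
total = 0.10 * 10_000 * v_big + 0.90 * 10_000 * v_small
```

This illustrates why only one hyper-parameter (p_big) remains free once the overall genetic variance is assumed known, as the abstract states.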
David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera
2017-04-01
This study provides guidance to hydrological researchers that enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
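The Box-Cox family evaluated above can be sketched directly: z(q) = (q^λ − 1)/λ for λ ≠ 0 and z(q) = log(q) for λ = 0, with residuals computed in transformed space so that large flows no longer dominate the error. The λ values 0, 0.2 and 0.5 are the Pareto-optimal settings reported in the abstract; the flow values are made up for illustration.

```python
# Sketch: Box-Cox transformed residuals as a heteroscedasticity fix.
import numpy as np

def boxcox(q, lam):
    """Box-Cox transform; lam = 0 is the log scheme."""
    q = np.asarray(q, dtype=float)
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

obs = np.array([0.5, 2.0, 10.0, 80.0])    # hypothetical observed daily flows (mm/d)
sim = np.array([0.6, 1.5, 12.0, 60.0])    # hypothetical simulated flows
for lam in (0.0, 0.2, 0.5):
    resid = boxcox(obs, lam) - boxcox(sim, lam)
    print(lam, np.round(resid, 3))
```

Smaller λ compresses high flows more aggressively, which is why the choice of λ trades reliability against precision across catchment types.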
The upgraded Tevatron front end
Glass, M.; Zagel, J.; Smith, P.; Marsh, W.; Smolucha, J.
1990-01-01
We are replacing the computers which support the CAMAC crates in the Fermilab accelerator control system. We want a significant performance increase, but we still want to be able to service scores of different varieties of CAMAC cards in a manner essentially transparent to console applications software. Our new architecture is based on symmetric multiprocessing. Several processors on the same bus, each running identical software, work simultaneously at satisfying different pieces of a console's request for data. We dynamically adjust the load between the processors. We can obtain more processing power by simply plugging in more processor cards and rebooting. We describe in this paper what we believe to be the interesting architectural features of the new front-end computers. We also note how we use some of the advanced features of the Multibus II bus and the Intel 80386 processor design to achieve reliability and expandability of both hardware and software.
Étienne Poncelet
2011-06-01
The east wall of Lille, fortified during the period of Spanish occupation, extends from the Porte d'Eau de la Basse-Deûle to the Saint-Sauveur fort. Its curtain walls emerge today in a landscape of ring roads and railway territories. The issue today is to profit from these historic walls in order to make some sense of the urban chaos and to reinstate some urban continuity in the city-centre walkways. Although they are not as well known as the western wall of this major fortified city, these former military properties are an exciting opportunity for tomorrow's town-planners, as the operations already underway at the Porte de Gand and the Basse Deûle suggest.
Muon front end for the neutrino factory
C. T. Rogers
2013-04-01
In the neutrino factory, muons are produced by firing high-energy protons onto a target to produce pions. The pions decay to muons and pass through a capture channel known as the muon front end, before acceleration to 12.6 GeV. The muon front end comprises a variable frequency rf system for longitudinal capture and an ionization cooling channel. In this paper we detail recent improvements in the design of the muon front end.
Wave fronts and spatiotemporal chaos in an array of coupled Lorenz oscillators
Pazo, Diego; Montejo, Noelia; Perez-Munuzuri, Vicente
2001-01-01
The effects of coupling strength and single-cell dynamics (SCD) on spatiotemporal pattern formation are studied in an array of Lorenz oscillators. Different spatiotemporal structures (stationary patterns, propagating wave fronts, short wavelength bifurcation) arise for bistable SCD, and two well differentiated types of spatiotemporal chaos for chaotic SCD (in correspondence with the transition from stationary patterns to propagating fronts). Wave-front propagation in the bistable regime is studied in terms of global bifurcation theory, while a short wavelength pattern region emerges through a pitchfork bifurcation.
Quantum electrodynamics in the light-front Weyl gauge
Przeszowski, J.; Naus, H.W.; Kalloniatis, A.C.
1996-01-01
We examine (3+1)-dimensional QED quantized in the "front form" with finite "volume" regularization, namely, in discretized light-cone quantization. Instead of the light-cone or Coulomb gauges, we impose the light-front Weyl gauge A^- = 0. The Dirac method is used to arrive at the quantum commutation relations for the independent variables. We apply "quantum-mechanical gauge fixing" to implement Gauss' law, and derive the physical Hamiltonian in terms of unconstrained variables. As in the instant form, this Hamiltonian is invariant under global residual gauge transformations, namely, displacements. On the light cone the symmetry manifests itself quite differently.
Nucleon parton distributions in a light-front quark model
Gutsche, Thomas; Lyubovitskij, Valery E.; Schmidt, Ivan
2017-01-01
Continuing our analysis of parton distributions in the nucleon, we extend our light-front quark model in order to obtain both the helicity-independent and the helicity-dependent parton distributions, analytically matching the results of global fits at the initial scale μ ∼ 1 GeV; they also contain the correct Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution. We also calculate the transverse parton, Wigner and Husimi distributions from a unified point of view, using our light-front wave functions and expressing them in terms of the parton distributions q_v(x) and δq_v(x). Our results are very relevant for the current and future program of the COMPASS experiment at SPS (CERN).
Владимир Геннадьевич Иванов
2015-12-01
This article studies the evolution of the Russian party system. The chosen methodology is based on the heuristic potential of agent-based modelling. The author analyzes various scenarios of party competition (applying the Pareto distribution) in connection with the recent increase in the number of political parties. In addition, the author predicts the level of ideological diversity of the parties' platforms (applying the principles of the Hotelling distribution) in order to evaluate their potential competitiveness in the struggle for voters.
Rica Gonen
2013-11-01
We analyze the space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal combinatorial auctions. We examine a model with multidimensional types, nonidentical items, private values and quasilinear preferences for the players with one relaxation; the players are subject to publicly-known budget constraints. We show that the space includes dictatorial mechanisms and that if dictatorial mechanisms are ruled out by a natural anonymity property, then an impossibility of design is revealed. The same impossibility naturally extends to other abstract mechanisms with an arbitrary outcome set if one maintains the original assumptions of players with quasilinear utilities, public budgets and nonnegative prices.
Alexandr Victorovich Budylskiy
2014-06-01
This article considers a multicriteria optimization approach using a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work lists alternative approaches to solving this problem and justifies the multicriteria optimization approach. The study describes the Pareto principles used in the modified genetic algorithm. We formulate the mathematical model of the project-scheduling problem and introduce the modified genetic algorithm, the ranking strategies, and the elitism approaches. The article includes a worked example.
Larsén, Xiaoli Guo; Mann, Jakob; Rathmann, Ole
2015-01-01
This study examines the various sources to the uncertainties in the application of two widely used extreme value distribution functions, the generalized extreme value distribution (GEVD) and the generalized Pareto distribution (GPD). The study is done through the analysis of measurements from...... as a guideline for applying GEVD and GPD to wind time series of limited length. The data analysis shows that, with reasonable choice of relevant parameters, GEVD and GPD give consistent estimates of the return winds. For GEVD, the base period should be chosen in accordance with the occurrence of the extreme wind...
Sergey E. Bukhtoyarov
2005-05-01
A multicriterion linear combinatorial problem with a parametric principle of optimality is considered. This principle is defined by a partitioning of the partial criteria into groups, with the Pareto preference relation within each group and the lexicographic preference relation between groups. Quasistability of the problem is investigated. This type of stability is a discrete analog of Hausdorff lower semi-continuity of the multiple-valued mapping that defines the choice function. A formula for the quasistability radius is derived for the case of the metric l∞. Some known results are stated as corollaries. Mathematics Subject Classification 2000: 90C05, 90C10, 90C29, 90C31.
5-D interpolation with wave-front attributes
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information of subsurface features like the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, next to the interpolation a pre-stack data enhancement is achieved, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example, with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI) as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that
A fast method for calculating reliable event supports in tree reconciliations via Pareto optimality.
To, Thu-Hien; Jacox, Edwin; Ranwez, Vincent; Scornavacca, Celine
2015-11-14
Given a gene and a species tree, reconciliation methods attempt to retrieve the macro-evolutionary events that best explain the discrepancies between the two tree topologies. The DTL parsimonious approach searches for a most parsimonious reconciliation between a gene tree and a (dated) species tree, considering four possible macro-evolutionary events (speciation, duplication, transfer, and loss) with specific costs. Unfortunately, many events are erroneously predicted due to errors in the input trees, inappropriate input cost values or because of the existence of several equally parsimonious scenarios. It is thus crucial to provide a measure of the reliability for predicted events. It has been recently proposed that the reliability of an event can be estimated via its frequency in the set of most parsimonious reconciliations obtained using a variety of reasonable input cost vectors. To compute such a support, a straightforward but time-consuming approach is to generate the costs slightly departing from the original ones, independently compute the set of all most parsimonious reconciliations for each vector, and combine these sets a posteriori. Another proposed approach uses Pareto-optimality to partition cost values into regions which induce reconciliations with the same number of DTL events. The support of an event is then defined as its frequency in the set of regions. However, often, the number of regions is not large enough to provide reliable supports. We present here a method to compute efficiently event supports via a polynomial-sized graph, which can represent all reconciliations for several different costs. Moreover, two methods are proposed to take into account alternative input costs: either explicitly providing an input cost range or allowing a tolerance for the over cost of a reconciliation. Our methods are faster than the region based method, substantially faster than the sampling-costs approach, and have a higher event-prediction accuracy on
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches: non-parametric methods intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u above which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, each with more than 110 years of data. We find that non-parametric methods intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods based on asymptotic properties of the upper distribution tail lead to unrealistically high estimates of the threshold and shape parameter. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
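The peaks-over-threshold fitting that these threshold-detection methods build on can be sketched as follows. This is a minimal illustration with SciPy on a synthetic record; the gamma-distributed "rainfall", the 95th-percentile threshold, and the variable names are our assumptions, not the study's data or code:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# synthetic stand-in for a daily rainfall record (mm/d)
rain = rng.gamma(shape=0.4, scale=8.0, size=40000)

u = np.quantile(rain, 0.95)        # one candidate threshold
excess = rain[rain > u] - u        # excesses above u

# fit the GP model to the excesses (location fixed at 0)
shape, _, scale = genpareto.fit(excess, floc=0)

# return level exceeded on average once every T observations
rate = excess.size / rain.size     # exceedance probability per day
def return_level(T):
    return u + scale / shape * ((T * rate) ** shape - 1.0)
```

In a threshold-detection study, this fit would be repeated over a grid of candidate thresholds u and the stability of `shape` and `scale` inspected.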
Stability of cosmological deflagration fronts
Mégevand, Ariel; Membiela, Federico Agustín
2014-05-01
In a cosmological first-order phase transition, bubbles of the stable phase nucleate and expand in the supercooled metastable phase. In many cases, the growth of bubbles reaches a stationary state, with bubble walls propagating as detonations or deflagrations. However, these hydrodynamical solutions may be unstable under corrugation of the interface. Such instability may drastically alter some of the cosmological consequences of the phase transition. Here, we study the hydrodynamical stability of deflagration fronts. We improve upon previous studies by making a more careful and detailed analysis. In particular, we take into account the fact that the equation of motion for the phase interface depends separately on the temperature and fluid velocity on each side of the wall. Fluid variables on each side of the wall are similar for weakly first-order phase transitions, but differ significantly for stronger phase transitions. As a consequence, we find that, for large enough supercooling, any subsonic wall velocity becomes unstable. Moreover, as the velocity approaches the speed of sound, perturbations become unstable on all wavelengths. For smaller supercooling and small wall velocities, our results agree with those of previous works. Essentially, perturbations on large wavelengths are unstable, unless the wall velocity is higher than a critical value. We also find a previously unobserved range of marginally unstable wavelengths. We analyze the dynamical relevance of the instabilities, and we estimate the characteristic time and length scales associated with their growth. We discuss the implications for the electroweak phase transition and its cosmological consequences.
Stability of cosmological detonation fronts
Mégevand, Ariel; Membiela, Federico Agustín
2014-05-01
The steady-state propagation of a phase-transition front is classified, according to hydrodynamics, as a deflagration or a detonation, depending on its velocity with respect to the fluid. These propagation modes are further divided into three types, namely, weak, Jouguet, and strong solutions, according to their disturbance of the fluid. However, some of these hydrodynamic modes will not be realized in a phase transition. One particular cause is the presence of instabilities. In this work we study the linear stability of weak detonations, which are generally believed to be stable. After discussing the weak detonation solution in detail, we consider small perturbations of the interface and the fluid configuration. When the balance between the driving and friction forces is taken into account, it turns out that there are actually two different kinds of weak detonations, which behave very differently as functions of the parameters. We show that the branch of stronger weak detonations is unstable, except very close to the Jouguet point, where our approach breaks down.
Grey, C.A.
1994-01-01
A picture is drawn of the current supply side of the front-end fuel cycle production capacities in the CIS. Uranium production has been steadily declining, as in the West. Market realities have been reflected in local costs of production since the break-up of the former Soviet Union, and some uneconomic mines have been closed. In terms of actual production, Kazakhstan, Russia and Uzbekistan remain among the top five uranium producers in the world. Western government action has been taken to restrict market access for natural uranium from the CIS. Reactors in the CIS continue to be supplied with fabricated fuel solely by Russia, though Western fuel fabricators have reduced Russian supplies to Eastern Europe. Russia's current dominance in conversion and enrichment services in both the CIS and Eastern Europe is likely to continue as long as the present surplus low enriched uranium stocks last and surplus production capacity exists. Market penetration in the West has been limited by government action, but in 1993 Russia still held about 20% of the world's conversion market and nearly 19% of the enrichment market. (6 figures, 2 tables, 4 references) (UK)
Application of up-front licensing
Grant, S.D.; Snell, V.G.
1995-01-01
AECL has been pioneering 'up-front' licensing of new reactor designs. The CANDU 3 design has been formally reviewed by AECB staff for a number of years. The CANDU 9 design has just started the up-front licensing process. The process gives designers, regulators and potential customers early confidence in the licensability of future plants. (author). 4 refs., 2 tabs
RPC performance vs. front-end electronics
Cardarelli, R.; Aielli, G.; Camarri, P.; Di Ciaccio, A.; Di Stante, L.; Liberti, B.; Pastori, E.; Santonico, R.; Zerbini, A.
2012-01-01
Moving the amplification from the gas to the front-end electronics was a milestone in the development of Resistive Plate Chambers. Here we discuss the historical evolution of RPCs and we show the results obtained with newly developed front-end electronics with threshold in the fC range.
Through the EU's Back and Front Doors
Adler-Nissen, Rebecca
2015-01-01
Through the EU's front- and backdoors: The selective Danish and Norwegian approaches in the Area of Freedom, Security and Justice.
Akaoka, K.; Wakaida, I.
1996-01-01
We controlled the laser wave front through a simulation experiment of a laser beam propagating through a medium. We confirmed that the RMS, defined as the quadratic mean of the laser beam wave front, dropped to between 1/3 and 1/6 of its pre-control value.
Nuclear Physics on the Light Front
Miller, Gerald A.
1999-01-01
High energy scattering experiments involving nuclei are typically analyzed in terms of light front variables. The desire to provide realistic, relativistic wave functions expressed in terms of these variables led me to try to use light front dynamics to compute nuclear wave functions. The progress is summarized here.
Wave fronts of electromagnetic cyclotron harmonic waves
Ohnuma, T.; Watanabe, T.
1982-01-01
In an inhomogeneous high-density magnetized plasma, the spatial properties of the wave fronts and ray trajectories of electromagnetic ordinary and extraordinary cyclotron harmonic waves are investigated. Those waves which are radiated from a local source are found to have wave fronts which are almost parallel to the magnetic field. Also, the reflective properties of the electromagnetic cyclotron harmonic waves are confirmed
End-Users, Front Ends and Librarians.
Bourne, Donna E.
1989-01-01
The increase in end-user searching, the advantages and limitations of front ends, and the role of the librarian in end-user searching are discussed. It is argued that librarians need to recognize that front ends can be of benefit to themselves and patrons, and to assume the role of advisors and educators for end-users. (37 references) (CLB)
Enrique Canessa
2014-01-01
A Pareto Genetic Algorithm (PGA) is presented, which finds the Pareto front in robust design problems for multiobjective systems. The PGA was designed to be applied with Taguchi's Parameter Design method, the method most frequently used by practitioners to carry out robust design. The PGA was tested with data obtained from a real system with one response and from a multiobjective process simulator with many control and noise factors. In all cases, the PGA delivered optimal solutions that meet the objectives of robust design. Furthermore, the discussion of results shows that having such solutions helps in selecting the best ones to implement in the system under study, especially when the system has many control factors and outputs.
Blocking-resistant communication through domain fronting
Fifield David
2015-06-01
We describe “domain fronting,” a versatile censorship circumvention technique that hides the remote endpoint of a communication. Domain fronting works at the application layer, using HTTPS, to communicate with a forbidden host while appearing to communicate with some other host, permitted by the censor. The key idea is the use of different domain names at different layers of communication. One domain appears on the “outside” of an HTTPS request—in the DNS request and TLS Server Name Indication—while another domain appears on the “inside”—in the HTTP Host header, invisible to the censor under HTTPS encryption. A censor, unable to distinguish fronted and nonfronted traffic to a domain, must choose between allowing circumvention traffic and blocking the domain entirely, which results in expensive collateral damage. Domain fronting is easy to deploy and use and does not require special cooperation by network intermediaries. We identify a number of hard-to-block web services, such as content delivery networks, that support domain-fronted connections and are useful for censorship circumvention. Domain fronting, in various forms, is now a circumvention workhorse. We describe several months of deployment experience in the Tor, Lantern, and Psiphon circumvention systems, whose domain-fronting transports now connect thousands of users daily and transfer many terabytes per month.
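The layering described in the abstract can be illustrated with a small sketch. The domain names are hypothetical, and the helper below only assembles the pieces of the request rather than contacting any server:

```python
def fronted_request(front_domain, hidden_domain, path="/"):
    """Assemble the layers of a domain-fronted HTTPS request.

    The censor-visible layers (DNS query, TLS SNI) carry the front
    domain; the HTTP Host header, encrypted inside the TLS tunnel,
    carries the real destination.
    """
    return {
        "dns_query": front_domain,    # what the resolver sees
        "tls_sni": front_domain,      # TLS Server Name Indication
        "http_host": hidden_domain,   # invisible to the censor
        "request_line": f"GET {path} HTTP/1.1",
    }

req = fronted_request("allowed.example.com", "blocked.example.org")
# Sending it would mean opening a TLS socket with
# server_hostname=req["tls_sni"] and writing "Host: " + req["http_host"];
# omitted here so the sketch stays offline.
```

The censor sees only `allowed.example.com` at every observable layer; the fronting service routes the request on to `blocked.example.org` based on the inner Host header.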
Ballistic propagation of turbulence front in tokamak edge plasmas
Sugita, Satoru; Itoh, Kimitaka; Itoh, Sanae-I; Yagi, Masatoshi; Fuhr, Guillaume; Beyer, Peter; Benkadda, Sadruddin
2012-01-01
The flux-driven nonlinear simulation of resistive ballooning mode turbulence with tokamak edge geometry is performed to study the non-steady component of the edge turbulence. Large-scale, dynamical transport events are investigated in a situation where the mean flow is suppressed. Two types of dynamics are observed: the radial propagation of a pulse of pressure gradient, and the appearance/disappearance of a radially elongated global structure of turbulent heat flux. Ballistic propagation is observed in the pulse of pressure gradient, which is associated with the front of turbulent heat flux. We focus on this ballistic propagation phenomenon. Both the bump of pressure gradient and the front of heat flux propagate in the inward and outward directions. It is confirmed that the strong fluctuation propagates with the pulse front, and that the number of pulses going outward is close to the number going inward. This ballistic phenomenon does not contradict turbulence spreading theory. Statistical characteristics of the ballistic propagation of pulses are evaluated and compared with the scaling laws given by turbulence spreading theory, and are found to be in qualitatively good agreement. (paper)
Characterizing Ion Flows Across a Dipolarization Front
Arnold, H.; Drake, J. F.; Swisdak, M.
2017-12-01
In light of the Magnetospheric Multiscale Mission (MMS) moving to study predominantly symmetric magnetic reconnection in the Earth's magnetotail, it is of interest to investigate various methods for determining the relative location of the satellites with respect to the x line or a dipolarization front. We use a 2.5-dimensional PIC simulation to explore the dependence of various characteristics of a front, or flux bundle, on the width of the front in the dawn-dusk direction. In particular, we characterize the ion flow in the x-GSM direction across the front. We find a linear relationship between the width of a front, w, and the maximum velocity of the ion flow in the x-GSM direction, Vxi, for small widths: Vxi/VA = (1/2)(w/di)(m VA^2/Ti)(Bz/Bx), where m, VA, di, Ti, Bz, and Bx are the ion mass, upstream Alfven speed, ion inertial length, ion temperature, and magnetic fields in the z-GSM and x-GSM directions, respectively. However, once the width reaches around 5 di, the relationship gradually approaches the well-known theoretical limit for ion flows, the upstream Alfven speed. Furthermore, we note that there is a reversal in the Hall magnetic field near the current sheet on the positive y-GSM side of the front. This reversal is most likely due to conservation of momentum in the y-GSM direction as the ions accelerate towards the x-GSM direction. This indicates that while the ions are primarily energized in the x-GSM direction by the front, they transfer energy to the electromagnetic fields in the y-GSM direction. The former energy transfer is greater than the latter, but the reversal of the Hall magnetic field drags the frozen-in electrons along with it outside of the front. These simulations should better enable researchers to determine the relative location of a satellite crossing a dipolarization front.
Managing Controversies in the Fuzzy Front End
Christiansen, John K.; Gasparin, Marta
2016-01-01
This research investigates the controversies that emerge in the fuzzy front end (FFE) and how they are closed so the innovation process can move on. The fuzzy front end has been characterized in the literature as a very critical phase, but controversies in the FFE have not been studied before. The analysis investigates the microprocesses around the controversies that emerge during the fuzzy front end of four products. Five different types of controversies are identified: profit, production, design, brand and customers/market. Each controversy represents a threat, but also an opportunity to search...
SPD very front end electronics
Luengo, S.; Gascon, D.; Comerma, A.; Garrido, L.; Riera, J.; Tortella, S.; Vilasis, X.
2006-01-01
The Scintillator Pad Detector (SPD) is part of the LHCb calorimetry system [D. Breton, The front-end electronics for LHCb calorimeters, Tenth International Conference on Calorimetry in Particle Physics, CALOR, Pasadena, 2002] that provides high-energy hadron, electron and photon candidates for the first level trigger. The SPD is designed to distinguish electrons from photons. It consists of a plastic scintillator layer, divided into about 6000 cells of different size to obtain better granularity near the beam [S. Amato, et al., LHCb technical design report, CERN/LHCC/2000-0036, 2000]. Charged particles will produce ionization in the scintillator, and photons will not. This ionization generates a light pulse that is collected by a WaveLength Shifting (WLS) fiber coiled inside the scintillator cell. The light is transmitted through a clear fiber to the readout system placed at the periphery of the detector. Due to space constraints, and in order to reduce costs, these 6000 cells are divided into groups using a MAPMT [Z. Ajaltouni, et al., Nucl. Instr. and Meth. A 504 (2003) 9] of 64 channels that provides information to the VFE readout electronics. The SPD signal has rather large statistical fluctuations because of the low number (20-30) of photoelectrons per MIP. Therefore the signal is integrated over the whole bunch crossing length of 25 ns in order to have the maximum value. Since on average about 85% of the SPD signal is within 25 ns, 15% of a sample is subtracted from the following one using an operational amplifier. The SPD VFE readout system that will be presented consists of the following components. A specific ASIC [D. Gascon, et al., Discriminator ASIC for the VFE SPD of the LHCb Calorimeter, LHCB Technical Note, LHCB 2004-xx] integrates the signal, performs the signal-tail subtraction, and compares the level obtained to a programmable threshold (to distinguish electrons from photons). An FPGA programmes the ASIC threshold and the value for
Global climate change and the equity-efficiency puzzle
Manne, Alan S.; Stephan, Gunter
2005-01-01
There is a broad consensus that the costs of abatement of global climate change can be reduced efficiently through the assignment of quota rights and through international trade in these rights. There is, however, no consensus on whether the initial assignment of emissions permits can affect the Pareto-optimal global level of abatement. This paper provides some insight into the equity-efficiency puzzle. Qualitative results are obtained from a small-scale model; then quantitative evidence of separability is obtained from MERGE, a multiregion integrated assessment model. It is shown that if all the costs of climate change can be expressed in terms of GDP losses, Pareto-efficient abatement strategies are independent of the initial allocation of emissions rights. This is the case sometimes described as 'market damages'. If, however, different regions assign different values to nonmarket damages such as species losses, different sharing rules may affect the Pareto-optimal level of greenhouse gas abatement. Separability may then be demonstrated only in specific cases (e.g. identical welfare functions or quasi-linearity of preferences or small shares of wealth devoted to abatement)
On Front Slope Stability of Berm Breakwaters
Burcharth, Hans F.
2013-01-01
The short communication presents an application of the conventional Van der Meer stability formula for low-crested breakwaters to the prediction of front slope erosion of statically stable berm breakwaters with relatively high berms. The method is verified (Burcharth, 2008) by comparison with the reshaping of a large Norwegian breakwater exposed to the North Sea waves. As a motivation for applying the Van der Meer formula, a discussion of design parameters related to berm breakwater stability formulae is given. Comparisons of front erosion predicted by the use of the Van der Meer formula with model test results, including tests presented in Sigurdarson and Van der Meer (2011), are discussed. A proposal is presented for the performance of new model tests with the purpose of developing more accurate formulae for the prediction of front slope erosion as a function of front slope, relative berm height...
Can we observe the fronts of the Antarctic Circumpolar Current using GRACE OBP?
Makowski, J.; Chambers, D. P.; Bonin, J. A.
2014-12-01
The Antarctic Circumpolar Current (ACC) and the Southern Ocean remain among the most undersampled regions of the world's oceans. The ACC is comprised of four major fronts: the Sub-Tropical Front (STF), the Polar Front (PF), the Sub-Antarctic Front (SAF), and the Southern ACC Front (SACCF). These were initially observed individually from repeat hydrographic sections, and their approximate locations globally have been quantified using all available temperature data from the World Ocean and Climate Experiment (WOCE). More recent studies based on satellite altimetry have found that the front positions are more dynamic and have shifted south by up to 1° on average since 1993. Using ocean bottom pressure (OBP) data from the current Gravity Recovery and Climate Experiment (GRACE) we have measured integrated transport variability of the ACC south of Australia. However, differentiation of variability of specific fronts has been impossible due to the necessary smoothing required to reduce noise and correlated errors in the measurements. The future GRACE Follow-on (GFO) mission and the post-2020 GRACE-II mission are expected to produce higher resolution gravity fields with a monthly temporal resolution. Here, we study the resolution and error characteristics of GRACE gravity data that would be required to resolve variations in the front locations and transport. To do this, we utilize output from a high-resolution model of the Southern Ocean, hydrology models, and ice sheet surface mass balance models; add various amounts of random and correlated errors that may be expected from GFO and GRACE-II; and quantify requirements needed for future satellite gravity missions to resolve variations along the ACC fronts.
Hern, W M
1993-01-01
honor those who advanced the cause of women's rights. They honored the physician who had to shout over hecklers to make his remarks heard. After a year of operation, the physician encountered differences with the Board of Directors of the clinic. Soon after that, he resigned and opened his own clinic with a bank loan of $7000. Within 4 years, his clinic had expanded, and he purchased its building. The harassment from antiabortion protesters continued, with broken windows, pickets, and, in February 1988, bullets fired through the front windows of the waiting room. This necessitated the installation of bullet-proof glass and a security system which cost $17,000. As of March 1, 1993, there had been 1285 acts of violence towards abortion clinics, which led to the destruction of more than 100. On March 10 of that year, a physician who performed abortions in Florida was gunned down by an anti-abortion protestor. People who provide abortions hope for legal protection and respect for their civil liberties, but they will continue to provide this service even if conditions do not improve.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for an efficient and cost-effective production system. However, there exist setup times between the groups, which need to be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating some steps of the genetic algorithm, is proposed for the current problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multiobjective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different sizes of problems.
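The Pareto-optimality criterion underlying this bi-objective comparison (minimize both makespan and total weighted tardiness) reduces to a dominance check over objective vectors. A minimal sketch, with made-up schedule objective values rather than anything from the paper:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated (makespan, weighted tardiness) vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (makespan, total weighted tardiness) pairs for candidate schedules
schedules = [(10, 7), (9, 9), (12, 3), (9, 8), (11, 3)]
front = pareto_front(schedules)   # → [(10, 7), (9, 8), (11, 3)]
```

Algorithms such as HPABC, SPEA2 and NSGA-II maintain and refine exactly this kind of non-dominated set while searching the space of group sequences.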
Kudo, Fumiya; Yoshikawa, Tomohiro; Furuhashi, Takeshi
Recently, the Multi-objective Genetic Algorithm, the application of the Genetic Algorithm to multi-objective optimization problems, has received attention in the engineering design field. In this field, the analysis of design variables in the acquired Pareto solutions, which gives designers useful knowledge about the applied problem, is as important as the acquisition of advanced solutions. This paper proposes a new visualization method using Isomap, which visualizes the geometric distances of solutions in the design variable space while considering their distances in the objective space. The proposed method enables a user to analyze the design variables of the acquired solutions in light of their relationship in the objective space. This paper applies the proposed method to the conceptual design optimization problem of a hybrid rocket engine and studies the effectiveness of the proposed method.
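The Isomap step that such a visualization rests on can be sketched from scratch (kNN graph, graph shortest paths, classical MDS). This is a bare-bones NumPy/SciPy illustration on a toy manifold, not the authors' implementation, which additionally blends in objective-space distances:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap_embed(X, n_neighbors=6, n_components=2):
    """Bare-bones Isomap: kNN graph -> geodesic distances -> classical MDS."""
    D = cdist(X, X)
    n = len(X)
    G = np.zeros((n, n))                      # 0 = no edge for dense csgraph input
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]   # skip self
        G[i, idx] = D[i, idx]
    geo = shortest_path(G, method="D", directed=False)
    # classical MDS on the squared geodesic distances
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (geo ** 2) @ H
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))

# toy "design variables" sampled along a curved 1-D manifold in 3-D
t = np.linspace(0, 3 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap_embed(X)          # 60 x 2 embedding for plotting/inspection
```

Plotting `Y` colored by an objective value is one simple way to inspect how design-variable structure relates to the objective space.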
Bíró, Gábor; Barnaföldi, Gergely Gábor; Biró, Tamás Sándor; Shen, Keming
2018-02-01
The latest high-accuracy identified hadron spectra measurements in high-energy nuclear collisions led us to the investigation of strongly interacting particles and collective effects in small systems. Since microscopical processes result in a statistical Tsallis-Pareto distribution, the fit parameters q and T are well suited for identifying system size scalings and initial conditions. Moreover, the parameter values provide information on the deviation from the extensive Boltzmann-Gibbs statistics in finite volumes. We apply here the fit procedure developed in our earlier study for proton-proton collisions [1, 2]. The observed mass and center-of-mass energy trends in hadron production are compared to RHIC dAu and LHC pPb data in different centrality/multiplicity classes. Here we present new results on the mass hierarchy in pp and pA from light to heavy hadrons.
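A fit for q and T of the kind mentioned can be sketched with one common Tsallis-Pareto parametrization of a pT spectrum (several variants exist in the literature; the parametrization, the synthetic "spectrum", and all parameter values below are our assumptions, not the authors' fit procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def tsallis(pT, A, q, T):
    """One common Tsallis-Pareto parametrization of a hadron pT spectrum."""
    return A * (1.0 + (q - 1.0) * pT / T) ** (-1.0 / (q - 1.0))

# synthetic "spectrum" generated from known parameters
pT = np.linspace(0.2, 5.0, 60)
y = tsallis(pT, A=12.0, q=1.10, T=0.16)

# recover q and T by least squares
popt, _ = curve_fit(tsallis, pT, y, p0=(10.0, 1.05, 0.20), maxfev=10000)
A_fit, q_fit, T_fit = popt
```

As q approaches 1, the form reduces to the Boltzmann-Gibbs exponential A exp(-pT/T), which is why the fitted q quantifies the deviation from extensive statistics.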
Nursamsiah; Nugroho Sugianto, Denny; Suprijanto, Jusup; Munasik; Yulianto, Bambang
2018-02-01
Information on extreme wave height return levels is required for maritime planning and management. The recommended approach for analyzing extreme waves is to model threshold excesses with the Generalized Pareto Distribution (GPD). Seasonal variation is often considered in extreme wave models. This research aims to identify the best GPD model by considering the seasonal variation of extreme waves. Using the 95th percentile as the threshold for extreme significant wave height, seasonal and non-seasonal GPD models were fitted. The Kolmogorov-Smirnov test was applied to assess the goodness of fit of each GPD model. The return values from the seasonal and non-seasonal GPD models were compared, with the definition of the return value as the criterion. The Kolmogorov-Smirnov test shows that the GPD fits the data very well in both the seasonal and non-seasonal models. The seasonal return value gives better information about the wave height characteristics.
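The per-season fit-and-test loop described here can be sketched as follows. The Weibull-generated series standing in for significant wave height and the season names are our assumptions; only the 95th-percentile threshold and the Kolmogorov-Smirnov check come from the abstract:

```python
import numpy as np
from scipy.stats import genpareto, kstest

rng = np.random.default_rng(42)
# synthetic significant-wave-height records for two assumed seasons (m)
seasons = {"west_monsoon": rng.weibull(1.4, 5000) * 2.0,
           "east_monsoon": rng.weibull(1.6, 5000) * 1.2}

pvalues = {}
for name, h in seasons.items():
    u = np.quantile(h, 0.95)              # 95th-percentile threshold
    excess = h[h > u] - u
    c, loc, scale = genpareto.fit(excess, floc=0)
    stat, p = kstest(excess, "genpareto", args=(c, loc, scale))
    pvalues[name] = p                     # large p -> GPD fits this season
```

Comparing the seasonal fits against a single fit to the pooled record is then a matter of running the same three lines on the concatenated data.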
Arnaut Dierck
2015-01-01
Designing textile antennas for real-life applications requires a design strategy that is able to produce antennas that are optimized over a wide bandwidth for often conflicting characteristics, such as impedance matching, axial ratio, efficiency, and gain, and, moreover, that is able to account for the variations that apply for the characteristics of the unconventional materials used in smart textile systems. In this paper, such a strategy, incorporating a multiobjective constrained Pareto optimization, is presented and applied to the design of a Galileo E6-band antenna with optimal return loss and wide-band axial ratio characteristics. Subsequently, different prototypes of the optimized antenna are fabricated and measured to validate the proposed design strategy.
Leimbach, Marian [Potsdam-Institut fuer Klimafolgenforschung e.V., Potsdam (Germany); Eisenack, Klaus [Oldenburg Univ. (Germany). Dept. of Economics and Statistics
2008-11-15
In this paper we present an algorithm that deals with trade interactions within a multi-region model. In contrast to traditional approaches, this algorithm is able to handle spillover externalities. Technological spillovers are expected to foster the diffusion of new technologies, which helps to lower the cost of climate change mitigation. We focus on technological spillovers which are due to capital trade. The algorithm for finding a Pareto-optimal solution in an intertemporal framework is embedded in a decomposed optimization process. The paper analyzes convergence and equilibrium properties of this algorithm. In the final part of the paper, we apply the algorithm to investigate possible impacts of technological spillovers. While benefits of technological spillovers are significant for the capital-importing region, benefits for the capital-exporting region depend on the type of regional disparities and the resulting specialization and terms-of-trade effects. (orig.)
Rozenberg, P
2017-06-01
Ultrasound measurement of cervical length in the general population enables the identification of women at risk for spontaneous preterm delivery. Vaginal progesterone is effective in reducing the risk of preterm delivery in this population. This screening associated with treatment by vaginal progesterone is cost-effective. Universal screening of cervical length can therefore be considered justified. Nonetheless, this screening will not appreciably reduce the preterm birth prevalence: in France or UK, where the preterm delivery rate is around 7.4%, this strategy would make it possible to reduce it only to 7.0%. This small benefit must be set against the considerable effort required in terms of screening ultrasound scans. Universal ultrasound screening of cervical length is the inverse of Pareto's principle: a small benefit against a considerable effort. © 2016 Royal College of Obstetricians and Gynaecologists.
Achi Rinaldi
2016-06-01
Extreme events such as extreme rainfall have been widely analyzed and are of major concern for countries around the world. There are two common distributions for extreme values: the Generalized Extreme Value (GEV) distribution and the Generalized Pareto (GP) distribution. Both have shown good performance in estimating the parameters of extreme values. This research aimed to estimate extreme-value parameters using the GEV and GP distributions, and to characterize the effects of extreme events such as floods. The rainfall data were taken from BMKG for 5 locations in DKI Jakarta. Both distributions performed well. The results showed that the Tanjung Priok station has the largest location parameter for the GEV and also the largest scale parameter for the GP, which means the greatest probability of flooding from extreme rainfall.
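The two fitting routes being compared (GEV on block maxima, GP on threshold excesses) can be sketched side by side on synthetic data. The gamma-distributed "rainfall" and the 95th-percentile threshold are our assumptions; note also that SciPy's `genextreme` uses the opposite sign convention for the shape parameter from the usual hydrology convention:

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(7)
# 30 synthetic "years" of daily rainfall (mm/d)
rain = rng.gamma(0.5, 9.0, size=(30, 365))

# GEV route: fit the annual (block) maxima
annual_max = rain.max(axis=1)
c_gev, loc_gev, scale_gev = genextreme.fit(annual_max)

# GP route: fit the excesses over a high threshold
daily = rain.ravel()
u = np.quantile(daily, 0.95)
c_gp, _, scale_gp = genpareto.fit(daily[daily > u] - u, floc=0)
```

The station-by-station comparison in the study amounts to repeating both fits per location and comparing the resulting location and scale parameters.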
José Raúl Castro
2016-02-01
This paper presents an efficient algorithm to solve the multi-objective (MO) voltage control problem in distribution networks. The proposed algorithm minimizes the following three objectives: voltage variation on pilot buses, reactive power production ratio deviation, and generator voltage deviation. This work leverages two optimization techniques: fuzzy logic, to find the optimum value of the reactive power of the distributed generation (DG), and Pareto optimization, to find the optimal value of the pilot bus voltage that produces lower losses under the constraint that the voltage remains within established limits. Variable loads and DGs are taken into account in this paper. The algorithm is tested on an IEEE 13-node test feeder and the results show the effectiveness of the proposed model.
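The Pareto step in such a method amounts to filtering candidate settings down to the non-dominated ones. A minimal generic sketch follows; the candidate objective triples are hypothetical placeholders, not results from the IEEE 13-node feeder:

```python
def dominates(q, p):
    """True if q is at least as good as p in every minimized objective
    and strictly better in at least one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and any(
        qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical candidate settings scored on the paper's three objectives:
# (voltage variation, reactive-ratio deviation, generator voltage deviation)
candidates = [(0.02, 0.10, 0.05), (0.01, 0.15, 0.07),
              (0.03, 0.12, 0.06), (0.02, 0.09, 0.04)]
front = pareto_front(candidates)
```

Here `front` keeps only the settings for which no other candidate is at least as good in all three objectives and strictly better in one.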
Solomon, Sorin; Levy, Moshe
2001-06-01
The LLS stock market model (see Levy, Levy and Solomon, Academic Press 2000, "Microscopic Simulation of Financial Markets: From Investor Behavior to Market Phenomena", for a review) is a model of heterogeneous quasi-rational investors operating in a complex environment about which they have incomplete information. We review the main features of this model and several of its extensions. We study the effects of investor heterogeneity and show that predation, competition, or symbiosis may occur between different investor populations. The dynamics of the LLS model lead to the empirically observed Pareto wealth distribution. Many properties observed in actual markets appear as natural consequences of the LLS dynamics: a truncated Levy distribution of short-term returns, excess volatility, a "U-shaped" return autocorrelation pattern, and a positive correlation between volume and absolute returns.
Hurford, Anthony; Harou, Julien
2014-05-01
Water-related ecosystem services are important to the livelihoods of the poorest sectors of society in developing countries. Degradation or loss of these services can increase the vulnerability of people, decreasing their capacity to support themselves. New approaches are needed to help guide water resources management decisions which account for the non-market value of ecosystem goods and services. In case studies from Brazil and Kenya we demonstrate the capability of many-objective Pareto-optimal trade-off analysis to help decision makers balance economic and non-market benefits from the management of existing multi-reservoir systems. A multi-criteria search algorithm is coupled to a water resources management simulator of each basin to generate a set of Pareto-approximate trade-offs representing the best-case management decisions. In both cases, volume-dependent reservoir release rules are the management decisions being optimised. In the Kenyan case we further assess the impacts of proposed irrigation investments, and how the possibility of new investments affects the system's trade-offs. During the multi-criteria search (optimisation), the performance of different sets of management decisions (policies) is assessed against case-specific objective functions representing provision of water supply and irrigation, hydropower generation and maintenance of ecosystem services. Results are visualised as trade-off surfaces to help decision makers understand the impacts of different policies on a broad range of stakeholders and to assist in decision-making. These case studies show how the approach can reveal unexpected opportunities for win-win solutions, and quantify the trade-offs between investing to increase agricultural revenue and negative impacts on protected ecosystems which support rural livelihoods.
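The coupling of a search over policies to a simulator can be sketched with a deliberately tiny stand-in: a one-parameter allocation rule replaces the basin simulator and an exhaustive sweep replaces the multi-criteria search. Every name, number, and the rule itself are illustrative assumptions, not the authors' model:

```python
import random

def allocate(frac, inflows, demand=8.0, env_req=4.0):
    """Toy stand-in for a release policy: a fraction `frac` of each inflow
    is diverted for supply, the remainder stays in the river. Returns two
    objectives to minimize: total supply deficit and environmental deficit."""
    supply_def = sum(max(0.0, demand - frac * q) for q in inflows)
    env_def = sum(max(0.0, env_req - (1.0 - frac) * q) for q in inflows)
    return supply_def, env_def

random.seed(7)
inflows = [random.uniform(4.0, 14.0) for _ in range(240)]
evaluated = [(allocate(a / 20, inflows), a / 20) for a in range(1, 20)]
# Pareto-approximate set: policies not bettered in both objectives at once
front = [(obj, a) for obj, a in evaluated
         if not any(o[0] <= obj[0] and o[1] <= obj[1] and o != obj
                    for o, _ in evaluated)]
```

Plotting the objective pairs in `front` gives exactly the kind of trade-off curve the case studies visualise: moving along it buys supply reliability at the cost of environmental flow.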
Park, Jungsoo; Song, Soonho; Lee, Kyo Seung
2015-01-01
Highlights: • Model-based control of a dual-loop EGR system is performed. • An EGR split index is developed to provide a non-dimensional index for optimization. • EGR rates are calibrated using the EGR split index at specific operating conditions. • Multi-objective Pareto optimization is performed to minimize NOx and BSFC. • Optimum split strategies are suggested with LP-rich dual-loop EGR at high load. - Abstract: A proposed dual-loop exhaust-gas recirculation (EGR) system that combines the features of high-pressure (HP) and low-pressure (LP) systems is considered a key technology for improving the combustion behavior of diesel engines. The fractions of HP and LP flows, known as the EGR split, for a given dual-loop EGR rate play an important role in determining the engine performance and emission characteristics. Therefore, identifying the proper EGR split is important for the engine optimization and calibration processes, which affect the EGR response and deNOx efficiencies. The objective of this research was to develop a dual-loop EGR split strategy using numerical analysis and one-dimensional (1D) cycle simulation. A control system was modeled by coupling the 1D cycle simulation and the control logic. An EGR split index was developed to investigate the HP/LP split effects on the engine performance and emissions. Using the model-based control system, a multi-objective Pareto (MOP) analysis was used to minimize the NOx formation and fuel consumption through optimized engine operating parameters. The MOP analysis was performed using a response surface model extracted from Latin hypercube sampling as a fractional factorial design of experiment. By using an LP-rich dual-loop EGR, a high EGR rate was attained at low, medium, and high engine speeds, increasing the applicable load ranges compared to base conditions.
Relating precipitation to fronts at a sub-daily basis
Hénin, Riccardo; Ramos, Alexandre M.; Liberato, Margarida L. R.; Gouveia, Célia
2017-04-01
.M. Trigo and M.L.R. Liberato (2014) A ranking of high-resolution daily precipitation extreme events for the Iberian Peninsula, Atmospheric Science Letters 15, 328-334. doi: 10.1002/asl2.507. Shemm S., I. Rudeva and I. Simmonds (2014) Extratropical fronts in the lower troposphere - global perspectives obtained from two automated methods. Quarterly Journal of the Royal Meteorological Society, 141: 1686-1698, doi: 10.1002/qj.2471. ACKNOWLEDGEMENTS This work is supported by FCT - project UID/GEO/50019/2013 - Instituto Dom Luiz. Fundação para a Ciência e a Tecnologia, Portugal (FCT) is also providing R. Hénin's doctoral grant (PD/BD/114479/2016) and A.M. Ramos's postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
Developing a Front-of-Package Labelling System in Guatemala to ...
With one of the highest per capita consumption rates of soft drinks globally, the people ... a front-of-package label on energy-dense, nutrient-poor foods and beverages to ... New labels for sugar-sweetened beverages This research will target ...
Brodsky, Stanley J.; de Teramond, Guy F.; /SLAC /Southern Denmark U., CP3-Origins /Costa Rica U.
2011-01-10
AdS/QCD, the correspondence between theories in a dilaton-modified five-dimensional anti-de Sitter space and confining field theories in physical space-time, provides a remarkable semiclassical model for hadron physics. Light-front holography allows hadronic amplitudes in the AdS fifth dimension to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The result is a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate ζ which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound state wavefunctions as a function of the invariant mass of the constituents. The hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. Higher Fock states with extra quark-antiquark pairs also arise. The soft-wall model also predicts the form of the nonperturbative effective coupling and its β-function. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method to systematically include QCD interaction terms. Some novel features of QCD are discussed, including the consequences of confinement for quark and gluon condensates. A method for computing the hadronization of quark and gluon jets at the amplitude level is outlined.
Global shape optimization of airfoil using multi-objective genetic algorithm
Lee, Ju Hee; Lee, Sang Hwan [Hanyang Univ., Seoul (Korea, Republic of)]; Park, Kyoung Woo [Hoseo Univ., Asan (Korea, Republic of)]
2005-01-01
The shape optimization of an airfoil has been performed for an incompressible viscous flow. In this study, Pareto frontier sets, which are global and non-dominated solutions, can be obtained without various weighting factors by using a multi-objective genetic algorithm. An NACA0012 airfoil is considered as the baseline model, and the profile of the airfoil is parameterized and rebuilt with four Bezier curves. The two curves from the leading edge to the maximum thickness are composed of five control points, and the rest, from the maximum thickness to the trailing edge, are composed of four control points. There are eighteen design variables and two objective functions, the lift and drag coefficients. A generation is made up of forty-five individuals. After fifteen generations, twenty Pareto individuals are obtained. One Pareto solution, the best at reducing drag, lowers the drag by 13% and improves the lift-drag ratio by 2%. Another Pareto solution, focused on increasing lift, improves the lift force by 61% while sustaining its drag force, compared to the baseline model.
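The Bezier parameterization used here can be sketched with de Casteljau evaluation of one segment; the five control points below are hypothetical placeholders, not the optimized NACA0012 values:

```python
def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by de Casteljau's
    algorithm; control_points is a list of (x, y) tuples."""
    pts = list(control_points)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical five-point segment from the leading edge toward the point
# of maximum thickness of an upper surface
upper_front = [(0.0, 0.0), (0.0, 0.03), (0.1, 0.05), (0.2, 0.06), (0.3, 0.06)]
curve = [bezier(upper_front, i / 50) for i in range(51)]
```

Moving the interior control points is what the genetic algorithm's eighteen design variables amount to: each candidate airfoil is one choice of control-point coordinates.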
Light-Front Holography and the Light-Front Schrodinger Equation
Brodsky, Stanley J.; de Teramond, Guy
2012-08-15
One of the most important nonperturbative methods for solving QCD is quantization at fixed light-front time τ = t + z/c, Dirac's 'Front Form'. The eigenvalues of the light-front QCD Hamiltonian predict the hadron spectrum and the eigensolutions provide the light-front wavefunctions which describe hadron structure. More generally, we show that the valence Fock-state wavefunctions of the light-front QCD Hamiltonian satisfy a single-variable relativistic equation of motion, analogous to the nonrelativistic radial Schrodinger equation, with an effective confining potential U which systematically incorporates the effects of higher quark and gluon Fock states. We outline a method for computing the required potential from first principles in QCD. The holographic mapping of gravity in AdS space to QCD, quantized at fixed light-front time, yields the same light-front Schrodinger equation; in fact, the soft-wall AdS/QCD approach provides a model for the light-front potential which is color-confining and reproduces well the light-hadron spectrum. One also derives via light-front holography a precise relation between the bound-state amplitudes in the fifth dimension of AdS space and the boost-invariant light-front wavefunctions describing the internal structure of hadrons in physical space-time. The elastic and transition form factors of the pion and the nucleons are found to be well described in this framework. The light-front AdS/QCD holographic approach thus gives a frame-independent first approximation of the color-confining dynamics, spectroscopy, and excitation spectra of relativistic light-quark bound states in QCD.
Light-Front Quantization of Gauge Theories
Brodsky, Stanley J.
2003-03-25
Light-front wavefunctions provide a frame-independent representation of hadrons in terms of their physical quark and gluon degrees of freedom. The light-front Hamiltonian formalism provides new nonperturbative methods for obtaining the QCD spectrum and eigensolutions, including resolvant methods, variational techniques, and discretized light-front quantization. A new method for quantizing gauge theories in light-cone gauge using Dirac brackets to implement constraints is presented. In the case of the electroweak theory, this method of light-front quantization leads to a unitary and renormalizable theory of massive gauge particles, automatically incorporating the Lorentz and 't Hooft conditions as well as the Goldstone boson equivalence theorem. Spontaneous symmetry breaking is represented by the appearance of zero modes of the Higgs field, leaving the light-front vacuum equal to the perturbative vacuum. I also discuss an "event amplitude generator" for automatically computing renormalized amplitudes in perturbation theory. The importance of final-state interactions for the interpretation of diffraction, shadowing, and single-spin asymmetries in inclusive reactions such as deep inelastic lepton-hadron scattering is emphasized.
Statistical Physics and Light-Front Quantization
Raufeisen, J
2004-08-12
Light-front quantization has important advantages for describing relativistic statistical systems, particularly systems for which boost invariance is essential, such as the fireball created in a heavy-ion collision. In this paper the authors develop light-front field theory at finite temperature and density with special attention to quantum chromodynamics. They construct the most general form of the statistical operator allowed by the Poincare algebra and show that there are no zero-mode related problems when describing phase transitions. They then demonstrate a direct connection between densities in light-front thermal field theory and the parton distributions measured in hard scattering experiments. The approach thus generalizes the concept of a parton distribution to finite temperature. In light-front quantization, the gauge-invariant Green's functions of a quark in a medium can be defined in terms of just 2-component spinors and have a much simpler spinor structure than the equal-time fermion propagator. From the Green's function, the authors introduce the new concept of a light-front density matrix, whose matrix elements are related to forward and to off-diagonal parton distributions. Furthermore, they explain how thermodynamic quantities can be calculated in discretized light-cone quantization, which is applicable at high chemical potential and is not plagued by the fermion-doubling problems.
Sharp fronts within geochemical transport problems
Grindrod, P.
1995-01-01
The authors consider some reactive geochemical transport problems in groundwater systems. When incoming fluid is in disequilibrium with the mineralogy, sharp transition fronts may develop. They show that this is a generic property for a class of systems where the timescales associated with reaction and diffusion phenomena are much shorter than those associated with advective transport. Such multiple-timescale problems are relevant to a variety of processes in natural systems: mathematically, methods of singular perturbation theory reduce the dimension of the problems to be solved locally. Furthermore, they consider how spatially heterogeneous mineralogy can impact the propagation of sharp geochemical fronts. The authors developed an asymptotic approach in which they solve equations for the evolving geometry of the front and indicate how the non-smooth perturbations due to natural heterogeneity of the mineralogy and underlying groundwater flow field are balanced against the smoothing effect of diffusion/dispersive processes. Fronts are curvature damped, and the results here indicate the generic nature of sharp front propagation within both model (idealized) and natural (heterogeneous) geochemical systems.
Systematic front distortion and presence of consecutive fronts in a precipitation system
Volford, A.; Izsak, F.; Ripszam, M.; Lagzi, I.
2006-01-01
A new simple reaction-diffusion system is presented, focusing on pattern formation phenomena such as consecutive precipitation fronts and distortion of the precipitation front. The chemical system investigated here is based on the amphoteric property of aluminum hydroxide and exhibits two unique phenomena.
Optimal back-to-front airplane boarding
Bachmat, Eitan; Khachaturov, Vassilii; Kuperman, Ran
2013-06-01
The problem of finding an optimal back-to-front airplane boarding policy is explored, using a mathematical model that is related to the 1+1 polynuclear growth model with concave boundary conditions and to causal sets in gravity. We study all airplane configurations and boarding group sizes. Optimal boarding policies for various airplane configurations are presented. Detailed calculations are provided along with simulations that support the main conclusions of the theory. We show that the effectiveness of back-to-front policies undergoes a phase transition when passing from lightly congested airplanes to heavily congested airplanes. The phase transition also affects the nature of the optimal or near-optimal policies. Under what we consider to be realistic conditions, optimal back-to-front policies lead to a modest 8-12% improvement in boarding time over random (no policy) boarding, using two boarding groups. Having more than two groups is not effective.
Friction forces on phase transition fronts
Mégevand, Ariel
2013-01-01
In cosmological first-order phase transitions, the microscopic interaction of the phase transition fronts with non-equilibrium plasma particles manifests itself macroscopically as friction forces. In general, it is a nontrivial problem to compute these forces, and only two limits have been studied, namely, that of very slow walls and, more recently, ultra-relativistic walls which run away. In this paper we consider ultra-relativistic velocities and show that stationary solutions still exist when the parameters allow the existence of runaway walls. Hence, we discuss the necessary and sufficient conditions for the fronts to actually run away. We also propose a phenomenological model for the friction, which interpolates between the non-relativistic and ultra-relativistic values. Thus, the friction depends on two friction coefficients which can be calculated for specific models. We then study the velocity of phase transition fronts as a function of the friction parameters, the thermodynamic parameters, and the amount of supercooling
PIV tracer behavior on propagating shock fronts
Glazyrin, Fyodor N; Mursenkova, Irina V; Znamenskaya, Irina A
2016-01-01
The present work was aimed at the quantitative particle image velocimetry (PIV) measurement of a velocity field near the front of a propagating shock wave and the study of the dynamics of liquid tracers crossing the shock front. For this goal, a shock tube with a rectangular cross-section (48 × 24 mm) was used. The flat shock wave with Mach numbers M = 1.4–2.0 propagating inside the tube channel was studied, as well as an expanding shock wave propagating outside the channel with M = 1.2–1.8 at its main axis. The PIV imaging of the shock fronts was carried out with an aerosol of dioctyl sebacate (DEHS) as tracer particles. The pressures of the gas in front of the shock waves studied ranged from 0.013 MPa to 0.1 MPa in the series of experiments. The processed PIV data, compared to the 1D normal shock theory, yielded consistent values of wake velocity immediately behind the plane shock wave. Special attention was paid to the blurring of the velocity jump on the shock front due to the inertial particle lag and peculiarities of the PIV technique. A numerical algorithm was developed for analysis and correction of the PIV data on the shock fronts, based on equations of particle-flow interaction. By application of this algorithm, the effective particle diameter of the DEHS aerosol tracers was estimated as 1.03 ± 0.12 μm. A number of different formulations for particle drag were tested with this algorithm, with varying success. The results show consistency with previously reported experimental data obtained for cases of stationary shock waves.
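The inertial particle lag behind a shock can be sketched with the simplest linear (Stokes) drag model, under which the tracer velocity relaxes exponentially toward the gas velocity. This is only a first-order stand-in for the drag formulations the paper compares, and the numbers below (a ~1 μm DEHS droplet in air, an illustrative post-shock velocity) are assumptions:

```python
import math

def stokes_response_time(d_p, rho_p, mu_gas):
    """Stokes-drag particle response time: tau = rho_p * d_p**2 / (18 * mu)."""
    return rho_p * d_p ** 2 / (18.0 * mu_gas)

def particle_velocity(t, u_before, u_after, tau):
    """Tracer velocity a time t after crossing a step change in gas velocity,
    from the linear drag law du_p/dt = (u_gas - u_p) / tau."""
    return u_after + (u_before - u_after) * math.exp(-t / tau)

# Assumed values: ~1 um DEHS droplet (density ~912 kg/m^3) in air
tau = stokes_response_time(1.0e-6, 912.0, 1.8e-5)

# Illustrative velocity jump across the shock in the lab frame
u1, u2 = 0.0, 230.0
lag_95 = 3.0 * tau  # ~95% relaxation after three response times
```

The product of `lag_95` and the post-shock velocity gives a relaxation distance of a few hundred micrometers, which is the physical origin of the blurred velocity jump the correction algorithm removes.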
RF front-end world class designs
Love, Janine
2009-01-01
All the design and development inspiration and direction a hardware engineer needs in one blockbuster book! Janine Love, site editor for RF Design Line, columnist, and author, has selected the very best RF design material from the Newnes portfolio and compiled it into this volume. The result is a book covering the gamut of RF front end design, from antenna and filter design fundamentals to optimized layout techniques, with a strong pragmatic emphasis. In addition to specific design techniques and practices, this book also discusses various approaches to solving RF front end design problems.
THREE PERSPECTIVES ON MANAGING FRONT END INNOVATION
Jensen, Anna Rose Vagn; Clausen, Christian; Gish, Liv
2018-01-01
This paper presents three complementary perspectives on the management of front end innovation: a process model perspective, a knowledge perspective and a translational perspective. While the first two perspectives are well established in the literature, we offer the translation perspective as a complementary perspective. The paper combines a literature review with an empirical examination of the application of these multiple perspectives across three cases of front end of innovation (FEI) management in mature product developing companies. While the process models represent the dominant, albeit rather ... to represent an emergent approach in managing FEI where process models, knowledge strategies and objects become integrated elements in more advanced navigational strategies for key players.
Discretionary Power on the Front-line
Sanden, Guro Refsum; Lønsmann, Dorte
This article investigates the communication practices used by front-line employees to cross language boundaries in the context of English language policies implemented by the management of three multinational corporations (MNCs) headquartered in Scandinavia. Based on an analysis of interview and document data, our findings show that employees face a number of different language boundaries in their everyday work, and that ad hoc and informal solutions in many cases are vital for successful cross-language communication. We introduce the concept of 'discretionary power' to explain how and why front...
Sanden, Guro Refsum; Lønsmann, Dorte
This article investigates how front-line employees respond to English language policies implemented by the management of three multinational corporations (MNCs) headquartered in Scandinavia. Based on interview and document data, the article examines the ways in which front-line employees cross language boundaries in their everyday work. Despite official English language policies in the three companies, our findings show that employees face a number of different language boundaries, and that ad hoc and informal solutions in many cases are vital for successful cross-language communication. Drawing...
Pole solutions for flame front propagation
Kupervasser, Oleg
2015-01-01
This book deals with the mathematical solution of the unsteady flame propagation equations. New original mathematical methods for solving complex non-linear equations and investigating their properties are presented. Pole solutions for flame front propagation are developed. Premixed flames and filtration combustion have remarkable properties: the complex nonlinear integro-differential equations for these problems have exact analytical solutions described by the motion of poles in a complex plane. Instead of complex equations, a finite set of ordinary differential equations is applied. These solutions help to investigate, analytically and numerically, the properties of the flame front propagation equations.
Light-front nuclear shell-model
Johnson, M.B.
1990-01-01
I examine the effects of nuclear structure on high-energy, high-momentum transfer processes, specifically the EMC effect. For pedagogical reasons, a fictitious but simple two-body system consisting of two equal-mass particles interacting in a harmonic oscillator potential has been chosen. For this toy nucleus, I utilize a widely-used link between instant-form and light-front dynamics, formulating nuclear structure and deep-inelastic scattering consistently in the laboratory system. Binding effects are compared within conventional instant-form and light-front dynamical frameworks, with appreciable differences being found in the two cases. 20 refs
Light-Front Dynamics in Hadron Physics
Ji, C.R.; Bakker, B.L.G.; Choi, H.M.
2013-01-01
Light-front dynamics (LFD) plays an important role in the analyses of relativistic few-body systems. As evidenced by the recent studies of generalized parton distributions (GPDs) in hadron physics, a natural framework for a detailed study of hadron structures is LFD, due to its direct application in
Positional Velar Fronting: An Updated Articulatory Account
Byun, Tara McAllister
2012-01-01
This study develops the hypothesis that the child-specific phenomenon of positional velar fronting can be modeled as the product of phonologically encoded articulatory limitations unique to immature speakers. Children have difficulty executing discrete tongue movements, preferring to move the tongue and jaw as a single unit. This predisposes the…
Magnetohydrodynamical Effects on Nuclear Deflagration Fronts in Type Ia Supernovae
Hristov, Boyan; Collins, David C.; Hoeflich, Peter; Weatherford, Charles A.; Diamond, Tiara R.
2018-05-01
This article presents a study of the effects of magnetic fields on non-distributed nuclear burning fronts as a possible solution to a fundamental problem for the thermonuclear explosion of a Chandrasekhar-mass (M_Ch) white dwarf (WD), the currently favored scenario for the majority of Type Ia SNe. All existing 3D hydrodynamical simulations predict strong global mixing of the burning products due to Rayleigh–Taylor (RT) instabilities, which contradicts observations. As a first step toward studying the flame physics, we present a set of computational magnetohydrodynamic models in rectangular flux tubes, resembling a small inner region of a WD. We consider initial magnetic fields up to 10^12 G of various orientations. We find an increasing suppression of RT instabilities starting at about 10^9 G. The front speed tends to decrease with increasing field magnitude up to about 10^11 G. For even higher fields, new small-scale, finger-like structures develop, which increase the burning speed by a factor of 3 to 4 above the field-free RT-dominated regime. We suggest that the new instability may provide sufficiently accelerated energy production during the distributed burning regime to exceed the Chapman–Jouguet limit and trigger a detonation. Finally, we discuss the possible origins of high magnetic fields during the final stage of the progenitor evolution or the explosion.
QCD and Light-Front Holography
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins; de Teramond, Guy F.; /Costa Rica U.
2010-10-27
The soft-wall AdS/QCD model, modified by a positive-sign dilaton metric, leads to a remarkable one-parameter description of nonperturbative hadron dynamics. The model predicts a zero-mass pion for zero-mass quarks and a Regge spectrum of linear trajectories with the same slope in the leading orbital angular momentum L of hadrons and the radial quantum number N. Light-Front Holography maps the amplitudes, which are functions of the fifth-dimension variable z of anti-de Sitter space, to a corresponding hadron theory quantized on the light front. The resulting Lorentz-invariant relativistic light-front wave equations are functions of an invariant impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron at equal light-front time. The result is a semi-classical, frame-independent first approximation to the spectra and light-front wavefunctions of meson and baryon light-quark bound states, which in turn predict the behavior of the pion and nucleon form factors. The theory implements chiral symmetry in a novel way: the effects of chiral symmetry breaking increase as one goes toward large interquark separation, consistent with spectroscopic data, and the hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. The soft-wall model also predicts the form of the nonperturbative effective coupling α_s^AdS(Q) and its β-function, which agrees with the effective coupling α_g1 extracted from the Bjorken sum rule. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian, or by applying the Lippmann-Schwinger method in order to systematically include the QCD interaction terms. A new perspective on quark and gluon condensates is also reviewed.
New results in light-front phenomenology
Brodsky, S.J.
2005-01-01
The light-front quantization of gauge theories in light-cone gauge provides a frame-independent wavefunction representation of relativistic bound states, simple forms for current matrix elements, explicit unitarity, and a trivial vacuum. In this talk I review the theoretical methods and constraints which can be used to determine these central elements of QCD phenomenology. The freedom to choose the light-like quantization four-vector provides an explicitly covariant formulation of light-front quantization and can be used to determine the analytic structure of light-front wave functions and to give a kinematical definition of angular momentum. The AdS/CFT correspondence of large-N_c supergravity theory in higher-dimensional anti-de Sitter space with supersymmetric QCD in four-dimensional space-time has interesting implications for hadron phenomenology in the conformal limit, including an all-orders demonstration of counting rules for exclusive processes. String/gauge duality also predicts the QCD power-law behavior of light-front Fock-state hadronic wavefunctions with arbitrary orbital angular momentum at high momentum transfer. The form of these near-conformal wavefunctions can be used as an initial ansatz for a variational treatment of the light-front QCD Hamiltonian. The light-front Fock-state wavefunctions encode the bound-state properties of hadrons in terms of their quark and gluon degrees of freedom at the amplitude level. The nonperturbative Fock-state wavefunctions contain intrinsic gluons and sea quarks at any scale Q, with asymmetries such as s(x) ≠ s-bar(x), u-bar(x) ≠ d-bar(x). Intrinsic charm and bottom quarks appear at large x in the light-front wavefunctions since this minimizes the invariant mass and off-shellness of the higher Fock state. In the case of nuclei, the Fock-state expansion contains 'hidden color' states which cannot be classified in terms of nucleonic degrees of freedom. I also briefly review recent analyses which show that some
Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F
2016-06-07
IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a gold-standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next, it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold-standard plans for the rectum D_mean, V_65 and V_75, and the D_mean of the anus and the bladder. For the rectum, the prediction errors (predicted minus achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D_mean, -1.0 ± 1.6% for V_65, and -0.4 ± 1.1% for V_75. For the D_mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate
QCD Phenomenology and Light-Front Wavefunctions
Brodsky, Stanley J.
2001-01-01
A natural calculus for describing the bound-state structure of relativistic composite systems in quantum field theory is the light-front Fock expansion, which encodes the properties of a hadron in terms of a set of frame-independent n-particle wavefunctions. Light-front quantization in the doubly-transverse light-cone gauge has a number of remarkable advantages, including explicit unitarity, a physical Fock expansion, the absence of ghost degrees of freedom, and the decoupling properties needed to prove factorization theorems in high-momentum-transfer inclusive and exclusive reactions. A number of applications are discussed in these lectures, including semileptonic B decays, two-photon exclusive reactions, diffractive dissociation into jets, and deeply virtual Compton scattering. The relation of the intrinsic sea to the light-front wavefunctions is discussed. Light-front quantization can also be used in the Hamiltonian form to construct an event generator for high-energy physics reactions at the amplitude level. The light-cone partition function, summed over exponentially weighted light-cone energies, has simple boost properties which may be useful for studies in heavy-ion collisions. I also review recent work which shows that the structure functions measured in deep inelastic lepton scattering are affected by final-state rescattering, thus modifying their connection to light-front probability distributions. In particular, the shadowing of nuclear structure functions is due to destructive interference effects from leading-twist diffraction of the virtual photon, physics not included in the nuclear light-cone wavefunctions.
Setiawan, R.
2018-05-01
In this paper, the Economic Order Quantity (EOQ) of a vendor-buyer supply-chain model under probabilistic conditions with imperfect-quality items is analysed. The analysis is delivered using two concepts from game theory, Stackelberg equilibrium and Pareto optimality, under non-cooperative and cooperative games, respectively. A further result is a comparison of the optimal outcomes of the integrated scheme and the game-theoretic approach, based on analytical and numerical results using appropriate simulation data.
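The baseline quantity that both the integrated and game-theoretic analyses build on is the classic EOQ formula. A minimal sketch follows; the numbers are illustrative only, and the paper's probabilistic vendor-buyer game extensions are not reproduced here:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2*D*K / h).

    demand_rate D: units per period; order_cost K: fixed cost per order;
    holding_cost h: holding cost per unit per period. This is only the
    textbook baseline, not the paper's vendor-buyer model.
    """
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

# Illustrative numbers (not from the paper): D = 1000, K = 50, h = 2.
print(round(eoq(1000, 50, 2), 2))  # 223.61
```

At Q*, the per-period ordering cost D*K/Q and holding cost h*Q/2 are equal, which is why the square-root form minimizes their sum.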
Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.
2017-04-01
Forbes Magazine published its list of the world's two thousand leading (strongest) publicly traded companies (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power law in the higher part. These two-class per-capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the upper Pareto class is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.
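A Pareto power-law tail such as the one reported for the upper class is usually characterized by its exponent. A small sketch of the standard maximum-likelihood estimate on synthetic data follows; the threshold and exponent are illustrative, not the G-2000 values:

```python
import numpy as np

def pareto_alpha_mle(x, x_min):
    """Continuous-Pareto MLE for the tail exponent:
    alpha_hat = n / sum(ln(x_i / x_min)) over observations with x_i >= x_min."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    return tail.size / np.log(tail / x_min).sum()

# Synthetic Pareto sample via inverse-CDF sampling: x = x_min * U**(-1/alpha).
rng = np.random.default_rng(1)
alpha_true, x_min = 1.5, 1.0
sample = x_min * rng.random(50_000) ** (-1.0 / alpha_true)
print(round(pareto_alpha_mle(sample, x_min), 1))  # 1.5
```

With 50,000 tail observations the standard error of the estimate is roughly alpha/sqrt(n), i.e. well below 0.01 here; on real data the choice of x_min dominates the uncertainty.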
Gharari, Rahman; Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi
2016-01-01
In this research, for the first time, a new optimization method, i.e., the strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of the Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (K-eff) for gaining possibly longer operation cycles, along with a flatter fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare the obtained results. In general, the results reveal the acceptable performance and high strength of SPEA, and particularly of its new version, SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.
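The core of the strength Pareto approach is a dominance-based strength/raw-fitness assignment. A minimal sketch follows; the toy objective values and the two-objective encoding are illustrative, not the actual BPP objectives:

```python
import numpy as np

def dominates(a, b):
    """True when a Pareto-dominates b (both objectives minimized): a is no
    worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def spea2_raw_fitness(objs):
    """SPEA-II fitness assignment: strength S(i) = number of solutions that i
    dominates; raw fitness R(i) = sum of S(j) over the j that dominate i.
    Nondominated solutions therefore get R = 0 (lower is better)."""
    n = len(objs)
    strength = [sum(dominates(objs[i], objs[j]) for j in range(n))
                for i in range(n)]
    return [sum(strength[j] for j in range(n) if dominates(objs[j], objs[i]))
            for i in range(n)]

# Toy population: e.g. (power peaking factor, -K_eff), both to be minimized.
objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(spea2_raw_fitness(objs))  # [0, 0, 1, 0]: only (3.0, 3.0) is dominated
```

In the full algorithm this raw fitness is further refined by a density term and an archive-truncation step; only the dominance bookkeeping is shown here.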
Insights from field observations into controls on flow front speed in submarine sediment flows
Heerema, C.; Talling, P.; Cartigny, M.; Paull, C. K.; Gwiazda, R.; Clare, M. A.; Parsons, D. R.; Xu, J.; Simmons, S.; Maier, K. L.; Chapplow, N.; Gales, J. A.; McGann, M.; Barry, J.; Lundsten, E. M.; Anderson, K.; O'Reilly, T. C.; Rosenberger, K. J.; Sumner, E. J.; Stacey, C.
2017-12-01
Seafloor avalanches of sediment called turbidity currents are one of the most important processes for moving sediment across our planet. Only rivers carry comparable amounts of sediment across such large areas. Here we present some of the first detailed monitoring of these underwater flows, undertaken at a series of test sites. We seek to understand the factors that determine flow front speed, and how that speed varies with distance. This frontal speed is particularly important for predicting flow runout, and how the power of these hazardous flows varies with distance. First, we consider unusually detailed measurements of flow front speed defined by transit times between moorings and other tracked objects placed on the floor of Monterey Canyon offshore California in 2016-17. These measurements are then compared to flow front speeds measured using multiple moorings in Bute Inlet, British Columbia in 2016, and by cable breaks in Gaoping Canyon offshore Taiwan in 2006 and 2009. We seek to understand how flow front velocity is related to seafloor gradient, flow front thickness and density. It appears that the spatial evolution of frontal speed is similar in multiple flows, although their peak frontal velocities vary. Flow front velocity tends to increase rapidly at first before declining rather gradually over tens or even hundreds of kilometres. It has been proposed that submarine flows exist in one of two states: either eroding and accelerating, or depositing sediment and dissipating. We conclude by discussing the implications of this global compilation of flow front velocities for understanding submarine flow behaviour.
Globally linked vortex clusters in trapped wave fields
Crasovan, Lucian-Cornel; Molina-Terriza, Gabriel; Torres, Juan P.; Torner, Lluis; Perez-Garcia, Victor M.; Mihalache, Dumitru
2002-01-01
We put forward the existence of a rich variety of fully stationary vortex structures, termed H clusters, made of an increasing number of vortices nested in paraxial wave fields confined by trapping potentials. We show that the constituent vortices are globally linked, rather than being products of independent vortices: they always feature a monopolar global wave front and exist in nonlinear systems, such as Bose-Einstein condensates. Clusters with multipolar global wave fronts are nonstationary or, at best, flipping.
Converting existing Internal Combustion Generator (ICG) systems into HESs in standalone applications
Perera, A.T.D.; Attalage, R.A.; Perera, K.K.C.K.; Dassanayake, V.P.C.
2013-01-01
Highlights: • Obtained Pareto fronts of LEC, power supply reliability (PSR) and ICC/GHG emission. • A Pareto surface was observed for smaller ICGs when considering LEC–PSR–GHG. • The shape of the LEC–PSR–ICC Pareto front gradually changes with ICG capacity. • Importance of multi-criterion decision-making after multiobjective optimization. Abstract: Expanding existing Internal Combustion Generator (ICG) systems by combining renewable energy sources is becoming popular due to global concern over greenhouse gas (GHG) emissions and increasing fossil fuel costs. Life cycle cost, initial capital cost (ICC), power supply reliability of the system, and GHG emission by the ICG are factors to be considered in this process. The Pareto front of Levelized Energy Cost (LEC)–Unmet Load Fraction (ULF)–GHG emission was obtained in this study for four different expansion scenarios. Furthermore, the Pareto front of ICC–LEC–ULF was obtained for three different expansion scenarios in order to analyze the impact of renewable energy integration. The results clearly show that the characteristics of the Pareto front vary with the scale of expansion and the objectives chosen for the optimization. A detailed analysis was conducted for a scale-up problem with a 4 kVA ICG by using the Pareto fronts obtained.
Managing Controversies in the Fuzzy Front End
Christiansen, John K.; Gasparin, Marta
2016-01-01
The analysis investigates the microprocesses around the controversies that emerge during the fuzzy front end (FFE) of four products. Five different types of controversies are identified: profit, production, design, brand and customers/market. Each controversy represents a threat, but also an opportunity to search for new solutions in the unpredictable non-linear processes. The study uses an ethnographic approach drawing on qualitative data from interviews, company documents, external communication and marketing material, minutes of meetings, informal conversations and observations. The analysis of four FFE processes demonstrates how the fuzzy front end requires managers to deal with controversies that emerge from many different places and involve both human and non-human actors. Closing the controversies requires managers to take account of the situation, identify the problem that needs to be addressed, and initiate a search…
Trace metal fronts in European shelf waters
Kremling, K.
1983-01-01
The Hebrides shelf edge area is characterized by strong horizontal salinity gradients (fronts) which mark the boundary between Scottish coastal and oceanic waters. The results presented here, obtained in summer 1981 on a transect between the open north Atlantic and the German Bight, confirm that the hydrographical front is accompanied by dramatic increases in inorganic nutrients (phosphate, silicate) and dissolved trace elements such as Cd, Cu, Mn, and ²²⁶Ra. These data (together with measurements from North Sea regions) suggest that the trace metals are mobilized from partly reduced (organic-rich) sediments and vertically mixed into the surface waters. The regional variations evident from the transect are interpreted as being the result of the hydrography prevailing in waters around the British Isles. (author)
Prototype ALICE front-end card
Maximilien Brice
2004-01-01
This circuit board is a prototype 48-channel front end digitizer card for the ALICE time projection chamber (TPC), which takes electrical signals from the wire sensors in the TPC and shapes the data before converting the analogue signal to digital data. A total of 4356 cards will be required to process the data from the ALICE TPC, the largest of this type of detector in the world.
Front Cover Photograph & Interview for FREEYE Magazine
Murray, Matthew
2003-01-01
Matthew Murray: Front Cover Photograph & Interview for FREEYE Magazine, the Dutch quarterly for exceptional international photography, Holland. The article focuses on Murray's practice: his personal work, commissioned work, advertising, gallery and exhibition work, along with his methodology. It looks at Murray's inspirations and how they feed into his personal projects, and how this personal work feeds into shooting above-the-line advertising campaigns. Murray's work blurs the lines between pers...
Wave Front Sensor for Solar Concentrator Control
2009-10-01
… terrestrial-based and space-based. Both types of concentrator can be either imaging or nonimaging, and they can be rigid or inflatable. … and T is the temperature of the absorber and propellant. In (5), I_in is the input intensity, with effects of the optical path through the concentrator acting … Hartmann in 1900, and was used for checking optical telescopes for aberrations. It was an array of holes in a plate placed in front of the mirror of
Fronting and exhaustive exclusion in Biblical Hebrew
Kate H
Christo H. J. van der Merwe, Department of Ancient Studies, University of Stellenbosch, South Africa. 48, 2017, 219-222. doi: 10.5774/48-0-292.
Kinetics of a plasma streamer ionization front
Taccogna, Francesco; Pellegrini, Fabrizio
2018-02-01
A streamer is a non-linear and non-local gas breakdown mode. Its large-scale coherent structures, such as the ionization front, are the final result of a hierarchical cascade starting from single-particle dynamics. Therefore, this phenomenon covers, by definition, different space and time scales. In this study, we have reproduced the ionization front formation and development by means of a particle-based numerical methodology. The physical system investigated is a high-voltage ns-pulsed surface dielectric barrier discharge. Different reduced electric field regimes ranging from 50 to 500 Td have been considered for two gases: pure atomic Ar and molecular N2. Results have shown the detailed structure of the negative streamer: the leading edge, the head, the interior and the tail. Its dynamical evolution and the front propagation velocity have been calculated for the different cases. Finally, the deviation of the electron energy distribution function from equilibrium behavior has been pointed out as the result of a fast and very localized phenomenon.
Wintertime sea surface temperature fronts in the Taiwan Strait
Chang, Yi; Shimada, Teruhisa; Lee, Ming-An; Lu, Hsueh-Jung; Sakaida, Futoki; Kawamura, Hiroshi
2006-12-01
We present wintertime variations and distributions of sea surface temperature (SST) fronts in the Taiwan Strait by applying an entropy-based edge detection method to 10-year (1996-2005) satellite SST images with grid size of 0.01°. From climatological monthly mean maps of SST gradient magnitude in winter, we identify four significant SST fronts in the Taiwan Strait. The Mainland China Coastal Front is a long frontal band along the 50-m isobath near the Chinese coast. The sharp Peng-Chang Front appears along the Peng-Hu Channel and extends northward around the Chang-Yuen Ridge. The Taiwan Bank Front evolves in early winter. As the winter progresses, the front becomes broad and moves toward the Chinese coast, connecting to the Mainland China Coastal Front. The Kuroshio Front extends northeastward from the northeastern tip of Taiwan with a semicircle-shape curving along the 100-m isobath.
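The detection idea, fronts as ridges of high SST gradient magnitude, can be sketched with a plain finite-difference gradient. The entropy-based edge detector used in the paper is more involved and is not reproduced here; the synthetic field below is illustrative only:

```python
import numpy as np

def sst_gradient_magnitude(sst, dx_km=1.0):
    """|grad SST| on a regular grid (degC per km); fronts appear as ridges
    of high gradient magnitude. Plain central differences, not the paper's
    entropy-based detector."""
    gy, gx = np.gradient(sst, dx_km)
    return np.hypot(gx, gy)

# Synthetic strait: 18 degC coastal water west of column 50, 24 degC east.
sst = np.tile(np.where(np.arange(100) < 50, 18.0, 24.0), (80, 1))
grad = sst_gradient_magnitude(sst)
front_col = int(np.argmax(grad[0]))  # sharpest gradient marks the front
print(front_col)  # 49
```

On real 0.01° satellite imagery one would first smooth or composite the SST field, since cloud masking and sensor noise otherwise produce spurious gradient ridges.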
Bibliometric analysis of acupuncture research fronts and their ...
Bibliometric analysis of acupuncture research fronts and their worldwide ... This study chronologically examined the changing features and research fronts of ... from the Science Citation Index Expanded and Social Science Citation Index.
Li, H.; Guo, L.; Zhou, M.; Cheng, Q.; Yu, X.; Huang, S.; Pang, Y.
2017-12-01
In this paper, we report the observation of off-equatorial dipolarization front structures by the Magnetospheric Multiscale (MMS) mission at around X = -8 Re in the Earth's magnetotail. The dipolarization front was located at the flow rebounce region and was associated with a parallel electron beam. A large low-frequency electromagnetic wave fluctuation at the dipolarization front is observed, with a frequency near the ion gyrofrequency, left-handed polarization, and parallel propagation. A parallel current attributed to an electron beam coexists with the wave. The wave is believed to be generated by the current-driven ion cyclotron instability. Such an instability is important because of its potential contribution to global electromagnetic energy conversion at the dipolarization front.
Topology optimization of front metallization patterns for solar cells
Gupta, D.K.; Langelaar, M.; Barink, M.; Keulen, F. van
2015-01-01
This paper presents the application of topology optimization (TO) to designing front electrode patterns for solar cells. Improving the front electrode design is one approach to improving the performance of solar cells. It serves to produce the voltage distribution for the front
Automated Detection of Fronts using a Deep Learning Convolutional Neural Network
Biard, J. C.; Kunkel, K.; Racah, E.
2017-12-01
A deeper understanding of climate model simulations and the future effects of global warming on extreme weather can be attained through direct analyses of the phenomena that produce weather. Such analyses require these phenomena to be identified in automatic, unbiased, and comprehensive ways. Atmospheric fronts are centrally important weather phenomena because of the variety of significant weather events, such as thunderstorms, directly associated with them. In current operational meteorology, fronts are identified and drawn visually based on the approximate spatial coincidence of a number of quasi-linear localized features: a trough (relative minimum) in air pressure, in combination with gradients in air temperature and/or humidity and a shift in wind, and are categorized as cold, warm, stationary, or occluded, with each type exhibiting somewhat different characteristics. Fronts are extended in space, with one dimension much larger than the other (often represented by complex curved lines), which poses a significant challenge for automated approaches. We addressed this challenge by using a Deep Learning Convolutional Neural Network (CNN) to automatically identify and classify fronts. The CNN was trained using a "truth" dataset of front locations identified by National Weather Service meteorologists as part of operational 3-hourly surface analyses. The input to the CNN is a set of 5 gridded fields of surface atmospheric variables, including 2 m temperature, 2 m specific humidity, surface pressure, and the two components of the 10 m horizontal wind velocity vector at 3-hr resolution. The output is a set of feature maps containing the per-grid-cell probabilities for the presence of the 4 front types. The CNN was trained on a subset of the data and then used to produce front probabilities for each 3-hr time snapshot over a 14-year period covering the continental United States and some adjacent areas. The total frequencies of fronts derived from the CNN outputs match
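The fully convolutional setup described, gridded input fields in and per-grid-cell class probabilities out, can be sketched as a single convolution followed by a softmax over classes. This is a toy random-weight forward pass in plain NumPy, not the trained network; the grid size, 3x3 kernel, and the choice of 5 output classes (4 front types plus "no front") are illustrative assumptions:

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same' 3x3 convolution: x is (C_in, H, W), w is (C_out, C_in, 3, 3)."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad H and W by 1
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            patch = xp[:, i:i + 3, j:j + 3]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

def softmax(z, axis=0):
    """Numerically stable softmax along the class axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
fields = rng.normal(size=(5, 16, 16))          # T, q, p, u10, v10 on a toy grid
w = rng.normal(scale=0.1, size=(5, 5, 3, 3))   # 5 classes: 4 front types + none
probs = softmax(conv2d(fields, w), axis=0)     # per-grid-cell class probabilities
print(probs.shape)  # (5, 16, 16)
```

A real front-detection CNN stacks many such layers with learned weights, but the input/output contract is the same: class probabilities at every grid cell, from which extended frontal lines are then extracted.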
Novel Perspectives from Light-Front QCD, Super-Conformal Algebra, and Light-Front Holography
Brodsky, Stanley J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-12-01
Light-Front Quantization – Dirac’s “Front Form” – provides a physical, frame-independent formalism for hadron dynamics and structure. Observables such as structure functions, transverse momentum distributions, and distribution amplitudes are defined from the hadronic LFWFs. One obtains new insights into the hadronic mass scale, the hadronic spectrum, and the functional form of the QCD running coupling in the nonperturbative domain using light-front holography. In addition, superconformal algebra leads to remarkable supersymmetric relations between mesons and baryons. I also discuss evidence that the antishadowing of nuclear structure functions is nonuniversal; i.e., flavor dependent, and why shadowing and antishadowing phenomena may be incompatible with the momentum and other sum rules for the nuclear parton distribution functions.
Light front field theory: an advanced primer
Martinovic, L.
2007-01-01
We present an elementary introduction to quantum field theory formulated in terms of Dirac's light front variables. In addition to general principles and methods, a few more specific topics and approaches based on the author's work will be discussed. Most of the discussion deals with massive two-dimensional models formulated in a finite spatial volume, starting with a detailed comparison between quantization of massive free fields in the usual field theory and the light front (LF) quantization. We discuss basic properties such as relativistic invariance and causality. After the LF treatment of the soluble Federbush model, a LF approach to spontaneous symmetry breaking is explained and a simple gauge theory, the massive Schwinger model, is studied in various gauges. A LF version of bosonization and the massive Thirring model are also discussed. A special chapter is devoted to the method of discretized light cone quantization and its application to calculations of the properties of quantum solitons. The problem of LF zero modes is illustrated with the example of the two-dimensional Yukawa model. Hamiltonian perturbation theory in the LF formulation is derived and applied to a few simple processes to demonstrate its advantages. As a byproduct, it is shown that the LF theory cannot be obtained as a 'light-like' limit of the usual field theory quantized on an initial space-like surface. A simple LF formulation of the Higgs mechanism is then given. Since our intention was to provide a treatment of the light front quantization accessible to postgraduate students, an effort was made to discuss most of the topics pedagogically, and a number of technical details and derivations are contained in the appendices (Author)
Functional description of APS beamline front ends
Kuzay, T.
1993-02-01
Traditional synchrotron sources were designed to produce bending magnet radiation and have proven to be an essential scientific tool. Currently, a new generation of synchrotron sources is being built that will be able to accommodate a large number of insertion device (ID) and high quality bending magnet (BM) sources. One example is the 7-GeV Advanced Photon Source (APS) now under construction at Argonne National Laboratory. The research and development effort at the APS is designed to fully develop the potential of this new generation of synchrotron sources. Of the 40 straight sections in the APS storage ring, 34 will be available for IDs. The remaining six sections are reserved for the storage ring hardware and diagnostics. Although the ring incorporates 80 BMs, only 40 of them can be used to extract radiation. The accelerator hardware shadows five of these 40 bending magnets, so the maximum number of BM sources on the lattice is 35. Generally, a photon beamline consists of four functional sections. The first section is the ID or the BM, which provides the radiation source. The second section, which is immediately outside the storage ring but inside a concrete shielding tunnel, is the front end, which is designed to control, define, and/or confine the x-ray beam. In the case of the APS, the front ends are designed to confine the photon beam. The third section, just outside the concrete shielding tunnel and on the experimental floor, is the first optics enclosure, which contains optics to filter and monochromatize the photon beam. The fourth section of a beamline consists of beam transports, additional optics, and experiment stations to do the scientific investigations. This document describes only the front ends of the APS beamlines
Wang, D.; Shi, R.; Chen, J.; Guo, X.; Zeng, L.; Li, J.; Xie, Q.; Wang, X.
2017-12-01
The response of the marine atmospheric boundary layer (MABL) structure to an oceanic front is analyzed using Global Positioning System (GPS) sounding data obtained during a survey in the northwestern South China Sea (NSCS) over a period of about one week in April 2013. The Weather Research and Forecasting (WRF) model is used to further examine the thermodynamical mechanisms of the MABL's response to the front. The WRF model successfully simulates the change in the MABL structure across the front, which agrees well with the observations. The spatially high-pass-filtered fields of sea surface temperature (SST) and 10-m neutral equivalent wind from the WRF model simulation show a tight, positive coupling between the SST and surface winds near the front. Meanwhile, the SST front works as a damping zone to reduce the enhancement of wind blowing from the warm to the cold side of the front in the lower boundary layer. Analysis of the momentum budget shows that the most active and significant term affecting horizontal momentum over the frontal zone is the adjustment of the pressure gradient. It is found that the front in the NSCS is wide enough for slowly moving air parcels to be affected by the change in underlying SST. The different thermal structure upwind and downwind of the front causes a baroclinic adjustment of the perturbation pressure from the surface to the mid-layer of the MABL, which dominates the change in the wind profile across the front.
Front panel human interface for FASTBUS
Gustavson, D.B.; Holmes, T.L.; Paffrath, L.; Steffani, J.P.
1980-01-01
A human interface based on the Snoop diagnostic module has been designed to facilitate checkout of FASTBUS devices, diagnosis of system faults, and monitoring of system performance. This system, which is a generalization of the usual computer front panel or control console, includes logic analyzer functions, display and manual-control access to other modules, a microprocessor which allows the user to create and execute diagnostic programs and store them on a minifloppy disk, and a diagnostic network which allows remote console operation and coordination of information from multiple segments' Snoops
Light-Front Holography, Light-Front Wavefunctions, and Novel QCD Phenomena
Brodsky, S. J.; de Teramond, G. F.
2012-01-01
Light-front holography is one of the most remarkable features of the AdS/CFT correspondence. In spite of its present limitations, it provides important physical insights into the non-perturbative regime of QCD and its transition to the perturbative domain. This novel framework allows hadronic amplitudes in a higher-dimensional anti-de Sitter (AdS) space to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The hadron eigensolutions projected on the free Fock basis provide the complete set of valence and non-valence light-front Fock state wavefunctions Psi_{n/H}(x_i, k_{perp i}, lambda_i) which describe the hadron's momentum and spin distributions needed to compute the direct measures of hadron structure at the quark and gluon level.
Syntactic and FSP Aspects of Fronting as a Style Marker
Libuše Dušková
2017-07-01
The paper examines contextual and emphatic fronting in academic prose, fiction narrative and fiction dialogue in order to ascertain whether the types of fronting can serve as a style marker. The differences in their distribution and their effect on style are assumed to be connected with the respective FSP structures: in emphatic fronting the fronted element is the rheme, whereas in contextual fronting it is the diatheme. Hence emphatic fronting displays a prominent deviation from the basic distribution of communicative dynamism, whereas contextual fronting achieves agreement with it. As compared with the unmarked postverbal ordering, emphatic fronting intensifies the emphatic/emotional character of the content being expressed, which is a feature of speech, while contextual fronting serves as a direct link with what precedes and hence contributes to textual cohesion, which is characteristic of academic prose, with fiction narrative presumably occupying an intermediate position. The results of the study show more types of fronting with diversified structures and less clear-cut relations between the types of fronting and the examined text types.
Internal waves and temperature fronts on slopes
S. A. Thorpe
Time series measurements from an array of temperature miniloggers in a line at constant depth along the sloping boundary of a lake are used to describe the 'internal surf zone' where internal waves interact with the sloping boundary. More small positive temperature time derivatives are recorded than negative, but there are more large negative values than positive, giving the overall distribution of temperature time derivatives a small negative skewness. This is consistent with the internal wave dynamics; fronts form during the up-slope phase of the motion, bringing cold water up the slope, and the return flow may become unstable, leading to small advecting billows and weak warm fronts. The data are analysed to detect 'events', periods in which the temperature derivatives exceed a set threshold. The speed and distance travelled by 'events' are described. The motion along the slope may be a consequence of (a) instabilities advected by the flow, (b) internal waves propagating along-slope, or (c) internal waves approaching the slope from oblique directions. The propagation of several of the observed 'events' can only be explained by (c), evidence that the internal surf zone has some, but possibly not all, of the characteristics of the conventional 'surface wave' surf zone, with waves steepening as they approach the slope at oblique angles.
Key words. Oceanography: general (benthic boundary layers; limnology); Oceanography: physical (internal and inertial waves)
Biomechanics of front and back squat exercises
Braidot, A A; Brusa, M H; Lestussi, F E; Parera, G P
2007-01-01
Squat constitutes one of the most popular exercises to strengthen the muscles of the lower limbs. It is considered one of the most widely used exercises for muscle sport training and is part of the competition movements comprised within Olympic weight-lifting. In physical rehabilitation, squats are used for muscular recovery after different injuries of the lower limbs, especially the knee. After anterior cruciate ligament injuries, mini-squats are generally used in a knee flexion motion range from 0 deg to 50 deg, because in this range the shear forces and the tibiofemoral and patellofemoral compression forces decrease relative to greater flexion angles. The aim of this work is to make a comparative two-dimensional study of the kinematic and dynamic variables of the execution of the parallel squat exercise with the front and back bar. A better development of energy at the knee is observed with the front bar, allowing a better muscular exercise with the same load. The mean power absorbed by the hip with the back bar is considerably greater, associated with the speed of the gesture.
Tracer filamentation at an unstable ocean front
Feng, Yen Chia; Mahadevan, Amala; Thiffeault, Jean-Luc; Yecko, Philip
2017-11-01
A front, where two bodies of ocean water with different physical properties meet, can become unstable and lead to a flow with high strain rate and vorticity. Phytoplankton and other oceanic tracers are stirred into filaments by such flow fields, as can often be seen in satellite imagery. The stretching and folding of a tracer by a two-dimensional flow field has been well studied. In the ocean, however, the vertical shear of horizontal velocity is typically two orders of magnitude larger than the horizontal velocity gradient. Theoretical calculations show that vertical shear alters the way in which horizontal strain affects the tracer, resulting in thin, sloping structures in the tracer field. Using a non-hydrostatic ocean model of an unstable ocean front, we simulate tracer filamentation to identify the effect of vertical shear on the deformation of the tracer. In a complementary laboratory experiment, we generate a simple, vertically sheared strain flow and use dye and particle image velocimetry to quantify the filamentary structures in terms of the strain and shear. We identify how vertical shear alters the tracer filaments and infer how the evolution of tracers in the ocean will differ from the idealized two-dimensional paradigm. Support of NSF DMS-1418956 is acknowledged.
The ALICE TPC front end electronics
Musa, L; Bialas, N; Bramm, R; Campagnolo, R; Engster, Claude; Formenti, F; Bonnes, U; Esteve-Bosch, R; Frankenfeld, Ulrich; Glässel, P; Gonzales, C; Gustafsson, Hans Åke; Jiménez, A; Junique, A; Lien, J; Lindenstruth, V; Mota, B; Braun-Munzinger, P; Oeschler, H; Österman, L; Renfordt, R E; Ruschmann, G; Röhrich, D; Schmidt, H R; Stachel, J; Soltveit, A K; Ullaland, K
2004-01-01
In this paper we present the front end electronics for the time projection chamber (TPC) of the ALICE experiment. The system, which consists of about 570000 channels, is based on two basic units: (a) an analogue ASIC (PASA) that incorporates the shaping-amplifier circuits for 16 channels; (b) a mixed-signal ASIC (ALTRO) that integrates 16 channels, each consisting of a 10-bit 25-MSPS ADC, the baseline subtraction, tail cancellation filter, zero suppression and multi-event buffer. The complete readout chain is contained in front end cards (FEC), with 128 channels each, connected to the detector by means of Kapton cables. A number of FECs (up to 25) are controlled by a readout control unit (RCU), which interfaces the FECs to the data acquisition (DAQ), the trigger, and the detector control system (DCS). A fraction of the final electronics (1024 channels) has been characterized in a test that incorporates a prototype of the ALICE TPC as well as many other components of the final set-up. The tests show that the ...
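The per-channel processing chain described above (pedestal subtraction followed by zero suppression) can be sketched as follows. This is a minimal illustration of the technique, not the ALTRO implementation; the sample values, pedestal and threshold are hypothetical, not detector register settings.

```python
# Sketch of baseline subtraction + zero suppression for one readout channel.
# All numbers below are made up for illustration.

def zero_suppress(samples, pedestal, threshold):
    """Subtract the pedestal from each ADC sample, then keep only samples
    above threshold, returning (index, value) pairs as a sparse readout."""
    out = []
    for i, s in enumerate(samples):
        v = s - pedestal
        if v > threshold:
            out.append((i, v))
    return out

raw = [12, 11, 13, 40, 85, 52, 14, 12]   # raw ADC counts, pedestal ~12
hits = zero_suppress(raw, pedestal=12, threshold=5)
# only the pulse around sample 4 survives suppression
```

The sparse (index, value) output is what makes the subsequent cluster-finding and data-rate reduction steps tractable.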
Shojaeefard, Mohammad Hasan; Behnagh, Reza Abdi; Akbari, Mostafa; Givi, Mohammad Kazem Besharati; Farhani, Foad
2013-01-01
Highlights: ► Defect-free friction stir welds have been produced for AA5083-O/AA7075-O. ► Back-propagation was sufficient for predicting hardness and tensile strength. ► A hybrid multi-objective algorithm is proposed to deal with this MOP. ► Multi-objective particle swarm optimization was used to find the Pareto solutions. ► TOPSIS is used to rank the given alternatives of the Pareto solutions. -- Abstract: Friction Stir Welding (FSW) has been successfully used to weld similar and dissimilar cast and wrought aluminium alloys, especially aircraft aluminium alloys, which generally exhibit low weldability with the traditional fusion welding process. This paper focuses on the microstructural and mechanical properties of the Friction Stir Welding (FSW) of AA7075-O to AA5083-O aluminium alloys. Weld microstructures, hardness and tensile properties were evaluated in the as-welded condition. Tensile tests indicated that the mechanical properties of the joint were better than those of the base metals. An Artificial Neural Network (ANN) model was developed to simulate the correlation between the Friction Stir Welding parameters and mechanical properties. Performance of the ANN model was excellent, and the model was employed to predict the ultimate tensile strength and hardness of the butt joint of AA7075–AA5083 as functions of weld and rotational speeds. Multi-objective particle swarm optimization was used to obtain the Pareto-optimal set. Finally, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was applied to determine the best compromise solution.
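The final ranking step named in the abstract, TOPSIS, can be sketched as below. The candidate objective vectors and the equal weights are hypothetical stand-ins, not data from the study; the procedure itself (vector normalization, ideal/anti-ideal points, relative closeness) is the standard one.

```python
# Minimal TOPSIS ranking of Pareto-optimal candidates (illustrative sketch).

def topsis(matrix, weights, benefit):
    """Score alternatives. rows = alternatives, cols = criteria;
    benefit[j] is True if criterion j is to be maximized."""
    ncols = len(weights)
    # vector-normalize each column, then apply the criterion weights
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # ideal and anti-ideal point per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((x - i) ** 2 for x, i in zip(row, ideal)) ** 0.5
        d_neg = sum((x - w) ** 2 for x, w in zip(row, worst)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness to the ideal
    return scores

# hypothetical Pareto candidates: (tensile strength in MPa, hardness in HV)
candidates = [[310.0, 85.0], [295.0, 92.0], [325.0, 78.0]]
scores = topsis(candidates, weights=[0.5, 0.5], benefit=[True, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

The alternative with the highest closeness score is reported as the best compromise on the Pareto set.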
Li Chao; Ebert, Ute; Hundsdorfer, Willem
2010-01-01
Streamers are the first stage of sparks and lightning; they grow due to a strongly enhanced electric field at their tips; this field is created by a thin curved space charge layer. These multiple scales are already challenging when the electrons are approximated by densities. However, electron density fluctuations in the leading edge of the front and non-thermal stretched tails of the electron energy distribution (as a cause of X-ray emissions) require a particle model to follow the electron motion. But present computers cannot deal with all electrons in a fully developed streamer. Therefore, super-particles have to be introduced, which leads to wrong statistics and numerical artifacts. The method of choice is a hybrid computation in space where individual electrons are followed in the region of high electric field and low density while the bulk of the electrons is approximated by densities (or fluids). We here develop the hybrid coupling for planar fronts. First, to obtain a consistent flux at the interface between particle and fluid model in the hybrid computation, the widely used classical fluid model is replaced by an extended fluid model. Then the coupling algorithm and the numerical implementation of the spatially hybrid model are presented in detail, in particular, the position of the model interface and the construction of the buffer region. The method carries generic features of pulled fronts that can be applied to similar problems like large deviations in the leading edge of population fronts, etc.
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: (a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, (b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and (c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
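The core peaks-over-threshold step discussed above can be sketched as follows: pick a threshold u, take the excesses above it, and fit a GPD. The sketch uses simple method-of-moments estimates (shape xi = (1 - m²/s²)/2, scale sigma = m(1 + m²/s²)/2, with m and s² the excess mean and variance) on synthetic data; it illustrates the fit for a fixed u, while the threshold-selection problem the paper studies is deliberately left out.

```python
import random

# Method-of-moments GPD fit to excesses over a threshold u (illustrative
# sketch; real analyses typically use ML or probability-weighted moments).

def gpd_fit_mom(data, u):
    """Return (xi, sigma): moment estimates of GPD shape and scale
    fitted to the excesses x - u for observations x above u."""
    exc = [x - u for x in data if x > u]
    n = len(exc)
    m = sum(exc) / n
    s2 = sum((y - m) ** 2 for y in exc) / (n - 1)
    r = m * m / s2
    xi = 0.5 * (1.0 - r)         # shape: 0 recovers the exponential tail
    sigma = 0.5 * m * (1.0 + r)  # scale
    return xi, sigma

# synthetic 'rainfall': exponential data, whose excesses are exactly
# GPD with shape xi = 0 and scale equal to the exponential mean (10 here)
random.seed(0)
sample = [random.expovariate(0.1) for _ in range(50000)]
xi, sigma = gpd_fit_mom(sample, u=5.0)
```

For exponential data the estimates should be near xi = 0 and sigma = 10 regardless of u, which is the kind of parameter-stability behaviour the graphical threshold-selection methods exploit.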
Vector mesons on the light front
Naito, K.; Maedan, S.; Itakura, K.
2004-01-01
We apply the light-front quantization to the Nambu-Jona-Lasinio model with the vector interaction, and compute vector meson's mass and light-cone wavefunction in the large N limit. Following the same procedure as in the previous analyses for scalar and pseudo-scalar mesons, we derive the bound-state equations of a qq-bar system in the vector channel. We include the lowest order effects of the vector interaction. The resulting transverse and longitudinal components of the bound-state equation look different from each other. But eventually after imposing an appropriate cutoff, one finds these two are identical, giving the same mass and the same (spin-independent) light-cone wavefunction. Mass of the vector meson decreases as one increases the strength of the vector interaction
Light-front quantization of field theory
Srivastava, Prem P. [Universidade do Estado, Rio de Janeiro, RJ (Brazil). Inst. de Fisica; Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)]
1996-07-01
Some basic topics in Light-Front (LF) quantized field theory are reviewed. Poincare algebra and the LF spin operator are discussed. The local scalar field theory of the conventional framework is shown to correspond to a non-local Hamiltonian theory on the LF in view of the constraint equations on the phase space, which relate the bosonic condensates to the non-zero modes. This new ingredient is useful to describe the spontaneous symmetry breaking on the LF. The instability of the symmetric phase in two dimensional scalar theory when the coupling constant grows is shown in the LF theory renormalized to one loop order. Chern-Simons gauge theory, regarded to describe excitations with fractional statistics, is quantized in the light-cone gauge and a simple LF Hamiltonian obtained which may allow us to construct renormalized theory of anyons. (author). 20 refs.
The CMS Tracker Readout Front End Driver
Foudas, C.; Ballard, D.; Church, I.; Corrin, E.; Coughlan, J.A.; Day, C.P.; Freeman, E.J.; Fulcher, J.; Gannon, W.J.F.; Hall, G.; Halsall, R.N.J.; Iles, G.; Jones, J.; Leaver, J.; Noy, M.; Pearson, M.; Raymond, M.; Reid, I.; Rogers, G.; Salisbury, J.; Taghavi, S.; Tomalin, I.R.; Zorba, O.
2004-01-01
The Front End Driver, FED, is a 9U 400mm VME64x card designed for reading out the Compact Muon Solenoid, CMS, silicon tracker signals transmitted by the APV25 analogue pipeline Application Specific Integrated Circuits. The FED receives the signals via 96 optical fibers at a total input rate of 3.4 GB/sec. The signals are digitized and processed by applying algorithms for pedestal and common mode noise subtraction. Algorithms that search for clusters of hits are used to further reduce the input rate. Only the cluster data along with trigger information of the event are transmitted to the CMS data acquisition system using the S-LINK64 protocol at a maximum rate of 400 MB/sec. All data processing algorithms on the FED are executed in large on-board Field Programmable Gate Arrays. Results on the design, performance, testing and quality control of the FED are presented and discussed.
Light-Front Holography, Light-Front Wavefunctions, and Novel QCD Phenomena
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins; de Teramond, Guy F.; /Costa Rica U.
2012-02-16
Light-Front Holography is one of the most remarkable features of the AdS/CFT correspondence. In spite of its present limitations it provides important physical insights into the nonperturbative regime of QCD and its transition to the perturbative domain. This novel framework allows hadronic amplitudes in a higher dimensional anti-de Sitter (AdS) space to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The model leads to an effective confining light-front QCD Hamiltonian and a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate zeta which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound-state wavefunctions, and thus the fall-off as a function of the invariant mass of the constituents. The soft-wall holographic model, modified by a positive-sign dilaton metric, leads to a remarkable one-parameter description of nonperturbative hadron dynamics: a semi-classical frame-independent first approximation to the spectra and light-front wavefunctions of mesons and baryons. The model predicts a Regge spectrum of linear trajectories with the same slope in the leading orbital angular momentum L of hadrons and the radial quantum number n. The hadron eigensolutions projected on the free Fock basis provide the complete set of valence and non-valence light-front Fock state wavefunctions Psi_{n/H}(x_i, k_{perp i}, lambda_i) which describe the hadron's momentum and spin distributions needed to compute the direct measures of hadron structure at the quark and gluon level, such as elastic and transition form factors, distribution amplitudes, structure functions, generalized parton distributions and transverse ...
Travelling fronts in stochastic Stokes’ drifts
Blanchet, Adrien; Dolbeault, Jean; Kowalczyk, Michał
2008-01-01
By analytical methods we study the large time properties of the solution of a simple one-dimensional model of stochastic Stokes' drift. Semi-explicit formulae allow us to characterize the behaviour of the solutions and compute global quantities
Bare quantifier fronting as contrastive topicalization
Ion Giurgea
2015-11-01
I argue that indefinites (in particular bare quantifiers such as 'something', 'somebody', etc.) which are neither existentially presupposed nor in the restriction of a quantifier over situations can undergo topicalization in a number of Romance languages (Catalan, Italian, Romanian, Spanish), but only if the sentence contains "verum" focus, i.e. focus on a high degree of certainty of the sentence. I analyze these indefinites as contrastive topics, using Büring's (1999) theory (where the term 'S-topic' is used for what I call 'contrastive topic'). I propose that the topic is evaluated in relation to a scalar set including generalized quantifiers such as {λP.∃x P(x), λP.MANYx P(x), λP.MOSTx P(x), λP.∀x P(x)} or {λP.∃x P(x), λP.P(a), λP.P(b), …}, and that the contrastive topic is the weakest generalized quantifier in this set. The verum focus, which is part of the "comment" that co-occurs with the "Topic", introduces a set of alternatives including degrees of certainty of the assertion. The speaker asserts that his claim is certainly true or highly probable, contrasting it with stronger claims for which the degree of probability is unknown. This explains the observation that in downward-entailing contexts the fronted quantified DPs are headed by 'all' or 'many', whereas 'some', small numbers or 'at least n' appear in upward-entailing contexts. Unlike other cases of non-specific topics, which are property topics, these are quantifier topics: the topic part is a generalized quantifier, the comment is a property of generalized quantifiers. This explains the narrow scope of the fronted quantified DP.
Salinity fronts in the tropical Pacific Ocean.
Kao, Hsun-Ying; Lagerloef, Gary S E
2015-02-01
This study delineates the salinity fronts (SF) across the tropical Pacific, and describes their variability and regional dynamical significance using Aquarius satellite observations. From the monthly maps of the SF, we find that the SF in the tropical Pacific are (1) usually observed around the boundaries of the fresh pool under the intertropical convergence zone (ITCZ), (2) stronger in boreal autumn than in other seasons, and (3) usually stronger in the eastern Pacific than in the western Pacific. The relationships between the SF and the precipitation and the surface velocity are also discussed. We further present detailed analysis of the SF in three key tropical Pacific regions. Extending zonally around the ITCZ, where the temperature is nearly homogeneous, we find the strong SF of 1.2 psu from 7° to 11°N to be the main contributor to the horizontal density difference of 0.8 kg/m³. In the eastern Pacific, we observe a southward extension of the SF in the boreal spring that could be driven by both precipitation and horizontal advection. In the western Pacific, the importance of these newly resolved SF associated with the western Pacific warm/fresh pool and the El Niño-Southern Oscillation is also discussed in the context of prior literature. The main conclusions of this study are that (a) Aquarius satellite salinity measurements reveal the heretofore unknown proliferation, structure, and variability of surface salinity fronts, and that (b) the fine-scale structures of the SF in the tropical Pacific yield important new information on the regional air-sea interaction and the upper ocean dynamics.
MMIC front-ends for optical communication systems
Petersen, Anders Kongstad
1993-01-01
Two different types of optical front-end MMIC amplifiers for a 2.5-Gb/s coherent heterodyne optical receiver are presented. A bandwidth of 6-12 GHz has been obtained for a tuned front-end and 3-13 GHz for a distributed front-end. An input noise current density of 5-15 pA/√Hz has been obtained for...
The upgraded CDF front end electronics for calorimetry
Drake, G.; Frei, D.; Hahn, S.R.; Nelson, C.A.; Segler, S.L.; Stuermer, W.
1991-11-01
The front end electronics used in the calorimetry of the CDF detector has been upgraded to meet system requirements for higher expected luminosity. A fast digitizer utilizing a 2 μs, 16-bit ADC has been designed and built. Improvements to the front end trigger circuitry have been implemented, including the production of 900 new front end modules. Operational experience with the previous system is presented, with discussion of the problems and performance goals.
Gniadek Agnieszka
2014-12-01
This study aims at demonstrating the usefulness of the Pareto inclusive criterion methodology for comparative analyses of fungi toxicity. The toxicity of fungi is usually measured using a scale of several ranks. In practice, the ranks of toxicity are routinely grouped into only four conventional classes of toxicity: from a class of no toxicity, through low toxicity and moderate toxicity, to a class of high toxicity. The illustrative material included N = 61 fungi samples obtained from three species: A. ochraceus, A. niger and A. flavus. In accordance with the Pareto approach, four partial criteria of the worst toxicity were defined, a single criterion for each conventional class of toxicity. Finally, the odds ratios (OR) were calculated separately for each partial criterion, and the significance of the hypothesis OR = 1 was estimated. It was found that A. ochraceus fungi are distinctly more toxic than the two remaining species with respect to all four considered partial criteria, with significance equal to p = 0.04, p = 0.04, p = 0.007 and p = 0.005, respectively. Thus, the suggested method demonstrated its utility in the case under study.
Olivares, Marcelo A.; Haas, Jannik; Palma-Behnke, Rodrigo; Benavides, Carlos
2015-05-01
Hydrologic alteration due to hydropeaking reservoir operations is a main concern worldwide. Subdaily environmental flow constraints (ECs) on operations can be promising alternatives for mitigating negative impacts. However, those constraints reduce the flexibility of hydropower plants, potentially with higher costs for the power system. To study the economic and environmental efficiency of ECs, this work proposes a novel framework comprising four steps: (i) assessment of the current subdaily hydrologic alteration; (ii) formulation and implementation of a short-term, grid-wide hydrothermal coordination model; (iii) design of ECs in the form of maximum ramping rates (MRRs) and minimum flows (MIFs) for selected hydropower reservoirs; and (iv) identification of Pareto-efficient solutions in terms of grid-wide costs and the Richards-Baker flashiness index for subdaily hydrologic alteration (SDHA). The framework was applied to Chile's main power grid, assessing 25 EC cases, involving five MIFs and five MRRs. Each case was run for a dry, normal, and wet water year type. Three Pareto-efficient ECs are found, with a remarkably small cost increase (below 2%) and an SDHA improvement between 28% and 90%. While the case involving the highest MIF worsens the flashiness of another basin, the other two have no negative effect on other basins and can be recommended for implementation.
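The flashiness metric used above is simple to state: the Richards-Baker index is the sum of absolute changes between consecutive flow values divided by the total flow over the same period. A minimal sketch, with hypothetical release series (the paper applies it to subdaily reservoir outflows):

```python
# Richards-Baker flashiness index: higher values = flashier regime.

def rb_flashiness(q):
    """R-B index of a discharge series q: sum of absolute step-to-step
    changes normalized by the total discharge."""
    num = sum(abs(q[i] - q[i - 1]) for i in range(1, len(q)))
    return num / sum(q)

steady = [100.0] * 24           # constant release: index is 0
peaking = [20.0, 200.0] * 12    # hydropeaking-style on/off swings
```

Comparing the two series shows why ramping-rate constraints directly target this index: they bound the numerator while leaving the delivered volume (the denominator) unchanged.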
Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.
2016-03-01
The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
Application of deep convolutional neural networks for ocean front recognition
Lima, Estanislau; Sun, Xin; Yang, Yuting; Dong, Junyu
2017-10-01
Ocean fronts have been a subject of study for many years, and a variety of methods and algorithms have been proposed to address the problem of ocean front recognition. However, all existing ocean front recognition methods are built upon human expertise in defining the front based on subjective thresholds of relevant physical variables. This paper proposes a deep learning approach for ocean front recognition that is able to recognize fronts automatically. We first investigated four existing deep architectures, i.e., AlexNet, CaffeNet, GoogLeNet, and VGGNet, for the ocean front recognition task using remote sensing (RS) data. We then propose a deep network with fewer layers than the existing architectures for the front recognition task. This network has a total of five learnable layers. In addition, we extended the proposed network to recognize fronts and classify them into strong and weak ones. We evaluated and analyzed the proposed network with two strategies of exploiting the deep model: full training and fine-tuning. Experiments were conducted on three different RS image datasets with different properties. Experimental results show that our model can produce accurate recognition results.
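The abstract does not give the layer sizes of the five-learnable-layer network, but a shape and parameter-count walkthrough for a plausible 3-conv + 2-fully-connected layout (all sizes hypothetical) shows how small such a model is compared with AlexNet-class architectures:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical five-learnable-layer net (3 conv + 2 fully connected);
# input patch 64x64x3, final 2 outputs = strong/weak front classes.
size, ch, params = 64, 3, 0
for out_ch, k in [(32, 5), (64, 3), (128, 3)]:   # three conv layers
    size = conv_out(size, k, pad=k // 2)         # 'same' padding
    params += ch * out_ch * k * k + out_ch       # weights + biases
    size = conv_out(size, 2, stride=2)           # 2x2 max-pool, stride 2
    ch = out_ch
flat = ch * size * size                          # flattened feature count
for out_f in [256, 2]:                           # two fully connected layers
    params += flat * out_f + out_f
    flat = out_f
```

Pooling layers have no learnable weights, so only the three convolutions and two fully connected layers count toward the "five learnable layers".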
DYNAMICS OF HIGH ENERGY IONS AT A STRUCTURED COLLISIONLESS SHOCK FRONT
Gedalin, M. [Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva (Israel); Dröge, W.; Kartavykh, Y. Y., E-mail: gedalin@bgu.ac.il [Institute for Theoretical Physics and Astrophysics, University of Würzburg, Würzburg (Germany)
2016-07-10
Ions undergoing first-order Fermi acceleration at a shock are scattered in the upstream and downstream regions by magnetic inhomogeneities. For high energy ions this scattering is efficient at spatial scales substantially larger than the gyroradius of the ions. The transition from one diffusive region to the other occurs via crossing the shock, and the ion dynamics during this crossing is mainly affected by the global magnetic field change between the upstream and downstream region. We study the effects of the fine structure of the shock front, such as the foot-ramp-overshoot profile and the phase-standing upstream and downstream magnetic oscillations. We also consider time dependent features, including reformation and large amplitude coherent waves. We show that the influence of the spatial and temporal structure of the shock front on the dependence of the transition and reflection on the pitch angle of the ions is already weak at ion speeds five times the speed of the upstream flow.
The PHENIX Drift Chamber Front End Electronics
Pancake, C.; Velkovska, J.; Pantuev, V.; Fong, D.; Hemmick, T.
1998-04-01
The PHENIX Drift Chamber (DC) is designed to operate in the high particle flux environment of the Relativistic Heavy Ion Collider and provide high resolution track measurements. It is segmented into 80 keystones with 160 readout channels each. The Front End Electronics (FEE) developed to meet the demanding operating conditions and the large number of readout channels of the DC will be discussed. It is based on two application specific integrated circuits: the ASD8 and the TMC-PHX1. The ASD8 chip contains 8 channels of bipolar amplifier-shaper-discriminator with 6 ns shaping time and ≈ 20 ns pulse width, which satisfies the two-track resolution requirements. The TMC-PHX1 chip is a high-resolution multi-hit Time-to-Digital Converter. The outputs from the ASD8 are digitized in the Time Memory Cell (TMC) every (clock period)/32, or 0.78 ns at 40 MHz, which gives the intrinsic time resolution of the system. A 256-word-deep dual port memory keeps a 6.4 μs time history of the data at a 40 MHz clock. Each DC keystone is supplied with 4 ASD8/TMC boards and one FEM board, which performs the readout of the TMC-PHX1s, and buffers and formats the data to be transmitted over the Glink. The slow control communication between the FEM and the system is carried out over ARCNET. The full readout chain and the data acquisition system are being tested.
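The timing figures quoted above are internally consistent, as a quick check shows (pure arithmetic, nothing assumed beyond the abstract's numbers):

```python
clock_mhz = 40
period_ns = 1e3 / clock_mhz          # 25 ns clock period at 40 MHz
tmc_bin_ns = period_ns / 32          # (clock period)/32 digitization bin
history_us = 256 * period_ns / 1e3   # 256-word-deep buffer, in microseconds
```

The bin comes out at 0.78125 ns (quoted as 0.78 ns) and the buffer depth at exactly 6.4 μs.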
Gribben, John G
2010-01-14
Although chronic lymphocytic leukemia (CLL) remains incurable, over the past decade there have been major advances in understanding the pathophysiology of CLL and in the treatment of this disease. This has led to greatly increased response rates and durations of response, but not yet to improved survival. Advances in the use of prognostic factors that identify patients at high risk for progression have led us to question whether there is still a role for a "watch and wait" approach in asymptomatic high-risk patients, or whether they should be treated earlier in their disease course. Questions remain, including: what is the optimal first-line treatment and its timing, and is there any role for maintenance therapy or stem cell transplantation in this disease? CLL is a disease of the elderly, and not all patients are eligible for aggressive up-front chemoimmunotherapy regimens, so what is the optimal treatment approach for frailer elderly patients? It is highly likely that our treatment approaches will continue to evolve as the results of ongoing clinical trials are released, and that further improvements in the outcome of this disease will result from the identification of therapies that target the underlying pathophysiology of CLL.
Front lighted optical tooling method and apparatus
Stone, W. J.
1985-01-01
An optical tooling method and apparatus uses a front lighted shadowgraphic technique to enhance the visual contrast of reflected light. The apparatus includes an optical assembly comprising a fiducial mark, such as cross hairs, reflecting polarized light with a first polarization, a polarizing element backing the fiducial mark, and a reflective surface backing the polarizing element for reflecting polarized light bypassing the fiducial mark and traveling through the polarizing element. The light reflected by the reflecting surface is directed through a second pass of the polarizing element toward the frontal direction with a polarization differing from the polarization of the light reflected by the fiducial mark. When used as a tooling target, the optical assembly may be mounted directly to a reference surface or may be secured in a mounting, such as a magnetic mounting. The optical assembly may also be mounted in a plane-defining structure and used as a spherometer in conjunction with an optical depth measuring instrument. A method of measuring the radius of curvature of an unknown surface includes positioning the spherometer on the surface, between the surface and a depth measuring optical instrument. As the spherometer is frontally illuminated, the distances from the depth measuring instrument to the fiducial mark and to the underlying surface are alternately measured, and the difference between these measurements is used as the sagittal height to calculate a radius of curvature.
Front tracking for hyperbolic conservation laws
Holden, Helge
2015-01-01
This is the second edition of a well-received book providing the fundamentals of the theory of hyperbolic conservation laws. Several chapters have been rewritten and new material has been added, in particular a chapter on space-dependent flux functions and the detailed solution of the Riemann problem for the Euler equations. Hyperbolic conservation laws are central in the theory of nonlinear partial differential equations and in science and technology. The reader is given a self-contained presentation using front tracking, which is also a numerical method. The multidimensional scalar case and the case of systems on the line are treated in detail. A chapter on finite differences is included. From the reviews of the first edition: "It is already one of the few best digests on this topic. The present book is an excellent compromise between theory and practice. Students will appreciate the lively and accurate style." D. Serre, MathSciNet "I have read the book with great pleasure, and I can recommend it to experts ...
Front tracking for hyperbolic conservation laws
Holden, Helge
2002-01-01
Hyperbolic conservation laws are central in the theory of nonlinear partial differential equations and in science and technology. The reader is given a self-contained presentation using front tracking, which is also a numerical method. The multidimensional scalar case and the case of systems on the line are treated in detail. A chapter on finite differences is included. "It is already one of the few best digests on this topic. The present book is an excellent compromise between theory and practice. Students will appreciate the lively and accurate style." D. Serre, MathSciNet "I have read the book with great pleasure, and I can recommend it to experts as well as students. It can also be used for reliable and very exciting basis for a one-semester graduate course." S. Noelle, Book review, German Math. Soc. "Making it an ideal first book for the theory of nonlinear partial differential equations...an excellent reference for a graduate course on nonlinear conservation laws." M. Laforest, Comp. Phys. Comm.
Light-Front Dynamics in Hadron Physics
Ji, C.-R.; Bakker, B.L.G.; Choi, H.-M.
2013-01-01
Light-front dynamics (LFD) plays an important role in the analyses of relativistic few-body systems. As evidenced by the recent studies of generalized parton distributions (GPDs) in hadron physics, LFD is a natural framework for a detailed study of hadron structures due to its direct application in Minkowski space as well as its distinct feature of accounting for the vacuum fluctuations in quantum field theories. In the last few years, however, it has been emphasized that treacherous points such as LF singularities and zero modes should be taken into account for successful LFD applications to hadron phenomenology. In this paper, we discuss a typical example of contemporary relativistic hadron physics in which these fundamental issues must be taken into account for the successful application of LFD. In particular, we focus on the kinematic issue of GPDs in deeply virtual Compton scattering (DVCS). Although this fundamental issue has been glossed over in the literature, it must be taken care of for the correct analysis of DVCS data. (author)
The Front Line of Genomic Translation
O'Neill, C. S.; McBride, C. M.; Koehly, L. M.; Bryan, A. D.; Wideroff, L.
2012-01-01
Cancer prevention, detection, and treatment represent the front line of genomic translation. Increasingly, new genomic knowledge is being used to inform personalized cancer prevention recommendations and treatment [1-3]. Genomic applications proposed and realized span the full cancer continuum, from cancer prevention and early detection vis-à-vis genomic risk profiles to motivate behavioral risk reduction and adherence [4], to screening and prophylactic prevention recommendations for high-risk families [5-7], to enhancing cancer survivorship by using genomic tumor profiles to inform treatment decisions and targeted cancer therapies [8, 9]. Yet the utility of many of these applications is as yet unclear and will be influenced heavily by the responses of the public, patients, and health care providers, and by numerous other factors, such as health care delivery models [3]. The contributors to this special issue consider various target groups' responses and contextual factors. To reflect the cancer continuum, the special issue is divided into three broad, overlapping themes: primary prevention, high-risk families and family communication, and clinical translation.
FACILITATING RADICAL FRONT-END INNOVATION THROUGH TARGETED HR PRACTICES
Aagaard, Annabeth
2017-01-01
This study examines how radical front end innovation can be actively facilitated through selected and targeted HR practices and bundles of HR practices. The empirical field is an explorative case study of front end innovation and HR practices in the pharmaceutical industry, with an in-depth case ...
An improved front tracking method for the Euler equations
Witteveen, J.A.S.; Koren, B.; Bakker, P.G.
2007-01-01
An improved front tracking method for hyperbolic conservation laws is presented. The improved method accurately resolves discontinuities as well as continuous phenomena. The method is based on an improved front interaction model for a physically more accurate modeling of the Euler equations, as
MMIC tuned front-end for a coherent optical receiver
Petersen, Anders Kongstad; Jagd, A. M.; Ebskamp, F.
1993-01-01
A low-noise transformer tuned optical front-end for a coherent optical receiver is described. The front-end is based on a GaInAs/InP p-i-n photodiode and a full custom designed GaAs monolithic microwave integrated circuit (MMIC). The measured equivalent input noise current density is between 5-16 p...
Desirable forest structures for a restored Front Range
Yvette L. Dickinson; Rob Addington; Greg Aplet; Mike Babler; Mike Battaglia; Peter Brown; Tony Cheng; Casey Cooley; Dick Edwards; Jonas Feinstein; Paula Fornwalt; Hal Gibbs; Megan Matonis; Kristen Pelz; Claudia Regan
2014-01-01
As part of the federal Collaborative Forest Landscape Restoration Program administered by the US Forest Service, the Colorado Front Range Collaborative Forest Landscape Restoration Project (FR-CFLRP, a collaborative effort of the Front Range Roundtable and the US Forest Service) is required to define desired conditions for lower montane ponderosa pine (Pinus ponderosa...
Stability of reaction fronts in random walk simulations
Nagy, Noemi; Izsak, F.
A model of propagating reaction fronts is given for simple autocatalytic reactions, and the stability of the propagating reaction fronts is studied in several numerical experiments. The corresponding random walk simulations, which extend a recent algorithm, make possible the simultaneous treatment
The Term Innovation and its Front End
Brem, Alexander
2009-01-01
, against the background of globalized economies, a common understanding is needed for successful future intercultural projects and appropriate management. Especially in scientific research, a first step should be made in this direction. Therefore, a comprehensive view of the term innovation and its...
Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B [University Medical Center Utrecht, Utrecht (Netherlands); Breedveld, S; Sharfo, A; Heijmen, B [Erasmus University Medical Center Rotterdam, Rotterdam (Netherlands)
2016-06-15
Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan adaptation.
Vicente-Serrano, S.; Beguería, S.
2003-01-01
This paper analyses fifty-year time series of daily precipitation in a region of the middle Ebro valley (northern Spain) in order to predict extreme dry-spell risk. A comparison of observed and estimated maximum dry spells (50-year return period) showed that the Generalised Pareto (GP)
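For a dry-spell application like the one above, the Generalised Pareto model is typically used to turn threshold exceedances into return levels. A minimal sketch of the standard GP return-level formula, with entirely hypothetical parameters (the abstract is truncated before any fitted values):

```python
import math

def gpd_return_level(u, sigma, xi, rate, T):
    """T-year return level for threshold exceedances modeled by a
    Generalised Pareto distribution GP(sigma, xi) above threshold u.
    rate: mean number of exceedances per year (hypothetical here)."""
    if abs(xi) < 1e-12:                       # exponential-tail limit, xi -> 0
        return u + sigma * math.log(rate * T)
    return u + (sigma / xi) * ((rate * T) ** xi - 1)

# Hypothetical fit (illustration only, not the paper's values):
# threshold u = 20 days, scale 8.0, shape 0.1, 2 exceedances/year
x50 = gpd_return_level(20.0, 8.0, 0.1, 2.0, 50)   # 50-year dry-spell length
```

A positive shape parameter gives a heavy tail, so return levels keep growing with the return period, which is why the 50-year maximum dry spell is the quantity compared against observations.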
Two economists in front of climate challenges
Stern, Nicholas; Guesnerie, Roger
2012-01-01
This document offers a brief presentation of a book written by two economists about climate challenges. They address the cost of global warming, both now and if we do nothing about it, and the cost of an alternative course of action. Although they do not agree on all topics, they agree on the fact that we must act massively now against global warming. They address and discuss issues of climate economic policy (carbon tax, border adjustment, etc.) and the conditions for a successful international negotiation. They point out that climate policies, besides their effect on emissions, would allow a correction of the market's insufficient ability to produce major innovations, which are in any case necessary. They state that such innovations would stimulate an industrial revolution, incite creativity, and lead to low-carbon growth.
Cluster Observations of Multiple Dipolarization Fronts
Hwang, Kyoung-Joo; Goldstein, Melvyn L.; Lee, Ensang; Pickett, Jolene S.
2011-01-01
We present Cluster observations of a series of dipolarization fronts (DF 1 to 6) at the central current sheet in Earth's magnetotail. The velocities of the fast earthward flows following behind each of DF 1-3 are comparable to the Alfven velocity, indicating that the flow bursts might have been generated by bursty reconnection that occurred tailward of the spacecraft. Based on multi-spacecraft timing analysis, DF normals are found to propagate mainly earthward at 160-335 km/s with a thickness of 900-1500 km, which corresponds to the ion inertial length or gyroradius scale. Each DF is followed by significant fluctuations in the x and y components of the magnetic field, whose peaks are found 1-2 minutes after the DF passage. These (Bx, By) fluctuations propagate dawnward (mainly) and earthward. Strongly enhanced field-aligned beams are observed coincident with the (Bx, By) fluctuations, while an enhancement of cross-tail currents is associated with the DFs. From the observed pressure imbalance and flux-tube entropy changes between the two regions separated by a DF, we speculate that interchange instability destabilizes the DFs and causes the deformation of the mid-tail magnetic topology. This process generates significant field-aligned currents and might power auroral brightening in the ionosphere. However, this event is associated neither with the main substorm auroral breakup nor with the poleward expansion, which might indicate that the observed multiple DFs were dissipated before reaching the inner plasma sheet boundary.
Vanessa Voisin
2008-07-01
Front voennykh prokurorov is a collection of essays written by two authors who are not known as specialists in military history but who obviously have access to archives, as is proved by the reprint, in the middle of the book, of several pages from the personal file of Afanas'ev, former Main Military Prosecutor of the Soviet Armed Forces. The first part of the book is devoted to Nikolai Porfir'evich Afanas'ev's memoirs, written, according to the editors, after his retirement in 1950. Afanas'ev, though les...
Varzakas, Theodoros H; Arvanitoyannis, Ioannis S
2007-01-01
The Failure Mode and Effect Analysis (FMEA) model has been applied to the risk assessment of corn curl manufacturing. A tentative approach to applying FMEA in the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance both from an ethical and a legislative (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) point of view. The Preliminary Hazard Analysis and the Fault Tree Analysis were used to analyze and predict the failure modes occurring in a food chain system (a corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points have been identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed towards optimizing the GMO detection potential of FMEA.
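A Pareto diagram in the FMEA sense ranks failure modes and singles out the "vital few" that account for most failures (the 80/20 rule). A minimal sketch with invented failure-mode tallies (the paper's actual modes and counts are not given in the abstract):

```python
def pareto_vital_few(counts, cutoff=0.8):
    """Sort failure-mode counts in descending order and return the
    smallest set of modes whose cumulative share reaches the cutoff."""
    total = sum(counts.values())
    cum, vital = 0.0, []
    for mode, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        vital.append(mode)
        cum += n / total
        if cum >= cutoff:
            break
    return vital

# Hypothetical failure-mode tallies for a corn-curl line (illustration):
failures = {"GMO contamination": 40, "moisture": 25, "seasoning": 20,
            "packaging": 10, "metal": 5}
vital = pareto_vital_few(failures)  # modes covering 80% of failures
```

Corrective actions (and, here, GMO detection effort) are then concentrated on the returned vital-few modes.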
Obolewicz, Jerzy; Dąbrowski, Andrzej
2017-11-16
The construction industry is an important sector of the economy in Poland. According to National Labour Inspectorate (PIP) data for 2014, the number of victims of fatal accidents in the construction sector amounted to 80, as compared with 187 injured in all other sectors of the economy in Poland. This article presents the results of surveys on the impact of construction worker behaviour on occupational safety and health outcomes. The surveys took into account the points of view of both construction site management (the tactical level) and construction workers (the operational level). For the analysis of the results, the methods of numerical taxonomy and Pareto charts were employed, which allowed the authors to identify the areas of occupational safety and health, at both the operational and the tactical level, in which improvement actions needed to be proposed for workers employed in micro, small, medium and large construction enterprises.
Universality of Generalized Parton Distributions in Light-Front Holographic QCD
de Téramond, Guy F.; Liu, Tianbo; Sufian, Raza Sabbir; Dosch, Hans Günter; Brodsky, Stanley J.; Deur, Alexandre; Hlfhs Collaboration
2018-05-01
The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function w(x) which incorporates Regge behavior at small x and inclusive counting rules at x → 1. A simple ansatz for w(x) that fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.
The TOTEM front end driver, its components and applications in the TOTEM experiment
Antchev G; Aspell P; Barney D; Reynaud S; Snoeys W; Vichoudis P
2007-01-01
The TOTEM Front End Driver, so-called TOTFED, receives and handles trigger building and tracking data from the TOTEM detectors, and interfaces to the global trigger and data acquisition systems. The TOTFED is based on the VME64x standard and has deliberately been kept modular. It is very flexible and programmable to deal with the different TOTEM sub-detectors and possible evolution of the data treatment and trigger algorithms over the duration of the experiment. The main objectives for each u...