WorldWideScience

Sample records for hybrid pareto model

  1. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    Science.gov (United States)

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
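The hybrid Pareto component described above splices a Gaussian body onto a generalized Pareto upper tail. The sketch below (Python with SciPy, not the authors' code) illustrates the idea under simplifying assumptions: the junction point u is fixed by hand and only density continuity is enforced, whereas the published construction also matches the density derivative at the junction.

```python
import numpy as np
from scipy.stats import norm, genpareto

def hybrid_pareto_pdf(y, mu=0.0, sigma=1.0, xi=0.3, u=1.5):
    """Toy density: Gaussian body below the junction u, GPD tail above it.

    Illustration only: the GPD scale is chosen so the two pieces meet
    continuously at u; the published hybrid Pareto also matches the
    derivative at the junction and ties all tail parameters together.
    """
    y = np.asarray(y, dtype=float)
    g_u = norm.pdf(u, mu, sigma)        # Gaussian density at the junction
    tail_mass = norm.sf(u, mu, sigma)   # probability the Gaussian puts above u
    beta = tail_mass / g_u              # continuity: tail_mass / beta == g_u
    return np.where(
        y <= u,
        norm.pdf(y, mu, sigma),
        tail_mass * genpareto.pdf(y - u, xi, scale=beta),
    )
```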

  2. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that trade off more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  3. Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.

    Science.gov (United States)

    Elhossini, Ahmed; Areibi, Shawki; Dony, Robert

    2010-01-01

    This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.

  4. A Pareto scale-inflated outlier model and its Bayesian analysis

    OpenAIRE

    Scollnik, David P. M.

    2016-01-01

    This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...

  5. Bayesian modeling to paired comparison data via the Pareto distribution

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2017-12-01

Full Text Available A probabilistic approach to building models for paired comparison experiments based on the comparison of two Pareto variables is considered. Analysis of the proposed model is carried out in classical as well as Bayesian frameworks. Informative and uninformative priors are employed to accommodate the prior information. A simulation study is conducted to assess the suitability and performance of the model under theoretical conditions. Appropriateness of fit of the model is also assessed. The entire inferential procedure is illustrated by comparing certain cricket teams using a real dataset.

  6. Hybridization of Strength Pareto Multiobjective Optimization with Modified Cuckoo Search Algorithm for Rectangular Array.

    Science.gov (United States)

    Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah

    2017-04-20

This research proposes several versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Precisely, the MCS algorithm is proposed by incorporating the Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the positions exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard strength Pareto evolutionary algorithm (SPEA) forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined to perform MO optimization on Zitzler-Deb-Thiele's (ZDT's) test functions. Pareto-optimal trade-offs are made to generate a set of three non-dominated solutions, which are locations, excitation amplitudes, and excitation phases of array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms other comparable competitors in gaining a high antenna directivity, small half-power beamwidth (HPBW), low average side lobe level (SLL) suppression, and/or significant predefined nulls mitigation, simultaneously.

  7. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649

  8. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.

  9. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  10. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
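The Pareto frontier of calibration input sets described above is just a non-dominated filter over the per-target misfits. A minimal sketch is given below; the array layout and the smaller-is-better convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pareto_frontier(errors):
    """Indices of input sets on the Pareto frontier.

    errors: (n_sets, n_targets) array of calibration-target misfits,
    smaller is better. A set is kept if no other set fits all targets
    at least as well and at least one target strictly better.
    """
    errors = np.asarray(errors, dtype=float)
    keep = []
    for i in range(len(errors)):
        dominated = np.any(
            np.all(errors <= errors[i], axis=1) & np.any(errors < errors[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep
```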

  11. Model-based problem solving through symbolic regression via pareto genetic programming

    NARCIS (Netherlands)

    Vladislavleva, E.

    2008-01-01

    Pareto genetic programming methodology is extended by additional generic model selection and generation strategies that (1) drive the modeling engine to creation of models of reduced non-linearity and increased generalization capabilities, and (2) improve the effectiveness of the search for robust

  12. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    International Nuclear Information System (INIS)

    Agterberg, Frits

    2017-01-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that

  13. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    Energy Technology Data Exchange (ETDEWEB)

    Agterberg, Frits, E-mail: agterber@nrcan.gc.ca [Geological Survey of Canada (Canada)

    2017-07-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that

  14. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    Science.gov (United States)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.

  15. Household Labour Supply in Britain and Denmark: Some Interpretations Using a Model of Pareto Optimal Behaviour

    DEFF Research Database (Denmark)

    Barmby, Tim; Smith, Nina

    1996-01-01

    This paper analyses the labour supply behaviour of households in Denmark and Britain. It employs models in which the preferences of individuals within the household are explicitly represented. The households are then assumed to decide on their labour supply in a Pareto-Optimal fashion. Describing...

  16. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
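The log rank-log size fitting mentioned above can be sketched generically: assign rank r to the r-th largest discovered field and regress log rank on log size, the negative slope being the Pareto shape estimate. This is an illustration of that plot, not the tail-truncated or log-geometric estimators themselves.

```python
import numpy as np

def pareto_shape_from_rank_size(sizes):
    """Estimate a Pareto shape parameter from a log rank versus log size fit."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]   # sizes, largest first
    ranks = np.arange(1, len(s) + 1)                     # rank 1 = largest field
    slope, _ = np.polyfit(np.log(s), np.log(ranks), 1)   # straight line on log-log plot
    return -slope
```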

  17. An Investigation of the Pareto Distribution as a Model for High Grazing Angle Clutter

    Science.gov (United States)

    2011-03-01

radar detection schemes under controlled conditions. Complicated clutter models result in mathematical difficulties in the determination of optimal and ... a population [7]. It has been used in the modelling of actuarial data; an example is in excess of loss quotations in insurance [8]. Its usefulness as ... modified Bessel functions, making it difficult to employ in radar detection schemes. The Pareto Distribution is amenable to mathematical

  18. Pareto utility

    NARCIS (Netherlands)

    Ikefuji, M.; Laeven, R.J.A.; Magnus, J.R.; Muris, C.H.M.

    2013-01-01

    In searching for an appropriate utility function in the expected utility framework, we formulate four properties that we want the utility function to satisfy. We conduct a search for such a function, and we identify Pareto utility as a function satisfying all four desired properties. Pareto utility

  19. Pareto printsiip

    Index Scriptorium Estoniae

    2011-01-01

On how the Italian economist Vilfredo Pareto arrived at his famous principle and on the influence of this principle on present-day management. According to the Pareto principle, the greater part of our activity does not help us reach results but is a waste of time. Diagram

  20. The Primary Experiments of an Analysis of Pareto Solutions for Conceptual Design Optimization Problem of Hybrid Rocket Engine

    Science.gov (United States)

    Kudo, Fumiya; Yoshikawa, Tomohiro; Furuhashi, Takeshi

Recently, the Multi-objective Genetic Algorithm, which is the application of the Genetic Algorithm to Multi-objective Optimization Problems, has been a focus of attention in the engineering design field. In this field, the analysis of design variables in the acquired Pareto solutions, which gives the designers useful knowledge about the applied problem, is important as well as the acquisition of advanced solutions. This paper proposes a new visualization method using Isomap which visualizes the geometric distances of solutions in the design variable space considering their distances in the objective space. The proposed method enables a user to analyze the design variables of the acquired solutions considering their relationship in the objective space. This paper applies the proposed method to the conceptual design optimization problem of a hybrid rocket engine and studies the effectiveness of the proposed method.
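A plain Isomap embedding of the design variables of the acquired Pareto solutions can be sketched with scikit-learn as below. Note that the proposed method additionally folds objective-space distances into the geometry, which this generic sketch omits; the function name and parameters are illustrative.

```python
import numpy as np
from sklearn.manifold import Isomap

def embed_pareto_designs(design_vars, n_neighbors=5):
    """Project design variables of Pareto solutions to 2-D with plain Isomap."""
    x = np.asarray(design_vars, dtype=float)   # one row per Pareto solution
    return Isomap(n_neighbors=n_neighbors, n_components=2).fit_transform(x)
```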

  1. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP; eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem to examine evolved models and pick the best performing programs out for further analysis.

  2. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.

  3. Active learning of Pareto fronts.

    Science.gov (United States)

    Campigotto, Paolo; Passerini, Andrea; Battiti, Roberto

    2014-03-01

    This paper introduces the active learning of Pareto fronts (ALP) algorithm, a novel approach to recover the Pareto front of a multiobjective optimization problem. ALP casts the identification of the Pareto front into a supervised machine learning task. This approach enables an analytical model of the Pareto front to be built. The computational effort in generating the supervised information is reduced by an active learning strategy. In particular, the model is learned from a set of informative training objective vectors. The training objective vectors are approximated Pareto-optimal vectors obtained by solving different scalarized problem instances. The experimental results show that ALP achieves an accurate Pareto front approximation with a lower computational effort than state-of-the-art estimation of distribution algorithms and widely known genetic techniques.

  4. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    Science.gov (United States)

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

Group scheduling is significant for efficient and cost-effective production systems. However, setup times exist between the groups, which need to be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of the sequence-dependent group scheduling problem with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) with some steps of a genetic algorithm is proposed for the current problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different sizes of problems.

  5. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed Double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
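The DPLN construction above (a lognormal body with double Pareto tails) can be sketched as a sampler via the exp(Normal + asymmetric Laplace) representation. The parameter naming loosely follows Reed and Jorgensen's convention as an assumption, and this shows sampling only, not the EM estimation developed in the paper.

```python
import numpy as np

def sample_dpln(n, alpha=2.5, beta=1.5, nu=0.0, tau=0.5, seed=None):
    """Draw n samples from a double-Pareto-lognormal (DPLN) distribution.

    exp(Normal) gives the lognormal body; the difference of two exponentials
    (an asymmetric Laplace) in the exponent produces the two Pareto tails.
    """
    rng = np.random.default_rng(seed)
    normal_part = rng.normal(nu, tau, size=n)
    laplace_part = rng.exponential(1.0 / alpha, size=n) - rng.exponential(1.0 / beta, size=n)
    return np.exp(normal_part + laplace_part)
```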

  6. Modeling air quality in main cities of Peninsular Malaysia by using a generalized Pareto model.

    Science.gov (United States)

    Masseran, Nurulkamal; Razali, Ahmad Mahir; Ibrahim, Kamarulzaman; Latif, Mohd Talib

    2016-01-01

    The air pollution index (API) is an important figure used for measuring the quality of air in the environment. The API is determined based on the highest average value of individual indices for all the variables which include sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3), and suspended particulate matter (PM10) at a particular hour. API values that exceed the limit of 100 units indicate an unhealthy status for the exposed environment. This study investigates the risk of occurrences of API values greater than 100 units for eight urban areas in Peninsular Malaysia for the period of January 2004 to December 2014. An extreme value model, known as the generalized Pareto distribution (GPD), has been fitted to the API values found. Based on the fitted model, return period for describing the occurrences of API exceeding 100 in the different cities has been computed as the indicator of risk. The results obtained indicated that most of the urban areas considered have a very small risk of occurrence of the unhealthy events, except for Kuala Lumpur, Malacca, and Klang. However, among these three cities, it is found that Klang has the highest risk. Based on all the results obtained, the air quality standard in urban areas of Peninsular Malaysia falls within healthy limits to human beings.
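The peaks-over-threshold calculation behind the return periods above can be sketched as follows. The threshold, the use of SciPy's maximum likelihood fit, and the number of observations per year are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import genpareto

def return_period_above(api, threshold, level=100.0, obs_per_year=365.0):
    """Return period (years) of the API exceeding `level`, via a GPD fit to excesses."""
    api = np.asarray(api, dtype=float)
    excess = api[api > threshold] - threshold
    xi, _, beta = genpareto.fit(excess, floc=0.0)      # fit shape and scale, location fixed at 0
    zeta = len(excess) / len(api)                      # empirical P(API > threshold)
    p_level = zeta * genpareto.sf(level - threshold, xi, scale=beta)
    return 1.0 / (p_level * obs_per_year)              # mean years between exceedances
```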

  7. Pareto genealogies arising from a Poisson branching evolution model with selection.

    Science.gov (United States)

    Huillet, Thierry E

    2014-02-01

We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects. Depending on the range of α, this leads either to a Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward in time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  8. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
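The Box-Cox residual error schemes compared above amount to computing residuals in a transformed flow space and treating them as homoscedastic there. A minimal sketch of that transformation step is given below (the offset handling of zero flows is an assumption); the full likelihood and calibration machinery of the study is not shown.

```python
import numpy as np

def boxcox(q, lam, offset=0.0):
    """Box-Cox transform used to stabilise residual variance (lam = 0 gives log)."""
    q = np.asarray(q, dtype=float) + offset
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

def transformed_residuals(q_obs, q_sim, lam=0.2):
    """Residuals in Box-Cox space, treated as approximately homoscedastic."""
    return boxcox(q_obs, lam) - boxcox(q_sim, lam)
```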

  9. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models are developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is the assessment as to whether a particular combustion model is adequate in representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.

  10. Modelling and Pareto optimization of mechanical properties of friction stir welded AA7075/AA5083 butt joints using neural network and particle swarm algorithm

    International Nuclear Information System (INIS)

    Shojaeefard, Mohammad Hasan; Behnagh, Reza Abdi; Akbari, Mostafa; Givi, Mohammad Kazem Besharati; Farhani, Foad

    2013-01-01

Highlights: ► Defect-free friction stir welds have been produced for AA5083-O/AA7075-O. ► Back-propagation was sufficient for predicting hardness and tensile strength. ► A hybrid multi-objective algorithm is proposed to deal with this MOP. ► Multi-objective particle swarm optimization was used to find the Pareto solutions. ► TOPSIS is used to rank the given alternatives of the Pareto solutions. -- Abstract: Friction Stir Welding (FSW) has been successfully used to weld similar and dissimilar cast and wrought aluminium alloys, especially aircraft aluminium alloys, which generally present low weldability with the traditional fusion welding process. This paper focuses on the microstructural and mechanical properties of the Friction Stir Welding (FSW) of AA7075-O to AA5083-O aluminium alloys. Weld microstructures, hardness and tensile properties were evaluated in the as-welded condition. Tensile tests indicated that mechanical properties of the joint were better than in the base metals. An Artificial Neural Network (ANN) model was developed to simulate the correlation between the Friction Stir Welding parameters and mechanical properties. Performance of the ANN model was excellent and the model was employed to predict the ultimate tensile strength and hardness of the butt joint of AA7075–AA5083 as functions of weld and rotational speeds. The multi-objective particle swarm optimization was used to obtain the Pareto-optimal set. Finally, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was applied to determine the best compromised solution.

  11. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    Science.gov (United States)

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic networks events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods of their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: Chi-square test and discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
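One common way to obtain a discrete generalized Pareto of the kind used above is to discretise the continuous GPD through its survival function; whether this matches the exact parametrisation of the cited paper is an assumption, so the sketch below is illustrative only.

```python
import numpy as np
from scipy.stats import genpareto

def discrete_gpd_pmf(k, xi, sigma):
    """P(X = k) = S(k) - S(k + 1) for k = 0, 1, ..., with S the continuous GPD survival function."""
    k = np.asarray(k, dtype=float)
    return genpareto.sf(k, xi, scale=sigma) - genpareto.sf(k + 1, xi, scale=sigma)
```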

  12. Pareto Optimization of a Half Car Passive Suspension Model Using a Novel Multiobjective Heat Transfer Search Algorithm

    Directory of Open Access Journals (Sweden)

    Vimal Savsani

    2017-01-01

Full Text Available Most of the modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approach of an elitist based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and to preserve the diversity among the optimal set of solutions, respectively. The capability of MOHTS in yielding a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of the vehicle suspension design, which has a set of five second-order linear ordinary differential equations. A half car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves a better nondominated Pareto front with a widespread (diverse) set of optimal solutions as compared to NSGA-II, and further the comparison of the extreme points of the obtained Pareto front reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and the combined PSO-GA based MOEA.

  13. Generalized Pareto for Pattern-Oriented Random Walk Modelling of Organisms' Movements.

    Directory of Open Access Journals (Sweden)

    Sophie Bertrand

Full Text Available How organisms move and disperse is crucial to understand how population dynamics relates to the spatial heterogeneity of the environment. Random walk (RW) models are typical tools to describe movement patterns. Whether Lévy or alternative RW better describes forager movements is keenly debated. We get around this issue using the Generalized Pareto Distribution (GPD). GPD includes as specific cases Normal, exponential and power law distributions, which underlie Brownian, Poisson-like and Lévy walks respectively. Whereas previous studies typically confronted a limited set of candidate models, GPD lets the most likely RW model emerge from the data. We illustrate the wide applicability of the method using GPS-tracked seabird foraging movements and fishing vessel movements tracked by Vessel Monitoring System (VMS), both collected in the Peruvian pelagic ecosystem. The two parameters from the fitted GPD, a scale and a shape parameter, provide a synoptic characterization of the observed movement in terms of characteristic scale and diffusive property. They reveal and quantify the variability, among species and individuals, of the spatial strategies selected by predators foraging on a common prey field. The GPD parameters constitute relevant metrics for (1) providing a synthetic and pattern-oriented description of movement, (2) using top predators as ecosystem indicators and (3) studying the variability of spatial behaviour among species or among individuals with different personalities.

  14. Generalized Pareto for Pattern-Oriented Random Walk Modelling of Organisms' Movements.

    Science.gov (United States)

    Bertrand, Sophie; Joo, Rocío; Fablet, Ronan

    2015-01-01

    How organisms move and disperse is crucial to understand how population dynamics relates to the spatial heterogeneity of the environment. Random walk (RW) models are typical tools to describe movement patterns. Whether Lévy or alternative RW better describes forager movements is keenly debated. We get around this issue using the Generalized Pareto Distribution (GPD). GPD includes as specific cases Normal, exponential and power law distributions, which underlie Brownian, Poisson-like and Lévy walks respectively. Whereas previous studies typically confronted a limited set of candidate models, GPD lets the most likely RW model emerge from the data. We illustrate the wide applicability of the method using GPS-tracked seabird foraging movements and fishing vessel movements tracked by Vessel Monitoring System (VMS), both collected in the Peruvian pelagic ecosystem. The two parameters from the fitted GPD, a scale and a shape parameter, provide a synoptic characterization of the observed movement in terms of characteristic scale and diffusive property. They reveal and quantify the variability, among species and individuals, of the spatial strategies selected by predators foraging on a common prey field. The GPD parameters constitute relevant metrics for (1) providing a synthetic and pattern-oriented description of movement, (2) using top predators as ecosystem indicators and (3) studying the variability of spatial behaviour among species or among individuals with different personalities.
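As a rough illustration of how a fitted GPD shape parameter discriminates between movement regimes, the sketch below fits step lengths with SciPy and maps the shape parameter to a qualitative label. The 0.05 cut-off and the labels are illustrative assumptions, not the authors' classification rule.

```python
import numpy as np
from scipy.stats import genpareto

def classify_walk(step_lengths):
    """Fit a GPD to step lengths and read off a rough diffusive regime."""
    xi, _, scale = genpareto.fit(np.asarray(step_lengths, dtype=float), floc=0.0)
    if xi > 0.05:
        regime = "heavy power-law tail (Levy-like)"
    elif xi < -0.05:
        regime = "bounded, thin-tailed (Brownian-like body)"
    else:
        regime = "approximately exponential (Poisson-like)"
    return xi, scale, regime
```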

  15. Improving Modeling of Extreme Events using Generalized Extreme Value Distribution or Generalized Pareto Distribution with Mixing Unconditional Disturbances

    OpenAIRE

    Suarez, R

    2001-01-01

    In this paper an alternative non-parametric historical simulation approach, the Mixing Unconditional Disturbances model with constant volatility, where price paths are generated by reshuffling disturbances for S&P 500 Index returns over the period 1950 - 1998, is used to estimate a Generalized Extreme Value Distribution and a Generalized Pareto Distribution. An ordinary back-testing for period 1999 - 2008 was made to verify this technique, providing higher accuracy returns level under upper ...

  16. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    Science.gov (United States)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance to hydrological researchers which enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.

  17. Modelling and Pareto optimization of heat transfer and flow coefficients in microchannels using GMDH type neural networks and genetic algorithms

    International Nuclear Information System (INIS)

    Amanifard, N.; Nariman-Zadeh, N.; Borji, M.; Khalkhali, A.; Habibdoust, A.

    2008-01-01

    Three-dimensional heat transfer characteristics and pressure drop of water flow in a set of rectangular microchannels are numerically investigated using Fluent and compared with those of experimental results. Two metamodels based on the evolved group method of data handling (GMDH) type neural networks are then obtained for modelling of both pressure drop (ΔP) and Nusselt number (Nu) with respect to design variables such as geometrical parameters of microchannels, the amount of heat flux and the Reynolds number. Using such obtained polynomial neural networks, multi-objective genetic algorithms (GAs) (non-dominated sorting genetic algorithm, NSGA-II) with a new diversity preserving mechanism is then used for Pareto based optimization of microchannels considering two conflicting objectives such as (ΔP) and (Nu). It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of microchannels can be discovered by Pareto based multi-objective optimization of the obtained polynomial metamodels representing their heat transfer and flow characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modelling and the Pareto optimization approach

  18. Hybridization of Sensing Methods of the Search Domain and Adaptive Weighted Sum in the Pareto Approximation Problem

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

Full Text Available We consider a relatively new and rapidly developing class of methods for solving multi-objective optimization problems, based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front, of the problem. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed in the paper by Ryu, Kim and Wan (J.-H. Ryu, S. Kim, H. Wan), is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses a quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradient and Hessian matrix of the objective functions. To build the (quadratic) meta objective functions, this work uses methods of experimental design theory, which involve calculating the values of these functions at the grid nodes covering the trust region (a sensing method of the search domain). Two groups of sensing methods are considered: hypercube- and hyper-sphere-based methods. For each of these groups, a number of test multi-objective optimization tasks have been used to study the efficiency of the following grids: the "Latin hypercube"; a grid that is uniformly random in each dimension; and a grid based on LPτ sequences.

  19. Pareto optimization in algebraic dynamic programming.

    Science.gov (United States)

    Saule, Cédric; Giegerich, Robert

    2015-01-01

    Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all solutions for which no other candidate solution scores better under all objectives. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. Pareto optimization naturally occurs with genetic algorithms, albeit in a heuristic fashion. Non-heuristic Pareto optimization so far has been used only with a few applications in bioinformatics. We study exact Pareto optimization for two objectives in a dynamic programming framework. We define a binary Pareto product operator [Formula: see text] on arbitrary scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme [Formula: see text] correctly performs Pareto optimization over the same search space. We study different implementations of the Pareto operator with respect to their asymptotic and empirical efficiency. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective. For RNA structure prediction under the minimum free energy versus the maximum expected accuracy model, we show that the empirical size of the Pareto front remains within reasonable bounds. Pareto optimization lends itself to the comparative investigation of the behavior of two alternative scoring schemes for the same purpose. For the above scoring schemes, we observe that the Pareto front can be seen as a composition of a few macrostates, each consisting of several microstates that differ in the same limited way. We also study the relationship between abstract shape analysis and the Pareto front, and find that they extract information of a different nature from the folding space and can be meaningfully combined.
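The non-dominated filtering that Pareto optimization relies on can be sketched, for two maximised scores, with the standard sort-and-sweep filter below. This is a generic illustration, not the paper's Pareto product operator or its algebraic dynamic programming implementation.

```python
def pareto_front(candidates):
    """Keep (a, b) score pairs that are not dominated; both scores are maximised."""
    front = []
    for a, b in sorted(candidates, key=lambda p: (-p[0], -p[1])):
        # after sorting by a (descending), a candidate survives only if its
        # b score beats every b score kept so far
        if not front or b > front[-1][1]:
            front.append((a, b))
    return front
```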

  20. Microergodicity effects on ebullition of methane modelled by Mixed Poisson process with Pareto mixing variable

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 128, OCT 15 (2013), s. 124-134 ISSN 0169-7439 R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : environmental chemistry * ebullition of methane * mixed poisson processes * renewal process * pareto distribution * moving average process * robust statistics * sedge–grass marsh Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  1. Agent-Based Modelling of the Evolution of the Russian Party System Based on Pareto and Hotelling Distributions. Part II

    Directory of Open Access Journals (Sweden)

    Владимир Геннадьевич Иванов

    2015-12-01

Full Text Available The given article presents research on the evolution of the Russian party system. The chosen methodology is based on the heuristic potential of agent-based modelling. The author analyzes various scenarios of party competition (applying the Pareto distribution) in connection with the recent increase in the number of political parties. In addition, the author predicts the level of ideological diversity of the parties' platforms (applying the principles of the Hotelling distribution) in order to evaluate their potential competitiveness in the struggle for voters.

  2. Market Ecology, Pareto Wealth Distribution and Leptokurtic Returns in Microscopic Simulation of the LLS Stock Market Model

    Science.gov (United States)

    Solomon, Sorin; Levy, Moshe

    2001-06-01

The LLS stock market model (see Levy, Levy and Solomon, "Microscopic Simulation of Financial Markets: From Investor Behavior to Market Phenomena", Academic Press, 2000, for a review) is a model of heterogeneous quasi-rational investors operating in a complex environment about which they have incomplete information. We review the main features of this model and several of its extensions. We study the effects of investor heterogeneity and show that predation, competition, or symbiosis may occur between different investor populations. The dynamics of the LLS model lead to the empirically observed Pareto wealth distribution. Many properties observed in actual markets appear as natural consequences of the LLS dynamics: a truncated Levy distribution of short-term returns, excess volatility, a return autocorrelation "U-shape" pattern, and a positive correlation between volume and absolute returns.

  3. Estimations of parameters in Pareto reliability model in the presence of masked data

    International Nuclear Information System (INIS)

    Sarhan, Ammar M.

    2003-01-01

    Estimation of the parameters included in the individual distributions of the lifetimes of system components in a series system is considered in this paper based on masked system life test data. We consider a series system of two independent components, each of which has a Pareto-distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are carried out in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained.
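
    For orientation only, here is a minimal maximum-likelihood sketch for a single, fully observed Pareto sample; the masked-data and Bayesian estimators studied in the record above are substantially more involved, and the classical Pareto parameterization below is an assumption.

```python
import numpy as np

def pareto_mle(x):
    """ML estimates (scale x_m, shape alpha) for an i.i.d. classical Pareto sample.

    Assumes the density f(x) = alpha * x_m**alpha / x**(alpha + 1) for x >= x_m.
    """
    x = np.asarray(x, dtype=float)
    x_m = x.min()                              # MLE of the scale parameter
    alpha = len(x) / np.sum(np.log(x / x_m))   # MLE of the shape parameter
    return x_m, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_xm, true_alpha = 2.0, 3.0
    # numpy's pareto() draws Lomax samples; adding 1 and scaling gives classical Pareto.
    sample = true_xm * (1.0 + rng.pareto(true_alpha, size=5000))
    print(pareto_mle(sample))  # roughly (2.0, 3.0)
```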

  4. A Pareto-Based Adaptive Variable Neighborhood Search for Biobjective Hybrid Flow Shop Scheduling Problem with Sequence-Dependent Setup Time

    Directory of Open Access Journals (Sweden)

    Huixin Tian

    2016-01-01

    Full Text Available Different from most research focused on the single-objective hybrid flowshop scheduling (HFS) problem, this paper investigates a biobjective HFS problem with sequence-dependent setup times. The two objectives are the minimization of total weighted tardiness and of total setup time. To efficiently solve this problem, a Pareto-based adaptive biobjective variable neighborhood search (PABOVNS) is developed. In the proposed PABOVNS, a solution is denoted as a sequence of all jobs and a decoding procedure is presented to obtain the corresponding complete schedule. In addition, the proposed PABOVNS has three major features that can guarantee a good balance of exploration and exploitation. First, an adaptive selection strategy of neighborhoods is proposed to automatically select the most promising neighborhood instead of the sequential selection strategy of canonical VNS. Second, a two-phase multiobjective local search based on neighborhood search and path relinking is designed for each selected neighborhood. Third, an external archive with diversity maintenance is adopted to store the nondominated solutions and at the same time provide initial solutions for the local search. Computational results based on randomly generated instances show that the PABOVNS is efficient and even superior to some other powerful multiobjective algorithms in the literature.
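
    The record above relies on an external archive of non-dominated solutions; the snippet below is a generic sketch of such an archive update for a bi-objective minimization problem, not the PABOVNS algorithm itself (its adaptive neighborhood selection, path relinking and diversity maintenance are omitted, and the objective vectors are invented).

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert `candidate` (an objective vector) into the non-dominated archive."""
    if any(dominates(member, candidate) for member in archive):
        return archive                      # candidate is dominated: discard it
    # Remove archive members that the candidate dominates, then add it.
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive

if __name__ == "__main__":
    archive = []
    # (total weighted tardiness, total setup time) pairs, purely illustrative.
    for objectives in [(10.0, 3.0), (8.0, 5.0), (9.0, 2.0), (12.0, 1.0), (20.0, 6.0)]:
        archive = update_archive(archive, objectives)
    print(archive)  # [(8.0, 5.0), (9.0, 2.0), (12.0, 1.0)]
```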

  5. Identifying the preferred subset of enzymatic profiles in nonlinear kinetic metabolic models via multiobjective global optimization and Pareto filters.

    Directory of Open Access Journals (Sweden)

    Carlos Pozo

    Full Text Available Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study

  6. Identifying the preferred subset of enzymatic profiles in nonlinear kinetic metabolic models via multiobjective global optimization and Pareto filters.

    Science.gov (United States)

    Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano

    2012-01-01

    Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes the
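
    As a toy illustration of the epsilon-constraint idea mentioned above (on a made-up two-objective problem, not the GMA metabolic models of the record), one objective is minimized while the other is bounded by a sweep of epsilon values; each solve contributes one point of the Pareto front approximation.

```python
import numpy as np
from scipy.optimize import minimize

# Two toy objectives over x in R^2 (both to be minimized); purely illustrative.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

def epsilon_constraint_front(eps_values):
    """Minimize f1 subject to f2(x) <= eps, for a sweep of eps values."""
    front = []
    for eps in eps_values:
        # SLSQP inequality constraints require fun(x) >= 0, i.e. eps - f2(x) >= 0.
        cons = [{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}]
        res = minimize(f1, x0=np.zeros(2), constraints=cons, method="SLSQP")
        if res.success:
            front.append((f1(res.x), f2(res.x)))
    return front

if __name__ == "__main__":
    for p in epsilon_constraint_front(np.linspace(0.1, 1.9, 7)):
        print(f"f1 = {p[0]:.3f}, f2 = {p[1]:.3f}")
```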

  7. Finding a pareto-optimal solution for multi-region models subject to capital trade and spillover externalities

    Energy Technology Data Exchange (ETDEWEB)

    Leimbach, Marian [Potsdam-Institut fuer Klimafolgenforschung e.V., Potsdam (Germany); Eisenack, Klaus [Oldenburg Univ. (Germany). Dept. of Economics and Statistics

    2008-11-15

    In this paper we present an algorithm that deals with trade interactions within a multi-region model. In contrast to traditional approaches this algorithm is able to handle spillover externalities. Technological spillovers are expected to foster the diffusion of new technologies, which helps to lower the cost of climate change mitigation. We focus on technological spillovers which are due to capital trade. The algorithm for finding a Pareto-optimal solution in an intertemporal framework is embedded in a decomposed optimization process. The paper analyzes convergence and equilibrium properties of this algorithm. In the final part of the paper, we apply the algorithm to investigate possible impacts of technological spillovers. While the benefits of technological spillovers are significant for the capital-importing region, the benefits for the capital-exporting region depend on the type of regional disparities and the resulting specialization and terms-of-trade effects. (orig.)

  8. GENERALIZED DOUBLE PARETO SHRINKAGE.

    Science.gov (United States)

    Armagan, Artin; Dunson, David B; Lee, Jaeyong

    2013-01-01

    We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferences in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys' priors. While it has a spike at zero like the Laplace density, it also has a Student's t -like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.

  9. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-07

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted minus achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.

  10. Studies on generalized kinetic model and Pareto optimization of a product-driven self-cycling bioprocess.

    Science.gov (United States)

    Sun, Kaibiao; Kasperski, Andrzej; Tian, Yuan

    2014-10-01

    The aim of this study is the optimization of a product-driven self-cycling bioprocess and the presentation of a way to determine the best possible decision variables out of a set of alternatives based on the designed model. Initially, a product-driven generalized kinetic model, which allows a flexible choice of the most appropriate kinetics, is designed and analysed. The optimization problem is formulated as a bi-objective one, where maximization of biomass productivity and minimization of unproductive loss of substrate are the objective functions. Then, the Pareto fronts are calculated for exemplary kinetics. It is found that in the designed bioprocess, a decrease of the emptying/refilling fraction and an increase of the substrate feeding concentration cause an increase of the biomass productivity. An increase of the emptying/refilling fraction and a decrease of the substrate feeding concentration cause a decrease of the unproductive loss of substrate. The preferred solutions are calculated using the minimum distance from an ideal solution method, together with proposals for their modification derived from a decision maker's reactions to the generated solutions.
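
    A generic sketch of the "minimum distance from an ideal solution" selection mentioned above: objectives are normalized onto [0, 1] and the Pareto-front point closest (in Euclidean distance) to the ideal corner is returned. This is an illustration under assumed, made-up numbers, not the authors' implementation; the bioprocess kinetics are not reproduced.

```python
import numpy as np

def closest_to_ideal(front, maximize):
    """Pick the index of the Pareto-front point nearest to the ideal solution.

    `front` is an (n_points, n_objectives) array; `maximize` is a boolean
    flag per objective (True = larger is better).
    """
    front = np.asarray(front, dtype=float)
    # Flip maximized objectives so every column becomes "smaller is better".
    signed = np.where(maximize, -front, front)
    lo, hi = signed.min(axis=0), signed.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # avoid division by zero
    normalized = (signed - lo) / span           # the ideal point becomes the origin
    distances = np.linalg.norm(normalized, axis=1)
    return int(np.argmin(distances))

if __name__ == "__main__":
    # Columns: biomass productivity (maximize), unproductive substrate loss (minimize).
    front = [(0.9, 0.40), (0.7, 0.20), (0.5, 0.05)]
    idx = closest_to_ideal(front, maximize=[True, False])
    print("preferred trade-off:", front[idx])   # the middle compromise point
```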

  11. Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network

    OpenAIRE

    Tomohiko Konno

    2013-01-01

    The firm size distribution is considered to follow a Pareto distribution. In the present paper, we show that the Pareto distribution of firm size results from the spillover network model which was introduced in Konno (2010).

  12. Rayleigh Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Kareema Abed Al-Kadim

    2017-12-01

    Full Text Available In this paper the Rayleigh Pareto distribution, denoted by R_PD, is introduced. We state some useful functions and give some of its properties, such as the entropy function, mean, mode, median, variance, the r-th moment about the mean, the r-th moment about the origin, reliability, hazard functions, and the coefficients of variation, skewness and kurtosis. Finally, we estimate the parameters; the aim of this work is to introduce a new distribution.

  13. Robust bayesian inference of generalized Pareto distribution ...

    African Journals Online (AJOL)

    Using an exhaustive Monte Carlo study, we show that, with an adequate generalized loss function, a robust Bayesian estimator of the model can be constructed. Key words: Bayesian estimation; Extreme value; Generalized Fisher information; Generalized Pareto distribution; Monte Carlo; ...

  14. A framework to identify Pareto-efficient subdaily environmental flow constraints on hydropower reservoirs using a grid-wide power dispatch model

    Science.gov (United States)

    Olivares, Marcelo A.; Haas, Jannik; Palma-Behnke, Rodrigo; Benavides, Carlos

    2015-05-01

    Hydrologic alteration due to hydropeaking reservoir operations is a main concern worldwide. Subdaily environmental flow constraints (ECs) on operations can be promising alternatives for mitigating negative impacts. However, those constraints reduce the flexibility of hydropower plants, potentially with higher costs for the power system. To study the economic and environmental efficiency of ECs, this work proposes a novel framework comprising four steps: (i) assessment of the current subdaily hydrologic alteration; (ii) formulation and implementation of a short-term, grid-wide hydrothermal coordination model; (iii) design of ECs in the form of maximum ramping rates (MRRs) and minimum flows (MIFs) for selected hydropower reservoirs; and (iv) identification of Pareto-efficient solutions in terms of grid-wide costs and the Richards-Baker flashiness index for subdaily hydrologic alteration (SDHA). The framework was applied to Chile's main power grid, assessing 25 EC cases involving five MIFs and five MRRs. Each case was run for a dry, normal, and wet water year type. Three Pareto-efficient ECs are found, with a remarkably small cost increase (below 2%) and an SDHA improvement between 28% and 90%. While the case involving the highest MIF worsens the flashiness of another basin, the other two have no negative effect on other basins and can be recommended for implementation.
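
    For reference, a small sketch of the Richards-Baker flashiness index used above as the sub-daily hydrologic alteration metric; it is commonly computed as the sum of absolute changes between consecutive flows divided by the total flow over the record, but the exact variant used in the paper is an assumption here, as are the example flow series.

```python
import numpy as np

def rb_flashiness(flows):
    """Richards-Baker flashiness index: sum of absolute flow changes over total flow.

    Variants differ slightly (e.g. whether the first value enters the denominator);
    this sketch simply divides by the sum of all flows in the record.
    """
    q = np.asarray(flows, dtype=float)
    return np.abs(np.diff(q)).sum() / q.sum()

if __name__ == "__main__":
    steady = [10, 10, 10, 10, 10, 10]
    peaking = [2, 20, 3, 18, 2, 19]       # hydropeaking-like pattern
    print(rb_flashiness(steady))           # 0.0
    print(rb_flashiness(peaking))          # much larger: a flashier regime
```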

  15. Pareto optimality in organelle energy metabolism analysis.

    Science.gov (United States)

    Angione, Claudio; Carapezza, Giovanni; Costanza, Jole; Lió, Pietro; Nicosia, Giuseppe

    2013-01-01

    In low and high eukaryotes, energy is collected or transformed in compartments, the organelles. The rich variety of size, characteristics, and density of the organelles makes it difficult to build a general picture. In this paper, we make use of the Pareto-front analysis to investigate the optimization of energy metabolism in mitochondria and chloroplasts. Using the Pareto optimality principle, we compare models of organelle metabolism on the basis of single- and multiobjective optimization, approximation techniques (the Bayesian Automatic Relevance Determination), robustness, and pathway sensitivity analysis. Finally, we report the first analysis of the metabolic model for the hydrogenosome of Trichomonas vaginalis, which is found in several protozoan parasites. Our analysis has shown the importance of the Pareto optimality for such comparison and for insights into the evolution of the metabolism from cytoplasmic to organelle bound, involving a model order reduction. We report that Pareto fronts represent an asymptotic analysis useful to describe the metabolism of an organism aimed at maximizing concurrently two or more metabolite concentrations.

  16. Analysis of a Pareto Mixture Distribution for Maritime Surveillance Radar

    Directory of Open Access Journals (Sweden)

    Graham V. Weinberg

    2012-01-01

    Full Text Available The Pareto distribution has been shown to be an excellent model for X-band high-resolution maritime surveillance radar clutter returns. Given the success of mixture distributions in radar, it is thus of interest to consider the effect of Pareto mixture models. This paper introduces a formulation of a Pareto intensity mixture distribution and investigates coherent multilook radar detector performance using this new clutter model. Clutter parameter estimates are derived from data sets produced by the Defence Science and Technology Organisation's Ingara maritime surveillance radar.

  17. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
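
    A rough numerical sketch of the Kullback-Leibler divergence between a Pareto and an Exponential clutter model, evaluated by quadrature; the closed-form expressions and the radar-specific parameterization of the record above are not reproduced, and the shape/scale values below are arbitrary.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kl_divergence(p_pdf, q_pdf, lower, upper):
    """KL(P || Q) = integral of p(x) * log(p(x) / q(x)) over the support of P."""
    def integrand(x):
        p, q = p_pdf(x), q_pdf(x)
        return p * np.log(p / q) if p > 0 and q > 0 else 0.0
    value, _ = quad(integrand, lower, upper, limit=200)
    return value

if __name__ == "__main__":
    shape, scale = 10.0, 1.0                      # arbitrary Pareto shape and scale
    pareto = stats.pareto(b=shape, scale=scale)
    # Exponential shifted to the same support, with a matched mean.
    expon = stats.expon(loc=scale, scale=pareto.mean() - scale)
    kl = kl_divergence(pareto.pdf, expon.pdf, scale, np.inf)
    print(f"KL(Pareto || Exponential) ~ {kl:.4f}")
```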

  18. Pareto-optimal alloys

    DEFF Research Database (Denmark)

    Bligaard, Thomas; Johannesson, Gisli Holmar; Ruban, Andrei

    2003-01-01

    Large databases that can be used in the search for new materials with specific properties remain an elusive goal in materials science. The problem is complicated by the fact that the optimal material for a given application is usually a compromise between a number of materials properties and the cost. In this letter we present a database consisting of the lattice parameters, bulk moduli, and heats of formation for over 64 000 ordered metallic alloys, which has been established by direct first-principles density-functional-theory calculations. Furthermore, we use a concept from economic theory, the Pareto-optimal set, to determine optimal alloy solutions for the compromise between low compressibility, high stability, and cost.

  19. Pareto optimal pairwise sequence alignment.

    Science.gov (United States)

    DeRonne, Kevin W; Karypis, George

    2013-01-01

    Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.

  20. Compositional Modelling of Stochastic Hybrid Systems

    NARCIS (Netherlands)

    Strubbe, S.N.

    2005-01-01

    In this thesis we present a modelling framework for compositional modelling of stochastic hybrid systems. Hybrid systems consist of a combination of continuous and discrete dynamics. The state space of a hybrid system is hybrid in the sense that it consists of a continuous component and a discrete

  1. Hybrid dynamics for currency modeling

    OpenAIRE

    Theodosopoulos, Ted; Trifunovic, Alex

    2006-01-01

    We present a simple hybrid dynamical model as a tool to investigate behavioral strategies based on trend following. The multiplicative symbolic dynamics are generated using a lognormal diffusion model for the at-the-money implied volatility term structure. Thus, our model exploits information from derivative markets to obtain qualitative properties of the return distribution for the underlier. We apply our model to the JPY-USD exchange rate and the corresponding 1mo., 3mo., 6mo. and 1yr. im...

  2. Hybrid2 - The hybrid power system simulation model

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, E.I.; Green, H.J.; Dijk, V.A.P. van [National Renewable Energy Lab., Golden, CO (United States); Manwell, J.F. [Univ. of Massachusetts, Amherst, MA (United States)

    1996-12-31

    There is a large-scale need and desire for energy in remote communities, especially in the developing world; however, the lack of a user-friendly, flexible performance prediction model for hybrid power systems incorporating renewables has hindered the analysis of hybrids as alternatives to conventional solutions. A user-friendly model was needed with the versatility to simulate the many system locations, widely varying hardware configurations, and differing control options for potential hybrid power systems. To meet these ends, researchers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) developed the Hybrid2 software. This paper provides an overview of the capabilities, features, and functionality of the Hybrid2 code, and discusses its validation and future plans. Model availability and technical support provided to Hybrid2 users are also discussed. 12 refs., 3 figs., 4 tabs.

  3. Model Reduction of Hybrid Systems

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza

    High-technological solutions of today are characterized by complex dynamical models. A lot of these models have an inherent hybrid/switching structure. Hybrid/switched systems are powerful models for distributed embedded systems design where discrete controls are applied to continuous processes. In the first framework the projection matrices are found based on the common generalized gramians. Generalized gramians are the solutions to the observability and controllability Lyapunov inequalities. This framework preserves the stability of the original switched system for all switching signals; stability is guaranteed to be preserved for arbitrary switching signals. To compute the common generalized gramians, linear matrix inequalities (LMIs) need to be solved. These LMIs are not always feasible. In order to solve the problem of conservatism, the second framework is presented. In this method the projection ...

  4. On the size distribution of cities: an economic interpretation of the Pareto coefficient.

    Science.gov (United States)

    Suh, S H

    1987-01-01

    "Both the hierarchy and the stochastic models of size distribution of cities are analyzed in order to explain the Pareto coefficient by economic variables. In hierarchy models, it is found that the rate of variation in the productivity of cities and that in the probability of emergence of cities can explain the Pareto coefficient. In stochastic models, the productivity of cities is found to explain the Pareto coefficient. New city-size distribution functions, in which the Pareto coefficient is decomposed by economic variables, are estimated." excerpt

  5. Hybrid Model of Content Extraction

    DEFF Research Database (Denmark)

    Qureshi, Pir Abdul Rasool; Memon, Nasrullah

    2012-01-01

    We present a hybrid model for content extraction from HTML documents. The model operates on the Document Object Model (DOM) tree of the corresponding HTML document. It evaluates each tree node and associated statistical features like link density and text distribution across the node to predict the significance of the node towards the overall content provided by the document. Once the significance of the nodes is determined, the formatting characteristics like fonts, styles and the position of the nodes are evaluated to identify the nodes with similar formatting as compared to the significant nodes. The proposed ...

  6. On the Truncated Pareto Distribution with applications

    OpenAIRE

    Zaninetti, Lorenzo; Ferraro, Mario

    2008-01-01

    The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, and more precisely to the statistical analysis of the masses of stars and the diameters of asteroids. In particular a comparison between the usual Pareto distribution and its truncated version is presented. Finally a possible physical mechanism that produces Pareto tails for the distribution ...

  7. Record Values of a Pareto Distribution.

    Science.gov (United States)

    Ahsanullah, M.

    The record values of the Pareto distribution, labelled Pareto (II) (alpha, beta, nu), are reviewed. The best linear unbiased estimates of the parameters in terms of the record values are provided. The prediction of the sth record value based on the first m (s>m) record values is obtained. A classical Pareto distribution provides reasonably…

  8. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedure for hybrid modeling includes: (1) formulating the fundamental governing equations of the process based on energy and material balances and thermodynamic principles; (2) selecting input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) representing those variables that exist in the original equations but are not measurable as simple functions of the selected I/Os or constants; (4) obtaining a single equation which correlates system inputs and outputs; and (5) identifying unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which can significantly reduce the computational burden and increase the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%.
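
    Step (5) of the hybrid-modelling procedure above (identifying unknown parameters by least squares) can be illustrated generically. The snippet fits the coefficients of a linear-in-parameters correlation to measured input/output data; the feature names, values and noise level are hypothetical, and this is not the authors' evaporator model.

```python
import numpy as np

def identify_parameters(features, measured_output):
    """Least-squares fit of y ~ X @ theta for a linear-in-parameters correlation."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(measured_output, dtype=float)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 200
    # Hypothetical measurable I/O variables (e.g. flow rate, inlet temperature) plus a bias column.
    X = np.column_stack([rng.uniform(0.5, 2.0, n), rng.uniform(280, 320, n), np.ones(n)])
    true_theta = np.array([4.0, 0.05, -7.0])
    y = X @ true_theta + rng.normal(0.0, 0.1, n)   # noisy "measurements"
    print(identify_parameters(X, y))                # close to [4.0, 0.05, -7.0]
```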

  9. Pareto law and Pareto index in the income distribution of Japanese companies

    OpenAIRE

    Ishikawa, Atushi

    2004-01-01

    In order to study in detail the phenomenon that income distributions follow a Pareto law, we analyze the database of high-income companies in Japan. We find a quantitative relation between the average capital of the companies and the Pareto index: the larger the average capital becomes, the smaller the Pareto index becomes. From this relation, we can possibly explain why the Pareto index of company income distribution hardly changes, while the Pareto index of personal income distribution chang...

  10. Hybrid model of steam boiler

    International Nuclear Information System (INIS)

    Rusinowski, Henryk; Stanek, Wojciech

    2010-01-01

    In the case of big energy boilers, energy efficiency is usually determined with the application of the indirect method. Flue gas losses and unburnt combustible losses have a significant influence on the boiler's efficiency. To estimate these losses, knowledge of the influence of the operating parameters on the flue gas temperature and on the content of combustible particles in the solid combustion products is necessary. A hybrid model of a boiler developed with the application of both analytical modelling and artificial intelligence is described. The analytical part of the model includes the balance equations. The empirical models express the dependence of the flue gas temperature and of the mass fraction of unburnt combustibles in the solid combustion products on the operating parameters of the boiler. The empirical models have been worked out by means of neural and regression modelling.

  11. The feasibility of using Pareto fronts for comparison of treatment planning systems and delivery techniques

    International Nuclear Information System (INIS)

    Ottosson, Rickard O.; Sjoestroem, David; Behrens, Claus F.; Karlsson, Anna; Engstroem, Per E.; Knoeoes, Tommy; Ceberg, Crister

    2009-01-01

    Pareto optimality is a concept that formalises the trade-off between a given set of mutually contradicting objectives. A solution is said to be Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the other. A set of Pareto optimal solutions constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics of a treatment planning system (TPS), treatment strategy or delivery technique, Pareto fronts for a given case are likely to differ. The aim of this study was to investigate the feasibility of using Pareto fronts as a comparative tool for TPSs, treatment strategies and delivery techniques. In order to sample Pareto fronts, multiple treatment plans with varying target conformity and dose sparing of OAR were created for a number of prostate and head and neck IMRT cases. The DVHs of each plan were evaluated with respect to target coverage and dose to relevant OAR. Pareto fronts were successfully created for all studied cases. The results did indeed follow the definition of the Pareto concept, i.e. dose sparing of the OAR could not be improved without target coverage being impaired or vice versa. Furthermore, various treatment techniques resulted in distinguished and well separated Pareto fronts. Pareto fronts may be used to evaluate a number of parameters within radiotherapy. Examples are TPS optimization algorithms, the variation between accelerators or delivery techniques and the degradation of a plan during the treatment planning process. The issue of designing a model for unbiased comparison of parameters with such large inherent discrepancies, e.g. different TPSs, is problematic and should be carefully considered

  12. The feasibility of using Pareto fronts for comparison of treatment planning systems and delivery techniques.

    Science.gov (United States)

    Ottosson, Rickard O; Engstrom, Per E; Sjöström, David; Behrens, Claus F; Karlsson, Anna; Knöös, Tommy; Ceberg, Crister

    2009-01-01

    Pareto optimality is a concept that formalises the trade-off between a given set of mutually contradicting objectives. A solution is said to be Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the other. A set of Pareto optimal solutions constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics of a treatment planning system (TPS), treatment strategy or delivery technique, Pareto fronts for a given case are likely to differ. The aim of this study was to investigate the feasibility of using Pareto fronts as a comparative tool for TPSs, treatment strategies and delivery techniques. In order to sample Pareto fronts, multiple treatment plans with varying target conformity and dose sparing of OAR were created for a number of prostate and head & neck IMRT cases. The DVHs of each plan were evaluated with respect to target coverage and dose to relevant OAR. Pareto fronts were successfully created for all studied cases. The results did indeed follow the definition of the Pareto concept, i.e. dose sparing of the OAR could not be improved without target coverage being impaired or vice versa. Furthermore, various treatment techniques resulted in distinguished and well separated Pareto fronts. Pareto fronts may be used to evaluate a number of parameters within radiotherapy. Examples are TPS optimization algorithms, the variation between accelerators or delivery techniques and the degradation of a plan during the treatment planning process. The issue of designing a model for unbiased comparison of parameters with such large inherent discrepancies, e.g. different TPSs, is problematic and should be carefully considered.

  13. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

    The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an

  14. Pareto Optimal Design for Synthetic Biology.

    Science.gov (United States)

    Patanè, Andrea; Santoro, Andrea; Costanza, Jole; Carapezza, Giovanni; Nicosia, Giuseppe

    2015-08-01

    Recent advances in synthetic biology call for robust, flexible and efficient in silico optimization methodologies. We present a Pareto design approach for the bi-level optimization problem associated with the overproduction of specific metabolites in Escherichia coli. Our method efficiently explores the high-dimensional genetic manipulation space, finding a number of trade-offs between synthetic and biological objectives, hence furnishing a deeper biological insight into the addressed problem and important results for industrial purposes. We demonstrate the computational capabilities of our Pareto-oriented approach by comparing it with state-of-the-art heuristics in the overproduction problems of i) 1,4-butanediol, ii) myristoyl-CoA, iii) malonyl-CoA, iv) acetate and v) succinate. We show that our algorithms are able to gracefully adapt and scale to more complex models and more biologically relevant simulations of the allowed genetic manipulations. The results obtained for 1,4-butanediol overproduction significantly outperform results previously obtained, in terms of the 1,4-butanediol to biomass formation ratio and knock-out costs. In particular the overproduction improvement is +662.7%, from 1.425 mmolh⁻¹gDW⁻¹ (wild type) to 10.869 mmolh⁻¹gDW⁻¹, with a knockout cost of 6. Moreover, the Pareto-optimal designs we found in the fatty acid optimizations strictly dominate the ones obtained by the other methodologies, e.g., biomass and myristoyl-CoA exportation improvements of +21.43% (0.17 h⁻¹) and +5.19% (1.62 mmolh⁻¹gDW⁻¹), respectively. Furthermore the CPU time required by our heuristic approach is more than halved. Finally we implement pathway-oriented sensitivity analysis, epsilon-dominance analysis and robustness analysis to enhance our biological understanding of the problem and to improve the optimization algorithm's capabilities.

  15. Wealth distribution, Pareto law, and stretched exponential decay of money: Computer simulations analysis of agent-based models

    Science.gov (United States)

    Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf

    2018-01-01

    We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - which may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on the saving propensities. The system relaxation for fixed and distributed saving schemes is found to be different.

  16. Efficient approximation of black-box functions and Pareto sets

    NARCIS (Netherlands)

    Rennen, G.

    2009-01-01

    In the case of time-consuming simulation models or other so-called black-box functions, we determine a metamodel which approximates the relation between the input- and output-variables of the simulation model. To solve multi-objective optimization problems, we approximate the Pareto set, i.e. the

  17. Existence of pareto equilibria for multiobjective games without compactness

    OpenAIRE

    Shiraishi, Yuya; Kuroiwa, Daishi

    2013-01-01

    In this paper, we investigate the existence of Pareto and weak Pareto equilibria for multiobjective games without compactness. By employing an existence theorem of Pareto equilibria due to Yu and Yuan [10], several existence theorems of Pareto and weak Pareto equilibria for multiobjective games are established in a way similar to that of Flores-Bázan.

  18. Word frequencies: A comparison of Pareto type distributions

    Science.gov (United States)

    Wiegand, Martin; Nadarajah, Saralees; Si, Yuancheng

    2018-03-01

    Mehri and Jamaati (2017) [18] used Zipf's law to model word frequencies in Holy Bible translations for one hundred live languages. We compare the fit of Zipf's law to a number of Pareto type distributions. The latter distributions are shown to provide the best fit, as judged by a number of comparative plots and error measures. The fit of Zipf's law appears generally poor.

  19. The exponentiated generalized Pareto distribution | Adeyemi | Ife ...

    African Journals Online (AJOL)

    Recently Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...

  20. Computing gap free Pareto front approximations with stochastic search algorithms.

    Science.gov (United States)

    Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali

    2010-01-01

    Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of epsilon-dominance. Though bounds on the quality of the limit approximation, which are entirely determined by the archiving strategy and the value of epsilon, have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we are aiming in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of epsilon-dominance can be integrated into this approach in a suitable way.

  1. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters and the probability of failure is small for parameter uncertainties with COV up to 0.1.

  2. HYbrid Coordinate Ocean Model (HYCOM): Global

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Global HYbrid Coordinate Ocean Model (HYCOM) and U.S. Navy Coupled Ocean Data Assimilation (NCODA) 3-day, daily forecast at approximately 9-km (1/12-degree)...

  3. Travelling Waves in Hybrid Chemotaxis Models

    KAUST Repository

    Franz, Benjamin

    2013-12-18

    Hybrid models of chemotaxis combine agent-based models of cells with partial differential equation models of extracellular chemical signals. In this paper, travelling wave properties of hybrid models of bacterial chemotaxis are investigated. Bacteria are modelled using an agent-based (individual-based) approach with internal dynamics describing signal transduction. In addition to the chemotactic behaviour of the bacteria, the individual-based model also includes cell proliferation and death. Cells consume the extracellular nutrient field (chemoattractant), which is modelled using a partial differential equation. Mesoscopic and macroscopic equations representing the behaviour of the hybrid model are derived and the existence of travelling wave solutions for these models is established. It is shown that cell proliferation is necessary for the existence of non-transient (stationary) travelling waves in hybrid models. Additionally, a numerical comparison between the wave speeds of the continuum models and the hybrid models shows good agreement in the case of weak chemotaxis and qualitative agreement for the strong chemotaxis case. In the case of slow cell adaptation, we detect oscillating behaviour of the wave, which cannot be explained by mean-field approximations. © 2013 Society for Mathematical Biology.

  4. Hybrid rocket engine, theoretical model and experiment

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: the scalability, the stability/controllability of the operating parameters and the increasing of the solid fuel regression rate. At first, we focus on theoretical models for hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors, having different scales and compare them with experimental measurements of those hybrid rocket motors. Next the paper focuses on tribrid rocket motor concept, which by supplementary liquid fuel injection can improve the thrust controllability. A complementary computation model is also presented to estimate regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. Stability coefficients obtained are dependent on burning parameters while the stability and command matrixes are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.

  5. Towards Modelling of Hybrid Systems

    DEFF Research Database (Denmark)

    Wisniewski, Rafal

    2006-01-01

    A hybrid system consists of a number of dynamical systems that are glued together according to information encoded in the discrete part of the system. We develop a definition of a hybrid system as a functor from the category generated by a transition system to the category of directed topological spaces. Its ...

  6. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

    Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and propose a way to improve the results of fluid models in the low-pressure region. There is a possibility to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations in the case when the particle distribution function is not Maxwellian. We follow this approach to enhance the fluid model and to use it further in a hybrid plasma model. (paper)

  7. Reactor systems modeling for ICF hybrids

    International Nuclear Information System (INIS)

    Berwald, D.H.; Meier, W.R.

    1980-10-01

    The computational models of ICF reactor subsystems developed by LLNL and TRW are described and were incorporated into a computer program for use in the EPRI-sponsored Feasibility Assessment of Fusion-Fission Hybrids. Representative parametric variations have been examined. Many of the ICF subsystem models are very preliminary, and more quantitative models need to be developed and included in the code.

  8. Feasibility of identification of gamma knife planning strategies by identification of pareto optimal gamma knife plans.

    Science.gov (United States)

    Giller, C A

    2011-12-01

    The use of conformity indices to optimize Gamma Knife planning is common, but does not address important tradeoffs between dose to tumor and normal tissue. Pareto analysis has been used for this purpose in other applications, but not for Gamma Knife (GK) planning. The goal of this work is to use computer models to show that Pareto analysis may be feasible for GK planning to identify dosimetric tradeoffs. We define a GK plan A to be Pareto dominant to B if the prescription isodose volume of A covers more tumor but not more normal tissue than B, or if A covers less normal tissue but not less tumor than B. A plan is Pareto optimal if it is not dominated by any other plan. Two different Pareto optimal plans represent different tradeoffs between dose to tumor and normal tissue, because neither plan dominates the other. 'GK simulator' software calculated dose distributions for GK plans, and was called repetitively by a genetic algorithm to calculate Pareto dominant plans. Three irregular tumor shapes were tested in 17 trials using various combinations of shots. The mean number of Pareto dominant plans/trial was 59 ± 17 (sd). Different planning strategies were identified by large differences in shot positions, and 70 of the 153 coordinate plots (46%) showed differences of 5mm or more. The Pareto dominant plans dominated other nearby plans. Pareto dominant plans represent different dosimetric tradeoffs and can be systematically calculated using genetic algorithms. Automatic identification of non-intuitive planning strategies may be feasible with these methods.
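
    The dominance relation defined in this record (plan A dominates B if it covers at least as much tumor and no more normal tissue, and is strictly better on one of the two) can be written down directly. The small sketch below is purely illustrative, with invented volumes; it has nothing to do with the 'GK simulator' software or the genetic algorithm used in the study.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    tumor_covered_cc: float     # tumor volume inside the prescription isodose
    normal_covered_cc: float    # normal tissue volume inside the prescription isodose

def dominates(a: Plan, b: Plan) -> bool:
    """Plan a Pareto-dominates plan b: no worse on both criteria, strictly better on one."""
    no_worse = (a.tumor_covered_cc >= b.tumor_covered_cc
                and a.normal_covered_cc <= b.normal_covered_cc)
    better = (a.tumor_covered_cc > b.tumor_covered_cc
              or a.normal_covered_cc < b.normal_covered_cc)
    return no_worse and better

if __name__ == "__main__":
    plan_a = Plan(tumor_covered_cc=4.1, normal_covered_cc=0.80)
    plan_b = Plan(tumor_covered_cc=4.0, normal_covered_cc=0.90)
    plan_c = Plan(tumor_covered_cc=4.3, normal_covered_cc=0.95)
    print(dominates(plan_a, plan_b))  # True: a is better on both criteria
    print(dominates(plan_a, plan_c))  # False: a and c are different trade-offs
```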

  9. A hybrid mammalian cell cycle model

    Directory of Open Access Journals (Sweden)

    Vincent Noël

    2013-08-01

    Full Text Available Hybrid modeling provides an effective solution to cope with multiple time scales dynamics in systems biology. Among the applications of this method, one of the most important is the cell cycle regulation. The machinery of the cell cycle, leading to cell division and proliferation, combines slow growth, spatio-temporal re-organisation of the cell, and rapid changes of regulatory proteins concentrations induced by post-translational modifications. The advancement through the cell cycle comprises a well defined sequence of stages, separated by checkpoint transitions. The combination of continuous and discrete changes justifies hybrid modelling approaches to cell cycle dynamics. We present a piecewise-smooth version of a mammalian cell cycle model, obtained by hybridization from a smooth biochemical model. The approximate hybridization scheme, leading to simplified reaction rates and binary event location functions, is based on learning from a training set of trajectories of the smooth model. We discuss several learning strategies for the parameters of the hybrid model.

  10. A Hybrid 3D Indoor Space Model

    Directory of Open Access Journals (Sweden)

    A. Jamali

    2016-10-01

    Full Text Available GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information about the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings which do not have proper 2D/3D geometrical models or which lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantic space.

  11. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  12. Modelling dependable systems using hybrid Bayesian networks

    International Nuclear Information System (INIS)

    Neil, Martin; Tailor, Manesh; Marquez, David; Fenton, Norman; Hearty, Peter

    2008-01-01

    A hybrid Bayesian network (BN) is one that incorporates both discrete and continuous nodes. In our extensive applications of BNs for system dependability assessment, the models are invariably hybrid and the need for efficient and accurate computation is paramount. We apply a new iterative algorithm that efficiently combines dynamic discretisation with robust propagation algorithms on junction tree structures to perform inference in hybrid BNs. We illustrate its use in the field of dependability with two examples of reliability estimation. First we estimate the reliability of a simple single system, and next we implement a hierarchical Bayesian model. In the hierarchical model we compute the reliability of two unknown subsystems from data collected on historically similar subsystems and then input the result into a reliability block model to compute system-level reliability. We conclude that dynamic discretisation can be used as an alternative to analytical or Monte Carlo methods with high precision and can be applied to a wide range of dependability problems.

  13. Kinetics of wealth and the Pareto law.

    Science.gov (United States)

    Boghosian, Bruce M

    2014-04-01

    An important class of economic models involve agents whose wealth changes due to transactions with other agents. Several authors have pointed out an analogy with kinetic theory, which describes molecules whose momentum and energy change due to interactions with other molecules. We pursue this analogy and derive a Boltzmann equation for the time evolution of the wealth distribution of a population of agents for the so-called Yard-Sale Model of wealth exchange. We examine the solutions to this equation by a combination of analytical and numerical methods and investigate its long-time limit. We study an important limit of this equation for small transaction sizes and derive a partial integrodifferential equation governing the evolution of the wealth distribution in a closed economy. We then describe how this model can be extended to include features such as inflation, production, and taxation. In particular, we show that the model with taxation exhibits the basic features of the Pareto law, namely, a lower cutoff to the wealth density at small values of wealth, and approximate power-law behavior at large values of wealth.
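
    A bare-bones Monte Carlo sketch of a Yard-Sale-type exchange (each transaction wagers a fixed fraction of the poorer agent's wealth, won or lost at random) illustrates the kind of agent-based simulation discussed above; the Boltzmann-equation analysis and the inflation, production and taxation extensions of the record are not reproduced, and all parameter values are arbitrary.

```python
import numpy as np

def yard_sale(n_agents=1000, n_steps=100_000, fraction=0.1, seed=0):
    """Simulate pairwise Yard-Sale exchanges and return the final wealths."""
    rng = np.random.default_rng(seed)
    wealth = np.ones(n_agents)
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        stake = fraction * min(wealth[i], wealth[j])   # wager limited by the poorer agent
        if rng.random() < 0.5:
            wealth[i] += stake
            wealth[j] -= stake
        else:
            wealth[i] -= stake
            wealth[j] += stake
    return wealth

if __name__ == "__main__":
    w = yard_sale()
    print("total wealth (conserved):", round(w.sum(), 2))
    print("share held by the richest 1%:", round(np.sort(w)[-10:].sum() / w.sum(), 3))
```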

  14. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and expectations about climate has been a challenge throughout mankind's history. Accurate meteorological guidance helps to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats climate as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting owing to the specialty of climate-anticipating frameworks. The study concentrates on data representing Saudi Arabian weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative moistness, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. On an individual basis, MLP forecasting results are better than those of RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.

  15. Income inequality in Romania: The exponential-Pareto distribution

    Science.gov (United States)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
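
    The two-regime fit described above can be sketched as follows (synthetic stand-in data and an assumed body/tail threshold, not the authors' tax records): the exponential body is fitted by its sample mean and the Pareto tail exponent by the usual maximum-likelihood (Hill-type) estimator above the threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-in incomes: exponential body plus a Pareto tail with alpha = 2.5
body = rng.exponential(scale=2_000.0, size=95_000)
tail = 8_000.0 * (1.0 + rng.pareto(a=2.5, size=5_000))
incomes = np.concatenate([body, tail])

u = 8_000.0                                      # assumed body/tail threshold
low, high = incomes[incomes <= u], incomes[incomes > u]

scale_hat = low.mean()                           # MLE of the exponential scale
alpha_hat = len(high) / np.log(high / u).sum()   # Hill/MLE estimate of the Pareto exponent

print(f"exponential scale ~ {scale_hat:.0f}, Pareto alpha ~ {alpha_hat:.2f}")
```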

  16. [Origination of Pareto distribution in complex dynamic systems].

    Science.gov (United States)

    Chernavskiĭ, D S; Nikitin, A P; Chernavskaia, O D

    2008-01-01

    The Pareto distribution, whose probability density function can be approximated at sufficiently great chi as rho(chi) - chi(-alpha), where alpha > or = 2, is of crucial importance from both the theoretical and practical point of view. The main reason is its qualitative distinction from the normal (Gaussian) distribution. Namely, the probability of high deviations appears to be significantly higher. The conception of the universal applicability of the Gauss law remains to be widely distributed despite the lack of objective confirmation of this notion in a variety of application areas. The origin of the Pareto distribution in dynamic systems located in the gaussian noise field is considered. A simple one-dimensional model is discussed where the system response in a rather wide interval of the variable can be quite precisely approximated by this distribution.

  17. Pareto fronts in clinical practice for pinnacle.

    Science.gov (United States)

    Janssen, Tomas; van Kesteren, Zdenko; Franssen, Gijs; Damen, Eugène; van Vliet, Corine

    2013-03-01

    Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. To generate the Pareto fronts, we used the native scripting language of Pinnacle(3) (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Generating 3000 plans for 10 patients in parallel requires on average 96 h for IMRT and 483 hours for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI(95%)) by 0.02 (P=.005), and the rectal wall V(65 Gy) by 1.1% (P=.008). We showed the feasibility of automatically generating Pareto fronts with Pinnacle(3). Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Pareto Fronts in Clinical Practice for Pinnacle

    International Nuclear Information System (INIS)

    Janssen, Tomas; Kesteren, Zdenko van; Franssen, Gijs; Damen, Eugène; Vliet, Corine van

    2013-01-01

    Purpose: Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. Methods and Materials: To generate the Pareto fronts, we used the native scripting language of Pinnacle(3) (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Results: Generating 3000 plans for 10 patients in parallel requires on average 96 hours for IMRT and 483 hours for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI(95%)) by 0.02 (P=.005), and the rectal wall V(65 Gy) by 1.1% (P=.008). Conclusions: We showed the feasibility of automatically generating Pareto fronts with Pinnacle(3). Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT.
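
    The core operation behind such a framework, filtering thousands of automatically generated plans down to the non-dominated set, can be sketched as below (hypothetical plan scores with all criteria to be minimized; not the Pinnacle(3) scripting itself):

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows.
    `scores` is (n_plans, n_criteria); every criterion is minimized."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # a plan dominates i if it is <= on all criteria and < on at least one
        dominators = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominators.any():
            mask[i] = False
    return mask

# hypothetical 5-criteria scores for 3000 auto-generated plans
plans = np.random.default_rng(2).random((3000, 5))
front = plans[pareto_front(plans)]
print(f"{len(front)} of {len(plans)} plans are Pareto optimal")
```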

  19. Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Xuanhua; Luo, Xuan; Liang, Junling; Zhao, Peng; Di, Sheng; He, Bingsheng; Jin, Hai

    2018-01-01

    GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of a large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring, which we observed through masses of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated in a very high degree of parallelism without violating the sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (Cusha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and
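
    The 80-20 observation about color frequencies can be reproduced in miniature with a plain greedy coloring (a sketch on a random graph, unrelated to Frog's actual GPU implementation):

```python
import random
from collections import Counter, defaultdict

def greedy_coloring(adj):
    """Assign to each vertex the smallest color not used by its neighbors."""
    color = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):   # high-degree vertices first
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# random sparse graph as a stand-in for a real-world graph
random.seed(3)
n, m = 10_000, 40_000
adj = defaultdict(set)
for _ in range(m):
    a, b = random.randrange(n), random.randrange(n)
    if a != b:
        adj[a].add(b); adj[b].add(a)

colors = greedy_coloring(adj)
hist = Counter(colors.values())
top_few = sum(c for _, c in hist.most_common(3))
print(f"{len(hist)} colors; the 3 largest hold {100 * top_few / len(colors):.1f}% of vertices")
```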

  20. Hybrid quantum teleportation: A theoretical model

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira [Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2014-12-04

    Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.

  1. Post Pareto optimization-A case

    Science.gov (United States)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.

  2. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  3. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    Full Text Available This paper addresses the problems of similarity calculation in the traditional recommendation algorithms of nearest neighbor collaborative filtering, especially the failure in describing dynamic user preference. Proceeding from the perspective of solving the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. This model consists of two parts: on the one hand, the model uses function fitting to describe users’ rating behaviors and their rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. Furthermore, the paper combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different size, the model’s prediction precision is higher than that of the traditional recommendation algorithms.

  4. TopN-Pareto Front Search

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-21

    The JMP Add-In TopN-PFS provides an automated tool for finding layered Pareto fronts to identify the top N solutions from an enumerated list of candidates subject to optimizing multiple criteria. The approach constructs the N layers of Pareto fronts, and then provides a suite of graphical tools to explore the alternatives based on different prioritizations of the criteria. The tool is designed to provide a set of alternatives from which the decision-maker can select the best option for their study goals.
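
    The layered-front idea can be sketched outside JMP as repeated non-dominated filtering, peeling off one Pareto layer at a time (hypothetical candidate scores; all criteria minimized):

```python
import numpy as np

def pareto_layers(scores, n_layers=3):
    """Peel off successive Pareto fronts: layer 0 is the non-dominated set,
    layer 1 is the front of what remains, and so on (all criteria minimized)."""
    remaining = np.arange(len(scores))
    layers = []
    for _ in range(n_layers):
        if remaining.size == 0:
            break
        sub = scores[remaining]
        nondom = np.array([
            not np.any(np.all(sub <= row, axis=1) & np.any(sub < row, axis=1))
            for row in sub
        ])
        layers.append(remaining[nondom])
        remaining = remaining[~nondom]
    return layers

candidates = np.random.default_rng(4).random((500, 3))   # 500 candidates, 3 criteria
for k, layer in enumerate(pareto_layers(candidates)):
    print(f"layer {k}: {len(layer)} candidates")
```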

  5. Hybrid Energy System Modeling in Modelica

    Energy Technology Data Exchange (ETDEWEB)

    William R. Binder; Christiaan J. J. Paredis; Humberto E. Garcia

    2014-03-01

    In this paper, a Hybrid Energy System (HES) configuration is modeled in Modelica. Hybrid Energy Systems (HES) have as their defining characteristic the use of one or more energy inputs, combined with the potential for multiple energy outputs. Compared to traditional energy systems, HES provide additional operational flexibility so that high variability in both energy production and consumption levels can be absorbed more effectively. This is particularly important when including renewable energy sources, whose output levels are inherently variable, determined by nature. The specific HES configuration modeled in this paper includes two energy inputs: a nuclear plant, and a series of wind turbines. In addition, the system produces two energy outputs: electricity and synthetic fuel. The models are verified through simulations of the individual components, and the system as a whole. The simulations are performed for a range of component sizes, operating conditions, and control schemes.

  6. Mathematical Modeling of Hybrid Electrical Engineering Systems

    Directory of Open Access Journals (Sweden)

    A. A. Lobaty

    2016-01-01

    Full Text Available A large class of systems that have found application in various industries and households, electrified transportation facilities and the energy sector is classified as electrical engineering systems. Their characteristic feature is a combination of continuous and discontinuous modes of operation, which is reflected in the appearance of the relatively new term “hybrid systems”. A wide class of hybrid systems is pulsed DC converters operating with pulse width modulation, which are non-linear systems with variable structure. Using various linearization methods it is possible to obtain linear mathematical models that rather accurately simulate the behavior of such systems. However, the presence of exponential nonlinearities in the mathematical models creates considerable difficulties for implementation in digital hardware. A solution can be found by approximating the exponential functions with first-order polynomials, which, however, violates the rigorous correspondence of the analytical model to the characteristics of the real object. There are two practical approaches to synthesizing algorithms for the control of hybrid systems. The first approach is based on representing the whole system by a discrete model described by difference equations, which makes it possible to synthesize discrete algorithms. The second approach is based on describing the system by differential equations. The equations describe the synthesis of continuous algorithms and their further implementation in a digital computer included in the control loop of the system. The paper considers modeling of a hybrid electrical engineering system using differential equations. Neglecting the pulse duration, it is proposed to describe the behavior of the vector components in phase coordinates of the hybrid system by stochastic differential equations containing, in general, non-linear differentiable random functions. A stochastic vector-matrix equation describing dynamics of the

  7. The Burr X Pareto Distribution: Properties, Applications and VaR Estimation

    Directory of Open Access Journals (Sweden)

    Mustafa Ç. Korkmaz

    2017-12-01

    Full Text Available In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods of the model parameters are performed. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model against the generalized Pareto model for financial institutions.
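
    As a hedged sketch of the peaks-over-threshold step mentioned above, the excesses over a threshold can be fitted with SciPy's generalized Pareto distribution and plugged into the standard POT Value-at-Risk formula (synthetic losses, an assumed threshold and confidence level; the paper itself uses the Burr X Pareto rather than the plain GPD):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=10_000)       # heavy-tailed synthetic losses

u = np.quantile(losses, 0.95)                    # assumed threshold
exceed = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exceed, floc=0)     # fit the GPD to the excesses

p = 0.99                                         # VaR confidence level (assumed)
n, n_u = len(losses), len(exceed)
# standard POT tail estimator inverted at probability p
var_p = u + (sigma / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
print(f"xi={xi:.3f}, sigma={sigma:.3f}, VaR(99%)={var_p:.3f}")
```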

  8. Optimization of hybrid model on hajj travel

    Science.gov (United States)

    Cahyandari, R.; Ariany, R. L.; Sukono

    2018-03-01

    Hajj travel insurance is an insurance product offered by insurance companies to help prepare funds for performing the pilgrimage. This insurance product helps would-be pilgrims to set aside hajj savings regularly, while also providing profit-sharing funds (mudharabah) and insurance protection. The fund management scheme of this insurance product largely uses the hybrid model, in which the fund from would-be pilgrims is divided into three managed accounts, namely the personal account, tabarru’, and ujrah. The scheme of the hybrid model for hajj travel insurance was discussed in an earlier paper titled “The Hybrid Model Algorithm on Sharia Insurance”, taking as an example the Mitra Mabrur Plus product from the Bumiputera company. In this follow-up paper, the previous model is optimized by partitioning the benefits of the tabarru’ account. Benefits such as compensation for 40 critical illnesses, which initially applied only to the insurance participant, are extended under the optimization to the participant and their heirs, and also cover hospital bills. Meanwhile, the death benefit is paid if the participant dies.

  9. Axiomatizations of Pareto Equilibria in Multicriteria Games

    NARCIS (Netherlands)

    Voorneveld, M.; Vermeulen, D.; Borm, P.E.M.

    1997-01-01

    We focus on axiomatizations of the Pareto equilibrium concept in multicriteria games based on consistency. Axiomatizations of the Nash equilibrium concept by Peleg and Tijs (1996) and Peleg, Potters, and Tijs (1996) have immediate generalizations. The axiomatization of Norde et al. (1996) cannot be

  10. How Well Do We Know Pareto Optimality?

    Science.gov (United States)

    Mathur, Vijay K.

    1991-01-01

    Identifies sources of ambiguity in economics textbooks' discussion of the condition for efficient output mix. Points out that diverse statements without accompanying explanations create confusion among students. Argues that conflicting views concerning the concept of Pareto optimality are one source of ambiguity. Suggests clarifying additions to…

  11. Performance-based Pareto optimal design

    NARCIS (Netherlands)

    Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.

    2008-01-01

    A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or make consistent their impartial estimations from case to case. Fuzzy logic and soft computing are

  12. Gravitational waves in hybrid quintessential inflationary models

    Energy Technology Data Exchange (ETDEWEB)

    Sa, Paulo M [Departamento de Fisica, Faculdade de Ciencias e Tecnologia, Universidade do Algarve, Campus de Gambelas, 8005-139 Faro (Portugal); Henriques, Alfredo B, E-mail: pmsa@ualg.pt, E-mail: alfredo.henriques@ist.utl.pt [Centro Multidisciplinar de Astrofisica - CENTRA and Departamento de Fisica, Instituto Superior Tecnico, UTL, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2011-09-22

    The generation of primordial gravitational waves is investigated within the hybrid quintessential inflationary model. Using the method of continuous Bogoliubov coefficients, we calculate the full gravitational-wave energy spectrum. The post-inflationary kination period, characteristic of quintessential inflationary models, leaves a clear signature on the spectrum, namely, a sharp rise of the gravitational-wave spectral energy density Ω_GW at high frequencies. For appropriate values of the parameters of the model, Ω_GW can be as high as 10^-12 in the MHz-GHz range of frequencies.

  13. Gravitational waves in hybrid quintessential inflationary models

    International Nuclear Information System (INIS)

    Sa, Paulo M; Henriques, Alfredo B

    2011-01-01

    The generation of primordial gravitational waves is investigated within the hybrid quintessential inflationary model. Using the method of continuous Bogoliubov coefficients, we calculate the full gravitational-wave energy spectrum. The post-inflationary kination period, characteristic of quintessential inflationary models, leaves a clear signature on the spectrum, namely, a sharp rise of the gravitational-wave spectral energy density Ω_GW at high frequencies. For appropriate values of the parameters of the model, Ω_GW can be as high as 10^-12 in the MHz-GHz range of frequencies.

  14. Modelling Chemical Preservation of Plantain Hybrid Fruits

    Directory of Open Access Journals (Sweden)

    Ogueri Nwaiwu

    2017-08-01

    Full Text Available New plantain hybrid plants have been developed, but not much has been done on the post-harvest keeping quality of the fruits and how they are affected by microbial colonization. Hence fruits from a tetraploid hybrid PITA 2 (TMPx 548-9), obtained by crossing plantain varieties Obino l’Ewai and Calcutta 4 (AA), and two local triploid (AAB) plantain landraces, Agbagba and Obino l’Ewai, were subjected to various concentrations of acetic, sorbic and propionic acid to determine the impact of chemical concentration, chemical type and plantain variety on ripening and weight loss of plantain fruits. Analysis of titratable acidity, moisture content and total soluble solids showed that there were no significant differences between fruits of hybrid and local varieties. The longest time to ripening from harvest (24 days) was achieved with fruits of Agbagba treated with 3% propionic acid. However, fruits of the PITA 2 hybrid treated with propionic and sorbic acid at 3% showed the longest green life, which indicated that the chemicals may work better at higher concentrations. The Obino l’Ewai cultivar had the highest weight loss for all chemical types used. Modelling data obtained showed that plantain variety had the most significant effect on ripening, indicating that ripening of the fruits may depend on the plantain variety. It appears that weight loss of fruits from the plantain hybrid and local cultivars was not affected by plantain variety or chemical type. The chemicals at higher concentrations may have an effect on ripening of the fruits and will need further investigation.

  15. Estimation of the shape parameter of a generalized Pareto distribution based on a transformation to Pareto distributed variables

    OpenAIRE

    van Zyl, J. Martin

    2012-01-01

    Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of generalized Pareto distributed variables using transformed observations, based on the probability-weighted method, is tested. It was found to improve the performance of the probability weighted estimator and performs well wit...

  16. Income dynamics with a stationary double Pareto distribution.

    Science.gov (United States)

    Toda, Alexis Akira

    2011-04-01

    Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways--(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model--and obtain the same value of 8.4.

  17. Comments On Clock Models In Hybrid Automata And Hybrid Control Systems

    Directory of Open Access Journals (Sweden)

    Virginia Ecaterina OLTEAN

    2001-12-01

    Full Text Available Hybrid systems have received a lot of attention in the past decade and a number of different models have been proposed in order to establish a mathematical framework that is able to handle both continuous and discrete aspects. This contribution is focused on two models: hybrid automata and hybrid control systems with a continuous-discrete interface, and the importance of clock models is emphasized. Simple and relevant examples, some taken from the literature, accompany the presentation.

  18. Mathematical Modeling and a Hybrid NSGA-II Algorithm for Process Planning Problem Considering Machining Cost and Carbon Emission

    Directory of Open Access Journals (Sweden)

    Jin Huang

    2017-09-01

    Full Text Available Process planning is an important function in a manufacturing system; it specifies the manufacturing requirements and details for the shop floor to convert a part from raw material to the finished form. However, considering only the economic criterion with technological constraints is not enough in sustainable manufacturing practice; formerly, criteria reflecting low-carbon-emission awareness have seldom been taken into account in process planning optimization. In this paper, a mathematical model that considers both machining cost reduction and carbon emission reduction is established for the process planning problem. However, due to various flexibilities together with complex precedence constraints between operations, the process planning problem is a non-deterministic polynomial-time (NP) hard problem. Aiming at the distinctive feature of the multi-objective process planning optimization, we then developed a hybrid non-dominated sorting genetic algorithm (NSGA-II) to tackle this problem. A local search method that considers both the total cost criterion and the carbon emission criterion is introduced into the proposed algorithm to avoid being trapped into local optima. Moreover, the technique for order preference by similarity to an ideal solution (TOPSIS) method is also adopted to determine the best solution from the Pareto front. Experiments have been conducted using Kim’s benchmark. Computational results show that process plan schemes with low carbon emission can be captured, and, more importantly, the proposed hybrid NSGA-II algorithm can obtain a more promising optimal Pareto front than the plain NSGA-II algorithm. Meanwhile, according to the computational results of Kim’s benchmark, we find that both the total machining cost and carbon emission are roughly proportional to the number of operations, and a process plan with fewer operations may be more satisfactory. This study will draw references for the further research on green
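
    The TOPSIS step used to pick a single process plan off the Pareto front can be sketched as follows (the cost-emission pairs and equal weights are illustrative assumptions, not the authors' data):

```python
import numpy as np

def topsis_pick(front, weights=None):
    """Rank Pareto-front points by TOPSIS closeness; all criteria minimized."""
    f = np.asarray(front, dtype=float)
    w = np.ones(f.shape[1]) / f.shape[1] if weights is None else np.asarray(weights)
    v = (f / np.linalg.norm(f, axis=0)) * w       # vector-normalized, weighted matrix
    ideal = v.min(axis=0)                         # best point: lowest cost and emission
    nadir = v.max(axis=0)                         # worst point
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - nadir, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return int(np.argmax(closeness)), closeness

# hypothetical (machining cost, carbon emission) pairs on a Pareto front
front = [(120.0, 9.1), (135.0, 8.2), (150.0, 7.6), (175.0, 7.3)]
best, scores = topsis_pick(front)
print("chosen plan:", best, "closeness:", np.round(scores, 3))
```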

  19. Distributed approximation of Pareto surfaces in multicriteria radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Bokrantz, Rasmus

    2013-01-01

    We consider multicriteria radiation therapy treatment planning by navigation over the Pareto surface, implemented by interpolation between discrete treatment plans. Current state of the art for calculation of a discrete representation of the Pareto surface is to sandwich this set between inner and outer approximations that are updated one point at a time. In this paper, we generalize this sequential method to an algorithm that permits parallelization. The principle of the generalization is to apply the sequential method to an approximation of an inexpensive model of the Pareto surface. The information gathered from the model is subsequently used for the calculation of points from the exact Pareto surface, which are processed in parallel. The model is constructed according to the current inner and outer approximations, and given a shape that is difficult to approximate, in order to avoid that parts of the Pareto surface are incorrectly disregarded. Approximations of comparable quality to those generated by the sequential method are demonstrated when the degree of parallelization is up to twice the number of dimensions of the objective space. For practical applications, the number of dimensions is typically at least five, so that a speed-up of one order of magnitude is obtained. (paper)

  20. Distributed approximation of Pareto surfaces in multicriteria radiation therapy treatment planning.

    Science.gov (United States)

    Bokrantz, Rasmus

    2013-06-07

    We consider multicriteria radiation therapy treatment planning by navigation over the Pareto surface, implemented by interpolation between discrete treatment plans. Current state of the art for calculation of a discrete representation of the Pareto surface is to sandwich this set between inner and outer approximations that are updated one point at a time. In this paper, we generalize this sequential method to an algorithm that permits parallelization. The principle of the generalization is to apply the sequential method to an approximation of an inexpensive model of the Pareto surface. The information gathered from the model is subsequently used for the calculation of points from the exact Pareto surface, which are processed in parallel. The model is constructed according to the current inner and outer approximations, and given a shape that is difficult to approximate, in order to avoid that parts of the Pareto surface are incorrectly disregarded. Approximations of comparable quality to those generated by the sequential method are demonstrated when the degree of parallelization is up to twice the number of dimensions of the objective space. For practical applications, the number of dimensions is typically at least five, so that a speed-up of one order of magnitude is obtained.

  1. On Usage of Pareto curves to Select Wind Turbine Controller Tunings to the Wind Turbulence Level

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh

    2015-01-01

    Model predictive control has in recent publications shown its potential for lowering the cost of energy of modern wind turbines. Pareto curves can be used to evaluate the performance of these controllers with multiple conflicting objectives of power and fatigue loads. In this paper an approach ... to update a model predictive wind turbine controller tuning as the wind turbulence increases, as increased turbulence levels result in higher loads for the same controller tuning. In this paper the Pareto curves are computed using an industrial high-fidelity aero-elastic model. Simulations show...

  2. Design of Xen Hybrid Multiple Policy Model

    Science.gov (United States)

    Sun, Lei; Lin, Renhao; Zhu, Xianwei

    2017-10-01

    Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, XEN is used more and more frequently. XSM, the XEN security model, has also received widespread attention. A safety status classification has not been established in XSM, and it uses the virtual machine as the managed object, making Dom0 a unique administrative domain that does not satisfy the principle of least privilege. To address these issues, we design a hybrid multiple policy model named SV_HMPMD that organically integrates multiple single security policy models, including DTE, RBAC and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. In order to improve BLP's practicability, the model introduces multi-level security labels. In order to divide privileges in detail, we combine DTE with RBAC. In order to avoid oversized privilege, we limit the privilege of Dom0.

  3. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem that aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network. The optimization is from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support and it is gained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, the sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality have good performance in dealing with the bi-objective optimization. The sensitivity analysis also shows the influence of the variation of the demand and supply on the multi-modal transportation organization clearly. Therefore, this method can be further promoted to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. The Pareto frontier-based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.

  4. A cost-emission model for fuel cell/PV/battery hybrid energy system in the presence of demand response program: ε-constraint method and fuzzy satisfying approach

    International Nuclear Information System (INIS)

    Nojavan, Sayyad; Majidi, Majid; Najafi-Ghalelou, Afshin; Ghahramani, Mehrdad; Zare, Kazem

    2017-01-01

    Highlights: • Cost-emission performance of PV/battery/fuel cell hybrid energy system is studied. • Multi-objective optimization model for cost-emission performance is proposed. • ε-constraint method is proposed to produce Pareto solutions of multi-objective model. • Fuzzy satisfying approach selected the best optimal solution from Pareto solutions. • Demand response program is proposed to reduce both cost and emission. - Abstract: Optimal operation of hybrid energy systems is a big challenge in power systems. Nowadays, in addition to the optimum performance of energy systems, their pollution issue has been a hot topic among researchers. In this paper, a multi-objective model is proposed for economic and environmental operation of a battery/fuel cell/photovoltaic (PV) hybrid energy system in the presence of a demand response program (DRP). In this paper, the first objective function is minimization of the total cost of the hybrid energy system. The second objective function is minimization of total CO_2 emission, which is in conflict with the first objective function. So, a multi-objective optimization model is presented to model the hybrid system’s optimal and environmental performance problem considering DRP. The proposed multi-objective model is solved by the ε-constraint method and then a fuzzy satisfying technique is employed to select the best possible solution. Also, positive effects of DRP on the economic and environmental performance of the hybrid system are analyzed. A mixed-integer linear program is used to simulate the proposed model and the obtained results are compared with the weighted sum approach to show the effectiveness of the proposed method.
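
    A minimal sketch of the ε-constraint idea, minimizing cost while sweeping a cap on emissions, is shown below for a toy two-unit dispatch problem (the costs, emission factors and demand are invented stand-ins for the paper's PV/battery/fuel cell MILP):

```python
import numpy as np
from scipy.optimize import linprog

# Toy bi-objective dispatch: a cheap/dirty unit and an expensive/clean unit.
cost     = np.array([10.0, 30.0])   # $/MWh (assumed)
emission = np.array([ 5.0,  1.0])   # tCO2/MWh (assumed)
demand = 100.0                      # MWh to be served (assumed)

pareto = []
for eps in np.linspace(100.0, 500.0, 9):           # sweep the emission cap (epsilon)
    res = linprog(
        c=cost,                                     # minimize total cost
        A_ub=np.vstack([emission, [-1.0, -1.0]]),   # emission <= eps, x1 + x2 >= demand
        b_ub=np.array([eps, -demand]),
        bounds=[(0, None), (0, None)],
    )
    if res.success:
        pareto.append((res.fun, emission @ res.x))

for c, e in pareto:
    print(f"cost={c:7.1f}  emission={e:6.1f}")
```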

  5. Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.

    Science.gov (United States)

    Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine

    2018-01-01

    Preanalytical steps are the major sources of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but there is a need for stringent quality checks in the preanalytical area as these processes are done outside the laboratory. The sigma value depicts the performance of the laboratory and its quality measures. Hence, in the present study six sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate the clinical biochemistry laboratory performance. This observational study was carried out for a period of 1 year, from November 2015 to November 2016. A total of 1,44,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information and sample collection details in forms, and hemolysed, lipemic, inappropriate and insufficient samples; the total number of errors was calculated and converted into defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and the cumulative percentage. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9, and for other errors such as sample receiving time, stat and type of sample the sigma values were 2.9, 2.6, and 2.8 respectively. For insufficient sample and improper ratio of blood to anticoagulant the sigma value was 4.3. The Pareto chart depicts that, out of the errors in requisition forms, about 80% are contributed by roughly 20% of the causes, such as missing information like the diagnosis. The development of quality indicators and the application of six sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
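
    The conversion from error rates to sigma values and the Pareto ordering described above can be sketched as follows (the per-indicator error counts are hypothetical; only the number of requisition forms is taken from the abstract, and the usual 1.5-sigma shift is assumed):

```python
from scipy.stats import norm

def sigma_level(defects, opportunities):
    """Convert an error rate to a (long-term) sigma level with the usual 1.5 shift."""
    dpmo = 1e6 * defects / opportunities
    return norm.ppf(1 - dpmo / 1e6) + 1.5

# hypothetical preanalytical error counts per indicator (illustrative only)
errors = {
    "missing diagnosis": 40_700,
    "sample receiving time": 3_100,
    "stat marking": 2_400,
    "sample type": 1_900,
    "insufficient sample": 300,
}
forms = 54_265                       # number of screened requisition forms (from abstract)
total = sum(errors.values())

print("indicator              sigma   cum.%")
cum = 0
for name, n in sorted(errors.items(), key=lambda kv: -kv[1]):   # Pareto ordering
    cum += n
    print(f"{name:22s} {sigma_level(n, forms):5.2f} {100 * cum / total:6.1f}")
```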

  6. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.

  7. Craniomandibular form and body size variation of first generation mouse hybrids: A model for hominin hybridization.

    Science.gov (United States)

    Warren, Kerryn A; Ritzman, Terrence B; Humphreys, Robyn A; Percival, Christopher J; Hallgrímsson, Benedikt; Ackermann, Rebecca Rogers

    2018-03-01

    Hybridization occurs in a number of mammalian lineages, including among primate taxa. Analyses of ancient genomes have shown that hybridization between our lineage and other archaic hominins in Eurasia occurred numerous times in the past. However, we still have limited empirical data on what a hybrid skeleton looks like, or how to spot patterns of hybridization among fossils for which there are no genetic data. Here we use experimental mouse models to supplement previous studies of primates. We characterize size and shape variation in the cranium and mandible of three wild-derived inbred mouse strains and their first generation (F1) hybrids. The three parent taxa in our analysis represent lineages that diverged over approximately the same period as the human/Neanderthal/Denisovan lineages and their hybrids are variably successful in the wild. Comparisons of body size, as quantified by long bone measurements, are also presented to determine whether the identified phenotypic effects of hybridization are localized to the cranium or represent overall body size changes. The results indicate that hybrid cranial and mandibular sizes, as well as limb length, exceed that of the parent taxa in all cases. All three F1 hybrid crosses display similar patterns of size and form variation. These results are generally consistent with earlier studies on primates and other mammals, suggesting that the effects of hybridization may be similar across very different scenarios of hybridization, including different levels of hybrid fitness. This paper serves to supplement previous studies aimed at identifying F1 hybrids in the fossil record and to introduce further research that will explore hybrid morphologies using mice as a proxy for better understanding hybridization in the hominin fossil record. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Pareto front estimation for decision making.

    Science.gov (United States)

    Giagkiozis, Ioannis; Fleming, Peter J

    2014-01-01

    The set of available multi-objective optimisation algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited for a particular problem, it is difficult, mainly due to the computational cost, to use a population large enough to ensure the likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimisation algorithm for two-objective and three-objective problem instances.

  9. Multiclass gene selection using Pareto-fronts.

    Science.gov (United States)

    Rajapakse, Jagath C; Mundra, Piyushkumar A

    2013-01-01

    Filter methods are often used for selection of genes in multiclass sample classification by using microarray data. Such techniques usually tend to bias toward a few classes that are easily distinguishable from other classes due to imbalances of strong features and sample sizes of different classes. It could therefore lead to selection of redundant genes while missing the relevant genes, leading to poor classification of tissue samples. In this manuscript, we propose to decompose multiclass ranking statistics into class-specific statistics and then use Pareto-front analysis for selection of genes. This alleviates the bias induced by class intrinsic characteristics of dominating classes. The use of Pareto-front analysis is demonstrated on two filter criteria commonly used for gene selection: F-score and KW-score. A significant improvement in classification performance and reduction in redundancy among top-ranked genes were achieved in experiments with both synthetic and real-benchmark data sets.

  10. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations. One of these limitations is the assumption that the underlying probability distribution is lognormal, which is controversial. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options for the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, and for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
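
    For reference, the Black-Scholes benchmark that the hybrid models are compared against prices a European call as follows (the inputs are illustrative; in the paper's setup the volatility would come from one of the GARCH-type estimates):

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: C = S*N(d1) - K*exp(-rT)*N(d2)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# illustrative index-like inputs; sigma stands in for a GARCH-type volatility estimate
print(black_scholes_call(S=1400.0, K=1450.0, T=0.25, r=0.03, sigma=0.20))
```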

  11. Optimization of Wind Turbine Airfoil Using Nondominated Sorting Genetic Algorithm and Pareto Optimal Front

    Directory of Open Access Journals (Sweden)

    Ziaul Huque

    2012-01-01

    Full Text Available A Computational Fluid Dynamics (CFD) and response surface-based multiobjective design optimization were performed for six different 2D airfoil profiles, and the Pareto optimal front of each airfoil is presented. FLUENT, which is a commercial CFD simulation code, was used to determine the relevant aerodynamic loads. The Lift Coefficient (CL) and Drag Coefficient (CD) data at a range of 0° to 12° angles of attack (α) and at three different Reynolds numbers (Re = 68,459; 479,210; and 958,422) for all the six airfoils were obtained. Realizable k-ε turbulence model with a second-order upwind solution method was used in the simulations. The standard least square method was used to generate response surface by the statistical code JMP. Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to determine the Pareto optimal set based on the response surfaces. Each Pareto optimal solution represents a different compromise between design objectives. This gives the designer a choice to select a design compromise that best suits the requirements from a set of optimal solutions. The Pareto solution set is presented in the form of a Pareto optimal front.

  12. Pareto joint inversion of 2D magnetotelluric and gravity data

    Science.gov (United States)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project were described. At this stage of the development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. The Sharp Boundary Interface (SBI) approach and a description of the model with a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was properly adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a method of stochastic global optimization, it requires a lot of proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed solution of joint inversion problems. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can highly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining of the solution easier. At this stage of work, a decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. Presented results were obtained for synthetic models, imitating real geological conditions, where

  13. Pareto vs Simmel: residui ed emozioni

    Directory of Open Access Journals (Sweden)

    Silvia Fornari

    2017-08-01

    Full Text Available One hundred years after the publication of the Trattato di sociologia generale (Pareto 1988), we keep the study of Pareto alive and current with a contemporary re-reading of his thought. Remembered by economists for his great intellectual versatility, he remains the rigorous and analytical scientist whose contributions are still discussed internationally. We analyze the aspects that led him to approach the sociological perspective, with the introduction of his well-known distinction of social action: logical and non-logical. This dichotomy is used to account for social changes concerning the modes of action of men and women. As is well known, logical actions are those concerning behavior driven by logic and reasoning, in which there is a direct cause-effect relationship; such actions are the object of study of economists, and sociologists do not deal with them. Non-logical actions cover all the types of human action that fall within the scope of the social sciences, and they represent the larger part of social action. They are the actions guided by sentiments, emotionality, superstition, and so on, illustrated by Pareto in the Trattato di sociologia generale and in later essays, where he also takes up the concept of the heterogenesis of ends, first formulated by Giambattista Vico. According to this concept, human history, while retaining in potential the realization of certain ends, is not linear, and along its evolutionary path it may happen that man, in attempting to reach one goal, arrives at opposite outcomes. Pareto links the Neapolitan philosopher's definition to the typologies of social action and their distinction (logical, non-logical). For Pareto, the heterogenesis of ends is therefore the outcome of a particular type of non-logical action of the human being and of the collectivity.

  14. Monopoly, Pareto and Ramsey mark-ups

    OpenAIRE

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.

  15. Fluid and hybrid models for streamers

    Science.gov (United States)

    Bonaventura, Zdeněk

    2016-09-01

    Streamers are contracted ionizing waves with self-generated field enhancement that propagate into a low-ionized medium exposed to a high electric field, leaving filamentary trails of plasma behind. The widely used model to study streamer dynamics is based on drift-diffusion equations for electrons and ions, assuming the local field approximation, coupled with Poisson's equation. For problems where the presence of energetic electrons becomes important, the fluid approach needs to be extended by a particle model, accompanied by a Monte Carlo collision technique, that takes care of the motion of these electrons. A combined fluid-particle approach is used to study the influence of surface emission processes on a fast-pulsed dielectric barrier discharge in air at atmospheric pressure. It is found that the fluid-only model predicts substantially faster reignition dynamics compared to the coupled fluid-particle model. Furthermore, a hybrid model can be created in which the population of electrons is divided in the energy space into two distinct groups: (1) low energy 'bulk' electrons that are treated with the fluid model, and (2) high energy 'beam' electrons, followed as particles. The hybrid model is then capable not only of dealing with streamer discharges in laboratory conditions, but also allows us to study electron acceleration in the streamer zone of lightning leaders. There, the production of fast electrons from streamers is investigated, since these (runaway) electrons act as seeds for the relativistic runaway electron avalanche (RREA) mechanism, important for high-energy atmospheric physics phenomena. Results suggest that high energy electrons affect the streamer propagation, namely the velocity, the peak electric field, and thus also the production rate of runaway electrons. This work has been supported by the Czech Science Foundation research project 15-04023S.

  16. Modeling, simulation, and concept design for hybrid-electric medium-size military trucks

    Science.gov (United States)

    Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark

    2005-05-01

    A large scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criterial surface of optimality (Pareto optimal designs), and revealing the design tradeoffs as they impact vehicle performance and economy. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work so far completed demonstrates the effectiveness of the approach to design space exploration, and the results to date suggest the powertrain configuration best suited to the FMTV mission.

  17. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Full Text Available Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies of renewable energies attract more and more attention. The present paper presents different mathematical models for different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems operating in geographical and meteorological conditions specific to the central part of the Transylvania region are also presented. The conclusions based on the validation of these models are also shown.

  18. Hybrid Models of Alternative Current Filter for Hvdc

    Directory of Open Access Journals (Sweden)

    Ufa Ruslan A.

    2017-01-01

    Full Text Available Based on a hybrid simulation concept for HVDC, the developed hybrid AC filter models are presented; they provide sufficiently full and adequate modeling of the whole continuous spectrum of quasi-steady-state and transient processes in the filter. The obtained results suggest that the hybrid simulation approach provides a methodically accurate solution, with guaranteed instrumental error, of the differential equation systems of the mathematical models of HVDC.

  19. Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder

    1992-01-01

    As a generalization of the common assumption of exponentially distributed exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using both the method of moments and probability-weighted moments. Maintaining the generalized Pareto distribution as the parent exceedance distribution, the T-year event is estimated assuming the exceedances to be exponentially distributed. For moderately long-tailed exceedance distributions and small to moderate sample sizes it is found, by comparing mean square errors of the T-year event estimators, that the exponential distribution is preferable to the correct generalized Pareto distribution, despite the introduced model error and despite a possible rejection of the exponential hypothesis by a test of significance. For moderately short-tailed exceedance...
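
    As a rough illustration of the estimation problem discussed above, the generalized Pareto distribution can be fitted to threshold exceedances and compared with the simpler exponential fit when estimating a T-year event. The sketch uses scipy and synthetic exceedances, not the series analysed in the paper, and the threshold, exceedance rate and return period are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic exceedances over a threshold (arbitrary units).
threshold = 50.0
exceedances = stats.genpareto.rvs(c=0.15, scale=12.0, size=60, random_state=rng)

# Fit both candidate exceedance distributions with the location fixed at zero.
c_hat, _, scale_gp = stats.genpareto.fit(exceedances, floc=0)
_, scale_exp = stats.expon.fit(exceedances, floc=0)

# T-year event: the (1 - 1/(lambda*T)) quantile of the exceedance distribution
# added to the threshold, with lambda exceedances per year on average.
lam, T = 3.0, 100.0
p = 1.0 - 1.0 / (lam * T)
print("GPD estimate of the T-year event:", threshold + stats.genpareto.ppf(p, c_hat, scale=scale_gp))
print("Exponential estimate:            ", threshold + stats.expon.ppf(p, scale=scale_exp))
```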

  20. Solving multi-objective job shop problem using nature-based algorithms: new Pareto approximation features

    Directory of Open Access Journals (Sweden)

    Jarosław Rudy

    2015-01-01

    Full Text Available In this paper the job shop scheduling problem (JSP) with two criteria minimized simultaneously is considered. JSP is a frequently used model in real-world applications of combinatorial optimization, yet multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and a genetic algorithm (GA), for MOJSP. Both methods employ a technique taken from multi-criteria decision analysis in order to establish a ranking of solutions. ACO and GA differ in the way they keep information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by these algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by the two methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.

  1. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  2. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data on chromosome aberrations, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper in which the normal section paper, the two types of semi-log paper and the log-log paper are continuously connected. The hybrid-hybrid plotting paper accommodates nine kinds of linear relationships, conveniently called hybrid-scale models. One can systematically select the best-fit model among the nine by examining the conditions under which the data points form a straight line. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid-scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the drawback of an increased number of model parameters. We showed that the hybrid-hybrid model (both the dose and response variables on the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves for chromosome aberrations. (author)

  3. A hybrid model for electricity spot prices

    International Nuclear Information System (INIS)

    Anderson, C.L.D.

    2004-01-01

    Electricity prices were highly regulated prior to the deregulation of the electric power industry. Prices were predictable, allowing generators and wholesalers to calculate their production costs and revenues. With deregulation, electricity has become the most volatile of all commodities. Electricity must be consumed as soon as it is generated because it cannot be stored in any significant quantity. Economic uncertainty exists because the supply of electricity cannot shift as quickly as the demand, which is highly variable. When demand increases quickly, the price must respond; therefore, price spikes occur that are orders of magnitude higher than the base electricity price. This paper presents a robust and realistic model for spot market electricity prices used to manage risk in volatile markets. The model is a hybrid of a top-down, data-driven method commonly used for financial applications and a bottom-up, system-driven method commonly used in regulated electricity markets. The advantage of the model is that it incorporates the primary system drivers and demonstrates their effects on final prices. The four primary modules of the model are: (1) a model for forced outages, (2) a model for maintenance outages, (3) an electrical load model, and (4) a price model which combines the results of the previous three models. The performance of each model was tested. The forced outage model is the first of its kind to simulate the system on an aggregate basis using Weibull distributions. The overall spot price model was calibrated to, and tested with, data from the electricity market in Pennsylvania, New Jersey and Maryland. The model performed well in simulating market prices and adapted readily to changing system conditions and new electricity markets. This study examined the pricing of derivative contracts on electrical power. It also compared a range of portfolio scenarios using a Cash Flow at Risk approach.

  4. A hybrid model for electricity spot prices

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, C.L.D.

    2004-07-01

    Electricity prices were highly regulated prior to the deregulation of the electric power industry. Prices were predictable, allowing generators and wholesalers to calculate their production costs and revenues. With deregulation, electricity has become the most volatile of all commodities. Electricity must be consumed as soon as it is generated because it cannot be stored in any significant quantity. Economic uncertainty exists because the supply of electricity cannot shift as quickly as the demand, which is highly variable. When demand increases quickly, the price must respond; therefore, price spikes occur that are orders of magnitude higher than the base electricity price. This paper presents a robust and realistic model for spot market electricity prices used to manage risk in volatile markets. The model is a hybrid of a top-down, data-driven method commonly used for financial applications and a bottom-up, system-driven method commonly used in regulated electricity markets. The advantage of the model is that it incorporates the primary system drivers and demonstrates their effects on final prices. The four primary modules of the model are: (1) a model for forced outages, (2) a model for maintenance outages, (3) an electrical load model, and (4) a price model which combines the results of the previous three models. The performance of each model was tested. The forced outage model is the first of its kind to simulate the system on an aggregate basis using Weibull distributions. The overall spot price model was calibrated to, and tested with, data from the electricity market in Pennsylvania, New Jersey and Maryland. The model performed well in simulating market prices and adapted readily to changing system conditions and new electricity markets. This study examined the pricing of derivative contracts on electrical power. It also compared a range of portfolio scenarios using a Cash Flow at Risk approach.
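
    The abstract describes the forced-outage module only as an aggregate simulation based on Weibull distributions. A minimal sketch of that idea, with the fleet treated as identical capacity blocks whose up-times and repair times are Weibull distributed, might look as follows; every parameter value is invented for illustration and is not a calibrated value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_available_capacity(hours, total_mw=10_000.0, n_blocks=50,
                                fail_shape=1.2, fail_scale=800.0,
                                repair_shape=1.5, repair_scale=36.0):
    """Aggregate forced-outage sketch: alternate Weibull-distributed up-times
    and repair times for each capacity block and return the available MW per hour.
    All parameters are illustrative placeholders."""
    available = np.ones(hours)
    for _ in range(n_blocks):
        t = 0.0
        while t < hours:
            up = rng.weibull(fail_shape) * fail_scale        # time to next forced outage
            down = rng.weibull(repair_shape) * repair_scale  # repair duration
            start, stop = int(t + up), int(t + up + down)
            available[start:min(stop, hours)] -= 1.0 / n_blocks
            t += up + down
    return np.clip(available, 0.0, 1.0) * total_mw

capacity = simulate_available_capacity(hours=24 * 7)
print("Minimum available capacity over the week: %.0f MW" % capacity.min())
```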

  5. A Hybrid Teaching and Learning Model

    Science.gov (United States)

    Juhary, Jowati Binti

    This paper aims at analysing the need for a specific teaching and learning model for the National Defence University of Malaysia (NDUM). The main question is whether there are differences between teaching and learning for the academic component and the military component of the university. It is further argued that, in order to achieve excellence, there should be one teaching and learning culture. Data were collected through interviews with military cadets. It is found that there is a variety of teaching and learning strategies for academic courses, in comparison to a dominant teaching and learning style for military courses. Thus, in the interest of delivering quality education and training for students at the university, the paper argues that a hybrid model for teaching and learning may be fundamental in order to generate a single culture of academic and military excellence for the NDUM.

  6. Modelling supervisory controller for hybrid power systems

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, A; Bindner, H; Lundsager, P [Risoe National Lab., Roskilde (Denmark); Jannerup, O [Technical Univ. of Denmark, Dept. of Automation, Lyngby (Denmark)

    1999-03-01

    Supervisory controllers are important to achieve optimal operation of hybrid power systems. The performance and economics of such systems depend mainly on the control strategy for switching on/off components. The modular concept described in this paper is an attempt to design standard supervisory controllers that could be used in different applications, such as village power and telecommunication applications. This paper presents some basic aspects of modelling and design of modular supervisory controllers using the object-oriented modelling technique. The functional abstraction hierarchy technique is used to formulate the control requirements and identify the functions of the control system. The modular algorithm is generic and flexible enough to be used with any system configuration and several goals (different applications). The modularity includes accepting modification of system configuration and goals during operation with minor or no changes in the supervisory controller. (au)

  7. A New Generalization of the Pareto Distribution and Its Application to Insurance Data

    Directory of Open Access Journals (Sweden)

    Mohamed E. Ghitany

    2018-02-01

    Full Text Available The classical Pareto distribution is one of the most attractive in statistics, particularly in the setting of actuarial statistics and finance. For example, it is widely used when calculating reinsurance premiums. In recent years, many alternative distributions have been proposed to obtain better fits, especially when the tail of the empirical distribution of the data is very long. In this work, an alternative generalization of the Pareto distribution is proposed and its properties are studied. Finally, an application of the proposed model to an earthquake insurance data set is presented.
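
    As background for why the Pareto family is convenient in reinsurance pricing (the classical model this paper generalizes), the expected loss above a retention has a simple closed form for the classical Pareto with shape alpha > 1: the mean excess is u/(alpha - 1), so the pure premium of an unlimited excess-of-loss layer is (x_m/u)^alpha * u/(alpha - 1). The sketch below checks this against simulation using arbitrary, illustrative parameters.

```python
import numpy as np
from scipy import stats

alpha, x_m = 2.5, 10_000.0        # illustrative Pareto shape and scale
retention = 100_000.0             # reinsurance attachment point

# Closed-form quantities for the classical Pareto with alpha > 1.
survival = (x_m / retention) ** alpha          # P(X > retention)
mean_excess = retention / (alpha - 1.0)        # E[X - u | X > u]
pure_premium = survival * mean_excess          # E[(X - u)+]

print("P(loss exceeds retention): %.4g" % survival)
print("Mean excess over retention: %.0f" % mean_excess)
print("Pure premium of the unlimited layer: %.2f" % pure_premium)

# Rough Monte Carlo check of the closed form.
losses = stats.pareto.rvs(b=alpha, scale=x_m, size=2_000_000, random_state=1)
print("Monte Carlo estimate: %.2f" % np.maximum(losses - retention, 0.0).mean())
```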

  8. RNA-Pareto: interactive analysis of Pareto-optimal RNA sequence-structure alignments.

    Science.gov (United States)

    Schnattinger, Thomas; Schöning, Uwe; Marchfelder, Anita; Kestler, Hans A

    2013-12-01

    Incorporating secondary structure information into the alignment process improves the quality of RNA sequence alignments. Instead of using fixed weighting parameters, sequence and structure components can be treated as different objectives and optimized simultaneously. The result is not a single, but a Pareto-set of equally optimal solutions, which all represent different possible weighting parameters. We now provide the interactive graphical software tool RNA-Pareto, which allows a direct inspection of all feasible results to the pairwise RNA sequence-structure alignment problem and greatly facilitates the exploration of the optimal solution set.

  9. A Hybrid Tsunami Risk Model for Japan

    Science.gov (United States)

    Haseemkunju, A. V.; Smith, D. F.; Khater, M.; Khemici, O.; Betov, B.; Scott, J.

    2014-12-01

    Around the margins of the Pacific Ocean, denser oceanic plates slipping under continental plates cause subduction earthquakes generating large tsunami waves. The subducting Pacific and Philippine Sea plates create damaging interplate earthquakes followed by huge tsunami waves. It was a rupture of the Japan Trench subduction zone (JTSZ) and the resultant M9.0 Tohoku-Oki earthquake that caused the unprecedented tsunami along the Pacific coast of Japan on March 11, 2011. EQECAT's Japan Earthquake model is a fully probabilistic model which includes a seismo-tectonic model describing the geometries, magnitudes, and frequencies of all potential earthquake events; a ground motion model; and a tsunami model. Within the much larger set of all modeled earthquake events, fault rupture parameters for about 24000 stochastic and 25 historical tsunamigenic earthquake events are defined to simulate tsunami footprints using the numerical tsunami model COMCOT. A hybrid approach using COMCOT simulated tsunami waves is used to generate inundation footprints, including the impact of tides and flood defenses. Modeled tsunami waves of major historical events are validated against observed data. Modeled tsunami flood depths on 30 m grids together with tsunami vulnerability and financial models are then used to estimate insured loss in Japan from the 2011 tsunami. The primary direct report of damage from the 2011 tsunami is in terms of the number of buildings damaged by municipality in the tsunami affected area. Modeled loss in Japan from the 2011 tsunami is proportional to the number of buildings damaged. A 1000-year return period map of tsunami waves shows high hazard along the west coast of southern Honshu, on the Pacific coast of Shikoku, and on the east coast of Kyushu, primarily associated with major earthquake events on the Nankai Trough subduction zone (NTSZ). The highest tsunami hazard of more than 20m is seen on the Sanriku coast in northern Honshu, associated with the JTSZ.

  10. Evolutionary tradeoffs, Pareto optimality and the morphology of ammonite shells.

    Science.gov (United States)

    Tendler, Avichai; Mayo, Avraham; Alon, Uri

    2015-03-07

    Organisms that need to perform multiple tasks face a fundamental tradeoff: no design can be optimal at all tasks at once. Recent theory based on Pareto optimality showed that such tradeoffs lead to a highly defined range of phenotypes, which lie in low-dimensional polyhedra in the space of traits. The vertices of these polyhedra are called archetypes- the phenotypes that are optimal at a single task. To rigorously test this theory requires measurements of thousands of species over hundreds of millions of years of evolution. Ammonoid fossil shells provide an excellent model system for this purpose. Ammonoids have a well-defined geometry that can be parameterized using three dimensionless features of their logarithmic-spiral-shaped shells. Their evolutionary history includes repeated mass extinctions. We find that ammonoids fill out a pyramid in morphospace, suggesting five specific tasks - one for each vertex of the pyramid. After mass extinctions, surviving species evolve to refill essentially the same pyramid, suggesting that the tasks are unchanging. We infer putative tasks for each archetype, related to economy of shell material, rapid shell growth, hydrodynamics and compactness. These results support Pareto optimality theory as an approach to study evolutionary tradeoffs, and demonstrate how this approach can be used to infer the putative tasks that may shape the natural selection of phenotypes.

  11. A muscle model for hybrid muscle activation

    Directory of Open Access Journals (Sweden)

    Klauer Christian

    2015-09-01

    Full Text Available To develop model-based control strategies for Functional Electrical Stimulation (FES) in order to support weak voluntary muscle contractions, a hybrid model for describing joint motions induced by concurrent voluntary and FES-induced muscle activation is proposed. It is based on a Hammerstein model - as commonly used in feedback-controlled FES - and is applied, as an example, to describe the shoulder abduction joint angle. The main component of a Hammerstein muscle model is usually a static input nonlinearity depending on the stimulation intensity. To additionally incorporate voluntary contributions, we extended the static nonlinearity by a second input describing the intensity of the voluntary contribution, which is estimated from electromyography (EMG) measurements - even during active FES. An Artificial Neural Network (ANN) is used to describe the static input nonlinearity. The output of the ANN drives a second-order linear dynamical system that describes the combined muscle activation and joint angle dynamics. The tunable parameters are adapted to the individual subject by a system identification approach using previously recorded I/O data. The model has been validated in two healthy subjects, yielding RMS values for the joint angle error of 3.56° and 3.44°, respectively.
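
    A Hammerstein structure of the kind described, a static two-input nonlinearity feeding second-order linear activation/joint-angle dynamics, can be sketched in a few lines. Here the nonlinearity is a simple placeholder rather than the trained artificial neural network, and all constants (gain, natural frequency, damping, input weights) are invented for illustration.

```python
import numpy as np

def static_nonlinearity(stim_intensity, voluntary_intensity):
    """Placeholder for the ANN: a saturating combination of FES intensity
    and EMG-estimated voluntary intensity (illustrative only)."""
    drive = 0.8 * stim_intensity + 0.5 * voluntary_intensity
    return np.tanh(2.0 * drive)

def simulate_joint_angle(stim, voluntary, dt=0.01, wn=6.0, zeta=0.9, gain=40.0):
    """Hammerstein sketch: static nonlinearity followed by second-order linear
    activation/joint-angle dynamics, integrated with forward Euler."""
    angle, velocity = 0.0, 0.0
    trajectory = []
    for u_fes, u_vol in zip(stim, voluntary):
        activation = static_nonlinearity(u_fes, u_vol)
        accel = gain * wn**2 * activation - 2 * zeta * wn * velocity - wn**2 * angle
        velocity += dt * accel
        angle += dt * velocity
        trajectory.append(angle)
    return np.array(trajectory)

t = np.arange(0, 3, 0.01)
stim = np.where(t > 0.5, 0.6, 0.0)       # FES switched on at t = 0.5 s
voluntary = 0.2 * np.ones_like(t)        # constant voluntary effort
theta = simulate_joint_angle(stim, voluntary)
print("Steady-state joint angle (deg): %.1f" % theta[-1])
```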

  12. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. Point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more on taste heterogeneity is a major study objective.

  13. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. Point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more on taste heterogeneity is a major study objective.

  14. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  15. Exploratory Topology Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Holden Deleuran, Anders; Pauly, Mark; Tamke, Martin

    2016-01-01

    The development of novel form-active hybrid structures (FAHS) is impeded by a lack of modelling tools that allow for exploratory topology modelling of shaped assemblies. We present a flexible and real-time computational design modelling pipeline developed for the exploratory modelling of FAHS that enables designers and engineers to iteratively construct and manipulate form-active hybrid assembly topology on the fly. The pipeline implements Kangaroo2's projection-based methods for modelling hybrid structures consisting of slender beams and cable networks. A selection of design modelling sketches...

  16. Sensitivity versus accuracy in multiclass problems using memetic Pareto evolutionary neural networks.

    Science.gov (United States)

    Fernández Caballero, Juan Carlos; Martínez, Francisco José; Hervás, César; Gutiérrez, Pedro Antonio

    2010-05-01

    This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate and a high classification rate for each class. This last objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm. We consider a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (the extremes of the Pareto front). These methodologies are applied to solve 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.
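
    The two automatic selection strategies mentioned above amount to picking the extreme points of the Pareto front in each objective. A tiny sketch with a made-up front of trained classifiers, scored by overall accuracy and by the worst per-class classification rate:

```python
# Hypothetical Pareto front returned by a multiobjective optimizer: each entry
# pairs overall accuracy with the minimum per-class classification rate.
pareto_front = [
    {"name": "net_a", "accuracy": 0.91, "min_sensitivity": 0.55},
    {"name": "net_b", "accuracy": 0.88, "min_sensitivity": 0.74},
    {"name": "net_c", "accuracy": 0.84, "min_sensitivity": 0.81},
]

# The two automatic selection strategies: the extremes of the front.
best_in_accuracy = max(pareto_front, key=lambda m: m["accuracy"])
best_in_sensitivity = max(pareto_front, key=lambda m: m["min_sensitivity"])

print("Best model in accuracy:   ", best_in_accuracy["name"])
print("Best model in sensitivity:", best_in_sensitivity["name"])
```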

  17. Pareto-Optimal Estimates of California Precipitation Change

    Science.gov (United States)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.

  18. Pareto Improving Price Regulation when the Asset Market is Incomplete

    NARCIS (Netherlands)

    Herings, P.J.J.; Polemarchakis, H.M.

    1999-01-01

    When the asset market is incomplete, competitive equilibria are constrained suboptimal, which provides scope for Pareto-improving interventions. Price regulation can be such a Pareto-improving policy, even when the welfare effects of rationing are taken into account. An appealing aspect of price...

  19. Pareto optimality in infinite horizon linear quadratic differential games

    NARCIS (Netherlands)

    Reddy, P.V.; Engwerda, J.C.

    2013-01-01

    In this article we derive conditions for the existence of Pareto optimal solutions for linear quadratic infinite horizon cooperative differential games. First, we present a necessary and sufficient characterization for Pareto optimality which translates to solving a set of constrained optimal

  20. Pareto 80/20 Law: Derivation via Random Partitioning

    Science.gov (United States)

    Lipovetsky, Stan

    2009-01-01

    The Pareto 80/20 Rule, also known as the Pareto principle or law, states that a small number of causes (20%) is responsible for a large percentage (80%) of the effect. Although widely recognized as a heuristic rule, this proportion has not been theoretically based. The article considers derivation of this 80/20 rule and some other standard…
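
    A short numerical illustration of the rule (distinct from the random-partitioning derivation the article develops): if sizes follow a Pareto distribution with shape parameter alpha, the top fraction p of causes accounts for a share p^(1 - 1/alpha) of the total, and alpha of roughly 1.16 reproduces the 80/20 split. The sketch below checks the closed form against a rough simulation.

```python
import numpy as np

alpha = 1.16          # shape parameter commonly associated with the 80/20 rule
top_fraction = 0.20

# Closed form: the largest fraction p of causes holds a share p**(1 - 1/alpha)
# of the total when sizes are Pareto(alpha) distributed (alpha > 1).
share = top_fraction ** (1.0 - 1.0 / alpha)
print("Closed-form share of the top 20%%: %.1f%%" % (100 * share))

# Rough Monte Carlo check with simulated 'causes' (inverse-CDF sampling).
rng = np.random.default_rng(7)
sizes = (1.0 / rng.random(1_000_000)) ** (1.0 / alpha)
sizes.sort()
top = sizes[int((1 - top_fraction) * sizes.size):]
print("Simulated share of the top 20%%: %.1f%%" % (100 * top.sum() / sizes.sum()))
```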

  1. The exponential age distribution and the Pareto firm size distribution

    OpenAIRE

    Coad, Alex

    2008-01-01

    Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm’s age, to obtain the empirical Pareto distribution.

  2. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d{sup 3}f and r{sup 3}t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, for modelling salt- and heat transport as well as a free groundwater surface. Development of the basic framework of d{sup 3}f and r{sup 3}t had begun more than 20 years ago. Since that time significant advancements took place in the requirements for safety assessment as well as for computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures which requires basically the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d{sup 3}f and r{sup 3}t were applications of the software platform UG /BAS 94/ whose development had begun in the early nineteennineties. However, UG had recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d{sup 3}f and r{sup 3}t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d{sup 3}f and r{sup 3}t were combined to one conjoint code d{sup 3}f++. A direct estimation of uncertainties for complex groundwater flow models with the

  3. Model predictive control of hybrid systems : stability and robustness

    NARCIS (Netherlands)

    Lazar, M.

    2006-01-01

    This thesis considers the stabilization and the robust stabilization of certain classes of hybrid systems using model predictive control. Hybrid systems represent a broad class of dynamical systems in which discrete behavior (usually described by a finite state machine) and continuous behavior

  4. Transient Model of Hybrid Concentrated Photovoltaic with Thermoelectric Generator

    DEFF Research Database (Denmark)

    Mahmoudi Nezhad, Sajjad; Qing, Shaowei; Rezaniakolaei, Alireza

    2017-01-01

    Transient performance of a concentrated photovoltaic-thermoelectric (CPV-TEG) hybrid system is modeled and investigated. A heat sink with water as the working fluid has been implemented as the cold reservoir of the hybrid system to harvest the heat loss from the CPV cell and to increase the efficiency...

  5. Origin of Pareto-like spatial distributions in ecosystems.

    Science.gov (United States)

    Manor, Alon; Shnerb, Nadav M

    2008-12-31

    Recent studies of cluster distribution in various ecosystems revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that this patch statistics is a manifestation of the law of proportionate effect. Mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (such as desertification) manifest themselves in a drastic change of the stability properties of spatial colonies.

  6. Searching for the Pareto frontier in multi-objective protein design.

    Science.gov (United States)

    Nanda, Vikas; Belure, Sandeep V; Shir, Ofer M

    2017-08-01

    The goal of protein engineering and design is to identify sequences that adopt three-dimensional structures of desired function. Often, this is treated as a single-objective optimization problem, identifying the sequence-structure solution with the lowest computed free energy of folding. However, many design problems are multi-state, multi-specificity, or otherwise require concurrent optimization of multiple objectives. There may be tradeoffs among objectives, where improving one feature requires compromising another. The challenge lies in determining solutions that are part of the Pareto optimal set: designs for which no further improvement can be achieved in any of the objectives without degrading one of the others. Pareto optimality problems are found in all areas of study, from economics to engineering to biology, and computational methods have been developed specifically to identify the Pareto frontier. We review progress in multi-objective protein design, the development of Pareto optimization methods, and present a specific case study using multi-objective optimization methods to model the tradeoff between three parameters, stability, specificity, and complexity, of a set of interacting synthetic collagen peptides.

  7. Design, analysis and modeling of a novel hybrid powertrain system based on hybridized automated manual transmission

    Science.gov (United States)

    Wu, Guang; Dong, Zuomin

    2017-09-01

    Hybrid electric vehicles are widely accepted as a promising short- to mid-term technical solution due to noticeably improved efficiency and lower emissions at competitive costs. In recent years, various hybrid powertrain systems have been proposed and implemented based on different types of conventional transmission. Power-split systems, including the Toyota Hybrid System and the Ford Hybrid System, are well-known examples. However, their relatively low torque capacity, and the drive for alternative and more advanced designs, encouraged other innovative hybrid system designs. In this work, a new type of hybrid powertrain system based on a hybridized automated manual transmission (HAMT) is proposed. By using the concept of a torque gap filler (TGF), this new hybrid powertrain type has the potential to overcome the issue of the torque gap during gearshifts. The HAMT design (patent pending) is described in detail, from the gear layout and design of gear ratios (EV mode and HEV mode) to the torque paths at different gears. As an analytical tool, a multi-body model of a vehicle equipped with this HAMT was built to analyze powertrain dynamics in various steady-state and transient modes. A gearshift was decomposed and analyzed based on basic modes. Furthermore, a Simulink-SimDriveline hybrid vehicle model was built for the new transmission, driveline and vehicle modules. A control strategy has also been built to harmonically coordinate the different powertrain components to realize the TGF function. A vehicle launch simulation test has been completed at 30% accelerator pedal position to reveal details during the gearshift. Simulation results showed that this HAMT can eliminate most of the torque gap that has been a persistent issue with traditional AMTs, improving both drivability and performance. This work demonstrated a new type of transmission that features high torque capacity, high efficiency and improved drivability.

  8. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.

  9. Bond graph model-based fault diagnosis of hybrid systems

    CERN Document Server

    Borutzky, Wolfgang

    2015-01-01

    This book presents a bond graph model-based approach to fault diagnosis in mechatronic systems appropriately represented by a hybrid model. The book begins by giving a survey of the fundamentals of fault diagnosis and failure prognosis, then recalls state-of-art developments referring to latest publications, and goes on to discuss various bond graph representations of hybrid system models, equations formulation for switched systems, and simulation of their dynamic behavior. The structured text: • focuses on bond graph model-based fault detection and isolation in hybrid systems; • addresses isolation of multiple parametric faults in hybrid systems; • considers system mode identification; • provides a number of elaborated case studies that consider fault scenarios for switched power electronic systems commonly used in a variety of applications; and • indicates that bond graph modelling can also be used for failure prognosis. In order to facilitate the understanding of fault diagnosis and the presented...

  10. Hybrid model for simulation of plasma jet injection in tokamak

    Science.gov (United States)

    Galkin, Sergei A.; Bogatu, I. N.

    2016-10-01

    A hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is essentially applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e. it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be disregarded. That is why hybrid codes consistently outperform traditional PIC codes in computational efficiency, while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exactly energy-conserving numerical algorithm and present some results of its application to the simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE DE-SC0015776 Grant.

  11. Optimal transmitter power of an intersatellite optical communication system with reciprocal Pareto fading.

    Science.gov (United States)

    Liu, Xian

    2010-02-10

    This paper shows that optical signal transmission over intersatellite links with swaying transmitters can be described as an equivalent fading model. In this model, the instantaneous signal-to-noise ratio is stochastic and follows the reciprocal Pareto distribution. With this model, we show that the transmitter power can be minimized, subject to a specified outage probability, by appropriately adjusting some system parameters, such as the transmitter gain.
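
    The power-versus-outage tradeoff described above can be sketched under one common reading of "reciprocal Pareto": the normalized SNR is 1/X with X Pareto distributed, so P(SNR_norm < y) = (x_m * y)^alpha on its support. The paper's exact parameterization may differ, and the shape, scale, SNR threshold and outage target below are all illustrative assumptions.

```python
def required_power(gamma_threshold, outage_target, alpha, x_m):
    """Smallest transmit power P such that P(P * SNR_norm < gamma_threshold)
    does not exceed outage_target, under the reciprocal-Pareto assumption."""
    return x_m * gamma_threshold * outage_target ** (-1.0 / alpha)

def outage_probability(power, gamma_threshold, alpha, x_m):
    # CDF of the normalized SNR evaluated at the scaled threshold.
    y = gamma_threshold / power
    return min(1.0, (x_m * y) ** alpha)

alpha, x_m = 3.0, 1.0          # illustrative fading parameters
gamma_th = 10.0                # required SNR (linear scale)
target = 1e-3                  # allowed outage probability

p_min = required_power(gamma_th, target, alpha, x_m)
print("Required transmit power (arbitrary units): %.1f" % p_min)
print("Check, outage at that power: %.2e" % outage_probability(p_min, gamma_th, alpha, x_m))
```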

  12. A Structural Model Decomposition Framework for Hybrid Systems Diagnosis

    Science.gov (United States)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2015-01-01

    Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.

  13. Model for optimum design of standalone hybrid renewable energy ...

    African Journals Online (AJOL)

    An optimization model for the design of a hybrid renewable energy microgrid ... and increasing the rated power of the wind energy conversion system (WECS) or solar ... a 70% reduction in gas emissions and an 80% reduction in energy costs.

  14. Hybrid Modelling of Individual Movement and Collective Behaviour

    KAUST Repository

    Franz, Benjamin

    2013-01-01

    Mathematical models of dispersal in biological systems are often written in terms of partial differential equations (PDEs) which describe the time evolution of population-level variables (concentrations, densities). A more detailed modelling approach is given by individual-based (agent-based) models which describe the behaviour of each organism. In recent years, an intermediate modelling methodology - hybrid modelling - has been applied to a number of biological systems. These hybrid models couple an individual-based description of cells/animals with a PDE model of their environment. In this chapter, we review hybrid models in the literature with a focus on the mathematical challenges of this modelling approach. The detailed analysis is presented using the example of chemotaxis, where cells move according to extracellular chemicals that can be altered by the cells themselves. In this case, individual-based models of cells are coupled with PDEs for extracellular chemical signals. Travelling waves in these hybrid models are investigated. In particular, we show that, in contrast to the PDEs, hybrid chemotaxis models only develop a transient travelling wave. © 2013 Springer-Verlag Berlin Heidelberg.

  15. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  16. Projections onto the Pareto surface in multicriteria radiation therapy optimization.

    Science.gov (United States)

    Bokrantz, Rasmus; Miettinen, Kaisa

    2015-10-01

    To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose-volume histogram constraints are used to prevent that the projection causes a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.

  17. Projections onto the Pareto surface in multicriteria radiation therapy optimization

    International Nuclear Information System (INIS)

    Bokrantz, Rasmus; Miettinen, Kaisa

    2015-01-01

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose–volume histogram constraints are used to prevent that the projection causes a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose–volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan

  18. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p < ...), and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.

  19. A hybrid Scatter/Transform cloaking model

    Directory of Open Access Journals (Sweden)

    Gad Licht

    2015-01-01

    Full Text Available A new Scatter/Transform cloak is developed that combines the light-bending refraction characteristic of a Transform cloak with the scatter-cancellation characteristic of a Scatter cloak. The hybrid cloak combines Transform's variable index of refraction with modified linear intrusions to maximize the Scatter cloak effect. Scatter/Transform improved the scattering cross-section of cloaking in a 2-dimensional space to 51.7%, compared to only 39.6% or 45.1% with Scatter or Transform alone, respectively. Metamaterials developed with characteristics based on the new ST hybrid cloak will exhibit superior cloaking capabilities.

  20. Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.

    Science.gov (United States)

    Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric

    2010-07-20

    Accurate protein loop structure models are important for understanding the functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach that integrates multiple knowledge- or physics-based scoring functions. The procedure for identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length, using a functional space composed of several carefully selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation and identifying a near-native model (RMSD ... By combining Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.

  1. Superconductivity in the periodic Anderson model with anisotropic hybridization

    International Nuclear Information System (INIS)

    Sarasua, L.G.; Continentino, Mucio A.

    2003-01-01

    In this work we study superconductivity in the periodic Anderson model with both on-site and intersite hybridization, including the interband Coulomb repulsion. We show that the presence of the intersite hybridization together with the on-site hybridization significantly affects the superconducting properties of the system. The symmetry of the hybridization has a strong influence on the symmetry of the superconducting order parameter of the ground state. The interband Coulomb repulsion may increase or decrease the superconducting critical temperature at small values of this interaction, while it is detrimental to superconductivity at large values. We show that the present model can give rise to positive or negative values of dTc/dP, depending on the values of the system parameters.

  2. Modelling and Verifying Communication Failure of Hybrid Systems in HCSP

    DEFF Research Database (Denmark)

    Wang, Shuling; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    Hybrid systems are dynamic systems with interacting discrete computation and continuous physical processes. They have become ubiquitous in our daily life, e.g. automotive, aerospace and medical systems, and, in particular, many of them are safety-critical. For a safety-critical hybrid system, in the presence of communication failure, the expected control from the controller will be lost and as a consequence the physical process cannot behave as expected. In this paper, we mainly consider the communication failure caused by the non-engagement of one party in a communication action, i.e. the communication itself fails to occur. To address this issue, this paper proposes a formal framework by extending HCSP, a formal modeling language for hybrid systems, for modeling and verifying hybrid systems in the absence of received messages due to communication failure. We present two inference systems...

  3. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaption, and fold-change detection, respectively.

  4. A Pareto Optimal Auction Mechanism for Carbon Emission Rights

    Directory of Open Access Journals (Sweden)

    Mingxi Wang

    2014-01-01

    Full Text Available The carbon emission rights do not fit well into the framework of existing multi-item auction mechanisms because of their own unique features. This paper proposes a new auction mechanism which converges to a unique Pareto optimal equilibrium in a finite number of periods. In the proposed auction mechanism, the assignment outcome is Pareto efficient and the carbon emission rights’ resources are efficiently used. For commercial application and theoretical completeness, both discrete and continuous markets—represented by discrete and continuous bid prices, respectively—are examined, and the results show the existence of a Pareto optimal equilibrium under the constraint of individual rationality. With no ties, the Pareto optimal equilibrium can be further proven to be unique.

  5. Approximating convex Pareto surfaces in multiobjective radiotherapy planning

    International Nuclear Information System (INIS)

    Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.

    2006-01-01

    Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-by-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results for a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing.
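
    A common way to generate well distributed points on a convex Pareto surface is to sweep the weights of a weighted-sum scalarization and solve one single-objective problem per weight. The sketch below illustrates that generic idea on a toy one-variable, two-objective problem; the objectives f1/f2 and all numbers are illustrative assumptions, not the treatment-planning formulation of the paper.

        # Sketch (not the authors' algorithm): sample a convex Pareto surface of a toy
        # two-objective problem by sweeping weights in a weighted-sum scalarization.
        import numpy as np
        from scipy.optimize import minimize_scalar

        def f1(x):  # e.g. under-dosing of the target (want x large)
            return (x - 1.0) ** 2

        def f2(x):  # e.g. dose to an organ at risk (want x small)
            return x ** 2

        pareto_points = []
        for w in np.linspace(0.01, 0.99, 25):      # weight on objective 1
            res = minimize_scalar(lambda x: w * f1(x) + (1 - w) * f2(x),
                                  bounds=(0.0, 1.0), method="bounded")
            pareto_points.append((f1(res.x), f2(res.x)))

        for p in pareto_points[::6]:
            print("f1 = %.3f, f2 = %.3f" % p)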

  6. Fluid Survival Tool: A Model Checker for Hybrid Petri Nets

    NARCIS (Netherlands)

    Postema, Björn Frits; Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Ghasemieh, Hamed

    2014-01-01

    Recently, algorithms for model checking Stochastic Time Logic (STL) on Hybrid Petri nets with a single general one-shot transition (HPNG) have been introduced. This paper presents a tool for model checking HPNG models against STL formulas. A graphical user interface (GUI) not only helps to

  7. Nuclear Hybrid Energy System Model Stability Testing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-04-01

    A Nuclear Hybrid Energy System (NHES) uses a nuclear reactor as the basic power generation unit, and the power generated is used by multiple customers as combinations of thermal power or electrical power. The definition and architecture of a particular NHES can be adapted based on the needs and opportunities of different localities and markets. For example, locations in need of potable water may be best served by coupling a desalination plant to the NHES. Similarly, a location near oil refineries may have a need for emission-free hydrogen production. Using the flexible, multi-domain capabilities of Modelica, Argonne National Laboratory, Idaho National Laboratory, and Oak Ridge National Laboratory are investigating the dynamics (e.g., thermal hydraulics and electrical generation/consumption) and cost of a hybrid system. This paper examines the NHES work underway, emphasizing the control system developed for individual subsystems and the overall supervisory control system.

  8. Hybrid Modeling and Optimization of Yogurt Starter Culture Continuous Fermentation

    Directory of Open Access Journals (Sweden)

    Silviya Popova

    2009-10-01

    The present paper presents a hybrid model of yogurt starter mixed-culture fermentation. The main nonlinearities within a classical structure of the continuous process model are replaced by neural networks. The new hybrid model accounts for the dependence of the two microorganisms' kinetics on the on-line measured characteristics of the culture medium (pH). The model was then used to calculate the optimal time profile of pH. The obtained results are in agreement with the experimental ones.

  9. Phase transitions in Pareto optimal complex networks.

    Science.gov (United States)

    Seoane, Luís F; Solé, Ricard

    2015-09-01

    The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.

  10. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  11. Pareto-optimal phylogenetic tree reconciliation.

    Science.gov (United States)

    Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S; Kellis, Manolis

    2014-06-15

    Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomic and cophylogenetic studies. Our Python tools are freely available at www.cs.hmc.edu/~hadas/xscape. © The Author 2014. Published by Oxford University Press.
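
    At the core of such analyses is a Pareto (nondominated) filter over candidate reconciliations summarized by their event counts. The snippet below is a minimal generic version of that filtering step; the candidate (duplication, transfer, loss) count vectors are hypothetical, and the real xscape tools compute Pareto-optimal sets far more efficiently.

        # Generic Pareto-filtering step over hypothetical event-count vectors
        # (duplications, transfers, losses); smaller counts are better.
        def dominates(a, b):
            """a dominates b if it is no worse in every count and better in at least one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_filter(candidates):
            return [c for c in candidates
                    if not any(dominates(other, c) for other in candidates if other != c)]

        reconciliations = [(2, 1, 3), (1, 2, 4), (3, 0, 5), (3, 2, 4), (4, 1, 1)]
        print(pareto_filter(reconciliations))   # (3, 2, 4) is dominated by (2, 1, 3)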

  12. Mechanisms Underlying Mammalian Hybrid Sterility in Two Feline Interspecies Models.

    Science.gov (United States)

    Davis, Brian W; Seabury, Christopher M; Brashear, Wesley A; Li, Gang; Roelke-Parker, Melody; Murphy, William J

    2015-10-01

    The phenomenon of male sterility in interspecies hybrids has been observed for over a century; however, few genes influencing this recurrent phenotype have been identified. Genetic investigations have been primarily limited to a small number of model organisms, thus limiting our understanding of the underlying molecular basis of this well-documented "rule of speciation." We utilized two interspecies hybrid cat breeds in a genome-wide association study employing the Illumina 63K single-nucleotide polymorphism array. Collectively, we identified eight autosomal genes/gene regions underlying associations with hybrid male sterility (HMS) involved in the function of the blood-testis barrier, gamete structural development, and transcriptional regulation. We also identified several candidate hybrid sterility regions on the X chromosome, with most residing in close proximity to complex duplicated regions. Differential gene expression analyses revealed significant chromosome-wide upregulation of X chromosome transcripts in testes of sterile hybrids, which were enriched for genes involved in chromatin regulation of gene expression. Our expression results parallel those reported in Mus hybrids, supporting the "Large X-Effect" in mammalian HMS and the potential epigenetic basis for this phenomenon. These results support the value of the interspecies feline model as a powerful tool for comparison to rodent models of HMS, demonstrating unique aspects and potential commonalities that underpin mammalian reproductive isolation. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.

  14. Some hybrid models applicable to dose-response relationships

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1992-01-01

    A new type of model of dose-response relationships has been studied as an initial step toward exploring reliable extrapolation of relationships determined from high-dose data into the low-dose range covered by radiation protection. The approach is to use a 'hybrid scale' of linear and logarithmic scales; the first model assumes that the normalized surviving fraction (ρ_S > 0) in the hybrid scale decreases linearly with dose in the linear scale, and the second that the induction in a log scale increases linearly with the normalized dose (τ_D > 0) in the hybrid scale. The hybrid scale may reflect the overall effectiveness of a complex system against adverse events caused by various agents. Data on leukemia in the atomic bomb survivors and from rodent experiments were used to show the applicability of hybrid scale models. The results showed that the proposed models fit these data at least as well as the popular linear-quadratic models, providing a possible interpretation of the shapes of dose-response curves, e.g. shouldered survival curves varied by recovery time. (author)

  15. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.

  16. The Forbes 400, the Pareto power-law and efficient markets

    Science.gov (United States)

    Klass, O. S.; Biham, O.; Levy, M.; Malcai, O.; Solomon, S.

    2007-01-01

    Statistical regularities at the top end of the wealth distribution in the United States are examined using the Forbes 400 lists of richest Americans, published between 1988 and 2003. It is found that the wealths are distributed according to a power-law (Pareto) distribution. This result is explained using a simple stochastic model of multiple investors that incorporates the efficient market hypothesis as well as the multiplicative nature of financial market fluctuations.
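
    A minimal simulation of the mechanism described (multiplicative random growth for many investors, kept above a lower bound relative to the average) reproduces a Pareto-like upper tail; all parameters below are illustrative assumptions rather than the paper's calibration.

        # Toy multiplicative-growth model with a lower reflecting bound, a standard
        # mechanism that generates a Pareto upper tail in the wealth distribution.
        import numpy as np

        rng = np.random.default_rng(0)
        n, steps = 10_000, 2_000
        w = np.ones(n)                         # initial wealth
        floor = 0.2                            # lower bound relative to the mean

        for _ in range(steps):
            w *= rng.lognormal(mean=0.0, sigma=0.1, size=n)   # multiplicative shocks
            w = np.maximum(w, floor * w.mean())               # keep agents above the floor

        # Rank-size (Zipf) check: for a Pareto tail, log(rank) vs log(wealth) is roughly linear
        top = np.sort(w)[::-1][:n // 100]
        ranks = np.arange(1, top.size + 1)
        slope = np.polyfit(np.log(top), np.log(ranks), 1)[0]
        print("Zipf-plot slope (approximately -Pareto exponent):", round(slope, 2))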

  17. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    International Nuclear Information System (INIS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-01-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative, adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization of the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s/m and 0–238 N s/m through the viscous and electromagnetic components, respectively. (paper)

  18. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    Science.gov (United States)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.

  19. Hybrid programming model for implicit PDE simulations on multicore architectures

    KAUST Repository

    Kaushik, Dinesh; Keyes, David E.; Balay, Satish; Smith, Barry F.

    2011-01-01

    The complexity of programming modern multicore processor based clusters is rapidly rising, with GPUs adding further demand for fine-grained parallelism. This paper analyzes the performance of the hybrid (MPI+OpenMP) programming model in the context of an implicit unstructured mesh CFD code. At the implementation level, the effects of cache locality, update management, work division, and synchronization frequency are studied. The hybrid model presents interesting algorithmic opportunities as well: the convergence of the linear system solver is quicker than in the pure MPI case since the parallel preconditioner stays stronger when the hybrid model is used. This implies significant savings in the cost of communication and synchronization (explicit and implicit). Even though OpenMP based parallelism is easier to implement (within a subdomain assigned to one MPI process for simplicity), getting good performance needs attention to data partitioning issues similar to those in the message-passing case. © 2011 Springer-Verlag.

  20. Applying Pareto multi-criteria decision making in concurrent engineering: A case study of polyethylene industry

    Directory of Open Access Journals (Sweden)

    Akbar A. Tabriz

    2011-07-01

    Concurrent engineering (CE) is one of the most widely known techniques for simultaneous planning of product and process design. In concurrent engineering, design processes are often complicated by multiple conflicting criteria and discrete sets of feasible alternatives. Thus multi-criteria decision making (MCDM) techniques are integrated into CE to perform concurrent design. This paper proposes a design framework governed by MCDM techniques, in which criteria conflict in the sense of competing for common resources to achieve different performance objectives (financial, functional, environmental, etc.). The Pareto MCDM model is applied to the concurrent design of polyethylene pipe, governed by four criteria, to determine the best Pareto-compromise design.

  1. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    Science.gov (United States)

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. ?? 1988 International Association for Mathematical Geology.
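
    As a rough illustration of the distribution's analytic convenience, the sketch below implements one plausible parameterization of a shifted, right-truncated Pareto (threshold x0, shift s, tail exponent a, truncation point T) with a closed-form CDF and inverse-CDF sampling; the exact parameterization used by Houghton may differ.

        # Assumed parameterization (not necessarily Houghton's exact form):
        # survival of a shifted Pareto, truncated on the right at T.
        import numpy as np

        def tsp_cdf(x, x0, s, a, T):
            num = 1.0 - ((x0 + s) / (x + s)) ** a
            den = 1.0 - ((x0 + s) / (T + s)) ** a
            return num / den

        def tsp_sample(n, x0, s, a, T, rng=None):
            rng = rng or np.random.default_rng()
            u = rng.uniform(size=n)
            den = 1.0 - ((x0 + s) / (T + s)) ** a
            return (x0 + s) * (1.0 - u * den) ** (-1.0 / a) - s   # inverse CDF

        fields = tsp_sample(5, x0=1.0, s=0.5, a=0.8, T=500.0, rng=np.random.default_rng(1))
        print(np.round(fields, 2))
        print("P(size <= 10) =", round(tsp_cdf(10.0, 1.0, 0.5, 0.8, 500.0), 3))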

  2. Nuclear Hybrid Energy Systems FY16 Modeling Efforts at ORNL

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Guler Yigitoglu, Askin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-01

    A nuclear hybrid system uses a nuclear reactor as the basic power generation unit. The power generated by the nuclear reactor is utilized by one or more power customers as either thermal power, electrical power, or both. In general, a nuclear hybrid system will couple the nuclear reactor to at least one thermal power user in addition to the power conversion system. The definition and architecture of a particular nuclear hybrid system is flexible depending on local market needs and opportunities. For example, locations in need of potable water may be best served by coupling a desalination plant to the nuclear system. Similarly, an area near oil refineries may have a need for emission-free hydrogen production. A nuclear hybrid system expands the nuclear power plant from its more familiar central power station role by diversifying its immediately and directly connected customer base. The definition, design, analysis, and optimization work currently performed with respect to the nuclear hybrid systems represents the work of three national laboratories. Idaho National Laboratory (INL) is the lead lab working with Argonne National Laboratory (ANL) and Oak Ridge National Laboratory. Each laboratory is providing modeling and simulation expertise for the integration of the hybrid system.

  3. Static stiffness modeling of a novel hybrid redundant robot machine

    International Nuclear Information System (INIS)

    Li Ming; Wu Huapeng; Handroos, Heikki

    2011-01-01

    This paper presents a modeling method to study the stiffness of a hybrid serial-parallel robot IWR (Intersector Welding Robot) for the assembly of ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking account of the deformations of both hydraulic limbs and joints; the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical and the deformation results of the robot workspace under certain external load are presented.

  4. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    Science.gov (United States)

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
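
    The model-comparison step (maximum-likelihood fits ranked by the Akaike Information Criterion) can be illustrated with scipy's built-in candidate distributions; the DPLN itself is not available in scipy and would require a custom likelihood, and the data below are synthetic rather than the knockout measurements.

        # Fit several candidate distributions by maximum likelihood and rank them by AIC.
        # Synthetic data stand in for the phenotypic-variation measurements.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=0.0, sigma=0.6, size=500)

        candidates = [
            ("lognormal",   stats.lognorm, {}),
            ("normal",      stats.norm,    {}),
            ("exponential", stats.expon,   {"floc": 0.0}),
            ("pareto",      stats.pareto,  {"floc": 0.0}),
        ]

        for name, dist, fixed in candidates:
            params = dist.fit(data, **fixed)
            k = len(params) - len(fixed)                  # free parameters only
            loglik = np.sum(dist.logpdf(data, *params))
            aic = 2 * k - 2 * loglik
            print(f"{name:12s} AIC = {aic:8.1f}")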

  5. Modeling of hybrid vehicle fuel economy and fuel engine efficiency

    Science.gov (United States)

    Wu, Wei

    "Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.

  6. Hybrid attacks on model-based social recommender systems

    Science.gov (United States)

    Yu, Junliang; Gao, Min; Rong, Wenge; Li, Wentao; Xiong, Qingyu; Wen, Junhao

    2017-10-01

    With the growing popularity of online social platforms, social network based approaches to recommendation have emerged. However, because of the open nature of rating systems and social networks, social recommender systems are susceptible to malicious attacks. In this paper, we present a novel attack, which inherits characteristics of the rating attack and the relation attack, and term it the hybrid attack. Further, we explore the impact of the hybrid attack on model-based social recommender systems in multiple aspects. The experimental results show that the hybrid attack is more destructive than the rating attack in most cases. In addition, users and items with fewer ratings will be influenced more when attacked. Last but not least, the findings suggest that spammers do not depend on feedback links from normal users to become more powerful; unilateral links can make the hybrid attack effective enough. Since unilateral links are much cheaper, the hybrid attack will be a great threat to model-based social recommender systems.

  7. Hybrid photovoltaic–thermal solar collectors dynamic modeling

    International Nuclear Information System (INIS)

    Amrizal, N.; Chemisana, D.; Rosell, J.I.

    2013-01-01

    Highlights: A hybrid photovoltaic/thermal dynamic model is presented. The model, once calibrated, can predict the power output for any set of climate data. The physical electrical model includes explicit thermal and irradiance dependences. The results agree with those obtained through steady-state characterization. The model approaches the junction cell temperature through the system energy balance. -- Abstract: A hybrid photovoltaic/thermal transient model has been developed and validated experimentally. The methodology extends the quasi-dynamic thermal model stated in EN 12975 in order to include the electrical performance and consider the dynamic behavior, minimizing constraints when characterizing the collector. A backward moving average filtering procedure has been applied to improve the model response under variable working conditions. Concerning the electrical part, the model includes the thermal and radiation dependences in its variables. The results revealed that the characteristic parameters included in the model agree reasonably well with the experimental values obtained from the standard steady-state and I–V characteristic curve measurements. After a calibration process, the model is a suitable tool to predict the thermal and electrical performance of a hybrid solar collector for a specific weather data set.

  8. Classification as clustering: a Pareto cooperative-competitive GP approach.

    Science.gov (United States)

    McIntyre, Andrew R; Heywood, Malcolm I

    2011-01-01

    Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.

  9. A New Model for Baryogenesis through Hybrid Inflation

    International Nuclear Information System (INIS)

    Delepine, D.; Prieto, C. Martinez; Lopez, L. A. Urena

    2009-01-01

    We propose a hybrid inflation model with a complex waterfall field which contains an interaction term that breaks the U(1) global symmetry associated to the waterfall field charge. The asymmetric evolution of the real and imaginary parts of the complex field during the phase transition at the end of inflation translates into a charge asymmetry.

  10. Model Predictive Control of the Hybrid Ventilation for Livestock

    DEFF Research Database (Denmark)

    Wu, Zhuang; Stoustrup, Jakob; Trangbæk, Klaus

    2006-01-01

    In this paper, design and simulation results of Model Predictive Control (MPC) strategy for livestock hybrid ventilation systems and associated indoor climate through variable valve openings and exhaust fans are presented. The design is based on thermal comfort parameters for poultry in barns...
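
    The receding-horizon idea behind such a controller can be sketched on a toy single-zone thermal model: at each step an input sequence is optimized over a short horizon and only the first move is applied. The dynamics, weights, and bounds below are illustrative assumptions, not the barn climate model from the paper.

        # Minimal receding-horizon (MPC) sketch for a toy single-zone thermal model.
        import numpy as np
        from scipy.optimize import minimize

        a, b, c = 0.9, -2.0, 0.1          # discrete-time model: T+ = a*T + b*u + c*T_out
        T_out, T_set, N = 30.0, 21.0, 10  # outdoor temp, setpoint, horizon length

        def cost(u, T0):
            T, J = T0, 0.0
            for uk in u:
                T = a * T + b * uk + c * T_out
                J += (T - T_set) ** 2 + 0.1 * uk ** 2     # tracking error + actuation effort
            return J

        T = 26.0
        for step in range(5):
            res = minimize(cost, x0=np.zeros(N), args=(T,), bounds=[(0.0, 1.0)] * N)
            u0 = res.x[0]                                 # apply only the first input
            T = a * T + b * u0 + c * T_out
            print(f"step {step}: u = {u0:.2f}, T = {T:.2f} degC")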

  11. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  12. Evaluation of models generated via hybrid evolutionary algorithms ...

    African Journals Online (AJOL)

    2016-04-02

    Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa. ... discovered and optimised using a large-scale parallel computational device and relevant soft-

  13. New Models of Hybrid Leadership in Global Higher Education

    Science.gov (United States)

    Tonini, Donna C.; Burbules, Nicholas C.; Gunsalus, C. K.

    2016-01-01

    This manuscript highlights the development of a leadership preparation program known as the Nanyang Technological University Leadership Academy (NTULA), exploring the leadership challenges unique to a university undergoing rapid growth in a highly multicultural context, and the hybrid model of leadership it developed in response to globalization.…

  14. Modeling of Hybrid Growth Wastewater Bio-reactor

    International Nuclear Information System (INIS)

    EI Nashaei, S.; Garhyan, P.; Prasad, P.; Abdel Halim, H.S.; Ibrahim, G.

    2004-01-01

    The attached/suspended growth mixed reactors are considered one of the recently tried approaches to improve the performance of biological treatment by increasing the volume of the accumulated biomass, in terms of attached growth as well as suspended growth. Moreover, domestic wastewater can easily be mixed with a high-strength non-hazardous industrial wastewater and treated together in these bio-reactors if the need arises. Modeling of the hybrid growth wastewater reactor addresses the need to understand the rationale of such a system in order to achieve better design and operation parameters. This paper aims at developing a heterogeneous mathematical model for the hybrid growth system, considering the effects of diffusion, external mass transfer, and power input to the system in a rational manner. The model will be based on distinguishing between the liquid phase and the solid phase (bio-film and bio-floc). This model would be a step toward fine-tuning the design of hybrid systems based on experimental data from a pilot plant to be implemented in the near future.

  15. Hybrid time/frequency domain modeling of nonlinear components

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...

  16. Efficient Proof Engines for Bounded Model Checking of Hybrid Systems

    DEFF Research Database (Denmark)

    Fränzle, Martin; Herde, Christian

    2005-01-01

    In this paper we present HySat, a new bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits all...

  17. Travelling Waves in Hybrid Chemotaxis Models

    KAUST Repository

    Franz, Benjamin; Xue, Chuan; Painter, Kevin J.; Erban, Radek

    2013-01-01

    Bacteria are modelled using an agent-based (individual-based) approach with internal dynamics describing signal transduction. In addition to the chemotactic behaviour of the bacteria, the individual-based model also includes cell proliferation and death

  18. Hybrid modelling of soil-structure interaction for embedded structures

    International Nuclear Information System (INIS)

    Gupta, S.; Penzien, J.

    1981-01-01

    The basic methods currently being used for the analysis of soil-structure interaction fail to properly model three-dimensional embedded structures with flexible foundations. A hybrid model for the analysis of soil-structure interaction is developed in this investigation which takes advantage of the desirable features of both the finite element and substructure methods and which minimizes their undesirable features. The hybrid model is obtained by partitioning the total soil-structure system into a nearfield and a far-field with a smooth hemispherical interface. The near-field consists of the structure and a finite region of soil immediately surrounding its base. The entire near-field may be modelled in three-dimensional form using the finite element method; thus, taking advantage of its ability to model irregular geometries, and the non-linear soil behavior in the immediate vicinity of the structure. (orig./WL)

  19. A hybrid parallel framework for the cellular Potts model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yi [Los Alamos National Laboratory]; He, Kejing [South China Univ.]; Dong, Shoubin [South China Univ.]

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, which cannot be used for large-scale complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of complex collective behavior of numerous cells (~10^6).

  20. Modelling and analysis of real-time and hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Olivero, A

    1994-09-29

    This work deals with the modelling and analysis of real-time and hybrid systems. We first present timed-graphs as a model for real-time systems and recall the basic notions of real-time system analysis. We describe temporal properties of the timed-graphs using TCTL formulas. We consider two methods for property verification: on the one hand we study symbolic model-checking (based on backward analysis), and on the other hand we propose a verification method derived from the construction of the simulation graph (based on forward analysis). Both methods have been implemented within the KRONOS verification tool. Their application to the automatic verification of several real-time systems confirms the practical interest of our approach. In the second part we study hybrid systems, i.e. systems combining discrete components with continuous ones. Since in the general case the analysis of this kind of system is undecidable, we identify two sub-classes of hybrid systems and give a construction-based method for generating a timed-graph from an element of these sub-classes. We prove that in one case the obtained timed-graph is bisimilar to the considered system, and that there exists a simulation in the other case. These relationships allow the described techniques to be applied to hybrid systems in the defined sub-classes. (authors). 60 refs., 43 figs., 8 tabs., 2 annexes.

  1. A Pareto upper tail for capital income distribution

    Science.gov (United States)

    Oancea, Bogdan; Pirjol, Dan; Andrei, Tudorel

    2018-02-01

    We present a study of the capital income distribution and of its contribution to the total income (capital income share) using individual tax income data in Romania, for 2013 and 2014. Using a parametric representation we show that the capital income is Pareto distributed in the upper tail, with a Pareto coefficient α ∼ 1.44 which is much smaller than the corresponding coefficient for wage- and non-wage-income (excluding capital income), of α ∼ 2.53. Including the capital income contribution has the effect of increasing the overall inequality measures.
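
    For a Pareto upper tail, the tail exponent above a threshold x_min has the simple maximum-likelihood (Hill) form alpha = n / sum(log(x_i / x_min)). The snippet below applies it to synthetic data; the quoted coefficients in the record come from the Romanian tax data analysed in the paper.

        # Hill (maximum-likelihood) estimate of a Pareto tail exponent on synthetic incomes.
        import numpy as np

        rng = np.random.default_rng(42)
        incomes = (rng.pareto(1.44, size=200_000) + 1.0) * 10_000   # synthetic capital incomes

        x_min = np.quantile(incomes, 0.99)          # fit only the upper 1% tail
        tail = incomes[incomes >= x_min]
        alpha_hat = tail.size / np.sum(np.log(tail / x_min))
        print("estimated tail exponent:", round(alpha_hat, 2))      # should be near 1.44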

  2. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    International Nuclear Information System (INIS)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-01-01

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows

  3. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number

  4. Designing Pareto-superior demand-response rate options

    International Nuclear Information System (INIS)

    Horowitz, I.; Woo, C.K.

    2006-01-01

    We explore three voluntary service options (real-time pricing, time-of-use pricing, and curtailable/interruptible service) that a local distribution company might offer its customers in order to encourage them to alter their electricity usage in response to changes in the electricity-spot-market price. These options are simple and practical, and make minimal information demands. We show that each of the options is Pareto-superior ex ante, in that it benefits both the participants and the company offering it, while not affecting the non-participants. The options are shown to be Pareto-superior ex post as well, except under certain exceptional circumstances. (author)

  5. Pareto-Zipf law in growing systems with multiplicative interactions

    Science.gov (United States)

    Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi

    2018-06-01

    Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.

  6. A 'simple' hybrid model for power derivatives

    International Nuclear Information System (INIS)

    Lyle, Matthew R.; Elliott, Robert J.

    2009-01-01

    This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)
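
    Although the paper derives closed-form prices, the setup is easy to cross-check numerically: simulate a mean-reverting (Ornstein-Uhlenbeck) log-spot price and value a European call on the average spot over the delivery period by Monte Carlo. The parameters below are illustrative assumptions, not calibrated values.

        # Monte Carlo sketch: mean-reverting log-spot price, call on the average spot.
        import numpy as np

        rng = np.random.default_rng(7)
        kappa, mu, sigma = 3.0, np.log(40.0), 0.8      # mean-reversion speed, long-run log level, vol
        r, T, n_steps, n_paths = 0.05, 0.5, 126, 50_000
        K = 42.0                                       # strike on the average spot price
        dt = T / n_steps

        x = np.full(n_paths, np.log(40.0))             # initial log spot
        avg = np.zeros(n_paths)
        for _ in range(n_steps):
            x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
            avg += np.exp(x) / n_steps

        payoff = np.maximum(avg - K, 0.0)
        price = np.exp(-r * T) * payoff.mean()
        print("MC price of the call on the average spot:", round(price, 3))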

  7. Statement of Problem of Pareto Frontier Management and Its Solution in the Analysis and Synthesis of Optimal Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2015-01-01

    The article concerns multi-criteria optimization (MCO), which assumes that the operational quality criteria of a system are independent, and specifies a way to improve the values of these criteria. Mutual contradiction of some criteria is a major problem in MCO. One of the most important areas of research is to obtain the so-called Pareto-optimal options. The subject of this research is the Pareto front, also called the Pareto frontier. The article discusses front classifications by geometric representation for the two-criterion case. It presents a mathematical description of front characteristics using gradients and their projections. A review of current domestic and foreign literature reveals that existing work on constructing the Pareto frontier addresses conditions of uncertainty, stochastic formulations, and unconstrained settings, considering topologies in both the two- and three-dimensional cases, with modern applications including multi-agent systems and groups of players in differential games. However, none of the surveyed works addresses active management of the front. The objective of this article is to pose the Pareto frontier problem in a new formulation, namely with the active participation of the system developers and/or the decision makers (DM) in managing the Pareto frontier. Such a formulation differs from the traditionally accepted approach based on the analysis of already existing solutions. The article discusses three ways to describe the quality of a control system. The first is to use direct quality criteria for a closed-loop system model in the form of a general oscillatory element. The second is to study a specific two-loop aircraft control system using the angular velocity and normal acceleration loops. The third is the use of integrated quality criteria. In all three cases, the selected criteria are

  8. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
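
    A toy version of the idea can be written down directly: if the true error variance given an ensemble sample variance s^2 is approximated by w*s^2 + (1-w)*(climatological variance), then regressing squared innovations on s^2 over many cases yields the weights. The synthetic-data sketch below ignores observation error for simplicity; it is an illustration of the concept, not the paper's derivation.

        # Toy estimate of hybrid variance weights from (innovation, ensemble-variance) pairs.
        import numpy as np

        rng = np.random.default_rng(3)
        n_cases, n_members = 20_000, 10

        true_var = rng.gamma(shape=2.0, scale=0.5, size=n_cases)       # flow-dependent error variance
        errors = rng.normal(0.0, np.sqrt(true_var))                    # forecast errors (innovations)
        members = rng.normal(0.0, np.sqrt(true_var)[:, None], (n_cases, n_members))
        s2 = members.var(axis=1, ddof=1)                               # ensemble sample variance

        slope, intercept = np.polyfit(s2, errors**2, 1)                # linear fit of d^2 on s^2
        clim_var = true_var.mean()
        print(f"ensemble weight ~ {slope:.2f}, climatology weight ~ {intercept / clim_var:.2f}")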

  9. A hybrid society model for simulating residential electricity consumption

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Minjie [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China); State Power Economic Research Institute, Beijing (China); Hu, Zhaoguang [State Power Economic Research Institute, Beijing (China); Wu, Junyong; Zhou, Yuhui [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China)

    2008-12-15

    In this paper, a hybrid social model combining an econometric model and a social influence model is proposed for evaluating the influence of pricing policy and public education policy on residential electricity-use habits in power resources management. A hybrid society simulation platform based on the proposed model, called the residential electricity consumption multi-agent system (RECMAS), is designed for simulating residential electricity consumption with a multi-agent system. RECMAS is composed of a consumer agent, a power supplier agent, and a policy maker agent. It provides policy makers with a useful tool to evaluate power price policies and public education campaigns in different scenarios. According to an influence diffusion mechanism, RECMAS can simulate the residential electricity demand-supply chain and analyze the impacts of the factors on residential electricity consumption. Finally, the proposed method is used to simulate urban residential electricity consumption in China. (author)

  10. A hybrid society model for simulating residential electricity consumption

    International Nuclear Information System (INIS)

    Xu, Minjie; Hu, Zhaoguang; Wu, Junyong; Zhou, Yuhui

    2008-01-01

    In this paper, a hybrid social model combining an econometric model and a social influence model is proposed for evaluating the influence of pricing policy and public education policy on residential electricity-use habits in power resources management. A hybrid society simulation platform based on the proposed model, called the residential electricity consumption multi-agent system (RECMAS), is designed for simulating residential electricity consumption with a multi-agent system. RECMAS is composed of a consumer agent, a power supplier agent, and a policy maker agent. It provides policy makers with a useful tool to evaluate power price policies and public education campaigns in different scenarios. According to an influence diffusion mechanism, RECMAS can simulate the residential electricity demand-supply chain and analyze the impacts of the factors on residential electricity consumption. Finally, the proposed method is used to simulate urban residential electricity consumption in China. (author)

  11. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using

  12. The semantics of hybrid process models

    NARCIS (Netherlands)

    Slaats, T.; Schunselaar, D.M.M.; Maggi, F.M.; Reijers, H.A.; Debruyne, C.; Panetto, H.; Meersman, R.; Dillon, T.; Kuhn, E.; O'Sullivan, D.; Agostino Ardagna, C.

    2016-01-01

    In the area of business process modelling, declarative notations have been proposed as alternatives to notations that follow the dominant, imperative paradigm. Yet, the choice between an imperative or declarative style of modelling is not always easy to make. Instead, a mixture of these styles is

  13. A computational model for lower hybrid current drive

    International Nuclear Information System (INIS)

    Englade, R.C.; Bonoli, P.T.; Porkolab, M.

    1983-01-01

    A detailed simulation model for lower hybrid (LH) current drive in toroidal devices is discussed. This model accounts reasonably well for the magnitude of radio frequency (RF) current observed in the PLT and Alcator C devices. It also reproduces the experimental dependencies of RF current generation on toroidal magnetic field and has provided insights about mechanisms which may underlie the observed density limit of current drive. (author)

  14. A Hybrid Model for Forecasting Sales in Turkish Paint Industry

    OpenAIRE

    Alp Ustundag

    2009-01-01

    Sales forecasting is important for facilitating effective and efficient allocation of scarce resources. However, how to best model and forecast sales has been a long-standing issue. There is no best forecasting method that is applicable in all circumstances. Therefore, confidence in the accuracy of sales forecasts is achieved by corroborating the results using two or more methods. This paper proposes a hybrid forecasting model that uses an artificial intelligence method (AI) w...

  15. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on the NEFCLASS model, which was modified. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.

  16. Apricot - An Object-Oriented Modeling Language for Hybrid Systems

    OpenAIRE

    Fang, Huixing; Zhu, Huibiao; Shi, Jianqi

    2013-01-01

    We propose Apricot as an object-oriented language for modeling hybrid systems. The language combines features of domain-specific languages and object-oriented languages, filling the gap between design and implementation; as a result, we put forward a modeling language with simple and distinct syntax, structure and semantics. In addition, we introduce the concept of design by convention into Apricot. As the characteristic of object-oriented and the component architecture in Apricot, we c...

  17. Hierarchical models and iterative optimization of hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.

  18. Can we reach Pareto optimal outcomes using bottom-up approaches?

    NARCIS (Netherlands)

    V. Sanchez-Anguix (Victor); R. Aydoğan (Reyhan); T. Baarslag (Tim); C.M. Jonker (Catholijn)

    2016-01-01

    Classically, disciplines like negotiation and decision making have focused on reaching Pareto optimal solutions due to their stability and efficiency properties. Despite the fact that many practical and theoretical algorithms have successfully attempted to provide Pareto optimal solutions,

  19. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Full Text Available Composite models have received much attention in the recent actuarial literature to describe heavy-tailed insurance loss data. One of the models that presents a good performance in describing this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out estimation of parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper by using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

  20. A model for particle acceleration in lower hybrid collapse

    International Nuclear Information System (INIS)

    Retterer, J.M.

    1997-01-01

    A model for particle acceleration during the nonlinear collapse of lower hybrid waves is described. Using the Musher-Sturman wave equation to describe the effects of nonlinear processes and a velocity diffusion equation for the particle velocity distribution, the model self-consistently describes the exchange of energy between the fields and the particles in the local plasma. Two-dimensional solutions are presented for the modulational instability of a plane wave and the collapse of a cylindrical wave packet. These calculations were motivated by sounding rocket observations in the vicinity of auroral arcs in the Earth's ionosphere, which have revealed the existence of large-amplitude lower-hybrid wave packets associated with ions accelerated to energies of 100 eV. The scaling of the sizes of these wave packets is consistent with the theory of lower-hybrid collapse and the observed lower-hybrid field amplitudes are adequate to accelerate the ionospheric ions to the observed energies

  1. Hybrid reduced order modeling for assembly calculations

    International Nuclear Information System (INIS)

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur

    2015-01-01

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system.

  2. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2015-12-15

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system.

  3. A Hybrid Model for Forecasting Sales in Turkish Paint Industry

    Directory of Open Access Journals (Sweden)

    Alp Ustundag

    2009-12-01

    Full Text Available Sales forecasting is important for facilitating effective and efficient allocation of scarce resources. However, how to best model and forecast sales has been a long-standing issue. There is no best forecasting method that is applicable in all circumstances. Therefore, confidence in the accuracy of sales forecasts is achieved by corroborating the results using two or more methods. This paper proposes a hybrid forecasting model that uses an artificial intelligence method (AI) with multiple linear regression (MLR) to predict product sales for the largest Turkish paint producer. In the hybrid model, three different AI methods, fuzzy rule-based system (FRBS), artificial neural network (ANN) and adaptive neuro fuzzy network (ANFIS), are used and compared to each other. The results indicate that FRBS yields better forecasting accuracy in terms of root mean squared error (RMSE) and mean absolute percentage error (MAPE).
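
    For reference, the two accuracy measures used to rank the methods can be computed as in the sketch below; the forecast and sales values here are made up purely for the example.

        import numpy as np

        def rmse(actual, forecast):
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return np.sqrt(np.mean((actual - forecast) ** 2))

        def mape(actual, forecast):
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

        # toy comparison of three hypothetical forecast series against actual sales
        actual = [120, 135, 150, 160, 155]
        forecasts = {"FRBS": [118, 137, 148, 162, 153],
                     "ANN": [110, 140, 155, 150, 160],
                     "ANFIS": [125, 130, 145, 165, 150]}
        for name, f in forecasts.items():
            print(f"{name}: RMSE={rmse(actual, f):.2f}, MAPE={mape(actual, f):.2f}%")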

  4. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2013-07-01

    While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system. (authors)

  5. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model was widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of bushing in terms of the amplitude and the frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under the unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining linear and neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes is applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers

  6. The Cheshire Cat principle for hybrid bag models

    International Nuclear Information System (INIS)

    Nielsen, H.B.

    1987-05-01

    We argue for the Cheshire Cat point of view, in which the bag in the chiral bag model has no physical significance but only a notational one. It is explained how a fermion - in, say, a 1+1 dimensional exact Cheshire Cat model - escapes the bag by means of an anomaly. The possibility of constructing sophisticated hybrid bag models is suggested, which use the lack of physical significance of the bag to fix the many parameters so as to still give hope of a phenomenologically sensible model. (orig.)

  7. Modelling of a Hybrid Energy System for Autonomous Application

    Directory of Open Access Journals (Sweden)

    Yang He

    2013-10-01

    Full Text Available A hybrid energy system (HES) is a trending power supply solution for autonomous devices. With the help of an accurate system model, HES development will be efficient and well directed. In spite of various precise unit models, complete HES system models are rarely developed. This paper proposes a system modelling approach which applies power flux conservation as the governing equation and adapts and modifies unit models of solar cells, piezoelectric generators, a Li-ion battery and a super-capacitor. A generalized power harvest, storage and management strategy is also suggested to adapt to various application scenarios.

  8. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    Science.gov (United States)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of the parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity for a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are useful to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
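
    A minimal sketch of the selection step described above, assuming the closeness measure is a similarity-weighted relative distance between each Pareto parameter set and the a priori parameters; the actual measure and weights used in the project may differ, and the parameter names are only illustrative.

        import numpy as np

        def select_regionalized_set(pareto_sets, apriori, similarity_weights):
            """Pick the Pareto parameter set closest to the a priori parameters.

            pareto_sets: (n_sets, n_params) non-dominated parameter sets
            apriori: (n_params,) a priori parameters from watershed properties
            similarity_weights: (n_params,) larger weight = parameter assumed
                more similar across basins, so deviations cost more."""
            pareto_sets = np.atleast_2d(pareto_sets).astype(float)
            rel_dev = (pareto_sets - apriori) / apriori          # relative deviations
            closeness = np.sqrt((similarity_weights * rel_dev**2).sum(axis=1))
            return int(np.argmin(closeness)), closeness

        # toy usage: three Pareto sets over two parameters (e.g. UZTWM, UZK)
        pareto = [[55.0, 0.30], [48.0, 0.45], [62.0, 0.25]]
        apriori = np.array([50.0, 0.35])
        weights = np.array([1.0, 0.2])     # first parameter assumed more regional
        print(select_regionalized_set(pareto, apriori, weights))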

  9. Multi-agent Pareto appointment exchanging in hospital patient scheduling

    NARCIS (Netherlands)

    I.B. Vermeulen (Ivan); S.M. Bohte (Sander); D.J.A. Somefun (Koye); J.A. La Poutré (Han)

    2007-01-01

    We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment

  10. Multi-agent Pareto appointment exchanging in hospital patient scheduling

    NARCIS (Netherlands)

    Vermeulen, I.B.; Bohté, S.M.; Somefun, D.J.A.; Poutré, La J.A.

    2007-01-01

    We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment exchanging algorithm:

  11. An Evolutionary Efficiency Alternative to the Notion of Pareto Efficiency

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2012-01-01

    The paper argues that the notion of Pareto efficiency builds on two normative assumptions: the more general consequentialist norm of any efficiency criterion, and the strong no-harm principle of the prohibition of any redistribution during the economic process that hurts at least one

  12. Tsallis-Pareto like distributions in hadron-hadron collisions

    International Nuclear Information System (INIS)

    Barnafoeldi, G G; Uermoessy, K; Biro, T S

    2011-01-01

    Non-extensive thermodynamics is a novel approach in high energy physics. In high-energy heavy-ion, and especially in proton-proton collisions, we are far from a canonical thermal state described by Boltzmann-Gibbs statistics. In these reactions low and intermediate transverse momentum spectra are extremely well reproduced by the Tsallis-Pareto distribution, but the physical origin of the Tsallis parameters is still an unsettled question. Here, we analyze whether the Tsallis-Pareto energy distribution overlaps with hadron spectra at high-pT. We fitted data measured in proton-proton (proton-antiproton) collisions over a wide center-of-mass energy range, from 200 GeV RHIC up to 7 TeV LHC energies. Furthermore, our test is extended to an investigation of a possible √s-dependence of the power in the Tsallis-Pareto distribution, motivated by QCD evolution equations. We found that Tsallis-Pareto distributions fit high-pT data well over the wide center-of-mass energy range. Deviations from the fits appear at pT > 20-30 GeV/c, especially in the CDF data. Introducing a pT-scaling ansatz, the fits at low and intermediate transverse momenta still remain good, and the deviations tend to disappear at the highest-pT data.
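
    As an illustration of this kind of fit, the sketch below adjusts one common Tsallis-Pareto parameterization, A(1 + pT/(nT))^(-n), to a synthetic steeply falling spectrum with scipy; the exact normalization and pT prefactor vary between papers, so this is a sketch rather than the analysis used in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def tsallis_pareto(pt, A, n, T):
            """One common Tsallis-Pareto parameterization of a pT spectrum."""
            return A * (1.0 + pt / (n * T)) ** (-n)

        # synthetic "spectrum" standing in for measured yields, with 5% relative errors
        pt = np.linspace(0.5, 30.0, 60)
        rng = np.random.default_rng(1)
        data = tsallis_pareto(pt, A=50.0, n=7.0, T=0.15) * rng.normal(1.0, 0.05, pt.size)

        popt, pcov = curve_fit(tsallis_pareto, pt, data, p0=[40.0, 6.0, 0.2],
                               sigma=0.05 * data, absolute_sigma=True, maxfev=10000)
        print("fitted A, n, T:", popt)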

  13. COMPROMISE, OPTIMAL AND TRACTIONAL ACCOUNTS ON PARETO SET

    Directory of Open Access Journals (Sweden)

    V. V. Lahuta

    2010-11-01

    Full Text Available The problem of optimum traction calculations is considered as a problem of optimum distribution of a resource. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).

  14. HYBRID WAYS OF DOING: A MODEL FOR TEACHING PUBLIC SPACE

    Directory of Open Access Journals (Sweden)

    Gabrielle Bendiner-Viani

    2010-07-01

    Full Text Available This paper addresses an exploratory practice undertaken by the authors in a co-taught class to hybridize theory, research and practice. This experiment in critical transdisciplinary design education took the form of a “critical studio + practice-based seminar on public space”, two interlinked classes co-taught by landscape architect Elliott Maltby and environmental psychologist Gabrielle Bendiner-Viani at the Parsons, The New School for Design. This design process was grounded in the political and social context of the contested East River waterfront of New York City and valued both intensive study (using a range of social science and design methods) and a partnership with a local community organization, engaging with the politics, issues and human needs of a complex site. The paper considers how we encouraged interdisciplinary collaboration and dialogue between teachers as well as between liberal arts and design students and developed strategies to overcome preconceived notions of traditional “studio” and “seminar” work. By exploring the challenges and adjustments made during the semester and the process of teaching this class, this paper addresses how we moved from a model of intertwining theory, research and practice, to a hybrid model of multiple ways of doing, a model particularly apt for teaching public space. Through examples developed for and during our course, the paper suggests practical ways of supporting this transdisciplinary hybrid model.

  15. Modelling the solar wind interaction with Mercury by a quasi-neutral hybrid model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    2003-11-01

    Full Text Available The quasi-neutral hybrid model is a self-consistent modelling approach that includes positively charged particles and an electron fluid. The approach has received increasing interest in space plasma physics research because it makes it possible to study several plasma physical processes that are difficult or impossible to model by self-consistent fluid models, such as the effects associated with the ions’ finite gyroradius, the velocity difference between different ion species, or the non-Maxwellian velocity distribution function. By now quasi-neutral hybrid models have been used to study the solar wind interaction with the non-magnetised Solar System bodies of Mars, Venus, Titan and comets. Localized, two-dimensional hybrid model runs have also been made to study the terrestrial dayside magnetosheath. However, the Hermean plasma environment has not yet been analysed by a global quasi-neutral hybrid model. In this paper we present a new quasi-neutral hybrid model developed to study various processes associated with the Mercury-solar wind interaction. Emphasis is placed on addressing the advantages and disadvantages of the approach for studying different plasma physical processes near the planet. The basic assumptions of the approach and the algorithms used in the new model are thoroughly presented. Finally, some of the first three-dimensional hybrid model runs made for Mercury are presented. The resulting macroscopic plasma parameters and the morphology of the magnetic field demonstrate the applicability of the new approach to study the Mercury-solar wind interaction globally. In addition, the real advantage of the kinetic hybrid model approach is the ability to study the properties of individual ions, and the study clearly demonstrates the large potential of the approach to address these more detailed issues by a quasi-neutral hybrid model in the future. Key words: Magnetospheric physics (planetary magnetospheres; solar wind-magnetosphere interactions) – Space plasma

  16. Properties of hybrid stars in an extended MIT bag model

    International Nuclear Information System (INIS)

    Bao Tmurbagan; Liu Guangzhou; Zhu Mingfeng

    2009-01-01

    The properties of hybrid stars are investigated in the framework of the relativistic mean field theory (RMFT) and an MIT bag model with a density-dependent bag constant to describe the hadron phase (HP) and quark phase (QP), respectively. We find that the density-dependent B(ρ) decreases with baryon density ρ; this decrease makes the strange quark matter more energetically favorable, which makes the threshold densities of the hadron-quark phase transition lower than those of the original bag constant case. In this case, the hyperon degrees of freedom cannot be considered. As a result, the equations of state of a star in the mixed phase (MP) become softer whereas those in the QP become stiffer, and the radii of the star obviously decrease. This indicates that the extended MIT bag model is more suitable to describe hybrid stars with small radii. (authors)

  17. A light neutralino in hybrid models of supersymmetry breaking

    CERN Document Server

    Dudas, Emilian; Parmentier, Jeanne

    2008-01-01

    We show that in gauge mediation models where heavy messenger masses are provided by the adjoint Higgs field of an underlying SU(5) theory, a generalized gauge mediation spectrum arises with the characteristic feature of having a neutralino much lighter than in the standard gauge or gravity mediation schemes. This naturally fits in a hybrid scenario where gravity mediation, while subdominant with respect to gauge mediation, provides mu and B mu parameters in the TeV range.

  18. A Novel of Hybrid Maintenance Management Models for Industrial Applications

    OpenAIRE

    Tahir, Zulkifli

    2010-01-01

    It is observed through empirical studies that the effectiveness of industrial processes has been increased by a well-organized machine maintenance structure. In the current research, a novel maintenance concept has been designed by hybridizing several maintenance management models with the Decision Making Grid (DMG), the Analytic Hierarchy Process (AHP) and Fuzzy Logic. The concept is designed for maintenance personnel to evaluate and benchmark the maintenance operations and to reveal important maintena...

  19. A light neutralino in hybrid models of supersymmetry breaking

    International Nuclear Information System (INIS)

    Dudas, Emilian; Lavignac, Stephane; Parmentier, Jeanne

    2009-01-01

    We show that in gauge mediation models where heavy messenger masses are provided by the adjoint Higgs field of an underlying SU(5) theory, a generalized gauge mediation spectrum arises with the characteristic feature of having a neutralino LSP much lighter than in the standard gauge or gravity mediation schemes. This naturally fits in a hybrid scenario where gravity mediation, while subdominant with respect to gauge mediation, provides μ and Bμ parameters of the appropriate size for electroweak symmetry breaking

  20. Hybrid Model for e-Learning Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Suzana M. Savic

    2012-02-01

    Full Text Available E-learning is becoming increasingly important for the competitive advantage of economic organizations and higher education institutions. Therefore, it is becoming a significant aspect of quality which has to be integrated into the management system of every organization or institution. The paper examines e-learning quality characteristics, standards, criteria and indicators and presents a multi-criteria hybrid model for e-learning quality evaluation based on the method of Analytic Hierarchy Process, trend analysis, and data comparison.
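
    The AHP component of such a hybrid evaluation model typically derives criterion weights from a pairwise comparison matrix via its principal eigenvector and checks a consistency ratio. The sketch below shows that standard computation with illustrative comparison values, not the paper's actual criteria or judgments.

        import numpy as np

        # Saaty's random consistency index for matrix sizes 1..9
        RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

        def ahp_weights(pairwise):
            """Priority vector and consistency ratio from an AHP pairwise comparison matrix."""
            A = np.asarray(pairwise, dtype=float)
            eigvals, eigvecs = np.linalg.eig(A)
            k = int(np.argmax(eigvals.real))
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (eigvals[k].real - n) / (n - 1)        # consistency index
            cr = ci / RI[n] if RI[n] > 0 else 0.0       # consistency ratio (acceptable if < 0.1)
            return w, cr

        # illustrative comparison of three e-learning quality criteria (made-up judgments)
        A = [[1, 3, 5],
             [1 / 3, 1, 2],
             [1 / 5, 1 / 2, 1]]
        weights, cr = ahp_weights(A)
        print("weights:", np.round(weights, 3), "CR:", round(cr, 3))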

  1. Hybrid Speaker Recognition Using Universal Acoustic Model

    Science.gov (United States)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes which degrade the speaker recognition performance. By focusing on property of the speaker's acoustic channel, UAM can provide robustness against the synchronization error. The overall speaker recognition accuracy is improved by combining UAM with the energy-based approach. For 0.1s speech inputs and 4 subjects, speaker recognition accuracy of 94% is achieved at the synchronization error less than 100ms.

  2. Stochastic linear hybrid systems: Modeling, estimation, and application

    Science.gov (United States)

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent of its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  3. The Cheshire Cat principle applied to hybrid bag models

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Wirzba, A.

    1987-05-01

    We argue here for the Cheshire Cat point of view, according to which the bag (itself) has only notational, but no physical, significance. It is explained in a 1+1 dimensional exact Cheshire Cat model how a fermion can escape from the bag by means of an anomaly. We also suggest that suitably constructed hybrid bag models may be used to fix such parameters of effective Lagrangians as can otherwise be obtained only from experiments. This idea is illustrated in a calculation of the mass of the pseudoscalar η' meson in 1+1 dimensions. Thus there is hope of finding a construction principle for a phenomenologically sensible model. (orig.)

  4. Hybrid model for the decay of nuclear giant resonances

    International Nuclear Information System (INIS)

    Hussein, M.S.

    1986-12-01

    The decay properties of nuclear giant multipole resonances are discussed within a hybrid model that incorporates, in a unitary consistent way, both the coherent and statistical features. It is suggested that the 'direct' decay of the GR is described with continuum first RPA and the statistical decay calculated with a modified Hauser-Feshbach model. Application is made to the decay of the giant monopole resonance in 208 Pb. Suggestions are made concerning the calculation of the mixing parameter using the statistical properties of the shell model eigenstates at high excitation energies. (Author) [pt

  5. Model Predictive Control for Connected Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Kaijiang Yu

    2015-01-01

    Full Text Available This paper presents a new model predictive control system for connected hybrid electric vehicles to improve fuel economy. The new features of this study are as follows. First, the battery charge and discharge profile and the driving velocity profile are simultaneously optimized: one optimization concerns the energy management of the HEV (the battery power Pbatt); the other concerns the energy-consumption minimization problem of the ACC control of the two vehicles. Second, a system for connected hybrid electric vehicles has been developed considering varying drag coefficients and road gradients. Third, the fuel model of a typical hybrid electric vehicle is developed using maps of the engine efficiency characteristics. Fourth, simulations and analysis (under different parameters, i.e., road conditions, vehicle state of charge, etc.) are conducted to verify the effectiveness of the method in achieving higher fuel efficiency. The model predictive control problem is solved using a numerical computation method: the continuation and generalized minimum residual method. Computer simulation results reveal improvements in fuel economy using the proposed control method.

  6. Birds shed RNA-viruses according to the pareto principle.

    Science.gov (United States)

    Jankowski, Mark D; Williams, Christopher J; Fair, Jeanne M; Owen, Jennifer C

    2013-01-01

    A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.
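
    The two summary statistics quoted above, the Gini coefficient of per-host shedding and the share of hosts accounting for 80% of total shedding, can be computed as in the sketch below; the shedding counts here are synthetic stand-ins for the titers analysed in the meta-analysis.

        import numpy as np

        def gini(x):
            """Gini coefficient of non-negative values (0 = perfect equality)."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            cum = np.cumsum(x)
            return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

        def host_share_for(x, fraction=0.8):
            """Smallest share of hosts (highest shedders first) producing `fraction` of shedding."""
            x = np.sort(np.asarray(x, dtype=float))[::-1]
            cum = np.cumsum(x) / x.sum()
            return (np.searchsorted(cum, fraction) + 1) / x.size

        # synthetic heavy-tailed per-bird shedding counts standing in for measured titers
        shedding = np.random.default_rng(2).weibull(0.6, size=200) * 1e4
        print("Gini:", round(gini(shedding), 3))
        print("share of birds shedding 80% of virus:", round(host_share_for(shedding), 3))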

  7. Birds shed RNA-viruses according to the pareto principle.

    Directory of Open Access Journals (Sweden)

    Mark D Jankowski

    Full Text Available A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.

  8. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Directory of Open Access Journals (Sweden)

    Samar Hayat Khan Tareen

    2015-07-01

    Full Text Available Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling which lead to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model

  9. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance maintained over a long period of time becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is such a business model that has the ability to generate results in every possible market situation and, moreover, it has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. Being a factor ensuring more work and more efficient work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  10. Numerical modeling of hybrid fiber-reinforced concrete (hyfrc)

    International Nuclear Information System (INIS)

    Hameed, R.; Turatsinze, A.

    2015-01-01

    A model for the numerical simulation of the mechanical response of concrete reinforced with slipping and non-slipping metallic fibers in hybrid form is presented in this paper. The constitutive law used to model plain concrete behaviour is based on plasticity and damage theories, and is capable of determining localized crack opening in three-dimensional (3-D) systems. The behaviour law used for slipping metallic fibers is formulated based on the effective stress carried by these fibers once the concrete matrix is cracked. A continuous approach is proposed to model the effect of the addition of non-slipping metallic fibers to plain concrete. This approach considers the constitutive law of the concrete matrix with the increased fracture energy in tension obtained experimentally in direct tension tests on Fiber Reinforced Concrete (FRC). To simulate the mechanical behaviour of hybrid fiber-reinforced concrete (HyFRC), the proposed approach for non-slipping metallic fibers and the constitutive laws of plain concrete and slipping fibers are used simultaneously without any additional equation. All the parameters used by the proposed model have physical meanings and are determined through experiments or drawn from literature. The model was implemented in the Finite Element (FE) code CASTEM and tested on FRC prismatic notched specimens in flexure. Model predictions showed good agreement with experimental results. (author)

  11. Modelling and control of a light-duty hybrid electric truck

    OpenAIRE

    Park, Jong-Kyu

    2006-01-01

    This study is concentrated on modelling and developing the controller for the light-duty hybrid electric truck. The hybrid electric vehicle has advantages in fuel economy. However, there have been relatively few studies on commercial HEVs, whilst a considerable number of studies on the hybrid electric system have been conducted in the field of passenger cars. So the current status and the methodologies to develop the LD hybrid electric truck model have been studied through the ...

  12. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  13. Computing the Distribution of Pareto Sums Using Laplace Transformation and Stehfest Inversion

    Science.gov (United States)

    Harris, C. K.; Bourne, S. J.

    2017-05-01

    In statistical seismology, the properties of distributions of total seismic moment are important for constraining seismological models, such as the strain partitioning model (Bourne et al. J Geophys Res Solid Earth 119(12): 8991-9015, 2014). This work was motivated by the need to develop appropriate seismological models for the Groningen gas field in the northeastern Netherlands, in order to address the issue of production-induced seismicity. The total seismic moment is the sum of the moments of individual seismic events, which, in common with many other natural processes, are governed by Pareto or "power law" distributions. The maximum possible moment for an induced seismic event can be constrained by geomechanical considerations, but rather poorly, and for Groningen it cannot be reliably inferred from the frequency distribution of moment magnitude pertaining to the catalogue of observed events. In such cases it is usual to work with the simplest form of the Pareto distribution without an upper bound, and we follow the same approach here. In the case of seismicity, the exponent β appearing in the power-law relation is small enough for the variance of the unbounded Pareto distribution to be infinite, which renders standard statistical methods concerning sums of statistical variables, based on the central limit theorem, inapplicable. Determinations of the properties of sums of moderate to large numbers of Pareto-distributed variables with infinite variance have traditionally been addressed using intensive Monte Carlo simulations. This paper presents a novel method for determining the properties of such sums that is accurate, fast and easily implemented, and is applicable to Pareto-distributed variables for which the power-law exponent β lies within the interval [0, 1]. It is based on shifting the original variables so that a non-zero density is obtained exclusively for non-negative values of the parameter and is identically zero elsewhere, a property
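
    A rough sketch of the transform-and-invert idea (not the paper's implementation): numerically Laplace-transform a shifted Pareto density, raise the transform to the n-th power for the sum of n i.i.d. terms, divide by s to obtain the transform of the CDF, and invert with the Gaver-Stehfest algorithm. The density parameterization and the numerical quadrature below are assumptions made for the example.

        import numpy as np
        from math import factorial, log
        from scipy.integrate import quad

        def stehfest_weights(N=14):
            """Gaver-Stehfest weights V_k for an even number of terms N."""
            V = np.zeros(N)
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j ** (N // 2) * factorial(2 * j)
                          / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                             * factorial(k - j) * factorial(2 * j - k)))
                V[k - 1] = (-1) ** (k + N // 2) * s
            return V

        def pareto_laplace(s, beta, lam):
            """Numerical Laplace transform of a shifted (Lomax-type) Pareto density
            f(x) = (beta/lam) * (1 + x/lam)**(-(beta + 1)), x >= 0."""
            val, _ = quad(lambda x: np.exp(-s * x) * (beta / lam) * (1 + x / lam) ** (-(beta + 1)),
                          0, np.inf, limit=200)
            return val

        def pareto_sum_cdf(t, n, beta, lam, N=14):
            """CDF at t of the sum of n i.i.d. shifted Pareto variables,
            via Stehfest inversion of phi(s)**n / s."""
            V = stehfest_weights(N)
            a = log(2.0) / t
            return a * sum(V[k - 1] * pareto_laplace(k * a, beta, lam) ** n / (k * a)
                           for k in range(1, N + 1))

        # toy usage: heavy-tailed case beta = 0.8 (infinite variance), 50 summands
        print(pareto_sum_cdf(t=500.0, n=50, beta=0.8, lam=1.0))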

  14. MODEL APLIKASI FIKIH MUAMALAH PADA FORMULASI HYBRID CONTRACT

    Directory of Open Access Journals (Sweden)

    Ali Murtadho

    2013-10-01

    Full Text Available The modern literature of fiqh mu’āmalah discusses at length various contract formulations capable of maximizing profit in the shariah finance industry. These new contract modifications are syntheses of existing contracts, formulated in such a way as to form an integrated contract. This formulation is known as a hybrid contract or multi-contract (al-'uqūd al-murakkabah). Examples are bay' bi thaman 'ājil, ijārah muntahiyah bi ’l-tamlīk and mushārakah mutanāqiṣah. This study intends to further describe models of hybrid contracts and to explore the shari'ah principles in modern financial institutions. The study found a potential shift from the ideal values of the spirit of shari'ah toward a merely formal, competition-driven shari'ah compliance.

  15. A Simple Hybrid Model for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Suseelatha Annamareddi

    2013-01-01

    Full Text Available The paper proposes a simple hybrid model to forecast the electrical load data based on the wavelet transform technique and double exponential smoothing. The historical noisy load series data is decomposed into deterministic and fluctuation components using suitable wavelet coefficient thresholds and wavelet reconstruction method. The variation characteristics of the resulting series are analyzed to arrive at reasonable thresholds that yield good denoising results. The constitutive series are then forecasted using appropriate exponential adaptive smoothing models. A case study performed on California energy market data demonstrates that the proposed method can offer high forecasting precision for very short-term forecasts, considering a time horizon of two weeks.
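
    A minimal sketch of the two stages described above, using PyWavelets for the decomposition and thresholding and a hand-rolled Holt (double exponential) smoother; the wavelet, threshold rule and smoothing constants are placeholders rather than the values used in the paper.

        import numpy as np
        import pywt

        def wavelet_split(series, wavelet="db4", level=3):
            """Split a load series into a smooth (deterministic) part, obtained by
            soft-thresholding the detail coefficients, and a fluctuation part."""
            coeffs = pywt.wavedec(series, wavelet, level=level)
            thr = np.std(coeffs[-1]) * np.sqrt(2 * np.log(len(series)))   # universal-threshold heuristic
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            smooth = pywt.waverec(coeffs, wavelet)[: len(series)]
            return smooth, series - smooth

        def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
            """Double exponential (Holt) smoothing; returns the horizon-step-ahead forecast."""
            level, trend = series[0], series[1] - series[0]
            for y in series[1:]:
                prev = level
                level = alpha * y + (1 - alpha) * (level + trend)
                trend = beta * (level - prev) + (1 - beta) * trend
            return level + horizon * trend

        # toy hourly load with a daily cycle plus noise, standing in for market data
        t = np.arange(24 * 14)
        load = 1000 + 150 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(3).normal(0, 30, t.size)
        smooth, fluct = wavelet_split(load)
        print("next-hour forecast:", round(float(holt_forecast(smooth) + holt_forecast(fluct)), 1))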

  16. Calibrated and Interactive Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel

    2016-01-01

    Form-active hybrid structures (FAHS) couple two or more different structural elements of low self weight and low or negligible bending flexural stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity...... software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo has facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful...... materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective....

  17. Analysis of extreme drinking in patients with alcohol dependence using Pareto regression.

    Science.gov (United States)

    Das, Sourish; Harel, Ofer; Dey, Dipak K; Covault, Jonathan; Kranzler, Henry R

    2010-05-20

    We developed a novel Pareto regression model with an unknown shape parameter to analyze extreme drinking in patients with Alcohol Dependence (AD). We used the generalized linear model (GLM) framework and the log-link to include the covariate information through the scale parameter of the generalized Pareto distribution. We proposed a Bayesian method based on Ridge prior and Zellner's g-prior for the regression coefficients. A simulation study indicated that the proposed Bayesian method performs better than the existing likelihood-based inference for the Pareto regression. We examined two issues of importance in the study of AD. First, we tested whether a single nucleotide polymorphism within the GABRA2 gene, which encodes a subunit of the GABA(A) receptor and has been associated with AD, influences 'extreme' alcohol intake, and second, the efficacy of three psychotherapies for alcoholism in treating extreme drinking behavior. We found an association between extreme drinking behavior and GABRA2. We also found that, at baseline, men with a high-risk GABRA2 allele had a significantly higher probability of extreme drinking than men with no high-risk allele. However, men with a high-risk allele responded to the therapy better than those with two copies of the low-risk allele. Women with high-risk alleles also responded to the therapy better than those with two copies of the low-risk allele, while women who received the cognitive behavioral therapy had better outcomes than those receiving either of the other two therapies. Among men, motivational enhancement therapy was the best for the treatment of the extreme drinking behavior. Copyright 2010 John Wiley & Sons, Ltd.
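
    Setting the Bayesian priors aside, the frequentist core of such a model is a generalized Pareto likelihood in which the log of the scale parameter is a linear function of covariates. The sketch below fits that likelihood to synthetic exceedances with scipy; the covariates and parameter values are invented for the example.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import genpareto

        def gpd_reg_negloglik(params, X, y):
            """Negative log-likelihood of a generalized Pareto regression with a
            log-link on the scale parameter: sigma_i = exp(X_i @ beta), shared shape xi."""
            xi, beta = params[0], params[1:]
            sigma = np.exp(X @ beta)
            z = 1.0 + xi * y / sigma
            if np.any(z <= 0):
                return np.inf
            return np.sum(np.log(sigma) + (1.0 / xi + 1.0) * np.log(z))

        # synthetic exceedances with an intercept and one binary covariate (e.g. risk allele)
        rng = np.random.default_rng(4)
        n = 400
        X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
        y = genpareto.rvs(0.2, scale=np.exp(X @ np.array([1.0, 0.5])), size=n, random_state=5)

        res = minimize(gpd_reg_negloglik, x0=np.array([0.1, 0.5, 0.0]),
                       args=(X, y), method="Nelder-Mead")
        print("estimated xi, beta0, beta1:", np.round(res.x, 3))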

  18. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    Science.gov (United States)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.

  19. Design, test and model of a hybrid magnetostrictive hydraulic actuator

    International Nuclear Information System (INIS)

    Chaudhuri, Anirban; Yoo, Jin-Hyeong; Wereley, Norman M

    2009-01-01

    The basic operation of hybrid hydraulic actuators involves high frequency bi-directional operation of an active material that is converted to uni-directional motion of hydraulic fluid using valves. A hybrid actuator was developed using magnetostrictive material Terfenol-D as the driving element and hydraulic oil as the working fluid. Two different lengths of Terfenol-D rod, 51 and 102 mm, with the same diameter, 12.7 mm, were used. Tests with no load and with load were carried out to measure the performance for uni-directional motion of the output piston at different pumping frequencies. The maximum no-load flow rates were 24.8 cm³ s⁻¹ and 22.7 cm³ s⁻¹ with the 51 mm and 102 mm long rods respectively, and the peaks were noted around 325 Hz pumping frequency. The blocked force of the actuator was close to 89 N in both cases. A key observation was that, at these high pumping frequencies, the inertial effects of the fluid mass dominate over the viscous effects and the problem becomes unsteady in nature. In this study, we also develop a mathematical model of the hydraulic hybrid actuator in the time domain to show the basic operational principle under varying conditions and to capture phenomena affecting system performance. Governing equations for the pumping piston and output shaft were obtained from force equilibrium considerations, while compressibility of the working fluid was taken into account by incorporating the bulk modulus. Fluid inertia was represented by a lumped parameter approach to the transmission line model, giving rise to strongly coupled ordinary differential equations. The model was then used to calculate the no-load velocities of the actuator at different pumping frequencies and simulation results were compared with experimental data for model validation

  20. Pareto-depth for multiple-query image retrieval.

    Science.gov (United States)

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
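
    As a minimal sketch of the Pareto-front idea used for multiple-query retrieval, the code below assigns each database item one dissimilarity per query and returns the items on the first non-dominated front. The toy features and plain Euclidean distances are illustrative assumptions; the paper combines the Pareto front method with efficient manifold ranking rather than raw distances.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows (smaller is better in every column)."""
    idx = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            idx.append(i)
    return idx

rng = np.random.default_rng(2)
db = rng.normal(size=(200, 16))                       # toy image features
q1, q2 = rng.normal(size=16), rng.normal(size=16)     # two queries with different semantics

# One dissimilarity per query for every database item.
d = np.column_stack([np.linalg.norm(db - q1, axis=1),
                     np.linalg.norm(db - q2, axis=1)])
first_front = pareto_front(d)
print("items returned from the first Pareto front:", first_front)
```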

  1. Decomposition and Simplification of Multivariate Data using Pareto Sets.

    Science.gov (United States)

    Huettenberger, Lars; Heine, Christian; Garth, Christoph

    2014-12-01

    Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.

  2. Bi-objective Job-Shop Scheduling Using Simulated Annealing and a Pareto Approach

    Directory of Open Access Journals (Sweden)

    Juan Carlos Osorio

    2012-12-01

    Full Text Available The scheduling problem is one of the most widely treated problems in the literature; it is nevertheless a complex, NP-hard problem. When more than one objective is involved, it becomes one of the most challenging problems in the field of operations research. We therefore present a bi-objective model for the job-shop scheduling problem that includes the makespan and the mean flow time. To solve the model, we propose an approach that combines the Simulated Annealing (SA) metaheuristic with the Pareto approach. The model is evaluated on three problems from the literature of sizes 6x6, 10x5 and 10x10. Its results are compared with those of other metaheuristics, and the model is found to perform well on all three problems.
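
    A compact sketch of the combination described above, simulated annealing driving a Pareto (non-dominated) archive over two objectives, is given below. To stay short, it schedules a toy single-machine permutation with total tardiness and mean flow time as the two objectives; the move, cooling schedule and acceptance rule are illustrative assumptions rather than the authors' exact implementation.

```python
import math, random

random.seed(3)
J = 8                                                 # toy problem: order 8 jobs
proc = [random.randint(1, 9) for _ in range(J)]       # processing times
due  = [random.randint(5, 40) for _ in range(J)]      # due dates

def objectives(seq):
    """Two objectives to minimize: total tardiness and mean flow time."""
    t, flow, tard = 0, 0.0, 0
    for j in seq:
        t += proc[j]
        flow += t
        tard += max(0, t - due[j])
    return (tard, flow / J)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, seq, obj):
    """Keep only mutually non-dominated (sequence, objectives) pairs."""
    if any(dominates(o, obj) for _, o in archive):
        return archive
    archive = [(s, o) for s, o in archive if not dominates(obj, o)]
    archive.append((list(seq), obj))
    return archive

seq = list(range(J))
cur = objectives(seq)
archive = [(list(seq), cur)]
T = 10.0
for _ in range(2000):
    i, k = random.sample(range(J), 2)
    cand = list(seq); cand[i], cand[k] = cand[k], cand[i]   # swap move
    obj = objectives(cand)
    # Accept non-dominated candidates, or worse ones with a temperature-dependent probability.
    if not dominates(cur, obj) or random.random() < math.exp(-1.0 / T):
        seq, cur = cand, obj
        archive = update_archive(archive, seq, cur)
    T *= 0.999                                              # cooling schedule
print("Pareto archive objectives:", sorted(o for _, o in archive))
```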

  3. Using the Pareto Distribution to Improve Estimates of Topcoded Earnings

    OpenAIRE

    Philip Armour; Richard V. Burkhauser; Jeff Larrimore

    2014-01-01

    Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...
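
    The core computation behind such cell-mean series is short: estimate the Pareto index above the topcode from less-censored data and replace censored values with the conditional mean alpha/(alpha-1) times the topcode. The simulated earnings and thresholds below are illustrative assumptions, not CPS values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative "internal" earnings data with a Pareto upper tail (alpha = 2.5).
alpha_true, topcode = 2.5, 150_000.0
earnings = 40_000.0 * (rng.pareto(alpha_true, size=20_000) + 1.0)

# 1) Fit the Pareto index from observations above the topcode (Hill/MLE estimator).
tail = earnings[earnings > topcode]
alpha_hat = len(tail) / np.log(tail / topcode).sum()

# 2) Cell mean assigned to topcoded observations: E[Y | Y > topcode] = alpha/(alpha-1) * topcode.
cell_mean = alpha_hat / (alpha_hat - 1.0) * topcode

# 3) Public-use style series: censored values replaced by the estimated cell mean.
public = np.where(earnings > topcode, cell_mean, earnings)
print(f"alpha_hat = {alpha_hat:.2f}, imputed cell mean = {cell_mean:,.0f}")
print(f"true mean above topcode = {tail.mean():,.0f}")
```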

  4. Accelerated life testing design using geometric process for pareto distribution

    OpenAIRE

    Mustafa Kamal; Shazia Zarrin; Arif Ul Islam

    2013-01-01

    In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for Pareto Distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using Fisher information matrix are also obtained. The statistical properties of the parameters ...

  5. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Vol. 41, No. 7 (2012), pp. 1167-1194. ISSN 0361-0918. Grants - others: Aktion (CZ-AT) 51p7, 54p21, 50p14, 54p13. Institutional research plan: CEZ:AV0Z10300504. Keywords: consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.295, year: 2012

  6. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
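
    The hybrid idea, a low-order integration corrected by a time-series forecast of its own error, can be sketched as follows. The synthetic error signal and the statsmodels Holt-Winters call are illustrative assumptions standing in for the propagator residuals analysed in the paper.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(5)

# Illustrative setup: residual(t) is the error of a low-order analytical propagator
# against an accurate reference orbit; it has a secular drift, a periodic signature and noise.
t = np.arange(400)
period = 20
residual = 0.05 * t + 3.0 * np.sin(2 * np.pi * t / period) + 0.3 * rng.normal(size=t.size)

train, horizon = residual[:300], 100

# Additive Holt-Winters model of the residual dynamics (the "prediction technique").
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=period).fit()
forecast = hw.forecast(horizon)

# Hybrid propagation = analytical approximation + forecast of its own error.
rmse_plain = np.sqrt(np.mean(residual[300:] ** 2))            # no correction
rmse_hybrid = np.sqrt(np.mean((residual[300:] - forecast) ** 2))  # corrected
print(f"RMSE without correction: {rmse_plain:.2f}, with Holt-Winters correction: {rmse_hybrid:.2f}")
```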

  7. Bayesian inference for hybrid discrete-continuous stochastic kinetic models

    International Nuclear Information System (INIS)

    Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S

    2014-01-01

    We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’, with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through an MJP with time-dependent hazards. A linear noise approximation (LNA) of the fast reaction dynamics is employed, and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also with a scheme for performing inference for the underlying discrete stochastic model.

  8. Pareto optimal design of sectored toroidal superconducting magnet for SMES

    Energy Technology Data Exchange (ETDEWEB)

    Bhunia, Uttam, E-mail: ubhunia@vecc.gov.in; Saha, Subimal; Chakrabarti, Alok

    2014-10-15

    Highlights: • The optimization approach minimizes both the magnet size and the necessary cable length of a sectored toroidal SMES unit. • The design approach is suitable for low-temperature superconducting cable for a medium-size SMES unit. • It investigates coil parameters with respect to practical engineering aspects. - Abstract: A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of the necessary superconductor length and the torus overall size or volume, which determines a significant part of the cost towards realization of SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, or vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low-temperature niobium–titanium based Rutherford-type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with reasonably acceptable accuracy.

  9. Multicriteria Similarity-Based Anomaly Detection Using Pareto Depth Analysis.

    Science.gov (United States)

    Hsiao, Ko-Jen; Xu, Kevin S; Calder, Jeff; Hero, Alfred O

    2016-06-01

    We consider the problem of identifying patterns in a data set that exhibit anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g., as measured by the nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains, there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including nonmetric measures, and one can test for anomalies by scalarizing using a nonnegative linear combination of them. If the relative importance of the different dissimilarity measures is not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multicriteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria, and shows superior performance on experiments with synthetic and real data sets.
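
    The Pareto depth at the heart of PDA can be computed by repeatedly peeling non-dominated fronts off a set of dyads; the full PDA score then aggregates the depths of a test sample's dyads. The sketch below shows only the depth computation on synthetic two-criterion dyads; the data and any aggregation into an anomaly score are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

def pareto_depths(points):
    """Pareto depth = index of the non-dominated front a point lies on (1 = shallowest),
    obtained by repeatedly peeling off non-dominated points (smaller is better)."""
    depths = np.zeros(len(points), dtype=int)
    remaining = np.arange(len(points))
    front = 1
    while remaining.size:
        pts = points[remaining]
        nondom = [i for i, p in enumerate(pts)
                  if not np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))]
        depths[remaining[nondom]] = front          # assign current front index
        remaining = np.delete(remaining, nondom)   # peel the front off and continue
        front += 1
    return depths

# Toy dyads with two dissimilarity criteria (e.g. two different distance measures).
rng = np.random.default_rng(6)
dyads = rng.uniform(size=(200, 2))
depths = pareto_depths(dyads)
print("sizes of the first fronts:", np.bincount(depths)[1:6], "... max depth:", depths.max())
```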

  10. Pareto optimal design of sectored toroidal superconducting magnet for SMES

    International Nuclear Information System (INIS)

    Bhunia, Uttam; Saha, Subimal; Chakrabarti, Alok

    2014-01-01

    Highlights: • The optimization approach minimizes both the magnet size and the necessary cable length of a sectored toroidal SMES unit. • The design approach is suitable for low-temperature superconducting cable for a medium-size SMES unit. • It investigates coil parameters with respect to practical engineering aspects. - Abstract: A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of the necessary superconductor length and the torus overall size or volume, which determines a significant part of the cost towards realization of SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, or vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low-temperature niobium–titanium based Rutherford-type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with reasonably acceptable accuracy.

  11. Pareto optimal design of sectored toroidal superconducting magnet for SMES

    Science.gov (United States)

    Bhunia, Uttam; Saha, Subimal; Chakrabarti, Alok

    2014-10-01

    A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of the necessary superconductor length and the torus overall size or volume, which determines a significant part of the cost towards realization of SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, or vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low-temperature niobium-titanium based Rutherford-type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with reasonably acceptable accuracy.

  12. Generalized Pareto optimum and semi-classical spinors

    Science.gov (United States)

    Rouleux, M.

    2018-02-01

    In 1971, S. Smale presented a generalization of the Pareto optimum he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 Hamiltonian ℋ = ℋ(p), with entries in C^∞(T*ℝ²), to its normal form near a singular point of the Fresnel surface. Namely, we say that ℋ has the Pareto property if it decomposes, locally, up to a conjugation with regular matrices, as ℋ(p) = u′(p) C(p) (u′(p))*, where u : ℝ² → ℝ² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix (“integrating factor”). In particular this applies in certain cases to the matrix Hamiltonian of elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.

  13. A viable D-term hybrid inflation model

    Science.gov (United States)

    Kadota, Kenji; Kobayashi, Tatsuo; Sumita, Keigo

    2017-11-01

    We propose a new model of the D-term hybrid inflation in the framework of supergravity. Although our model introduces, analogously to the conventional D-term inflation, the inflaton and a pair of scalar fields charged under a U(1) gauge symmetry, we study the logarithmic and exponential dependence on the inflaton field, respectively, for the Kähler and superpotential. This results in a characteristic one-loop scalar potential consisting of linear and exponential terms, which realizes the small-field inflation dominated by the Fayet-Iliopoulos term. With the reasonable values for the coupling coefficients and, in particular, with the U(1) gauge coupling constant comparable to that of the Standard Model, our D-term inflation model can solve the notorious problems in the conventional D-term inflation, namely, the CMB constraints on the spectral index and the generation of cosmic strings.

  14. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with the data of bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  15. A hybrid spatiotemporal drought forecasting model for operational use

    Science.gov (United States)

    Vasiliades, L.; Loukas, A.

    2010-09-01

    Drought forecasting plays an important role in the planning and management of natural resources and water resource systems in a river basin. Early and timely forecasting of a drought event can help to take proactive measures and set out drought mitigation strategies to alleviate the impacts of drought. Spatiotemporal data mining is the extraction of unknown and implicit knowledge, structures, spatiotemporal relationships, or patterns not explicitly stored in spatiotemporal databases. As one of the data mining techniques, forecasting is widely used to predict the unknown future based upon the patterns hidden in the current and past data. This study develops a hybrid spatiotemporal scheme for integrated spatial and temporal forecasting. Temporal forecasting is achieved using feed-forward neural networks and the temporal forecasts are extended to the spatial dimension using a spatial recurrent neural network model. The methodology is demonstrated for an operational meteorological drought index, the Standardized Precipitation Index (SPI), calculated at multiple timescales. Forty-eight precipitation stations and 18 independent precipitation stations, located in the Pinios river basin in the Thessaly region, Greece, were used for the development and spatiotemporal validation of the hybrid scheme. Several quantitative temporal and spatial statistical indices were considered for the performance evaluation of the models. Furthermore, qualitative statistical criteria based on contingency tables between observed and forecasted drought episodes were calculated. The results show that the lead time of forecasting for operational use depends on the SPI timescale. The hybrid spatiotemporal drought forecasting model could be used operationally for forecasting up to three months ahead for short SPI timescales (e.g. 3-6 months) and up to six months ahead for large SPI timescales (e.g. 24 months). These findings could be useful in developing a drought preparedness plan in the region.

  16. On The Modelling Of Hybrid Aerostatic - Gas Journal Bearings

    DEFF Research Database (Denmark)

    Morosi, Stefano; Santos, Ilmar

    2011-01-01

    Gas journal bearings have been increasingly adopted in modern turbo-machinery applications, as they meet the demands of operation at higher rotational speeds, in clean environments and with great efficiency. Due to the fact that gaseous lubricants, typically air, have much lower viscosity than more … The paper presents modeling for hybrid lubrication of a compressible fluid film journal bearing. Additional forces are generated by injecting pressurized air into the bearing gap through orifices located on the bearing walls. A modified form of the compressible Reynolds equation for active lubrication is derived. By solving …

  17. Active diagnosis of hybrid systems - A model predictive approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and the faulty model of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty … can be used as a test signal for a sanity check at commissioning or for detection of faults hidden by regulatory actions of the controller. The method is tested on the two-tank benchmark example.

  18. Software development infrastructure for the HYBRID modeling and simulation project

    International Nuclear Information System (INIS)

    Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk; Rabiti, Cristian; Greenwood, M. Scott

    2016-01-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers.

  19. The Hybrid Airline Model. Generating Quality for Passengers

    Directory of Open Access Journals (Sweden)

    Bogdan AVRAM

    2017-12-01

    Full Text Available This research aims to investigate the different strategies adopted by airline companies in adapting to ongoing changes while developing products and services for passengers, in order to increase their yield, load factor and passenger satisfaction. Finding a balance between costs and service quality in the airline industry is a crucial task for every airline wanting to gain a competitive advantage in the market. Also, the rise of the hybrid business operating model has brought many challenges for airlines, as the line between legacy carriers and low-cost carriers is getting thinner in terms of costs and innovative ideas to create a superior product for the passengers.

  20. Software development infrastructure for the HYBRID modeling and simulation project

    Energy Technology Data Exchange (ETDEWEB)

    Epiney, Aaron S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Jong Suk [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenwood, M. Scott [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers.

  1. Characterizing the Incentive Compatible and Pareto Optimal Efficiency Space for Two Players, k Items, Public Budget and Quasilinear Utilities

    Directory of Open Access Journals (Sweden)

    Anat Lerner

    2014-04-01

    Full Text Available We characterize the efficiency space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto-optimal combinatorial auctions in a model with two players and k nonidentical items. We examine a model with multidimensional types, private values and quasilinear preferences for the players, with one relaxation: one of the players is subject to a publicly known budget constraint. We show that if it is publicly known that the valuation for the largest bundle is less than the budget for at least one of the players, then Vickrey-Clarke-Groves (VCG) uniquely fulfills the basic properties of being deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal. Our characterization of the efficiency space for deterministic budget-constrained combinatorial auctions is similar in spirit to that of Maskin (2000) for Bayesian single-item constrained-efficiency auctions and comparable with Ausubel and Milgrom (2002) for non-constrained combinatorial auctions.

  2. A Lookahead Behavior Model for Multi-Agent Hybrid Simulation

    Directory of Open Access Journals (Sweden)

    Mei Yang

    2017-10-01

    Full Text Available In the military field, multi-agent simulation (MAS) plays an important role in studying wars statistically. For a military simulation system, which involves large-scale entities and generates a very large number of interactions during the runtime, the issue of how to improve the running efficiency is of great concern for researchers. Current solutions mainly use hybrid simulation to gain fewer updates and synchronizations, where some important continuous models are maintained implicitly to keep the system dynamics, and partial resynchronization (PR) is chosen as the preferable state update mechanism. However, problems such as resynchronization interval selection and cyclic dependency remain unsolved in PR, which easily lead to low update efficiency and infinite looping of the state update process. To address these problems, this paper proposes a lookahead behavior model (LBM) to implement a PR-based hybrid simulation. In LBM, a minimal safe time window is used to predict the interactions between implicit models, upon which the resynchronization interval can be efficiently determined. Moreover, the LBM gives an estimated state value in the lookahead process so as to break the state-dependent cycle. The simulation results show that, compared with traditional mechanisms, LBM requires fewer updates and synchronizations.

  3. Causality in Psychiatry: A Hybrid Symptom Network Construct Model

    Directory of Open Access Journals (Sweden)

    Gerald Young

    2015-11-01

    Full Text Available Causality or etiology in psychiatry is marked by standard biomedical, reductionistic models (symptoms reflect the construct involved) that inform approaches to nosology, or classification, such as in the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition; American Psychiatric Association, 2013). However, network approaches to symptom interaction (i.e., symptoms are formative of the construct; e.g., McNally, Robinaugh, Wu, Wang, Deserno, & Borsboom, 2014, for PTSD, posttraumatic stress disorder) are being developed that speak to bottom-up processes in mental disorder, in contrast to the typical top-down psychological construct approach. The present article presents a hybrid top-down, bottom-up model of the relationship between symptoms and mental disorder, viewing symptom expression and their causal complex as a reciprocally dynamic system with multiple levels, from lower-order symptoms in interaction to higher-order constructs affecting them. The hybrid model hinges on a good understanding of the systems theory in which it is embedded, so the article reviews in depth nonlinear dynamical systems theory (NLDST). The article also applies the concept of emergent circular causality (Young, 2011) to symptom development. Conclusions consider that symptoms vary over several dimensions, including subjectivity, objectivity, conscious motivation effort, and unconscious influences, and the degree to which individual (e.g., meaning) and universal (e.g., causal) processes are involved. The opposition between science and skepticism is a complex one that the article addresses in final comments.

  4. Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions

    Science.gov (United States)

    Strutzenberg, Louise L.; Liever, Peter A.

    2011-01-01

    This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustics environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines the high-fidelity CFD analysis capable of identifying the acoustic sources with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper will present an overview of the technology components of the CFD/CAA framework and discuss plans for demonstration and validation against test data.

  5. Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.

    Directory of Open Access Journals (Sweden)

    Zhiwen Yu

    Full Text Available To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has been focused on how to design an effective age-structured epidemic model and to select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies, such as different kinds of vaccine coverage, different vaccine release times and different vaccine deployment methods, are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
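
    The age- and region-structured HSEIR-V model is not reproduced here, but the underlying compartmental mechanics can be sketched with a single-population SEIR model extended by a vaccination flow that switches on at a release time. All rates, population sizes and the release time below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seirv(t, y, beta, sigma, gamma, nu, t_release):
    """SEIR dynamics with vaccination of susceptibles at rate nu after t_release."""
    S, E, I, R, V = y
    N = S + E + I + R + V
    vax = nu * S if t >= t_release else 0.0
    dS = -beta * S * I / N - vax
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    dV = vax
    return [dS, dE, dI, dR, dV]

# Illustrative parameters (per day): transmission, incubation, recovery, vaccination rate.
params = dict(beta=0.45, sigma=1 / 3, gamma=1 / 5, nu=0.01, t_release=30.0)
y0 = [9990.0, 0.0, 10.0, 0.0, 0.0]
sol = solve_ivp(seirv, (0, 200), y0, args=tuple(params.values()), max_step=1.0)

peak_day = sol.t[np.argmax(sol.y[2])]
print(f"epidemic peak around day {peak_day:.0f}, final recovered = {sol.y[3, -1]:.0f}")
```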

  6. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by means of viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structure of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  7. Hybrid Modeling Method for a DEP Based Particle Manipulation

    Directory of Open Access Journals (Sweden)

    Mohamad Sawan

    2013-01-01

    Full Text Available In this paper, a new modeling approach for dielectrophoresis (DEP)-based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps to develop a more complex platform covering several types of manipulation such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and send the new information to ANSYS for another iteration. The beta version of the proposed technique takes into account particle shape, weight and its electrical properties. The first results obtained are consistent with experimental data.

  8. The influence of nonlocal hybridization on ground-state properties of the Falicov-Kimball model

    International Nuclear Information System (INIS)

    Farkasovsky, Pavol

    2005-01-01

    The density matrix renormalization group is used to examine effects of nonlocal hybridization on ground-state properties of the Falicov-Kimball model (FKM) in one dimension. Special attention is devoted to the problem of hybridization-induced insulator-metal transition. It is shown that the picture of insulator-metal transitions found for the FKM with nonlocal hybridization strongly differs from one found for the FKM without hybridization (as well as with local hybridization). The effect of nonlocal hybridization is so strong that it can induce the insulator-metal transition, even in the half-filled band case where the ground states of the FKM without hybridization are insulating for all finite Coulomb interactions. Outside the half-filled band case the metal-insulator transition driven by pressure is found for finite values of nonlocal hybridization

  9. Variational principle for the Pareto power law.

    Science.gov (United States)

    Chakraborti, Anirban; Patriarca, Marco

    2009-11-27

    A mechanism is proposed for the appearance of power-law distributions in various complex systems. It is shown that in a conservative mechanical system composed of subsystems with different numbers of degrees of freedom a robust power-law tail can appear in the equilibrium distribution of energy as a result of certain superpositions of the canonical equilibrium energy densities of the subsystems. The derivation only uses a variational principle based on the Boltzmann entropy, without assumptions outside the framework of canonical equilibrium statistical mechanics. Two examples are discussed, free diffusion on a complex network and a kinetic model of wealth exchange. The mechanism is illustrated in the general case through an exactly solvable mechanical model of a dimensionally heterogeneous system.
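
    The kinetic wealth-exchange example mentioned in this record is straightforward to simulate: agents repeatedly exchange a random fraction of their pooled wealth, and with heterogeneous saving propensities the upper tail of the wealth distribution becomes Pareto-like. The model variant (Chatterjee-Chakrabarti-Manna type), run length and tail fit below are illustrative assumptions rather than the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(7)

N, steps = 1000, 300_000
w = np.ones(N)                               # start with equal wealth
lam = rng.uniform(0.0, 1.0, size=N)          # heterogeneous saving propensities

# Pairwise kinetic exchanges: each agent saves a fraction lam of its wealth,
# the remainder of the pair's wealth is randomly redistributed between them.
for _ in range(steps):
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        continue
    pot = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
    eps = rng.uniform()
    w[i] = lam[i] * w[i] + eps * pot
    w[j] = lam[j] * w[j] + (1 - eps) * pot

# A straight line of log(rank) vs log(wealth) in the upper tail indicates a Pareto law.
tail = np.sort(w)[-200:]
slope = np.polyfit(np.log(tail), np.log(np.arange(len(tail), 0, -1)), 1)[0]
print(f"estimated tail exponent (Pareto index) ~ {-slope:.2f}")
```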

  10. Kantian Optimization, Social Ethos, and Pareto Efficiency

    OpenAIRE

    John E. Roemer

    2012-01-01

    Although evidence accrues in biology, anthropology and experimental economics that homo sapiens is a cooperative species, the reigning assumption in economic theory is that individuals optimize in an autarkic manner (as in Nash and Walrasian equilibrium). I here postulate an interdependent kind of optimizing behavior, called Kantian. It is shown that in simple economic models, when there are negative externalities (such as congestion effects from use of a commonly owned resource) or positive ...

  11. Dynamic Modeling and Simulation of a Switched Reluctance Motor in a Series Hybrid Electric Vehicle

    OpenAIRE

    Siavash Sadeghi; Mojtaba Mirsalim; Arash Hassanpour Isfahani

    2010-01-01

    Dynamic behavior analysis of electric motors is required in order to accurately evaluate the performance, energy consumption and pollution level of hybrid electric vehicles. Simulation tools for hybrid electric vehicles are divided into steady-state and dynamic models. Tools with steady-state models are useful for system-level analysis, whereas tools that utilize dynamic models give in-depth information about the behavior of sublevel components. For the accurate prediction of hybrid electric vehicl...

  12. An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2017-10-01

    Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient and to avoid information overload by means of an interactive personalized recommendation system using a hybrid algorithm model. The proposed model first uses various recommendation algorithms to obtain a list of original recommendation results. Combined with the customer’s feedback in an interactive manner, it then establishes the weights of the corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommended products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of an experimental study on a Taobao online dress shop clearly show that the proposed method increases the efficiency of data mining in terms of consumer coverage, consumer discovery accuracy and recommendation recall. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and solves the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.
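
    A stripped-down version of the interactive weighting idea follows: several base recommenders produce scores, user feedback on the shown items nudges the algorithm weights, and the fused score drives the next round. The base scores and the feedback rule are illustrative assumptions; the paper fuses results with the synthetic formula of evidence theory rather than a plain weighted sum.

```python
import numpy as np

rng = np.random.default_rng(8)
n_items, n_algos = 50, 3

# Scores from three base recommenders (e.g. collaborative, content-based, popularity).
scores = rng.uniform(size=(n_algos, n_items))
weights = np.full(n_algos, 1.0 / n_algos)

def recommend(k=5):
    fused = weights @ scores                    # weighted fusion of algorithm scores
    return np.argsort(fused)[::-1][:k]

def update_weights(shown, liked, lr=0.2):
    """Increase the weight of algorithms that scored the liked items highly."""
    global weights
    for a in range(n_algos):
        liked_scores = [scores[a, i] for i in shown if i in liked]
        hits = float(np.mean(liked_scores)) if liked_scores else 0.0
        weights[a] += lr * hits
    weights /= weights.sum()                    # renormalize

# One interaction round with simulated feedback.
shown = recommend()
liked = {int(i) for i in shown if rng.uniform() < 0.4}   # pretend the user liked some items
update_weights(shown, liked)
print("recommended:", shown, "updated weights:", np.round(weights, 3))
```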

  13. A Probability-Based Hybrid User Model for Recommendation System

    Directory of Open Access Journals (Sweden)

    Jia Hao

    2016-01-01

    Full Text Available With the rapid development of information communication technology, the available information or knowledge is exponentially increased, and this causes the well-known information overload phenomenon. This problem is more serious in product design corporations because over half of the valuable design time is consumed in knowledge acquisition, which greatly extends the design cycle and weakens competitiveness. Therefore, recommender systems become very important in the product design domain. This research presents a probability-based hybrid user model, which is a combination of collaborative filtering and content-based filtering. This hybrid model utilizes user ratings and item topics or classes, which are available in the domain of product design, to predict the knowledge requirement. The comprehensive analysis of the experimental results shows that the proposed method gains better performance in most of the parameter settings. This work contributes a probability-based method to the community for implementing recommender systems when only user ratings and item topics are available.

  14. Hybrid quantum-classical modeling of quantum dot devices

    Science.gov (United States)

    Kantner, Markus; Mittnenzweig, Markus; Koprucki, Thomas

    2017-11-01

    The design of electrically driven quantum dot devices for quantum optical applications asks for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semiclassical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: it enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way, that the resulting hybrid system obeys the fundamental axioms of (non)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with the thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regime.

  15. Axelrod Model of Social Influence with Cultural Hybridization

    Science.gov (United States)

    Radillo-Díaz, Alejandro; Pérez, Luis A.; Del Castillo-Mussot, Marcelo

    2012-10-01

    Since cultural interactions between a pair of social agents involve changes in both individuals, we present simulations of a new model based on Axelrod's homogenization mechanism that includes hybridization or mixture of the agents' features. In this new hybridization model, once a cultural feature of a pair of agents has been chosen for the interaction, the average of the values for this feature is reassigned as the new value for both agents after interaction. Moreover, a parameter representing social tolerance is implemented in order to quantify whether agents are similar enough to engage in interaction, as well as to determine whether they belong to the same cluster of similar agents after the system has reached the frozen state. The transitions from a homogeneous state to a fragmented one decrease in abruptness as tolerance is increased. Additionally, the entropy associated with the system presents a maximum within the transition, the width of which increases as tolerance does. Moreover, a plateau was found inside the transition for a low-tolerance system of agents with only two cultural features.
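
    A compact sketch of the hybridization rule described above: agents on a periodic lattice hold continuous cultural features, interact only when they are closer than the tolerance, and the selected feature of both agents is replaced by its average. Lattice size, feature count, tolerance and run length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
L, F, tolerance, steps = 10, 2, 0.4, 100_000

grid = rng.uniform(size=(L, L, F))           # cultural features in [0, 1]
moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # von Neumann neighbourhood

for _ in range(steps):
    x, y = rng.integers(L), rng.integers(L)
    dx, dy = moves[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L      # periodic boundaries
    # Agents interact only if their cultures are closer than the tolerance.
    if np.abs(grid[x, y] - grid[nx, ny]).mean() < tolerance:
        f = rng.integers(F)
        avg = 0.5 * (grid[x, y, f] + grid[nx, ny, f])
        grid[x, y, f] = grid[nx, ny, f] = avg   # hybridization: both agents move to the mean

# Mean pairwise cultural distance as a crude (in)homogeneity measure of the frozen state.
flat = grid.reshape(-1, F)
dist = np.abs(flat[:, None, :] - flat[None, :, :]).mean(axis=2)
print("mean pairwise cultural distance after relaxation:", round(float(dist.mean()), 3))
```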

  16. Modelling the solar wind interaction with Mercury by a quasi-neutral hybrid model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    Full Text Available The quasi-neutral hybrid model is a self-consistent modelling approach that includes positively charged particles and an electron fluid. The approach has received increasing interest in space plasma physics research because it makes it possible to study several plasma physical processes that are difficult or impossible to model with self-consistent fluid models, such as the effects associated with the ions’ finite gyroradius, the velocity difference between different ion species, or non-Maxwellian velocity distribution functions. To date, quasi-neutral hybrid models have been used to study the solar wind interaction with the non-magnetised Solar System bodies of Mars, Venus, Titan and comets. Localized, two-dimensional hybrid model runs have also been made to study the terrestrial dayside magnetosheath. However, the Hermean plasma environment has not yet been analysed by a global quasi-neutral hybrid model.

    In this paper we present a new quasi-neutral hybrid model developed to study various processes associated with the Mercury-solar wind interaction. Emphasis is placed on addressing advantages and disadvantages of the approach to study different plasma physical processes near the planet. The basic assumptions of the approach and the algorithms used in the new model are thoroughly presented. Finally, some of the first three-dimensional hybrid model runs made for Mercury are presented.

    The resulting macroscopic plasma parameters and the morphology of the magnetic field demonstrate the applicability of the new approach to studying the Mercury-solar wind interaction globally. In addition, the real advantage of the kinetic hybrid model approach is the ability to study the properties of individual ions, and the study clearly demonstrates the large potential of the approach to address these more detailed issues with a quasi-neutral hybrid model in the future.

    Key words. Magnetospheric physics

  17. Test scheduling optimization for 3D network-on-chip based on cloud evolutionary algorithm of Pareto multi-objective

    Science.gov (United States)

    Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan

    2018-03-01

    In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted in non-dominated order and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. This not only obtains the shortest test time, but also optimizes the power consumption of the 3D NoC.

  18. Electromagnetic moments of hadrons and quarks in a hybrid model

    International Nuclear Information System (INIS)

    Gerasimov, S.B.

    1989-01-01

    Magnetic moments of baryons are analyzed on the basis of general sum rules following from the theory of broken symmetries and quark models including the relativistic effects and hadronic corrections due to the meson exchange currents. A new sum rule is proposed for the hyperon magnetic moments, which is in accord with the most precise new data and also with a theory of the electromagnetic ΛΣ⁰ mixing. The numerical values of the quark electromagnetic moments are obtained within a hybrid model treating the pion cloud effects through the local coupling of the pion field with the constituent massive quarks. Possible sensitivity of the weak neutral current magnetic moments to violation of the Okubo-Zweig-Iizuka rule is emphasized and discussed. 39 refs.; 1 fig

  19. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the extensive calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, the proposal of an MCDM model for supplier selection, and the application of these to a real case are the unique features of this study.
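
    The TOPSIS step at the end of this pipeline is easy to sketch: normalize the decision matrix, weight it (here with weights of the kind ANP would supply), and rank suppliers by relative closeness to the ideal solution. The criteria, weights and supplier scores below are illustrative assumptions.

```python
import numpy as np

# Rows: candidate suppliers, columns: criteria (e.g. cost, quality, delivery rate, CO2).
X = np.array([[70.0, 8.0, 0.90, 120.0],
              [55.0, 6.5, 0.80,  90.0],
              [65.0, 9.0, 0.95, 150.0]])
weights = np.array([0.3, 0.3, 0.2, 0.2])         # e.g. obtained from ANP
benefit = np.array([False, True, True, False])   # cost and CO2 are to be minimized

# 1) Vector-normalize and weight the decision matrix.
V = weights * X / np.linalg.norm(X, axis=0)

# 2) Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3) Closeness coefficient: distance to the anti-ideal relative to the total distance.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("supplier ranking (best first):", np.argsort(closeness)[::-1])
```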

  20. Single Cell Dynamics Causes Pareto-Like Effect in Stimulated T Cell Populations.

    Science.gov (United States)

    Cosette, Jérémie; Moussy, Alice; Onodi, Fanny; Auffret-Cariou, Adrien; Neildez-Nguyen, Thi My Anh; Paldi, Andras; Stockholm, Daniel

    2015-12-09

    Cell fate choice during the process of differentiation may obey deterministic or stochastic rules. In order to discriminate between these two strategies, we used time-lapse microscopy of individual murine CD4+ T cells, which allows investigation of the dynamics of proliferation and fate commitment. We observed highly heterogeneous division and death rates between individual clones, resulting in a Pareto-like dominance of a few clones at the end of the experiment. Commitment to the Treg fate was monitored using the expression of a GFP reporter gene under the control of the endogenous Foxp3 promoter. All possible combinations of proliferation and differentiation were observed and resulted in exclusively GFP-, GFP+ or mixed-phenotype clones of very different population sizes. We simulated the process of proliferation and differentiation using a simple mathematical model of stochastic decision-making based on the experimentally observed parameters. The simulations show that a stochastic scenario is fully compatible with the observed Pareto-like imbalance in the final population.

  1. The Reduction of Modal Sensor Channels through a Pareto Chart Methodology

    Directory of Open Access Journals (Sweden)

    Kaci J. Lemler

    2015-01-01

    Full Text Available Presented herein is a new experimental sensor placement procedure developed to assist in placing sensors at key locations and efficiently reduce the number of channels required for a full modal analysis. It is a fast, noncontact method that uses a laser vibrometer to gather a candidate set of sensor locations. These locations are then evaluated using a Pareto chart to obtain a reduced set of sensor locations that still captures the motion of the structure. The Pareto chart is employed to identify the points on a structure that have the largest reaction to an input excitation and thus reduce the number of channels while capturing the most significant data. This method enhances the correct and efficient placement of sensors, which is crucial in modal testing. Previously, this required the development and/or use of a complicated model or set of equations. This new technique is applied in a case study on a small unmanned aerial system. The test procedure is presented and the results are discussed.
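
    The Pareto-chart step itself reduces to sorting candidate locations by response magnitude and keeping those that account for a chosen share of the cumulative response. A minimal sketch with made-up vibrometer amplitudes follows; the 80% cut-off and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

# RMS response amplitude measured by the laser vibrometer at each candidate location.
amplitudes = rng.pareto(2.0, size=40) + 0.1
order = np.argsort(amplitudes)[::-1]                   # sort locations, largest response first

cumulative = np.cumsum(amplitudes[order]) / amplitudes.sum()
keep = order[: np.searchsorted(cumulative, 0.80) + 1]  # locations covering ~80% of total response

print(f"kept {len(keep)} of {len(amplitudes)} candidate locations")
print("selected sensor channels:", sorted(int(i) for i in keep))
```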

  2. Hybrid continuum-coarse-grained modeling of erythrocytes

    Science.gov (United States)

    Lyu, Jinming; Chen, Paul G.; Boedec, Gwenn; Leonetti, Marc; Jaeger, Marc

    2018-06-01

    The red blood cell (RBC) membrane is a composite structure, consisting of a phospholipid bilayer and an underlying membrane-associated cytoskeleton. Both continuum and particle-based coarse-grained RBC models make use of a set of vertices connected by edges to represent the RBC membrane, which can be seen as a triangular surface mesh for the former and a spring network for the latter. Here, we present a modeling approach combining an existing continuum vesicle model with a coarse-grained model for the cytoskeleton. Compared to other two-component approaches, our method relies on only one mesh, representing the cytoskeleton, whose velocity in the tangential direction of the membrane may be different from that of the lipid bilayer. The finitely extensible nonlinear elastic (FENE) spring force law in combination with a repulsive force defined as a power function (POW), called FENE-POW, is used to describe the elastic properties of the RBC membrane. The mechanical interaction between the lipid bilayer and the cytoskeleton is explicitly computed and incorporated into the vesicle model. Our model includes the fundamental mechanical properties of the RBC membrane, namely fluidity and bending rigidity of the lipid bilayer, and shear elasticity of the cytoskeleton while maintaining surface-area and volume conservation constraint. We present three simulation examples to demonstrate the effectiveness of this hybrid continuum-coarse-grained model for the study of RBCs in fluid flows.
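
    The FENE-POW law mentioned above combines a finitely extensible nonlinear elastic spring with a power-law repulsion; a minimal sketch of the force on one cytoskeleton edge follows. All constants, and the exact repulsive exponent, are illustrative assumptions (the published model fixes them from the shear modulus and the equilibrium edge length).

```python
import numpy as np

def fene_pow_force(r, r_max=2.2, ks=1.0, kp=0.3, m=2):
    """Edge force for a FENE spring with a power-law (POW) repulsion.
    Positive values push the two vertices apart, negative values pull them together."""
    attractive = -ks * r / (1.0 - (r / r_max) ** 2)   # FENE: diverges as r -> r_max
    repulsive = kp / r ** m                           # POW: diverges as r -> 0
    return attractive + repulsive

# The equilibrium spring length is where attraction and repulsion balance.
r = np.linspace(0.2, 2.0, 1000)
f = fene_pow_force(r)
r_eq = r[np.argmin(np.abs(f))]
print(f"approximate equilibrium edge length: {r_eq:.2f}")
```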

  3. A hybrid absorbing boundary condition for frequency-domain finite-difference modelling

    International Nuclear Information System (INIS)

    Ren, Zhiming; Liu, Yang

    2013-01-01

    Liu and Sen (2010 Geophysics 75 A1–6; 2012 Geophys. Prospect. 60 1114–32) proposed an efficient hybrid scheme to significantly absorb boundary reflections for acoustic and elastic wave modelling in the time domain. In this paper, we extend the hybrid absorbing boundary condition (ABC) into the frequency domain and develop specific strategies for regular-grid and staggered-grid modelling, respectively. Numerical modelling tests of acoustic, visco-acoustic, elastic and vertically transversely isotropic (VTI) equations show significant absorptions for frequency-domain modelling. The modelling results of the Marmousi model and the salt model also demonstrate the effectiveness of the hybrid ABC. For elastic modelling, the hybrid Higdon ABC and the hybrid Clayton and Engquist (CE) ABC are implemented, respectively. Numerical simulations show that the hybrid Higdon ABC gets better absorption than the hybrid CE ABC, especially for S-waves. We further compare the hybrid ABC with the classical perfectly matched layer (PML). Results show that the two ABCs cost the same computation time and memory space for the same absorption width. However, the hybrid ABC is more effective than the PML for the same small absorption width and the absorption effects of the two ABCs gradually become similar when the absorption width is increased. (paper)

  4. A new approach to flow simulation using hybrid models

    Science.gov (United States)

    Solgi, Abazar; Zarei, Heidar; Nourani, Vahid; Bahmani, Ramin

    2017-11-01

    The necessity of flow prediction in rivers for proper management of water resources, together with the need to determine the inflow to dam reservoirs, to design efficient flood warning systems and so forth, has always led water researchers to seek models with fast response and low error. In recent years, the development of artificial neural networks and wavelet theory, and the use of combinations of models, has helped researchers to estimate river flow ever more accurately. In this study, daily and monthly scales were used for simulating the flow of the Gamasiyab River, Nahavand, Iran. The first simulation was done using two types of models, ANN and ANFIS. Then, using wavelet theory to decompose the input signals of the parameters used, sub-signals were obtained and fed into the ANN and ANFIS to obtain the hybrid models WANN and WANFIS. In this study, in addition to the precipitation and flow parameters, temperature and evaporation were used to analyze their effects on the simulation. The results showed that using the wavelet transform improved the performance of the models on both the monthly and the daily scale. However, it had a greater effect on the monthly scale, and WANFIS was the best model.
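
    The WANN idea of decomposing each input signal into wavelet sub-signals and feeding them to a neural network can be sketched as follows; the wavelet family, decomposition level, synthetic data and network size are all assumptions for illustration, not the configuration used in the study (requires pywt and scikit-learn):

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_features(series, wavelet="db4", level=3):
    """Decompose a signal into approximation/detail sub-signals and return
    them stacked as features (each reconstructed back to full length)."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    subsignals = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subsignals.append(pywt.waverec(keep, wavelet)[: len(series)])
    return np.column_stack(subsignals)

# Illustrative monthly rainfall and flow series (synthetic, for the sketch only)
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 10.0, 240)
flow = 0.6 * rain + 5.0 * np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 2, 240)

X = wavelet_features(rain)                 # sub-signals of the input
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:-12], flow[:-12])             # train on all but the last year
print(model.score(X[-12:], flow[-12:]))    # R^2 on the held-out year
```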

  5. Modeling integrated cellular machinery using hybrid Petri-Boolean networks.

    Directory of Open Access Journals (Sweden)

    Natalie Berestovsky

    Full Text Available The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally made it possible to tame the computational complexity of dealing with the entire system, allowed the use of modeling techniques that are specific to each of the components, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes a loss of information and of the power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network and can be perturbed to generate testable hypotheses. Our model is qualitative, is mostly built upon knowledge from the literature, and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction make it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them.

  6. The Pareto Analysis for Establishing Content Criteria in Surgical Training.

    Science.gov (United States)

    Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N

    2016-01-01

    Current surgical training is still highly dependent on expensive operating room (OR) experience. Although there have been many attempts to transfer more training to the skills laboratory, little research is focused on which technical behaviors can lead to the highest profit when they are trained outside the OR. The Pareto principle states that in any population that contributes to a common effect, a few account for the bulk of the effect. This principle has been widely used in business management to increase company profits. This study uses the Pareto principle for establishing content criteria for more efficient surgical training. A retrospective study was conducted to assess verbal guidance provided by 9 supervising surgeons to 12 trainees performing 64 laparoscopic cholecystectomies in the OR. The verbal corrections were documented, tallied, and clustered according to the aimed change in novice behavior. The corrections were rank ordered, and a cumulative distribution curve was used to calculate which corrections accounted for 80% of the total number of verbal corrections. In total, 253 different verbal corrections were uttered 1587 times and were categorized into 40 different clusters of aimed changes in novice behaviors. The 35 highest-ranking verbal corrections (14%) and the 11 highest-ranking clusters (28%) accounted for 80% of the total number of given verbal corrections. Following the Pareto principle, we were able to identify the aspects of trainee behavior that account for most corrections given by supervisors during a laparoscopic cholecystectomy on humans. This strategy can be used for the development of new training programs to prepare the trainee in advance for the challenges encountered in the clinical setting in an OR. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
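
    The underlying Pareto analysis is simply a rank-ordered cumulative count with an 80% cut-off. A small sketch follows, in which the per-cluster counts are hypothetical (only the total of 1587 corrections matches the abstract):

```python
from collections import Counter

# Hypothetical tally of verbal corrections per behaviour cluster
corrections = Counter({
    "instrument handling": 410, "camera navigation": 300, "dissection plane": 250,
    "traction/exposure": 180, "clip placement": 150, "energy use": 120,
    "ergonomics": 90, "communication": 60, "other": 27,
})

total = sum(corrections.values())
cumulative, selected = 0, []
for cluster, count in corrections.most_common():   # rank-ordered, largest first
    cumulative += count
    selected.append(cluster)
    if cumulative / total >= 0.80:                  # Pareto cut-off
        break
print(selected, f"{cumulative / total:.0%} of all corrections")
```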

  7. Modeling and optimization of batteryless hybrid PV (photovoltaic)/Diesel systems for off-grid applications

    International Nuclear Information System (INIS)

    Tsuanyo, David; Azoumah, Yao; Aussel, Didier; Neveu, Pierre

    2015-01-01

    This paper presents a new model and optimization procedure for off-grid hybrid PV (photovoltaic)/Diesel systems operating without battery storage. The proposed technico-economic model takes into account the variability of both the solar irradiation and the electrical loads. It allows optimizing the design and the operation of the hybrid systems by searching for their lowest LCOE (Levelized Cost of Electricity). Two cases have been investigated, identical Diesel generators and Diesel generators of different sizes, and both are compared to conventional standalone Diesel generator systems. For the same load profile, the optimization results show that the LCOE of the optimized batteryless hybrid solar PV/Diesel system (0.289 €/kWh for the hybrid system with identical Diesel generators and 0.284 €/kWh for the hybrid system with different sizes of Diesel generators) is lower than the LCOE obtained with standalone Diesel generators (0.32 €/kWh in both cases). The obtained results are then confirmed by the HOMER (Hybrid Optimization Model for Electric Renewables) software. - Highlights: • A technico-economic model for optimal design and operation management of batteryless hybrid systems is developed. • The model allows optimizing design and operation of hybrid systems by ensuring their lowest LCOE. • The model was validated by HOMER. • Batteryless hybrid systems are suitable for off-grid applications
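
    The LCOE criterion used for the optimization is the ratio of discounted lifetime costs to discounted lifetime energy production. A minimal sketch with purely illustrative numbers (not the paper's system data):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years):
    """Levelized cost of electricity: discounted lifetime costs divided by
    discounted lifetime energy production (EUR/kWh)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Illustrative numbers only
value = lcoe(capex=250_000, annual_opex=18_000, annual_energy_kwh=120_000,
             discount_rate=0.08, lifetime_years=20)
print(f"LCOE: {value:.3f} EUR/kWh")
```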

  8. Modelling of hybrid energy system - Part I: Problem formulation and model development

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, Ajai; Saini, R.P.; Sharma, M.P. [Alternate Hydro Energy Centre, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand 247667 (India)

    2011-02-15

    A well designed hybrid energy system can be cost effective, highly reliable, and can improve the quality of life in remote rural areas. The economic constraints can be met if these systems are fundamentally well designed, use appropriate technology and make use of effective dispatch control techniques. This first paper of a three-part series presents the analysis and design of a mixed integer linear mathematical programming model (time series) to determine the optimal operation and cost optimization of a hybrid energy generation system consisting of a photovoltaic array, biomass (fuelwood), biogas, small/micro-hydro, a battery bank and a fossil fuel generator. The optimization is aimed at minimizing the cost function subject to demand and resource-potential constraints. Further, mathematical models of all other components of the hybrid energy system are also developed. The generation mix considered is typical of remote rural areas of India, but the approach may be applied to other rural areas as well. (author)

  9. Simulation of hybrid vehicle propulsion with an advanced battery model

    Energy Technology Data Exchange (ETDEWEB)

    Nallabolu, S.; Kostetzer, L.; Rudnyi, E. [CADFEM GmbH, Grafing (Germany); Geppert, M.; Quinger, D. [LION Smart GmbH, Frieding (Germany)

    2011-07-01

    In recent years there has been increasing concern about global warming and greenhouse gas emissions. In addition to the environmental issues, the predicted scarcity of oil supplies and the dramatic increase in oil prices put new demands on vehicle design. As a result, energy efficiency and reduced emissions have become one of the main selling points for automobiles. Hybrid electric vehicles (HEV) have therefore become an interesting technology for governments and the automotive industry. HEV are more complicated than conventional vehicles because they contain more electrical components such as electric machines, power electronics, electronic continuously variable transmissions (CVT), and embedded powertrain controllers. Advanced energy storage devices and energy converters, such as Li-ion batteries, ultracapacitors, and fuel cells, are also considered. A detailed vehicle model is necessary for energy flow analysis and vehicle performance simulation. Computer simulation is indispensable to facilitate the examination of the vast hybrid electric vehicle design space, with the aim of predicting vehicle performance over driving profiles and estimating fuel consumption and pollutant emissions. There are various types of mathematical models and simulators available to perform system simulation of vehicle propulsion. One of the standard methods to model the complete vehicle powertrain is "backward quasi-static modeling". In this method vehicle subsystems are defined by empirical models in the form of look-up tables and efficiency maps. The interaction between adjacent subsystems of the vehicle is defined through the amount of power flow. Under this technique, modeling of vehicle subsystems such as the motor, engine, gearbox and battery is based on block diagrams. The vehicle model is applied in two case studies to evaluate the vehicle performance and fuel consumption. In the first case study the effect
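
    The backward quasi-static idea can be sketched as follows: wheel power is computed from the drive cycle and vehicle parameters, then divided (or multiplied, during regeneration) by component efficiencies back to the battery. Constant efficiencies stand in for the look-up tables and efficiency maps that a real tool would use, and all parameter values are illustrative:

```python
import numpy as np

def backward_quasistatic_energy(speed_mps, dt=1.0, mass=1500.0,
                                c_rr=0.01, rho=1.2, cd_a=0.7,
                                eta_gearbox=0.95, eta_motor=0.90, eta_battery=0.95):
    """Backward quasi-static energy estimate: wheel power from the drive
    cycle, divided by component efficiencies back to the battery."""
    v = np.asarray(speed_mps, dtype=float)
    a = np.gradient(v, dt)
    g = 9.81
    f_wheel = mass * a + c_rr * mass * g + 0.5 * rho * cd_a * v ** 2
    p_wheel = f_wheel * v
    p_batt = np.where(p_wheel >= 0,
                      p_wheel / (eta_gearbox * eta_motor * eta_battery),
                      p_wheel * eta_gearbox * eta_motor * eta_battery)  # regeneration
    return np.trapz(p_batt, dx=dt) / 3.6e6        # kWh drawn from the battery

# Simple accelerate / cruise / brake cycle (speeds in m/s)
cycle = np.concatenate([np.linspace(0, 15, 60), np.full(120, 15), np.linspace(15, 0, 60)])
print(f"{backward_quasistatic_energy(cycle):.3f} kWh")
```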

  10. Recent developments on the UrQMD hybrid model

    Energy Technology Data Exchange (ETDEWEB)

    Steinheimer, J., E-mail: steinheimer@th.physik.uni-frankfurt.de; Nahrgang, M., E-mail: nahrgang@th.physik.uni-frankfurt.de; Gerhard, J., E-mail: jochen.gerhard@compeng.uni-frankfurt.de; Schramm, S., E-mail: schramm@fias.uni-frankfurt.de; Bleicher, M., E-mail: bleicher@fias.uni-frankfurt.de [Frankfurt Institute for Advanced Studies (FIAS) (Germany)

    2012-06-15

    We present recent results from the UrQMD hybrid approach investigating the influence of a deconfinement phase transition on the dynamics of hot and dense nuclear matter. In the hydrodynamic stage an equation of state that incorporates a critical end-point (CEP) in line with lattice data is used. The equation of state describes chiral restoration as well as the deconfinement phase transition. We compare the results from this new equation of state to results obtained by applying a hadron resonance gas equation of state, focusing on bulk observables. Furthermore we will discuss future improvements of the hydrodynamic model. This includes the formulation of chiral fluid dynamics to be able to study the effects of a chiral critical point as well as considerable improvements in terms of computational time which would open up possibilities for observables that require high statistics.

  11. Modeling of Hybrid Permanent Magnetic-Gas Bearings

    DEFF Research Database (Denmark)

    Morosi, Stefano; Santos, Ilmar

    2009-01-01

    Modern turbomachinery applications nowadays require ever-growing rotational speeds and a high degree of reliability. It then becomes natural to focus research attention on contact-free bearing elements. The present alternatives focus on gas lubricated journal bearings or magnetic bearings.... In the present paper, a detailed mathematical model of the gas bearing based on the compressible form of the Reynolds equation is presented. Perturbation theory is applied in order to identify the dynamic characteristics of the bearing. Due to the simple design of the magnetic bearing elements - being...... the rotor equilibrium position can be made independent of the rotational speed and applied load; it becomes a function of the passive magnetic bearing offset. By adjusting the offset it is possible to significantly influence the dynamic coefficients of the hybrid bearing....

  12. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that the proposed model can not only be used to select the best project, but also to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.

  13. RF modeling of the ITER-relevant lower hybrid antenna

    International Nuclear Information System (INIS)

    Hillairet, J.; Ceccuzzi, S.; Belo, J.; Marfisi, L.; Artaud, J.F.; Bae, Y.S.; Berger-By, G.; Bernard, J.M.; Cara, Ph.; Cardinali, A.; Castaldo, C.; Cesario, R.; Decker, J.; Delpech, L.; Ekedahl, A.; Garcia, J.; Garibaldi, P.; Goniche, M.; Guilhem, D.; Hoang, G.T.

    2011-01-01

    In the frame of the EFDA task HCD-08-03-01, a 5 GHz Lower Hybrid system which should be able to deliver 20 MW CW on ITER and sustain the expected high heat fluxes has been reviewed. The design and overall dimensions of the key RF elements of the launcher and its subsystem has been updated from the 2001 design in collaboration with ITER organization. Modeling of the LH wave propagation and absorption into the plasma shows that the optimal parallel index must be chosen between 1.9 and 2.0 for the ITER steady-state scenario. The present study has been made with n || = 2.0 but can be adapted for n || = 1.9. Individual components have been studied separately giving confidence on the global RF design of the whole antenna.

  14. Modelling and Investigation of a Hybrid Thermal Energy Harvester

    Directory of Open Access Journals (Sweden)

    Todorov Todor

    2018-01-01

    Full Text Available The presented paper deals with dynamical and experimental investigations of a hybrid energy harvester containing a shape memory alloy (SMA) wire and an elastic cantilever with a piezoelectric layer. The SMA wire periodically changes its temperature under the influence of a heated plate that approaches and moves away from the SMA wire. The change of SMA wire length causes rotation of the hot plate. The plate is heated by a heater with constant temperature. The repeated SMA wire extensions and contractions bend the piezoelectric cantilever, which generates electric charges. The shape memory effect is represented as a temperature-dependent approximation of the Young's modulus. A dynamical model of the energy harvester is created and some analytical investigations are presented. With the help of an experimental setup the acceleration, the force, the temperature, and the output voltage have been measured. The theoretical results are validated experimentally. Some conclusions are made about the best performance of the energy harvester.

  15. Exploring the lambda model of the hybrid superstring

    Energy Technology Data Exchange (ETDEWEB)

    Schmidtt, David M. [Instituto de Física Teórica IFT/UNESP,Rua Dr. Bento Teobaldo Ferraz 271, Bloco II, CEP 01140-070, São Paulo-SP (Brazil)

    2016-10-26

    The purpose of this contribution is to initiate the study of integrable deformations for different superstring theory formalisms that manifest the property of (classical) integrability. In this paper we choose the hybrid formalism of the superstring in the background AdS{sub 2}×S{sup 2} and explore in detail the most immediate consequences of its λ-deformation. The resulting action functional corresponds to the λ-model of the matter part of the fairly more sophisticated pure spinor formalism, which is also known to be classically integrable. In particular, the deformation preserves the integrability and the one-loop conformal invariance of its parent theory, hence being a marginal deformation.

  16. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    Science.gov (United States)

    Bang, Youngsuk

    hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic natures of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.

  17. Bounded Model Checking and Inductive Verification of Hybrid Discrete-Continuous Systems

    DEFF Research Database (Denmark)

    Becker, Bernd; Behle, Markus; Eisenbrand, Fritz

    2004-01-01

    We present a concept to significantly advance the state of the art for bounded model checking (BMC) and inductive verification (IV) of hybrid discrete-continuous systems. Our approach combines the expertise of partners coming from different domains, like hybrid systems modeling and digital circuit verification...

  18. Pedagogy and Process: A Case Study of Writing in a Hybrid Learning Model

    Science.gov (United States)

    Keiner, Jason F.

    2017-01-01

    This qualitative case study explored the perceived experiences and outcomes of writing in a hybrid model of instruction in a large suburban high school. In particular, the impact of a hybrid model on the writing process and on future writing performance were examined. In addition, teacher expectation and teacher attitude and their impact upon…

  19. Control-relevant modeling and simulation of a SOFC-GT hybrid system

    Directory of Open Access Journals (Sweden)

    Rambabu Kandepu

    2006-07-01

    Full Text Available In this paper, control-relevant models of the most important components in a SOFC-GT hybrid system are described. Dynamic simulations are performed on the overall hybrid system. The model is used to develop a simple control structure, but the simulations show that more elaborate control is needed.

  20. Control-relevant modeling and simulation of a SOFC-GT hybrid system

    OpenAIRE

    Rambabu Kandepu; Lars Imsland; Christoph Stiller; Bjarne A. Foss; Vinay Kariwala

    2006-01-01

    In this paper, control-relevant models of the most important components in a SOFC-GT hybrid system are described. Dynamic simulations are performed on the overall hybrid system. The model is used to develop a simple control structure, but the simulations show that more elaborate control is needed.

  1. A hybrid multiview stereo algorithm for modeling urban scenes.

    Science.gov (United States)

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  2. Status and modeling improvements of hybrid wind/PV/diesel power systems for Brazilian applications

    Energy Technology Data Exchange (ETDEWEB)

    McGowan, J.G.; Manwell, J.F.; Avelar, C. [Univ. of Massachusetts, Amherst, MA (United States); Taylor, R. [National Renewable Energy Lab., Golden, CO (United States)

    1997-12-31

    This paper presents a summary of the ongoing work on the modeling and system design of hybrid wind/PV/diesel systems for two different sites in the Amazonia region of Brazil. The work incorporates the latest resource data and is based on the use of the Hybrid2 simulation code developed by the University of Massachusetts and NREL. Details of the baseline operating hybrid systems are reviewed, and the results of the latest detailed hybrid system evaluation for each site are summarized. Based on the system modeling results, separate recommendations for system modification and improvements are made.

  3. Towards a seascape typology. I. Zipf versus Pareto laws

    Science.gov (United States)

    Seuront, Laurent; Mitchell, James G.

    Two data analysis methods, referred to as the Zipf and Pareto methods, initially introduced in economics and linguistics two centuries ago and subsequently used in a wide range of fields (word frequency in languages and literature, human demographics, finance, city formation, genomics and physics), are described and proposed here as a potential tool to classify space-time patterns in marine ecology. The aim of this paper is, first, to present the theoretical bases of Zipf and Pareto laws, and to demonstrate that they are strictly equivalent. In that way, we provide a one-to-one correspondence between their characteristic exponents and argue that the choice of technique is a matter of convenience. Second, we argue that the appeal of this technique is that it is assumption-free for the distribution of the data and regularity of sampling interval, as well as being extremely easy to implement. Finally, in order to allow marine ecologists to identify and classify any structure in their data sets, we provide a step by step overview of the characteristic shapes expected for Zipf's law for the cases of randomness, power law behavior, power law behavior contaminated by internal and external noise, and competing power laws illustrated on the basis of typical ecological situations such as mixing processes involving non-interacting and interacting species, phytoplankton growth processes and differential grazing by zooplankton.
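
    For reference, the equivalence discussed above is usually written as follows (generic notation, not necessarily the symbols used in the paper): if the rank-size (Zipf) plot has exponent q, the tail (Pareto) exponent of the complementary cumulative distribution is its reciprocal.

```latex
% Generic notation; q is the Zipf (rank-size) exponent, k the Pareto (tail) exponent.
\[
  x_{(r)} \propto r^{-q} \quad\text{(Zipf)},
  \qquad
  P(X > x) \propto x^{-k} \quad\text{(Pareto)},
  \qquad
  k = \tfrac{1}{q}.
\]
```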

  4. PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP) with two objectives, where one objective is linear and the other one is non-linear, has been considered, with the constraint that a job is assigned to only one worker – though he may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers with two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.

    AFRIKAANSE OPSOMMING (translated): A multi-objective generalised assignment problem (MGAP) with two objectives, one linear and the other non-linear, is studied, subject to the constraint that a job is assigned to only one worker – although more than one job may be assigned to him if the time is available. An algorithm is proposed to find the set of Pareto-optimal solutions that assigns jobs to workers under the two objectives without assigning priorities to them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.

  5. Determination of Pareto frontier in multi-objective maintenance optimization

    International Nuclear Information System (INIS)

    Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco

    2011-01-01

    The objective of a maintenance policy is generally to minimize the global maintenance cost, which involves not only the direct costs of maintenance actions and spare parts, but also those due to system stops for preventive maintenance and to downtime after failure. For some operating systems, failure can be dangerous, so they are required to operate with a very high reliability level between two consecutive fixed stops. The present paper attempts to identify the set of elements on which to perform maintenance actions so that the system can assure the required reliability level until the next fixed maintenance stop, while minimizing both the global maintenance cost and the total maintenance time. In order to solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto optimal frontier is generally a troublesome task. The paper proposes an algorithm able to rapidly overcome this problem, and its effectiveness is shown by an application to a case study regarding a complex series-parallel system.
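
    Once a set of candidate maintenance plans has been evaluated on the two objectives, the Pareto frontier is just the non-dominated subset. A generic sketch follows; the cost/time pairs are hypothetical and the filter is the textbook non-dominance definition, not the paper's algorithm:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (cost, time) pairs, both to be
    minimized: a solution is kept if no other one is at least as good on
    both objectives and strictly better on at least one."""
    front = []
    for i, a in enumerate(solutions):
        dominated = any(
            (b[0] <= a[0] and b[1] <= a[1]) and (b[0] < a[0] or b[1] < a[1])
            for j, b in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append(a)
    return sorted(front)

# Hypothetical (maintenance cost, maintenance time) pairs for candidate plans
plans = [(120, 9.0), (150, 6.5), (100, 12.0), (130, 7.0), (180, 6.4), (110, 12.5)]
print(pareto_front(plans))
```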

  6. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    Science.gov (United States)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industrial fields the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, and a joint censoring scheme then arises. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
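
    For orientation only, the sketch below shows the classical maximum likelihood estimates for a complete (uncensored) Pareto sample; the jointly type-II censored estimators derived in the article are more involved and are not reproduced here:

```python
import numpy as np

def pareto_mle(sample, x_m=None):
    """Maximum likelihood estimates for a Pareto(x_m, alpha) sample with
    complete, uncensored data: x_m_hat = min(x), alpha_hat = n / sum(log(x / x_m))."""
    x = np.asarray(sample, dtype=float)
    if x_m is None:
        x_m = x.min()
    alpha = len(x) / np.sum(np.log(x / x_m))
    return x_m, alpha

rng = np.random.default_rng(1)
data = 2.0 * (1.0 + rng.pareto(3.0, size=500))   # Pareto with x_m = 2, alpha = 3
print(pareto_mle(data))                          # estimates close to (2, 3)
```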

  7. Multi-level and hybrid modelling approaches for systems biology.

    Science.gov (United States)

    Bardini, R; Politano, G; Benso, A; Di Carlo, S

    2017-01-01

    During the last decades, high-throughput techniques have allowed the extraction of a huge amount of data from biological systems, unveiling more of their underlying complexity. Biological systems encompass a wide range of space and time scales, functioning according to flexible hierarchies of mechanisms that form an intertwined and dynamic interplay of regulations. This becomes particularly evident in processes such as ontogenesis, where regulative assets change according to process context and timing, making structural phenotype and architectural complexities emerge from a single cell through local interactions. The information collected from biological systems is naturally organized according to the functional levels composing the system itself. In systems biology, biological information often comes from overlapping but different scientific domains, each having its own way of representing the phenomena under study. That is, the different parts of the system to be modelled may be described with different formalisms. For a model to have improved accuracy and to provide a good knowledge base, it should comprise different system levels and suitably handle the respective formalisms. Models which are both multi-level and hybrid satisfy both these requirements, making them a very useful tool in computational systems biology. This paper reviews some of the main contributions in this field.

  8. Mobile phone use while driving: a hybrid modeling approach.

    Science.gov (United States)

    Márquez, Luis; Cantillo, Víctor; Arellana, Julián

    2015-05-01

    The analysis of the effects that mobile phone use produces while driving is a topic of great interest for the scientific community. There is consensus that using a mobile phone while driving increases the risk of exposure to traffic accidents. The purpose of this research is to evaluate the drivers' behavior when they decide whether or not to use a mobile phone while driving. For that, a hybrid modeling approach that integrates a choice model with the latent variable "risk perception" was used. It was found that workers and individuals with the highest education level are more prone to use a mobile phone while driving than others. Also, "risk perception" is higher among individuals who have been previously fined and people who have been in an accident or almost been in an accident. It was also found that the tendency to use mobile phones while driving increases when the traffic speed reduces, but it decreases when the fine increases. Even though the urgency of the phone call is the most important explanatory variable in the choice model, the cost of the fine is an important attribute in order to control mobile phone use while driving. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...

  10. Strong Convergence Bound of the Pareto Index Estimator under Right Censoring

    Directory of Open Access Journals (Sweden)

    Peng Zuoxiang

    2010-01-01

    Full Text Available Let X1, X2, ... be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F satisfying 1 − F(x) = x^(−1/γ) ℓ(x) as x → ∞, where ℓ(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right-censored Pareto index estimator under second-order regularly varying conditions.

  11. Spatial redistribution of irregularly-spaced Pareto fronts for more intuitive navigation and solution selection

    NARCIS (Netherlands)

    A. Bouter (Anton); K. Pirpinia (Kleopatra); T. Alderliesten (Tanja); P.A.N. Bosman (Peter)

    2017-01-01

    A multi-objective optimization approach is often followed by an a posteriori decision-making process, during which the most appropriate solution of the Pareto set is selected by a professional in the field. Conventional visualization methods do not correct for Pareto fronts with

  12. The feasibility of using Pareto fronts for comparison of treatment planning systems and delivery techniques

    DEFF Research Database (Denmark)

    Ottosson, Rickard O; Engstrom, Per E; Sjöström, David

    2008-01-01

    constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives: high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics

  13. Vilfredo Pareto. L'economista alla luce delle lettere a Maffeo Pantaleoni. (Vilfredo Pareto. The economist in the light of his letters to Maffeo Pantaleoni

    Directory of Open Access Journals (Sweden)

    E. SCHNEIDER

    2014-07-01

    Full Text Available The article is part of a special issue on the occasion of the publication of the entire scientific correspondence of Vilfredo Pareto with Maffeo Pantaleoni. The author reconstructs the beginning of their correspondence and the debate in pure mathematical economics, and draws the main conclusions on the different views of Pareto with respect to Marshall, Edgeworth and Fisher. JEL: B16, B31, C02, C60

  14. Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.

    Science.gov (United States)

    Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui

    2017-01-01

    To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as the KalBHT hybrid method, introduces the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a regularization that is self-adaptive both spatially and temporally with the change of temperature. Further, to decrease the sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment, and dynamic in-vivo model validations of liver and thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value, in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the estimated temperature also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to the susceptibility problem surrounding the hot spot, and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% of the RMSE of the original hybrid model and the convergence time becomes only about one fifth of that of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.
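
    As background, a single scalar Kalman update that fuses a model-predicted temperature with a noisy MR measurement looks as follows; this is only the textbook update step, not the KalBHT scheme itself, which uses the filter to update a spatial regularization window:

```python
def kalman_update(t_pred, p_pred, t_meas, r_meas):
    """One scalar Kalman step: fuse a model-predicted temperature (with
    variance p_pred) and a noisy measurement (with variance r_meas)."""
    k = p_pred / (p_pred + r_meas)          # Kalman gain
    t_post = t_pred + k * (t_meas - t_pred)
    p_post = (1.0 - k) * p_pred
    return t_post, p_post

# Model predicts 43.0 C with variance 1.0; measurement reads 44.2 C with variance 0.5
print(kalman_update(43.0, 1.0, 44.2, 0.5))   # -> (43.8, 0.333...)
```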

  15. Modeling, hybridization, and optimal charging of electrical energy storage systems

    Science.gov (United States)

    Parvini, Yasha

    The rising rate of global energy demand alongside dwindling fossil fuel resources has motivated research into alternative and sustainable solutions. Within this area of research, electrical energy storage systems are pivotal in applications including electrified vehicles, renewable power generation, and electronic devices. The approach of this dissertation is to elucidate the bottlenecks of integrating supercapacitors and batteries in energy systems and to propose solutions by means of modeling, control, and experimental techniques. In the first step, the supercapacitor cell is modeled in order to gain a fundamental understanding of its electrical and thermal dynamics. The dependence of the electrical parameters on state of charge (SOC), current direction and magnitude (20-200 A), and temperatures ranging from -40°C to 60°C was embedded in this computationally efficient model. The coupled electro-thermal model was parameterized using specifically designed temporal experiments and then validated by the application of real world duty cycles. Driving range is one of the major challenges of electric vehicles compared to combustion vehicles. In order to shed light on the benefits of hybridizing a lead-acid driven electric vehicle via supercapacitors, a model was parameterized for the lead-acid battery and combined with the model already developed for the supercapacitor, to build the hybrid battery-supercapacitor model. A hardware in the loop (HIL) setup consisting of a custom built DC/DC converter, a micro-controller (µC) to implement the power management strategy, a 12V lead-acid battery, and a 16.2V supercapacitor module was built to perform the validation experiments. The need to charge electrical energy storage systems in an efficient and quick manner motivated an optimal control problem with the objective of maximizing the charging efficiency of supercapacitors, lead-acid, and lithium-ion batteries. Pontryagin's minimum principle was used to solve the problems

  16. Dynamic Model of Islamic Hybrid Securities: Empirical Evidence From Malaysia Islamic Capital Market

    Directory of Open Access Journals (Sweden)

    Jaafar Pyeman

    2016-12-01

    Full Text Available Capital structure selection is fundamentally important in corporate financial management as it influences both the return and the risk to stakeholders. Despite Malaysia's position as one of the major players in the Islamic financial market, few studies have been conducted on the capital structure of shariah-compliant firms, especially in relation to hybrid securities. The objective of this study is to determine the hybrid securities issuance model among shariah-compliant firms in Malaysia. As such, this study expands the literature by providing a comprehensive analysis of the hybrid capital structure and by developing a dynamic Islamic hybrid securities model for shariah-compliant firms. We use panel data on 50 companies that issued hybrid securities during 2004-2012. The results are based on dynamic GMM estimation of the determinants of hybrid securities. Based on our model, risk and growth are the main determinants of issuing convertible bonds and loan stock. These results suggest that firms with high risk but good growth prospects will choose the hybrid security of convertible bonds. The model also supports the backdoor equity listing hypothesis of Stein (1992), whereby hybrid securities enable profitable firms to venture into positive-NPV projects by issuing convertible bonds, which offer a lower coupon rate than ordinary debt

  17. Development of hybrid 3-D hydrological modeling for the NCAR Community Earth System Model (CESM)

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Xubin [Univ. of Arizona, Tucson, AZ (United States); Troch, Peter [Univ. of Arizona, Tucson, AZ (United States); Pelletier, Jon [Univ. of Arizona, Tucson, AZ (United States); Niu, Guo-Yue [Univ. of Arizona, Tucson, AZ (United States); Gochis, David [NCAR Research Applications Lab., Boulder, CO (United States)

    2015-11-15

    This is the Final Report of our four-year (3-year plus one-year no cost extension) collaborative project between the University of Arizona (UA) and the National Center for Atmospheric Research (NCAR). The overall objective of our project is to develop and evaluate the first hybrid 3-D hydrological model with a horizontal grid spacing of 1 km for the NCAR Community Earth System Model (CESM).

  18. A hybrid simulation model for a stable auroral arc

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    Full Text Available We present a new type of hybrid simulation model, intended to simulate a single stable auroral arc in the latitude/altitude plane. The ionospheric ions are treated as particles, the electrons are assumed to follow a Boltzmann response and the magnetospheric ions are assumed to be so hot that they form a background population unaffected by the electric fields that arise. The system is driven by assumed parallel electron energisation causing a primary negative charge cloud and an associated potential structure to build up. The results show how a closed potential structure and density depletion of an auroral arc build up and how they decay after the driver is turned off. The model also produces upgoing energetic ion beams and predicts strong static perpendicular electric fields to be found in a relatively narrow altitude range (~5000–11 000 km).

    Key words. Magnetospheric physics (magnetosphere-ionosphere interactions; auroral phenomena) – Space plasma physics (numerical simulation studies)

  19. Hybrid CMS methods with model reduction for assembly of structures

    Science.gov (United States)

    Farhat, Charbel

    1991-01-01

    Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore there must be a methodology which can predict the dynamic characteristics of the assembled structure, based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems, (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes, and (3) it does answer specific questions such as 'how does the global fundamental frequency vary if I change the physical parameters of substructure k by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer to define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.

  20. A Hybrid Fuzzy Model for Lean Product Development Performance Measurement

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    As manufacturing companies strive to meet emerging consumer demands for mass-customized products, many are turning to the application of lean in their product development process, and this is gradually moving from being a competitive advantage to a necessity. However, due to the lack of a clear understanding of lean performance measurement, many of these companies are unable to implement and fully integrate the lean principles into their product development process. The literature shows that only a few studies have focused systematically on the evaluation of lean product development performance (LPDP). In order to fill this gap, the study proposes a novel hybrid model based on the Fuzzy Reasoning Approach (FRA) and the extension of Fuzzy-AHP and Fuzzy-TOPSIS methods for the assessment of LPDP. Unlike the existing methods, the model considers the importance weight of each of the decision makers (experts), since the performance criteria/attributes must be rated and these experts have different levels of expertise. The rating is done using a new fuzzy Likert rating scale (membership scale) which is designed to address problems resulting from information loss/distortion due to closed-form scaling and the ordinal nature of the existing Likert scale.

  1. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
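
    A hybrid kernel of this kind can be passed to an SVM as a callable Gram-matrix function. A sketch using scikit-learn follows, with a synthetic monthly flow series and illustrative kernel weights and hyperparameters (not the settings or data of the study):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(X, Y, weight=0.7, gamma=0.1, degree=2):
    """Linear combination of an RBF kernel and a polynomial kernel."""
    return weight * rbf_kernel(X, Y, gamma=gamma) + \
           (1.0 - weight) * polynomial_kernel(X, Y, degree=degree)

# Illustrative monthly flow series turned into lagged features
rng = np.random.default_rng(0)
flow = 50 + 20 * np.sin(np.arange(200) * 2 * np.pi / 12) + rng.normal(0, 3, 200)
X = np.column_stack([flow[i:i - 3] for i in range(3)])   # flow at t-3, t-2, t-1
X = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardize the lags
y = flow[3:]

model = SVR(kernel=hybrid_kernel, C=10.0)
model.fit(X[:-24], y[:-24])
print(model.score(X[-24:], y[-24:]))                     # R^2 on the last 24 months
```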

  2. A hybrid model of primary radiation damage in crystals

    International Nuclear Information System (INIS)

    Samarin, S.I.; Dremov, V.V.

    2009-01-01

    The paper offers a hybrid model which combines molecular dynamics and Monte Carlo (MD+MC) methods to describe primary radiation damage in crystals, caused by particles whose energies are no higher than several tens of keV. The particles are tracked in accord with equations of motion accounting for pair interaction. The model also considers particle interaction with the mean-field potential (MFP) of the crystal. Only particles involved in cascading are tracked. The equations of motion for these particles include dissipative forces which describe energy exchange between cascade particles and electrons. New particles - the atoms of the crystal in the cascade region - have stochastic parameters (phase coordinates); they are sampled by the Monte Carlo method from the distribution that describes the classical canonical ensemble of non-interacting particles subjected to the external MFP. The introduction of particle interaction with the MFP helps avoid difficulties related to crystal stability and the choice of an adequate interparticle interaction potential in the traditional MD methods. Our technique is many times faster than the traditional MD methods because we consider only particles which are involved in cascading and apply special methods to speed up the calculation of forces by taking advantage of the short-range pair potential used.

  3. Hybrid network defense model based on fuzzy evaluation.

    Science.gov (United States)

    Cho, Ying-Chiang; Pan, Jen-Yi

    2014-01-01

    With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture.

  4. Zipf's law and influential factors of the Pareto exponent of the city size distribution: Evidence from China

    OpenAIRE

    GAO Hongying; WU Kangping

    2007-01-01

    This paper estimates by OLS the Pareto exponent of the city size (population size and economy size) distribution for China as a whole, all provinces, and three regions in 1997, 2000 and 2003; comparatively analyzes the Pareto exponents across sections and over time; and empirically analyzes the factors which impact the Pareto exponents of the provinces. Our analyses show that the size distributions of cities in China follow the Pareto distribution and exhibit structural features. Variations in the value of the P...
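
    The OLS estimation referred to above is typically the rank-size regression ln(rank) = c − α·ln(size). A minimal sketch on synthetic city sizes (the data and the resulting exponent are illustrative only):

```python
import numpy as np

def pareto_exponent_ols(sizes):
    """OLS estimate of the Pareto exponent from the rank-size regression
    ln(rank) = c - alpha * ln(size), with sizes sorted in decreasing order."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]
    rank = np.arange(1, len(s) + 1)
    slope, _ = np.polyfit(np.log(s), np.log(rank), 1)
    return -slope                                     # estimated alpha

rng = np.random.default_rng(2)
city_sizes = 100_000 * (1.0 + rng.pareto(1.0, size=300))   # Zipf-like: alpha ~ 1
print(f"estimated Pareto exponent: {pareto_exponent_ols(city_sizes):.2f}")
```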

  5. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future, 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy-efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  6. Coordinated Pitch & Torque Control of Large-Scale Wind Turbine Based on Pareto Efficiency Analysis

    DEFF Research Database (Denmark)

    Lin, Zhongwei; Chen, Zhenyu; Wu, Qiuwei

    2018-01-01

    For the existing pitch and torque control of the wind turbine generator system (WTGS), further development on coordinated control is necessary to improve effectiveness for practical applications. In this paper, the WTGS is modeled as a coupling combination of two subsystems: the generator torque...... control subsystem and blade pitch control subsystem. Then, the pole positions in each control subsystem are adjusted coordinately to evaluate the controller participation and used as the objective of optimization. A two-level parameters-controllers coordinated optimization scheme is proposed and applied...... to optimize the controller coordination based on the Pareto optimization theory. Three solutions are obtained through optimization, which includes the optimal torque solution, optimal power solution, and satisfactory solution. Detailed comparisons evaluate the performance of the three selected solutions...

  7. Pareto-optimal electricity tariff rates in the Republic of Armenia

    International Nuclear Information System (INIS)

    Kaiser, M.J.

    2000-01-01

    The economic impact of electricity tariff rates on the residential sector of Yerevan, Armenia, is examined. The effect of tariff design on revenue generation and equity measures is considered, and the combination of energy pricing and compensatory social policies which provides the best mix of efficiency and protection for poor households is examined. An equity measure is defined in terms of a cumulative distribution function which describes the percent of the population that spends x percent or less of their income on electricity consumption. An optimal (Pareto-efficient) tariff is designed based on the analysis of survey data and an econometric model, and the Armenian tariff rate effective 1 January 1997 to 15 September 1997 is shown to be non-optimal relative to this rate. 22 refs

  8. Pareto-optimal reversed-phase chromatography separation of three insulin variants with a solubility constraint.

    Science.gov (United States)

    Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt

    2018-01-12

    With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives productivity and yield for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m3 column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m3 column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration.

  9. Three hybridization models based on local search scheme for job shop scheduling problem

    Science.gov (United States)

    Balbi Fraga, Tatiana

    2015-05-01

    This work presents three different hybridization models based on the general schema of Local Search Heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches may already have been presented in the literature in other contexts, in this work these models are applied to the solution of the job shop scheduling problem, using the heuristics Taboo Search and Particle Swarm Optimization. Besides, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to obtain better results than those found by Taboo Search alone.

  10. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, commercially known as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model that is able to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, dips, etc. For model validation, the measurements of the free dynamic response per unit voltage and passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal that there is good agreement between the model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is carried out by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic

  11. Patient feature based dosimetric Pareto front prediction in esophageal cancer radiotherapy.

    Science.gov (United States)

    Wang, Jiazhou; Jin, Xiance; Zhao, Kuaike; Peng, Jiayuan; Xie, Jiang; Chen, Junchao; Zhang, Zhen; Studenski, Matthew; Hu, Weigang

    2015-02-01

    To investigate the feasibility of dosimetric Pareto front (PF) prediction based on patients' anatomic and dosimetric parameters for esophageal cancer patients. Eighty esophageal cancer patients from the authors' institution were enrolled in this study. A total of 2928 intensity-modulated radiotherapy plans were obtained and used to generate a PF for each patient. On average, each patient had 36.6 plans. The anatomic and dosimetric features were extracted from these plans. The mean lung dose (MLD), mean heart dose (MHD), spinal cord max dose, and PTV homogeneity index were recorded for each plan. Principal component analysis was used to extract overlap volume histogram (OVH) features between the PTV and other organs at risk. The full dataset was separated into two parts: a training dataset and a validation dataset. The prediction outcomes were the MHD and MLD. Spearman's rank correlation coefficient was used to evaluate the correlation between the anatomical features and dosimetric features. The stepwise multiple regression method was used to fit the PF. The cross validation method was used to evaluate the model. With 1000 repetitions, the mean prediction error of the MHD was 469 cGy. The most correlated factors were the first principal component of the OVH between heart and PTV and the overlap between heart and PTV along the Z-axis. The mean prediction error of the MLD was 284 cGy. The most correlated factors were the first principal component of the OVH between heart and PTV and the overlap between lung and PTV along the Z-axis. It is feasible to use patients' anatomic and dosimetric features to generate a predicted Pareto front. Additional samples and further studies are required to improve the prediction model.
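    A minimal sketch of the modelling idea, on synthetic stand-in data: the first principal components of overlap volume histogram (OVH) curves serve as anatomical features, and a cross-validated linear regression predicts the mean heart dose. The study itself uses stepwise multiple regression on 80 patients and 2928 plans; everything below (data, number of components, error metric) is illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-in for the plan library: each row of `ovh` is an
# overlap-volume-histogram curve (heart vs. PTV) sampled at fixed distances,
# and `mhd` is the corresponding mean heart dose in cGy.
rng = np.random.default_rng(2)
n_patients, n_bins = 80, 50
ovh = np.cumsum(rng.random((n_patients, n_bins)), axis=1)
mhd = 500 + 300 * ovh[:, 10] / ovh[:, -1] + rng.normal(0, 50, n_patients)

# The leading principal components of the OVH act as anatomical features.
features = PCA(n_components=3).fit_transform(ovh)

pred = cross_val_predict(LinearRegression(), features, mhd, cv=10)
print("mean absolute prediction error (cGy):", np.mean(np.abs(pred - mhd)))
```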

  12. Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns

    Science.gov (United States)

    Suhartono; Rahayu, S. P.; Prastyo, D. D.; Wijayanti, D. G. P.; Juliyanto

    2017-09-01

    Most of the monthly time series data in economics and business in Indonesia and other Moslem countries not only contain trend and seasonal patterns, but are also affected by two types of calendar variation effects, i.e. the effect of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, i.e. a combination of several forecasting models, to predict time series that contain trend, seasonal and calendar variation patterns. This hybrid model is a combination of classical models (namely time series regression and the ARIMA model) and/or modern methods (an artificial intelligence method, i.e. Artificial Neural Networks). A simulation study was used to show that the proposed procedure for building the hybrid model could work well for forecasting time series with trend, seasonal and calendar variation patterns. Furthermore, the proposed hybrid model is applied to forecasting real data, i.e. monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with one of the conclusions of the M3 competition, i.e. that the hybrid model on average provides more accurate forecasts than the individual models.
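    The two-stage idea behind such a hybrid (a time series regression capturing trend, seasonal and calendar-variation effects, followed by an ARIMA model fitted to the regression residuals) can be sketched as follows. The series, the single holiday dummy and the ARIMA order are hypothetical, and the neural-network component considered in the record is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly series with trend, seasonality and one calendar effect.
rng = np.random.default_rng(3)
n = 120
t = np.arange(n)
month = t % 12
holiday = (month == 8).astype(float)          # hypothetical moving-holiday flag
y = (10 + 0.05 * t + 2 * np.sin(2 * np.pi * month / 12) + 3 * holiday
     + rng.normal(0, 0.5, n))

# Stage 1: time series regression on trend, seasonal dummies and the calendar dummy.
seasonal = pd.get_dummies(month, drop_first=True).to_numpy(dtype=float)
X = sm.add_constant(np.column_stack([t, seasonal, holiday]))
reg = sm.OLS(y, X).fit()
resid = y - reg.fittedvalues

# Stage 2: ARIMA on the residuals captures the remaining autocorrelation.
arima = sm.tsa.ARIMA(resid, order=(1, 0, 1)).fit()
hybrid_fit = reg.fittedvalues + arima.fittedvalues
print("in-sample RMSE of the hybrid fit:", np.sqrt(np.mean((y - hybrid_fit) ** 2)))
```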

  13. Dictatorship, liberalism and the Pareto rule: Possible and impossible

    Directory of Open Access Journals (Sweden)

    Boričić Branislav

    2009-01-01

    Full Text Available The current economic crisis has shaken belief in the capacity of neoliberal 'free market' policies. Numerous supporters of state intervention have arisen, and interest in social choice theory has revived. In this paper we consider three standard properties for aggregating individual preferences into social preferences: dictatorship, liberalism and the Pareto rule, and their formal negations. The context of pure first-order classical logic makes it possible to show how some combinations of the above mentioned conditions, under the hypothesis of unrestricted domain, form simple and reasonable examples of possible or impossible social choice systems. Due to their simplicity, these examples, including the famous 'liberal paradox', could have a particular didactic value.

  14. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

    Full Text Available This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation has first been turned into an optimization exercise in which the objective function is selected to be the number of unobservable buses, which is determined based on Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is hence formulated. To achieve this, a DE algorithm based on the Pareto optimum method has been proposed here. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
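    A small sketch of the observability criterion mentioned in this record: the number of unobservable states can be read off the rank deficiency, obtained via SVD, of a PMU measurement matrix. The 6-bus topology and the simplifying assumption that a PMU observes its own bus and its immediate neighbours are illustrative only, not the paper's exact formulation.

```python
import numpy as np

def unobservable_state_count(H, tol=1e-8):
    """Number of unobservable states, from the rank deficiency of the
    measurement matrix H (rows: measurements, columns: bus states)."""
    singular_values = np.linalg.svd(H, compute_uv=False)
    rank = int(np.sum(singular_values > tol))
    return H.shape[1] - rank

# Hypothetical 6-bus system with PMUs at buses 0 and 3; each PMU is assumed to
# observe the voltage of its own bus and of its directly connected neighbours.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2], 5: [3]}
pmu_buses = [0, 3]

rows = []
for b in pmu_buses:
    for observed in [b] + adjacency[b]:
        row = np.zeros(6)
        row[observed] = 1.0
        rows.append(row)
H = np.array(rows)

print("unobservable states:", unobservable_state_count(H))   # bus 4 is unobserved
```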

  15. Pareto analysis of critical factors affecting technical institution evaluation

    Directory of Open Access Journals (Sweden)

    Victor Gambhir

    2012-08-01

    Full Text Available With the change of education policy in 1991, more and more technical institutions are being set up in India. Some of these institutions provide quality education, but others are merely concentrating on quantity. Stakeholders are in a state of confusion about the decision of which institute to select for their higher educational studies. Although various agencies, including the print media, provide rankings of these institutions every year, their results are controversial and biased. In this paper, the authors have made an endeavor to find the critical factors for technical institution evaluation from a literature survey. A Pareto analysis has also been performed to find the intensity of these critical factors in evaluation. This will not only help the stakeholders in taking the right decisions but will also help the management of institutions in benchmarking, identifying the most important critical areas to improve the existing system. This will in turn help the Indian economy.
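    A Pareto chart of the kind used in this record (factor frequencies in descending order with a cumulative-percentage line) takes only a few lines of matplotlib. The factor names and counts below are hypothetical placeholders, not the critical factors identified in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical frequency counts for critical factors.
factors = ["Faculty quality", "Placement", "Infrastructure",
           "Research output", "Fees", "Location"]
counts = np.array([42, 31, 15, 7, 3, 2])

order = np.argsort(counts)[::-1]
counts = counts[order]
labels = [factors[i] for i in order]
cumulative = 100.0 * np.cumsum(counts) / counts.sum()
x = np.arange(len(labels))

fig, ax1 = plt.subplots()
ax1.bar(x, counts)
ax1.set_xticks(x)
ax1.set_xticklabels(labels, rotation=30, ha="right")
ax1.set_ylabel("Frequency")

ax2 = ax1.twinx()
ax2.plot(x, cumulative, marker="o", color="tab:red")
ax2.axhline(80, linestyle="--", color="gray")   # classic 80/20 reference line
ax2.set_ylabel("Cumulative %")

plt.tight_layout()
plt.show()
```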

  16. Pareto optimization of an industrial ecosystem: sustainability maximization

    Directory of Open Access Journals (Sweden)

    J. G. M.-S. Monteiro

    2010-09-01

    Full Text Available This work investigates a procedure to design an Industrial Ecosystem for sequestering CO2 and consuming glycerol in a Chemical Complex with 15 integrated processes. The Complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while Profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks Inc.) is connected to UNISIM to enable optimization. The objective is to achieve maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.

  17. Concept analysis of moral courage in nursing: A hybrid model.

    Science.gov (United States)

    Sadooghiasl, Afsaneh; Parvizy, Soroor; Ebadi, Abbas

    2018-02-01

    Moral courage is one of the most fundamental virtues in the nursing profession; however, little attention has been paid to it. As a result, no exact and clear definition of moral courage has ever been accessible. This study was carried out for the purpose of defining and clarifying the concept in the nursing profession. The study used a hybrid model of concept analysis comprising three phases, namely, a theoretical phase, a field work phase, and a final analysis phase. To find relevant literature, an electronic search of valid databases was carried out using keywords related to the concept of courage. Field work data were collected over an 11-month period from 2013 to 2014. In the field work phase, in-depth interviews were performed with 10 nurses. Conventional content analysis was used in the theoretical and field work phases, following Graneheim and Lundman's stages, and the results were combined in the final analysis phase. Ethical consideration: Permission for this study was obtained from the ethics committee of Tehran University of Medical Sciences. Oral and written informed consent was received from the participants. Of the 750 titles retrieved in the theoretical phase, 26 texts were analyzed. The analysis resulted in 494 codes in the text analysis and 226 codes in the interview analysis. The literature review in the theoretical phase revealed two features of inherent-transcendental characteristics, two of which possessed a difficult nature. Work in the field phase added the characteristics of moral self-actualization, rationalism, spiritual beliefs, and scientific-professional qualification to the features of the concept. Moral courage is a pure and prominent characteristic of human beings. The antecedents of moral courage include model orientation, model acceptance, rationalism, individual excellence, acquiring academic and professional qualification, spiritual beliefs, organizational support, organizational repression, and internal and external personal barriers

  18. Modeling, control, and simulation of grid connected intelligent hybrid battery/photovoltaic system using new hybrid fuzzy-neural method.

    Science.gov (United States)

    Rezvani, Alireza; Khalili, Abbas; Mazareie, Alireza; Gandomkar, Majid

    2016-07-01

    Nowadays, photovoltaic (PV) generation is growing increasingly fast as a renewable energy source. Nevertheless, the drawback of the PV system is its dependence on weather conditions. Therefore, battery energy storage (BES) can be considered to assist in providing a stable and reliable output from the PV generation system for loads and to improve the dynamic performance of the whole generation system in grid connected mode. In this paper, a novel topology of intelligent hybrid generation systems with PV and BES in a DC-coupled structure is presented. Each photovoltaic cell has a specific point, named the maximum power point, on its operational curve (i.e. current-voltage or power-voltage curve) at which it can generate maximum power. Irradiance and temperature changes affect these operational curves. Therefore, the nonlinear dependence of the maximum power point on the environment has led to the development of different maximum power point tracking techniques. In order to capture the maximum power point (MPP), a hybrid fuzzy-neural maximum power point tracking (MPPT) method is applied in the PV system. The obtained results demonstrate the effectiveness and superiority of the proposed method, and the average tracking efficiency of the hybrid fuzzy-neural method is approximately two percentage points higher than that of the conventional methods. It has the advantages of robustness, fast response and good performance. A detailed mathematical model and a control approach of a three-phase grid-connected intelligent hybrid system have been proposed using Matlab/Simulink. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Daily air quality index forecasting with hybrid models: A case in China

    International Nuclear Information System (INIS)

    Zhu, Suling; Lian, Xiuyuan; Liu, Haixia; Hu, Jianming; Wang, Yuanyuan; Che, Jinxing

    2017-01-01

    Air quality is closely related to quality of life. Air pollution forecasting plays a vital role in air pollution warning and control. However, it is difficult to attain accurate forecasts for air pollution indexes because the original data are non-stationary and chaotic. The existing forecasting methods, such as multiple linear models, autoregressive integrated moving average (ARIMA) and support vector regression (SVR), cannot fully capture the information in series of pollution indexes. Therefore, new effective techniques need to be proposed to forecast air pollution indexes. The main purpose of this research is to develop effective forecasting models for regional air quality indexes (AQI) to address the problems above and enhance forecasting accuracy. Therefore, two hybrid models (EMD-SVR-Hybrid and EMD-IMFs-Hybrid) are proposed to forecast AQI data. The main steps of the EMD-SVR-Hybrid model are as follows: the data preprocessing technique EMD (empirical mode decomposition) is utilized to sift the original AQI data to obtain one group of smoother IMFs (intrinsic mode functions) and a noise series, where the IMFs contain the important information (level, fluctuations and others) from the original AQI series. LS-SVR is applied to forecast the sum of the IMFs, and then S-ARIMA (seasonal ARIMA) is employed to forecast the residual sequence of the LS-SVR. In addition, EMD-IMFs-Hybrid first forecasts the IMFs separately via statistical models and sums the forecasting results of the IMFs as EMD-IMFs. Then, S-ARIMA is employed to forecast the residuals of EMD-IMFs. To verify the proposed hybrid models, AQI data from June 2014 to August 2015 collected from Xingtai in China are utilized as a test case for the empirical research. In terms of some of the forecasting assessment measures, the AQI forecasting results for Xingtai show that the two proposed hybrid models are superior to ARIMA, SVR, GRNN, EMD-GRNN, Wavelet-GRNN and Wavelet-SVR. Therefore, the
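    The residual-modelling step shared by the two hybrid models (a machine-learning forecast of the smoothed series followed by a seasonal ARIMA fitted to its residuals) is sketched below on a synthetic series. The EMD decomposition stage is omitted and LS-SVR is replaced by scikit-learn's epsilon-SVR, so this is only an approximation of the published procedure, with hypothetical data and orders.

```python
import numpy as np
from sklearn.svm import SVR
import statsmodels.api as sm

# Hypothetical daily AQI-like series with a rough monthly cycle.
rng = np.random.default_rng(4)
n = 400
t = np.arange(n)
series = 80 + 20 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 5, n)

# Stage 1: SVR trained on lagged values (stand-in for LS-SVR on the IMF sum).
lags = 7
X = np.column_stack([series[i:n - lags + i] for i in range(lags)])
y = series[lags:]
svr = SVR(C=10.0, epsilon=0.5).fit(X, y)
svr_fit = svr.predict(X)

# Stage 2: seasonal ARIMA fitted to the SVR residuals.
resid = y - svr_fit
sarima = sm.tsa.statespace.SARIMAX(resid, order=(1, 0, 0),
                                   seasonal_order=(1, 0, 0, 30)).fit(disp=False)
hybrid_fit = svr_fit + sarima.fittedvalues
print("hybrid in-sample RMSE:", np.sqrt(np.mean((y - hybrid_fit) ** 2)))
```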

  20. Develop a Hybrid Coordinate Ocean Model with Data Assimilation Capabilities

    National Research Council Canada - National Science Library

    Thacker, W. C

    2003-01-01

    .... The objectives of the research are as follows: (1) to develop a methodology for assimilating temperature and salinity profiles from XBT, CTD, and ARGO float data that accommodates the peculiarities of HYCOM's hybrid vertical coordinates, allowing...

  1. A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model

    Directory of Open Access Journals (Sweden)

    Jerry Davis

    2015-06-01

    Full Text Available The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model "Stability INdex MAPping" (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including 3 inventories of 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using the SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone creates an Area Under the receiver operating Curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion

  2. Dynamic Modeling and Simulation on a Hybrid Power System for Electric Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Hong-Wen He

    2010-11-01

    Full Text Available Hybrid power systems, formed by combining high-energy-density batteries and high-power-density ultracapacitors in appropriate ways, provide high-performance and high-efficiency power systems for electric vehicle applications. This paper first establishes dynamic models for the ultracapacitor, the battery and a passive hybrid power system, and then based on the dynamic models a comparative simulation between a battery only power system and the proposed hybrid power system was done under the UDDS (Urban Dynamometer Driving Schedule. The simulation results showed that the hybrid power system could greatly optimize and improve the efficiency of the batteries and their dynamic current was also decreased due to the participation of the ultracapacitors, which would have a good influence on batteries’ cycle life. Finally, the parameter matching for the passive hybrid power system was studied by simulation and comparisons.

  3. HyLTL: a temporal logic for model checking hybrid systems

    Directory of Open Access Journals (Sweden)

    Davide Bresolin

    2013-08-01

    Full Text Available The model-checking problem for hybrid systems is a well known challenge in the scientific community. Most of the existing approaches and tools are limited to safety properties only, or operate by transforming the hybrid system to be verified into a discrete one, thus losing information on the continuous dynamics of the system. In this paper we present a logic for specifying complex properties of hybrid systems, called HyLTL, and we show how it is possible to solve the model checking problem by translating the formula into an equivalent hybrid automaton. In this way the problem is reduced to a reachability problem on hybrid automata that can be solved by using existing tools.

  4. Hybrid modelling of bed-discordant river confluences

    Science.gov (United States)

    Franca, M. J.; Guillén-Ludeña, S.; Cheng, Z.; Cardoso, A. H.; Constantinescu, G.

    2016-12-01

    In fluvial networks, tributaries are the main providers of sediment and water to the main rivers. Furthermore, confluences are environmental hotspots since they provide ecological connectivity and flow and morphology diversity. Mountain confluences, in particular, are characterized by narrow and steep tributaries that provide an important sediment load to the confluence, whereas the main channel supplies the dominant flow discharge. This results in a marked bed discordance between the tributary and the main channel. This discordance has been observed to be a key feature that alters the dynamics of the confluence, when compared to concordant confluences. The processes of initiation and maintenance of the morphology of confluences are still not well understood, and research linking the morphodynamics and hydrodynamics of river confluences is required to understand them. Here, a hybrid approach is presented that combines laboratory experiments made in a live-bed model of a river confluence with 3D numerical simulations using advanced turbulence models. We use the laboratory experiments performed by Guillén-Ludeña et al. (2016) for a 70° channel confluence, which focused on sediment transport and morphology changes rather than on the structure of the flow. Highly eddy-resolving simulations were performed for two extreme bathymetric conditions, at the start of the experiment and at equilibrium scour conditions. The first allows us to understand the initiation mechanisms which will later condition the equilibrium morphology. The second allows us to understand the hydrodynamic actions which maintain the equilibrium morphology. The patterns of the mean flow, turbulence and dynamics of the large-scale coherent structures show how the main sediment-entrainment mechanisms evolve during the scour process. The present results contribute to a better understanding of the interaction between bed morphology and flow dynamics at discordant mountain river confluences.

  5. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    Science.gov (United States)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  6. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    International Nuclear Information System (INIS)

    Hoffmann, Aswin L; Siem, Alex Y D; Hertog, Dick den; Kaanders, Johannes H A M; Huizenga, Henk

    2006-01-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning

  7. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
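    Once the k best solutions of each single-objective problem have been generated (by whatever problem-specific algorithm), the remaining step is an exact dominance filter over the merged candidate set. A minimal sketch for two minimized objectives is given below; the candidate values are hypothetical and would, in the approach of this record, come from the k-best searches.

```python
def nondominated(solutions):
    """Exact non-dominated subset for two minimized objectives.
    `solutions` is an iterable of (objective1, objective2) tuples."""
    ordered = sorted(set(solutions))            # sort by objective1, then objective2
    front, best_second = [], float("inf")
    for a, b in ordered:
        if b < best_second:                     # strictly improves the second objective
            front.append((a, b))
            best_second = b
    return front

# Hypothetical candidate routes scored by (travel time, cost).
candidates = [(10, 9), (12, 5), (11, 8), (15, 3), (10, 7), (14, 4), (16, 2)]
print(nondominated(candidates))
# -> [(10, 7), (12, 5), (14, 4), (15, 3), (16, 2)]
```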

  8. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    Science.gov (United States)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  9. Probabilistic modelling and analysis of stand-alone hybrid power systems

    International Nuclear Information System (INIS)

    Lujano-Rojas, Juan M.; Dufo-López, Rodolfo; Bernal-Agustín, José L.

    2013-01-01

    As a part of the Hybrid Intelligent Algorithm, a model based on an ANN (artificial neural network) has been proposed in this paper to represent hybrid system behaviour considering the uncertainty related to wind speed and solar radiation, battery bank lifetime, and fuel prices. The Hybrid Intelligent Algorithm suggests a combination of probabilistic analysis based on a Monte Carlo simulation approach and artificial neural network training embedded in a genetic algorithm optimisation model. The installation of a typical hybrid system was analysed. Probabilistic analysis was used to generate an input–output dataset of 519 samples that was later used to train the ANNs to reduce the computational effort required. The generalisation ability of the ANNs was measured in terms of RMSE (Root Mean Square Error), MBE (Mean Bias Error), MAE (Mean Absolute Error), and R-squared estimators using another data group of 200 samples. The results obtained from the estimation of the expected energy not supplied, the probability of a determined reliability level, and the estimation of expected value of net present cost show that the presented model is able to represent the main characteristics of a typical hybrid power system under uncertain operating conditions. - Highlights: • This paper presents a probabilistic model for stand-alone hybrid power system. • The model considers the main sources of uncertainty related to renewable resources. • The Hybrid Intelligent Algorithm has been applied to represent hybrid system behaviour. • The installation of a typical hybrid system was analysed. • The results obtained from the study case validate the presented model

  10. Hybrid ATDL-gamma distribution model for predicting area source acid gas concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Jakeman, A J; Taylor, J A

    1985-01-01

    An air quality model is developed to predict the distribution of concentrations of acid gas in an urban airshed. The model is hybrid in character, combining reliable features of a deterministic ATDL-based model with statistical distributional approaches. The gamma distribution was identified from a range of distributional models as the best model. The paper shows that the assumptions of a previous hybrid model may be relaxed and presents a methodology for characterizing the uncertainty associated with model predictions. Results are demonstrated for the 98-percentile predictions of 24-h average data over annual periods at six monitoring sites. This percentile relates to the World Health Organization goal for acid gas concentrations.
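    The distributional step described here can be sketched with SciPy: fit a gamma distribution to the 24-h average concentrations and read off the 98th percentile compared against the air quality goal. The data below are synthetic placeholders; in the hybrid model the concentrations would come from the ATDL dispersion component.

```python
import numpy as np
from scipy import stats

# Hypothetical 24-h average acid gas concentrations over one year (ppb).
rng = np.random.default_rng(5)
daily_conc = rng.gamma(shape=2.0, scale=8.0, size=365)

# Fit a gamma distribution (location fixed at zero) and evaluate the 98th percentile.
shape, loc, scale = stats.gamma.fit(daily_conc, floc=0)
p98 = stats.gamma.ppf(0.98, shape, loc=loc, scale=scale)
print(f"fitted shape={shape:.2f}, scale={scale:.2f}, 98th percentile={p98:.1f} ppb")
```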

  11. A hybrid fuzzy multi-criteria decision making model for green ...

    African Journals Online (AJOL)

    A hybrid fuzzy multi-criteria decision making model for green supplier selection. ... Hence, supplier selection is a significant factor in supply chain success. ... reduce purchasing cost and lead time and improve quality and environmental issues.

  12. Hybrid Computational Model for High-Altitude Aeroassist Vehicles, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  13. Activity Recognition Using Hybrid Generative/Discriminative Models on Home Environments Using Binary Sensors

    Directory of Open Access Journals (Sweden)

    Araceli Sanchis

    2013-04-01

    Full Text Available Activities of daily living are good indicators of elderly health status, and activity recognition in smart environments is a well-known problem that has been previously addressed by several studies. In this paper, we describe the use of two powerful machine learning schemes, ANN (Artificial Neural Network) and SVM (Support Vector Machines), within the framework of HMM (Hidden Markov Model) in order to tackle the task of activity recognition in a home setting. The output scores of the discriminative models, after processing, are used as observation probabilities of the hybrid approach. We evaluate our approach by comparing these hybrid models with other classical activity recognition methods using five real datasets. We show how the hybrid models achieve significantly better recognition performance, with significance level p < 0.05, proving that the hybrid approach is better suited for the addressed domain.

  14. A new hybrid model optimized by an intelligent optimization algorithm for wind speed forecasting

    International Nuclear Information System (INIS)

    Su, Zhongyue; Wang, Jianzhou; Lu, Haiyan; Zhao, Ge

    2014-01-01

    Highlights: • A new hybrid model is developed for wind speed forecasting. • The model is based on the Kalman filter and the ARIMA. • An intelligent optimization method is employed in the hybrid model. • The new hybrid model has good performance in western China. - Abstract: Forecasting the wind speed is indispensable in wind-related engineering studies and is important in the management of wind farms. As a technique essential for the future of clean energy systems, reducing the forecasting errors related to wind speed has always been an important research subject. In this paper, an optimized hybrid method based on the Autoregressive Integrated Moving Average (ARIMA) and Kalman filter is proposed to forecast the daily mean wind speed in western China. This approach employs Particle Swarm Optimization (PSO) as an intelligent optimization algorithm to optimize the parameters of the ARIMA model, which develops a hybrid model that is best adapted to the data set, increasing the fitting accuracy and avoiding over-fitting. The proposed method is subsequently examined on the wind farms of western China, where the proposed hybrid model is shown to perform effectively and steadily

  15. The Incompatibility of Pareto Optimality and Dominant-Strategy Incentive Compatibility in Sufficiently-Anonymous Budget-Constrained Quasilinear Settings

    Directory of Open Access Journals (Sweden)

    Rica Gonen

    2013-11-01

    Full Text Available We analyze the space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal combinatorial auctions. We examine a model with multidimensional types, nonidentical items, private values and quasilinear preferences for the players with one relaxation; the players are subject to publicly-known budget constraints. We show that the space includes dictatorial mechanisms and that if dictatorial mechanisms are ruled out by a natural anonymity property, then an impossibility of design is revealed. The same impossibility naturally extends to other abstract mechanisms with an arbitrary outcome set if one maintains the original assumptions of players with quasilinear utilities, public budgets and nonnegative prices.

  16. Using Coevolution Genetic Algorithm with Pareto Principles to Solve Project Scheduling Problem under Duration and Cost Constraints

    Directory of Open Access Journals (Sweden)

    Alexandr Victorovich Budylskiy

    2014-06-01

    Full Text Available This article considers a multicriteria optimization approach using a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work lists the options available for solving this problem, and the multicriteria optimization approach is justified. The study describes the Pareto principles, which are used in the modified genetic algorithm. We identify the mathematical model of the project-scheduling problem and introduce the modified genetic algorithm, the ranking strategies, and the elitism approaches. The article includes an example.

  17. Hybrid Electric Vehicle Experimental Model with CAN Network Real Time Control

    Directory of Open Access Journals (Sweden)

    RATOI, M.

    2010-05-01

    Full Text Available In this paper an experimental model with a distributed control system of a hybrid electric vehicle is presented. A high-speed CAN communication network (1 Mbps) ensures distributed control of all the components. The modeling and control of different operating regimes are realized on an experimental test-bench of a hybrid electric vehicle. The experimental results concerning the variations of the main variables (currents, torques, speeds) are presented.

  18. The existence of fertile hybrids of closely related model earthworm species, Eisenia andrei and E. fetida.

    Directory of Open Access Journals (Sweden)

    Barbara Plytycz

    Full Text Available Lumbricid earthworms Eisenia andrei (Ea) and E. fetida (Ef) are simultaneous hermaphrodites with reciprocal insemination that are capable of self-fertilization, while the existence of hybrids between these two species was still debatable. During the present investigation fertile hybrids of Ea and Ef were detected. Virgin specimens of Ea and Ef were laboratory crossed (Ea+Ef) and their progeny was doubly identified. First, worms were identified by the species-specific maternally derived haploid mitochondrial DNA sequences of the COI gene, being either 'a' for worms hatched from Ea ova or 'f' for worms hatched from Ef ova. Second, they were identified by the diploid maternal/paternal nuclear DNA sequences of the 28s rRNA gene, being either 'AA' for Ea, 'FF' for Ef, or AF/FA for their hybrids derived from either the 'aA' or 'fF' ova, respectively. Among the offspring of Ea+Ef pairs in the F1 generation there were mainly aAA and fFF earthworms resulting from facilitated self-fertilization and some aAF hybrids from aA ova, but no fFA hybrids from fF ova. In the F2 generation resulting from aAF hybrids mated with aAA, new generations of aAA and aAF hybrids were observed, while aAF hybrids mated with fFF gave fFF and both aAF and fFA hybrids. Hybrids intercrossed together produced plenty of cocoons but no hatchlings, independently of whether aAF+aAF or aAF+fFA pairs were mated. These results indicate that the Ea and Ef species, easy to maintain in the laboratory and commonly used as convenient models in biomedicine and ecotoxicology, may also serve in studies on the molecular basis of interspecific barriers and the mechanisms of introgression and speciation. Hypothetically, their asymmetrical hybridization can be modified by some external factors.

  19. Model-on-Demand Predictive Control for Nonlinear Hybrid Systems With Application to Adaptive Behavioral Interventions

    Science.gov (United States)

    Nandola, Naresh N.; Rivera, Daniel E.

    2011-01-01

    This paper presents a data-centric modeling and predictive control approach for nonlinear hybrid systems. System identification of hybrid systems represents a challenging problem because model parameters depend on the mode or operating point of the system. The proposed algorithm applies Model-on-Demand (MoD) estimation to generate a local linear approximation of the nonlinear hybrid system at each time step, using a small subset of data selected by an adaptive bandwidth selector. The appeal of the MoD approach lies in the fact that model parameters are estimated based on a current operating point; hence estimation of locations or modes governed by autonomous discrete events is achieved automatically. The local MoD model is then converted into a mixed logical dynamical (MLD) system representation which can be used directly in a model predictive control (MPC) law for hybrid systems using multiple-degree-of-freedom tuning. The effectiveness of the proposed MoD predictive control algorithm for nonlinear hybrid systems is demonstrated on a hypothetical adaptive behavioral intervention problem inspired by Fast Track, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results demonstrate that the proposed algorithm can be useful for adaptive intervention problems exhibiting both nonlinear and hybrid character. PMID:21874087
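    A bare-bones sketch of the Model-on-Demand estimation step: at each query point, a local linear model is fitted to the nearest stored samples using distance-based weights. The fixed neighbourhood size, the inverse-distance weights and the toy data below are simplifying assumptions; the paper uses an adaptive bandwidth selector and then converts the local model into an MLD form for the predictive controller, which is not shown here.

```python
import numpy as np

def model_on_demand(X, y, x_query, k=30):
    """Fit a local linear model on the k nearest stored samples to x_query.
    Returns the local prediction and the local coefficient vector."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                 # simple inverse-distance weights
    A = np.column_stack([np.ones(k), X[idx]])
    W = np.diag(w)
    theta, *_ = np.linalg.lstsq(W @ A, W @ y[idx], rcond=None)
    return np.concatenate([[1.0], x_query]) @ theta, theta

# Hypothetical database of past (state, input) -> next-output observations
# from a nonlinear system; the local model is re-estimated at every time step.
rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(1000, 2))        # columns: current state, current input
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.01, 1000)

y_hat, theta = model_on_demand(X, y, np.array([0.3, -0.1]))
print("local prediction:", y_hat, "local coefficients:", theta)
```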

  20. Using the hybrid fuzzy goal programming model and hybrid genetic algorithm to solve a multi-objective location routing problem for infectious waste disposal

    Directory of Open Access Journals (Sweden)

    Narong Wichapa

    2017-11-01

    Originality/value: The novelty of the proposed methodologies, the hybrid fuzzy goal programming model, is the simultaneous combination of both intangible and tangible factors in order to choose new suitable locations, and the hybrid genetic algorithm can be used to determine the optimal routes, which provide a minimum number of vehicles and minimum transportation cost under the actual situation, efficiently.

  1. Synthesis of a hybrid model of the VSC FACTS devices and HVDC technologies

    Science.gov (United States)

    Borovikov, Yu S.; Gusev, A. S.; Sulaymanov, A. O.; Ufa, R. A.

    2014-10-01

    The motivation for the presented research is based on the need for development of new methods and tools for adequate simulation of FACTS devices and HVDC systems as part of real electric power systems (EPS). The research object: an alternative hybrid approach for synthesizing a VSC-FACTS and -HVDC hybrid model is proposed. The results: the VSC-FACTS and -HVDC hybrid model is designed in accordance with the presented concepts of hybrid simulation. The developed model allows us to carry out adequate real-time simulation of all the processes in HVDC, FACTS devices and the EPS as a whole, without any decomposition or limitation on their duration, and also to use the developed tool for the effective solution of design, operational and research tasks of EPS containing such devices.

  2. Synthesis of a hybrid model of the VSC FACTS devices and HVDC technologies

    International Nuclear Information System (INIS)

    Borovikov, Yu S; Gusev, A S; Sulaymanov, A O; Ufa, R A

    2014-01-01

    The motivation for the presented research is based on the need for development of new methods and tools for adequate simulation of FACTS devices and HVDC systems as part of real electric power systems (EPS). The research object: an alternative hybrid approach for synthesizing a VSC-FACTS and -HVDC hybrid model is proposed. The results: the VSC-FACTS and -HVDC hybrid model is designed in accordance with the presented concepts of hybrid simulation. The developed model allows us to carry out adequate real-time simulation of all the processes in HVDC, FACTS devices and the EPS as a whole, without any decomposition or limitation on their duration, and also to use the developed tool for the effective solution of design, operational and research tasks of EPS containing such devices

  3. Eco-efficient based logistics network design in hybrid manufacturing/ remanufacturing system in low-carbon economy

    Directory of Open Access Journals (Sweden)

    Yacan Wang

    2013-03-01

    Full Text Available Purpose: A low-carbon economy requires the pursuit of eco-efficiency, which is a win-win situation between economic and environmental efficiency. In this paper the question of trading off the economic and environmental effects embodied in eco-efficiency in hybrid manufacturing/remanufacturing logistics network design in the context of a low-carbon economy is examined. Design/methodology/approach: A multi-objective mixed integer linear programming model to find the optimal facility locations and material flow allocation is established. In the objective function, three minimization targets are set: economic cost, CO2 emission and waste generation. Through an iterative algorithm, the Pareto Boundary of the problem is obtained. Findings: The results of a numerical study show that in order to achieve a Pareto improvement over an original system, three of the critical rates (i.e. return rate, recovery rate, and cost substitute rate) should be increased. Practical implications: To meet the needs of a low-carbon economy, an iso-CO2 emission curve is plotted, on which decision makers have a series of optimal choices with the same CO2 emission but different cost and waste generation. Each choice may have a different network design, but all of these are Pareto optimal solutions, which provides a comprehensive evaluation of both economics and ecology for the decision making. Originality/value: This research chooses carbon emission as one of the three objective functions and uses Pareto sets to analyze how to balance profitability and environmental impacts in designing a remanufacturing closed-loop supply chain in the context of a low-carbon economy.

  4. Accident investigation of construction sites in Qom city using Pareto chart (2009-2012

    Directory of Open Access Journals (Sweden)

    M. H. Beheshti

    2015-07-01

    Conclusions: Employing Pareto charts as a method for analyzing and identifying accident causes can play an effective role in the management of work-related accidents and the proper allocation of funds and time.

  5. Applying a Hybrid Model: Can It Enhance Student Learning Outcomes?

    Science.gov (United States)

    Potter, Jodi

    2015-01-01

    There has been a marked increase in the use of online learning over the past decade. There remains conflict in the current body of research on the efficacy of online versus face-to-face learning in these environments. One resolution of these issues is the hybrid learning option, which is a combination of face-to-face classroom instruction with…

  6. Model-based health monitoring of hybrid systems

    CERN Document Server

    Wang, Danwei; Low, Chang Boon; Arogeti, Shai

    2013-01-01

    Offers an in-depth, comprehensive study on health monitoring for hybrid systems; includes new concepts, such as GARR, mode tracking and multiple failure prognosis; contains many examples, making the developed techniques easily understandable and accessible; introduces state-of-the-art algorithms and methodologies from experienced researchers

  7. A hybrid model for the play hysteresis operator

    Czech Academy of Sciences Publication Activity Database

    Al Janaideh, M.; Naldi, R.; Marconi, L.; Krejčí, Pavel

    2013-01-01

    Vol. 430, 1 December (2013), pp. 95-98, ISSN 0921-4526. R&D Projects: GA ČR GAP201/10/2315. Institutional support: RVO:67985840. Keywords: hysteresis * hybrid * play. Subject RIV: BA - General Mathematics. Impact factor: 1.276, year: 2013. http://www.sciencedirect.com/science/article/pii/S0921452613004146

  8. A Hybrid Method for Modeling and Solving Supply Chain Optimization Problems with Soft and Logical Constraints

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2016-01-01

    Full Text Available This paper presents a hybrid method for modeling and solving supply chain optimization problems with soft, hard, and logical constraints. The ability to implement soft and logical constraints is a very important functionality for supply chain optimization models. Such constraints are particularly useful for modeling problems resulting from commercial agreements, contracts, competition, technology, safety, and environmental conditions. Two programming and solving environments, mathematical programming (MP) and constraint logic programming (CLP), were combined in the hybrid method. This integration, hybridization, and the adequate multidimensional transformation of the problem (as a presolving method) helped to substantially reduce the search space of combinatorial models for supply chain optimization problems. The operations research MP and declarative CLP, in which constraints are modeled in different ways and different solving procedures are implemented, were linked together to use the strengths of both. This approach is particularly important for decision and combinatorial optimization models in which the objective function and constraints involve many decision variables that are summed (common in manufacturing, supply chain management, project management, and logistic problems). The ECLiPSe system with the Eplex library was proposed to implement the hybrid method. Additionally, the proposed hybrid transformed model is compared with a MILP (Mixed Integer Linear Programming) model on the same data instances. For illustrative models, its use allowed finding optimal solutions eight to one hundred times faster and reducing the size of the combinatorial problem to a significant extent.

  9. Using the hybrid fuzzy goal programming model and hybrid genetic algorithm to solve a multi-objective location routing problem for infectious waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Wichapa, Narong; Khokhajaikiat, Porntep

    2017-07-01

    Disposal of infectious waste remains one of the most serious problems in the social and environmental domains of almost every nation. Selection of new suitable locations and finding the optimal set of transport routes to transport infectious waste, namely the location routing problem for infectious waste disposal, is one of the major problems in hazardous waste management. Design/methodology/approach: Due to the complexity of this problem, the location routing problem for a case study (forty hospitals and three candidate municipalities in sub-Northeastern Thailand) was divided into two phases. The first phase is to choose suitable municipalities using the hybrid fuzzy goal programming model, which hybridizes the fuzzy analytic hierarchy process and fuzzy goal programming. The second phase is to find the optimal routes for each selected municipality using the hybrid genetic algorithm, which hybridizes the genetic algorithm and local searches including the 2-Opt-move, Insertion-move and λ-interchange-move. Findings: The results indicate that the hybrid fuzzy goal programming model can guide the selection of new suitable municipalities, and the hybrid genetic algorithm can provide the optimal routes for a fleet of vehicles effectively. Originality/value: The novelty of the proposed methodologies, the hybrid fuzzy goal programming model, is the simultaneous combination of both intangible and tangible factors in order to choose new suitable locations, and the hybrid genetic algorithm can be used to determine the optimal routes, which provide a minimum number of vehicles and minimum transportation cost under the actual situation, efficiently.

  10. Using the hybrid fuzzy goal programming model and hybrid genetic algorithm to solve a multi-objective location routing problem for infectious waste disposal

    International Nuclear Information System (INIS)

    Wichapa, Narong; Khokhajaikiat, Porntep

    2017-01-01

    Disposal of infectious waste remains one of the most serious problems in the social and environmental domains of almost every nation. Selection of new suitable locations and finding the optimal set of transport routes to transport infectious waste, namely the location routing problem for infectious waste disposal, is one of the major problems in hazardous waste management. Design/methodology/approach: Due to the complexity of this problem, the location routing problem for a case study (forty hospitals and three candidate municipalities in sub-Northeastern Thailand) was divided into two phases. The first phase is to choose suitable municipalities using the hybrid fuzzy goal programming model, which hybridizes the fuzzy analytic hierarchy process and fuzzy goal programming. The second phase is to find the optimal routes for each selected municipality using the hybrid genetic algorithm, which hybridizes the genetic algorithm and local searches including the 2-Opt-move, Insertion-move and λ-interchange-move. Findings: The results indicate that the hybrid fuzzy goal programming model can guide the selection of new suitable municipalities, and the hybrid genetic algorithm can provide the optimal routes for a fleet of vehicles effectively. Originality/value: The novelty of the proposed methodologies, the hybrid fuzzy goal programming model, is the simultaneous combination of both intangible and tangible factors in order to choose new suitable locations, and the hybrid genetic algorithm can be used to determine the optimal routes, which provide a minimum number of vehicles and minimum transportation cost under the actual situation, efficiently.

  11. Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games

    Directory of Open Access Journals (Sweden)

    Victoria Lozan

    2013-10-01

    Full Text Available The Pareto-Nash equilibrium set (PNES) is described as the intersection of graphs of efficient response mappings. The problem of computing the PNES in finite multi-objective mixed-strategy games (Pareto-Nash games) is considered, and a method for computing the PNES is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.

  12. A divide and conquer approach to determine the Pareto frontier for optimization of protein engineering experiments

    Science.gov (United States)

    He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris

    2016-01-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
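    The divide-and-conquer idea can be sketched for two minimized objectives: given two known Pareto-optimal designs, a constrained single-objective search in the strip between them either returns a new Pareto-optimal design (and the region is subdivided further) or shows the strip is empty, so the number of optimizer calls stays proportional to the size of the frontier. The brute-force `best_in_strip` below is only a stand-in for the dynamic or integer programming optimizers used by PEPFR, and the design scores are hypothetical.

```python
def best_in_strip(designs, lo1, hi1):
    """Stand-in for the problem-specific optimizer: among designs whose first
    objective lies strictly between lo1 and hi1, return the design with
    lexicographically smallest (objective2, objective1), or None if empty."""
    strip = [d for d in designs if lo1 < d[0] < hi1]
    return min(strip, key=lambda d: (d[1], d[0])) if strip else None

def expand(designs, p, q, frontier):
    """Recursively search the objective-space region between Pareto-optimal
    points p and q (p has the smaller first objective); both objectives are minimized."""
    r = best_in_strip(designs, p[0], q[0])
    if r is None or r[1] >= p[1]:
        return                                  # no new Pareto-optimal point in between
    frontier.append(r)
    expand(designs, p, r, frontier)
    expand(designs, r, q, frontier)

# Hypothetical designs scored on two competing criteria (both to be minimized),
# e.g. instability and immunogenicity.
designs = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 5), (7, 2), (9, 1), (5, 6)]
p = min(designs, key=lambda d: (d[0], d[1]))    # best design for objective 1
q = min(designs, key=lambda d: (d[1], d[0]))    # best design for objective 2
frontier = [p, q]
expand(designs, p, q, frontier)
print(sorted(frontier))    # -> [(1, 9), (2, 7), (4, 4), (7, 2), (9, 1)]
```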

  13. Strong Convergence Bound of the Pareto Index Estimator under Right Censoring

    Directory of Open Access Journals (Sweden)

    Bao Tao

    2010-01-01

    Full Text Available Let {Xn, n≥1} be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F(x) = 1 − x^(−1/γ) l_F(x) with γ > 0, where l_F(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right-censored Pareto index estimator under second-order regularly varying conditions.
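
    The abstract does not spell out the censored estimator itself, but for Pareto-type tails of this form the classical (uncensored) Hill estimator is the standard reference point for estimating the index γ. The sketch below shows it on simulated exact-Pareto data and is only an illustration, not the right-censored estimator studied in the paper.

```python
import numpy as np

def hill_estimator(x: np.ndarray, k: int) -> float:
    """Hill estimator of the Pareto index gamma from the k largest observations."""
    order = np.sort(x)              # ascending order statistics
    top = order[-k:]                # k largest values
    threshold = order[-k - 1]       # (k+1)-th largest value
    return float(np.mean(np.log(top) - np.log(threshold)))

rng = np.random.default_rng(0)
gamma_true = 0.5
# Exact Pareto sample: if U ~ Uniform(0,1), then U**(-gamma) has tail index gamma.
sample = rng.uniform(size=20_000) ** (-gamma_true)
print(hill_estimator(sample, k=500))   # should be close to 0.5
```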

  14. Modeling and Optimal Control of a Class of Warfare Hybrid Dynamic Systems Based on Lanchester (n,1) Attrition Model

    OpenAIRE

    Chen, Xiangyong; Zhang, Ancai

    2014-01-01

    Owing to the particular nature of the warfare hybrid dynamic process, a class of warfare hybrid dynamic systems is established based on the Lanchester equation in an (n,1) battle, where a heterogeneous force of n different troop types faces a homogeneous force. This model can be characterized by the interaction of continuous-time models (governed by the Lanchester equation) and discrete event systems (described by variable tactics). Furthermore, an expository discussion is presented on an optimal variable tact...

  15. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    Science.gov (United States)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  16. Pareto Efficient Solutions of Attack-Defence Trees

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Nielson, Flemming

    2015-01-01

    Attack-defence trees are a promising approach for representing threat scenarios and possible countermeasures in a concise and intuitive manner. An attack-defence tree describes the interaction between an attacker and a defender, and is evaluated by assigning parameters to the nodes, such as probability or cost of attacks and defences. In case of multiple parameters most analytical methods optimise one parameter at a time, e.g., minimise cost or maximise probability of an attack. Such methods may lead to sub-optimal solutions when optimising conflicting parameters, e.g., minimising cost while maximising probability. In order to tackle this challenge, we devise automated techniques that optimise all parameters at once. Moreover, in the case of conflicting parameters our techniques compute the set of all optimal solutions, defined in terms of Pareto efficiency. The developments are carried out...

  17. The geometry of the Pareto front in biological phenotype space

    Science.gov (United States)

    Sheftel, Hila; Shoval, Oren; Mayo, Avi; Alon, Uri

    2013-01-01

    When organisms perform a single task, selection leads to phenotypes that maximize performance at that task. When organisms need to perform multiple tasks, a trade-off arises because no phenotype can optimize all tasks. Recent work addressed this question, and assumed that the performance at each task decays with distance in trait space from the best phenotype at that task. Under this assumption, the best-fitness solutions (termed the Pareto front) lie on simple low-dimensional shapes in trait space: line segments, triangles and other polygons. The vertices of these polygons are specialists at a single task. Here, we generalize this finding, by considering performance functions of general form, not necessarily functions that decay monotonically with distance from their peak. We find that, except for performance functions with highly eccentric contours, simple shapes in phenotype space are still found, but with mildly curving edges instead of straight ones. In a wide range of systems, complex data on multiple quantitative traits, which might be expected to fill a high-dimensional phenotype space, is predicted instead to collapse onto low-dimensional shapes; phenotypes near the vertices of these shapes are predicted to be specialists, and can thus suggest which tasks may be at play. PMID:23789060

  18. Investigating actinide compounds within a hybrid MCSCF-DFT model

    International Nuclear Information System (INIS)

    Fromager, E.; Jensen, H.J.A.; Wahlin, P.; Real, F.; Wahlgren, U.

    2007-01-01

    Complete text of publication follows: Investigations of actinide chemistry with quantum chemical methods still remain a complicated task, since they require an accurate and efficient treatment of the environment (crystal or solvent) as well as relativistic and electron correlation effects. Concerning the latter, the current correlated methods, based on either Density-Functional Theory (DFT) or Wave-Function Theory (WFT), have their advantages and drawbacks. On the one hand, Kohn-Sham DFT (KS-DFT) calculates the dynamic correlation quite accurately and at a fairly low computational cost. However, it does not treat adequately the static correlation, which is significant in some actinide compounds because of the near-degeneracy of the 5f orbitals: a first example is the bent geometry obtained in KS-DFT(B3LYP) for the neptunyl ion NpO2^3+, which is found to be linear within a Multi-Configurational Self-Consistent Field (MCSCF) model [1]. A second one is the stable and bent geometry obtained in KS-DFT(B3LYP) for the plutonyl ion PuO2^4+, which disintegrates at the MCSCF level [1]. On the other hand, WFT can describe the static correlation, using for example an MCSCF model, but then an important part of the dynamic correlation has to be neglected. This can be recovered with perturbation-theory based methods such as CASPT2 or NEVPT2, but their computational complexity prevents large scale calculations. It is therefore of great interest to develop a hybrid MCSCF-DFT model which combines the best of both the WFT and DFT approaches. The merge of WFT and DFT can be achieved by splitting the two-electron interaction into long-range and short-range parts [2]. The long-range part is then treated by WFT and the short-range part by DFT. We use the so-called 'erf' long-range interaction erf(μr12)/r12, which is based on the standard error function, and where μ is a free parameter which controls the long/short-range decomposition. The newly proposed recipe for the

  19. Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement.

    Science.gov (United States)

    Yuan, Wei; You, Xinge; Xu, Jing; Leung, Henry; Zhang, Tianhang; Chen, Chun Lung Philip

    2016-01-01

    In linear cooperative spectrum sensing, the weights of secondary users and detection threshold should be optimally chosen to minimize missed detection probability and to maximize secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, here, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method to these SOO problems, nor any indication on the optimal number of Pareto solutions. Furthermore, NC has no preference over all Pareto solutions, while a designer may be only interested in some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.

  20. Diversity comparison of Pareto front approximations in many-objective optimization.

    Science.gov (United States)

    Li, Miqing; Yang, Shengxiang; Liu, Xiaohui

    2014-12-01

    Diversity assessment of Pareto front approximations is an important issue in the stochastic multiobjective optimization community. Most of the diversity indicators in the literature were designed to work for any number of objectives of Pareto front approximations in principle, but in practice many of these indicators are infeasible or not workable when the number of objectives is large. In this paper, we propose a diversity comparison indicator (DCI) to assess the diversity of Pareto front approximations in many-objective optimization. DCI evaluates relative quality of different Pareto front approximations rather than provides an absolute measure of distribution for a single approximation. In DCI, all the concerned approximations are put into a grid environment so that there are some hyperboxes containing one or more solutions. The proposed indicator only considers the contribution of different approximations to nonempty hyperboxes. Therefore, the computational cost does not increase exponentially with the number of objectives. In fact, the implementation of DCI is of quadratic time complexity, which is fully independent of the number of divisions used in grid. Systematic experiments are conducted using three groups of artificial Pareto front approximations and seven groups of real Pareto front approximations with different numbers of objectives to verify the effectiveness of DCI. Moreover, a comparison with two diversity indicators used widely in many-objective optimization is made analytically and empirically. Finally, a parametric investigation reveals interesting insights of the division number in grid and also offers some suggested settings to the users with different preferences.
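
    A stripped-down illustration of the grid idea behind DCI is sketched below: all compared approximations are mapped into one shared grid, and each approximation is credited only for the nonempty hyperboxes it covers. The published DCI assigns a graded contribution to each nonempty hyperbox rather than a simple count, so this is only a schematic of the shared-grid comparison, not the actual indicator; the two random approximation sets are hypothetical.

```python
import numpy as np

def covered_boxes(front: np.ndarray, lo: np.ndarray, width: np.ndarray) -> set:
    """Map each solution to the hyperbox it falls in and return the set of boxes."""
    idx = np.floor((front - lo) / width).astype(int)
    return {tuple(row) for row in idx}

def grid_coverage(fronts: list, divisions: int = 10) -> list:
    """Share of nonempty hyperboxes (over all fronts) that each front covers."""
    allpts = np.vstack(fronts)
    lo, hi = allpts.min(axis=0), allpts.max(axis=0)
    width = (hi - lo) / divisions + 1e-12      # avoid division by zero
    boxes = [covered_boxes(f, lo, width) for f in fronts]
    nonempty = set().union(*boxes)
    return [len(b) / len(nonempty) for b in boxes]

rng = np.random.default_rng(1)
front_a = rng.random((50, 3))   # hypothetical 3-objective approximation A
front_b = rng.random((50, 3))   # hypothetical 3-objective approximation B
print(grid_coverage([front_a, front_b]))
```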

  1. Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.

    Science.gov (United States)

    Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel

    2014-01-01

    Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
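
    For the univariate, low-dimensional case that the standard GPD covers directly, a hedged sketch of tail-based novelty scoring with scipy is shown below; the threshold quantile and the synthetic "normal" data are assumptions for illustration, and the paper's actual contribution, extending the GPD to high-dimensional multimodal models, is not reproduced here.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
train = rng.normal(size=5000)            # "normal" training data (1-D)

# Fit a GPD to exceedances over a high quantile of the training data.
u = np.quantile(train, 0.95)
excess = train[train > u] - u
shape, _, scale = genpareto.fit(excess, floc=0.0)

def novelty_prob(x: float) -> float:
    """Probability of seeing a value at least as extreme as x under the tail model."""
    if x <= u:
        return 1.0   # below the threshold the tail model does not apply; treat as "normal"
    return 0.05 * genpareto.sf(x - u, shape, loc=0.0, scale=scale)

for x in (1.0, 3.0, 4.5):
    print(x, novelty_prob(x))            # small values indicate "abnormal" points
```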

  2. Multi-Objective Optimization of a Hybrid ESS Based on Optimal Energy Management Strategy for LHDs

    Directory of Open Access Journals (Sweden)

    Jiajun Liu

    2017-10-01

    Full Text Available Energy storage systems (ESS) play an important role in the performance of mining vehicles. A hybrid ESS combining both batteries (BTs) and supercapacitors (SCs) is one of the most promising solutions. As a case study, this paper discusses the optimal hybrid ESS sizing and energy management strategy (EMS) of 14-ton underground load-haul-dump vehicles (LHDs). Three novel contributions are added to the relevant literature. First, a multi-objective optimization is formulated regarding energy consumption and the total cost of a hybrid ESS, which are the key factors of LHDs, and a battery capacity degradation model is used. During the process, dynamic programming (DP)-based EMS is employed to obtain the optimal energy consumption and hybrid ESS power profiles. Second, a 10-year life cycle cost model of a hybrid ESS for LHDs is established to calculate the total cost, including capital cost, operating cost, and replacement cost. According to the optimization results, three solutions chosen from the Pareto front are compared comprehensively, and the optimal one is selected. Finally, the optimal and battery-only options are compared quantitatively using the same objectives, and the hybrid ESS is found to be a more economical and efficient option.

  3. Modelling biochemical networks with intrinsic time delays: a hybrid semi-parametric approach

    Directory of Open Access Journals (Sweden)

    Oliveira Rui

    2010-09-01

    Full Text Available Abstract. Background: This paper presents a method for modelling dynamical biochemical networks with intrinsic time delays. Since the fundamental mechanisms leading to such delays are often unknown, non-conventional modelling approaches become necessary. Herein, a hybrid semi-parametric identification methodology is proposed in which discrete time series are incorporated into fundamental material balance models. This integration results in hybrid delay differential equations which can be applied to identify unknown cellular dynamics. Results: The proposed hybrid modelling methodology was evaluated using two case studies. The first of these deals with dynamic modelling of transcriptional factor A in mammalian cells. The protein transport from the cytosol to the nucleus introduced a delay that was accounted for by the discrete time series formulation. The second case study focused on a simple network with distributed time delays, which demonstrated that the discrete time delay formalism has broad applicability to both discrete and distributed delay problems. Conclusions: The novel hybrid model achieved significantly better prediction quality than dynamical structures without time delays, and the improvement was more pronounced the more significant the underlying system delay was. The proposed structure enabled identification of the system delays through studies of different discrete modelling delays. Further, it was shown that the hybrid discrete delay methodology is not limited to discrete delay systems. The proposed method is a powerful tool to identify time delays in ill-defined biochemical networks.

  4. Finite-Control-Set Model Predictive Control (FCS-MPC) for Islanded Hybrid Microgrids

    OpenAIRE

    Yi, Zhehan; Babqi, Abdulrahman J.; Wang, Yishen; Shi, Di; Etemadi, Amir H.; Wang, Zhiwei; Huang, Bibin

    2018-01-01

    Microgrids consisting of multiple distributed energy resources (DERs) provide a promising solution to integrate renewable energies, e.g., solar photovoltaic (PV) systems. Hybrid AC/DC microgrids leverage the merits of both AC and DC power systems. In this paper, a control strategy for islanded multi-bus hybrid microgrids is proposed based on the Finite-Control-Set Model Predictive Control (FCS-MPC) technologies. The control loops are expedited by predicting the future states and determining t...

  5. Modeling of the electron distribution based on bremsstrahlung emission during lower hybrid current drive on PLT

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, J.E.; von Goeler, S.; Bernabei, S.; Bitter, M.; Chu, T.K.; Efthimion, P.; Fisch, N.; Hooke, W.; Hosea, J.; Jobes, F.

    1985-03-01

    Lower hybrid current drive requires the generation of a high energy electron tail anisotropic in velocity. Measurements of bremsstrahlung emission produced by this tail are compared with the calculated emission from reasonable model distributions. The physical basis and the sensitivity of this modeling process are described and the plasma properties of current driven discharges which can be derived from the model are discussed.

  6. A hybrid model for the investigation of heavy ion collisions at intermediate energies

    International Nuclear Information System (INIS)

    Heide, B.M.

    1995-09-01

    The following topics were dealt with: the coupling of the Boltzmann-Uehling-Uhlenbeck (BUU) model with the Copenhagen multifragmentation model, realising a new hybrid model; application to 197Au + 197Au reactions between 100 and 250 A.MeV; calculation of the characteristics of the fragmentation system, including mass number, excitation energy, angular momenta and the two-particle correlation function.

  7. A hybrid hydrostatic and non-hydrostatic numerical model for shallow flow simulations

    Science.gov (United States)

    Zhang, Jingxin; Liang, Dongfang; Liu, Hua

    2018-05-01

    The hydrodynamics of geophysical flows in oceanic shelves, estuaries, and rivers are often studied by solving shallow water model equations. Although hydrostatic models are accurate and cost efficient for many natural flows, there are situations where the hydrostatic assumption is invalid, and a fully hydrodynamic model is necessary to increase simulation accuracy. There is growing interest in reducing the computational cost of non-hydrostatic pressure models so as to broaden their application to large-scale flows with complex geometries. This study describes a hybrid hydrostatic and non-hydrostatic model to increase the efficiency of simulating shallow water flows. The basic numerical model is a three-dimensional hydrostatic model solved by the finite volume method (FVM) applied to unstructured grids. Herein, a second-order total variation diminishing (TVD) scheme is adopted. Using a predictor-corrector method to calculate the non-hydrostatic pressure, we extended the hydrostatic model to a fully hydrodynamic model. By localising the computational domain in the corrector step for non-hydrostatic pressure calculations, a hybrid model was developed. No special treatment of mode switching was required, and the developed numerical codes are highly efficient and robust. The hybrid model is applicable to the simulation of shallow flows in which non-hydrostatic pressure is predominant only in a local region; beyond the non-hydrostatic domain, the hydrostatic model remains accurate. The applicability of the hybrid method was validated using several study cases.

  8. Modeling of the electron distribution based on bremsstrahlung emission during lower hybrid current drive on PLT

    International Nuclear Information System (INIS)

    Stevens, J.E.; von Goeler, S.; Bernabei, S.

    1985-03-01

    Lower hybrid current drive requires the generation of a high energy electron tail anisotropic in velocity. Measurements of bremsstrahlung emission produced by this tail are compared with the calculated emission from reasonable model distributions. The physical basis and the sensitivity of this modeling process are described and the plasma properties of current driven discharges which can be derived from the model are discussed

  9. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method of sensitivity analysis; a new method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension proposes using a multi-objective Pareto optimal search to locate minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account shows the importance of ensuring that any sensitivity analysis accounts for these changes.

  10. Adaptive control using a hybrid-neural model: application to a polymerisation reactor

    Directory of Open Access Journals (Sweden)

    Cubillos F.

    2001-01-01

    Full Text Available This work presents the use of a hybrid-neural model for predictive control of a plug flow polymerisation reactor. The hybrid-neural model (HNM) is based on fundamental conservation laws associated with a neural network (NN) used to model the uncertain parameters. By simulations, the performance of this approach was studied for a peroxide-initiated styrene tubular reactor. The HNM was synthesised for a CSTR reactor with a radial basis function neural net (RBFN) used to estimate the reaction rates recursively. The adaptive HNM was incorporated in two model predictive control strategies, a direct synthesis scheme and an optimum steady state scheme. Tests for servo and regulator control showed excellent behaviour following different setpoint variations, and rejecting perturbations. The good generalisation and training capacities of hybrid models, associated with the simplicity and robustness characteristics of the MPC formulations, make an attractive combination for the control of a polymerisation reactor.

  11. Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes

    Energy Technology Data Exchange (ETDEWEB)

    Felice, Maria V., E-mail: maria.felice@bristol.ac.uk [Department of Mechanical Engineering, University of Bristol, Bristol, U.K. and NDE Laboratory, Rolls-Royce plc., Bristol (United Kingdom); Velichko, Alexander, E-mail: p.wilcox@bristol.ac.uk; Wilcox, Paul D., E-mail: p.wilcox@bristol.ac.uk [Department of Mechanical Engineering, University of Bristol, Bristol (United Kingdom); Barden, Tim; Dunhill, Tony [NDE Laboratory, Rolls-Royce plc., Bristol (United Kingdom)

    2015-03-31

    Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering Matrices are used to capture the scattering behavior of the individual cracks and a discussion on the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts and these shapes are inputted into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model and the real crack shapes is then described.

  12. Several comparison result of two types of equilibrium (Pareto Schemes and Stackelberg Scheme) of game theory approach in probabilistic vendor – buyer supply chain system with imperfect quality

    Science.gov (United States)

    Setiawan, R.

    2018-05-01

    In this paper, the Economic Order Quantity (EOQ) of a vendor-buyer supply-chain model under probabilistic conditions with imperfect quality items has been analysed. The analysis is delivered using two concepts from the game theory approach, Stackelberg equilibrium and Pareto optimality, under non-cooperative and cooperative games, respectively. A further result is a comparison of the optimal outcomes of the integrated scheme and the game theory approach, based on analytical and numerical results using appropriate simulation data.

  13. New Method of Selecting Efficient Project Portfolios in the Presence of Hybrid Uncertainty

    Directory of Open Access Journals (Sweden)

    Bogdan Rębiasz

    2016-01-01

    Full Text Available A new method of selecting efficient project portfolios in the presence of hybrid uncertainty is presented. Pareto-optimal solutions are defined by an algorithm for generating project portfolios. The method presented allows us to select efficient project portfolios taking into account statistical and economic dependencies between projects when some of the parameters used in the calculation of effectiveness can be expressed in the form of an interactive possibility distribution and some in the form of a probability distribution. The procedure for processing such hybrid data combines stochastic simulation with nonlinear programming. The interactions between data are modeled by correlation matrices and interval regression. Economic dependencies are taken into account by equations balancing the production capacity of the company. The practical example presented indicates that the interaction between projects has a significant impact on the results of the calculations. (original abstract)

  14. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.

  15. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Science.gov (United States)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. From the Korean-spoken English speech recognition experiments, it is shown that ASR systems employing the state-tying and triphone-modeling level adaptation methods can reduce the average word error rates (WERs) by a relative 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.

  16. Modelling of cardiovascular system: development of a hybrid (numerical-physical) model.

    Science.gov (United States)

    Ferrari, G; Kozarski, M; De Lazzari, C; Górczyńska, K; Mimmo, R; Guaragno, M; Tosti, G; Darowski, M

    2003-12-01

    Physical models of the circulation are used for research, training and for testing of implantable active and passive circulatory prosthetic and assistance devices. However, in comparison with numerical models, they are rigid and expensive. To overcome these limitations, we have developed a model of the circulation based on the merging of a lumped-parameter physical model into a numerical one (producing therefore a hybrid). The physical model is limited to the barest essentials; in this application, developed to test the principle, it is a windkessel representing the systemic arterial tree. The lumped-parameter numerical model was developed in the LabVIEW environment and represents the pulmonary and systemic circulation (except the systemic arterial tree). Based on the equivalence between hydraulic and electrical circuits, this prototype was developed by connecting the numerical model to an electrical circuit--the physical model. This specific solution is valid mainly educationally but permits the development of software and the verification of preliminary results without using cumbersome hydraulic circuits. The interfaces between the numerical and electrical circuits are set up by a voltage controlled current generator and a voltage controlled voltage generator. The behavior of the model is analyzed based on the ventricular pressure-volume loops and on the time course of arterial and ventricular pressures and flow in different circulatory conditions. The model can represent hemodynamic relationships in different ventricular and circulatory conditions.
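
    To make the windkessel part of the physical model concrete, a minimal numerical sketch of a two-element windkessel (arterial compliance C and peripheral resistance R driven by a pulsatile inflow) is given below; the parameter values and inflow waveform are illustrative assumptions, not those of the hybrid circulatory model described above.

```python
import numpy as np

# Two-element windkessel: C * dP/dt = Q_in(t) - P / R
R = 1.0     # peripheral resistance  [mmHg*s/ml]  (illustrative value)
C = 1.5     # arterial compliance    [ml/mmHg]    (illustrative value)

def q_in(t: float, period: float = 0.8, q_max: float = 400.0) -> float:
    """Half-sine inflow during systole (first 35% of the cycle), zero in diastole."""
    phase = t % period
    systole = 0.35 * period
    return q_max * np.sin(np.pi * phase / systole) if phase < systole else 0.0

dt, t_end = 1e-3, 8.0
p = 80.0                                  # initial arterial pressure [mmHg]
pressures = []
for step in range(int(t_end / dt)):
    t = step * dt
    dpdt = (q_in(t) - p / R) / C          # explicit Euler integration
    p += dt * dpdt
    pressures.append(p)

# With dt = 1 ms and a 0.8 s cycle, the last 800 samples cover one beat.
print(f"pressure range over the last cycle: {min(pressures[-800:]):.1f}"
      f" - {max(pressures[-800:]):.1f} mmHg")
```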

  17. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF
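
    A sketch of the threshold-selection idea is given below using scipy: scan candidate thresholds, fit a GPD by maximum likelihood to the exceedances of each, and keep the threshold with the smallest AIC. The paper's exact procedure may differ, and comparing AIC values computed on different numbers of exceedances is itself only an approximation, so this is an illustration of the idea rather than the published method; the heavy-tailed Student-t sample is a stand-in for real data.

```python
import numpy as np
from scipy.stats import genpareto

def aic_for_threshold(data: np.ndarray, u: float) -> float:
    """AIC of a GPD (shape, scale; location fixed at 0) fitted to exceedances over u."""
    excess = data[data > u] - u
    shape, _, scale = genpareto.fit(excess, floc=0.0)
    loglik = np.sum(genpareto.logpdf(excess, shape, loc=0.0, scale=scale))
    return 2 * 2 - 2 * loglik             # two free parameters: shape and scale

rng = np.random.default_rng(3)
sample = rng.standard_t(df=4, size=5000)  # heavy-tailed sample as stand-in data

candidates = np.quantile(sample, np.linspace(0.80, 0.98, 10))
aics = [aic_for_threshold(sample, u) for u in candidates]
best = candidates[int(np.argmin(aics))]
print(f"selected threshold: {best:.3f}")
```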

  18. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.

  19. Mechanical Properties of Graphene Nanoplatelet/Carbon Fiber/Epoxy Hybrid Composites: Multiscale Modeling and Experiments

    Science.gov (United States)

    Hadden, C. M.; Klimek-McDonald, D. R.; Pineda, E. J.; King, J. A.; Reichanadter, A. M.; Miskioglu, I.; Gowtham, S.; Odegard, G. M.

    2015-01-01

    Because of the relatively high specific mechanical properties of carbon fiber/epoxy composite materials, they are often used as structural components in aerospace applications. Graphene nanoplatelets (GNPs) can be added to the epoxy matrix to improve the overall mechanical properties of the composite. The resulting GNP/carbon fiber/epoxy hybrid composites have been studied using multiscale modeling to determine the influence of GNP volume fraction, epoxy crosslink density, and GNP dispersion on the mechanical performance. The hierarchical multiscale modeling approach developed herein includes Molecular Dynamics (MD) and micromechanical modeling, and it is validated with experimental testing of the same hybrid composite material system. The results indicate that the multiscale modeling approach is accurate and provides physical insight into the composite mechanical behavior. Also, the results quantify the substantial impact of GNP volume fraction and dispersion on the transverse mechanical properties of the hybrid composite, while the effect on the axial properties is shown to be insignificant.

  20. Mechanical Properties of Graphene Nanoplatelet Carbon Fiber Epoxy Hybrid Composites: Multiscale Modeling and Experiments

    Science.gov (United States)

    Hadden, Cameron M.; Klimek-McDonald, Danielle R.; Pineda, Evan J.; King, Julie A.; Reichanadter, Alex M.; Miskioglu, Ibrahim; Gowtham, S.; Odegard, Gregory M.

    2015-01-01

    Because of the relatively high specific mechanical properties of carbon fiber/epoxy composite materials, they are often used as structural components in aerospace applications. Graphene nanoplatelets (GNPs) can be added to the epoxy matrix to improve the overall mechanical properties of the composite. The resulting GNP/carbon fiber/epoxy hybrid composites have been studied using multiscale modeling to determine the influence of GNP volume fraction, epoxy crosslink density, and GNP dispersion on the mechanical performance. The hierarchical multiscale modeling approach developed herein includes Molecular Dynamics (MD) and micromechanical modeling, and it is validated with experimental testing of the same hybrid composite material system. The results indicate that the multiscale modeling approach is accurate and provides physical insight into the composite mechanical behavior. Also, the results quantify the substantial impact of GNP volume fraction and dispersion on the transverse mechanical properties of the hybrid composite, while the effect on the axial properties is shown to be insignificant.

  1. Modeling and Simulation of Renewable Hybrid Power System using Matlab Simulink Environment

    Directory of Open Access Journals (Sweden)

    Cristian Dragoş Dumitru

    2010-12-01

    Full Text Available The paper presents the modeling of a solar-wind-hydroelectric hybrid system in the Matlab/Simulink environment. The application is useful for the analysis and simulation of a real hybrid solar-wind-hydroelectric system connected to a public grid. The application is built on a modular architecture to facilitate easy study of the influence of each component module. Blocks such as the wind model, solar model, hydroelectric model, energy conversion and load are implemented, and the results of the simulation are also presented. As an example, one of the most important studies is the behavior of the hybrid system, which allows renewable and time-varying energy sources to be employed while providing a continuous supply. The application represents a useful tool in research activity and also in teaching.

  2. Coupled thermal model of photovoltaic-thermoelectric hybrid panel for sample cities in Europe

    DEFF Research Database (Denmark)

    Rezaniakolaei, Alireza; Sera, Dezso; Rosendahl, Lasse Aistrup

    2016-01-01

    In general, modeling of photovoltaic-thermoelectric (PV/TEG) hybrid panels has been mostly simplified and disconnected from the actual ambient conditions and thermal losses from the panel. In this study, a thermally coupled model of a PV/TEG panel is established to precisely predict performance of the hybrid system under different weather conditions. The model takes into account solar irradiation, wind speed and ambient temperature as well as convective and radiated heat losses from the front and rear surfaces of the panel. The model is developed for three sample cities in Europe with different weather conditions. The results show that radiated heat loss from the front surface and the convective heat loss due to the wind speed are the most critical parameters for the performance of the hybrid panel. The results also indicate that, with existing thermoelectric materials, the power...

  3. Non-Poisson counting statistics of a hybrid G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P.

    2007-01-01

    The counting statistics of a G-M counter with a considerable dead time event rate deviates from Poisson statistics. Important characteristics such as observed counting rates as a function of true counting rates, variances and interval distributions were analyzed for three dead time models, non-paralyzable, paralyzable and hybrid, with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreement with the models in observed counting rates and variances. It was found through GMSIM simulations that the interval distribution for the hybrid model shows three distinctive regions: a complete cutoff region for the duration of the total dead time, a degraded exponential region and an enhanced exponential region. By measuring the cutoff and the duration of the degraded exponential region from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model.
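
    The two classical limiting cases can be reproduced with a few lines of Monte Carlo, in the spirit of (but much simpler than) the GMSIM simulator referenced above; the hybrid model, which chains a non-paralyzable and a paralyzable dead time, is not reproduced here. The well-known analytical expectations are m = n/(1 + n*tau) for the non-paralyzable case and m = n*exp(-n*tau) for the paralyzable case, with n the true rate, m the observed rate and tau the dead time; the rate and dead time values below are arbitrary.

```python
import numpy as np

def observed_rate(true_rate: float, tau: float, t_total: float,
                  paralyzable: bool, seed: int = 0) -> float:
    """Simulate a Poisson event train and count the events that survive the dead time."""
    rng = np.random.default_rng(seed)
    n_events = rng.poisson(true_rate * t_total)
    times = np.sort(rng.uniform(0.0, t_total, n_events))
    counted, dead_until = 0, -np.inf
    for t in times:
        if t >= dead_until:
            counted += 1
            dead_until = t + tau
        elif paralyzable:
            dead_until = t + tau       # every event extends the dead period
    return counted / t_total

n, tau = 2.0e4, 1.0e-4                  # true rate 20 kcps, 100 microsecond dead time
print("non-paralyzable:", observed_rate(n, tau, 10.0, False), "theory:", n / (1 + n * tau))
print("paralyzable:    ", observed_rate(n, tau, 10.0, True),  "theory:", n * np.exp(-n * tau))
```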

  4. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  5. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  6. Fluid Petri Nets and hybrid model-checking: a comparative case study

    International Nuclear Information System (INIS)

    Gribaudo, M.; Horvath, A.; Bobbio, A.; Tronci, E.; Ciancamerla, E.; Minichino, M.

    2003-01-01

    The modeling and analysis of hybrid systems is a recent and challenging research area which is currently dominated by two main lines: a functional analysis based on the description of the system in terms of discrete state (hybrid) automata (whose goal is to ascertain conformity and reachability properties), and a stochastic analysis (whose aim is to provide performance and dependability measures). This paper investigates a unifying view between formal methods and stochastic methods by proposing an analysis methodology of hybrid systems based on Fluid Petri Nets (FPNs). FPNs can be analyzed directly using appropriate tools. Our paper shows that the same FPN model can be fed to different functional analyzers for model checking. In order to extensively explore the capability of the technique, we have converted the original FPN into the input languages of discrete, hybrid and stochastic model checkers. In this way, a first comparison of the modeling power of well-known tools can be carried out. Our approach is illustrated by means of a 'real world' hybrid system: the temperature control system of a co-generative plant.

  7. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    Science.gov (United States)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant to multi-site supply chain where multiple plants are serving customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer satisfaction demand level is developed. The proposed solution approach yields to a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
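
    A minimal sketch of the selection step is shown below: derive criterion weights from a pairwise comparison matrix via the principal eigenvector (the standard AHP prioritisation), then rank normalised Pareto-optimal solutions by their weighted score. The comparison matrix, criteria and solution values are hypothetical; they are not taken from the textile and apparel case study in the article.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Criterion weights = normalised principal eigenvector of the comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    principal = np.abs(principal)
    return principal / principal.sum()

# Hypothetical pairwise comparisons among three criteria: cost, quality, service level.
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
weights = ahp_weights(pairwise)

# Hypothetical Pareto-optimal solutions, rescaled to [0, 1] with "higher is better".
solutions = np.array([[0.9, 0.4, 0.3],
                      [0.6, 0.7, 0.5],
                      [0.2, 0.9, 0.9]])
scores = solutions @ weights
print("weights:", np.round(weights, 3))
print("preferred Pareto solution:", int(np.argmax(scores)))
```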

  8. Modeling hydraulic regenerative hybrid vehicles using AMESim and Matlab/Simulink

    Science.gov (United States)

    Lynn, Alfred; Smid, Edzko; Eshraghi, Moji; Caldwell, Niall; Woody, Dan

    2005-05-01

    This paper presents the overview of the simulation modeling of a hydraulic system with regenerative braking used to improve vehicle emissions and fuel economy. Two simulation software packages were used together to enhance the simulation capability for fuel economy results and development of vehicle and hybrid control strategy. AMESim, a hydraulic simulation software package modeled the complex hydraulic circuit and component hardware and was interlinked with a Matlab/Simulink model of the vehicle, engine and the control strategy required to operate the vehicle and the hydraulic hybrid system through various North American and European drive cycles.

  9. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

    Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments assuming gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach shows better performance than a single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important for an environmental impact assessment for public health. In this study, the performance of a hybrid model combining the industrial source complex (ISC) model and an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data assuming the accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model approximate the measured data. Judging from the higher correlation coefficients between the measured and the calculated values, the hybrid modeling approach could be an appropriate technique for improving the modeling capability to predict the air concentrations of radioactive materials.

  10. A hybrid modeling with data assimilation to evaluate human exposure level

    Science.gov (United States)

    Koo, Y. S.; Cheong, H. K.; Choi, D.; Kim, A. L.; Yun, H. Y.

    2015-12-01

    Exposure models are designed to better represent human contact with PM (particulate matter) and other air pollutants such as CO, SO2, O3, and NO2. The exposure concentrations of the air pollutants are determined by long-range transport on global and regional scales from Europe and China as well as by local emissions from urban and road vehicle sources. To assess the exposure level in detail, the multi-scale influence from background to local sources should be considered. A hybrid air quality modeling methodology combining a grid-based chemical transport model with a local plume dispersion model was used to provide spatially and temporally resolved air quality concentrations for human exposure levels in Korea. In the hybrid modeling approach, concentrations from a grid-based chemical transport model and a local plume dispersion model are added to provide contributions from photochemical interactions, long-range (regional) transport and local-scale dispersion. CAMx (Comprehensive Air quality Model with extensions) was used for the background concentrations from anthropogenic and natural emissions in East Asia including Korea, while the road dispersion from vehicle emissions was calculated with the CALPUFF model. The total exposure level of the pollutants was finally assessed by summing the background and road contributions. In the hybrid modeling, a data assimilation method based on optimal interpolation was applied, using the air quality data from the air quality monitoring stations in Korea, to overcome the discrepancies between the model-predicted concentrations and observations. The spatial resolution of the hybrid model was 50 m for the Seoul Metropolitan Area. This example clearly demonstrates that the exposure level can be estimated at a fine scale for exposure assessment by using the hybrid modeling approach with data assimilation.
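
    A minimal sketch of the optimal-interpolation analysis step is shown below: the analysis is x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^-1, where B and R are the background and observation error covariances and H maps the model grid to the station locations. The grid, station indices and covariance parameters are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

# Background (model) concentrations on a small 1-D grid of receptor locations.
grid_x = np.linspace(0.0, 50.0, 11)        # grid coordinates [km], illustrative
background = np.full(grid_x.size, 40.0)    # model-predicted concentration [ug/m3]

# Two station observations and the operator H that picks the matching grid points.
obs = np.array([55.0, 30.0])
obs_idx = np.array([2, 8])
H = np.zeros((obs.size, grid_x.size))
H[np.arange(obs.size), obs_idx] = 1.0

# Background error covariance: Gaussian correlation with a 15 km length scale (assumed).
sigma_b, length, sigma_o = 8.0, 15.0, 4.0
dist = np.abs(grid_x[:, None] - grid_x[None, :])
B = sigma_b**2 * np.exp(-(dist / length) ** 2)
R = sigma_o**2 * np.eye(obs.size)

# Optimal interpolation update: x_a = x_b + K (y - H x_b), K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
analysis = background + K @ (obs - H @ background)
print(np.round(analysis, 1))
```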

  11. Modeling and performance analysis of a concentrated photovoltaic–thermoelectric hybrid power generation system

    International Nuclear Information System (INIS)

    Lamba, Ravita; Kaushik, S.C.

    2016-01-01

    Highlights: • A thermodynamic model of a concentrated photovoltaic–thermoelectric system is analysed. • The Thomson effect reduces the power output of the PV, TE and hybrid PV–TEG system. • Effects of thermocouple number, irradiance, PV and TE current have been studied. • The optimum concentration ratio for maximum power output has been found. • The overall efficiency and power output of the hybrid PV–TEG system have been improved. - Abstract: In this study, a thermodynamic model for analysing the performance of a concentrated photovoltaic–thermoelectric generator (CPV–TEG) hybrid system including the Thomson effect in conjunction with Seebeck, Joule and Fourier heat conduction effects has been developed and simulated in the MATLAB environment. The expressions for calculating the temperature of the photovoltaic (PV) module and the hot and cold sides of the thermoelectric (TE) module are derived analytically as well. The effects of concentration ratio, number of thermocouples in the TE module, solar irradiance, PV module current and TE module current on the power output and efficiency of the PV, TEG and hybrid PV–TEG system have been studied. The optimum concentration ratio corresponding to maximum power output of the hybrid system has been found. It has been observed that by considering the Thomson effect in the TEG module, the power output of the PV, TE and hybrid PV–TEG systems decreases, and at C = 1 and 5 it reduces the power output of the hybrid system by 0.7% and 4.78%, respectively. The results of this study may provide a basis for performance optimization of a practical irreversible CPV–TEG hybrid system.

  12. Diversity shrinkage: Cross-validating pareto-optimal weights to enhance diversity via hiring practices.

    Science.gov (United States)

    Song, Q Chelsea; Wee, Serena; Newman, Daniel A

    2017-12-01

    To reduce adverse impact potential and improve diversity outcomes from personnel selection, one promising technique is De Corte, Lievens, and Sackett's (2007) Pareto-optimal weighting strategy. De Corte et al.'s strategy has been demonstrated on (a) a composite of cognitive and noncognitive (e.g., personality) tests (De Corte, Lievens, & Sackett, 2008) and (b) a composite of specific cognitive ability subtests (Wee, Newman, & Joseph, 2014). Both studies illustrated how Pareto-weighting (in contrast to unit weighting) could lead to substantial improvement in diversity outcomes (i.e., diversity improvement), sometimes more than doubling the number of job offers for minority applicants. The current work addresses a key limitation of the technique-the possibility of shrinkage, especially diversity shrinkage, in the Pareto-optimal solutions. Using Monte Carlo simulations, sample size and predictor combinations were varied and cross-validated Pareto-optimal solutions were obtained. Although diversity shrinkage was sizable for a composite of cognitive and noncognitive predictors when sample size was at or below 500, diversity shrinkage was typically negligible for a composite of specific cognitive subtest predictors when sample size was at least 100. Diversity shrinkage was larger when the Pareto-optimal solution suggested substantial diversity improvement. When sample size was at least 100, cross-validated Pareto-optimal weights typically outperformed unit weights-suggesting that diversity improvement is often possible, despite diversity shrinkage. Implications for Pareto-optimal weighting, adverse impact, sample size of validation studies, and optimizing the diversity-job performance tradeoff are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Image Restoration Based on the Hybrid Total-Variation-Type Model

    OpenAIRE

    Shi, Baoli; Pang, Zhi-Feng; Yang, Yu-Fei

    2012-01-01

    We propose a hybrid total-variation-type model for the image restoration problem based on combining the advantages of the ROF model with the LLT model. Since the two L1-norm terms in the proposed model make it difficult to solve directly using classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method call...

  14. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro-depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for the spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in the XS, which is referred to as spectral history effects. Various methods to account for the spectral history effects, such as the spectral index, burnup-averaged operational parameters and micro-depletion, were implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and 239Pu concentration (denoted as Pu-correction), was proposed, implemented in the nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with an XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.

  15. PUMP: analog-hybrid reactor coolant hydraulic transient model

    International Nuclear Information System (INIS)

    Grandia, M.R.

    1976-03-01

    The PUMP hybrid computer code simulates flow and pressure distribution; it is used to determine the real-time response to starting and tripping all combinations of PWR reactor coolant pumps in a closed, pressurized, four-pump, two-loop primary system. The simulation includes the description of flow, pressure, speed, and torque relationships derived through pump affinity laws and from vendor-supplied pump zone maps to describe pump dynamic characteristics. The program affords great flexibility in the type of transients that can be simulated.

  16. Improving Hybrid III injury assessment in steering wheel rim to chest impacts using responses from finite element Hybrid III and human body model.

    Science.gov (United States)

    Holmqvist, Kristian; Davidsson, Johan; Mendoza-Vazquez, Manuel; Rundberget, Peter; Svensson, Mats Y; Thorn, Stefan; Törnvall, Fredrik

    2014-01-01

    The main aim of this study was to improve the quality of injury risk assessments in steering wheel rim to chest impacts when using the Hybrid III crash test dummy in frontal heavy goods vehicle (HGV) collision tests. Correction factors for chest injury criteria were calculated as the model chest injury parameter ratios between the finite element (FE) Hybrid III, evaluated in relevant load cases, and the Total Human Model for Safety (THUMS). These factors are proposed to be used to compensate Hybrid III measurements in crash tests where steering wheel rim to chest impacts occur. The study was conducted in an FE environment using an FE-Hybrid III model and the THUMS. Two impactor shapes were used, a circular hub and a long, thin horizontal bar. Chest impacts at velocities ranging from 3.0 to 6.0 m/s were simulated at 3 impact height levels. Ratios between FE-Hybrid III and THUMS chest injury parameters, maximum chest compression (Cmax) and maximum viscous criterion (VCmax), were calculated for the different chest impact conditions to form a set of correction factors. The definition of the correction factor is based on the assumption that the response from a circular hub impact to the middle of the chest is well characterized and that injury risk measures are independent of impact height. The current limits for these chest injury criteria were used as a basis to develop correction factors that compensate for the limitations in biofidelity of the Hybrid III in steering wheel rim to chest impacts. The hub and bar impactors produced considerably higher Cmax and VCmax responses in the THUMS compared to the FE-Hybrid III. The correction factor for the responses of the FE-Hybrid III showed that the criteria responses for the bar impactor were consistently overestimated. Ratios based on Hybrid III and THUMS responses provided correction factors for the Hybrid III responses ranging from 0.84 to 0.93. These factors can be used to estimate Cmax and VCmax values when the Hybrid III is

  17. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  18. A four-stage hybrid model for hydrological time series forecasting.

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
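
    To make the 'decompose, predict components, ensemble' stages concrete, here is a minimal sketch that assumes the third-party PyEMD package (EMD-signal) for the EEMD step and substitutes a plain lag-based least-squares predictor for the paper's RBF neural network; the denoising stage and the LNN ensemble weights are omitted.

```python
# Sketch of decompose -> predict components -> sum forecasts. Assumes the PyEMD
# package (pip install EMD-signal); the per-component model is a simple AR fit,
# standing in for the paper's RBFNN.
import numpy as np
from PyEMD import EEMD

def lag_matrix(series, lags):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

def predict_next(series, lags=4):
    """One-step-ahead forecast of a component via ordinary least squares."""
    X, y = lag_matrix(series, lags)
    coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)
    last = np.append(series[-lags:], 1.0)
    return last @ coef

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 600)
flow = 5 + np.sin(t) + 0.5 * np.sin(5 * t) + 0.2 * rng.standard_normal(t.size)

# Stage 2: decompose the series into oscillatory modes (eIMFs).
imfs = EEMD(trials=50).eemd(flow)

# Stage 3: predict each component one step ahead; Stage 4: sum the component forecasts.
forecast = sum(predict_next(c) for c in imfs)
print(f"one-step-ahead forecast: {forecast:.3f}")
```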

  19. Examining the Etiology of Reading Disability as Conceptualized by the Hybrid Model

    Science.gov (United States)

    Erbeli, Florina; Hart, Sara A.; Wagner, Richard K.; Taylor, Jeanette

    2018-01-01

    A fairly recent definition of reading disability (RD) is that in the form of a hybrid model. The model views RD as a latent construct that is manifested through various observable unexpected impairments in reading-related skills and through inadequate response to intervention. The current report evaluated this new conceptualization of RD from an…

  20. Assessing the Therapeutic Environment in Hybrid Models of Treatment: Prisoner Perceptions of Staff

    Science.gov (United States)

    Kubiak, Sheryl Pimlott

    2009-01-01

    Hybrid treatment models within prisons are staffed by both criminal justice and treatment professionals. Because these models may be indicative of future trends, examining the perceptions of prisoners/participants may provide important information. This study examines the perceptions of male and female inmates in three prisons, comparing those in…

  1. New Hybrid Variational Recovery Model for Blurred Images with Multiplicative Noise

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Zeng, Tieyong

    2013-01-01

    A new hybrid variational model for recovering blurred images in the presence of multiplicative noise is proposed. Inspired by previous work on multiplicative noise removal, an I-divergence technique is used to build a strictly convex model under a condition that ensures the uniqueness...

  2. Rapidity distributions of hadrons in the HydHSD hybrid model

    Energy Technology Data Exchange (ETDEWEB)

    Khvorostukhin, A. S., E-mail: hvorost@theor.jinr.ru; Toneev, V. D. [Joint Institute for Nuclear Research (Russian Federation)

    2017-03-15

    A multistage hybrid model intended for describing heavy-ion interactions in the energy region of the NICA collider under construction in Dubna is proposed. The model combines the initial, fast, interaction stage described by the model of hadron string dynamics (HSD) with the subsequent evolution that the expanding system formed at the first stage experiences at the second stage, which is treated on the basis of ideal hydrodynamics; after the completion of the second stage, the particles involved may still undergo rescattering (third interaction stage). The model admits three freeze-out scenarios: isochronous, isothermal, and isoenergetic. Generally, the HydHSD hybrid model developed in the present study provides fairly good agreement with available experimental data on proton rapidity spectra. It is shown that, within this hybrid model, the two-humped structure of proton rapidity distributions can be obtained either by increasing the freeze-out temperature and energy density or by delaying the transition to the hydrodynamic stage. Although the proposed hybrid model reproduces rapidity spectra of protons, it is unable to describe rapidity distributions of pions, systematically underestimating their yield. It is necessary to refine the model by including viscosity effects at the hydrodynamic stage of evolution of the system and by considering the third interaction stage in more detail.

  3. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.

  4. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics to variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without a statistical background to identify suitable decision trees confidently and efficiently.
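
    A generic sketch of the candidate-generation and Pareto-filtering step that this kind of selection relies on is given below; it uses scikit-learn decision trees with leaf count as a simple interpretability proxy and is not the TreePOD tool itself.

```python
# Generic illustration (not TreePOD): sample tree parameters, score accuracy vs.
# size, and keep the Pareto-optimal candidates.
from itertools import product
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = []
for depth, leaf in product([2, 3, 4, 6, 8, None], [1, 5, 10, 20]):
    tree = DecisionTreeClassifier(max_depth=depth, min_samples_leaf=leaf,
                                  random_state=0).fit(X_tr, y_tr)
    candidates.append((tree.score(X_te, y_te), tree.get_n_leaves(), depth, leaf))

# Pareto filter: keep trees for which no other tree is both more accurate and smaller.
pareto = [c for c in candidates
          if not any(o[0] >= c[0] and o[1] <= c[1] and (o[0] > c[0] or o[1] < c[1])
                     for o in candidates)]
for acc, leaves, depth, min_leaf in sorted(pareto):
    print(f"accuracy={acc:.3f}  leaves={leaves}  max_depth={depth}  min_samples_leaf={min_leaf}")
```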

  5. Data Fusion Modeling for an RT3102 and Dewetron System Application in Hybrid Vehicle Stability Testing

    Directory of Open Access Journals (Sweden)

    Zhibin Miao

    2015-08-01

    More and more hybrid electric vehicles are on the road, since they offer advantages such as energy savings and better active safety performance. Hybrid vehicles have two or more power driving systems and frequently switch between operating modes, so controlling stability is very important. In this work, a two-stage Kalman algorithm method is used to fuse data in hybrid vehicle stability testing. First, the RT3102 navigation system and Dewetron system are introduced. Second, a data fusion model is proposed based on the Kalman filter. Then, this model is simulated and tested on a sample vehicle, using Carsim and Simulink software to verify the results. The results showed the merits of this model.
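
    The sketch below shows the fusion principle with a textbook one-dimensional Kalman filter combining two noisy sensor streams; the paper's two-stage algorithm and the RT3102/Dewetron interfaces are not reproduced, and the noise levels are invented.

```python
# Textbook 1-D Kalman fusion of two sensors measuring the same signal; only the
# fusion principle is illustrated, not the paper's two-stage algorithm.
import numpy as np

def fuse(z1, z2, r1, r2, q=1e-3):
    """z1, z2: measurement sequences; r1, r2: their noise variances; q: process noise."""
    x, p = z1[0], 1.0                      # initial state estimate and variance
    estimates = []
    for m1, m2 in zip(z1, z2):
        p += q                             # predict (random-walk state model)
        for m, r in ((m1, r1), (m2, r2)):  # sequentially update with each sensor
            k = p / (p + r)                # Kalman gain
            x += k * (m - x)
            p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 6, 300))
sensor_a = truth + 0.10 * rng.standard_normal(truth.size)   # less noisy channel
sensor_b = truth + 0.30 * rng.standard_normal(truth.size)   # noisier channel
fused = fuse(sensor_a, sensor_b, r1=0.10**2, r2=0.30**2)
print("RMSE fused :", np.sqrt(np.mean((fused - truth) ** 2)).round(4))
print("RMSE sensor:", np.sqrt(np.mean((sensor_a - truth) ** 2)).round(4))
```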

  6. Modular modeling and simulation of hybrid power trains; Modulare Modellbildung und Simulation von hybriden Antriebstraengen

    Energy Technology Data Exchange (ETDEWEB)

    Kelz, Gerald; Hirschberg, Wolfgang [Inst. fuer Fahrzeugtechnik, Technische Univ. Graz (Austria)

    2009-07-01

    The power train of a hybrid vehicle is considerably more complex than that of conventional vehicles. Whilst the topology of a conventional vehicle is normally fixed, the arrangement of the power train components for innovative propulsion systems is a flexible one. The aim is to find those topologies and configurations which are optimal for the intended use. Fuel consumption potentials can be derived with the aid of vehicle longitudinal dynamics simulation. Mostly these simulations are carried out using commercial software which is optimized for the standard topology and does not offer the flexibility to calculate arbitrary topologies. This article covers the modular modeling and the fuel consumption simulation of complex hybrid power trains for topology analysis. A component library for the development of arbitrary hybrid propulsion systems is introduced. The focus lies on efficient and fast modeling which provides exact simulation results. Several models of power train components are introduced. (orig.)

  7. Design and fabrication of a hybrid maglev model employing PML and SML

    Science.gov (United States)

    Sun, R. X.; Zheng, J.; Zhan, L. J.; Huang, S. Y.; Li, H. T.; Deng, Z. G.

    2017-10-01

    A hybrid maglev model combining permanent magnet levitation (PML) and superconducting magnetic levitation (SML) was designed and fabricated to explore a heavy-load levitation system that combines passive stability with a simple structure. In this system, the PML was designed to levitate the load, and the SML was introduced to guarantee the stability. In order to realize the different working gaps of the two maglev components, linear bearings were applied to connect the PML layer (for load) and the SML layer (for stability) of the hybrid maglev model. Experimental results indicate that the hybrid maglev model possesses the advantages of heavy-load capability and passive stability at the same time. This work presents a possible way to realize a heavy-load passive maglev concept.

  8. Seasonal and Non-Seasonal Generalized Pareto Distribution to Estimate Extreme Significant Wave Height in The Banda Sea

    Science.gov (United States)

    Nursamsiah; Nugroho Sugianto, Denny; Suprijanto, Jusup; Munasik; Yulianto, Bambang

    2018-02-01

    Information on extreme wave height return levels is required for maritime planning and management. A recommended method for analyzing extreme waves is the Generalized Pareto Distribution (GPD). Seasonal variation is often considered in extreme wave models. This research aims to identify the best GPD model by considering the seasonal variation of the extreme waves. Using the 95th percentile as the threshold of extreme significant wave height, seasonal and non-seasonal GPD models were fitted. The Kolmogorov-Smirnov test was applied to assess the goodness of fit of the GPD models. The return values from the seasonal and non-seasonal GPD were compared, using the definition of the return value as the criterion. The Kolmogorov-Smirnov test results show that the GPD fits the data very well for both the seasonal and non-seasonal models. The seasonal return values give better information about the wave height characteristics.
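
    A minimal peaks-over-threshold sketch of the non-seasonal case is given below, using scipy's genpareto on synthetic wave heights, a 95th-percentile threshold as in the study, and the standard m-observation return-level formula; the observations-per-year figure is an assumption.

```python
# Non-seasonal peaks-over-threshold GPD fit and return level (Coles, 2001, eq. 4.12).
# The wave data are synthetic; the sampling frequency is an assumed example value.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
hs = rng.weibull(1.6, size=20_000) * 1.5          # synthetic significant wave heights (m)

u = np.quantile(hs, 0.95)                         # 95th-percentile threshold
excesses = hs[hs > u] - u
zeta_u = excesses.size / hs.size                  # exceedance probability

# Fit the GPD to the threshold excesses (location fixed at 0).
xi, _, sigma = genpareto.fit(excesses, floc=0)

def return_level(m):
    """Level exceeded on average once every m observations."""
    if abs(xi) < 1e-6:
        return u + sigma * np.log(m * zeta_u)
    return u + sigma / xi * ((m * zeta_u) ** xi - 1)

obs_per_year = 2920                               # e.g. 3-hourly records (assumed)
for years in (10, 50, 100):
    print(f"{years:>3}-year return level: {return_level(years * obs_per_year):.2f} m")
```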

  9. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Directory of Open Access Journals (Sweden)

    René Felix Reinhart

    2017-02-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
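
    The sketch below illustrates the hybrid idea on a toy problem: an analytical two-link forward-kinematics model with deliberately wrong link lengths plus a learned error model fitted to its residuals; it is a generic scikit-learn illustration, not the soft or industrial robot platforms of the paper.

```python
# Hybrid "analytical model + learned error model" sketch on a toy 2-link arm;
# not the robots or learning machinery used in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fk(q, l1, l2):
    """Planar 2-link forward kinematics: end-effector x coordinate."""
    return l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])

rng = np.random.default_rng(4)
q = rng.uniform(-np.pi, np.pi, size=(2000, 2))

x_true = fk(q, 1.00, 0.70) + 0.01 * rng.standard_normal(len(q))   # "measured" plant
x_analytic = fk(q, 0.95, 0.75)                                     # imperfect analytical model

# Learn only the residual between plant and analytical model.
error_model = GradientBoostingRegressor().fit(q[:1500], (x_true - x_analytic)[:1500])

x_hybrid = x_analytic[1500:] + error_model.predict(q[1500:])
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("analytical-only RMSE:", rmse(x_analytic[1500:], x_true[1500:]).round(4))
print("hybrid RMSE        :", rmse(x_hybrid, x_true[1500:]).round(4))
```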

  10. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.

  11. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697

  12. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    In this paper, a proposed car-following driver model, which takes into account features of both the compensatory and anticipatory models representing human pedal operation, has been verified by driving simulator experiments with several real drivers. The comparison of computer simulations performed with the identified model parameters against the experimental results confirms the correctness of this mathematical driver model and of the identified model parameters. Then the driver model is joined to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on the vehicle dynamics response and fuel economy. Finally, the major driver parameters involved in the longitudinal control are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car following, Driving simulator, Hybrid electric vehicle

  13. Seeking deep convective parameter updates that improve tropical Pacific climatology in CESM using Pareto fronts

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2016-12-01

    Despite increasing complexity and process representation in global climate models (GCMs), accurate climate simulation is limited by uncertainties in sub-grid scale model physics, where cloud processes and precipitation occur, and the interaction with large-scale dynamics. Identifying highly sensitive parameters and constraining them against observations is therefore a valuable step in narrowing uncertainty. However, changes in parameterizations often improve some variables or aspects of the simulation while degrading others. This analysis addresses means of improving GCM simulation of present-day tropical Pacific climate in the face of these tradeoffs. Focusing on the deep convection scheme in the fully coupled Community Earth System Model (CESM) version 1, four parameters were systematically sampled, and a metamodel or model emulator was used to reconstruct the parameter space of this perturbed physics ensemble. Using this metamodel, a Pareto front is constructed to visualize multiobjective tradeoffs in model performance, and results highlight the most important aspects of model physics as well as the most sensitive parameter ranges. For example, parameter tradeoffs arise in the tropical Pacific where precipitation cannot improve without sea surface temperature getting worse. Tropical precipitation sensitivity is found to be highly nonlinear for low values of entrainment in convecting plumes, though it is fairly insensitive at the high end of the plausible range. Increasing the adjustment timescale for convective closure causes the centroid of tropical precipitation to vary as much as two degrees latitude, highlighting the effect these physics can have on large-scale features of the hydrological cycle. The optimization procedure suggests that simultaneously increasing the maximum downdraft mass flux fraction and the adjustment timescale can yield improvements to surface temperature and column water vapor without degrading the simulation of precipitation. These

  14. Hybrid Spatial Data Model for Indoor Space: Combined Topology and Grid

    Directory of Open Access Journals (Sweden)

    Zhiyong Lin

    2017-11-01

    The construction and application of an indoor spatial data model is an important prerequisite to meet the requirements of diversified indoor spatial location services. The traditional indoor spatial topology model focuses on the construction of topology information. It has high path analysis and query efficiency, but ignores the spatial location information. The grid model retains the plane position information by grid, but increases the data volume and complexity of the model and reduces the efficiency of the model analysis. This paper presents a hybrid model for interior space based on topology and grid. Based on the spatial meshing and spatial division of the interior space, the model retains the position information and topological connectivity information of the interior space by establishing the connection or affiliation between the grid subspace and the topological subspace. The model improves the speed of interior spatial analysis and solves the problem of the topology information and location information updates not being synchronized. In this study, the A* shortest path query efficiency of typical daily indoor activities under the grid model and the hybrid model was compared for the indoor plane of an apartment and a shopping mall. The results show that, owing to the topological connectivity information, the A* query efficiency of the hybrid model is 43% higher than that of the grid model. This paper provides a useful idea for the establishment of a highly efficient and highly available interior spatial data model.
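
    To make the A* query in the comparison concrete, below is a compact grid A* with a Manhattan heuristic on a toy floor plan; the paper's hybrid topology/grid data structure is not reproduced.

```python
# Compact grid A* with Manhattan heuristic; a generic sketch, not the paper's model.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                              # reconstruct path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

floor = [[0, 0, 0, 0],   # 0 = walkable cell, 1 = obstacle
         [1, 1, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
print(astar(floor, (0, 0), (3, 3)))
```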

  15. Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin

    Science.gov (United States)

    Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.

    2018-01-01

    Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
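
    The sketch below shows only the rounding idea behind an ε-approximate Pareto frontier (snap objective vectors onto a geometric grid and keep one representative per cell); it is a generic illustration, not the authors' tree-structured DP or MIP.

```python
# Generic epsilon-approximation idea: one representative per (1+eps) grid cell, so
# the kept set is small yet within a factor (1+eps) of every Pareto point.
import math
import random

def eps_pareto(points, eps=0.1):
    """points: list of (obj1, obj2) to be maximized, all values > 0."""
    cells = {}
    for p in points:
        key = tuple(math.floor(math.log(v, 1 + eps)) for v in p)   # grid cell index
        if key not in cells or sum(p) > sum(cells[key]):           # keep one per cell
            cells[key] = p
    reps = list(cells.values())
    # Final clean-up: drop representatives dominated by another representative.
    return [p for p in reps
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in reps)]

random.seed(5)
solutions = [(random.uniform(1, 100), random.uniform(1, 100)) for _ in range(5000)]
front = eps_pareto(solutions, eps=0.05)
print(f"{len(front)} representative points kept out of {len(solutions)}")
```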

  16. Level Diagrams analysis of Pareto Front for multiobjective system redundancy allocation

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2011-01-01

    Reliability-based and risk-informed design, operation, maintenance and regulation lead to multiobjective (multicriteria) optimization problems. In this context, the Pareto Front and Set found in a multiobjective optimality search provide a family of solutions among which the decision maker has to look for the best choice according to his or her preferences. Efficient visualization techniques for Pareto Front and Set analyses are needed for helping decision makers in the selection task. In this paper, we consider the multiobjective optimization of system redundancy allocation and use the recently introduced Level Diagrams technique for graphically representing the resulting Pareto Front and Set. Each objective and decision variable is represented on separate diagrams where the points of the Pareto Front and Set are positioned according to their proximity to ideally optimal points, as measured by a metric of normalized objective values. All diagrams are synchronized across all objectives and decision variables. On the basis of the analysis of the Level Diagrams, we introduce a procedure for reducing the number of solutions in the Pareto Front; from the reduced set of solutions, the decision maker can more easily identify his or her preferred solution.

  17. Modeling, analysis and control of fuel cell hybrid power systems

    Science.gov (United States)

    Suh, Kyung Won

    Transient performance is a key characteristic of fuel cells, that is sometimes more critical than efficiency, due to the importance of accepting unpredictable electric loads. To fulfill the transient requirement in vehicle propulsion and portable fuel cell applications, a fuel cell stack is typically coupled with a battery through a DC/DC converter to form a hybrid power system. Although many power management strategies already exist, they all rely on low level controllers that realize the power split. In this dissertation we design controllers that realize various power split strategies by directly manipulating physical actuators (low level commands). We maintain the causality of the electric dynamics (voltage and current) and investigate how the electric architecture affects the hybridization level and the power management. We first establish the performance limitations associated with a stand-alone and power-autonomous fuel cell system that is not supplemented by an additional energy storage and powers all its auxiliary components by itself. Specifically, we examine the transient performance in fuel cell power delivery as it is limited by the air supplied by a compressor driven by the fuel cell itself. The performance limitations arise from the intrinsic coupling in the fluid and electrical domain between the compressor and the fuel cell stack. Feedforward and feedback control strategies are used to demonstrate these limitations analytically and with simulations. Experimental tests on a small commercial fuel cell auxiliary power unit (APU) confirm the dynamics and the identified limitations. The dynamics associated with the integration of a fuel cell system and a DC/DC converter is then investigated. Decentralized and fully centralized (using linear quadratic techniques) controllers are designed to regulate the power system voltage and to prevent fuel cell oxygen starvation. Regulating these two performance variables is a difficult task and requires a compromise

  18. Switched causal modeling of transmission with clutch in hybrid electric vehicles

    OpenAIRE

    LHOMME, W; TRIGUI, R; DELARUE, P; JEANNERET, B; BOUSCAYROL, A; BADIN, F

    2008-01-01

    Certain difficulties arise when attempting to model a clutch in a power train transmission due to its nonlinear behavior. Two different states have to be taken into account: the first being when the clutch is locked and the second being when the clutch is slipping. In this paper, a clutch model is developed using the Energetic Macroscopic Representation, which is, in turn, used in the modeling of complete hybrid electric vehicles (HEVs). Two different models are used, and a specific condition ...

  19. Plausible carrier transport model in organic-inorganic hybrid perovskite resistive memory devices

    Science.gov (United States)

    Park, Nayoung; Kwon, Yongwoo; Choi, Jaeho; Jang, Ho Won; Cha, Pil-Ryung

    2018-04-01

    We demonstrate thermally assisted hopping (TAH) as an appropriate carrier transport model for CH3NH3PbI3 resistive memories. Organic semiconductors, including organic-inorganic hybrid perovskites, have previously been speculated to follow the space-charge-limited conduction (SCLC) model. However, the SCLC model cannot reproduce the temperature dependence of the experimental current-voltage curves. Instead, the TAH model with temperature-dependent trap densities and a constant trap level is demonstrated to reproduce the experimental results well.

  20. Solving Problem of Graph Isomorphism by Membrane-Quantum Hybrid Model

    Directory of Open Access Journals (Sweden)

    Artiom Alhazov

    2015-10-01

    This work presents the application of new parallelization methods, based on membrane-quantum hybrid computing, to graph isomorphism problem solving. The applied membrane-quantum hybrid computational model was developed by the authors. Massive parallelism of unconventional computing is used to implement the classic brute force algorithm efficiently. This approach does not impose any restrictions on the types of graphs considered. The estimated performance of the model is less than quadratic, which is a very good result for a problem of NP complexity.
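
    For reference, the classic brute-force check that the membrane-quantum model parallelizes looks like the serial sketch below: try every vertex permutation and test whether it maps one adjacency matrix onto the other.

```python
# Serial brute-force isomorphism baseline (exponential); only illustrates the
# algorithm the paper parallelizes.
from itertools import permutations

def isomorphic(adj_a, adj_b):
    n = len(adj_a)
    if n != len(adj_b):
        return None
    for perm in permutations(range(n)):
        if all(adj_a[i][j] == adj_b[perm[i]][perm[j]] for i in range(n) for j in range(n)):
            return perm                      # mapping: vertex i of A -> perm[i] of B
    return None

# Two 4-vertex cycles labelled differently.
c4      = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
c4_perm = [[0,0,1,1],[0,0,1,1],[1,1,0,0],[1,1,0,0]]
print(isomorphic(c4, c4_perm))
```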

  1. Numerical modeling of lower hybrid heating and current drive

    International Nuclear Information System (INIS)

    Valeo, E.J.; Eder, D.C.

    1986-03-01

    The generation of currents in toroidal plasma by application of waves in the lower hybrid frequency range involves the interplay of several physical phenomena which include: wave propagation in toroidal geometry, absorption via wave-particle resonances, the quasilinear generation of strongly nonequilibrium electron and ion distribution functions, and the self-consistent evolution of the current density in such a nonequilibrium plasma. We describe a code, LHMOD, which we have developed to treat these aspects of current drive and heating in tokamaks. We present results obtained by applying the code to a computation of current ramp-up and to an investigation of the possible importance of minority hydrogen absorption in a deuterium plasma as the "density limit" to current drive is approached.

  2. Gas ultracentrifuge separative parameters modeling using hybrid neural networks

    International Nuclear Information System (INIS)

    Crus, Maria Ursulina de Lima

    2005-01-01

    A hybrid neural network is developed for the calculation of the separative performance of an ultracentrifuge. A feed-forward neural network is trained to estimate the internal flow parameters of a gas ultracentrifuge, and then these parameters are applied in the diffusion equation. For this study, a set of 573 experimental data points is used to establish the relation between the separative performance and the controlled variables. The process control variables considered are: the feed flow rate F, the cut θ and the product pressure Pp. The mechanical arrangements consider the radial waste scoop dimension, the rotating baffle size Ds and the axial feed location ZE. The methodology was validated through the comparison of the calculated separative performance with experimental values. This methodology may be applied to other processes, just by adapting the phenomenological procedures. (author)

  3. A Hybrid Wind-Farm Parametrization for Mesoscale and Climate Models

    Science.gov (United States)

    Pan, Yang; Archer, Cristina L.

    2018-04-01

    To better understand the potential impact of wind farms on weather and climate at the regional to global scales, a new hybrid wind-farm parametrization is proposed for mesoscale and climate models. The proposed parametrization is a hybrid model because it is not based on physical processes or conservation laws, but on the multiple linear regression of the results of large-eddy simulations (LES) with the geometric properties of the wind-farm layout (e.g., the blockage ratio and blockage distance). The innovative aspect is that each wind turbine is treated individually based on its position in the farm and on the wind direction by predicting the velocity upstream of each turbine. The turbine-induced forces and added turbulence kinetic energy (TKE) are first derived analytically and then implemented in the Weather Research and Forecasting model. Idealized simulations of the offshore Lillgrund wind farm are conducted. The wind-speed deficit and TKE predicted with the hybrid model are in excellent agreement with those from the LES results, while the wind-power production estimated with the hybrid model is within 10% of that observed. Three additional wind farms with larger inter-turbine spacing than at Lillgrund are also considered, and a similar agreement with LES results is found, proving that the hybrid parametrization works well with any wind farm regardless of the spacing between turbines. These results indicate the wind-turbine position, wind direction, and added TKE are essential in accounting for the wind-farm effects on the surroundings, for which the hybrid wind-farm parametrization is a promising tool.
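
    The statistical core of such a parametrization can be sketched as a multiple linear regression from layout geometry to upstream velocity, as below; the feature names follow the abstract (blockage ratio and distance) but the training table is synthetic rather than LES output.

```python
# Regression sketch of the hybrid parametrization's data-driven core; the training
# data below are synthetic stand-ins for LES results.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 400
blockage_ratio = rng.uniform(0.0, 0.5, n)      # fraction of upstream flow blocked
blockage_dist = rng.uniform(2.0, 20.0, n)      # distance to blocking turbines (rotor diameters)
free_stream = rng.uniform(6.0, 12.0, n)        # undisturbed wind speed (m/s)

# Synthetic "LES" upstream speeds: deficit grows with blockage, decays with distance.
upstream = free_stream * (1 - 0.4 * blockage_ratio * np.exp(-blockage_dist / 10)) \
           + 0.05 * rng.standard_normal(n)

X = np.column_stack([blockage_ratio, blockage_dist, free_stream])
reg = LinearRegression().fit(X, upstream)
print("R^2 on training data:", round(reg.score(X, upstream), 3))
print("coefficients:", np.round(reg.coef_, 3))

# The fitted regression would then supply each turbine's upstream speed to the
# mesoscale model, which converts it into thrust forces and added TKE.
```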

  4. A global hybrid coupled model based on atmosphere-SST feedbacks

    Energy Technology Data Exchange (ETDEWEB)

    Cimatoribus, Andrea A.; Drijfhout, Sybren S. [Royal Netherlands Meteorological Institute, De Bilt (Netherlands); Dijkstra, Henk A. [Utrecht University, Institute for Marine and Atmospheric Research Utrecht, Utrecht (Netherlands)

    2012-02-15

    A global hybrid coupled model is developed, with the aim of studying the effects of ocean-atmosphere feedbacks on the stability of the Atlantic meridional overturning circulation. The model includes a global ocean general circulation model and a statistical atmosphere model. The statistical atmosphere model is based on linear regressions of data from a fully coupled climate model on sea surface temperature both locally and hemispherically averaged, being the footprint of Atlantic meridional overturning variability. It provides dynamic boundary conditions to the ocean model for heat, freshwater and wind-stress. A basic but consistent representation of ocean-atmosphere feedbacks is captured in the hybrid coupled model and it is more than 10 times faster than the fully coupled climate model. The hybrid coupled model reaches a steady state with a climate close to the one of the fully coupled climate model, and the two models also have a similar response (collapse) of the Atlantic meridional overturning circulation to a freshwater hosing applied in the northern North Atlantic. (orig.)

  5. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposures to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. The knowledge on the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on the measured values of CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performances of the models. It has been concluded that the errors decrease after size reduction, with the coefficients of determination increasing from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
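
    A minimal version of the hybrid predictor can be sketched with scikit-learn, using PLS to compress the inputs and an SVM regressor on the latent components; the data below are synthetic stand-ins for the CO and meteorological series.

```python
# PLS-for-input-compression + SVR sketch; synthetic data, not the Rey station series.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.standard_normal((1500, 12))                 # e.g. lagged CO and meteorological inputs
co = 2 + X[:, 0] - 0.5 * X[:, 3] + 0.3 * X[:, 5] ** 2 + 0.2 * rng.standard_normal(1500)

X_tr, X_te, y_tr, y_te = train_test_split(X, co, random_state=0)

# Stage 1: PLS compresses the inputs onto a few latent components.
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

# Stage 2: SVR predicts CO from the latent components.
svr = SVR(C=10.0, epsilon=0.05).fit(Z_tr, y_tr)

r2_hybrid = svr.score(Z_te, y_te)
r2_plain = SVR(C=10.0, epsilon=0.05).fit(X_tr, y_tr).score(X_te, y_te)
print(f"R^2 plain SVR : {r2_plain:.3f}")
print(f"R^2 PLS + SVR : {r2_hybrid:.3f}")
```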

  6. Modeling and Simulation of Multi-scale Environmental Systems with Generalized Hybrid Petri Nets

    Directory of Open Access Journals (Sweden)

    Mostafa eHerajy

    2015-07-01

    Predicting and studying the dynamics and properties of environmental systems necessitates the construction and simulation of mathematical models entailing different levels of complexity. Such computational experiments often require the combination of discrete and continuous variables as well as processes operating at different time scales. Furthermore, the iterative steps of constructing and analyzing environmental models might involve researchers with different backgrounds. Hybrid Petri nets may contribute to overcoming such challenges as they facilitate the implementation of systems integrating discrete and continuous dynamics. Additionally, the visual depiction of model components will inevitably help to bridge the gap between scientists with distinct expertise working on the same problem. Thus, modeling environmental systems with hybrid Petri nets enables the construction of complex processes while keeping the models comprehensible for researchers working on the same project with significantly divergent educational paths. In this paper we propose the utilization of a special class of hybrid Petri nets, Generalized Hybrid Petri Nets (GHPN), to model and simulate environmental systems exposing processes interacting at different time scales. GHPN integrate stochastic and deterministic semantics as well as other types of special basic events. Moreover, a case study is presented to illustrate the use of GHPN in constructing and simulating multi-timescale environmental scenarios.

  7. A control-oriented cycle-life model for hybrid electric vehicle lithium-ion batteries

    International Nuclear Information System (INIS)

    Suri, Girish; Onori, Simona

    2016-01-01

    In this paper, a semi-empirical lithium-iron-phosphate/graphite battery aging model is identified over data mimicking actual cycling conditions that a hybrid electric vehicle battery encounters under real driving scenarios. The aging model is then used to construct the severity factor map, used to characterize the relative aging of the battery under different operating conditions. This is used as a battery degradation criterion within a multi-objective optimization problem where battery aging minimization is to be achieved along with fuel consumption minimization. The method proposed is general and can be applied to other battery chemistries as well as different vehicular applications. Finally, simulations conducted using a hybrid electric vehicle simulator show how the two modeling tools developed in this paper, i.e., the severity factor map and the aging model, can be effectively used in a multi-objective optimization problem to predict and control battery degradation. - Highlights: • Battery aging model for hybrid electric vehicles using real driving conditions data. • Development of a modeling tool to assess battery degradation for real-time optimization. • Development of an energy management strategy to minimize battery degradation. • Simulation results from a hybrid electric vehicle simulator.

  8. Pareto-Optimization of HTS CICC for High-Current Applications in Self-Field

    Directory of Open Access Journals (Sweden)

    Giordano Tomassetti

    2018-01-01

    The ENEA superconductivity laboratory developed a novel design for Cable-in-Conduit Conductors (CICCs) comprised of stacks of 2nd-generation REBCO coated conductors. In its original version, the cable was made up of 150 HTS tapes distributed in five slots, twisted along an aluminum core. In this work, taking advantage of a 2D finite element model able to estimate the cable’s current distribution in the cross-section, a multiobjective optimization procedure was implemented. The aim of the optimization was to simultaneously maximize both the engineering current density and the total current flowing inside the tapes when operating in self-field, by varying the cross-section layout. Since the optimization process involved both integer and real geometrical variables, an evolutionary search algorithm was strictly necessary; its use in the frame of a multiobjective optimization made a nonstandard, fast-converging optimization algorithm the natural numerical choice. By means of this algorithm, the Pareto frontiers for the different configurations were calculated, providing a powerful tool for the designer to achieve the desired preliminary operating conditions in terms of engineering current density and/or total current, depending on the specific application field, that is, power transmission cables and bus bar systems.

  9. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2014-01-01

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.

  10. Pareto frontier analyses based decision making tool for transportation of hazardous waste

    International Nuclear Information System (INIS)

    Das, Arup; Mazumder, T.N.; Gupta, A.K.

    2012-01-01

    Highlights: ► Posteriori method using a multi-objective approach to solve a bi-objective routing problem. ► System optimization (with multiple source–destination pairs) in a capacity constrained network using non-dominated sorting. ► Tools like cost elasticity and angle based focus used to analyze the Pareto frontier to aid stakeholders in making informed decisions. ► A real life case study of the Kolkata Metropolitan Area to explain the workability of the model. - Abstract: Transportation of hazardous wastes through a region poses an immense threat to the development along its road network. The risk to the population exposed to such activities has been documented in the past. However, a comprehensive framework for routing hazardous wastes has often been overlooked. A regional hazardous waste management scheme should incorporate a comprehensive framework for hazardous waste transportation. This framework would incorporate the various stakeholders involved in decision making. Hence, a multi-objective approach is required to safeguard the interests of all the concerned stakeholders. The objective of this study is to design a methodology for the routing of hazardous wastes between the generating units and the disposal facilities through a capacity constrained network. The proposed methodology uses a posteriori method with a multi-objective approach to find non-dominated solutions for a system consisting of multiple origins and destinations. A case study of the transportation of hazardous wastes in the Kolkata Metropolitan Area is also provided to elucidate the methodology.

  11. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
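
    The product-of-exponentials kinematics that the calibration builds on can be sketched as below for a toy two-joint arm, using matrix exponentials of joint twists; it is not the 10-DOF ITER hybrid robot model.

```python
# POE forward kinematics sketch for a toy 2R arm; not the ITER robot of the paper.
import numpy as np
from scipy.linalg import expm

def twist_matrix(screw):
    """4x4 se(3) matrix from a screw axis (w, v)."""
    w, v = screw[:3], screw[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3], T[:3, 3] = W, v
    return T

def poe_fk(screws, thetas, M):
    T = np.eye(4)
    for S, th in zip(screws, thetas):
        T = T @ expm(twist_matrix(S) * th)     # product of exponentials
    return T @ M

# Toy 2R arm: both joints rotate about z, links of length 1 along x.
S1 = np.array([0, 0, 1, 0, 0, 0])
S2 = np.array([0, 0, 1, 0, -1, 0])             # axis passing through (1, 0, 0)
M = np.eye(4); M[0, 3] = 2.0                   # home pose: end-effector at x = 2

pose = poe_fk([S1, S2], [np.pi / 2, -np.pi / 2], M)
print(np.round(pose[:3, 3], 3))                # end-effector position
```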

  12. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.

  13. Modeling the geometric formation and powder deposition mass in laser induction hybrid cladding

    International Nuclear Information System (INIS)

    Huang, Yong Jun; Yuan, Sheng Fa

    2012-01-01

    A new laser induction hybrid cladding technique on a cylindrical workpiece is presented. Based on a series of laser induction hybrid experiments with off-axial powder feeding, predictive models of the individual clad geometric formation and powder catchment were developed in terms of the powder feeding rate, laser specific energy and induction energy density using multiple regression analysis. In addition, confirmation tests were performed to compare the predicted results with measured ones. From the experiments and analysis, it can be concluded that the process parameters have a crucial influence on the clad geometric formation and powder catchment, and that the predictive model reflects well the relationship between the clad geometric formation and the process parameters in laser induction hybrid cladding.

  14. Comparison of a hybrid model to a global model of atmospheric pressure radio-frequency capacitive discharges

    International Nuclear Information System (INIS)

    Lazzaroni, C; Lieberman, M A; Lichtenberg, A J; Chabert, P

    2012-01-01

    A one-dimensional hybrid analytical-numerical global model of atmospheric pressure radio-frequency (rf) driven capacitive discharges, previously developed, is compared with a basic global model. A helium feed gas with small admixtures of oxygen is studied. For the hybrid model, the electrical characteristics are calculated analytically as a current-driven homogeneous discharge. The electron power balance is solved analytically to determine a time-varying Maxwellian electron temperature, which oscillates on the rf timescale. Averaging over the rf period yields effective rate coefficients for gas phase activated processes. For the basic global model, the electron temperature is constant in time and the sheath physics is neglected. For both models, the particle balance relations for all species are integrated numerically to determine the equilibrium discharge parameters. Variations of discharge parameters with composition and rf power are determined and compared. The rate coefficients for electron-activated processes are strongly temperature dependent, leading to significantly larger neutral and charged particle densities for the hybrid model. For small devices, finite sheath widths limit the operating regimes to low O2 fractions. This is captured by the hybrid model but cannot be predicted from the basic global model.

  15. Modelling of JET hybrid scenarios with GLF23 transport model: E × B shear stabilization of anomalous transport

    NARCIS (Netherlands)

    Voitsekhovitch, I.; Belo, da Silva Ares; Citrin, J.; Fable, E.; Ferreira, J.; Garcia, J.; Garzotti, L.; Hobirk, J.; Hogeweij, G. M. D.; Joffrin, E.; Kochl, F.; Litaudon, X.; Moradi, S.; Nabais, F.; JET-EFDA Contributors; EU-ITM ITER Scenario Modelling group

    2014-01-01

    The E × B shear stabilization of anomalous transport in JET hybrid discharges is studied via self-consistent predictive modelling of electron and ion temperature, ion density and toroidal rotation velocity performed with the GLF23 model. The E × B shear

  16. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in the building sector is a key issue from an environmental point of view, as it is in the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for about half of the total energy consumption in a building. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable when past operation data for training the load prediction model are scarce; (2) it has a self-checking function that continuously monitors whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something goes wrong in the prediction procedure; (3) it can adjust the load prediction in real time against sudden changes in model parameters and environmental conditions. The proposed method is evaluated with real operation data from an existing building, and the improvement in load prediction performance is illustrated.
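    A minimal sketch of the hybrid idea under stated assumptions: a crude steady-state heat-balance model supplies a physical load estimate, a k-nearest-neighbour (just-in-time) model learned from past residuals corrects it, and a simple consistency check flags large disagreement between the two. All parameter values, variable names and the synthetic history below are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of a physical + just-in-time (JIT) hybrid predictor:
# a crude physical load estimate is corrected by a k-nearest-neighbour model
# fitted to past residuals, and the two estimates are cross-checked.
import numpy as np

rng = np.random.default_rng(7)

def physical_load(t_out, occupancy, ua=500.0, t_set=22.0, q_person=100.0):
    """Crude steady-state heat balance: envelope gain plus internal gains (W)."""
    return ua * (t_out - t_set) + q_person * occupancy

# Synthetic history: outdoor temperature, occupancy and "measured" load.
hist_x = np.column_stack([rng.uniform(20, 35, 200), rng.integers(0, 50, 200)])
hist_y = np.array([physical_load(t, n) for t, n in hist_x])
hist_y += 30.0 * hist_x[:, 1] + rng.normal(0, 200, 200)  # unmodelled effect
hist_resid = hist_y - np.array([physical_load(t, n) for t, n in hist_x])

def jit_correction(x_query, k=5):
    """JIT step: local average of residuals at the k nearest past conditions."""
    d = np.linalg.norm(hist_x - x_query, axis=1)
    return hist_resid[np.argsort(d)[:k]].mean()

x_now = np.array([30.0, 40.0])          # current outdoor temperature, occupancy
phys = physical_load(*x_now)
hybrid = phys + jit_correction(x_now)
print(f"physical estimate: {phys:.0f} W, hybrid estimate: {hybrid:.0f} W")

# Self-check: flag a large disagreement between the two estimates.
if abs(hybrid - phys) > 0.5 * abs(phys):
    print("warning: data-driven correction diverges from the physical model")
```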

  17. Modeling and Optimal Control of a Class of Warfare Hybrid Dynamic Systems Based on Lanchester (n,1) Attrition Model

    Directory of Open Access Journals (Sweden)

    Xiangyong Chen

    2014-01-01

    A model of a class of warfare hybrid dynamic systems is established based on the Lanchester equation for an (n,1) battle, in which a heterogeneous force of n different troop types faces a homogeneous force. The model is characterized by the interaction of continuous-time dynamics (governed by the Lanchester equation) and discrete event systems (described by variable tactics). Furthermore, an expository discussion is presented on an optimal variable-tactics control problem for the warfare hybrid dynamic system. The optimal control strategies are designed based on dynamic programming and differential game theory. As an example of this optimal control problem, the (2,1) case is taken and its optimal strategies are solved. Simulation results show the feasibility of the warfare hybrid system model and the effectiveness of the designed optimal control strategies.
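    A minimal sketch of the continuous/discrete structure described above: Lanchester-type attrition equations for a (2,1) engagement are integrated in two phases, with a discrete switch of the fire-allocation tactic between them. The attrition coefficients, initial strengths and switching time are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: Lanchester-type attrition in a (2,1) engagement, where
# the heterogeneous side has strengths x1, x2 and the homogeneous side has y.
# A discrete tactic variable phi (fraction of y's fire aimed at x1) is switched
# mid-battle, giving the continuous/discrete hybrid structure.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, b = 0.05, 0.08, 0.06            # illustrative attrition coefficients

def lanchester(t, state, phi):
    x1, x2, y = np.maximum(state, 0.0)  # crude guard against negative strengths
    dx1 = -phi * b * y                  # y allocates a fraction phi of fire to x1
    dx2 = -(1.0 - phi) * b * y
    dy = -(a1 * x1 + a2 * x2)           # both troop types fire at y
    return [dx1, dx2, dy]

x0 = [100.0, 80.0, 150.0]               # illustrative initial strengths

# Phase 1: tactic phi = 0.7 for the first 20 time units.
s1 = solve_ivp(lanchester, (0.0, 20.0), x0, args=(0.7,))
# Phase 2: a discrete event switches the tactic to phi = 0.3; keep integrating.
s2 = solve_ivp(lanchester, (20.0, 60.0), s1.y[:, -1], args=(0.3,))

print("final strengths (x1, x2, y):", np.round(s2.y[:, -1], 1))
```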

  18. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    The main problem faced by naval radars is the elimination of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter at a low computational cost, outperforming the classic method based on Maximum Likelihood Estimation (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses knowledge of the clutter statistics to improve detection stability, among other applications.
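    For reference, the classical baseline mentioned above has a closed form: for i.i.d. Pareto samples with known scale, the maximum-likelihood estimate of the shape is n divided by the sum of log(x_i/scale). The sketch below checks this on synthetic data with illustrative parameter values; the neural-network estimator itself is not reproduced here.

```python
# Hypothetical sketch: closed-form maximum-likelihood estimate of the Pareto
# shape parameter, the classical baseline the neural network is compared to.
import numpy as np

rng = np.random.default_rng(3)
shape, scale, n = 2.5, 1.0, 10_000       # illustrative "clutter" parameters

# Inverse-transform sampling of a Pareto variate: X = scale * U**(-1/shape).
samples = scale * rng.uniform(size=n) ** (-1.0 / shape)

# MLE with known scale: alpha_hat = n / sum(log(x_i / scale)).
alpha_hat = n / np.log(samples / scale).sum()
print(f"true shape: {shape}, MLE estimate: {alpha_hat:.3f}")
```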

  19. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
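    The distinction between supported and unsupported Pareto-efficient permutations can be made concrete with a toy enumeration: the sketch below scores every permutation of a small random proximity matrix on two illustrative criteria (not the objectives used in the paper) and keeps the non-dominated set by direct pairwise dominance filtering, which retains unsupported solutions that a weighted-sum scan would miss.

```python
# Hypothetical sketch: score every permutation of a small random proximity
# matrix on two toy criteria and keep the full Pareto set by direct dominance
# filtering (which retains unsupported solutions, unlike a weighted-sum scan).
import itertools
import numpy as np

rng = np.random.default_rng(11)
A = rng.uniform(size=(6, 6))
A = (A + A.T) / 2.0                     # symmetric toy proximity matrix

def objectives(perm):
    P = A[np.ix_(perm, perm)]
    idx = np.arange(len(perm))
    grad = np.abs(idx[:, None] - idx[None, :])
    f1 = float((P * grad).sum())        # gradient-weighted sum (minimise)
    # Count of within-row increases moving away from the diagonal (minimise).
    f2 = sum(P[i, j] < P[i, j + 1]
             for i in range(6) for j in range(i + 1, 5))
    return f1, f2

scores = {p: objectives(p) for p in itertools.permutations(range(6))}

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

pareto = [p for p, s in scores.items()
          if not any(dominates(t, s) for t in scores.values())]
print(f"{len(pareto)} Pareto-efficient permutations out of {len(scores)}")
```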

  20. Thermal modeling of a hydraulic hybrid vehicle transmission based on thermodynamic analysis

    International Nuclear Information System (INIS)

    Kwon, Hyukjoon; Sprengel, Michael; Ivantysynova, Monika

    2016-01-01

    Hybrid vehicles have become a popular alternative to conventional powertrain architectures by offering improved fuel efficiency along with a range of environmental benefits. Hydraulic Hybrid Vehicles (HHV) offer one approach to hybridization with many benefits over competing technologies. Among these benefits are lower component costs, more environmentally friendly construction materials, and the ability to recover a greater quantity of energy during regenerative braking, which make HHVs particularly well suited to urban environments. In order to further the knowledge base regarding HHVs, this paper explores the thermodynamic characteristics of such a system. A system model is detailed for both the hydraulic and thermal components of a closed-circuit hydraulic hybrid transmission following the FTP-72 driving cycle. Among the new techniques proposed in this paper is a novel method for capturing rapid thermal transients. This paper concludes by comparing the results of this model with experimental data gathered on a Hardware-in-the-Loop (HIL) transmission dynamometer possessing the same architecture, components, and driving cycle used within the simulation model. This approach can be used for several applications such as thermal stability analysis of HHVs, optimal thermal management, and analysis of the system's thermodynamic efficiency. - Highlights: • Thermal modeling for HHVs is introduced. • A model for the hydraulic and thermal system is developed for HHVs. • A novel method for capturing rapid thermal transients is proposed. • The thermodynamic system diagram of a series HHV is predicted.
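    A minimal sketch of the kind of lumped-parameter thermal balance such a model builds on, stepped with a small fixed time step so that short high-power events are resolved rather than averaged away; the capacitance, loss coefficient and duty cycle below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: lumped-capacitance thermal balance of one hydraulic
# circuit node, stepped with a small fixed time step so that short high-power
# events are resolved rather than averaged away. All values are illustrative.
import numpy as np

dt = 0.01                       # s, small step to capture rapid transients
t = np.arange(0.0, 600.0, dt)   # 10 minutes of a synthetic duty cycle

m_cp = 8.0e4                    # J/K, lumped thermal capacitance (oil + steel)
hA = 60.0                       # W/K, convective loss to ambient
T_amb = 25.0                    # degC, ambient temperature

# Synthetic loss power: a base load plus short high-power pulses every minute.
P_loss = 1500.0 + 6000.0 * ((t % 60.0) < 2.0)

T = np.empty_like(t)
T[0] = T_amb
for i in range(1, len(t)):
    # Explicit Euler step of m*cp*dT/dt = P_loss - hA*(T - T_amb).
    dT = (P_loss[i - 1] - hA * (T[i - 1] - T_amb)) / m_cp
    T[i] = T[i - 1] + dt * dT

print(f"peak node temperature: {T.max():.1f} degC at t = {t[T.argmax()]:.0f} s")
```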